How Silicon Valley wants to fuck with our brains

Introducing his students to the study of the human brain, Jeff Lichtman, a Harvard professor of molecular and cellular biology, once asked: “If understanding everything you need to know about the brain was a mile, how far have we walked?” He received answers like ‘three-quarters of a mile’, ‘half a mile’, and ‘a quarter of a mile’.

The professor’s response? “I think about three inches.”

Last month, Lichtman’s quip made it into the pages of a new report by the Royal Society which examines the prospects for neural (or “brain-computer”) interfaces, a hot research area that has seen billions of dollars of funding poured into it over the last few years, and not without cause. The worldwide market for neurotech products — defined as “the application of electronics and engineering to the human nervous system” — is projected to reach as much as $13.3 billion by 2022.

So, despite our admitted lack of understanding, it seems the brain is a new and significant frontier for tech-pioneers looking to reinvent — and perhaps irreversibly influence — the way we interact with the world.

The Royal Society report speculates:

Mental health conditions could be treated by using interfaces to target relevant parts of the brain, bringing relief to the hundreds of millions worldwide who have depression. Even Alzheimer’s disease, which has proved resistant to conventional therapies, might be halted or reversed.

Outside of medical use:

People could undergo ‘whole brain diagnosis’ to identify their unique talents and challenges. Today’s ‘brain training’ computer games, whose impact is debated, might give way to demonstrably effective ‘brain cleaning’ or ‘mind gym’ sessions to keep minds sharp and creative.

Neural interfaces offer myriad possibilities to enhance everyday life. We could use our minds to open doors, turn on lights, play games, operate equipment or type on computers.

Then there are opportunities to enhance or supercharge the brain itself. Implants, helmets, headbands or other devices could help us remember more, learn faster, make better decisions more quickly and solve problems, free from biases…

Mood, knowledge and memory could be securely and confidentially backed up or uploaded to a digital cloud.

I know, it’s a lot. And I’ve omitted the references to telepathy, the potential merging of humans with artificial intelligence, and the option to hook your neural interface up to that of another animal, like a bird.

To a sci-fi nut, this must all sound like manna from heaven. To the rest of us, it’s likely to be a little bewildering (to say the least). So, is this a real proposition? Or just the (fairly creepy) wishlist of some over-ambitious Silicon Valley nerds?

The truth is that it’s difficult to tell what the long-term trajectory for brain-computer interfaces will be but, to a degree, they are already here. Though still fairly elementary, we currently have drones and artificial limbs that can be controlled using the brain alone, as well as headsets that boost concentration and memory. Some of these technologies are invasive, but many are not. Some record and react to brain activity, some stimulate it, and some do both.

Reassuringly, it’s non-invasive technologies that look to be headed for commercial distribution. Most of these are re-imaginings of the electroencephalogram (EEG), a system that monitors and records electrical impulses in the brain. One of the leaders in the commercial space, CTRL-Labs, specifically focuses on what it calls ‘intention capture’. Its product is an electromyogram (EMG)-based wristband, which responds to the electrical signals that fire in a user’s arm muscles. At the moment, the company’s demo has a player controlling a simple game using only this impulse detection and no physical movement (take a look).
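For a rough sense of how ‘intention capture’ can work at its very simplest, here is a toy Python sketch (my own illustration, not CTRL-Labs’ actual method): rectify an EMG-like signal, smooth it over a short window, and fire an event whenever the envelope crosses a threshold.

```python
# Toy "intention capture": detect muscle-activation onsets in a signal.
# Real decoders use machine learning over many channels; this is only
# the simplest possible version of turning activity into a "click".
def detect_activation(samples, threshold=0.5, window=4):
    """Return indices where the smoothed, rectified signal first crosses threshold."""
    rectified = [abs(s) for s in samples]
    events = []
    above = False
    for i in range(len(rectified)):
        # moving-average envelope over the last `window` samples
        env = sum(rectified[max(0, i - window + 1): i + 1]) / min(window, i + 1)
        if env >= threshold and not above:
            events.append(i)   # onset: envelope just crossed the threshold
            above = True
        elif env < threshold:
            above = False
    return events
```

A burst of activity in an otherwise quiet signal yields a single onset event, which an application could map to a button press.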

If you’re cynical about how far this could go, you should know that Facebook acquired CTRL-Labs last month, and just a couple of weeks ago leaked transcripts from Mark Zuckerberg’s internal meetings reinforced the firm’s keen interest in brain-computer interfaces.

Giving his thoughts on Elon Musk’s Neuralink project, Zuckerberg says:

I am very excited about the brain-computer interfaces for non-invasive. What we hope to be able to do is just be able to pick up even a couple of bits. So you could do something like, you’re looking at something in AR, and you can click with your brain. That’s exciting… Or a dialogue comes up, and you don’t have to use your hands, you can just say yes or no. That’s a bit of input. If you get to two bits, you can start controlling a menu, right, where basically you can scroll through a menu and tap. You get to a bunch more bits, you can start typing with your brain without having to use your hands or eyes or anything like that. And I think that’s pretty exciting. So I think as part of AR and VR, we’ll end up having hand interfaces, we’ll end up having voice, and I think we’ll have a little bit of just direct brain.
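The arithmetic behind Zuckerberg’s “bits” is basic information theory: n reliably detected bits of input per selection are enough to distinguish between 2^n options. A toy illustration:

```python
# n reliably detected bits of input can distinguish 2**n options:
# 1 bit = a yes/no dialogue, 2 bits = a four-way menu choice, and so on.
def options_addressable(bits: int) -> int:
    return 2 ** bits

for bits in (1, 2, 4, 8):
    print(f"{bits} bit(s) -> {options_addressable(bits)} options")
```

This is why even “a couple of bits” suffices for yes/no prompts and simple menus, while brain-typing would need “a bunch more.”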

If a little bit of “direct brain” doesn’t bother you, it’s worth looking ahead to the possibilities that extend beyond basic control of an elementary system.

For example, we already have neural systems that can read moods and emotions. Last year, The South China Morning Post reported that this kind of technology had been deployed by Chinese firms looking to monitor employees for signs of anger, anxiety or depression using devices built into headwear and hats. And perhaps even more impressively (or disturbingly), researchers at Kyoto University in Japan have been able to use a deep neural network to convert brain signals from an fMRI scan (used to map neural activity) into an image that shares many of the shape and color characteristics of the one viewed by the subject of the scan.

This is all just to say that these types of systems are unlikely to cease development once they provide the capabilities to click or scroll in Mark Zuckerberg’s AR hellscape.

The Royal Society report makes sure to flag some early concerns. Most rational-thinking people won’t be too far behind them: What would it mean if an external company or government could gain access to our moods, or even our thoughts? How might human privacy — and indeed autonomy — be protected if these technologies became ubiquitous? How can we ensure that they wouldn’t be weaponized by bad actors or governments to influence and control entire populations? (And is it okay if they only want to subliminally coax us to eat more healthily or respect the rules…?)

It’s not hard to think of governments that will be watching the progression of this technology very keenly.

Though it’s only fair to weigh risks against benefits before eagerly ringing the alarm bell, even here there is ambiguity. The benefits of commercializing this technology seem extremely limited, at least on the face of it. Gameplay? Fitness? Hands-free navigation of augmented or virtual reality environments? None of these feels like a strong argument for selling access to our brains.

But what about neural interfaces that could improve memory or concentration, making us super productive in life and work? One could presumably make the case that this is a worthwhile trade. Well, as it happens, completely separate research released just after the Royal Society report urges caution around attempts to enhance such functions.

A new study published in the journal Science reported findings that appear to affirm the long-held theory that an active “forgetting mechanism” kicks in while we sleep. The researchers found that when they suppressed the neurons that produce melanin-concentrating hormone (MCH) in the hypothalamus of mice, the animals’ memory performance actually increased. In other words, without this artificial suppression, these neurons act quite deliberately to impair — or “modulate” — our memories.

Forgetting, then, is an active biological mechanism, not some kind of “lack” that we must compensate for with technology. We might safely assume that it serves some worthwhile evolutionary purpose.

Indeed, there is good reason to believe that if we didn’t forget we would live in a perpetual state of confusion, our brains awash with superfluous information. One curious story that speaks to the chaos of the ever-remembering mind is that of the man who became known as subject S, a young Moscow-based journalist (later identified as Solomon Shereshevsky) who approached the neuropsychologist Alexander Luria in 1929 with a very peculiar problem: he could not forget.

According to Luria’s reports, subject S. was able to remember foreign poems, scientific formulas, and enormously long strings of words and numbers decades after he had been told them. He recited them to perfection every time Luria tested him.

Great asset, eh? To never forget a name at a cocktail party, miss a birthday, fail a test on a fact or formula you already learned? To remember your own human life with crystal clarity rather than with the foggy haze that tends to wash over even our dearest memories?

Not so. According to the New York Times:

S.’s ability to remember was also a hindrance in everyday life. He had a hard time understanding abstract concepts or figurative language, and he was terrible at recognizing faces because he had memorized them at an exact point in time, with specific facial expressions and features. The ability to forget, scientists eventually came to realize, was just as vital as the ability to remember.

Who knows what psychological or neural confusion could eventually be brought on by using brain-computer interfaces to “optimize” faculties that evolution has deliberately shaped…

But we probably shouldn’t run screaming for the hills just yet. These systems are in their infancy, and there have been incredible breakthroughs in the research that should yield great benefits for people with mental and physical impairments. Nevertheless, the Royal Society is right to get ahead of the ethical and moral dilemmas that will accompany the commercialization of this type of technology. It is unfamiliar terrain, and allowing a system to intervene in our physical and mental capacities is an unprecedented encroachment that could easily turn sour — certainly if we are to judge by the ways technological intelligence and surveillance have been wielded so far.

For now we should keep a close watching brief on how this technology develops, as well as on any and all proposals for its use. One thing seems certain: if we thought society had already reached its technological saturation point, we “ain’t seen nothin’ yet.”

This article was originally published on Towards Data Science by Fiona J. McEvoy, a tech-ethics researcher and founder of YouTheData.com. She examines the use of technology, A.I., and data in our society.

from The Next Web https://thenextweb.com/syndication/2020/01/02/how-silicon-valley-wants-to-fuck-with-our-brains/

Understanding the Four Types of Artificial Intelligence




Photo Courtesy of Shutterstock

The common, and recurring, view of the latest breakthroughs in artificial intelligence research is that sentient and intelligent machines are just on the horizon. Machines understand verbal commands, distinguish pictures, drive cars and play games better than we do. How much longer can it be before they walk among us?

The new White House report on artificial intelligence takes an appropriately skeptical view of that dream. It says the next 20 years likely won’t see machines “exhibit broadly-applicable intelligence comparable to or exceeding that of humans,” though it does go on to say that in the coming years, “machines will reach and exceed human performance on more and more tasks.” But its assumptions about how those capabilities will develop missed some important points.

As an AI researcher, I’ll admit it was nice to have my own field highlighted at the highest level of American government, but the report focused almost exclusively on what I call “the boring kind of AI.” It dismissed in half a sentence my branch of AI research, into how evolution can help develop ever-improving AI systems, and how computational models can help us understand how our human intelligence evolved.

The report focuses on what might be called mainstream AI tools: machine learning and deep learning. These are the sorts of technologies that have been able to play “Jeopardy!” well, and beat human Go masters at the most complicated game ever invented. These current intelligent systems are able to handle huge amounts of data and make complex calculations very quickly. But they lack an element that will be key to building the sentient machines we picture having in the future.

We need to do more than teach machines to learn. We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.

How Many Types of Artificial Intelligence are There?

There are four types of artificial intelligence: reactive machines, limited memory, theory of mind and self-awareness.

1. Reactive machines

The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.

Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the best moves from among the possibilities.

But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.

This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world.

The current intelligent machines we marvel at either have no such concept of the world, or have a very limited and specialized one for their particular duties. The innovation in Deep Blue’s design was not to broaden the range of possible moves the computer considered. Rather, the developers found a way to narrow its view, to stop pursuing some potential future moves, based on how it rated their outcome. Without this ability, Deep Blue would have needed to be an even more powerful computer to actually beat Kasparov.
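The “narrowing” described here is, in spirit, what game-tree pruning does. Below is a minimal sketch of minimax search with alpha-beta pruning, the textbook form of the idea (Deep Blue’s actual search and evaluation were far more elaborate): stop exploring moves that provably cannot affect the final choice.

```python
# Minimax with alpha-beta pruning: abandon branches that cannot change
# the outcome, narrowing the search exactly as described above.
def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)          # leaf: score the position
    if maximizing:
        value = float("-inf")
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, moves, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:           # prune: opponent will avoid this line
                break
        return value
    else:
        value = float("inf")
        for child in children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, moves, evaluate))
            beta = min(beta, value)
            if beta <= alpha:           # prune
                break
        return value
```

Run on any game tree (here `moves` yields child states and `evaluate` scores leaves), it returns the same value as full minimax while visiting fewer positions.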

Similarly, Google’s AlphaGo, which has beaten top human Go experts, can’t evaluate all potential future moves either. Its analysis method is more sophisticated than Deep Blue’s, using a neural network to evaluate game developments.

These methods do improve the ability of AI systems to play specific games better, but they can’t be easily changed or applied to other situations. These computerized imaginations have no concept of the wider world – meaning they can’t function beyond the specific tasks they’re assigned and are easily fooled.

They can’t interactively participate in the world, the way we imagine AI systems one day might. Instead, these machines will behave exactly the same way every time they encounter the same situation. This can be very good for ensuring an AI system is trustworthy: You want your autonomous car to be a reliable driver. But it’s bad if we want machines to truly engage with, and respond to, the world. These simplest AI systems won’t ever be bored, or interested, or sad.

2. Limited memory

This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment, but rather requires identifying specific objects and monitoring them over time.

These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel.
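A hypothetical sketch of what such transient, limited memory looks like in code: keep only a short window of recent observations of another car, estimate its speed from them, and let older observations simply fall away (the class and numbers here are illustrative, not any vendor’s system).

```python
from collections import deque

class VehicleTracker:
    """Transient 'limited memory': a short window of observations, no
    long-term library of experience."""
    def __init__(self, window: int = 5):
        # deque(maxlen=...) silently discards the oldest observation
        # once the window is full -- memory that does not accumulate.
        self.observations = deque(maxlen=window)

    def observe(self, t: float, position: float) -> None:
        self.observations.append((t, position))

    def estimated_speed(self) -> float:
        """Average speed over the retained window (0.0 if too few points)."""
        if len(self.observations) < 2:
            return 0.0
        (t0, p0), (t1, p1) = self.observations[0], self.observations[-1]
        return (p1 - p0) / (t1 - t0)
```

Nothing the tracker sees is ever generalized or saved; once an observation ages out of the window, it is gone, which is exactly the limitation the paragraph above describes.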

So how can we build AI systems that build full representations, remember their experiences and learn how to handle new situations? Brooks was right in that it is very difficult to do this. My own research into methods inspired by Darwinian evolution can start to make up for human shortcomings by letting the machines build their own representations.

3. Theory of mind

We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what they need to be about.

Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.

This is crucial to how we humans formed societies, because theory of mind is what allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.

If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.

4. Self-awareness

The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it.

This is, in a sense, an extension of the “theory of mind” possessed by Type III artificial intelligences. Consciousness is also called “self-awareness” for a reason. (“I want that item” is a very different statement from “I know I want that item.”) Conscious beings are aware of themselves, know about their internal states, and are able to predict feelings of others. We assume someone honking behind us in traffic is angry or impatient, because that’s how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.

While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences. This is an important step to understand human intelligence on its own. And it is crucial if we want to design or evolve machines that are more than exceptional at classifying what they see in front of them.

The Conversation

This article was originally published on The Conversation

from www.govtech.com https://www.govtech.com/computing/Understanding-the-Four-Types-of-Artificial-Intelligence.html

7 laws of UX design (with illustrations)

UX design matters, and so does following its rules. (Photo by Daniel Korpai on Unsplash)

User experience is how the user interacts with your product through usability, accessibility, and desirability. But sometimes a design falls flat simply because it ignores the laws of UX. So here are the laws you need to follow to build an effective product.

Von Restorff Effect

Also known as the Isolation Effect, the Von Restorff Effect predicts that when multiple similar objects are present, the one that differs from the rest is the most likely to be remembered. In design, you can exploit this by making important information or key actions more visually distinctive than everything around them.

Hick’s Law

Hick’s Law states that the time it takes to make a decision depends on the number and complexity of the choices. Too many options, and users take longer to choose. So simplify: avoid overwhelming users, highlight the recommended options, and use progressive onboarding to minimize cognitive load. In short, keep it simple.
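Hick’s Law is usually written as T = a + b·log2(n + 1), where n is the number of equally likely choices and a and b are empirical constants. A quick sketch (the default constants below are illustrative, not measured values):

```python
import math

# Hick's Law: decision time grows logarithmically, not linearly,
# with the number of equally likely choices n.
def hicks_law(n_choices: int, a: float = 0.2, b: float = 0.15) -> float:
    """Estimated decision time in seconds; a and b are illustrative."""
    return a + b * math.log2(n_choices + 1)

# Going from 4 to 8 menu items adds less time than going from 1 to 2,
# but every extra option still carries a cost.
```

The logarithm is the practical takeaway: trimming a bloated menu helps most when the option count is small to begin with.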

Fitts’s Law

Fitts’s Law is similar to Hick’s Law, but it measures how long a target will take to acquire based on its distance and its size. You can shorten that time by making targets large enough and placing them within easy reach.
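In its common Shannon formulation, Fitts’s Law reads T = a + b·log2(D/W + 1), where D is the distance to the target and W its width: bigger, closer targets are faster to hit. A sketch with illustrative constants:

```python
import math

# Fitts's Law (Shannon formulation): pointing time grows with the
# distance D to a target and shrinks with the target's width W.
def fitts_law(distance: float, width: float,
              a: float = 0.1, b: float = 0.2) -> float:
    """Estimated pointing time in seconds; a and b are illustrative."""
    return a + b * math.log2(distance / width + 1)

# A large, nearby button is acquired much faster than a small, far one,
# which is why primary actions belong big and close to the thumb.
```

Doubling a button’s width buys back roughly the same time as halving its distance, a useful rule when laying out touch targets.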

Zeigarnik Effect

The Zeigarnik effect states that incomplete or interrupted tasks are most likely to be remembered. You could help the users remember certain uncompleted tasks by adding a simple progress bar.

Serial Position Effect

This effect states that the first and last items in a series are the most likely to be remembered. Placing the least important items in the middle of a list, and the key information at the beginning and end, is a good rule of thumb.

Law of Common Region

Elements can be grouped together, right? Well, the Law of Common Region says that elements are perceived as a group when they share an area with a clearly defined boundary. Consider adding a border around elements, or a background behind them, to create a common region.

Law of Proximity

Objects that are near each other tend to be perceived as grouped together. Proximity establishes relationships, and it helps users understand and organize information faster and more efficiently.

After reading this article, you should be able to design beautiful products that are fast, efficient, and good for your users and beta testers (I don’t know if there are beta testers in UX). If this seemed interesting to you, then maybe a few claps will do. Thank you for reading.

7 laws of UX design (with illustrations) was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.

from UX Collective – Medium https://uxdesign.cc/7-important-laws-of-ux-design-fdda087b4f9d?source=rss—-138adf9c44c—4