Artificial intelligence isn’t human
“Artificial Intelligence Will Best Humans at Everything by 2060, Experts Say”. Well.
First, as Yogi Berra said, “It's tough to make predictions, especially about the future.” Where is my flying car?
Second, the title reads like clickbait, but surprisingly it appears to be pretty close to the actual survey, which asked AI researchers when “high-level machine intelligence” will arrive, defined as “when unaided machines can accomplish every task better and more cheaply than human workers.” What is a ‘task’ in this definition? Does “every task” even make sense? Can we enumerate all tasks?
Third and most important, is high-level intelligence just accomplishing tasks? This is the real difference between artificial and human intelligence: humans define goals, AI tries to achieve them. Is the hammer going to displace the carpenter? They each have a purpose.
This difference between artificial and human intelligence is crucial to understand, both to interpret all the crazy headlines in the popular press, and more importantly, to make practical, informed judgements about the technology.
The rest of this post walks through some types of artificial intelligence, some types of human intelligence, and, given how different they are, the plausible and implausible risks of artificial intelligence. The short story: unlike humans, every AI technology has a perfectly mathematically well-defined goal, often expressed as a labeled dataset. Let's learn more about the goals of artificial intelligence.
Types of artificial intelligence
In supervised learning, you define a prediction goal and gather a training set with labels corresponding to the goal. Suppose you want to identify whether a picture has Denzel Washington in it. Then your training set is a set of pictures, each labeled as containing Denzel Washington or not. The label has to be applied outside of the system, most likely by people. If your goal is to do facial recognition, your labeled dataset is pictures along with a label (the person in the picture). Again, you have to gather the labels somehow, likely with people. If your goal is to match a face with another face, you need a label of whether the match was successful or not. Always labels.
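To make the "always labels" point concrete, here is a minimal sketch of supervised learning, using a 1-nearest-neighbor classifier. Everything here is invented for illustration: the feature values stand in for numbers some feature extractor would produce from a photo, and the labels are supplied by a person, outside the system.

```python
# A minimal sketch of supervised learning. The labels come from outside
# the system (here, hard-coded by a person); the goal is simply to
# predict them. The feature vectors and labels are hypothetical.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(training_set, features):
    """Return the label of the closest labeled example (1-nearest-neighbor)."""
    nearest = min(training_set, key=lambda ex: distance(ex[0], features))
    return nearest[1]

# Each example: (feature vector, human-supplied label).
training_set = [
    ((0.9, 0.8), "denzel"),      # features from a photo labeled by a person
    ((0.1, 0.2), "not_denzel"),  # features from some other photo
    ((0.85, 0.75), "denzel"),
]

print(predict(training_set, (0.88, 0.79)))  # → denzel
```

Swap in a deep network for the nearest-neighbor rule and the shape of the problem is unchanged: the system still only predicts the labels humans gave it.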
Almost all the machine learning you read about is supervised learning. Deep learning, neural networks, decision trees, random forests, logistic regression: all train on labeled datasets.
In unsupervised learning, again you define a goal. A very common unsupervised learning technique is clustering (e.g., the well-known k-means clustering). Again, the goal is very well-defined: find clusters that minimize some mathematical cost function, for example one where the distances between points in the same cluster are small and the distances between points in different clusters are large. All of these goals are so well-defined they have a mathematical formalism:
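For k-means, for instance, the objective is the standard within-cluster sum of squares (shown here as a representative formula; it may not be the exact one the original post displayed):

```latex
\operatorname*{arg\,min}_{S} \; \sum_{i=1}^{k} \sum_{x \in S_i} \left\lVert x - \mu_i \right\rVert^2
```

where \(S = \{S_1, \dots, S_k\}\) is the partition of the points into k clusters and \(\mu_i\) is the mean of the points in cluster \(S_i\).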
This formula feels very different from how humans specify goals. Most humans don’t understand these symbols at all; we rarely state our goals formally. In fact, a “goal-oriented” mindset in a human is unusual enough that it has a special term.
In reinforcement learning, you define a reward function to reward (or penalize) actions that move towards a goal. This is the technology people have been using recently for games like chess and Go, where it may take many actions to reach a particular goal (like checkmate), so you need a reward function that gives hints along the way. Again, not only a well-defined goal, but even a well-defined on-the-way-to-goal reward function.
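Here is a toy sketch of that idea: tabular Q-learning on a made-up five-position line, where the agent must walk from position 0 to position 4. Every number in it (the goal reward, the small on-the-way hints, the learning constants) is an arbitrary choice for this sketch, but notice that the reward function is, once again, a completely specified function written by a human.

```python
import random

GOAL = 4  # the agent must reach position 4 on a line of positions 0..4

def reward(state, next_state):
    """Human-defined reward: big prize at the goal, small hints along the way."""
    if next_state == GOAL:
        return 10.0
    return 0.1 if next_state > state else -0.1

# Tabular Q-learning over actions -1 (step left) and +1 (step right).
Q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
random.seed(0)
for _ in range(500):  # episodes
    s = 0
    while s != GOAL:
        if random.random() < 0.2:                       # explore
            a = random.choice((-1, 1))
        else:                                           # exploit
            a = max((-1, 1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), GOAL)
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, b)] for b in (-1, 1))
        Q[(s, a)] += 0.5 * (reward(s, s2) + 0.9 * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy: the preferred action at each non-goal state.
print([max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(4)])
```

After training, the greedy policy moves right from every state. Change the reward function and the learned behavior changes with it; the agent never questions the goal.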
These are types of artificial intelligence (“machine learning”) that are currently hot because of recent huge gains in accuracy, but there are plenty of others that people have studied.
Genetic algorithms are another way of solving problems, inspired by biology. One takes a population of mathematical constructs (essentially functions), and selects those that perform best on a problem. Although people get emotional about the biological analogy, the fitness function that defines “best” is still a concrete, completely specified mathematical function chosen by a human.
There is also computer-generated art. For example, deep dream (gallery) is a way to generate images from deep learning neural networks. This would seem to be more human and less goal-oriented, but in fact people are still directing it. The authors described the goal at a high level: “Whatever you see there, I want more of it!” Depending on which layer of the network is asked, the features amplified might be low-level (like lines, shapes, edges, and colors) or higher-level (like objects).
Expert systems are a way to make decisions using if-then rules on a formally expressed body of knowledge. They were somewhat popular in the 1980s. These are a type of “Good Old Fashioned Artificial Intelligence” (GOFAI), a term for AI based on manipulating symbols.
Another common difference between human and artificial intelligence is that humans learn over a long time, while AI is often retrained from the beginning for each problem. This difference, however, is being narrowed. Transfer learning is the process of training a model, then using or tweaking that model in a different context. It is standard industry practice in computer vision, where new deep learning networks are often initialized with features from previously trained networks (example).
One interesting research project in long-term learning is NELL, Never-Ending Language Learning. NELL crawls the web collecting text, and trying to extract beliefs (facts) like “airtran is a transportation system”, along with a confidence. It’s been crawling since 2010, and as of July 2017 has accumulated over 117 million candidate beliefs, of which 3.8 million are high-confidence (at least 0.9 of 1.0).
In every case above, humans not only specify a goal, but have to specify it unambiguously, often even formally with mathematics.
Types of human intelligence
What are the types of human intelligence? It’s hard to even come up with a list. Psychologists have been studying this for decades. Philosophers have been wrestling with it for millennia.
IQ (the Intelligence Quotient) is measured with verbal and visual tests, sometimes abstract. It is predicated on the idea that there is a general intelligence (sometimes called the “g factor”) common to all cognitive ability. This idea is not accepted by everyone, and IQ itself is hotly debated. For example, some believe that people with the same latent ability but from different demographic groups might be measured differently, called Differential Item Functioning, or simply measurement bias.
People describe fluid intelligence (the ability to solve novel problems) and crystallized intelligence (the ability to use knowledge and experience).
The concept of emotional intelligence shows up in the popular press: the ability of a person to recognize their own emotions and those of others, and use emotional thinking to guide behavior. It is unclear how accepted this is by the academic community.
More widely accepted are the Big Five personality traits: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism. This is not intelligence (or is it?), but illustrates a strong difference with computer intelligence. “Personality” is a set of stable traits or behavior patterns that predict a person’s behavior. What is the personality of an artificial intelligence? The notion doesn’t seem to apply.
With humor, art, or the search for meaning, we get farther and farther from well-defined problems, yet closer and closer to humanity.
Risks of artificial intelligence
Can artificial intelligence surpass human intelligence?
One risk that captures the popular press is The Singularity. The writer and mathematician Vernor Vinge gives a compelling description in an essay from 1993: “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”
There are at least two ways to interpret this risk. The common way is that some magical critical mass will cause a phase change in machine intelligence. I’ve never understood this argument; “more” doesn’t mean “different.” The argument is something like “as we mimic the human brain more closely, something near-human (or super-human) will happen.” Maybe?
Yes, the availability of lots of computing power and lots of data has resulted in a phase change in AI results. Speech recognition, automatic translation, computer vision, and other problem domains have been completely transformed. In 2012, when researchers at the University of Toronto re-applied neural networks to computer vision, the error rates on a well-known dataset started dropping fast, until within a few years the computers were beating the humans. But the computers were doing the same well-defined task as before, only better.
The more compelling observation is: “The chance of a singularity might be small, but the consequences are so serious we should think carefully.”
Another way to interpret the risk of the singularity is that the entire system will have a phase change. That system includes computers, technology, networks, and humans setting goals. This seems more correct and entirely plausible. The Internet was a phase change, as were mobile phones. Under this interpretation, there are plenty of plausible risks of AI.
One plausible risk is algorithmic bias. If algorithms are involved in important decisions, we’d like them to be trustworthy. (In a previous post, we discussed how to measure algorithmic fairness.)
Tay, a Microsoft chatbot, was taught by Twitter users to be racist and misogynistic within 24 hours. But Tay didn’t really understand anything; it just “learned” and mimicked.
Amazon’s facial recognition software, Rekognition, falsely matched 28 U.S. Congresspeople (mostly people of color) with known criminals. Amazon’s response was that the ACLU (which conducted the test) used an unreliable cutoff of only 80 percent confidence. (Amazon recommends 95 percent.)
MIT researcher Joy Buolamwini showed that gender identification error rates in several computer vision systems were much higher for people with dark skin.
All of these untrustworthy results arise at least partially from the training data. In Tay’s case, it was deliberately fed hateful data by Twitter users. In the computer vision systems, there may well have been less data for people of color.
Another plausible risk is automation. As artificial intelligence becomes more cost-efficient at solving problems like driving cars or weeding farm plots, the people who used to do those tasks may be thrown out of work. This is the risk of AI plus capitalism: businesses will each try to be cost effective. We can only address this at a societal level, which makes it very difficult.
One final risk is bad goals, possibly aggravated by single-mindedness. This is memorably illustrated by the paper-clip problem, first described by Nick Bostrom in 2003: “It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies.” There is even a web game inspired by this idea.
Understand your goals
How do we address some of the plausible risks above? A complete answer is another full post (or book, or lifetime). But let’s mention one piece: understand the goals you’ve given your AI. Since all AI is simply optimizing a well-defined mathematical function, that is the language you use to say what problem you want to solve.
Does that mean you should start reading up on integrals and gradient descent algorithms? I can feel your eyeballs closing. Not necessarily!
The goals are a negotiation between what your business needs (human language) and how it can be measured and optimized (AI language). You need people who can speak to both sides. That is often a business or product owner in collaboration with a data scientist or quantitative researcher.
Let me give an example. Suppose you want to recommend content using a model. You choose to optimize the model to increase engagement with the content, as measured by clicks. Voila, now you understand one reason the Internet is full of clickbait: the goal is wrong. You actually care about more than clicks. Companies modify the goal without touching the AI by trying to filter out content that doesn’t meet policies. That is one reasonable strategy. Another strategy might be to add a heavy penalty to the training dataset if the AI recommends content later found to be against policy. Now we are starting to really think through how our goal affects the AI.
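The "heavy penalty" idea can be sketched in a few lines. Here, logged interactions are converted into training examples, and any clicked item later found to violate policy becomes a heavily weighted negative instead of a positive. The field names, feature values, and penalty weight are all hypothetical; the point is that the goal change lives in the training data itself.

```python
# A sketch of penalizing policy-violating content in the training data,
# rather than filtering recommendations after the fact. All names and
# values here are hypothetical.

PENALTY_WEIGHT = 10.0  # how strongly to punish recommending bad content

def training_example(item):
    """Convert a logged interaction into (features, label, sample weight)."""
    if item["violates_policy"]:
        # Treat it as a hard negative with a heavy weight, even if clicked.
        return item["features"], 0, PENALTY_WEIGHT
    return item["features"], int(item["clicked"]), 1.0

log = [
    {"features": [0.2, 0.9], "clicked": True,  "violates_policy": False},
    {"features": [0.8, 0.1], "clicked": True,  "violates_policy": True},
    {"features": [0.5, 0.5], "clicked": False, "violates_policy": False},
]

dataset = [training_example(item) for item in log]
print(dataset[1])  # → ([0.8, 0.1], 0, 10.0): clicked, but heavily penalized
```

A model trained on this dataset is optimizing a different goal than raw clicks, even though the model code itself is untouched.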
This example also explains why content systems can be so jumpy: you click on a video on YouTube, or a pin on Pinterest, or a book on Amazon, and the system immediately recommends a big pile of things that are almost exactly the same. Why? The click is usually measured in the short-term, so the system optimizes for short-term engagement. This is a well-known recommender challenge, centered around mathematically defining a good goal. Perhaps a part of the goal should be whether the recommendation is irritating, or whether there is long-term engagement.
Another example: if your model is accurate overall, but your dataset or measurements don’t cover under-represented minorities in your business, you may be performing poorly for them. Your goal may really be to be accurate for all sorts of different people.
A third example: if your model is accurate, but you don’t understand why, that might be a risk for some critical applications, like healthcare or finance. If you have to understand why, you might need to use a human-understandable (“white box”) model, or explanation technology for the model you have. Understandability can be a goal.
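As a tiny illustration of what "white box" can mean, here is a one-rule decision stump whose entire learned decision can be printed and read by a human. The data and the brute-force fitting procedure are invented for this sketch.

```python
# A sketch of a human-understandable ("white box") model: a one-rule
# decision stump. The data and feature meanings are hypothetical.

def fit_stump(data):
    """Find the single 'feature > threshold' rule that classifies best."""
    best = (-1, 0, 0.0)  # (number correct, feature index, threshold)
    for f in range(len(data[0][0])):
        for x0, _ in data:
            t = x0[f]  # try each observed value as a candidate threshold
            correct = sum((x[f] > t) == bool(y) for x, y in data)
            if correct > best[0]:
                best = (correct, f, t)
    return best[1], best[2]

# (features, label) pairs; imagine feature[1] is some meaningful signal.
data = [([1.0, 7.0], 1), ([2.0, 6.5], 1), ([3.0, 2.0], 0), ([4.0, 1.0], 0)]
feature, threshold = fit_stump(data)
print(f"IF feature[{feature}] > {threshold} THEN positive ELSE negative")
# → IF feature[1] > 2.0 THEN positive ELSE negative
```

A deep network might squeeze out a bit more accuracy, but no one can print its reasoning as a sentence. When understandability is part of the goal, that trade-off is the whole point.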
Conclusion: we need to understand AI
AI cannot fully replace humans, despite what you read in the popular press. The biggest difference between human and artificial intelligence is that only humans choose goals. So far, AIs do not.
If you can take away one thing about artificial intelligence: understand its goals. Any AI technology has a perfectly well-defined goal, often expressed as a labeled dataset. To the extent the definition (or dataset) is flawed, so too will be the results.
One way to understand your AI better is to explain your models. We formed Fiddler Labs to help. Contact us to talk to a Fiddler expert!