Artificial intelligence is slowly catching up with us. AI algorithms can now consistently beat us at chess, poker and multiplayer video games, generate images of human faces that are indistinguishable from real ones, write news articles (not this one!) and even love stories, and drive cars better than most teenagers do.
But AI isn't perfect yet, if Woebot is any indicator. Woebot, as Karen Brown wrote this week in Science Times, is an AI-powered smartphone app that aims to provide low-cost counseling, using dialogue to guide users through the basic techniques of cognitive-behavioral therapy. But many psychologists doubt whether an AI algorithm can ever express the kind of empathy required to make interpersonal therapy work.
“These apps really shortchange the essential ingredient that – mounds of evidence show – is what helps in therapy, which is the therapeutic relationship,” Linda Michaels, a Chicago-based therapist who is co-chair of the Psychotherapy Action Network, a professional group, told The Times.
Empathy, of course, is a two-way street, and we humans don't exhibit much more of it for bots than bots do for us. Numerous studies have found that when people are placed in a situation where they can cooperate with a benevolent AI, they are less likely to do so than if the bot were an actual person.
“There seems to be something missing regarding reciprocity,” Ophelia Deroy, a philosopher at Ludwig Maximilian University in Munich, told me. “We basically would treat a perfect stranger better than an AI.”
In a recent study, Dr. Deroy and her neuroscientist colleagues set out to understand why that is. The researchers paired human subjects with unseen partners, sometimes human and sometimes AI; each pair then played a series of classic economic games – Trust, Prisoner's Dilemma, Chicken and Stag Hunt, as well as one they created called Reciprocity – designed to gauge and reward cooperativeness.
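To see how games like these measure cooperation, consider a minimal sketch of one of them. This is purely illustrative code, not the study's own: it uses the textbook payoff matrix for a one-shot Prisoner's Dilemma and an assumed always-cooperating bot, to show how a player who trusts the bot can exploit it.

```python
# Illustrative sketch (not the researchers' code): a one-shot
# Prisoner's Dilemma with standard payoffs, paired with a bot
# that always cooperates.

# Payoffs as (player, partner) points; "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # lone cooperator is the "sucker"
    ("D", "C"): (5, 0),  # defector exploits the cooperator
    ("D", "D"): (1, 1),  # mutual defection
}

def play_round(human_move, bot_move="C"):
    """Return (human, bot) payoffs; by assumption the bot cooperates."""
    return PAYOFFS[(human_move, bot_move)]

# A human confident that the bot will cooperate can defect for 5 points,
# leaving the benevolent bot with 0 -- the pattern the study reports.
print(play_round("D"))  # (5, 0)
print(play_round("C"))  # (3, 3)
```

The payoff structure makes defection individually tempting even though mutual cooperation earns more in total, which is exactly the tension these games use to measure trust.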
Our lack of reciprocity toward AI is commonly assumed to reflect a lack of trust. After all, the thinking goes, AI is hyper-rational and unfeeling, surely out only for itself, incapable of cooperation, so why should we cooperate with it? Dr. Deroy and her colleagues reached a different and perhaps less comforting conclusion. Their study found that people were less likely to cooperate with a bot even when the bot was eager to cooperate. It's not that we don't trust the bot; it's that we do: the bot is guaranteed benevolent, a sucker, so we exploit it.
That conclusion was borne out by conversations with the study's participants afterward. “Not only did they tend not to reciprocate the cooperative intentions of the artificial agents,” Dr. Deroy said, “but when they betrayed the trust of the bot, they did not report guilt, whereas with humans they did.” She added, “You can just ignore the bot, and there is no feeling that you have broken any mutual obligation.”
This could have real-world implications. When we think about AI, we tend to think about the Alexas and Siris of our future world, with whom we might form some kind of faux-intimate relationship. But most of our interactions will be one-time, often wordless encounters. Imagine driving on a highway when a car wants to merge in front of you. If you notice that the car is driverless, you will be far less likely to let it in. And if the AI doesn't account for your bad behavior, an accident could ensue.
“What sustains cooperation in society at any scale is the establishment of certain norms,” Dr. Deroy said. “The social function of guilt is precisely to make people follow social norms that lead them to compromise, to cooperate with others. And we have not evolved to have social or moral norms for non-sentient creatures and bots.”
That, of course, is half the premise of “Westworld.” (To my surprise, Dr. Deroy had not heard of the HBO series.) But a landscape free of guilt could have consequences, she noted: “We are creatures of habit. So what guarantees that the behavior that gets repeated, where you show less politeness, less moral obligation, less cooperativeness, will not color and contaminate the rest of your behavior when you interact with other humans?”
There are similar consequences for AI, too. “If people treat them badly, they're programmed to learn from what they experience,” she said. “An AI that was put on the road and programmed to be benevolent would start to be not that kind to humans, because otherwise it would be stuck in traffic forever.” (That's the other half of the premise of “Westworld,” basically.)
There you have it: the true Turing test is road rage. When a self-driving car starts honking wildly from behind because you cut it off, you'll know that humanity has reached the pinnacle of achievement. By then, hopefully, AI therapy will be sophisticated enough to help driverless cars with their anger-management issues.