Amy Ingram was trying to schedule a meeting earlier this month when she received a sharp response.
One of the meeting participants called her a “nag” for sending “PERSISTENT emails.”
The exchange might raise eyebrows in any office, but this one was notable for a different reason. Amy Ingram is not actually a person. It is a personal assistant powered by artificial intelligence. The product, from a startup called X.ai, has only one purpose: scheduling meetings by email.
Apple, Google, Amazon and Microsoft have all invested in voice assistants connected to smartphones and smart speakers. More than 35 million Americans are expected to use a voice-activated assistant at least once a month this year, according to an estimate from research firm eMarketer. And that doesn’t even take into account chatbots and products like Amy Ingram.
But as tech companies introduce more advanced assistants and chatbots, some of these conversations inevitably turn hostile.
Maybe Apple’s Siri mistakenly calls your boss at night instead of your girlfriend. Or you’re upset that your meeting is being rescheduled and decide to shoot the automated messenger.
Call it version 2.0 of banging on your keyboard or throwing a mouse at the wall. And now, the machine talks back.
“Many of the interactions are you and the machine,” Dennis Mortensen, founder and CEO of X.ai, says about most of the assistant products on the market. “In that setting, people tend to feel comfortable applying their frustration.”
This creates a new challenge for tech companies: how seriously should the products respond to the hostility?
Sherry Turkle, director of the MIT Initiative on Technology and Self, says there’s more at stake here than just the customer experience. Venting at machines could lead to a “coarsening of how people treat each other.”
“You yell at Alexa… you know Alexa is a machine,” Turkle told CNN Tech by email. “We treat machines as though they were people. And then, we are drawn into treating people as though they were machines.”
There is little data on how often people rage against the new machines. Tech companies mostly stay vague on the details. Amazon declined to comment and Apple did not respond to a request for comment.
Cortana, Microsoft’s personal assistant, fields curses and offensive language on a daily basis. It falls to Cortana’s in-house editorial staff to craft the right response.
“While Cortana is always ready and eager to have a productive and positive conversation, the goal is to shut down offensive behavior immediately,” says Deborah Harrison, a writer on the Cortana editorial team.
When someone calls Cortana a b****, for example, Cortana responds by saying, “Well, that’s not going to get us anywhere.” Try hurling the same insult again and you’ll get the same response, potentially limiting the incentive to repeat it.
Siri offers a similar mix of cute responses to foul language, but the same curse can elicit a number of replies. Tell Siri to “go f*** yourself,” and it might respond with, “I’d blush if I could,” or “I’ll pretend I didn’t hear that.”
It’s not just curses that elicit these types of responses. Complain to Siri that “you’re not helping” and she might reply with “If you say so.”
However, this approach may not be enough to curb our worst behavior or address the user’s underlying grievances. What’s needed, according to Cynthia Breazeal, is something more than “a system that fails and gives you a cute response.”
“This is a hard problem to solve,” says Breazeal, founder and chief scientist at Jibo, a startup developing a robot for the home. She expects the “next big thing” will be a deeper understanding of the user’s frustration and what can be done to fix it.
IBM attempts to do some of this now. Its Watson AI platform powers customer service chatbots and “virtual agents” for clients. IBM’s team relies on tone and emotion analysis to detect and deal with angry customers, according to Rob High, CTO of IBM Watson.
The first time Watson detects a user getting angry, it tries to “interpret and respond to their intention,” High says. The second time, Watson may apologize and say it doesn’t understand. If the incident continues beyond that, Watson goes to the last resort: actual people.
“We will offer to turn it over to a live agent,” High says.
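The escalation High describes, first answering the intent, then apologizing on a repeat, then handing off to a person, amounts to a simple three-step loop. Below is a minimal sketch of that flow in Python, assuming a stand-in anger detector; the function names, wording and thresholds are illustrative, not IBM’s actual Watson API.

```python
def detect_anger(message: str) -> bool:
    """Stand-in for a tone/emotion analysis step (an assumption, not IBM's)."""
    angry_markers = ("useless", "terrible", "ridiculous", "!!")
    return any(marker in message.lower() for marker in angry_markers)


def respond(message: str, anger_count: int) -> tuple[str, int]:
    """Return a reply plus the running count of angry turns in the session."""
    if not detect_anger(message):
        return "Here is what I found.", anger_count

    anger_count += 1
    if anger_count == 1:
        # First angry turn: keep trying to interpret and answer the intent.
        return "I hear you. Let me try that again.", anger_count
    if anger_count == 2:
        # Second angry turn: apologize and admit the misunderstanding.
        return "I'm sorry, I don't seem to be understanding you.", anger_count
    # Anything past that is the last resort: a live agent.
    return "Let me connect you with a live agent.", anger_count


# Example session: the count persists across turns, so repeated anger escalates.
reply, count = respond("This is useless!!", anger_count=0)
reply, count = respond("Still useless!!", anger_count=count)
```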
Of course, anger is only one possible explanation. Some users may simply want to vent, or test the limits of what they can and can’t get away with.
Mortensen of X.ai says he would like to see a “penalty” imposed on users who are hostile to personal assistants, “just like in real life.” One idea he tosses out: the assistant’s response speed could drop precipitously.
“Amy’s response speed [could go] from being almost instant to, ‘I’ll just let you cool off for a little bit’ so you’ll get a response in two hours.”
“You’d know immediately that you lost,” he says.
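As a thought experiment only, the penalty Mortensen imagines could be as simple as swapping the assistant’s near-instant reply window for a long cool-off delay whenever a message is flagged as hostile. The sketch below assumes a hostility flag already exists; the two-hour figure comes from his example, while the five-second “almost instant” figure is an assumption, and none of it reflects X.ai’s actual product.

```python
NORMAL_DELAY_SECONDS = 5               # assumed stand-in for "almost instant"
COOL_OFF_DELAY_SECONDS = 2 * 60 * 60   # the two-hour cool-off Mortensen describes


def reply_delay_seconds(message_is_hostile: bool) -> int:
    """Decide how long the assistant waits before answering (illustrative only)."""
    return COOL_OFF_DELAY_SECONDS if message_is_hostile else NORMAL_DELAY_SECONDS
```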
Then again, the startup might lose customers, too.