Should Robots Have To Identify Themselves?
If you call a fine restaurant, talk to a maître d’, reserve a table, and later find out “he” was a souped-up version of Siri, does it really matter? You got your reservation, and probably got it more efficiently than if you had talked to a human being. Well, to some people this scenario is the stuff of nightmares. Should humans get full disclosure when they're talking to a software robot with the ability to carry on intelligent conversation? Read on...
Smart Robots on the Phone?
“Horrifying” is what Zeynep Tufekci called it in a Tweet following a demonstration of Duplex, Google’s (still unborn) souped-up Assistant, which claims the ability to converse naturally with humans. The professor of sociology and frequent contributor to The New York Times went on: “Silicon Valley is ethically lost, rudderless and has not learned a thing.” She may have a point.
Google I/O, the company’s annual love-fest for software developers, was rife with demonstrations (read, “mock-ups”) of things that the uber-geek class believes are good for humankind. As one Silicon Valley wag put it, most of those things are “things your Mom will no longer do for you,” like making an appointment for a haircut. That was the demo that prompted Tufekci’s horrified Tweet.
Duplex was introduced by Google CEO Sundar Pichai, who revealed to a stunned room that “a big part of getting things done is making phone calls” to auto mechanics, to plumbers, and yes, to “even schedule a haircut appointment.” (Real haircuts do not require appointments; but then, my stylist is an old-school barber.)
Pichai earnestly assured the rapt audience of Millennial geeks that everyone in Google “is working hard to help users through those moments,” as if calling for an appointment is akin to chemotherapy.
He then shook his head over the shocking, sad fact that “even in the U.S., sixty percent of small businesses do not have an online booking system.” I must do 99% of my business with that 60% of small businesses. I guess the 40% who have online booking systems are too busy to answer my legacy phone calls for information or appointments.
“We think AI can help with this problem,” said Pichai. I think it’s good for an infant technology like AI to start with an infantile problem like this one before tackling - and perhaps massively worsening - adult problems of small businesses, like access to capital; recruiting, training, and retaining employees; and keeping afloat above a deepening sea of regulations and laws.
Pichai then explains what the audience is about to hear and see. Google Assistant is going to call a hair salon and get you an appointment for a haircut next Tuesday between 10:00 a.m. and noon.
“Here is Google Assistant actually calling a real salon to schedule an appointment for you,” declares Pichai. I’m going to come back to those words before this article ends. For now, just watch the video and form your own opinion; skip to the 1:00 mark if you’re in a hurry.
"As I Was Saying..."
Duplex was not the only example of how AI might do for you what your Mom no longer will do. Remember how she used to finish your sentences for you when you tried to talk?
Pichai also demonstrated “Smart Compose,” a feature that will help you complete emails faster and (arguably) better in Gmail. The operative buzzword here is “machine learning.”
Gmail will learn your writing style and suggest, in popup windows, phrases or entire sentences that make writing a 500-word email as easy as TAB-TAB-TABing through all the suggestions, allowing them to fill your word budget with… something. Sounds like autocorrect on steroids, and we all know how well autocorrect works.
Smart Compose is rolling out now to users of free Gmail, but you must enable the new Gmail web interface. Paying customers of G Suite for Business will get it after the initial crop of bugs is squashed.
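For readers curious about what “machine learning” means here, below is a toy sketch in Python - purely illustrative, and nothing like Google's actual Smart Compose system - of how a writing assistant could learn which word a user habitually types after a given phrase and suggest the most likely continuation. (The class name, training phrases, and three-word context window are all made up for the example.)

from collections import Counter, defaultdict

class PhraseSuggester:
    """Toy 'learn my writing style' model: counts which word tends to follow
    each three-word phrase in emails the user has already written."""

    def __init__(self, n=3):
        self.n = n                          # words of context to condition on
        self.model = defaultdict(Counter)   # prefix -> counts of the next word

    def learn(self, email_text):
        # Record, for every n-word prefix, which word followed it.
        words = email_text.lower().split()
        for i in range(len(words) - self.n):
            prefix = tuple(words[i:i + self.n])
            self.model[prefix][words[i + self.n]] += 1

    def suggest(self, typed_text):
        # Offer the most frequent continuation of what the user has typed so far.
        words = typed_text.lower().split()
        prefix = tuple(words[-self.n:])
        if prefix not in self.model:
            return None
        return self.model[prefix].most_common(1)[0][0]

# Train on a few old emails, then suggest a completion as the user types.
suggester = PhraseSuggester()
suggester.learn("thanks for your note i will follow up next week")
suggester.learn("thanks for your note i will follow up tomorrow")
suggester.learn("thanks for your note i will send the report")
print(suggester.suggest("Thanks for your note I will"))   # prints "follow"

The real feature is vastly more sophisticated, of course, but the basic idea - predict what you were probably going to type anyway - is the same.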
Still in Beta?
Back to Duplex. I hope you’ve watched the video so you will understand what I am about to say.
Pichai either lied to all of Google’s fawning fans when he said, “Here is Google Assistant actually calling a real salon…”, or he took a real phone call and made it sound so fake my dog would tilt his head quizzically if he heard it. Consider the evidence:
What business answers its phone with the abrupt and peremptory, “Hello, how can I help you?” Even my gruff old barber barks, “Angel’s! How can I help you?” Both voices speak too fast and too monotonously. In fact, they are difficult to tell apart by their tones, inflections, and other vocal characteristics. It is highly unlikely that two random strangers sound that much alike. It’s as if Assistant was speaking on both sides of the call.
But let's assume it was a real, unedited conversation -- because soon it will be. What troubles some people is that AI is developing at such a rapid pace, with no oversight, and apparently little thought as to the ethics involved. With the brightest minds in robotics and computer science, and billions of dollars to throw at the problem, Google and others could be blurring the line between carbon and silicon. Surely an argument could be made that AI will help to solve human problems, but unintended consequences are sure to arise.
Is Full Disclosure Necessary?
I recently called my insurance company, and a robotic voice asked me to speak my request, assuring me that "he" would understand phrases such as "make a payment" and connect me to the right department. I was fine with that, because it was obvious that I wasn't speaking to a human.
In the Duplex demo, Google Assistant is the “end user” placing the call, and the goal is to sound like a real person. The female voice even inserts little nuances such as "Umm," "Uh," and "Mm-hmm" into the conversation to imitate a human caller. Lack of disclosure is the difference here.
Each of us can identify with the robotic caller, and perhaps secretly revel in fooling the poor human appointment-setter. But what if the roles were reversed, as in my restaurant example in the opening paragraph of this article? Would you be annoyed or amused to learn that you’d been fooled by a machine? Are smart robots that can understand and engage in human conversation a good thing, or... horrifying?
Your thoughts on this topic are welcome. Post your comment or question below...
This article was posted by Bob Rankin on 16 May 2018
Article information: AskBobRankin -- Should Robots Have To Identify Themselves? (Posted: 16 May 2018)
Source: https://askbobrankin.com/should_robots_have_to_identify_themselves.html
Copyright © 2005 - Bob Rankin - All Rights Reserved
Most recent comments on "Should Robots Have To Identify Themselves?"
Posted by:
gene
16 May 2018
I've had many recent experiences with AI. I have yet to be fooled by one, and they're pretty stupid still. Even when I say the EXACT thing they asked for, they don't get it right. Xfinity and USAA are the two worst offenders that I deal with.
But, yes, this will change, and rapidly. So, I DO want to know if I'm dealing with a machine. Just personal preference. Until they are able to understand inflection, nuance, and idiom as well as humans - even humans in India. Maybe even then. I don't see the harm in stating upfront "I am an automated assistant" or something similar. Else? What are they trying to hide, and why?
Posted by:
Silvano
16 May 2018
Yes, I definitely think we should know if we are talking to a machine or a human. By not disclosing this fact, companies would be misleading us since we all think that there is a person at the other end of the line.
Also, I do NOT want to do business with a company that uses machines rather than hire people.
Posted by:
David
16 May 2018
Yes. No other answer makes sense.
Silvano, you've never gone through the robot voice message maze, I see. Your luck is better than mine.
Posted by:
Ralph Balch
16 May 2018
I guess that's why I've become more and more disconnected. I prefer face to face and have a tendency to hang up when the damn computer garbles my request. That "I'm sorry, I didn't understand your answer" gripes me to no end. I usually wind up sending a handwritten registered letter. Amazingly, that often gets the desired results.
Posted by:
Bill Dickens
16 May 2018
Couldn't care less. Would happily accept being a receiver or sender of an AI conversation. Better still, have your AI talk to my AI.
Posted by:
bret
16 May 2018
It's not the talking-to that is scary. It's the talking to a listener that is quietly and efficiently cataloguing your answers and building up a profile of data on YOU! We already have a big problem with our identity and data while surfing the net or withdrawing from an ATM; now we'll eventually have a bigger problem from just the phone calls and face-to-face meetings with these things (come to think of it - facial recognition, fingerprints, a retina scan of YOU while you're physically conversing with one - is that not scary enough?)... The profit motive for companies is too big, so yes please, impose full disclosure if you're conversing with one over the phone - eventually these AI companies will perfect the technology so that they will be indistinguishable from humans. Better yet, get a step ahead for once and come up with an international convention, law, or whatever, banning this whole data-gathering and profiling scam.
Posted by:
SysOp404
16 May 2018
The Google demonstration points to a giant leap in accuracy (desperately needed) that appears to be at hand. As Gene mentioned, the current state of automated systems is less than useful - exasperating, more often than not. Last week I too dealt with a primitive one that couldn't understand the simple yes/no response I gave to its query... (not programmed to understand my born-and-raised-in-the-upper-midwest dialect, I'm guessing?) Something, anything, would be an improvement, BUT...
Maybe this time around it will be more tolerable, though misunderstandings will still be likely. And when those occur, who do you suppose will come up on the short end? Is the new Duplex scheduling system going to take the hit and cough up the $35 that many doctors' offices charge if a patient misses an appointment... when the appointment is mistakenly set up by this new AI on the wrong day or time and is then logged as a no-show?
My preference is for machines to listen to me carefully and respond accurately in visual mode, only - with audio muted. So, I vote for clear identification of robotic-driven telephony... so I can hang up sooner, rather than later.
Posted by:
Pierre Laberge
16 May 2018
Being, believe it or not, an actual human being... I have noticed one thing: I prefer to deal with my species, rather than through a stumble bot.
If this govt agency (the worst criminals), or company, cannot even deal with a phone call, how the hell do they expect to deal with something any more complicated?
I tend to thus prefer to deal with local agencies or small firms. They seem to still recall that a computer pays them no money. I do.
Posted by:
Don Kallberg
16 May 2018
It depends on who versus what is originating the conversation. For a robot to be the originator, it isn’t necessary. But for a human as the originator, there needs to be a disclosure up front if it isn’t a real person responding. I believe there is a law that a ‘live’ person MUST be available to talk to on request. If AI gets so good that we can’t tell, how is that requirement met?
Posted by:
Dick
16 May 2018
"Have your AI talk to my AI" reminds me of yuppiedom back in the 80s and early 90s. "Have your girl call my girl and let's do lunch." lol
Next step: teaching your AI to tell a white lie when you don't want to keep that appointment, see that person...
Now...if the person voicing the ad for the latest ambulance chaser (If you were given this abdominal mesh during surgery, call us!) must read the disclosure "non-attorney spokesperson", then the AI must disclose as well.
Posted by:
Lady Fitzgerald
16 May 2018
Darned right there should be disclosure when dealing with an AI. It's been pretty obvious to me when dealing with one over the phone but not so much when dealing with one via chat or email. Before I figured out it was AI, I assumed the "person" I was dealing with was a total moron. Now, I realize it was an AI that was the total moron. AI is nowhere close to replacing a human yet.
Posted by:
retcop
16 May 2018
Google “is working hard to help users through those moments,”
WOW!!
I hope Google can provide those woosies with a safe space also.
Posted by:
Reg
16 May 2018
Not only should there be full disclosure but also it's time to institute Isaac Asimov's 3 Laws of Robotics now before it's too late.
Posted by:
PeteFior
16 May 2018
Why hasn't anyone brought up the issue of massive net job losses if this technology becomes workable? New jobs will be created, but nowhere near as many as the jobs lost to AI and automation!
Posted by:
Glen Cowgill
16 May 2018
5 minutes with AT&T's AI idiot is enough to make you want to actually get it in your hands and choke it. I find talking to AI a waste of time and insulting.
Posted by:
susmart
17 May 2018
“Lack of disclosure is the difference here.” A BIG difference.
I prefer to know if I’m being lied to by a *real person* who should be held accountable if they give me damaging misinformation - which happens more often than I would like - rather than an AI device which may be *lying to me on behalf of*… ? The CEO of the company? A low-level programmer? A “corporate person”?
While these humans are cheerfully and diligently working to program AI to *lie convincingly to impersonate a human,* I'd be better off if they'd just bring back The Talking Moose. I could have a more "human" experience with him.
Posted by:
Greg C
17 May 2018
1- Self identification is a MUST, but of course the entire object is to deceive the caller.
2- Yesterday I asked Amazon Echo, "How many tablespoons in a MEASURING cup?" It replied, "Sorry, I don't know." But my wife immediately asked it, "How many tablespoons in a cup?" and it replied, "Sixteen." Asking the question in JUST the right way is essential (for now).
3- The meaning of this sentence changes depending on which word is emphasised (try every word to see what I mean): "I didn't say she stole my money."
Posted by:
Guy
01 Aug 2018
Well, from what I've read the overwhelming response has been for AI to identify itself, and I fully agree. I really dislike talking to a moron AI, and I also dislike what they call "phone trees," since I can hardly ever get to where I want to go. I would much rather talk to a "real person" to get what I want done. It's much simpler that way, and to hell with the company's bottom line; they make more in a day than I've made in a lifetime, and it's disgusting. Hire real people to answer the phone and pay some wages to them. On another note, I refuse to use a self-checkout at the store because they're just trying to put people out of work, and I won't help them do that.
Posted by:
Alan
15 Aug 2018
There's a device for that! ;-)
https://curiosity.com/topics/how-to-tell-if-youre-talking-to-a-computer-curiosity/