There have been some negative reactions to the recent demonstration of Google Duplex, in which the AI assistant called a hairdresser to make an appointment.

The naturalness of the voice and speech patterns is astounding. The use of fillers and minimal encouragers – the umms and ahhs – is particularly impressive. It is only a matter of time before we have AI-based phone service, help desks, and crisis support/response in practical use.

The backlash to the demonstration cited in the article centred on the idea of deception: that “[the] digital assistant could mimic a human over the phone so convincingly the person on the other end of the line had no idea they were talking to a machine … adding ‘ummm’ and ‘aaah’ to deceive the human...”

Thinking about this, I came to one question: does it matter? The potential of the technology is separate from the potential for someone to use the technology to deceive and defraud. The backlash conflates the two.

When the cognitive and speech patterns of an AI are indiscernible from those of a human, the AI effectively becomes “human” in the context of that interaction. The intent and outcome of the interaction then become the focus. An AI making an appointment with a human by voice, text, or semaphore has the same intent and delivers the same desired outcome: an appointment is made.

If the intent is to defraud the human, then we as a society would respond to that intent the same way we respond to any similar activity. The laws may need to be clarified, but we already have mechanisms for dealing with bad or criminal behaviour.

Likewise, using an AI to provide clinical therapy services when the AI is not capable of doing so (and thus must engage in subterfuge to cover up its shortcomings) falls into the same bucket as a human doing the same without the appropriate qualifications and accreditation.

This technology is morally neutral. What people use it for is potentially the problem, and when they misuse it, society will have mechanisms in place to deal with it.

Having said this, I think it may be prudent for AIs to identify themselves as such before each interaction, at least in the early stages of deployment. This would help ease people into the idea that they could be talking to a machine. And we should continue to educate ourselves on critical thinking and healthy scepticism when we interact with anyone/anyAI on the phone. Forcing AIs to disclose their nature will not make those who seek to defraud others comply. (Perhaps we should also require marketing phone calls - robot or human - to identify themselves as such at the start of each call. Not that this will stop the phone scammers.)

Banning AIs from using umms and ahhs is just silly and shortsighted. It would be akin to banning robots from using facial expressions. These mechanisms play powerful roles in rich interpersonal engagements; they are baked into our species. And yes, they can be used to cheat and deceive, which returns us to the fact that we already have mechanisms to punish these undesired activities.

If we know up front before each interaction that we will be dealing with an AI, how will that change our behaviour? And should it?

Have a look at this effort by Microsoft: