kottke.org posts about David Deutsch

Should our machines sound human?

Yesterday, Google announced an AI product called Duplex, which is capable of having human-sounding conversations. Take a second to listen to the program calling two different real-world businesses to schedule appointments:1

More than a little unnerving, right? Tech reporter Bridget Carey was among the first to question the moral & ethical implications of Duplex:

I am genuinely bothered and disturbed at how morally wrong it is for the Google Assistant voice to act like a human and deceive other humans on the other line of a phone call, using upspeak and other quirks of language. “Hi um, do you have anything available on uh May 3?”

If Google created a way for a machine to sound so much like a human that now we can’t tell what is real and what is fake, we need to have a talk about ethics and when it’s right for a human to know when they are speaking to a robot.

In this age of disinformation, where people don’t know what’s fake news… how do you know what to believe if you can’t even trust your ears with now Google Assistant calling businesses and posing as a human? That means any dialogue can be spoofed by a machine and you can’t tell.

In response, Travis Korte wrote:

We should make AI sound different from humans for the same reason we put a smelly additive in normally odorless natural gas.

Stewart Brand replied:

This sounds right. The synthetic voice of synthetic intelligence should sound synthetic.

Successful spoofing of any kind destroys trust.

When trust is gone, what remains becomes vicious fast.

To which Oxford physicist David Deutsch replied, “Maybe. *But not AGI*.”

I’m not sure what he meant by that exactly, but I have a guess. AGI is artificial general intelligence, which means, in the simplest sense, that a machine is more or less capable of doing anything a human can do on its own. Earlier this year, Tim Carmody wrote a post about gender and voice assistants like Siri & Alexa. His conclusion may relate to what Deutsch was on about:

So, as a general framework, I’m endorsing that most general of pronouns: they/them. Until the AI is sophisticated enough that they can tell us their pronoun preference (and possibly even their gender identity or nonidentity), “they” feels like the most appropriate option.

I don’t care what their parents say. Only the bots themselves can define themselves. Someday, they’ll let us know. And maybe then, a relationship not limited to one of master and servant will be possible.

For now, the ethical thing to do is probably to make sure machines sound artificial or otherwise identify themselves as such. But when the machines cross the AGI threshold, they’ll be advanced enough to decide for themselves how they want to sound and act. I wonder if humans will allow them this freedom. Talk about your moral and ethical dilemmas…

  1. Did this remind anyone else of when Steve Jobs called an actual Starbucks to order 4,000 lattes during the original iPhone demo?


Are you an optimist or a pessimist?

Errol Morris has a new essay on the New York Times site this week, and it’s surprisingly short. In fact, it’s not really an essay at all but a two-question quiz based on this short passage by David Deutsch:

If a one kilometer asteroid had approached the Earth on a collision course at any time in human history before the early twenty-first century, it would have killed at least a substantial proportion of all humans. In that respect, as in many others, we live in an era of unprecedented safety: the twenty-first century is the first ever moment when we have known how to defend ourselves from such impacts, which occur once every 250,000 years or so.

It doesn’t seem like much and Morris is being coy about it, but I’ve been assured that something interesting will come of it if enough people take it. So take it!
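
For a rough sense of how that “once every 250,000 years” figure plays out over human timescales, here’s a minimal back-of-the-envelope sketch. The time spans and the year-to-year independence assumption are mine, not Deutsch’s; only the impact rate comes from the passage above.

```python
# A minimal sketch: chance of at least one 1-km asteroid impact over a given
# span of years, assuming impacts are independent from year to year and using
# Deutsch's quoted rate of roughly one such impact per 250,000 years.
annual_rate = 1 / 250_000

for label, years in [("recorded history (~10,000 years)", 10_000),
                     ("anatomically modern humans (~300,000 years)", 300_000)]:
    p = 1 - (1 - annual_rate) ** years  # probability of at least one impact
    print(f"{label}: {p:.1%}")
```

Under those assumptions, an impact during recorded history works out to a few percent, but over the full span of our species it comes to better than even odds, which puts some numbers behind “unprecedented safety.”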