Ethical implications of making a chatbot using someone's voice or likeness

LEILA FADEL, HOST:

Thanks to AI technology, there are now chatbots that can use your voice and sometimes even your likeness. And this can be done without your consent. So all of this got me thinking about whether this is morally OK. It's something I asked James Brusseau. He teaches philosophy at Pace University in New York and has written about how to make AI more ethical. As someone whose voice is literally their job, I wanted to know, will chatbots replace radio hosts like me?

JAMES BRUSSEAU: I can tell you that they're not going to replace you immediately. And I'm fairly certain of that because, in fact, I am trying to make a chatbot avatar of myself in coordination with a computer science department where I work in Italy, and we cannot yet make a conversational agent that can produce anywhere near the kind of conversation that I can produce, even though it's true that we can reproduce the way I sound. That part we have more or less under control.

FADEL: And what are the ethical issues when you think about reproducing people's voices? Because I was listening to that and I was thinking, but at some point, does it make you redundant...

BRUSSEAU: (Laughter) Yeah, yeah.

FADEL: ...If it gets to be too much of a good professor?

BRUSSEAU: Right. I mean, it's only one short step from there to say, well, why have me at all in the first place? And we do need humans interacting with others to see and understand what kinds of new issues are arising and so on. That's where the training data for these kinds of avatars comes from. So we will always need philosophers to update the traditional ideas for the contemporary reality. But it's true that, inevitably, this technology will reduce the number of philosophers who have jobs. That's true.

FADEL: But are there ethical issues that come with the possible benefits?

BRUSSEAU: So think of the way we treat a simple tool like a pen - if the pen runs out of ink, we just throw it out and replace it with another. We don't even hesitate. Whereas, by contrast, if a human being runs out of food, we don't just throw the human away and replace them with another, right? We have a sense, in the way we live, that humans are not like things - and that is the idea of dignity. It's that quality of humanity, that quality of not being replaceable, not being substitutable by something else, even if it looks the same and acts the same. And we can argue that there is a violation - that these chatbots violate someone's dignity - when they are made of someone without that person's consent.

FADEL: So there's now technology out there that allows you to create what are being called death bots, which use the voice of somebody who's passed away and, in some cases, even their physical likeness. What are your thoughts on using material from someone who has died to create something in their likeness?

BRUSSEAU: I think that there is fundamentally a different human relationship between those we love, members of our family, and those who we do not know.

FADEL: Yeah.

BRUSSEAU: And within the context of dignity that I just outlined, there is a different kind of relationship that people have with those they love. And part of that is a willingness to sacrifice, in some sense, your dignity for each other.

FADEL: Yeah.

BRUSSEAU: If a death bot were used by a loved one to support themselves, then it may be, in some abstract sense, a violation of this notion of dignity that we discussed. But this is a way that families and loved ones coexist together.

FADEL: Are these chatbots or will these chatbots change the way we as humans communicate with each other?

BRUSSEAU: There is a real debate surrounding this subject. And I can give sort of a quick example of that from...

FADEL: Yeah.

BRUSSEAU: ...From another professor. He was asked to write a recommendation for one of his students, and the student also sent along a prompt, suggesting the professor use it with ChatGPT to write the recommendation. And if this is the way things develop, then the written part of a recommendation will just be ignored, and the only part people will pay attention to is the numerical part that is usually in a recommendation.

FADEL: So it'll be like, oh, a bot wrote this, so we can ignore it.

BRUSSEAU: Right. Exactly. And then the result will be that recommendation letters will no longer have a narrative component. They will just be numerical. And so in that sense - in this case, at least - it seems as though ChatGPT is not coming up to our level; rather, it's dragging us down to the level of computer numerical interaction. This is the way we grade each other - through numbers instead of qualitative words.

FADEL: That's James Brusseau, who teaches philosophy at Pace University. Thank you so much.

BRUSSEAU: Thank you.

(SOUNDBITE OF DYLAN SITTS' "PAST LIFE LIMBO")

Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.

Leila Fadel is a national correspondent for NPR based in Los Angeles, covering issues of culture, diversity, and race.
