
Unpacking the OpenAI-Scarlett Johansson controversy


SCOTT DETROW, HOST: Artificial intelligence once again found itself at the center of a controversy this week. Actor Scarlett Johansson says she was approached by tech company OpenAI to be the voice of ChatGPT. The actress, who, of course, once starred as an AI voice in the movie "Her," said no thanks. But when the company released a voice assistant named Sky, it sure sounded a lot like Johansson. Hey, ChatGPT. How are you doing?

SKY: I'm doing fantastic. Thanks for asking. How about you?

DETROW: Though OpenAI maintains it is not an attempt to clone the actor's voice, CEO Sam Altman wrote in a statement yesterday that, quote, "out of respect for Ms. Johansson, we have paused using Sky's voice in our products." But Charlie Warzel says this week's kerfuffle is just a sign of things to come. He's a staff writer at The Atlantic, and as I just told him before this taping, he is my favorite writer to explain the internet. Welcome to ALL THINGS CONSIDERED.

CHARLIE WARZEL: Thank you for having me.

DETROW: So of all the controversies and headlines to come from AI in the past couple of years, this feels like it's really stuck with people. Why do you think that is?

WARZEL: Well, I think that there is, broadly, a sort of low-level concern about generative AI, specifically that it is based off of the broad output of human knowledge work, right? The articles we write, the pictures we take, just sort of the human creative output of the internet is what these models are all trained on. And I think that there's, broadly speaking, a concern that these tech companies are sort of harvesting that and turning it into a product that is going to make them rich.

And having this happen to a celebrity, you know, someone with a lot of power, with a lot of influence who says, no, I don't want to be used and then perhaps, you know, the company goes and does it anyhow - I think that that is a bit of a wake-up call moment for certain people who are paying attention to say, look, whoa, if they're going to do this to her, I mean, what are they going to do, you know, when it comes to little old me?

DETROW: Yeah. And when you wrote about this this week in The Atlantic, you made the argument that this is basically a microcosm of what's to come. Can you explain what you meant by that?

WARZEL: Yeah. There is this feeling in generative AI, right? OpenAI is sort of the poster child company for this movement, and they have this really lofty goal of creating AGI, artificial general intelligence, which is a human level of intelligence. And they say, you know, if they were to do that, which they clearly haven't yet, it could usher in a future of unknown and unprecedented prosperity, right?

It's these very grand gestures of almost this utopian world. There's this feeling that we're going to do whatever we can, and you really can't stop us. And yes, there might be some negative downstream consequences in the short term, but trust us - we are doing something, and it's just too important for you to try to stop us. And I think that rubs people the wrong way.

DETROW: And there's this flip side of that argument, though, that if we don't do it, a place like China will. Is there any validity to that "or else" point?

WARZEL: I think we don't know, right? I mean, there's two ways to look at the AI movement right now. And one is the reality on the ground, which is that there's a lot of generative AI and all these different productivity tools, ChatGPT usage for coding or to help you write a paper or something like that. There are uses for this technology, but they are, you know, still somewhat small. And then the other way to think about it is this idea of a potentially transformative - right? - intelligence that, you know, would almost, in a way, be like a weapon.

And so in terms of the second one, I mean, using it as a geopolitical excuse, I understand that it's very convenient for all these companies, but we just don't know what these things are capable of, you know, in the future. It's really still a guessing game. And it also means taking these technology companies' word for it that they're close to building something like this, that it's even possible.

DETROW: You ended your recent piece with an observation. I'm just going to read a sentence of it - quote, "Hubris and entitlement are inherent in the development of any transformative technology, but generative AI stretches this dynamic to the point of absurdity." Given that, what do you think happens next?

WARZEL: I think it's pretty difficult to say, right? I think one possible angle that could, you know, put a roadblock in the generative AI movement is probably a series of successful lawsuits against these technology companies that have to do more with, you know, use of likeness or copyright infringement. And that could sort of slow the gears down a little bit. But I also believe that the only thing that can truly slow this thing down is the progress of the innovation itself. There is the chance that, you know, people are sort of hyping this more than they should, but there's also the chance that we continue to see these breakthroughs. And I don't know, in that sense, if we're going to be able to stop it, as long as there's investors and people using the tools and money to be made.

DETROW: Charlie Warzel, staff writer at The Atlantic. Thanks so much.

WARZEL: Thanks for having me.

Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.

Scott Detrow is a White House correspondent for NPR and co-hosts the NPR Politics Podcast.