On this week’s Galaxy Brain episode, Charlie Warzel is joined by New York Times technology reporter Tiffany Hsu to discuss the rise of AI influencers—synthetic avatars, often indistinguishable from real people, that are flooding social-media feeds to sell supplements and promote brands. Hsu unpacks her reporting on the combination of forces converging around this trend, including the wellness industry, a historically fertile ground for scammers. The pair discuss how the volume of synthetic content online is producing a new kind of epistemic exhaustion: a fatigue so deep that many people have simply stopped caring whether what they’re seeing is real. So is authenticity already beside the point? And is an audience’s emotional response—rather than the truth behind the image—the only currency that matters?
The following is a transcript of the episode:
Tiffany Hsu: You have to create such a ridiculous volume of content, and it all has to feel fresh. Yeah; I can totally see why someone would be tempted to just make it on a computer.
[Music]
Charlie Warzel: I’m Charlie Warzel, and this is Galaxy Brain, a show where today we’re going to talk about AI influencers. There’s this post online that I think a lot about. It’s from Zachary Galia, who is this social-media content strategist. It reads: “Every post is a battle for three seconds. Platforms keep multiplying, content is seemingly endless, and attention spans are shorter than ever. Your content needs to capture your audience’s attention immediately and hang on to it for dear life.”
Maybe that stat makes your stomach drop a bit, as it does for me. What’s unquestionable, though, is that the war for attention—which is fought primarily online across this host of algorithmic infinite-scroll platforms—is being fought at almost inhuman speeds. Obsessive content marketers are in the volume game. Brands and influencers have adopted this buckshot-style approach to hawking their wares and attracting eyeballs.
And that can mean doing multiple posts about the same subject or product, but from different angles and locations—all of it to see if they can find some way to hit the sweet spot of the algorithm and go viral. It’s spamming as a strategy. And I think it’s a part of the reason why our feeds just feel so cluttered and chaotic.
And trying to feed the algorithmic beast at these inhuman speeds has meant enlisting the help of, well, not-humans. Late last year, the venture-capital firm Andreessen Horowitz invested in this company called Doublespeed, an AI company that does, quote, “bulk content creation.” In other words, it’s a bot farm. Doublespeed’s marketing is purposefully troll-y, with claims that it is, quote, “automating attention,” allowing people to create, quote, “one video a hundred ways.” The banner on the company’s website reads: Never pay a human again.
Now, it’s no secret that the internet is filling up with synthetic material or AI slop. The SEO company Graphite recently found that, beginning around November 2024, the internet experienced a slop tipping point—in which the quantity of AI-generated articles being published on the web surpassed the quantity of articles written by humans. But it’s not just text. Advances in generative-AI audio and video have meant that it has never been easier to create fake influencers from a few short text prompts. These are real-looking people. Often attractive, sometimes scantily clad women. And they’re selling real products. They’re attracting real eyeballs.
Now, some of these online influencers are pretty easy to spot, but others are good enough that they’re duping people. And in some cases, it seems almost impossible to know for certain whether a specific influencer is real or not.
We are, in essence, now just living in the uncanny valley. The stakes are real. For influencers who are worried about their jobs—but also for all of us, the people who are out here navigating this blurred reality and just trying not to get scammed or duped. So are AI influencers here to stay, or are they just this passing fad? Has the internet tipped into synthetic slop for good? How can people learn to spot what’s real and what’s fake?
To help me through all of this, I spoke with Tiffany Hsu. She’s a technology reporter at The New York Times who covers the information ecosystem, including foreign influence, political speech, and disinformation. Tiffany’s been reporting on the rise of AI influencers selling supplements and other products, and she joins me now to make sense of this very strange new world.
[Music]
Warzel: Tiffany, welcome to Galaxy Brain.
Hsu: Thanks for having me.
Warzel: So I wanted to start here. Let’s just talk very broadly. Who is Melanskia? Am I even saying that name right? Who is that person?
Hsu: Well, first of all, she’s not a person. She’s an AI avatar who looks stunningly like a real person and has apparently fooled many, many of her followers—I think it’s now more than 300,000—into believing that she is an honest-to-god human being. She’s not; she’s AI. She is meant to be an Amish lady who has several children, who posts about what you shouldn’t eat. Clean living. She talks about how she would never buy supermarket rotisserie chicken, which I saw as a personal affront. Yeah; I mean, she talks a big game about health and wellness, which is kind of surprising, given that she does not have a body.
Warzel: And that’s because she’s a generative-AI avatar, correct?
Hsu: Exactly.
Warzel: So how did you stumble upon this account? Maybe more broadly, we could talk about: How did you become interested in AI influencers, or avatars in general? But also, thinking specifically, how did you stumble upon this one and decide this was an area of reporting inquiry?
Hsu: So a mom friend actually texted me one day, and she said, “So I found this account, and pretty sure it’s not a real person. I’m pretty wigged out. What do you think?” And I’ve been writing about AI for a while now. I see a lot of AI. I know what the usual tells are. But I looked at Melanskia, and I was like, This is incredible. Just purely from a technical standpoint, she’s very impressive.
And so I popped the link to her account into the group chat with the other disinformation reporters at the Times, and it just blew up. My colleagues were like, What the hell is this? How did they manage to get Costco looking so real, like down to the labels of the products that she’s holding up in the aisles that she’s walking through? We were kind of stunned at how sophisticated her account was.
Warzel: You tracked Melanskia down to her creator—or whatever we’re calling this, the creator of a creator. You know, a Russian nesting doll of individuals here. Josemaria Silvestrini, who is this entrepreneur you describe as using these AI avatars to promote brands.
Some of them are linked to, as you report, supplement brands, which is its own industry that is dubiously regulated and kind of a little bit of a Wild West in some ways. But what did you learn about Josemaria?
Hsu: My colleague Ken Bensinger, whom I wrote the story with, managed to track him down. We were actually pretty surprised that he was willing to talk. Because even though there are countless posts on social media from people being like, “DM me for tips on how to make tens of thousands of dollars, hundreds of thousands of dollars, on AI influencers,” it’s still pretty murky. And people are not especially willing to chat. But Josemaria was like, “Yeah, dude, let’s talk.”
So I should back up and say that Melanskia is not Josemaria’s only game. He’s got a network of creators that he’s kind of supervising. But he’s not the one who’s creating the avatars. He outsources their creation to other people, and he basically pays them to talk up his products.
Warzel: So a way, maybe, to think about it is: He is both the creator of this supplement product and also, in a way, almost like an agent for these fake influencers for his product. Like, he kind of has this stable of people, which is in its own right very weird. And yes, it was really striking to me that Josemaria—there’s a great picture in the article of him sitting at what looks like a Parisian-style outdoor cafe. He’s smiley, he’s got the laptop. What was his attitude toward this? This is just the brave new world of business. Get on board or be left out.
Hsu: Yeah, basically. I mean, in his mind, he’s building the brand. You know, he’s like, Score. I got a mention in The New York Times; got some earned media out of this. I think for a lot of folks who are creating these AI avatars as, like, influencers or as advertising vehicles, they don’t think it’s sketchy. Even though a lot of them, including Melanskia, don’t disclose that they’re AI. To them, it’s just a more cost-effective, efficient way to market something. If you are able to create an avatar from scratch using AI, you can customize them to do whatever you want, to look however you want, to pitch whatever product you want. You don’t have to pay the fees that you normally do.
While researching this, I went online—always a dangerous thing—but I went online, and I looked at some of the people who are selling tutorials on how to create AI influencers. And, you know, they have taglines like Full guides to making $30,000 a month with AI influencers. There’s one company that’s pitching how to create beauty influencers specifically, which is really weird. Again, because if you’re AI generated, you don’t have skin to use skin care. But the ad for these tutorials says, you know, You don’t have to pay for influencers. Don’t have to pay for studio shoots. You don’t have to pay for the products to be sent out to you. We can make avatars that show a morning routine that can replicate bathroom lighting, that can do simple product close-ups, casual camera angles. It’s really easy to replicate that authentic feeling that makes human influencers so popular.
Warzel: Well, not only that, too, right? I found this great PowerPoint presentation made by this marketer online at the end of the last year that kind of goes through the digital trends of the year. And it’s, like, obsessive. It’s over 300 pages. Right. And it’s talking about what brands, what influencers need to do. One of the things they were showing is, I think they used Wimbledon from 2025 as an example. And it was like, Wimbledon had posted thousands upon thousands of social clips during the tournament—which is only a couple of weeks long, right? It’s just more content than you could ever imagine.
And the idea was like: This is the strategy. It is total bombardment. It’s like many, many times a day. And if you’re an individual influencer and not a brand, you should be posting the same message, like, four different ways, from four different types of locations. Try it at the bus stop, try it in your bathroom, try it at the grocery store. See what hits, right? And it’s just this constant sense of iteration to see what the algorithm is looking for that day, what it’s going to reward—then going back and doing that. And it seems to me that these AI avatars are like … that’s a godsend for this, right? Because it’s simple. You can have all those at-bats just by creating these prompts.
Hsu: Yeah, you copy-pasta yourself, or you copy-pasta your own AI avatar, I guess. But yeah; it’s just so much easier. I’ve talked to a lot of influencers in the past who said the job looks easy, right? You’re just pointing a camera at yourself, and you’re talking to it. But it’s actually really labor intensive, because of exactly what you said. You have to create such a ridiculous volume of content, and it all has to feel fresh. Yeah; I can totally see why someone would be tempted to just make it on a computer.
Warzel: Some of those influencers you’ve spoken with—what do they make of this trend? Is this, like, existential for them? Is this kind of like, Well, you know, they can’t do what we do. Where are their minds around that?
Hsu: I mean, I don’t know that I can exaggerate the number of times I’ve heard the word panic from influencers. Because they’re like, “Oh my God, these computer bots are coming for us.” And it’s true. You know, I think they realize that what they do is so easily done now by a computer that they have to differentiate themselves somehow. I know some of them are hoping that legislation is going to help them. I mean, let’s be frank—it’s not. Legislation, when it comes to AI, is never detailed enough. And it’s always behind, just because AI itself moves so quickly. But there are laws; there’s one in New York that goes into effect in June that requires disclosure. But then you get into the whole thorny issue of whether or not the audience cares if an influencer is AI generated. Let’s say that Melanskia did disclose that she was AI generated. You could really make the argument that A), people aren’t going to see it. You know, we see this a lot with AI, where even if there’s some sort of note somewhere that says, “This is an AI-generated piece of content,” you go into the comments and people are like: “Whoa, this is crazy. I can’t believe this happened.” Or, “Wow, you’re so gorgeous. What’s your number?” Or the second possibility—and to me this is a really scary one—is that people just don’t care. They’re like, “We know you’re a collection of pixels, but we like what you’re saying. We like what you’re showing us, and we are influenced.” And I’m seeing that a lot.
Warzel: So I saw a video of this blonde woman who looked like a soldier named Jessica Foster, who at scroll speed fooled me, right? I didn’t really pay much attention to it. But then I went back up, and I was like, Wait, Donald Trump’s meeting with, you know, this soldier in the Oval Office. It kind of, you know—Spidey Sense went off. And the post I saw that I felt really captured this well was, “Well; what’s the difference between this fake soldier woman and some of the other people who are real who some of these same men will see on Instagram? Both are unattainable in the same way.”
As in unattainable, not like in a dating sense, but in a, “I’m never going to meet this person,” right? Like, I’m never going to live the life. I’m never going to have access to the spheres of where they are—whether that’s on a battlefield or in the Oval Office or in Hollywood or in a penthouse by a pool. And I think that that’s a very real thing for a lot of people, right? Like the influencer is meant to be an avatar—even if they’re real—for a life that is aspirational. An AI avatar can do a lot of that, and it’s going to be just as hard, you know, for someone to feel like it’s attainable.
Hsu: Yeah. I mean, influencing, at its root, is an exercise in wishful thinking, right? So it doesn’t matter if, you know, you’re hoping fruitlessly that you’re going to be this real person or this AI-generated person. My team normally covers disinformation. And we came across this idea a lot when we were covering the hurricanes a while back in North Carolina, where there were a lot of AI images being posted of the devastation. There was one showing, I think it was a little boy, I want to say, who was sitting on a raft with a dog. Drenched, crying.
So this image gets posted by a local Republican official. And immediately people on X are like, “This is AI generated; this is not real.” And she responds in kind of a stunning way. Which is to say, “I don’t really care where this image came from.” Like, “It hurts my heart,” or “The feeling is real.” And what’s happening a lot on social media, just because there’s so much content, is that people are getting fatigued. They are so exhausted at having to parse what’s real and what’s fake that a lot of them are just saying, like, “If it makes me feel a certain way, that’s all I care about.”
Warzel: But do you think that fatigue goes the other way? With people saying, You know what, I’m becoming generally over the influencer thing. Because of the fact that it’s like, if you’re not even going to invest the bare minimum humanity into this product—that is, this product is used by actual people in some capacity—I’m going to bow out of it. Do you think it could sort of erode and taint the whole idea of the influencer-like ideal? Or do you think maybe it just supercharges it?
Hsu: No; I think you’re totally right. Aerie, this bra company, posted in October this pledge that they were only going to use real people. So they said, “A few years ago, we stopped retouching our models. And now we’re going to pledge to never use an AI body.” And that post was by far their most popular post of the entire year. I think you have a lot of brands that are catching on that, you know, people are over AI influence.
But then, on the other hand, you have avatars like Aitana Lopez. She is described as an influencer who’s based in Barcelona, who’s a fitness girl. She has pink hair. I mean, she discloses herself as being AI generated, but she has nearly 400,000 followers. She has been posting photos of herself in, like, Schiaparelli dresses at Paris Fashion Week. She’s got a brand deal with Alo Yoga. You know, her creators have been quoted saying that she makes up to 10,000 euros a month. So I think there is some resistance to this trend, but they’re still really popular.
Warzel: Do you feel like some of this is just a literacy issue? Or do you feel like, again, it’s really a change in what we expect, what we consume, and a lack of really caring?
Hsu: So I think that’s a really good question. I think the answer is—it’s a little bit of both. I did a story late last year where I shadowed a high-school class, and they were learning about disinformation and media literacy. But they had really interesting perspectives on AI influencers, which they independently brought up to me. Where they said, “You know, some of them are really hard to identify as being AI.” You know, a couple of them mentioned after Sora came out, there were a series of posts that featured Jake Paul, the YouTuber, supposedly embracing his queer identity and like doing makeup videos, whatever. A couple of the kids were saying, “We were almost duped by that.” And so for them, in part, it’s a media-literacy issue. To know that AI is capable of producing stuff like that is something that they want a lot of their peers to be on top of. But on the flip side, they raise a perspective that I think journalists often forget now, just because AI is so crazy—which is that AI is really impressive. Like, it’s used to create incredible things. So a lot of these kids are themselves experimenting with Google Veo or Nano Banana Pro or Seedance. And, you know, they’re playing. So to them, having the possibility of influence via AI avatar is like a fun thing.
You know, one of them mentioned this influencer studio. It’s basically like a modeling agency, but for AI avatars. It’s from a company called Higgsfield. They have a program called AI Influencer Studio. And you can pick from so many different options. It’s like Sims for the new age, right? You can choose whether you want your avatar to be human, an ant, or an iguana or an elf, right? You can choose genders, like trans man or nonbinary. Can choose skin color, eye color. You can choose, like, a skin condition that includes like burns and dry, cracked skin. They have settings like forked tongue, big or small horns, or fish skin, right? So I think, to a certain demographic, like messing around with this is a really great time. It’s just, they’re doing it for kicks.
Warzel: Right. And that makes sense. I think it is important to keep the context in mind here. There’s sort of two buckets of this, right? One is that bucket of being able to play around. That these are tools; these are forms of AI. You know, detractors and critics will probably bristle at the phrase, but there is an art form to this, a kind of creative expression to all of this. And I think that we can’t discount how it can be fun, and it can just be a different way for people to move through the world with these avatars, right? And then I think even in the brand space, in the selling space, right? We’ve got Tony the Tiger; we’ve got people who sell things that aren’t real, right? And yes; it’s obvious that there’s not a talking tiger who’s real and selling you cereal, or whatnot. But there is this idea of, like, “characters sell things.” They always have; they always will. Brands have mascots. And so I think there’s precedent, probably, for a lot of this stuff. And there’s a lot of this that isn’t pearl-clutching panic.
And yet I think what interested me about your story that you did with Ken, and the reporting, is the way too in which it overlaps with this world of less-than-regulated supplements, and things like that, right? And so I think, that seems to me to be the concern here, right? That in this moment—where we’re all figuring out how to understand this, how to read this, up our literacy on all this—you have this group of people who are moving in. And they’re actually trying to take advantage of the fact that there is a lot of confusion in this space.
Hsu: Oh yeah. I talked to Tim Caulfield, who is a professor of health science up in Canada. He put it pretty succinctly, where he was like: People have realized that AI avatars are a great and easy way to make money. And now the scammers are like, “Hey, let’s hop in on that.” And the wellness space has always been a magnet for scammers.
I’m a mom, and I remember both times I was pregnant, the absolute amount of insane content I would get served on social media. And the idea that fake people who seem real could be selling that sort of stuff to me, with like a voice of authority, is pretty scary. Because I think—more so than fashion or beauty—wellness stuff, people are really willing to trust. Because most people don’t have the kind of science background you would need to really parse through what is snake oil and what is like a legit thing.
And so, if they see someone who seems like they know what they’re talking about, who seems to be talking from experience, they’re more inclined to trust. So wellness scamming has a long, illustrious history of bilking a whole lot of people. And you’re right. The presence of AI avatars in that space is only going to make that worse.
Warzel: So when you were doing the reporting, and it led you all to this guy, Josemaria—who’s sort of the keeper of this, the guy running the brand, but not making the avatars themselves—did you all get into how these avatars are created? There’s obviously these programs that exist. But the people who are sort of mercenaries for some of these folks, right? Did you get a sense of where the hotbeds of this AI-avatar creation are?
Hsu: Because it’s so easy to make an AI avatar, you don’t even need to be working in conjunction with other people. You know, this can be a do-it-yourself thing. There are guides all over the internet teaching random people how to do this. I could probably do it if I felt like it, pretty quickly. Because the regulations are so confused, and they’re so lax, not a lot of the platforms that offer AI generation are going to stop you from creating an AI avatar. ’Cause there’s nothing inherently illegal about creating a fake person who says certain things about a certain product. You could run afoul of rules about scamming, but the platform isn’t going to be able to monitor that. It’s not in their interest to put in a lot of resources into overseeing how people are using the characters that they’ve created with the program.
Warzel: Well, and you mentioned, with regulation, that in December, the governor of New York signed the nation’s first legislation, and that explicitly required this disclosure of quote unquote “synthetic performers” in certain advertisements. You said it’s kind of DOA in some sense, or it’s just not really going to have this impact. Can you say more about that?
Hsu: Look, I think it is helpful to have the legislation, just because it sets a precedent for other attempts to regulate. I think the more practiced lawmakers get trying to regulate this space, the better they’re probably going to get at it. It also demonstrates that the authorities care, that they recognize that there’s a problem. Even if the way they’re going about issuing consequences is a little bit hazy. I mean, it’s just—so many of these creators are anonymous. A lot of them are operating from outside the country. I mean, this has been a problem with all sorts of AI legislation, whether it has to do with deepfake porn or political deepfakes. Dealing with commercial scammers is not super high on the average legislator’s priority list. The deepfake porn is the area that most folks are really, really up in arms about at the moment.
Warzel: But even there, it’s also a volume game, right? It’s like, if you were going back to the old paradigm of people scamming, these things are conducted at human scale. And now at artificial-intelligence scale, and with agentic AI and the swarms of AI agents and doing stuff. It just sort of exponentializes the ability to do this stuff. So it seems like it goes from whack-a-mole to, you know, whatever … some inhuman game of whack-a-mole.
Hsu: I mean, I’ve talked to a lot of victims of deepfake porn or AI-generated threats. And some of them have said that they’ve complained, and they’ve managed to get the platforms to either block or remove the responsible accounts. And then another account pops up a day later and starts targeting them again. Like it’s turbo whack-a-mole; exactly what you said.
Warzel: Where do you think this goes from here? Because I get this sense that we’re in … I can talk myself into it two different ways, right? That this is the first inning of a very long game, the likes of which are going to get stranger and stranger and more dystopian. Although maybe we grapple with it. And the other, I think, too, is this idea that yes, the world is getting weirder, more unpredictable. There’s more tools for scammers and folks like this. But also this—like we were talking about earlier—this exhaustion. In, you know, being bombarded by this digital stuff. The internet becoming less and less human, and people dropping out, or not finding themselves all that interested in it.
When you think about what you’re trying to anticipate and what’s coming next, do you fall into one of those two camps? Do you have a different thought about where all this goes?
Hsu: Given our track record as humans, I’m not incredibly optimistic about us, you know, putting our foot down collectively as a society and saying, “Stop this AI nonsense.” I do think we’re going to run into a lot of issues with societal trust. I mean, that’s already happening. My colleagues and I just wrote a story about the “liar’s dividend”—which is the phenomenon that happens when the prevalence of AI makes it so that people can more easily discount actual footage, real footage. Which is what’s happening with the proof-of-life video that the prime minister of Israel has had to circulate.
Warzel: Explain that for a second, for people who will be less informed about this idea that Israeli Prime Minister Benjamin Netanyahu is dead. Explain that just for a second.
Hsu: Oh God; how many hours do we have? So in a nutshell, Netanyahu gives a speech that is recorded. And in the reshared versions of the recording, in certain frames he kind of looks like he has six fingers on one hand. Now, extra fingers is, or was, a pretty common tell that AI was involved. Now, AI’s gotten much, much better, so that’s not really the case anymore. Versions of that video start circulating, where people are like, My God, he has six fingers. This is AI generated. Obviously, he’s dead. Because we always jump to that conclusion now—that the world leader is dead. So amazingly, a few days later Netanyahu posts on his social platforms a video of himself at a café outside of Jerusalem, gesticulating very clearly with his five-fingered hands. I mean, this is, it’s a proof-of-life video. And, as far as I know, it’s the first proof-of-life video to directly address being deepfaked from a world leader, especially one as prominent as him.
Okay. So he posts this video. It is verified in several different ways. The café itself posts its own set of images, separately from Netanyahu’s, showing him ordering and talking to people. You know, several deepfake professionals analyze that video frame by frame. They’re like, “We don’t see any signs of digital manipulation here.” Regardless, the internet goes, “This proof-of-life video is also AI generated.” And you have people, some of them with millions of followers, post copycat videos of, like, the new leader of Iran doing the same things that Netanyahu did in that café. Or they show Netanyahu wearing a sports jersey in the café, just to prove how easy it is. And so the narrative just spiraled. It’s like people don’t trust the proof that was provided in response to the initial distrust of video proof.
Warzel: Doing reporting on all this disinformation stuff in like 2016, 2017, 2018, there was so much of this idea of like, “You think these Photoshopped images are bad? Just you wait till the AI stuff comes.” And like, the AI stuff was in the realm of the really bad videos of Will Smith eating pasta where his mouth detaches from his body. And there was sort of this, like, “Okay, yeah; I can see that happening.” But this idea of like, “No, the lines of reality will blur so fully that it just will be a free-for-all.” And so many people who received the articles that I wrote about this, or the reporting I did, were like, “Okay, that’s really alarmist.” And I think it’s fair to say that we are just actually living in that future. Like, that was a, like, 100 percent success-rate prediction. And I think that it’s not where I think a lot of people would go with, you know, an immediate jump from AI avatars to this. But you’re totally right. The more that this technology comes into our lives in mundane ways, the more we expect to see it in these unprecedented, really high-stakes ways. And the more that people can basically say whatever they want to say and have plausible deniability.
Hsu: Yeah. People are so desensitized to it, too. We keep writing about how bold a lot of social-media creators are becoming now. Because, I mean, the [Trump] administration is a meme factory. Like, we have a president who communicates through AI images and digitally warped images of real events.
Warzel: I believe they refer to them as bangers, so that’s the White House term of art.
Hsu: Right. My God, I got a comment back from the White House when I was writing about this photo of a protester in Minneapolis who had been taken into custody. You know, the original photo is posted by some branches of the government, showing her, you know, pretty composed. She’s walking with an agent. And then the White House posts a photo of her with her skin darkened, and she’s sobbing. And I reached out to the White House for comment on this, and they’re like: “Justice will continue to be served. The memes will continue to be served.” Or something to that effect. And I was like, So this is our communication method now.
Warzel: Yeah; the paradigm is completely shifted in that sense. This is a little bit of a swerve here before we land the plane. But something I was thinking about is, you said: “I think I could create one of these avatars using these programs,” right? And I think I believe you. But could I get it to be, like, highly influential? Like if I had this thing to play with, is there still a skill game at this? It isn’t the slam dunk that people think. It’s actually like, these people are just really good at the game, and they just happen to do it with a costume on.
Hsu: Short answer is yes. I have a long answer. I’m going to tell you about my journalism white whale, which is that I have tried for years now, I think, to get a mention of one of my favorite movies into a story about AI. Which is the masterpiece called S1m0ne. Do you know this movie?
Warzel: I don’t think I saw it.
Hsu: I mean, honestly, it was not popular. I don’t think many people saw it. But for some reason, I love this movie. Mostly because it keeps echoing across my work. So in short, Al Pacino plays a director who’s down on his luck, because he’s not having any success. So he essentially creates an avatar named Simone, who he convinces everyone is a real person. Simone goes on to win Oscars. She, like, runs for office and wins. And Al Pacino’s character at some point is like, My God, I’ve created a monster. He takes all of the CDs that Simone is stored on, and he tries to, like, bury them in the ocean. And he gets, like, accused of murder. It’s a really convoluted, messy story. But the fact that an AI or a synthetic character manages to convince the entire world that she’s real, and she is able to exert huge amounts of influence, has always stuck with me. And I think increasingly more so, because it seems like something like that could happen now. That you could get someone who understands the way media works, who understands the way Hollywood or social media or audiences in general work. And you could easily have someone who creates a character that really is compelling to a lot of people.
Warzel: So real quick: What I want to do, I want to talk about a specific Melanskia video. The aforementioned rotisserie-chicken video with the caption “Most people buy this every day.” And I would love if you could walk me through the tells here. How people who are just scrolling through their feeds and stuff are going to be able to, like, distinguish this. You know, I’m seeing five fingers, for example, so it’s not that. What are some of these tells here?
Hsu: This is a little gross, but if you look at the way the chicken is dripping … I don’t think rotisserie chickens drip quite so lusciously. So there’s that.
Warzel: It is disturbing.
Hsu: Yeah, right. I don’t want to look at that more than I need to. One of the biggest tells for Melanskia specifically is that if you go to the grid of her overall account, you notice that she’s always kind of positioned exactly the same way. She’s always looking at the camera exactly the same way. She makes very similar facial movements and hand gestures. That tends to be a tell, but it’s hard to notice that in a single video. Other—sorry, go ahead.
Warzel: She’s always kind of lit with the same golden-hour light; like that’s what I see from the grid. It always looks like it’s 5 p.m. in the summer, you know. And she’s outside, and sometimes it looks like she’s … I mean, she’s inside in some of these. She’s like in Costco.
Hsu: I mean, that’s actually a really good tell, is lighting. Often with AI influencers, the lighting isn’t natural. Like, it kind of looks like they’re being lit from all sides instead of just from one direction. If you look at some of the older videos, she looks a little bit different.
Warzel: Oh wow; yeah.
Hsu: That tends to happen with a lot of the longer-running accounts. I don’t know if this is the case specifically with Melanskia, because she doesn’t really do super close-ups. But sometimes you’re able to look into the irises, and the reflections are different in both eyes. If you look along the hairline, often it’ll look a little blurry, or a little out of whack. Um, if there’s audio—audio is really, really good now, audio deep-faking, but sometimes they don’t breathe like a real person does. There aren’t as many … like, for example, the way I’m talking. I use a lot of ums, a lot of filler words. AI avatars do that less. But, of course, all of these things you can deal with if you’re a really good prompter.
Warzel: Yeah; I’m just watching this chicken drip. No liquid behaves that way. Anyway, sorry to derail things. But no; those are all great. In some ways, I’m half heartened by all of this, right? Because I can’t tell you, always, what it is. As you’re walking through that laundry list, I’m like, Yep, I see that. The hairline; that one was novel to me. The zooming in, looking at the irises. But there is just something that my brain still recognizes as suspect, right? And I’m also worried, because I’m like, Is this the last glimmer? Like, My brain—is this the last flickering of this instinct before I lose it?, you know.
Before this conversation—like, cards on the table—it’s sometimes hard for me to get really fascinated by AI influencers. Because I’m just like, Yeah, that’s not for me. That’s just not a thing; I’m not interested in engaging with an influencer who’s not human. And so how could anyone else be? And I’m like, it seems really plausible to me in the same way. It’s obviously very different. But if you were to tell, like, someone in 1989, “This guy Donald Trump’s gonna be president, right? And people are just gonna be like, ‘He’s a genius and a strategy master,’ and all this stuff.” Right? People would be like … Okay.
That’s what I’m thinking about as you’re saying this. Yeah; it’s not gonna happen tomorrow, or anything like that. Like, we’re not gonna have, you know, whatever, President Simone. But I do think it’s really interesting to think about all of that, and that kind of dynamic, in a world where this becomes more normalized. And also in a world where maybe that North Carolina disaster-politician ethos of “I don’t care that it’s not real; it speaks to me” takes hold. If those things marry in a way—culturally, politically, whatever—I do think it really brings up this question of like: Man, are we gonna get to a place where there are influential people who have gained people’s trust in ways that right now seem really absurd? It seems really plausible in that sense.
Hsu: Yeah. You know, I completely agree. So in my reporting, I think a lot about, like, identity and anonymity. Because so much of the really sketchy shit that happens, in my line of work, is done anonymously. And, you know, recently, Banksy, the artist, appears to have been identified, right? As, I think, some 50-something-year-old man in the U.K. It just got me thinking that, you know, for years, this guy—whom no one could really identify—was out there, like, changing the art world and commanding ludicrous prices at auction. And now that he’s been identified, is that going to change any of that? Do people really care? Or is it just the content that they’re interested in? And I don’t know if this is a tortured link back to AI avatars, but I think the same question is valid. Are we at a state where we really don’t care who’s behind a major cultural figure? And it’s really just the image that they’re putting out, or the product that they’re putting out, that is more compelling to the audience?
Warzel: Tiffany, this is, I think, a good place for us to leave it. I’m impressed with where we ended up getting to here from this. And I really feel like now I’m going to go and actually contemplate an AI-avatar politician and stare into the abyss. So thank you for that.
Hsu: Hey, let’s leave journalism and make some real money. Seems easy.
Warzel: I think it’s time. I think I found my off-ramp.
Hsu: Let’s do it. All right. Let’s make it happen. Thank you.
Warzel: This is great. I appreciate it.
[Music]
Warzel: That’s it for us here. Thank you again to my guest, Tiffany Hsu. If you liked what you saw here, new episodes of Galaxy Brain drop every Friday. You can subscribe to The Atlantic’s YouTube channel, or on Apple or on Spotify or wherever it is that you get your podcasts. And if you’d like to support this work and the work of the rest of my colleagues, you can do so by subscribing to the publication at TheAtlantic.com/Listener. That’s TheAtlantic.com/Listener. Thanks so much, and I’ll see you on the internet.
This episode of Galaxy Brain was produced by Renee Klahr and engineered by Miguel Carrascal. Our theme is by Rob Smierciak. Claudine Ebeid is the executive producer of Atlantic audio, and Andrea Valdez is our managing editor.