Reality Bytes

So-called ‘AI,’ performative ignorance, and old-fashioned human assholery are degrading reality as we know it.


WE’VE MOVED! You’re reading this on an archived version of The Flytrap that exists only to preserve links to content we published before August 27, 2025.



Credit: Rommy Torrico

In the late ’90s, I spent several miserable weeks digging myself out from under high school rumors that I had made a sex tape (!) for my 15-year-old boyfriend. It was an awful, lonely experience that seemed to drag on endlessly. I remember thinking, at just 16 years old: Will my life ever be different from this? Will this crap follow me forever?

At the time, I wouldn’t have believed you if you’d told me that the terrible winter of my sophomore year would fade almost entirely from my memory. That it would retreat so thoroughly from my brain that, 25 years later, I would struggle to remember the details. But that’s what happened. I forgot—really, really forgot—about the whole thing.

Looking back, the ordeal was equal parts preposterous and predictable. Preposterous because I was a known goody-two-shoes, a real inspiration-for-Jesus Camp-level teen scold who, both to my credit and my detriment, actually walked the True Love Waits walk (in Jncos and neon-yellow Adidas shell-toes). But that’s probably what made it so predictable too: I was a juicy target for a group of mean-girls-before-Mean Girls who knew exactly how to hit me where it hurt the most.

My long-buried memories of that experience resurfaced when, last month, I read about what Forbes called the “largest-known instance of deepfake pornography made of minors in the United States.” Prosecutors allege that, in 2023, two teenage boys ripped photos from the social media accounts of 60 (!) of their girl peers and used them to produce hundreds of “AI”-altered pornographic videos and images. When I read that the boys reportedly shared the deepfakes with classmates on Discord and “other texting apps,” it hit me hard. I suddenly remembered the moment my phone rang in my bedroom, two giggling dudes on the line asking if I’d make a video for them too. Hanging up only for the phone to ring again as more of my peers were looped into the gossip grapevine.

All they were sharing were rumors. This was 1999, and the whole kerfuffle was ultimately a she-said/she-said battle of reputations that finally fizzled when proof of my alleged “sex tape” never surfaced—because it never existed in the first place—and the cliques involved moved on to other dramas. Sure, those dramas were undoubtedly heightened by the nascent social internet—we were all LiveJournal, DeadJournal, and Xanga devotees who talked endless shit online about our rivals du jour. But for all the goofy, even wannabe-sexy, aspirationally ~ artsy ~ selfies I took with my Web 1.0-era digital camera—which didn’t work unless it was plugged in, so I had to drag the whole-ass family Dell into our guest bathroom to capture the best light in the house!—I never worried that anyone my age (or anyone at all) was capable of wholesale fabricating a fucking sex tape.

Things are different for kids these days. We warn young people the “internet is forever,” but I don’t think we even really understand what that means. High school rumors fade over time, but deepfakes? Those may last a lifetime, or lifetimes, plural. 

Reality is different for kids these days. Soon it will be different for all of us. And I don’t think we’re ready for it.

Until the Pennsylvania deepfakes story, I’d mostly thought about so-called “AI” and the related zeitgeist in terms of its impact on adult workers, particularly those working in journalism. For DAME mag and on my own newsletter, I lamented the hype over large language models trained on uncompensated, and often ill-gotten, work. (Hypocritically, I also produced an epistolary fiction project illustrated with Midjourney images, which I’m now reimagining without such fuckery.) And as an avid reader of the big very-online-news newsletters—Today in Tabs, Garbage Day, and the like—I generally felt like I had my finger on the pulse of the ~ discourse ~.

But the Pennsylvania deepfakes story was something else, a harbinger not of problems to come but of problems that are here, that are now. Of complications not just for journos, who’ve lost jobs thanks to tech-bro-brained management, or for visual creatives sidelined because Canva subscriptions are a fraction of the price of real human talent. It’s a bigger, more terrifying omen, signaling a new now that is shaping the brains, the experiences, the forever-views of teens, kids, and, of course, the adults who care for them.

We—Boomers, Gen X, Millennials, and even Gen Z—will never really understand what it is to grow up in a world where humans’ long agreed-upon markers of reality are fundamentally debatable, fungible, sus. Sure, pre-“AI” generations can and should rage over our lost jobs and our creative disenfranchisement at the hands of a bunch of intellectually lazy fuckos who have been terminally dickmatized by over-hyped predictive text and image generators that still can’t produce a picture of a full-to-the-brim wine glass because they’ve literally never “seen” one. (Refusing to write “AI” without the scare quotes is my own little rage against the machine. Whatever these technologies’ capabilities may be, they are not artificially intelligent. I refuse to further degrade reality by uncritically referring to them as such.)


But who cares if “AI” hilariously struggles to produce a photo of an overfull wine glass when a human—and it’s essential to remember that—has already programmed a computer that can produce hundreds of deepfake porn videos of teenagers at their bullies’ behest?

One of the first things people ever did with photographs was fake them. “Spirit photographs” of supposed spectres from beyond the grave emerged in the mid-19th century as artists and charlatans alike experimented with a new medium that showed radical potential: the capture, but also the modification, of memory and reality. Practically as soon as we could capture the “real” on camera, we sought to reshape images for our own diverse and sometimes nefarious aims.

Humans are an imagination-driven bunch; our capacity for storytelling is unique, wonderful, and terrifying. The history of media is fundamentally a tale of developing and refining this impetus: finding ways to preserve, share, and control the narratives that inspire, define, and inform us. It’s also a tale of limiting this impetus, or trying to. Any given comms student has likely been subjected to at least surface-level analyses of Biblical proscriptions against inappropriately invoking the name of God or creating images thereof. Fear of new media is practically as old as creation—or at least as humanity—itself.

If I were an “AI” enthusiast, I would write off my brand of “AI” nay-saying as typical, predictable, media-phobic reticence to embrace inevitable progress. As retrograde nostalgia for an imagined “Time Before” that never really existed.

But it’s no accident that the greatest champions of “AI” today—the tech-bro CEOs and billionaire captains of the information industry—seem to lack the cultural and historical competency required to make even that most basic argument. (Surprise! This is what happens when you let those same right-wing computer dweebs and capitalist shills defund and devalue humanities education.) They champion “AI” precisely because they believe its existence frees them from the burdens of responsibly facilitating human connections and stories—of writing a quick email, taking notes in a meeting, or summarizing anything longer than a brief text message. And that is only the most pedestrian and inoffensive manifestation of “AI” as it exists today; we have only just begun to see the ways in which terrorists might outsource the tedious logistics of mass murder to “AI.”

There is something deeply and meaningfully different, though, between 19th-century spirit photography and prompting Midjourney to cobble together a spooky picture of a menacing shadow on a stairwell. It is not the same thing to pen a satirical version of The Night Before Christmas for one’s own family and to ask ChatGPT to produce a “Rudolph the Red-Nosed Relative” song roasting your boozy brother-in-law. The difference, perhaps obviously, is human—or at least the understanding of the essential role of a real human hand.

Philosophers, historians, and public intellectuals have long bemoaned the possibilities of new technologies not despite human capacity, but because of the human capacity to misuse or abuse them. What’s terrifying about “AI” is not that it can generate photos, text, and videos that seem real, or which move, amuse, scare, or even threaten us. What’s terrifying about “AI” is the way it has been marketed as a technology that exists without us, of its own accord, and with its own unique abilities that surpass us. For all of humanity’s hand-wringing about new media, and it is indeed a nay-saying, hand-wringing, retro-conservative tradition that dates back to time immemorial, we have never before sought to convince each other that a new technology exists entirely beyond us. 

Even though “AI” evangelicals would have us believe it is more magic than machine, the technology is rife with the biases of its human creators and reliant entirely on the human work it has been trained on and trained to regurgitate. It is intelligent, but only to the extent that people are—the thing that’s primarily “artificial” about “AI” is the suggestion that it has nothing to do with us. 

As a sales mechanism, it’s the perfect pitch for a Western political moment obsessed with either the suppression or the spread of dis- and mis-information.

“AI” is not the only player in the reality-erasure landscape. There’s regular old social media, too—what Johns Hopkins professor Henry Farrell has termed “publics with malformed collective misunderstandings.” Reality—like democracy, with which Farrell is concerned—is an agreement. We have to want to make that agreement with others; consent is as essential to shared reality as it is to democracy. But the wheels are coming off there, too.

We are barrelling full speed toward what Ed Zitron has termed the “Slop Society,” the terms of which are increasingly dictated by, as The Guardian’s Rebecca Shaw aptly put it, cringe losers. It’s not just “AI,” it’s Elon Musk turning Twitter into a red-pill content farm. It’s Congress banning TikTok and, less than a day later, TikTok thanking Donald Trump for bringing it back. It’s TikTok and Meta suppressing all manner of content, from information on medication abortion and pro-Palestinian speech to, apparently, just anything to do with “Democrats” (which we’re supposed to believe was an honest mistake). It’s Meta welcoming (even more) hate speech because Mark Zuckerberg is obsessed with the scrotal status of his industry.


Sure, we’ll see even more “AI” crap on social platforms soon, but as moderation grinds to a halt, there will be plenty of actual people generating content and ideas that are meant to fuck with our understanding of the world, and plenty of mainstream and legacy news outlets who are prepared to lend credibility to this dis- and misinformation lest they be accused of left-wing bias. The result will be even more ridiculous debates over patently laughable claims (such as that “DEI” literally fans wildfire flames) and the further overloading of our digital lives with unchecked, even encouraged, bigotry. 

You don’t need fake photos to fuck with reality. You can freeze research and communications funding for the National Institutes of Health or turn federal workers into a DEI snitch brigade, as Trump 2.0 did practically as soon as he’d plopped his odious ass back down in the Oval Office. You can rope people into bad-faith arguments about whether trans women are women, seed Facebook comment sections with Holocaust denialism, and get a check from Substack for publishing anti-vax disinformation. Give enough hand-wringing, right-wing reactionary parents (one or two will usually do) the vapors about sex-ed books at the local library, and you can even get real people to censor content that corrects the unreal record.

Musk, Zuckerberg, Trump, and the rest don’t really care if we question whether the photos and videos we see on social media are real. The point is to facilitate a social milieu in which we question whether other humans are human. It’s just that the fake photos help. Despite everything we know about Photoshopping and airbrushing—I think back to the Dove “Real Beauty” ad campaign that dominated the mid-2000s—we still really like to believe our own eyes.

And we still believe each other, too, even when we know there’s a good chance we’re being lied to or at least manipulated. I recently learned that young people these days do not mind pretending to be wrong on the internet as long as it makes them a little bit of money. In this economy, who can blame them? There are some older people who are on this tip too, but mostly it is a young person’s game, the thing of posing as unimaginably stupid, banal, dipshit—importantly: not racist, bigoted, or offensive, necessarily, unless it’s part of their schtick—if it garners clicks and views. 

This is something different from what I, as an Elder Millennial, would call “trolling.” It is not done anonymously and for the LULZ, but rather under one’s own name (or usual handle, not a fake) and for profit. In fact it is less meaningful, less credible, if done anonymously. (Recall that the most dangerous ur-troll of the pandemic era, Libs of TikTok, fomented hate and harassment for years before her identity was revealed.) This new phenomenon is not dissimilar to trolling in that the idea is to rile up randos, but the desired effect and related benefits are meaningfully different. Notably, the population perpetuating this phenomenon is different from your average 4chan fucko.

I was stone-cold baited by the first iteration of this I ever encountered, caught hook, line, and sinker by a young fashion TikTokker who I now know pretended not to have heard of Nirvana as she picked up a classic tour tee at a thrift shop. By the time I’d stumbled on the viral video, the comments section was verily afire with dressings-down about the twentysomething’s failure to recognize the cotton-blend gem in her hand. It was weeks before my “for you” page algorithm showed me an explainer on monetizing performative ignorance on TikTok. Soon after, I noticed that my timeline was full of ragebait—people (mostly white people, of course) pretending to go barefoot at Target or preparing elaborately disgusting meals (yes, there’s fetish content overlap there).

The point is to be genuinely seen as being in error because the wronger the video and the hotter the hot take, the higher the views and the more active the comments section, and the more the algorithm will promote your account. For “content creators” in a position to earn money from the platform, this is absolute alchemy—turning dipshittery into gold. Fetish creators had already figured this tactic out as a money-making strategy for their non-explicitly pornographic productions; mainstream creators took the premise and used it to their own ends.

I simply can’t understand it, even though I do, on some level, get it. My credibility, my self-identity, my self-worth, even and especially as I express it online, depends on my believing that people who follow me understand me as a credible, real, reliable source. I’m as invested in being ~ myself ~ in my newsletter or on Bluesky or Instagram as I am in real life. I want people who know me in “real” life to recognize me online. But the youngest internet users seem unbothered by being perceived as ignorant, unreliable, goofy, or just plain wrong. 

I don’t know whether to laugh, cry, or celebrate at the proliferation of performative dipshittery. Maybe this phenomenon is a statement about the power of extreme self-knowledge and the outer limits of giving no fucks. But the fact that I care—that I’d like to know whether this is something to celebrate or recoil from—is all the evidence I need that I am unprepared to navigate an a-realistic, or reality-agnostic, future.

What would my life be like if I’d been born into the time of teen boys making deepfake porn instead of the era of burn books? What if I knew some awful video might always be out there, ready to be weaponized against me on future-Meta or future-X, bastions of “free speech” where harassment is welcome and truth is irrelevant?

Would I shrug it off? Would it be normal? Would deepfake porn become so common a form of bullying that its sting might be lost entirely? Could reality ever be that fungible? Maybe. It’s hard to imagine. It seems possible, if terrifying, that the kids these days will inherit just such a fucked up world. 

But they will. They will inherit fires and floods, and they will manage climate disasters exacerbated by the ever-thirstier demands of the “AI” technology that, as far as they have ever known, writes boring emails to their work colleagues and creates porno videos of their friends and enemies. For everyone, the boring work emails and the damning porno videos will be generated from the same non-place and non-entity from which they get their non-news, if they even consume it. A minority will keep their ears and eyes on the information grapevine, a mix of hearsay from trusted peers on social media and vaguely verifiable facts posted online by the last-gasp news organizations still sourcing reporting from a handful of living, breathing journalists. Maybe this minority will wear their dedication to the “real” world with pride.

Many more people—indeed, many, many more as Gen X and Millennials shuffle off—will accept the unreal, “AI”-moderated world as they are encouraged to perceive it. The question of what is “real” will become cute, even nostalgic, a throwback inquiry raised at parties for a goof. “Real” won’t matter; the public discourse will trade on something akin to but beyond vibes, which will become a retro term that aging Gen Alpha, as they turn 30 and 40, will look back on with fond nostalgia.

If this seems like an unlikely dystopian fever-dream, consider this: it is already in motion.

This piece was edited by Katelyn Burns and copy edited by Evette Dionne.