
The Story Behind Those Fruit and Vegetable AI Slop Videos

24.03.2026

“Stay close, my tiny little noodles,” the mother spaghetti strand says, desperately clinging to her two adorable pasta daughters as a disembodied fist grips them over a boiling pot of water. “My sweet girls,” she says, tears in her giant eyes. “Don’t look down.”

“Mommy, the air is burning!” one of the pasta daughters shrieks. “I’m scared I’m gonna fall!”

“Mommy, don’t let go! I don’t wanna break! Please don’t let go! It’s so hot!” the other noodle daughter screams.

They all cry out in unison as they’re submerged in the boiling water, a mournful violin playing in the background. The clip cuts to an animation of a human holding a fork, stirring a pan of spaghetti and meatballs. At the end of the 72-second video, the pasta father remains stuck in the box in the cupboard, bereft, with no idea where his family has been taken.

Over the past ten days, I’ve watched probably 150 viral videos like this — cartoon food characters, rendered in brightly colored Pixar-style animation, acting out heart-wrenching plot lines (that are, nonetheless, patently absurd). There’s the homophobic clementine that kicks his gay clementine son out of his house when he catches him experimenting with a strawberry (1.8 million views on Instagram). There’s the pregnant broccoli that dumps her broccoli child in the trash, only to FaceTime him years later begging for forgiveness (2.1 million views). There’s the strawberry that sings a lullaby to her terrified children as they are immolated in a blender to make a milkshake (111,000 views). Many of the videos in this genre are silly — a banana kicks his strawberry wife out of the house when she farts in the bathtub — but others have distinctly misogynist or racist subtext.

One video that went viral on X, for instance, features two buff eggplants pulling a blanched, innocent-looking strawberry into a motorhome for sex; the disheveled strawberry becomes pregnant, and the possible fathers refuse to help or acknowledge what she calls her “dark purple” babies. Yet people can’t seem to turn away. “we ain’t gonna have no clean water next month because yall wanna watch fruit love island on tiktok,” one X user wrote in a post. The pop star Zara Larsson posted about one of these videos on TikTok last week with the caption, “Sorry I can’t hang out today, I gotta see what’s happening with choclatina and strawberto.” (After a back-and-forth with fans who were mad that she was promoting generative AI, her video appears to have been deleted.)

The trend stems from a single program: Object Talk, a customized version of ChatGPT. If you enter a food item — like, say, a chicken nugget — the program produces a detailed prompt for a “Pixar-style 3D render,” complete with the chicken nugget’s specific facial features (“tiny crumb teeth” paired with “wide round glossy eyes with a slightly oily sparkle”) and a sassy script (“I’m not just a ‘nugget,’ alright? I’m engineered poultry architecture”). Plug the prompt into an AI video-generation platform like OpenArt, and you’ll have your chicken-nugget video in a little less than five minutes.
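The two-step pipeline described above — type in a food item, get back a detailed render prompt plus a script, then hand both to a video generator — can be sketched as a simple template. To be clear, this is an illustration of the formula, not Object Talk's actual prompt: the template wording, function name, and field names below are my own guesses.

```python
# Hypothetical sketch of the first step of the talking-food pipeline:
# turning a food item into a "Pixar-style 3D render" prompt and a script.
# The wording of the template is illustrative, not the real product's.

def build_render_prompt(food: str, facial_features: list[str], script_line: str) -> dict:
    """Assemble a video-generation prompt for an anthropomorphized food item."""
    visual = (
        f"Pixar-style 3D render of a talking {food}, "
        + ", ".join(facial_features)
        + ", soft studio lighting, shallow depth of field"
    )
    return {"visual_prompt": visual, "script": script_line}

prompt = build_render_prompt(
    "chicken nugget",
    ["tiny crumb teeth", "wide round glossy eyes with a slightly oily sparkle"],
    "I'm not just a 'nugget,' alright? I'm engineered poultry architecture",
)
print(prompt["visual_prompt"])
```

The output of a template like this would then be pasted into a text-to-video service such as OpenArt, which handles the actual rendering.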

Object Talk appears to be the brainchild of AICenturies.com, which advertises itself as a company “founded by two brothers with the mission to equip the next generation of AI creators.” (They didn’t respond to multiple requests for comment.) The website teaches creators how to make “AI slop,” featuring custom prompts for such other popular genres as animal-podcast clips, “viral Roblox rant shorts,” and “Minecraft block-slicing ASMR videos.” A tutorial video the company posted on Instagram about how to make talking-object videos is its most popular, with almost 2 million views. This underlines perhaps the strangest part of this phenomenon: Not only are tons of people watching cartoon fruit-and-vegetable drama, but tons of other people are making it, eking profits out of the dissemination of this specific kind of AI slop — and upping the ante with wilder and wilder plotlines to further their reach.

The AI talking-object videos didn’t start out with baby carrots begging for their lives. The first were somewhat educational, according to Fana Yohannes, a trend curator and digital strategist. In December, she started seeing videos of anthropomorphized household products providing life hacks, such as how to remove a wine stain or which skin-care mistakes to avoid (“Using me too often strips the moisture from your skin,” a hand-soap dispenser angrily proclaims). The accounts posting these videos were accumulating hundreds of thousands of followers. “At first I thought it was just bots,” she says. “Then I realized my mutuals were actually following them. The content was just that engaging.”

Over the past few months, Yohannes has watched the trend shift from practical advice to narratives featuring extreme human emotions — separation, terror, betrayal. The more heightened the drama, the more viral the post. “It seems like it went from a tool to create educational content to something people can use to make a quick buck and farm engagement,” she says. She refers to the spate of talking-fruit videos as “the first-ever custom-GPT-generated social-media trend,” adding, “There’s a very precise formula to make this type of content, and there’s no limit or friction to creation.”

As someone who has very little talent for content creation, I was curious whether that was true. After paying $8 for a Gemini subscription and entering a deliberately upsetting prompt — “sad chicken nugget gets separated from his mother” — into Object Talk, I was able to create an eight-second video of a weeping baby chicken nugget begging not to be dipped in barbecue sauce. “Hold me,” the mom chicken nugget said as the baby quaked with fear. “I think it’s dipping time.” When I entered a more specific description — “a viral one-minute video about a chicken nugget being born and then separated from his mother because someone at McDonald’s is eating them” — the program generated a detailed prompt for such a clip, urging me to give it a “viral boost” by using captions like “WAIT—PLEASE—,” “MOM — WAIT —,” and “I wasn’t made to live.”

Like the baby chicken nugget in the video, I felt momentarily paralyzed with fear while watching my creation. Yes, the text captions were poorly synced, and there was little of the structured dramatic tension of the truly viral fruit-family-separation videos I’d seen on my Reels page. (Also, the rendering for the baby nugget was a little bit off; its texture was less crispy golden-brown and more porous and puckered, triggering my trypophobia.) But I was moved, somewhat against my will. (I’d also be lying if I said I didn’t tear up watching the pasta video.) Moreover, I could absolutely see how someone very young and very bored might sit on their phone posting dozens of these per day. “You have cute, recognizable characters, which keeps viewers watching and leads to high retention,” Yohannes says. “There’s also infinite possibility for content ideas because every object can be a character.”

That the most popular story lines tend to provoke horror or sadness or outrage is a function of the internet: Bad stuff has always performed well online. “There’s something wired in the human brain so that we really react to negative news more than we do to positive. And we’re always seeking out more drama,” says Donatas Bailys, CEO of the creator marketing platform Billo. The talking-food videos “are basically just pushing the buttons of the sensitive parts of our brains.” In this respect, the talking-fruit trend is similar to the disturbing videos targeted at kids that were endemic to YouTube a few years ago, featuring beloved children’s characters like Peppa Pig and Elmo overdosing on drugs or getting decapitated in car accidents. “It starts out innocent and then it goes to weird places,” says Bailys.

The bigger concern with the AI food videos, however, is not specific to children — and it’s not even really specific to AI food videos. A video of a chicken nugget begging for its life is clearly and preposterously fake, but because our brains do not immediately register it as such, some of us are so drawn into its emotional register that we can’t help but feel something. We’re already aware of how AI-generated content can collapse our ability to distinguish what’s fake from what is real: AI-generated videos of anti-ICE protesters attacking cops proliferated on social media after Renee Good was killed in Minneapolis, and AI-generated “footage” of the U.S.-Israeli war with Iran is rampant right now. But the talking-food videos indicate that AI isn’t just getting better at messing with our heads — it’s going to get better at messing with our hearts as well. And if we are primed to react even to a sad chicken-nugget video, what does that mean for how we react to AI-generated content that actually looks realistic? There is the horrible and real violence of the news — wars, natural disasters, devastating family separations — and then there is a whole other world of fake terror encroaching on our algorithms. And the fake world is even more fine-tuned to our political leanings, emotional triggers, and soft spots.

It’s helpful to remember that for all the concern trolling about AI and how computers are taking over the world, one of the largest threats posed by AI is a human one: Bad actors can easily harness this technology to prey on people’s emotions, even if the content itself is patently ridiculous. And that’s scary.

“Remember when Sora came out and people were making videos of cats figure skating or bears flying?” Bailys says. “At the time, those were incredibly hard to make. But the wow factor goes away so fast.” As AI videos become more violent, or more explicit, or more disturbing, the chicken nugget begging for its life may seem quaint. “There is definitely a segment of the AI content-generation market that is willing to get views at any cost,” Bailys says.

It’s difficult to envision what, exactly, the future of user-generated AI video might look like. But with most platforms doing the bare minimum to regulate this kind of content, it seems inevitable that each person’s consumption will skew more toward computer-generated slop that taps into their deepest capacities for emotion without actually earning it. Maybe that looks like deepfake images of bombings and terrorist attacks. For now, it might be shrieking tomatoes being murdered to make marinara sauce. Either way, hold on. It’s dipping time.



© Daily Intelligencer