Make it Pop #09 - The 2026 AI video playbook for brands: How to avoid the "AI slop" trap
Why Coca-Cola failed, how challenger brands won, and the new rules for AI video in 2026.

If 2023 was the year of experimentation and 2024 was the year of tentative adoption, 2025 was the year AI video collided violently with reality.
This was the first year AI-generated ads aired on TV. Household brands leaned in. Some used AI like a scalpel, enhancing creativity, speeding up iteration, and unlocking new ideas. Others used it like a bludgeon, a blunt instrument to slash costs.
The result?
A flood of content that the internet quickly labeled “AI slop”, a term so pervasive that Merriam-Webster crowned slop its word of the year.
But let’s be honest: slop isn’t new.
We’ve had human slop for decades: lazy concepts, low-effort execution, bad advertising. What is new is the intensity of the reaction. “AI slop” isn’t just a critique of visual quality; it’s a rejection of perceived cynicism. Audiences aren’t reacting to videos; they’re reacting to intent.
Today, we’re breaking down the pivotal AI video campaigns of 2025:
- Anatomy of AI slop
- Where things went wrong
- Where they surprisingly worked
- The PR & creative playbook brands need to survive the era of synthetic media
Anatomy of AI slop = Quality control failure
AI is just a tool. You could spend weeks in legacy creative software and still produce something unusable. AI slop doesn’t come from using AI. It comes from shipping work that feels unfinished, careless, or indifferent to reality.
These are the signals audiences pick up on, often subconsciously, when something gets labeled “slop.”
1. Physics drifts even slightly from reality
Objects don’t behave the way the brain expects them to. Motion feels weightless, surfaces slide instead of grip, liquids move like rubber, and human movement is subtly off.
2. Bad acting
Characters appear to speak past each other, reactions arrive too late or not at all, and faces emote without intention. The result isn’t just “bad acting”; it’s a lack of emotional coherence that makes scenes feel hollow. You wouldn’t accept bad acting from human actors, so don’t accept it from synthetic ones.
3. Broken continuity
Details reset between shots: objects disappear; wardrobe, lighting, or environment subtly change. These inconsistencies break trust, because the brain reads them as carelessness.
4. Technology failure
Some models are simply bad at certain things: faces, hands, motion, physics. If the tool isn’t right for the job, change the scene, change the approach, or don’t use AI models at all.
5. Intent vacuum
The most damaging signal of all: the work feels like it exists because it could be made, not because it had something to say. Audiences may not articulate this, but they feel it instantly. Slop is what happens when random generation replaces intention and story.
Failures: When efficiency kills equity
The biggest mistake of 2025 was assuming AI could replace human intent.
1. The nostalgia trap: Coca-Cola
Coca-Cola attempted to remake its iconic 1995 "Holidays Are Coming" ad using fully generative video. While technically sharper than the 2024 attempt, viewers spotted "gliding" trucks that lacked friction, animals with "plastic" fur, and a weird license plate.
Lesson: You cannot recreate heritage built on human warmth using synthetic approximation. The visible efficiency signaled that the brand was cheapening something sacred.
2. The tone-deaf satire: McDonald’s Netherlands
They released "The Most Terrible Time of the Year," using AI to depict holiday chaos (burning trees, traffic). The AI aesthetic combined with a miserable narrative read as "nightmare fuel" rather than comedy. Who truly wants to spend their Christmas at McDonald's?
Lesson: AI models struggle with nuance. Satire requires human timing, restraint, and empathy. Without it, humor becomes grotesque.
3. The luxury paradox: Valentino
Luxury sells scarcity, craftsmanship, and human effort. AI sells abundance and automation (that’s the current cultural perception). When Valentino released surreal, morphing AI handbag ads, the backlash was immediate: “tacky,” “embarrassing,” “off-brand.”
Lesson: Using tools associated with mass production violates the core codes of luxury. In this category, human labor is the premium product.
Cultural reflex: “AI slop” as default
There is a growing cultural trend where audiences default to commenting "AI slop" the moment they detect synthetic media, regardless of the craft involved. It has become a reflex. However, this reaction often fades when the technology is used to empower the audience or tell a genuine story rather than just cut costs. Let’s take a look.
"Good pass": When AI content actually worked
The winners of 2025 didn’t try to trick the audience. They invited them in, prioritized narrative, or openly mocked the tech itself.
1. Burger King: Participation & co-creation
Instead of replacing creatives, Burger King turned customers into them. The Million Dollar Whopper campaign let users design burgers, with AI instantly generating visuals.
Why it worked: It wasn't about the brand showing off AI content; it was about giving the audience a tool to play with. People love being part of the process.
2. Lewis Capaldi: Story over spectacle
For the "Something In The Heavens" music video, Capaldi used AI to generate the visuals. Despite the general anti-AI climate, the reception was largely positive because the technology served a clear, emotional narrative.
Why it worked: The emotional narrative came first. When the story is strong, the medium disappears. It proves that "slop" is only slop when there’s no soul behind it.
3. "Anti-AI" pivot: Zevia
Zevia launched "Break From Artificial," a campaign that parodied the "slop" aesthetic with intentionally horrifying AI glitches (a woman with multiple hands) before cutting to real people.
Why it worked: They weaponized the glitch. By equating "artificial intelligence" in ads with "artificial ingredients" in soda, they positioned "realness" as a premium commodity.
The 2026 playbook: What we learned from 2025
If you’re publishing AI video for a brand in 2026, rigorous quality control is non-negotiable. AI mistakes don’t just hurt performance; they erode brand equity. Brand equity, not production cost, is now the primary risk vector.
Avoid imperfect realism
If an AI video looks 90% real, it looks 100% creepy. Audiences instantly spot physics errors and uncanny motion. Either fully polish with human VFX or lean intentionally into stylization. The danger zone is imperfect realism.
Quality control
AI output now needs the same review rigor as broadcast ads. Frame-by-frame checks, continuity, and performance plausibility are non-negotiable. AI content raises the cost of mistakes.
Match medium to message
High-emotion moments (holidays, luxury, nostalgia) demand human warmth. Utility or niche campaigns can survive rougher AI if the value is clear. The higher the emotional stakes, the less tolerance audiences have.
Hybrid “sandwich”
The strongest campaigns follow a simple structure: human intent at the start, AI in the middle, human polish at the end. AI scales ideas; humans protect meaning and taste.
Don’t recreate sacred assets
AI should not approximate heritage moments, iconic ads, or emotionally loaded brand memories. Synthetic nostalgia signals cost-cutting, not innovation, and erodes long-term trust.
Respect the luxury code
Luxury sells scarcity, craft, and human effort. AI signals automation and scale. For premium brands, “No AI Used” is becoming the new “Organic.” Authenticity is now a luxury input.
I also compiled a list of big brands that published campaigns created with AI.
Before we wrap up, I want to commend the teams and creatives behind every single one of these campaigns, hits and misses alike.
Innovation is messy: it takes immense bravery to be the first to test the waters with emerging technology, especially on a global stage where every frame is scrutinized, and especially when AI is wrapped in negative cultural narratives. Whether the result was a viral success or a "teachable moment," these teams stepped into the arena when others played it safe. They are showing the way for the rest of us, and without their experiments, we wouldn’t have a map for the future.
Conclusion: The tools have changed, but our job hasn’t. The tension we’re seeing isn’t about technology; it’s about how we use it. In 2026, the brands that win won’t be the ones that generate the most content. They’ll be the ones that understand the difference between producing media and creating meaning.
Personal note
I spent the holidays in Europe indulging in way too many sweets and an ungodly amount of bread. Honestly, it’s the only time of year it feels completely justified! 🤣
Looking ahead, I plan to expand my presence to X and Instagram in 2026. While LinkedIn was incredible to me in 2025, I’m craving a space to share more ad-hoc, in-the-moment creativity and connect on a personal level. I sometimes feel the pressure to stay "polished" on LinkedIn (perhaps because my co-workers are watching, or maybe it’s just in my head!).
Anyway! New year, new us! Let’s make 2026 another amazing year to remember.

Yours,
Khulan