• Make it Pop by Khulan

Make it Pop #06 - Gemini 3 for motion graphics, a replacement for After Effects?

Generate seamless animated B-roll with Gemini 3: prompt structure & examples

Raise your hand if you’ve ever opened After Effects, looked at the Graph Editor, and immediately wanted to close your laptop 👀

We’ve all been there. You have a vision for a cool motion graphic, maybe a floating 3D orb, a retro grid landscape, or some kinetic typography, but the barrier to entry is huge. You have to worry about easing, null objects, camera layers, and render queues.

But what if I told you the best way to create motion graphics right now isn't by moving pixels manually? It’s by asking Gemini 3 to write the code for you. I tested this over the past week, and I am truly impressed!

"Vibe coding" motion graphics

Gemini 3 speaks HTML, CSS, JavaScript, and Three.js, the native languages of in-browser motion graphics.

Prompt structure

  • OBJECT: the main subject, e.g., a sphere, a cube, or text.

  • LOOK: the material and colors, e.g., shiny gold, frosted glass, or neon green, on a dark black background with purple and cyan accents.

  • ACTION: how the object moves, e.g., rotates slowly, expands rhythmically, or swirls like liquid.
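Put together, a prompt following this structure might read something like this (my own example, not one from the newsletter):

```
OBJECT: a single low-poly sphere, centered on screen.
LOOK: frosted glass with a faint inner glow; dark black background,
      purple and cyan accent lighting.
ACTION: rotates slowly on its Y axis while expanding and contracting
        rhythmically, like it is breathing.
```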

The 3-step "no-keyframe" workflow

I’ve been experimenting with a workflow that saves hours of manual animation time. Here is how you can do it:

1. Prompt: Open the Gemini app, select “Thinking”, and describe the motion graphic you want.

Prompt example:

“Write a single, self-contained HTML file.

Create a centered container on a dark background (#111).

Inside, place a sleek, dark-mode text input field and a button next to it labeled 'GENERATE'.

Style them to look modern: dark grey backgrounds, rounded corners, and a subtle glowing purple border when active.

Behind the UI, set up a full-screen Three.js canvas.

Create a very slow, subtle, dark plasma or smoke effect using purple and blue colors to make the background feel alive and high-tech, but not distracting.

Simulate a user typing: Automatically type the text "A futuristic city at sunset..." into the input field, one character at a time.

Once typing is complete, trigger a 'click' animation on the Generate button (make it pulse brightly for a moment). Loop this sequence.”
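The "simulate a user typing" part of that prompt is a good example of how simple the generated logic usually is. Here is a minimal sketch of the kind of pacing function Gemini tends to produce; `typedSoFar` is my own illustrative name, not something from the generated file:

```javascript
// Typewriter pacing: given elapsed milliseconds and a per-character
// delay, return the slice of text that should be visible so far.
function typedSoFar(fullText, elapsedMs, msPerChar = 80) {
  const count = Math.min(fullText.length, Math.floor(elapsedMs / msPerChar));
  return fullText.slice(0, count);
}

// In the browser, a loop like this would drive the input field:
// const input = document.querySelector("input");
// const start = performance.now();
// function tick(now) {
//   input.value = typedSoFar("A futuristic city at sunset...", now - start);
//   requestAnimationFrame(tick);
// }
// requestAnimationFrame(tick);
```

Because the visible text is a pure function of elapsed time, the loop restarts cleanly: reset the clock and the typing sequence replays identically.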

2. Preview: Gemini will create the animation within Gemini Canvas.

  • Click “Share” and open the link in a new tab so the animation plays at a larger size you can screen record

3. Capture: Screen record

  • Use QuickTime (or any other screen recorder) to capture the animation playback

  • Drag that video file right into your video editor or GIF converter :)

Why this changes everything

You are effectively skipping the "manual labor" phase of motion design.

  • No rendering: The code runs instantly in the browser.

  • No keyframes: The movement is procedural (driven by math and code), making it look incredibly smooth and organic, something that is very hard to fake by hand in After Effects.

  • Infinite iteration: Don’t like the color? Just tell Gemini, "Make the particles gold instead of purple."
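The "no keyframes" point is worth unpacking: in procedural animation, each frame's value is computed directly from the clock rather than interpolated between hand-placed poses. A minimal sketch, with `orbY` as a hypothetical helper for a floating-orb bob:

```javascript
// Procedural motion: the vertical offset of an orb is a pure function
// of time, built from a sine wave. No keyframes, no easing curves.
function orbY(timeSeconds, amplitude = 20, periodSeconds = 4) {
  // One full up-and-down cycle every `periodSeconds` seconds.
  return amplitude * Math.sin((2 * Math.PI * timeSeconds) / periodSeconds);
}
```

In the browser you would call this every frame (e.g., inside a `requestAnimationFrame` loop) and assign the result to the orb's position; changing the feel is just changing the math.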

We are entering an era where your ability to describe a visual is more important than your ability to manipulate a timeline. Give it a try on your next project; your CPU (and your sanity) will thank you.

Other examples:

You can also do this in ChatGPT: select “Canvas”, type your prompt, and once the code is complete, click “Preview”.

Gemini 3 is better than After Effects when you need procedural, seamlessly looping motion graphics, such as abstract noise fields, glowing geometric patterns, data visualizations, or UI mockups, because code generates movement with math. After Effects, by contrast, is better when you need manual, frame-by-frame control over keyframes, complex character animation, or precise synchronization to a voice-over.
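Seamless looping falls out of the math almost for free: if every animated property is driven by a phase that wraps exactly at the loop boundary, the last frame of one cycle is identical to the first frame of the next. A sketch, with `loopPhase` and `glowIntensity` as my own illustrative helpers:

```javascript
// Map elapsed time to a phase of 0 → 2π over one loop, wrapping
// exactly at the loop boundary with no discontinuity.
function loopPhase(timeSeconds, loopSeconds) {
  return ((timeSeconds % loopSeconds) / loopSeconds) * 2 * Math.PI;
}

// Any property built from sin/cos of this phase loops perfectly,
// e.g., a pulsing glow between 0 and 1 on a 6-second cycle.
function glowIntensity(timeSeconds, loopSeconds = 6) {
  return 0.5 + 0.5 * Math.cos(loopPhase(timeSeconds, loopSeconds));
}
```

This is why you can ask for "a seamlessly looping plasma background" and get one: sine and cosine of a wrapping phase are continuous across the loop point, so the screen recording can be looped forever without a visible cut.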

⚡ AI creative news updates you should know

10 - 15 December 2025

  • OpenAI’s next-gen image model, code-named Hazelnut, has surfaced in benchmarks and appears to be the core of the upcoming GPT Image 2, showing high-fidelity outputs and improved text/code rendering.

  • Alibaba’s Tongyi Lab released Qwen-Image-i2L, an open-source tool that turns one image into a customizable LoRA style module for AI generation, making it easy for creatives to capture and apply a visual aesthetic.

  • Disney is investing $1 billion in OpenAI and licensing 200+ Disney, Marvel, Star Wars and Pixar characters for use in OpenAI’s Sora and ChatGPT Images, enabling AI-generated content featuring those IPs (excluding actors’ voices/likenesses).

  • Runway unveiled GWM-1, its first General World Model, a real-time, physics-aware AI that simulates environments and interactions, powering explorable worlds, avatars and robotics scenarios.

  • Higgsfield Shots is a new AI storyboard generator that takes a single image and instantly expands it into a 9-panel cinematic grid showing essential camera angles and perspectives. This means you can go from one reference photo to a full visual narrative layout.

  • Adobe now lets you use Photoshop, Adobe Express and Acrobat directly inside ChatGPT, so you can edit photos, design graphics, and tweak PDFs by typing what you want. This collapses editing + conversation into one workflow.

Two days left, then I'm off to enjoy the holidays, create content, and reflect.

2025 was the year I found my niche. My focus for 2026? Honing the craft and doubling down on knowledge sharing. I’ll be using the break to crystallize exactly how I want to execute that.

I hope you get some downtime to reflect, too! We’re walking into a big year. With the launch of Gemini 3 and ChatGPT 5.2, AI agents may be shifting from “hype” to reality, and get ready for Generative UI that adapts in real time to our interactions.

Until next week,

Khulan