Make it Pop #03 - 3 creative personas that are defining the creative AI era
Are you a Creative Builder, a Creative Professional, or an AI-Native Creative?

The creative AI space is evolving at an unbelievable speed. There has never been a better time to be creative.
Every week, a new model or feature drops as companies compete fiercely for your attention. This intense evolution isn't just changing our tools; it's changing who creates and how they create.
I've been watching this pattern unfold. As AI tools multiply, they’re quietly dividing the creative world into three distinct groups. Each group is defined by the kind of tools they use and how they think about creation.
🧱 1. Creative Builders
They’re the ones turning fragmented models & tools into full pipelines. They understand budgets, timelines, and render farms. They think like engineers but create like artists.
They don’t see AI tools as "apps." They see them as systems.
Workflow: Concept → Pipeline → Node orchestration → Optimization
Mindset: Efficiency, control, scale
Example tools: ComfyUI, Flora, Weavy, Fal
UX need: Modular, customizable, and programmable
They’re building machines of creativity.
🎨 2. Creative Professionals
They are designers, editors, and filmmakers fluent in composition, lighting, motion, and story: the visual language of creativity. They’ve lived inside Adobe for years, and now they’re expanding that craft into the generative world.
They might not want to wire a node tree, but they demand precision, flexibility, and aesthetic control.
Workflow: Moodboard → Generate → Refine → Composite → Deliver
Mindset: Craft, polish, storytelling
Example tools: Flow, Photoshop, Firefly Studio, Freepik, Higgsfield, HeyGen, Leonardo AI, Midjourney, ElevenLabs, Kling AI
UX need: Layered, intuitive, visual, full control over aesthetics
For them, AI is a powerful new addition to their existing toolkit, letting them bring their unique vision to life with greater speed and iteration. Many are also starting to test node-based workflows.

Midjourney UI example
⚡️ 3. AI-Native Creatives
These are the explorers of the new creative frontier.
They didn’t grow up with Photoshop. They might come from marketing, strategy, or tech, but they’re now creating with AI. They’re experimenting, remixing, and posting.
They live inside tools built for immediate expression.
Workflow: Prompt → Generate → Edit → Share
Mindset: Play, expression, experimentation
Example tools: Higgsfield (Popcord), ChatGPT, Gemini, Sora app, Veed, HeyGen, Canva Magic Studio
UX need: Linear, guided, fast

Gemini app
The tool–persona feedback loop
Personas shape tools, tools shape personas.
Builders demand modular control, so tools are getting node-based.
Professionals demand creative control, so tools are focusing on UX and fidelity that allow their unique style to lead, avoiding the "generic AI look."
AI-natives demand simplicity, so tools are becoming guided and social.
And for any company building creative products right now, this is the most important UX question to ask: “Who are you building for: a builder, a professional, or an AI-native?”
Each group contributes to the same ecosystem, just from a different layer of abstraction. And this is what makes this moment so special.
We’re living through an era where companies are competing to make you more creative. There has never been a better time to be creative.
⚡ AI creative news updates you should know
27 October - 3 November 2025
Figma has acquired Weavy for $200m, integrating its node-based media generation tech (images, video, multi-model workflows) into the Figma ecosystem.
CapCut just integrated Veo 3.1 and Sora 2, letting users create videos directly inside the video editing tool. This collapses video generation + editing into one tool, cutting production time.
Creators can now generate Midjourney-like images in the Meta AI app. In August 2025, Meta partnered with Midjourney to license its AI image and video generation technology. Currently it’s free to use.
Adobe just launched Firefly Image Model 5, delivering ultra-realistic, high-resolution visuals. The Firefly app now supports custom models, includes partner models for image, video, and audio generation, and adds a video editor.
Minimax released its Hailuo 2.3 model, which achieves significant improvements in the portrayal of physical actions, stylization, and character micro-expressions.
Google launched Pomelli, which helps build branded campaigns by analyzing your website, generating tailored ideas, and producing visuals and assets.
🏆 AI creator competitions worth joining
If you’ve got a video or concept brewing, these competitions are open right now, and they’re giving real prizes and visibility to your creative AI work.
Wonder Film Festival: A chance to win up to $6,000 and be considered as a filmmaker in the next chapter of Wonder’s Anthology series.
AI Film Award by 1 Billion Summit: The winner will be awarded a prize of USD $1 million. Must use Google’s AI models.
Chroma Awards: Three competition divisions (Film, Music Videos, and Games), each with unique categories, rules, and prizes.
On a personal note, I got promoted at work! 🎉 I’ve been on my current team since the end of March, so it feels surreal that I’ve already been recognized for my work.
Thanks for reading and supporting me! :)
If this newsletter sparked an idea, share it with a friend who’s building with AI or building AI tools for creatives.
Stay curious, but more importantly, reduce the time between curiosity and executing on that curiosity!
Khulan
