We’ve been exploring vibe coding for video. If you haven’t come across it yet, the idea is simple. You describe what you want to an AI, and it builds the software for you. No traditional coding required. When you apply that thinking to video, you get some fascinating results.

What People Are Actually Making

The recent integration of Anthropic’s Claude Code with a framework called Remotion has sparked a wave of experimentation. So far, the output is almost exclusively motion graphics and animated explainers: product promos, data visualisations, text-on-screen animations, benchmark showcases, and branded announcement videos. In essence, this is a potential replacement for After Effects-style motion design — not traditional video editing.

As one developer aptly put it: “You’re not making videos with prompts. You’re making React code with prompts. That code then makes videos.” This is a crucial distinction. The AI is a powerful code generator that uses the Remotion framework to render frames. It is not an editor piecing together camera footage.
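To make that distinction concrete, here is a minimal sketch of the frame-based model the quote describes. The function names here are hypothetical stand-ins, not Remotion’s actual API: in Remotion proper you would use `useCurrentFrame()` and `interpolate()` from the `remotion` package inside a React component, but the underlying idea is the same — every frame of the video is computed from a frame number by code.

```typescript
// Remotion renders one React frame at a time: a component receives the
// current frame number and returns the visual state for that instant.
// This sketch mimics that model with plain functions (hypothetical names);
// the real framework exposes useCurrentFrame() and interpolate() instead.

// Linearly map a frame number from an input range to an output range,
// clamped at the edges -- the same idea as Remotion's interpolate() helper.
function interpolateFrame(
  frame: number,
  [inStart, inEnd]: [number, number],
  [outStart, outEnd]: [number, number],
): number {
  const t = Math.min(1, Math.max(0, (frame - inStart) / (inEnd - inStart)));
  return outStart + t * (outEnd - outStart);
}

// A "component" for a title card that fades in over the first 30 frames
// (one second at 30 fps) and slides up slightly as it appears.
function titleCardStyle(frame: number): { opacity: number; translateY: number } {
  return {
    opacity: interpolateFrame(frame, [0, 30], [0, 1]),
    translateY: interpolateFrame(frame, [0, 30], [20, 0]),
  };
}
```

Because each frame is a pure function of the frame number, renders are deterministic and repeatable — which is exactly why an AI can iterate on the code rather than on footage.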

What It Can’t Do Yet

Because Remotion renders visuals from code, it does not work with real camera footage. The classic event highlight reel — a dynamic mix of multi-camera footage, crowd shots, speaker moments, and energy cuts to music — is not something anyone is building with this stack. Not even close.

The technology’s sweet spot is programmatic motion graphics with clear, repeatable patterns. Think templates and data-driven content, not creative editorial decisions.

The Event Content Use Case That Caught My Eye

The most relevant example for our industry comes from a project called Shortvid.io. Built on Remotion, it was used by GDG Nantes to generate a huge volume of video assets for their DevFest conference — over 2,000 attendees and 100+ sessions. The team created videos showcasing event details, speaker information, and session highlights. They even displayed them on TV screens at the venue to promote upcoming talks.

From a single JSON configuration file, they produced 50+ branded videos. Speaker announcements, session schedules, room location displays, and social media assets — all generated programmatically. No editor sat there making 50 individual graphics. That’s a genuine time-saver for event communications.
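The Shortvid.io pipeline’s internals aren’t detailed here, but the general pattern is easy to sketch. Assuming a hypothetical config shape (all type and field names below are illustrative, not the project’s actual schema), a single config fans out into one render job per template per session:

```typescript
// Hypothetical shape of a conference config -- one file driving every asset.
type Session = { title: string; speaker: string; room: string; startTime: string };
type EventConfig = { eventName: string; brandColor: string; sessions: Session[] };

// One render job per template per session: the kind of fan-out that turns a
// single config into 50+ branded videos without an editor touching each one.
type RenderJob = { compositionId: string; outputFile: string; props: Record<string, string> };

function buildRenderJobs(config: EventConfig): RenderJob[] {
  const templates = ["speaker-announcement", "session-schedule"];
  return config.sessions.flatMap((session, i) =>
    templates.map((template) => ({
      compositionId: template,
      outputFile: `${template}-${i + 1}.mp4`,
      props: {
        eventName: config.eventName,
        brandColor: config.brandColor,
        title: session.title,
        speaker: session.speaker,
        room: session.room,
        startTime: session.startTime,
      },
    })),
  );
}
```

Each job would then be handed to Remotion’s renderer (for instance via its CLI, passing the props along), so adding a session to the config adds its videos automatically.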

It’s worth noting, though, that this is still templated information design. It’s not cut footage. The distinction matters.

Where This Gets Interesting for Event Production

While vibe coding for video won’t replace our core editing work, it opens up a genuinely useful new layer of deliverables. Here’s where I can see it fitting into what we do at gassProductions:

Event openers and countdown videos. Rather than templating these in After Effects, we could vibe-code bespoke branded animated intros for each client. More customisation, less repetitive production time.

Speaker title cards and lower thirds. These could be generated programmatically from a schedule.

Post-event social content. Animated stat cards, quote graphics, and recap summaries layered over high-quality stills from the event. The kind of content that takes hours to produce manually but follows a clear visual pattern.

Promotional videos for upcoming events. The product launch video style that’s proving effective for tech companies could easily be adapted to create compelling event promos.
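The schedule-driven ideas above reduce to small data lookups. As a hedged sketch (names are hypothetical, not from any real pipeline), here is the core of generating lower thirds from a schedule: given the running time of the event, decide which speaker card should be on screen.

```typescript
// One schedule slot: who is on stage, and when (minutes into the event).
type Slot = { speaker: string; talkTitle: string; startMinute: number; endMinute: number };

// Return the lower-third text for a given minute into the event, or null
// between sessions. Graphics stay in sync with the programme because both
// are generated from the same schedule data.
function lowerThirdAt(schedule: Slot[], minute: number): string | null {
  const slot = schedule.find((s) => minute >= s.startMinute && minute < s.endMinute);
  return slot ? `${slot.speaker}: ${slot.talkTitle}` : null;
}
```

The same lookup could drive title cards and room displays, so a last-minute schedule change propagates to every asset with one re-render.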

The Honest Assessment

Let’s be clear about what this is and isn’t. Vibe coding for video is not a shortcut to a finished highlight reel. The AI still struggles with spatial awareness, proportions, and timing. One developer who went viral admitted the process took multiple days, not “just a prompt.” Making good vibe-coded video is still significantly slower than building an app with the same tools.

But for motion graphics, branded assets, and template-driven content at scale? The potential is real. And for a production company like ours, the opportunity isn’t about replacing what we do. It’s about adding to it.

Imagine handing a client their cinematic highlight reel and 30 branded social assets, all from the same booking. That’s a meaningful uplift in value without a proportional increase in production time.

What’s Next

Vibe coding for video is still early. The trajectory is clear, though. As AI models improve their understanding of spatial relationships and visual coherence, the gap between what a prompt can produce and what requires a professional editor will keep narrowing — at least for certain types of content.

The smart move for production companies is to start experimenting now. Understand the capabilities. Know the limitations. Be ready to integrate these tools into client workflows when the time is right.

The companies that will win here are the ones who understand both worlds — the creative, human-led storytelling of traditional production and the efficiency of AI-driven content generation. At gassProductions, we’re keeping a very close eye on this space.

What are you seeing? We’d love to hear from anyone experimenting with vibe coding for video. Get in touch or drop a comment below.