
When AI Directs

  • Writer: Anna Kultin
  • Aug 12
  • 4 min read

Hollywood has always chased the next edge—wider screens, richer color, higher frame rates—but what’s happening now is different. It isn’t a new lens or a bigger stage. It’s the quiet arrival of a collaborator that never sleeps, never overruns the budget, and can rebuild an entire scene before the coffee goes cold. Generative AI has slipped into the studios, and the list of adopters reads like an award-season roster.

The Eternaut, the Argentine dystopian drama, made headlines for its collapsing-building sequence—created using generative AI rather than a traditional VFX pipeline. The move shaved weeks off post-production and, according to senior post-production supervisor Martín Rivas, “allowed the director to keep the chaos looking human, even though it was entirely synthetic.” What didn’t make the press kit was how much of a hybrid team it took to pull that off. At the center was a small “AI effects unit” embedded into the regular VFX department. It included a prompt designer who specialized in translating the director’s storyboard language into AI-readable instructions, a generative model artist who curated and fine-tuned training data, and a machine learning engineer responsible for adapting an off-the-shelf Runway model to the show’s specific visual style. There was also a simulation supervisor—someone with a background in both fluid dynamics and neural rendering—who blended AI-generated debris with physically accurate collapse patterns.

No one trained a model from scratch; instead, the team used Runway’s Gen-2 architecture but “taught” it the film’s world through a library of stills, location scans, and practical miniature shots. These assets gave the AI the right material cues—how snowflakes clump in Buenos Aires humidity, how concrete dust catches the light in overcast weather. The atmospheric effects were handled by a separate AI compositor, who programmed the snow to drift differently for each regional release, a subtle personalization touch that required geo-specific weather data and reference footage. “It’s not just about pushing a button,” Rivas explained in an internal production roundtable. “You need someone who understands narrative, someone who understands physics, someone who can speak the AI’s language—and they all have to sit in the same room.”
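For a sense of what that geo-specific drift logic might look like, here's a minimal Python sketch. None of it comes from the production itself: the weather values, coefficients, and names are illustrative stand-ins for whatever the compositor actually built.

```python
# Hypothetical per-region weather references; the production's actual
# geo-specific data sources are not public.
REGIONAL_WEATHER = {
    "buenos_aires": {"humidity": 0.78, "wind_ms": 4.2},
    "madrid": {"humidity": 0.55, "wind_ms": 2.8},
}

def drift_params(region: str) -> dict:
    """Map weather references to particle-drift parameters.

    Higher humidity means heavier, clumpier flakes that fall faster
    and sway less; stronger wind means more lateral drift. The
    coefficients are illustrative, not taken from the show's pipeline.
    """
    w = REGIONAL_WEATHER[region]
    clump = 0.3 + 0.7 * w["humidity"]              # flake clumping factor
    fall_speed = 0.8 + 0.6 * clump                 # heavier flakes fall faster
    lateral = w["wind_ms"] * (1.0 - 0.4 * clump)   # clumps resist the wind
    return {"clump": round(clump, 2),
            "fall_speed": round(fall_speed, 2),
            "lateral_drift": round(lateral, 2)}

for region in REGIONAL_WEATHER:
    print(region, drift_params(region))
```

The point isn't the math, which any simulation supervisor would replace in an afternoon; it's that a single weather table can fan out into a different-feeling snowfall for every territory.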

Disney has been testing similar tools, reportedly running entire animation sequences through AI-assisted re-lighting to cut down on rendering time. An internal creative director, speaking off the record, described it as “having an assistant who can try fifty lighting set-ups in an afternoon without complaining or needing overtime.” While the public may focus on the studio’s blockbusters, insiders say some of the most ambitious experiments are happening in smaller streaming series, where AI handles background extras, wardrobe continuity, and even mood-matched music stems.
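That "fifty set-ups in an afternoon" line describes, in effect, a parameter sweep. Here's a minimal sketch of the idea, with relight() standing in for whatever re-lighting model the studio actually runs internally:

```python
import itertools

# Hypothetical lighting parameter grid; the real model, its inputs,
# and these particular values are all assumptions for illustration.
KEY_INTENSITIES = [0.6, 0.8, 1.0, 1.2, 1.4]
COLOR_TEMPS_K = [3200, 4500, 5600, 6500, 7500]
FILL_RATIOS = [0.25, 0.5]

def relight(frame_path: str, key: float, temp_k: int, fill: float) -> str:
    """Stand-in for an AI re-lighting pass; returns an output path."""
    out = f"{frame_path}.key{key}_t{temp_k}_f{fill}.exr"
    # A real pipeline would run the model here and write the frame.
    return out

def sweep(frame_path: str) -> list[str]:
    """Render every combination: 50 set-ups from three small lists."""
    return [relight(frame_path, key, temp, fill)
            for key, temp, fill in
            itertools.product(KEY_INTENSITIES, COLOR_TEMPS_K, FILL_RATIOS)]

outputs = sweep("shot_042/frame_0101.exr")
print(f"{len(outputs)} lighting variants queued")  # 5 * 5 * 2 = 50
```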

Lionsgate has taken a different tack, using generative AI to reimagine archival footage for new documentaries. A recent project on the history of rock venues digitally rebuilt demolished stages in photoreal detail, allowing living artists to “perform” once again in the places that launched them. “We’re not faking history,” says a senior editor. “We’re giving it back its skin.”

Amazon Prime Video has been quietly folding AI into its live sports broadcasts, not just for predictive analytics but for audience-specific storytelling. Viewers in different regions receive slightly altered highlight reels that emphasize local athletes or rivalries. In post-production, Amazon Studios is experimenting with “adaptive trailers” that rearrange scenes based on the viewing habits of the account holder—action-first for thrill-seekers, slow-burn tension for drama fans.
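Amazon hasn't published how adaptive trailers work, but the core move, re-ranking pre-tagged scenes against a viewer profile, can be sketched in a few lines. The scene scores and profile labels below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Scene:
    name: str
    action: float   # 0..1 intensity scores, assumed pre-tagged upstream
    tension: float

# Hypothetical scene metadata; the real tagging and ranking signals
# are not public.
SCENES = [
    Scene("rooftop_chase", action=0.9, tension=0.4),
    Scene("interrogation", action=0.2, tension=0.9),
    Scene("quiet_goodbye", action=0.1, tension=0.6),
    Scene("warehouse_raid", action=0.8, tension=0.5),
]

def cut_trailer(scenes: list[Scene], profile: str) -> list[str]:
    """Order scenes to match a viewer profile.

    'thrill': lead with the highest-action beats.
    'drama': build slowly, saving the tensest beat for last.
    """
    if profile == "thrill":
        ordered = sorted(scenes, key=lambda s: s.action, reverse=True)
    else:  # "drama": slow-burn, ascending tension
        ordered = sorted(scenes, key=lambda s: s.tension)
    return [s.name for s in ordered]

print("thrill:", cut_trailer(SCENES, "thrill"))
print("drama: ", cut_trailer(SCENES, "drama"))
```

Two accounts, same footage, two different opening shots. Everything hard lives in the tagging, not the sort.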

Paramount has piloted AI for scripted television in ways that are more surgical than sweeping. In one period drama, AI rebuilt street scenes to remove accidental modern intrusions—a sneaker in a crowd shot, a neon sign peeking over a 19th-century skyline—saving costly reshoots. In another, it generated crowd murmur and incidental dialogue that matched the actors’ cadence, giving sound design a level of precision that would normally take weeks to craft.
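Paramount's tooling isn't public either, but mask-based inpainting is the closest open-source analogue for that sneaker fix. Here's a minimal sketch using Hugging Face's diffusers library, with hypothetical file names and a public checkpoint standing in for whatever the studio runs:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# A public inpainting checkpoint as a stand-in; the studio's in-house
# model and prompts are assumptions for illustration.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("street_scene_frame.png").convert("RGB")
# White pixels mark the region to regenerate: here, the sneaker an
# artist (or a detector) has roughed out in the crowd.
mask = Image.open("sneaker_mask.png").convert("L")

fixed = pipe(
    prompt="19th-century leather boots, period street crowd, film still",
    image=frame,
    mask_image=mask,
).images[0]
fixed.save("street_scene_frame_fixed.png")
```

A real shot would also demand temporal consistency from one frame to the next, which is exactly the kind of problem those embedded AI effects units exist to solve.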

Even AMC Networks, known more for character-driven storytelling than spectacle, has leaned in. Its crime series Hollow City used AI to alter weather and lighting in post to create a visual language tied to the lead character’s mental state—gray and damp in moments of doubt, harsh sunlight in bursts of clarity. “It’s like a subconscious narrator,” one cinematographer explains. “The viewer feels it without ever noticing it.”
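Strip away the poetry and that "subconscious narrator" is a mapping from an emotional score to grading parameters. A toy version, with made-up coefficients:

```python
def mood_grade(doubt: float) -> dict:
    """Map a doubt score (0 = clarity, 1 = full doubt) to grade parameters.

    Illustrative numbers only; the show's actual look development is not
    public. Doubt pulls toward a gray, flat, dim image; clarity pushes
    toward harsh, bright sunlight.
    """
    doubt = max(0.0, min(1.0, doubt))
    return {
        "saturation": 1.0 - 0.6 * doubt,   # drain color as doubt rises
        "contrast": 1.2 - 0.4 * doubt,     # flatten the image
        "exposure": 0.3 - 0.5 * doubt,     # stops: bright clarity, dim doubt
        "temp_shift_k": -800 * doubt,      # cooler, damper color cast
    }

# Per-scene doubt scores might come straight from the script breakdown.
for scene, doubt in [("confession", 0.9), ("breakthrough", 0.1)]:
    print(scene, mood_grade(doubt))
```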

Behind these examples is a shift in philosophy. Studios aren’t thinking about AI as a single-purpose gadget, but as a layer in the creative process—one that can shape, bend, and adapt every frame to match the vision, the budget, and the audience. The most daring voices in the industry are already asking the next question: what happens when the audience is part of that vision from the start? When your documentary knows what topics will keep you watching, when your drama changes its rhythm to your reactions, when no two people ever see the same cut of the same story?

The tools are in place. The infrastructure is built. The ethics and artistry are still catching up. But in editing bays, on sound stages, and inside virtual production domes, AI is already co-directing the future of film and television—sometimes with a flourish, sometimes in ways you’ll never notice. And in our next article, we’ll go inside The Eternaut’s AI effects unit to map a full behind-the-scenes “AI crew roster”—the new specialist roles, the workflows they’ve invented, and how they’re quietly reshaping the craft from the inside out.
