From Director to Curator.

A New Framework for the Age of AI in Video Production.

Introduction - The Real Conversation

The conversation surrounding generative artificial intelligence in the creative industries is currently trapped in a cycle of spectacle and speculation. Industry forums, trade publications, and boardroom presentations are dominated by demonstrations of what the latest models can do - generating fantastical landscapes, mimicking cinematic styles, and producing novel visual concepts in seconds. While these capabilities are undeniably impressive, this fixation on the tool itself is a dangerous distraction. The more urgent and strategic conversation is not about what AI can do, but how we must fundamentally re-architect our creative processes, team structures, and strategic mindsets to harness its power effectively. The industry is asking, "Can AI do this?" when it should be asking, "How do we organise ourselves to create value with this new capability?"

Attempting to force generative AI into legacy, linear production workflows is a recipe for mediocrity, inefficiency, and strategic failure. The technology is not a more efficient version of an old tool; it is a new paradigm that demands a new operational and creative framework. To thrive in this new era, industry leaders must move beyond the hype and adopt a structured approach to integration. This article provides that structure. It is built on a deep analysis of historical precedents - examining how our industry has adapted to previous technological disruptions - and uses those lessons to frame a new strategic framework for the age of AI. This framework is organised around five core pillars: The Mindset Shift, from the top-down director to the collaborative curator; The Reality Spectrum, which defines a newly stratified three-tiered market; The New Skillset, which elevates the system thinker over the tool operator; The Strategic Goal, which reframes AI's output as a tool for engineering emotion; and The Complexity Advantage, which argues that video's technical difficulty creates a unique opportunity for early adopters. Together, these pillars offer a comprehensive guide for navigating the most significant creative transformation of our time.

Historical Precedent - Lessons in Adaptation

To understand the future, we must first examine the past. The advertising and video production industries have been shaped by a series of technological disruptions, each of which forced a fundamental rethinking of roles, workflows, and the very definition of creative value. These historical shifts provide a powerful lens through which to analyse the current AI revolution, revealing recurring patterns of adaptation that can guide our strategy today.

A family of three watching a vintage television screen showing a race car, with a dog sleeping on the floor nearby and a woman bringing snacks in a tray.

The Agency Transformed: From the Assembly Line to the Creative Team

Before the advent of television, the dominant advertising workflow was a linear assembly line. A copywriter, considered the primary conceptual force, would craft the words for a print ad and then pass the finished text to an art director, whose job was largely executional: to design a layout around the writer's established concept. This sequential, siloed process reflected the nature of the medium, which was primarily text-driven.  

The arrival of television, a medium that synthesised sight, sound, and motion, rendered this assembly-line model obsolete. The new creative challenge was not to add a picture to words, but to conceive of an idea that integrated both from the outset. This technological shift catalysed an organisational revolution, spearheaded by creative director Bill Bernbach at Doyle Dane Bernbach (DDB) in the 1960s. Bernbach dismantled the assembly line by pairing art directors and copywriters into collaborative "creative teams". This was not a minor tweak to the org chart; it was a fundamental change in the creative process. By putting the two disciplines in a room to "think and work and create together, simultaneously," Bernbach shifted the workflow from linear execution to parallel, simultaneous ideation. The "big idea" no longer belonged to a single discipline but emerged from the dynamic interplay between visual and verbal thinking. This new structure, born of technological necessity, proved so effective that it became the industry standard for the next half-century.

This historical shift provides a direct and powerful precedent for the organisational change required by generative AI. The old, linear "copy-first" model is analogous to a modern creative director attempting to write a "perfect" prompt and hand it off to an AI operator for execution. This approach fundamentally misunderstands and fails to leverage the technology's iterative, unpredictable, and collaborative nature. The Bernbach model of simultaneous, cross-disciplinary collaboration is the true analogue for the new required process: a dynamic, real-time dialogue between a human creative and the AI system, where the final concept is not dictated in advance but discovered through iteration. Just as television required a synthesis of art and copy, generative AI requires a synthesis of human taste and machine-generated variation.

Simultaneously, the economic realities of broadcast television gave birth to the 30-second spot. It is critical to recognise that this format was standardised not because it was the most creatively effective length, but because it was the most profitable for networks seeking to maximise ad revenue. This historical fact serves as a crucial reminder that industry standards are often shaped by commercial pressures as much as by creative ideals - a pattern that is already repeating in the AI era.  

The Digital Deluge and the Demand for Volume

The next great disruption was the transition from broadcast to the internet and social media. This era fundamentally altered the demands placed on creative agencies, shifting the focus from crafting a few high-stakes assets for a mass audience to producing a high volume of content tailored for fragmented platforms and niche audiences. The singular, monolithic 30-second TV spot was replaced by a constant stream of banner ads, pre-roll videos, social posts, and interactive experiences.  

This shift created a new set of priorities and a new cast of essential roles. The need for data-driven targeting, A/B testing, and real-time optimisation gave rise to specialists like social media managers, search engine optimisation (SEO) experts, and data analysts. The core creative skillset expanded to include an understanding of analytics, user engagement metrics, and platform-specific best practices. The strategic goal was no longer just to create a memorable message, but to deliver personalised and relevant content at scale.

Commuters sitting and standing in a subway train, all engaged with their smartphones, with a young man reading a book in the middle.

The digital era conditioned the industry for the very attributes - scale and personalisation - that generative AI now offers on an unprecedented level. For two decades, the strategic appetite for high-volume, tailored creative has been growing, but the production models to satisfy that appetite have remained largely manual and labour-intensive, creating a significant operational bottleneck. AI is not inventing a new demand; it is providing a revolutionary supply-side solution to a demand that has been building since the dawn of the internet. This pre-existing tension explains why the first and most immediate commercial applications of generative AI have been in low-end, high-volume ad generation - it is the path of least resistance, solving one of the industry's most chronic and costly operational headaches.

The Production Pipeline Reimagined: From the Splice to the Timeline

Parallel to the shifts in advertising, the video production industry underwent its own technological upheaval. The transition from linear editing on physical film and videotape to non-linear editing (NLE) on computers was a watershed moment, catalysed by the introduction of the Avid/1 system in 1989.

A person assembling film reels at a workbench, surrounded by tools and various film canisters.

Linear editing was a painstaking, destructive process. Every cut was permanent. Changing the sequence required physically re-splicing film or re-dubbing tapes, a time-consuming and costly endeavour that discouraged experimentation. NLEs fundamentally changed this dynamic. By digitising footage, they made the editing process non-destructive; the original source material was never altered. Editors were suddenly free to experiment, to try countless variations, to undo mistakes with a single click, and to rearrange the entire structure of a project without penalty. This democratisation of the post-production process accelerated creative iteration and elevated the editor's role from that of a technical operator executing a pre-defined plan to a more central creative partner in the storytelling process.

The NLE revolution is a powerful metaphor for the creative freedom that generative AI offers. Just as NLEs allowed editors to experiment without the fear of "destroying" the original footage, generative AI allows creatives to generate countless conceptual variations without the prohibitive upfront cost of traditional production. The economic and psychological friction associated with trying a new idea and failing is reduced to near zero. This transforms the creative process from one of high-stakes, pre-planned execution to one of low-stakes, rapid-fire exploration and discovery. The culture of the edit bay was permanently changed by the freedom to make mistakes; generative AI promises to bring a similar cultural shift to the ideation and pre-production phases of the creative workflow.

Person working on a computer with a flowchart or code diagram displayed on the monitor, in a dimly lit office environment.

The Rise of the Technical Artist: How CGI Blurred the Lines

The final and most direct historical precedent comes from the rise of computer-generated imagery (CGI) and visual effects (VFX). From its experimental beginnings in films like Westworld (1973) and Tron (1982) to its role as a central pillar of modern blockbusters like Jurassic Park (1993) and Avatar (2009), the integration of CGI has fundamentally reshaped filmmaking.

As the software for creating these effects grew more powerful and complex, a critical gap emerged between the artists who envisioned the shots and the engineers who built the software. Traditional artists often lacked the deep programming and technical knowledge to manipulate the software to its full potential, while software engineers lacked the artistic sensibility to guide the creative output. This gap was filled by the emergence of a new, hybrid role: the VFX Technical Director (TD).

The TD acts as a crucial bridge between the creative and technical worlds. They are problem-solvers who possess both artistic taste and deep technical proficiency. Their primary function is to develop custom tools, write scripts, and design production pipelines that enable artists to achieve their creative vision within the complex constraints of the software. They translate artistic intent ("we need the explosion to feel more chaotic") into technical execution (writing a new particle simulation script in Python). This role was born of necessity; the technology became too powerful and non-intuitive for a traditional artist to leverage alone.
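
To make that translation concrete, the sketch below is a deliberately simplified illustration in plain Python - a toy particle system, not any studio's actual pipeline code - in which the note "make it feel more chaotic" becomes a single turbulence parameter rather than a manual re-animation:

```python
import random

def simulate_explosion(num_particles=500, steps=60, chaos=0.2):
    """Toy particle burst: 'chaos' injects random turbulence at every step."""
    particles = [
        {"pos": [0.0, 0.0], "vel": [random.uniform(-1, 1), random.uniform(0.5, 2)]}
        for _ in range(num_particles)
    ]
    for _ in range(steps):
        for p in particles:
            # Turbulence: random jitter scaled by the 'chaos' parameter.
            p["vel"][0] += random.uniform(-chaos, chaos)
            p["vel"][1] += random.uniform(-chaos, chaos) - 0.05  # simple gravity
            p["pos"][0] += p["vel"][0]
            p["pos"][1] += p["vel"][1]
    return particles

calm = simulate_explosion(chaos=0.05)
chaotic = simulate_explosion(chaos=0.6)  # "more chaotic" = a wider, messier spread
```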

The VFX Technical Director is the single most important historical analogue for the new, essential creative role in the age of AI. As generative AI models become increasingly complex and opaque - often described as "black boxes"  - a similar hybrid role is required. This new professional must be able to translate a creative director's strategic and aesthetic intent into the "language" of the AI system, which involves not just writing a text prompt but also understanding different model architectures, integrating various APIs, and fine-tuning system parameters. The historical emergence of the TD proves that when a technology's complexity outpaces the traditional creative skillset, a new, specialised, hybrid role inevitably arises to bridge the gap.

The Five Pillars for the Future of Video

Drawing upon these historical lessons, it is possible to construct a strategic framework for navigating the AI-driven future of video production. This framework is not a technical manual for using AI tools, but a strategic guide for reorienting mindset, talent, market positioning, and creative goals. It consists of five interdependent pillars.

Pillar 1: The Mindset Shift - From Director to Curator

The first and most fundamental shift required is in the creative mindset itself. The traditional model of creative direction has been one of top-down control. A creative director conceives a singular, fully-formed vision and then directs a team to execute that vision with precision. In this "director" model, the primary value lies in the initial idea, and the goal of the production process is to replicate it faithfully. Deviation is a flaw; control is paramount.

Generative AI renders this model obsolete. To work effectively with AI is to engage in a collaboration with a "chaotic partner" - a system that is probabilistic, not deterministic. It excels at generating unexpected variations and novel connections, but it resists precise, top-down control. To force it into a purely executional role is to ignore its greatest strength.

The new, more effective model is that of the "curator." In this framework, the creative director's primary role shifts from originating the perfect idea to guiding the AI, evaluating its vast and varied outputs, and curating the most promising results. The process becomes a dialogue. The creative leader provides the initial strategic direction and aesthetic constraints, the AI generates a spectrum of possibilities, and the leader then refines, combines, and selects from those options. As creative professionals who have embraced these tools describe it, the process is an active collaboration, not a passive command. One technologist aptly compares interacting with current generative AI to "yelling at a black box," a process that requires a fundamentally different approach than playing a predictable instrument like a violin.

In the curatorial model, the most valuable human skills are no longer manual dexterity or the ability to perfectly articulate a final vision from the start. Instead, the premium is on taste, strategic discernment, and the ability to recognise and cultivate unexpected brilliance. This shift does not diminish the role of the creative director; it elevates it. When the volume of potential creative output becomes nearly infinite, the bottleneck shifts from production to selection. The value is no longer in the craft of making, but in the wisdom of choosing. This elevates the importance of a clear brand strategy and a discerning creative eye above all else. However, this elevated creative director does not need to operate in isolation. Just as an executive producer bridges the gap between a director's vision and the logistical realities of production, a new, essential role is emerging to bridge the gap between traditional creative direction and the complex, technical world of generative AI. This new partner is the key to unlocking the curatorial model at scale.  

Pillar 2: The Reality Spectrum - A Three-Tiered Market

The impact of generative AI on video production is not monolithic. The market is rapidly stratifying into a three-tiered spectrum, each with distinct workflows, cost structures, talent requirements, and strategic applications. Agencies, brands, and production houses must consciously and strategically decide where on this spectrum they intend to compete.

Tier 1: Low-End Automation

This tier is defined by the use of AI for high-volume, template-driven, and personalised content, primarily for performance marketing, social media, and e-commerce. The strategic goal is efficiency and scale. Companies in this space, such as Creatopy and Pencil, offer platforms that can generate hundreds or thousands of ad variations from a simple product URL or text prompt, automating tasks like resizing, copy generation, and localisation. A prime example of this tier in action is Carvana's campaign that created over 1.3 million unique, personalised videos for individual customers, recapping their specific car-buying journey - a feat of scale impossible through traditional means.

Person holding a smartphone showing an advertisement for Carvana with the text 'Buy Your Next Car' and an image of a blue car next to a car vending machine.
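
The economics of this tier are combinatorial. The sketch below is a minimal, vendor-agnostic illustration (hypothetical inputs in plain Python, not any platform's actual API): a handful of approved elements already multiplies into 180 distinct deliverables before a generative model has written a single new headline or rendered a single frame.

```python
from itertools import product

# Hypothetical brand-approved elements; real platforms would pull these from a
# product URL or brief and generate the copy and imagery themselves.
headlines = ["Summer sale starts now", "Your upgrade is waiting", "Free delivery this week"]
calls_to_action = ["Shop now", "See the offer", "Get started"]
aspect_ratios = ["1:1", "9:16", "16:9", "4:5"]           # feed, story, pre-roll, portrait
locales = ["en-US", "en-GB", "de-DE", "fr-FR", "es-MX"]  # localisation targets

variants = [
    {"headline": h, "cta": c, "ratio": r, "locale": l}
    for h, c, r, l in product(headlines, calls_to_action, aspect_ratios, locales)
]

print(len(variants))  # 3 x 3 x 4 x 5 = 180 ad variants from a handful of inputs
```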

Tier 2: The "Messy Middle"

This tier represents a cautionary tale, populated by early AI efforts that have been criticised for a lack of strategy, poor execution, or using the technology for hype rather than substance. A notable example is the controversy surrounding robotics company Figure AI's partnership with BMW. The founder's claims of a "fleet" of humanoid robots performing "end-to-end operations" were reportedly contradicted by BMW, which stated it was only testing a single robot on a single task during off-hours. This highlights the risk of over-promising on AI's capabilities for marketing purposes. Another example is Coca-Cola's AI-generated holiday campaign, which produced abstract and bizarre visuals that many felt lacked the brand's characteristic warmth and emotional connection, demonstrating a failure of strategic curation. This middle tier is where technology is adopted without a clear framework, leading to results that are either ineffective or reputationally damaging.

Tier 3: High-End Integration

At the apex of the market, AI is not a replacement for human talent but a powerful, specialised tool integrated into larger, expert-led production pipelines. Here, the goal is not automation but augmentation. A powerful, real-world example of this hybrid approach is a recent project for the United Nations. After delivering a landmark campaign, the client requested a new 10-second introductory sequence to provide additional context. With reshoots logistically impossible, an AI-enhanced VFX workflow was proposed. The team extended a sequence from the original footage, using generative AI to create dozens of iterations of new backgrounds, objects, and people to anchor the scene in multiple environments. Crucially, these AI-generated assets were not the final product. Instead, they were treated as raw elements and imported into a traditional VFX pipeline, where they were meticulously composited and integrated. The result was a flawless new intro that seamlessly matched the existing film, solving a complex production problem that would have otherwise been insurmountable. This process highlights both the power and the investment required for high-end work: the single 10-second sequence required six full days of dedicated work, underscoring that Tier 3 integration is about expert augmentation, not simple automation.

This approach is becoming more common across the industry. Major VFX houses like DNEG leverage AI and machine learning in their workflows for blockbuster films like Dune to handle complex tasks more efficiently. AI-powered rotoscoping, used in productions like The Mandalorian, automates the painstaking frame-by-frame process of isolating characters, freeing up artists for more creative work. Other high-end applications include using AI to generate highly complex 3D textures for digital environments or to rapidly create hundreds of unique digital props - work that would be prohibitively time-consuming to do manually.

This stratification of the market is not a temporary phase; it is a fundamental restructuring. The business model, talent pool, and technology stack required to succeed in Tier 1 are fundamentally different from those required in Tier 3. Leaders must make a conscious choice about their position on this spectrum, as attempting to be a "jack-of-all-trades" will likely lead to being a master of none.

Pillar 3: The New Skillset - From Tool Operator to System Thinker

The value of mastering a single, complex software suite - being an "Adobe expert" or a "Houdini guru" - is diminishing. Generative AI is rapidly automating many of the manual, repetitive, and technically intricate tasks that have long defined specialised creative roles. As this shift accelerates, a new, more valuable skillset is emerging: that of the "System Thinker."  

This shift from tool operator to system thinker creates a critical gap in the traditional creative department - and a powerful opportunity for a new, pivotal role to emerge. This individual acts as the essential bridge not only between the Creative Director's vision and the capabilities of AI, but also between the entire AI-enhanced workflow and the established realities of traditional video production. This requires a complex, bilingual skillset: a deep understanding of conventional production - from on-set practicalities to post-production pipelines - is just as critical as the ability to navigate the iterative, often unpredictable nature of generative technology. They are the integrator who ensures that the novel outputs of AI can be successfully incorporated into a structured, deadline-driven production environment. The exact title for this emerging discipline is still taking shape, but it revolves around a fusion of artistic, technical, and production expertise.

Here are three potential archetypes for this role:

  • The Technical Art Director: This title represents a direct evolution of the traditional Art Director, augmenting their established responsibility for visual taste and execution with the deep technical proficiency of a TD. This individual not only directs the aesthetic but also understands and builds the generative pipelines required to achieve it, translating artistic intent directly into technical workflows.  

  • The Director of Artistic Technology: This is a more senior, strategic title that positions the role as a key leader at the intersection of the creative and technology departments. This person is responsible for the strategic evaluation, implementation, and creative application of all new visual technologies, defining the technological future of the creative department and ensuring its tools serve the artistic vision.  

  • The Technical Creative Producer: This title explicitly merges three core domains, reflecting the multifaceted nature of modern production. As a Producer, they manage the end-to-end process, from budget and timelines to final delivery. As a Creative, they possess the aesthetic judgement and storytelling sense of an art director. As a Technical lead, they have the problem-solving and system-building skills to navigate the AI landscape. This role is the ultimate integrator, ensuring that the creative vision is technically feasible, financially viable, and brilliantly executed.

A clear glass perfume bottle with a pink liquid inside, sitting on pink fabric. The bottle has a square shape with a faceted surface and a clear square cap.

The core competencies of this new role include:

  • Advanced Prompt Engineering: This goes far beyond simply writing a description in a text box. It is the discipline of crafting nuanced, context-rich prompts to guide AI models effectively. It involves an iterative process of experimentation and refinement, treating the interaction as a dialogue to "fine-tune directions" rather than just giving orders.  

  • Understanding Model Architectures: A System Thinker possesses a functional, non-academic understanding of the fundamental differences between various types of AI models and their creative implications. They know, for example, that Diffusion models are currently superior for generating photorealistic imagery, while Generative Adversarial Networks (GANs) can excel at more stylised or abstract generation, and Transformers are the foundation for language and sequential data. This knowledge allows them to select the right type of model for a specific creative task.  

  • API Integration: The true power of AI in production lies not in a single, all-powerful tool, but in the ability to connect multiple specialised tools into a custom workflow. The System Thinker understands how to use Application Programming Interfaces (APIs) to chain together different services - for example, using one service to generate a script, another to generate images, and a third to automatically assemble those assets into video variations - a pattern sketched below.
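
To ground these competencies in something tangible, the sketch below chains three placeholder services - script, visuals, assembly - and wraps them in an iterative refinement loop driven by curatorial feedback. Every function and name in it is invented for illustration; it shows the shape of a System Thinker's workflow under those assumptions, not any vendor's real API.

```python
from dataclasses import dataclass

@dataclass
class Brief:
    brand: str
    mood: str
    duration_s: int

def generate_script(brief: Brief) -> str:
    # Placeholder for a call to a text-generation service.
    return f"A {brief.duration_s}s spot for {brief.brand}: {brief.mood}, dreamlike, no dialogue."

def generate_assets(script: str, style_notes: str) -> list[str]:
    # Placeholder for a call to an image/video-generation service.
    return [f"take{i} | {script[:24]}... | {style_notes}" for i in range(4)]

def assemble_cut(assets: list[str]) -> str:
    # Placeholder for a call to an automated editing/assembly service.
    return f"rough cut from {len(assets)} assets"

def creative_review(cut: str, style_notes: str) -> tuple[bool, str]:
    # Stand-in for the human curator; approves once the palette note is addressed.
    approved = "warmer" in style_notes
    return approved, "push the palette warmer, slower camera moves"

brief = Brief(brand="ExampleCo", mood="quiet optimism", duration_s=10)
style_notes = "soft light"

for attempt in range(3):  # iterative refinement, not one-shot prompting
    script = generate_script(brief)
    assets = generate_assets(script, style_notes)
    cut = assemble_cut(assets)
    approved, feedback = creative_review(cut, style_notes)
    if approved:
        break
    style_notes += f"; {feedback}"  # feed the curator's notes into the next pass
```

The detail that matters is the loop: the curator's feedback is fed back into the next generation pass rather than treated as a final sign-off, which is precisely the dialogue described in Pillar 1.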

This skillset marks a definitive shift in where value is located. The most valuable creative professional of the next decade will not be the best pixel-pusher, but the best problem-solver and system-builder. This is not an evolution of an existing role but the creation of a new one. Companies that wish to lead in this new era must actively seek, hire, and cultivate this hybrid talent.

Pillar 4: The Strategic Goal - Engineering "The Feeling"

For decades, the most successful brands have understood a fundamental truth of marketing: in a battle between logic and emotion, emotion always wins. Research consistently shows that campaigns built on emotional connection outperform those based on rational, feature-based messaging. Emotionally connected customers are more loyal, more valuable, and more likely to become brand advocates. The ultimate goal of high-level branding is not to inform the consumer, but to make them feel something.

Historically, one of the most powerful tools for achieving this has been surrealism. The surrealist art movement, deeply influenced by Freudian psychoanalysis, sought to unlock the power of the unconscious mind and the logic-free world of dreams. In advertising, this translated into imagery that was often bizarre, unexpected, and dreamlike. Iconic campaigns for brands like PlayStation ("Mental Wealth"), Guinness ("Dreamer"), and Cadbury ("Gorilla") used surreal visuals to bypass the rational mind and create a direct, memorable, and subconscious emotional impact. This approach works because it doesn't try to argue with the viewer; it creates an indelible mood, a feeling that becomes inextricably linked with the brand.

This brings us to the native aesthetic of today's generative AI video models. Their output is often described as uncanny, dreamlike, strange, or non-literal. While many in the industry view these qualities as technical flaws to be overcome in the pursuit of photorealism, this is a strategic error. These inherent aesthetic qualities are not a bug; they are a powerful creative feature. The "weirdness" of AI is perfectly suited to the strategic goal of engineering a specific feeling or mood, rather than depicting a literal reality. The technology's ability to generate novel, abstract, and surreal visuals can create an immediate and memorable emotional impact that cuts through the clutter of conventional advertising.  

Therefore, the most sophisticated strategic application of AI video today is not to fight its nature but to embrace it. The goal should shift from "making AI video look real" to "using AI video to create a real feeling." This reframes the technology's current limitations as a unique stylistic advantage, aligning a cutting-edge tool with a timeless principle of effective branding.

Pillar 5: The Complexity Advantage

Video remains one of the most technically demanding applications of generative AI, and that difficulty is itself the opportunity. Each of today's leading video models carries significant constraints - short clip lengths, inconsistent characters, steep learning curves - and navigating them requires sustained, expert investment. It is precisely this barrier that creates a first-mover advantage for the teams willing to master the tools now. The comparison below summarises the current landscape.

| Feature | Google Veo 3 | Runway Gen-4 | Hailuo 02 |
| --- | --- | --- | --- |
| Max Resolution | Up to 4K | Up to 1080p | 1080p |
| Max Clip Length | 8 seconds | 5-10 seconds | 6-10 seconds |
| Key Strengths | High fidelity, contextual understanding, and integrated sound generation (dialogue, SFX, music). | Excellent character and object consistency using reference images; the Aleph model allows for conversational, in-context video editing. | Advanced physics simulation; strong character consistency and a "Director" mode for cinematic camera controls. |
| Key Limitations | Short clip length, prompt misunderstanding, strict content filters, and inconsistent character models. | Steep learning curve; the Aleph model is limited to processing short clips; users report chaotic motion and long queue times; no native audio generation. | Short clip length and struggles with complex scenes; primary interface is in Chinese. |

A Call for Strategic Partnership

The five pillars presented in this article - a curatorial mindset, a clear understanding of the market spectrum, a focus on developing system thinkers, a strategic goal of engineering emotion, and an investment in mastering complexity - are not isolated concepts. They form an integrated strategic framework. Success in the age of AI video requires a holistic adoption of this framework, as each pillar reinforces the others. A curatorial mindset is useless without the system thinkers to execute it; a first-mover advantage in complexity is wasted if the strategic goal is misaligned.

For senior leaders tasked with navigating this transition, the path forward requires deliberate, strategic action, not reactive technological acquisition. Based on the preceding analysis, three clear recommendations emerge.

An engagement ring with a large diamond on a black textured stand under focused light.

First, structure pilot projects for learning, not just for public relations. The primary goal of initial AI initiatives should be to build internal capabilities, not to generate a single "viral" output for a press release. Design these projects to test and refine new workflows, to upskill teams on the core competencies of prompt engineering and system integration, and, most importantly, to rigorously document and share learnings across the organisation. Each experiment, whether successful or not, should build the organisation's collective intelligence.

Second, aggressively hire and develop the System Thinker. This new hybrid talent is the most critical human resource for the coming decade. Leaders must redefine their hiring criteria to look for candidates with a portfolio of skills: experience with APIs, foundational knowledge of scripting, and a demonstrated ability to work across multiple AI models and platforms. Simultaneously, they must create internal pathways to develop this talent, identifying high-potential individuals within existing teams - such as VFX artists, creative coders, or technically-minded producers - and investing in their training to bridge the gap between creative and technical domains.  

A designer working on a car advertisement on an iMac computer, with fashion sketches and character concept art on the desk.

Third, embrace symbiotic partnerships. For most large agencies and brands, the most effective and capital-efficient path forward is not to attempt to build a massive, in-house AI research and development team from scratch. The technology is evolving too rapidly, and the required expertise is too specialised. Instead, the optimal strategy is to form deep, symbiotic partnerships with the specialised, agile AI and VFX studios that operate on the bleeding edge. In this model, the large agency or brand brings the core client relationships, the deep brand strategy, and the market access. The specialist studio brings the deep technical mastery, the custom-built workflows, and the day-to-day expertise in navigating the complex tool landscape. This collaborative approach allows each party to focus on its core strengths, creating a whole that is far greater than the sum of its parts.

Ultimately, the companies that win in this new era will not be the ones that simply adopt AI the fastest, but those that adopt it the most strategically. The challenge is not a technological race but a race to achieve organisational and strategic adaptation. The future of video production does not belong to the best tool operators, but to the discerning curators, the holistic system thinkers, and the strategic partners who choose to build lasting capability over chasing fleeting trends.

A silver MacBook laptop sits on a black pedestal in front of a large illuminated yellow backdrop, with a dark industrial space surrounding it.

Let’s work together

Navigating this new landscape requires not just a new framework, but new forms of collaboration. We are actively seeking partnerships with forward-thinking agencies and brands who are ready to move beyond the hype and build a real, sustainable competitive advantage. By combining our deep expertise in AI-enhanced and traditional production and post-production workflows with your brand vision and market knowledge, we can co-create the future of video production. If you are ready to explore these new possibilities and help define the next era of creative excellence, we invite you to connect with us.

https://www.linkedin.com/in/czaicki/
https://www.linkedin.com/in/icimelifsenol/