AI
Marketing

Create a complete AI commercial in 72 hours. From ChatGPT scripting to MidJourney visuals and Runway animation, we break down the full pipeline, challenges, and workflows.


72 Hours with AI: Creating Our Bigfoot Commercial

As a Portland-based AI agency, we're always looking for new ways to push the boundaries of what's possible with technology. At SLIDEFACTORY, our roots are in digital innovation—but today, that means more than just building websites or apps. We're focused on the future, leveraging AI and emerging technologies to reimagine creative production, workflows, and storytelling.

Curious about the limits of generative AI in an AI video production process, we decided to put our capabilities to the test. The challenge? Create a complete commercial, from concept to final cut, using only AI tools. No traditional cameras, no live crews. Just prompts, models, and our creative instincts.

We gave ourselves exactly 72 hours.

What followed was an intense, fast-paced sprint that was equal parts exhilarating and eye-opening. The result wasn't just a finished piece of content—it was a firsthand look at how far AI-powered production has come, and where it's heading.

For us, this wasn't just an experiment. It was a real-time case study in how a nimble, creative AI agency can move quickly, adapt new tools, and deliver results that would have taken weeks in a traditional pipeline.

The Spark: Why Bigfoot?

Every great project starts with that moment when someone throws out a seemingly random idea that somehow clicks. We were brainstorming concepts for our own promotional video when someone in the office mentioned Bigfoot. Not as a joke, but as a genuine creative direction. There's something universally appealing about the myth—mysterious, larger than life, and surprisingly relatable when you think about it. Plus, what Portland, Oregon-based digital agency doesn't love a good cryptid story?

The more we discussed it, the more it made sense. Bigfoot could represent the elusive "perfect client" or embody the mysterious creative process itself. The concept had legs (big, hairy legs), and we decided to run with it. With our tight 72-hour deadline looming, we dove headfirst into the AI production pipeline.

Building the Foundation: Script and Concept Development

Initially, we considered building out a complex n8n workflow to automate our content generation process. However, we quickly realized this would be overkill for our specific use case. When working under tight deadlines, we've found that the simplest approaches often work best—and ChatGPT's direct interface would serve us just as well without the additional setup time.

So, after setting aside our initial fancy ideas, ChatGPT became our first stop. To begin, we fed it detailed information about SLIDEFACTORY: our services, our studio personality, our target audience. Then we asked it to generate multiple concept outlines. The AI didn't disappoint. Within minutes, we had a dozen different approaches, from mockumentary-style spots to sleek corporate presentations featuring our mythical protagonist.

After reviewing the options, we found a direction that felt authentically us while leveraging Bigfoot's inherent charm. But having a concept was just the beginning—we needed a script that could actually work within our technical constraints and timeline.

This brought us back to ChatGPT, this time asking for multiple script variations based on our chosen concept. The AI generated everything from snappy 30-second spots to longer narrative pieces. We refined, combined, and iterated until we had something that felt right—a script that balanced humor with genuine information about our services, all delivered through our unlikely spokesperson.

The collaborative process with AI felt surprisingly natural. Rather than replacing our creative instincts, ChatGPT became a brainstorming partner that never ran out of ideas. It helped us explore directions we might not have considered while keeping our core vision intact.
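
For teams that would rather script this step than work in the chat interface, the same iteration loop can run against the OpenAI API. This is a minimal sketch, not our actual setup: the brand brief, model name, and prompt wording below are placeholders.

```python
import json
import os
import urllib.request


def build_script_payload(brand_brief, concept, n_variations=3):
    """Assemble the JSON payload for the OpenAI chat completions endpoint."""
    prompt = (
        f"Brand brief:\n{brand_brief}\n\n"
        f"Concept: {concept}\n\n"
        f"Write {n_variations} distinct 30-second commercial script variations."
    )
    return {
        "model": "gpt-4o",  # placeholder model name
        "messages": [
            {"role": "system", "content": "You are a creative copywriter for a digital agency."},
            {"role": "user", "content": prompt},
        ],
    }


def request_scripts(payload):
    """POST the payload to the API and return the first generated reply."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Build the request without sending it (sending requires an OPENAI_API_KEY).
payload = build_script_payload(
    brand_brief="SLIDEFACTORY: a Portland-based AI agency",
    concept="Bigfoot as our unlikely spokesperson",
)
```

Each call returns one batch of variations; rerunning with tweaked briefs is the scripted equivalent of our refine-combine-iterate loop.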

Designing Our Star: The Great Bigfoot Character Hunt

With our script locked, we needed to bring our character to life visually. This is where MidJourney entered the picture, and honestly, this phase became one of the most entertaining parts of the entire project.

We started broad: What kind of Bigfoot were we creating? The classic forest-dwelling cryptid? A modern, tech-savvy influencer version? Someone in an obvious costume winking at the camera? MidJourney helped us explore every possibility, generating image after image as we refined our prompts.

The iterations were fascinating. We saw Bigfoot in business suits, Bigfoot holding smartphones, Bigfoot that looked convincingly real, and Bigfoot that was obviously theatrical. Each generation taught us something about our character and helped us narrow down the personality we wanted to convey.

Eventually, we landed on a design that struck the perfect balance—clearly Bigfoot, but with a contemporary, approachable quality that would work for our brand. The character had personality in his eyes and a presence that suggested he'd be comfortable on camera discussing slide presentations and creative services.

Finding the Voice: From Pixels to Personality

Once we had our visual locked down, we faced an interesting challenge: What should Bigfoot sound like? We took our final character image back to ChatGPT and asked it to describe the voice that would match this particular Bigfoot. The AI analyzed the visual elements and suggested characteristics—warm but authoritative, slightly rustic but intelligent, with just enough gravitas to be taken seriously.

Armed with this description, we headed to ElevenLabs to find our voice actor. The platform's V3 Alpha-compatible voices offered impressive options, and we auditioned several candidates against our character description. When we found the right match, it was immediately obvious—the voice fit our Bigfoot like a perfectly tailored suit.

Using ElevenLabs V3 Alpha, we were able to generate our voiceover with remarkable control over emphasis and delivery. The technology allowed us to direct the performance, adding natural inflections and emotional beats that brought the script to life. It was like having a voice actor who never got tired and could nail every take.
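
For the curious, here is roughly what the voiceover step looks like against the ElevenLabs text-to-speech REST API. The voice ID, model ID, and voice settings below are stand-ins, not the values we used for Bigfoot:

```python
import json
import os
import urllib.request

ELEVENLABS_TTS_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"


def build_tts_request(voice_id, text, stability=0.5, similarity_boost=0.75):
    """Assemble the URL and JSON body for an ElevenLabs text-to-speech call."""
    url = ELEVENLABS_TTS_URL.format(voice_id=voice_id)
    body = {
        "text": text,
        "model_id": "eleven_multilingual_v2",  # placeholder; V3 access varies by account
        "voice_settings": {"stability": stability, "similarity_boost": similarity_boost},
    }
    return url, body


def synthesize(voice_id, text, out_path="voiceover.mp3"):
    """Send the request and write the returned MP3 bytes to disk."""
    url, body = build_tts_request(voice_id, text)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "xi-api-key": os.environ["ELEVENLABS_API_KEY"],
        },
    )
    with urllib.request.urlopen(req) as resp:
        audio = resp.read()  # raw MP3 bytes
    with open(out_path, "wb") as f:
        f.write(audio)


# Build the request without sending it (sending requires an ELEVENLABS_API_KEY).
url, body = build_tts_request("VOICE_ID_PLACEHOLDER", "Hi, I'm Bigfoot.")
```

Lowering `stability` lets the delivery vary more between takes, which is one way to "direct" a performance when a line reads too flat.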

Scene by Scene: The Visual Production Pipeline

With our character and voice established, we needed to create the world around him. ChatGPT helped us break down our script into detailed scene descriptions, providing the kind of specific visual direction that would translate well to our various AI tools.

This is where things got really interesting from a production standpoint. We experimented with multiple platforms—ChatGPT for conceptual development, Veo3 for video generation, and MidJourney for static scenes. Each tool had its strengths, and finding the right application for each became part of our workflow optimization.

MidJourney consistently delivered the strongest starting points for our scenes. The image quality was exceptional, and the platform's ability to interpret our detailed prompts gave us reliable results. However, we quickly discovered that MidJourney's animation features didn't provide the level of control we needed for professional work.

Veo3 offered better animation control, but our licensing restrictions meant we couldn't feed in our custom images as starting points. This led to inconsistent results that didn't match our established visual style. We needed a solution that could bridge the gap between MidJourney's excellent stills and the motion we required.

The Animation Solution: Runway Gen-4 Takes Center Stage

Runway Gen-4 became our secret weapon for bringing static scenes to life. By feeding our MidJourney images into Runway, we could generate sophisticated animations while maintaining visual consistency with our character design. The platform excelled at creating atmospheric background movement, subtle character animations, and environmental effects that elevated our scenes from static to cinematic.

While Runway was undeniably powerful, we quickly learned that AI animation comes with its own set of challenges. Results were often unpredictable—some animations turned out stiff and unrealistic, while others simply didn't match our input images or creative vision. It became a process of iteration, requiring multiple attempts to achieve shots that truly worked for our story. What looked promising in our static image didn't always translate effectively into motion.

Still, for scenes without dialogue, Runway Gen-4 proved to be our go-to solution despite these hurdles. The tool's ability to interpret and extend our visual concepts while maintaining quality made it indispensable for our tight timeline. We could generate multiple animation options quickly, then select the best results from several attempts for our final cut—a workflow that became essential given the unpredictable nature of the output.
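
That generate-several, keep-the-best workflow can be sketched as a simple loop. Note that `generate_clip` here is a hypothetical stand-in for an image-to-video call (Runway or otherwise); in our real workflow the "score" was a human watching each candidate, not a number from an API:

```python
import random


def generate_clip(prompt_image, prompt_text, seed):
    """Hypothetical stand-in for an image-to-video generation call.

    Returns a (clip_id, quality_score) pair. The seeded random score here
    mimics the unpredictability of real outputs for illustration only.
    """
    random.seed(seed)
    return f"clip_{seed}", random.random()


def best_of_n(prompt_image, prompt_text, n=4):
    """Generate n candidate animations and keep the highest-rated one."""
    candidates = [generate_clip(prompt_image, prompt_text, seed) for seed in range(n)]
    return max(candidates, key=lambda c: c[1])


best_clip, rating = best_of_n("bigfoot_still.png", "slow camera push through fog")
```

The point of the sketch is the shape of the loop: when single generations are unreliable, batching attempts and selecting afterward turns unpredictability into a manageable review step.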

However, speaking scenes required a different approach entirely.

Bringing Bigfoot to Life: HeyGen Avatar IV

For the portions of our AI commercial where Bigfoot needed to speak directly to the camera, we turned to HeyGen Avatar IV. This platform allowed us to upload our ElevenLabs-generated voiceover and create synchronized character animations that felt natural and engaging.

The level of control HeyGen provided was impressive. We could direct our character's performance, adjusting gestures, expressions, and timing to match our creative vision. The technology seamlessly integrated our custom voice with believable character animation, creating moments that felt genuinely conversational rather than obviously AI-generated.

Watching our static Bigfoot character come to life and deliver our script with personality and charm was one of those magical production moments that reminded us why we love this work.

Post-Production: Bringing It All Together

With our visual elements generated, we entered familiar territory: post-production. Using AI-powered tools in Photoshop, we refined our MidJourney-created images—expanding edges, enhancing details, and adjusting composition to ensure consistency across scenes.

An interesting discovery emerged during this phase: certain elements, particularly signs with text or written messages, were actually easier to create using ChatGPT than MidJourney. Our workflow evolved into a hybrid approach: we'd generate the base image in MidJourney, then describe the scene and desired edits to ChatGPT for clearer, more contextually accurate text elements. These ChatGPT-generated components were then seamlessly integrated back into our Photoshop workflow, where we blended everything together for a polished final look.
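
That sign-generation step can also be scripted. Below is a rough sketch against the OpenAI image generation endpoint; the model name and prompt template are illustrative, not our production setup:

```python
import json
import os
import urllib.request


def build_sign_payload(sign_text, scene_description):
    """Assemble the JSON body for the OpenAI image generation endpoint."""
    return {
        "model": "dall-e-3",  # placeholder model name
        "prompt": (
            f'A weathered wooden trail sign reading exactly "{sign_text}", '
            f"placed in this scene: {scene_description}. Match the scene's lighting."
        ),
        "size": "1024x1024",
        "n": 1,
    }


def request_sign_image(payload):
    """POST the payload and return the URL of the generated image."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/images/generations",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"][0]["url"]


# Build the request without sending it (sending requires an OPENAI_API_KEY).
payload = build_sign_payload(
    "BEWARE: CREATIVE GENIUS AHEAD",
    "a misty Pacific Northwest forest trail",
)
```

Quoting the exact wording in the prompt is what makes text-bearing elements come out legible, which is where we found this route beat MidJourney.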

Refining and Integrating Elements

Once all assets were refined, we moved into Premiere Pro, where traditional editing skills became crucial in tying our AI-generated elements into a cohesive final cut. The assembly process felt remarkably similar to conventional production workflows—just with different source materials. Our AI-generated elements cut together naturally, and the familiar rhythm of editing allowed us to craft the pacing and emotional beats essential for an effective commercial.

For the soundtrack, time constraints led us away from AI music generation and toward Envato's proven library. This decision reinforced an important lesson: sometimes the hybrid approach—combining AI innovation with established, reliable resources—produces the strongest results.

The Power of Hybrid Workflows

What struck us most was how naturally our AI-assisted workflow integrated with traditional post-production techniques. The technology enhanced our creative process without replacing the fundamental storytelling skills that make commercial production effective.

The Verdict: 72 Hours Well Spent

Did we succeed in creating a compelling commercial entirely with AI tools in just three days? Absolutely. But more importantly, we learned that AI isn't about replacing traditional production skills—it's about augmenting them in exciting ways.

The technology allowed our Portland AI agency to explore creative directions we might never have considered, iterate rapidly without the usual resource constraints, and produce high-quality content on an aggressive timeline. However, success still required human creative judgment, technical problem-solving, and the kind of storytelling instincts that come from years of production experience.

Our Bigfoot commercial proved that AI tools like MidJourney, ElevenLabs, Runway, and HeyGen are ready for professional creative work, but they're most powerful when wielded by people who understand both the possibilities and the limitations. Here in the Pacific Northwest, we're seeing more agencies embrace this hybrid approach—combining AI innovation with traditional creative expertise.

As we wrapped up our 72-hour sprint, one thing was clear: Bigfoot might be mythical, but the potential of AI tools like MidJourney, ElevenLabs, Runway, and HeyGen in creative production is very real. For agencies ready to embrace this technology, the future isn't human versus AI—it's human with AI, and that future is incredibly exciting.

Looking for a reliable partner for your next project?

At SLIDEFACTORY, we’re dedicated to turning ideas into impactful realities. With our team’s expertise, we can guide you through every step of the process, ensuring your project exceeds expectations. Reach out to us today and let’s explore how we can bring your vision to life!

Contact Us