
Behind the scenes: How I created this AI-generated video


By Alexander Evjenth


This article describes the creative process behind my first image-to-video project—the first, but certainly not the last. Working with a combination of ChatGPT, Midjourney, and RunwayML made the project enjoyable and sparked many ideas for future productions.

I signed up for RunwayML about a year ago to test its image-to-video feature. A colleague and I had been creating animated characters for a branding project, and while working we had the idea of bringing those characters to life with AI-generated videos of our images. The results were disappointing, though: our characters changed appearance over the course of the short videos.

Time passed, and I almost forgot about generative AI image-to-video because I thought the technology was not quite there yet.

Fast-forward one year, and I came across some generative AI videos in my LinkedIn feed. This time, they were of much better quality, and the characters in the videos maintained a consistent appearance. So I logged back into my Runway account, and wow, it had improved a lot in the past year!

This time, I wanted to test whether I could create a short intro for a video. Here is how I did it.

ChatGPT as my co-director

I wanted to do a short test video, a reflective narrative that could create a certain mood for the video without over-committing to a clear message or storyline. I asked ChatGPT for a narrative:

My prompt: "I want to create a 20-second video with a reflective, philosophical tone. The scene should be set in a bustling city, capturing the contrast between the fast-paced world around and a girl sitting on a bench, lost in thought."

That's where the conversations started, and about 10 minutes later, ChatGPT delivered the scene and script for the narrator:

ChatGPT output: "The film opens with an aerial view of a bustling city at dusk. The sky is streaked with hues of orange and purple as city lights flicker into life. The camera slowly zooms in on a lone woman sitting on a park bench, gazing up at the sky, lost in thought."

Narrator (soft, philosophical voice):

There's a fleeting moment in each day—a quiet pause between what was and what is yet to come—when time seems to hold its breath. And in that space, we wonder: What does it all mean? This brief life we walk is so fragile yet so full of possibilities.

Wow, this can be good, I thought...
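As an aside, if you want to repeat this step many times, the same brief can be sent to ChatGPT programmatically instead of through the chat interface. Here is a minimal sketch using the official OpenAI Python SDK; the model name and system prompt are my assumptions for illustration, not the exact settings behind the conversation above:

```python
# A hedged sketch: sending the same video brief to ChatGPT via the
# OpenAI API. The system prompt and model choice are assumptions.
import os

BRIEF = (
    "I want to create a 20-second video with a reflective, philosophical "
    "tone. The scene should be set in a bustling city, capturing the "
    "contrast between the fast-paced world around and a girl sitting on "
    "a bench, lost in thought."
)

def build_messages(brief: str) -> list[dict]:
    """Assemble the chat payload for a script-writing request."""
    return [
        {"role": "system",
         "content": "You are a co-director who writes short film scripts."},
        {"role": "user", "content": brief},
    ]

messages = build_messages(BRIEF)

# Only attempt the network call when an API key is configured.
if os.getenv("OPENAI_API_KEY"):
    from openai import OpenAI  # pip install openai
    client = OpenAI()
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(response.choices[0].message.content)
```

For a one-off project like this one, the chat interface is simpler; scripting only pays off when you iterate on many briefs.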

Midjourney as my photographer

For the video, I needed the following three images, which I generated in Midjourney:

My prompt: "Ultra-realistic helicopter image above a vibrant city at dawn, 2024, ultra-high-definition."


My prompt: "Hyper-realistic Cinematic mid-shot of a young woman sitting alone on a contemporary park bench, gazing at the sky, surrounded by blurred city lights, 2024, high-resolution, dawn lighting."


My prompt: "Hyper-realistic cinematic mid-shot of a close-up of a young woman's face, sitting on a contemporary park bench, with warm twilight lighting, lost in thought, soft bokeh city lights in the background, 2024, ultra-high-definition, clean, sharp focus on the face."


Runway as my filmmaker

The last step was to add the images in RunwayML and describe the camera movements. I used the image-to-video feature to create the motion and the text-to-audio feature to create the narrator's voice reading the script. 

I also used Runway's video editor, which has an easy-to-use yet robust interface with many options.

Here is the result. 


I think this was an excellent first version. Next, I'll start experimenting with Runway Act-One, which allows you to generate expressive characters.

If you have a creative idea that needs to be brought to life—a short video, a dynamic visual story, or something unique—I’d love to hear about it and see if we can help you. 

Feel free to book a meeting to discuss your next project!


Alexander Evjenth
Alexander Evjenth is a content creator at Zooma and has been part of the team since 2018. He's a specialist in producing knowledge-driven content that helps B2B companies explain what they do in a clear and useful way, but he's been known to step in as a project manager at times. His work spans articles, guides, webinars and more, always with a strong focus on findability and long-term value. He's also our resident AI video specialist, working hard to leverage AI to cut the cost and boost the quality of video content, for customers and for Zooma. Importantly, Alexander is one of the hosts of The B2B Enablement Pod (formerly The Onlinification Pod), and has been instrumental in building Zooma's podcasting and audio content expertise. Colleagues know him for his curiosity and deep focus — and of course, for his surfing prowess.