
Runway brings precise camera controls to AI videos

Content creators will have more control over the look and feel of their AI-generated videos thanks to a new feature set coming to Runway’s Gen-3 Alpha Turbo model.

Advanced Camera Control is rolling out on Gen-3 Alpha Turbo starting today, the company announced via a post on X (formerly Twitter).


“Advanced Camera Control is now available for Gen-3 Alpha Turbo. Choose both the direction and intensity of how you move through your scenes for even more intention in every shot.”

— Runway (@runwayml), November 1, 2024

The new Advanced Camera Control features expand on the model’s existing capabilities. With them, users can “move horizontally while panning to arc around subjects … Or, move horizontally while panning to explore locations,” per the company. They can also customize the direction and intensity of the camera’s movement through a scene “for even more intention in every shot,” and combine “outputs with various camera moves and speed ramps for interesting loops.”
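For readers who want a mental model of what those controls amount to, each move boils down to a direction and an intensity along a handful of axes. The sketch below expresses the quoted “arc around subjects” recipe as plain data; the field names, value ranges, and pairing are illustrative assumptions, not Runway’s documented schema.

# Hypothetical sketch of Advanced Camera Control settings as plain data.
# Field names, value ranges, and the "arc" recipe are assumptions for
# illustration; Runway's actual controls may differ.

from dataclasses import dataclass, asdict

@dataclass
class CameraMove:
    horizontal: float = 0.0  # truck left/right (assumed range -10..10)
    vertical: float = 0.0    # pedestal up/down
    pan: float = 0.0         # rotate left/right
    tilt: float = 0.0        # rotate up/down
    zoom: float = 0.0        # dolly in/out
    roll: float = 0.0        # rotate around the lens axis

# "Move horizontally while panning to arc around subjects": pair a lateral
# move with an opposing pan so the camera orbits while keeping the subject framed.
arc_around_subject = CameraMove(horizontal=5.0, pan=-5.0)

# "Quickly zoom out to reveal new context and story": a strong negative zoom.
reveal = CameraMove(zoom=-8.0)

print(asdict(arc_around_subject))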

Unfortunately, since the new feature is restricted to Gen-3 Alpha Turbo, you will need to subscribe to the $12-per-month Standard plan to access that model and try out the camera controls for yourself.

“Or quickly zoom out to reveal new context and story.”

— Runway (@runwayml), November 1, 2024

Runway debuted the Gen-3 Alpha model in June, billing it as a “major improvement in fidelity, consistency, and motion over Gen-2, and a step towards building General World Models.” Gen-3 powers all of Runway’s text-to-video, image-to-video, and text-to-image tools. The system is capable of generating photorealistic depictions of humans, as evidenced in the X post, as well as creating outputs in a wide variety of artistic styles.

Advanced Camera Controls arrive roughly a month after Runway revealed Gen-3’s new video-to-video capabilities in mid-September, which allow users to edit and “reskin” a generated video in another artistic style using only text prompts. When combined with Apple’s Vision Pro AR headset, the results are striking. The company also announced the release of an API so that developers can integrate Gen-3’s abilities into their own apps and products.
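For developers, integration looks like any other task-based REST workflow: submit a generation request, then poll for the finished video. Here is a minimal sketch in Python; the endpoint path, header, and payload field names are assumptions for illustration, so check Runway’s API documentation for the actual contract.

# Hedged sketch of submitting a Gen-3 Alpha Turbo generation task over HTTP.
# The endpoint path, payload fields, and response shape are assumptions for
# illustration; consult Runway's API docs for the real contract.

import os
import requests

API_BASE = "https://api.dev.runwayml.com/v1"  # assumed base URL

def start_image_to_video(image_url: str, prompt: str) -> str:
    """Submit an image-to-video task; return its task id (assumed response shape)."""
    resp = requests.post(
        f"{API_BASE}/image_to_video",
        headers={"Authorization": f"Bearer {os.environ['RUNWAY_API_KEY']}"},
        json={
            "model": "gen3a_turbo",   # the Gen-3 Alpha Turbo model discussed above
            "promptImage": image_url,
            "promptText": prompt,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

Generation is asynchronous, so a real client would poll a task-status endpoint with the returned id until the output video is ready.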

The new camera controls could soon be put to use by film editors at Lionsgate, the studio behind the John Wick and The Hunger Games franchises, which signed a deal with Runway in September to “augment” humans’ efforts with AI-generated video content. The deal reportedly centers on the startup building and training a new generative AI model fine-tuned on Lionsgate’s 20,000-title catalog of films and television series.

Andrew Tarantola
OpenAI opens up developer access to the full o1 reasoning model

On the ninth day of OpenAI's holiday press blitz, the company announced that it is releasing the full version of its o1 reasoning model to select developers through the company's API. Until Tuesday's news, devs could only access the less-capable o1-preview model.

According to the company, the full o1 model will begin rolling out to users in OpenAI's "Tier 5" developer category: those who have had an account for more than a month and who spend at least $1,000 with the company. The new service is especially pricey on account of the added compute resources o1 requires, costing $15 for every (roughly) 750,000 words analyzed and $60 for every (roughly) 750,000 words generated by the model. That's three to four times the cost of performing the same tasks with GPT-4o.
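To make those rates concrete, here is a quick back-of-the-envelope calculation using the per-750,000-word figures quoted above (OpenAI actually meters per token, so treat the word-based rates as approximations):

# Rough o1 API cost estimate from the article's quoted rates.
INPUT_RATE = 15.0 / 750_000   # dollars per word analyzed (approximate)
OUTPUT_RATE = 60.0 / 750_000  # dollars per word generated (approximate)

def o1_cost(words_in: int, words_out: int) -> float:
    """Estimated dollars for one call at the quoted word-based rates."""
    return words_in * INPUT_RATE + words_out * OUTPUT_RATE

# Example: analyzing a 7,500-word document and getting a 1,500-word answer
print(f"${o1_cost(7_500, 1_500):.2f}")  # -> $0.27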

I tried out Google’s latest AI tool that generates images in a fun, new way

Google’s latest AI tool takes image generation a step further. The tool is called Whisk, and it’s based on Google’s latest Imagen 3 image generation model. Rather than relying solely on text prompts, Whisk helps you create your desired images using other images as the base prompt.

Whisk is currently in an experimental phase, but once set up it's fairly easy to navigate. Google detailed in a blog post introducing Whisk that it is intended for “rapid visual exploration, not pixel-perfect edits.”

Google strikes back with an answer to OpenAI’s Sora launch

On Monday, Google's DeepMind division unveiled Veo 2, its second-generation video generation model, which can create clips up to two minutes in length at resolutions reaching 4K. That's six times the length and four times the resolution of the 20-second, 1080p clips Sora can generate.

Of course, those are Veo 2's theoretical upper limits. The model is currently only available on VideoFX, Google's experimental video generation platform, and its clips are capped at eight seconds and 720p resolution. VideoFX is also waitlisted, so not just anyone can log on to try Veo 2, though the company announced that it will be expanding access in the coming weeks. A Google spokesperson also noted that Veo 2 will be made available on the Vertex AI platform once the company can sufficiently scale the model's capabilities.
