
Meta introduces Make-A-Video, an AI system that creates videos from text.

Make-A-Video can generate video on demand from a text description or an existing image.

Make-A-Video is an AI-powered video generator that Meta launched today. Like earlier image-synthesis tools such as DALL-E and Stable Diffusion, it can generate original video content from text or image prompts. It can also create variations of existing videos, though it is not yet available to the general public.

The Make-A-Video announcement page shows example videos generated from text, including a young couple walking in heavy rain and a teddy bear painting a picture, and it also demonstrates the software's ability to animate static source images. For instance, after being processed by the AI model, a still photo of a sea turtle can appear to swim.

In July, Meta announced its own text-to-image AI model, Make-A-Scene, which builds on existing work in text-to-image synthesis, the same family of techniques behind image generators like OpenAI's DALL-E. That work is the key technology behind Make-A-Video and the reason it has arrived sooner than some experts had predicted.

Instead of training the Make-A-Video model on labeled video data (for instance, captioned descriptions of the actions shown), Meta used image-synthesis data (still images paired with captions) together with unlabeled video footage. As a result, the model learns where the subject of a text or image prompt is likely to appear in space and time. From there, it can predict what happens next and render the scene in motion for a short duration.
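The idea of "predicting what happens next" from learned spatial representations can be illustrated with a toy sketch. This is not Meta's actual method (which uses a diffusion model over spatiotemporal features); the function below is a hypothetical, minimal stand-in that treats future frames as an extrapolation of a latent trajectory estimated from a few observed frames:

```python
import numpy as np

# Toy sketch (not Meta's method): treat "predicting what happens next"
# as extrapolating a latent trajectory from a few observed frames.
def extrapolate_frames(latents: np.ndarray, n_future: int) -> np.ndarray:
    # latents: (T, D) array of latent codes for T observed frames.
    # Estimate a constant per-step velocity across the observed frames,
    # then roll it forward to synthesize n_future new latent codes.
    velocity = (latents[-1] - latents[0]) / (len(latents) - 1)
    last = latents[-1]
    return np.stack([last + (i + 1) * velocity for i in range(n_future)])

# Three observed frames moving along a straight line in latent space.
observed = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
future = extrapolate_frames(observed, n_future=2)
# Continues the trajectory: [[3.0, 1.5], [4.0, 2.0]]
```

A real system replaces the constant-velocity assumption with a learned generative model, but the shape of the problem is the same: infer dynamics from observed frames, then synthesize plausible continuations.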

 

In a white paper, Meta stated, “We extend the spatial layers to include temporal information during the model initialization stage via function-preserving transformations.” According to the paper, new attention modules in the extended spatial-temporal network learn temporal dynamics from a collection of videos.
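A "function-preserving transformation" means the newly added temporal layers are initialized so the extended network initially computes exactly what the original per-frame (spatial) network did, and only learns temporal behavior during training. The sketch below illustrates the principle with a hypothetical 1D temporal convolution over per-frame features, initialized as an identity; the function names and shapes are this article's illustration, not code from Meta's paper:

```python
import numpy as np

def temporal_identity_kernel(kernel_size: int) -> np.ndarray:
    # A temporal convolution kernel that acts as the identity:
    # all weight on the center tap, zeros elsewhere.
    k = np.zeros(kernel_size)
    k[kernel_size // 2] = 1.0
    return k

def temporal_conv(frames: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    # frames: (T, D) per-frame feature vectors; convolve along the
    # time axis with zero padding so the output keeps length T.
    T, _ = frames.shape
    pad = len(kernel) // 2
    padded = np.pad(frames, ((pad, pad), (0, 0)))
    out = np.zeros_like(frames)
    for t in range(T):
        # Weighted sum over the temporal window centered on frame t.
        out[t] = np.tensordot(kernel, padded[t:t + len(kernel)], axes=1)
    return out

# With identity initialization, the temporally extended network
# reproduces the original spatial network's per-frame outputs exactly.
frames = np.arange(24.0).reshape(8, 3)   # 8 frames of 3-dim features
assert np.allclose(temporal_conv(frames, temporal_identity_kernel(3)), frames)
```

Starting from this identity behavior, training can move weight off the center tap, letting each frame's output depend on its neighbors without ever degrading the pretrained spatial model at initialization.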

Meta has not said who will have access to Make-A-Video, or how and when it might be made public. A sign-up form is available for anyone interested in trying it in the future.

Meta acknowledges that the ability to instantly produce photorealistic videos poses some social risks. At the bottom of the announcement page, Meta states that all AI-generated video content from Make-A-Video carries a watermark to “help ensure viewers know the video was generated with AI and is not a captured video.”

If history is any indication, competing open-source text-to-video models may follow (some, such as CogVideo, already exist), which could render Meta’s watermark protection moot.

Shailen Vandeyar

Shailen Vandeyar is a Journalist at Flaunt Weekly.
