Meta introduced Emu Video and Emu Edit at Meta Connect, advancing generative AI for high-quality video generation and image editing, alongside the debut of Emu, the foundational model underlying both.
Emu Video uses diffusion models for text-to-video generation and accepts text, image, or combined inputs. Generation is factorized into two stages: first an image is generated from the text prompt, then a video is generated conditioned on both the text and that image. This factorization makes the video generation models more efficient to train.
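The two-stage factorization can be sketched as a simple pipeline. This is an illustrative toy, not Meta's actual API: `text_to_image` and `image_text_to_video` are hypothetical stand-ins for the two learned diffusion models, and the "tensors" are just lists of floats.

```python
import hashlib

def text_to_image(prompt: str, size: int = 4) -> list[float]:
    """Stage 1 stand-in: produce an 'image' conditioned on text alone.

    A real model would run a text-conditioned diffusion process; here we
    derive a deterministic placeholder from the prompt.
    """
    seed = hashlib.sha256(prompt.encode()).digest()
    return [b / 255 for b in seed[:size]]

def image_text_to_video(prompt: str, image: list[float],
                        frames: int = 3) -> list[list[float]]:
    """Stage 2 stand-in: produce video frames conditioned on both the text
    and the stage-1 image. Frame 0 reuses the image; later frames drift."""
    return [[px + 0.01 * t for px in image] for t in range(frames)]

def generate_video(prompt: str) -> list[list[float]]:
    """Factorized pipeline: text -> image, then (text, image) -> video."""
    image = text_to_image(prompt)
    return image_text_to_video(prompt, image)
```

The point of the factorization is visible in the structure: each stage is a separate, simpler conditional generation problem, which is what allows the models to be trained efficiently.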
Emu Edit simplifies image manipulation by following precise instructions for tasks such as local editing, background modification, and geometry transformations. Notably, it alters only the pixels relevant to the instruction, leaving unrelated pixels untouched.
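The pixel-preservation property can be expressed with an explicit mask, shown below as a minimal sketch. Note this is only an illustration of the property: Emu Edit learns to preserve unrelated pixels rather than applying a hand-built mask, and all names here are hypothetical.

```python
def apply_edit(image, mask, edit_fn):
    """Apply edit_fn only where mask is True; copy original pixels elsewhere.

    image: 2D list of pixel values
    mask:  2D list of bools, True marking the region the instruction targets
    edit_fn: per-pixel transformation (e.g. a brightness change)
    """
    return [
        [edit_fn(px) if m else px for px, m in zip(row, mask_row)]
        for row, mask_row in zip(image, mask)
    ]

# Example: brighten only the masked region; unmasked pixels are unchanged.
image = [[10, 20], [30, 40]]
mask = [[True, False], [False, True]]
edited = apply_edit(image, mask, lambda px: px + 100)
```

The invariant worth noticing is that every pixel outside the mask is bitwise identical to the input, which is the precision guarantee the instruction-following editor aims for.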
While Emu Video and Emu Edit don’t replace professional artists, they offer new avenues for self-expression through technology.