
Meta’s AI-powered EMU Edit takes a novel approach that aims to streamline various image manipulation tasks and bring enhanced capabilities and precision to image editing, without requiring prompt engineering from the user. This means a simple text prompt can be used for tasks such as local/global editing, removing/adding a background, color/geometry transformations, detection/segmentation, and much more.
EMU Edit is capable of precisely following instructions, ensuring that pixels in the input image unrelated to the instructions remain untouched. For example, when adding the text “Go Team!” to a baseball cap, the cap itself should remain unchanged. Meta’s EMU Video also leverages the EMU model, presenting a simple method for text-to-video generation based on diffusion models. It can respond to a variety of inputs: text only, image only, or both text and image.
“Unlike prior work that requires a deep cascade of models (e.g., five models for Make-A-Video), our state-of-the-art approach is simple to implement and uses just two diffusion models to generate 512×512 four-second long videos at 16 frames per second,” said Meta.















