Snap previews its real-time image model that can generate AR experiences

Snap previews a real-time image model for AR experiences and unveils new generative AI tools for creators with Lens Studio 5.0.

Snap introduced a real-time, on-device image diffusion model at the Augmented World Expo, capable of generating AR experiences in response to text prompts. The model is designed to run efficiently on smartphones and will be integrated into Snapchat Lenses in the near future. Lens Studio 5.0 also launched, with new generative AI tools to expedite AR effect creation for developers.

Snap revealed a real-time image diffusion model for augmented reality at the Augmented World Expo, capable of generating AR experiences from text prompts. The model is optimized to run on smartphones, allowing it to re-render frames in real time and considerably enhancing the AR experience.

Snapchat users can expect to see Lenses built on this new technology in the coming months, with a broader release to creators by the end of the year. The model marks a major advancement in how AR experiences are created and rendered.

Additionally, Snap launched Lens Studio 5.0, which includes generative AI tools designed to speed up the creation of AR effects. From text or image prompts, these tools can generate realistic face effects, custom stylizations, 3D assets, and characters such as aliens or wizards, streamlining the workflow for AR creators.