At the Augmented World Expo on Tuesday, Snap teased an early version of its real-time, on-device image diffusion model that can generate vivid AR experiences. The company also unveiled generative AI tools for AR creators.
Snap co-founder and CTO Bobby Murphy said onstage that the model is small enough to run on a smartphone and fast enough to re-render frames in real time, guided by a text prompt.
Murphy said that while the emergence of generative AI image diffusion models has been exciting, these models have to be significantly faster to be impactful for augmented reality, which is why Snap's teams have been working to speed up machine learning models.
Snapchat users will begin to see Lenses with this generative model in the coming months, and Snap plans to bring it to creators by the end of the year.
“This and future real time on device generative ML models speak to an exciting new direction for augmented reality, and is giving us space to reconsider how we imagine rendering and creating AR experiences altogether,” Murphy said.
Murphy also announced that Lens Studio 5.0 is launching today for developers, with access to new generative AI tools that can help them create AR effects much faster than currently possible, saving them weeks or even months.
AR creators can create selfie Lenses by generating highly realistic ML face effects. Plus, they can generate custom stylization effects that apply a realistic transformation over the user’s face, body, and surroundings in real time. Creators can also generate a 3D asset in minutes and include it in their Lenses.
In addition, AR creators can generate characters like aliens or wizards with a text or image prompt using the company’s Face Mesh technology. They can also generate face masks, textures, and materials within minutes.
The latest version of Lens Studio also includes an AI assistant that can answer questions AR creators may have.