Nvidia showcased a new text-to-3D generative AI model late last week that it calls LATTE3D.

In its announcement blog post, Nvidia likens the model to a “virtual 3D printer” that creates textured meshes from user-input text in less than a second.

LATTE3D was trained on “animals and everyday objects”, and generates multiple models for each prompt, similar to Luma’s Genie, which we covered last month. It also uses a diffusion model to create 3D models, like OpenAI’s Point-E.
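
To make that workflow concrete, here is a minimal, purely illustrative Python sketch of what a text-to-3D pipeline like this looks like from the user's side: a text prompt goes in, several candidate textured meshes come out. The `generate_meshes` function and `TexturedMesh` type are hypothetical placeholders for illustration only; Nvidia has not published LATTE3D's actual API.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TexturedMesh:
    """Hypothetical container for one generated asset: geometry plus a texture."""
    vertices: list      # (x, y, z) positions
    faces: list         # triangles as vertex-index triples
    texture_png: bytes  # baked texture image


def generate_meshes(prompt: str, num_candidates: int = 4) -> List[TexturedMesh]:
    """Placeholder for a LATTE3D-style text-to-3D generator.

    Systems of this kind amortize a diffusion-based pipeline into a fast
    forward pass, returning several candidate meshes per prompt in well
    under a second. This stub only illustrates the interface; the real
    model is not publicly available.
    """
    raise NotImplementedError("LATTE3D has not been publicly released")


# Intended usage: request several candidates for one prompt and keep the best.
# candidates = generate_meshes("a red toadstool with white spots")
# best = candidates[0]
```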

Speaking of the competition, if Nvidia’s comparisons of LATTE3D to other text-to-3D generators are anything to go by (these are showcased on a Labs page dedicated to the model), LATTE3D promises the best of both worlds in terms of speed and quality.

That said, the model is not yet ready for use. When it will become available remains unclear, as does to whom it will be available, although Nvidia mentions “video games, ad campaigns, design projects or virtual training grounds for robotics” as potential use cases. Nvidia also hints at the model’s future in animation, showing off some AI-generated animations that, while not entirely convincing, are nonetheless impressive.

LATTE3D’s reveal comes months after Nvidia said it was “no longer a graphics company”, subsequently declaring itself “The AI Computing Company”. Everything about the model looks and reads as impressive, but that’s essentially all we have right now: pictures and words. We’d suggest tempering your excitement for the time being.

That said, there is a great deal of information available regarding the model’s functionality. The aforementioned Labs page offers a research paper and multiple videos, including a “Model Usage Demo”, so be sure to check those out in the meantime. But for more succinct info, check out Nvidia’s recent blog article.
