Visit StickyLock

NVIDIA Launches Brand-New AI-Powered Avatars, Says It Is a Crucial Step Towards the Forthcoming Metaverse

NVIDIA’s new AI-powered avatars will enable everyone on the web to have a digital twin.

During the NVIDIA GTC conference held earlier this month, the company’s CEO Jensen Huang introduced a host of new and updated SDKs from NVIDIA’s upcoming developments. According to Huang, these SDKs will have an impact across a broad range of technologies, from cybersecurity to self-driving automobiles, from robotics to cloud computing, and much more. As part of this wave of innovation, the new technologies will also introduce all-new interactive AI avatars.

Arguably the most ambitious announcement at the event was NVIDIA Omniverse Avatar, a platform for creating interactive AI avatars in real time. These AI avatars will be able to see, speak, hold conversations on a wide range of subjects, and understand a speaker’s intent during the conversation. The company views the platform as a potential tool that can support a variety of industries in the coming years, such as retail, food service, and banking, to name a few.

In his speech, Huang said that the time has arrived for intelligent virtual assistants, and that NVIDIA’s AI avatars will become an integral part of future technology. These Omniverse avatars combine NVIDIA’s foundational simulation, graphics, and artificial intelligence technologies, and are regarded as some of the most complex real-time applications to date.

Huang also took time during the conference to talk about NVIDIA Omniverse Replicator at length. It is a powerful synthetic-data-generation engine that can produce physically simulated virtual worlds suitable for training purposes. Rev Lebaredian, VP of Simulation & Technology at NVIDIA, stated that Omniverse Replicator will allow developers to create the massive, diverse, and accurate datasets that are integral to building high-performing, high-quality artificial intelligence. This could potentially be used by major industries. For example, companies can use it to safely train employees who work in dangerous environments.

Part of Huang’s speech focused on how these innovative technologies will play an instrumental role in transforming major companies around the world through Project Maxine. Essentially a GPU-accelerated SDK featuring state-of-the-art AI capabilities, Project Maxine can be used by developers to create lifelike audio and video effects, as well as immersive experiences of various kinds. These creations can be used for entertainment, work, education, and social situations.

The keynote also revealed that Project Maxine can be easily integrated with other AI technologies, such as NVIDIA Riva, a speech AI tool that helps create real-time multilingual interactive avatars. NVIDIA’s Metropolis team, for example, used Maxine to create Tokkio, a talking kiosk that can greet customers in different languages and assist them while ordering food. The Drive team, meanwhile, used Maxine to develop Concierge, another AI assistant intended for use in self-driving cars.

Huang said that it is the perfect time to introduce these new technologies as we head towards the metaverse.
