Generative AI is becoming a strong tag-team partner for the XR space. Recently, Microsoft noted how MR represents the “eyes and ears” of AI.
So, it appears that the two emerging technologies, XR and AI, coexist to enhance outcomes for end-users across the board – instead of competing.
However, competition between the two technologies may not be as drastic as audiences once thought. While a mainstream steer away from the Metaverse towards genAI means the latter may be gaining the most focus currently, roadmaps are already in place at firms such as Microsoft to develop the industrial Metaverse by 2024.
GenAI is not just applicable to immersive product vendors; it is already common on the developer side of XR hardware and software.
RT3D development engines such as Unreal, Unity, and Omniverse also leverage genAI to optimize development workflows, notably for budding XR content creators.
The small pool of XR development talent is an issue as new hardware arrives regularly, from AR to VR to MR. Having content creators designing and selling applications for enterprises and consumers is crucial.
Otherwise, why should a user buy an expensive XR headset without a solid application offering? GenAI can simplify XR content creation and allow more individuals to design AR/VR/MR services.
NVIDIA Omniverse Gains Core GenAI Upgrade
Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA, stated that the latest Omniverse update gives XR developers access to genAI-empowered workflows.
The genAI updates come as part of NVIDIA’s introduction of OpenUSD – an interoperable 3D graphics framework – across its services.
By increasing the usage of OpenUSD on Omniverse, NVIDIA can add genAI services, including Cesium, Convai, Move AI, SideFX Houdini and Wonder Dynamics, to assist with industrial digitization operations.
According to Lebaredian, the OpenUSD- and genAI-ready workflows enhance Omniverse developer tools, allowing enterprises to “build larger, more complex world-scale simulations as digital testing grounds for their industrial applications.”
Moreover, Lebaredian added:
Industrial enterprises are racing to digitalize their workflows, increasing the demand for OpenUSD-enabled, connected, interoperable, 3D software ecosystems.
The Omniverse update also introduces USD Composer, which allows RT3D developers to design large-scale scenes using the OpenUSD framework.
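For readers unfamiliar with what “designing a scene using the OpenUSD framework” looks like in practice, the sketch below assembles a tiny stage with the open-source pxr Python bindings (available via the usd-core package). The file names, prim paths, and asset reference are illustrative assumptions, not taken from NVIDIA’s tooling.

```python
# Minimal sketch: composing a simple OpenUSD stage with the open-source
# `pxr` Python bindings (pip package `usd-core`). File names and prim
# paths are illustrative, not drawn from NVIDIA's Omniverse tools.
from pxr import Usd, UsdGeom

# Create a new root layer for the scene
stage = Usd.Stage.CreateNew("factory_scene.usda")

# Define a world root and a transform prim for a machine asset
UsdGeom.Xform.Define(stage, "/World")
machine = UsdGeom.Xform.Define(stage, "/World/Machine")

# Reference an external asset layer -- layered composition is what makes
# USD scenes interoperable across tools such as Omniverse and Houdini
machine.GetPrim().GetReferences().AddReference("assets/machine.usd")

# Author a translation so the referenced asset sits where we want it
UsdGeom.XformCommonAPI(machine).SetTranslate((10.0, 0.0, 2.5))

stage.GetRootLayer().Save()
```

Because every tool reads and writes the same layered .usd/.usda files, a scene authored this way can be opened and extended in any OpenUSD-enabled application rather than being locked to one engine.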
Additionally, Omniverse is gaining Audio2Face, a genAI-powered programming interface suite that allows XR developers to create realistic facial animations and gestures from audio files. Audio2Face also adds multilingual support and a new female base model.
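As a rough illustration of how an audio-to-facial-animation service of this kind is typically driven, the sketch below posts an audio clip to a locally running instance over HTTP. The host, port, endpoint path, and payload fields are hypothetical placeholders and do not reflect NVIDIA’s documented Audio2Face API.

```python
# Hypothetical sketch of driving an audio-to-facial-animation service over
# HTTP. The host, port, endpoint, and payload fields are placeholders and
# do NOT reflect NVIDIA's documented Audio2Face interface.
import requests

A2F_URL = "http://localhost:8011"  # assumed local headless instance


def animate_from_audio(audio_path: str, character: str) -> dict:
    """Ask the (hypothetical) service to generate facial animation
    keyframes for the given character from a WAV file."""
    payload = {
        "audio_file": audio_path,       # illustrative field name
        "character": character,         # e.g. a base model identifier
        "emotion_strength": 0.6,        # illustrative parameter
    }
    response = requests.post(f"{A2F_URL}/generate_animation",
                             json=payload, timeout=60)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    result = animate_from_audio("lines/intro_es.wav", "female_base")
    print(result.get("status"))
```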
Moreover, the firm is adding new developer templates and resources to lower the barrier to entry for using the new OpenUSD tools. The templates also support a more comprehensive range of developers with no-code considerations.
The OpenUSD integration does more than introduce new genAI services to Omniverse. The Pixar-produced graphics format opens up the Omniverse product to a broader range of industry use cases, further demonstrating that RT3D engines are serious tools for developing custom solutions.
Key Improvements and Upgrades
Alongside the genAI and OpenUSD integrations comes a series of updates that continue to grow the opportunities available on Omniverse.
NVIDIA is expanding its Omniverse Kit Extension Registry with modular app-building features, improving the process of creating custom apps and providing access to more than 600 core first-party extensions. A minimal sketch of what such an extension looks like follows below.
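To show what a registry-distributed extension looks like, here is a minimal sketch following the common omni.ext.IExt entry-point pattern used by Kit extensions. It only runs inside an Omniverse Kit-based app (the omni.* modules are provided by the platform, not pip), and names such as the window title are illustrative.

```python
# Minimal sketch of an Omniverse Kit extension entry point, following the
# common `omni.ext.IExt` pattern. Runs only inside a Kit-based app; the
# window title and label text are illustrative.
import omni.ext
import omni.ui as ui


class ExampleExtension(omni.ext.IExt):
    """Instantiated by Kit when the extension is enabled in the registry."""

    def on_startup(self, ext_id: str):
        # Build a small window so the extension is visible in the app
        self._window = ui.Window("Example Tooling", width=300, height=120)
        with self._window.frame:
            ui.Label(f"Extension {ext_id} loaded")

    def on_shutdown(self):
        # Release UI resources when the extension is disabled
        if self._window:
            self._window.destroy()
            self._window = None
```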
NVIDIA is also boosting Omniverse’s efficiency and user experience by introducing Ada Lovelace-powered rendering optimizations, as well as features such as an AI denoiser that supports “real-time 4K path tracing of massive industrial scenes.”
Moreover, NVIDIA is integrating XR developer tools for designing spatial environments within the Omniverse ecosystem. Spatial environment refers to the 3D computing space around a headset user – as showcased by Apple’s Vision Pro.
The lineup of updates comes as NVIDIA looks to lead in providing developers and enterprise end-users with the tools for designing immersive applications to drive positive outcomes for the workforce – alongside consumer-facing use cases.
NVIDIA Scales Support for XR Development
The news comes after NVIDIA partnered with Varjo in August to introduce ray-tracing tools to its Omniverse platform, delivering high-quality rendering on the cloud for industrial XR enterprise clients.
Marcus Olssen, Director of Software Partnerships, Varjo, said at the time:
Graphical and computing demands made it impossible to ray trace true-to-life scenes like these in high-resolution VR until now. With support for NVIDIA Omniverse, mixed reality developers and users will have the ability to render photorealistic scenes in Varjo XR-3 and unlock ray tracing in mixed reality.
At the GPU Technology Conference (GTC) 2023, NVIDIA introduced the CloudXR 4.0 platform for scaling XR deployments across cloud-based frameworks, with additional, optimized server and client flexibility for developing applications.
With the upgrades, developers can tap the platform for use in the cloud, over 5G mobile edge compute (MEC), and across corporate and enterprise networks.
Read More: www.xrtoday.com