Explore high-fidelity and diverse sensor simulation for safe autonomous vehicle development.
Developing autonomous vehicles (AVs) requires vast amounts of training data that mirrors the real-world diversity they’ll face on the road. Sensor simulation addresses this challenge by rendering physically based sensor data in virtual environments. Conditioned on this physics-grounded data, world foundation models (WFMs) add variation to sensor simulation, amplifying diversity across lighting, weather, geolocations, and more. With these capabilities, you can train, test, and validate AVs at scale without having to encounter rare and dangerous scenarios in the real world. This precision and diversity in sensor data and environmental interaction are crucial for developing physical AI.
Why AV Simulation Matters:
Render diverse driving conditions—such as adverse weather, traffic changes, and rare or dangerous scenarios—without having to encounter them in the real world.
Accelerate development and reduce reliance on costly data-collection fleets by generating data to meet model needs.
Deploy a virtual fleet to test new sensor configurations and software stacks before physical prototyping.
Developers can get started building AV simulation pipelines by taking the following steps.
NVIDIA NuRec provides APIs and tools for neural reconstruction and rendering, allowing developers to turn their sensor data into high-fidelity 3D digital twins, simulate new events, and render datasets from new perspectives.
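The workflow is: ingest a logged drive, reconstruct it as a 3D digital twin, then re-render it from new perspectives. The pseudocode below is purely illustrative; the module and function names (nurec, reconstruct_scene, render) are hypothetical stand-ins, since NuRec’s actual APIs are documented with the SDK.

```python
# Purely illustrative pseudocode: "nurec", "reconstruct_scene", and "render"
# are hypothetical stand-ins, not the actual NuRec API.
from pathlib import Path

import nurec  # hypothetical module name

# 1) Reconstruct a 3D digital twin from a logged drive's sensor data.
twin = nurec.reconstruct_scene(
    camera_logs=Path("drive_0421/cameras"),
    lidar_logs=Path("drive_0421/lidar"),
)

# 2) Re-render the drive from a shifted viewpoint, e.g. a new camera mount,
#    to produce a dataset the fleet never physically captured.
rendered = twin.render(
    trajectory=twin.ego_trajectory.offset(lateral_m=0.5, height_m=0.3),
    sensor="camera.front_wide",
)
rendered.save("drive_0421_shifted/")
```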
Cosmos Transfer-1 is conditioned on ground-truth and structured data inputs to generate new lighting, weather, and terrain, turning a single driving scenario into hundreds. Developers can supply text prompts as well as sensor data as inputs to generate new variants of an existing scene.
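Conceptually, each Transfer-1 job pairs a text prompt with structured control inputs (such as HD-map or lidar renders) taken from the source scenario, so geometry and motion stay fixed while appearance changes. The sketch below illustrates that pairing in plain Python; the field names and file paths are assumptions for illustration, not the verified Cosmos Transfer-1 interface.

```python
# Illustrative sketch only: the spec fields and file paths below are
# assumptions for illustration, not the verified Cosmos Transfer-1 interface.
base_spec = {
    # Describe the new look for the same underlying scene.
    "prompt": "The same intersection at night in dense fog, wet road surface",
    # Structured inputs from the source scenario pin down layout and motion.
    "control_inputs": {
        "hdmap": "scene_017/hdmap_render.mp4",
        "lidar": "scene_017/lidar_depth.mp4",
    },
}

# One recorded scenario, many variants: sweep prompts over fixed controls.
weather_prompts = [
    "golden-hour sun with long shadows",
    "heavy snowfall and low visibility",
    "overcast midday with light drizzle",
]
variant_specs = [dict(base_spec, prompt=p) for p in weather_prompts]
```

Sweeping prompts over the same control inputs is what turns one recorded scenario into hundreds of consistent variants.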
Both NuRec and Cosmos Transfer-1 are integrated with CARLA, a leading open-source AV simulator. This integration allows developers to generate sensor data from Gaussian-based reconstructions using ray tracing, and to increase scenario diversity with Cosmos WFMs.
With these tools, developers can reconstruct logged drives as Gaussian-based 3D scenes, replay and modify them in CARLA, render physically based sensor data from new viewpoints, and amplify each scenario into varied lighting, weather, and terrain conditions. The integration also includes a starter pack of pre-reconstructed scenes, enabling rapid creation of diverse, corner-case datasets for AV development.
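To make the CARLA side concrete, here is a minimal sketch using CARLA’s standard Python API to spawn an ego vehicle, attach an RGB camera, and save frames to disk. It assumes a CARLA server is already running on the default port; the NuRec rendering and Cosmos variation steps sit on top of this and are not shown.

```python
import queue

import carla

# Connect to a running CARLA server (default port 2000).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Spawn an ego vehicle at the first free spawn point.
blueprints = world.get_blueprint_library()
vehicle_bp = blueprints.filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
vehicle.set_autopilot(True)

# Attach a forward-facing RGB camera to the vehicle.
camera_bp = blueprints.find("sensor.camera.rgb")
camera_bp.set_attribute("image_size_x", "1280")
camera_bp.set_attribute("image_size_y", "720")
camera_tf = carla.Transform(carla.Location(x=1.5, z=2.0))
camera = world.spawn_actor(camera_bp, camera_tf, attach_to=vehicle)

# Queue frames from the sensor callback and save a short burst to disk.
frames = queue.Queue()
camera.listen(frames.put)
try:
    for _ in range(100):
        image = frames.get(timeout=5.0)
        image.save_to_disk(f"out/{image.frame:06d}.png")
finally:
    camera.stop()
    camera.destroy()
    vehicle.destroy()
```

For deterministic dataset generation, you would typically also enable CARLA’s synchronous mode so sensor ticks align with simulation steps.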
Developers can use the latest NVIDIA Cosmos Predict-2 world foundation model to enhance AV development with faster, scalable synthetic data generation. The WFM has two variants: a text-to-image model that generates a starting frame from a text prompt, and a video-to-world model that extends that frame into longer video sequences.
This two-stage flow speeds up scenario design: generate a starting frame from text, then condition video generation on that frame. The model can also be post-trained on specific environments, tasks, or camera systems using curated AV data and tools, enabling tailored outputs for different use cases.
Predict-2’s diffusion-based architecture balances speed and realism for scalable scenario design.
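As a loose sketch of the two-stage flow in Python, the example below follows the Hugging Face diffusers pattern; the pipeline class names and checkpoint IDs are assumptions that may differ across releases, so treat it as a workflow outline rather than a verified API reference.

```python
import torch
from diffusers import (  # assumed class names; check your diffusers release
    Cosmos2TextToImagePipeline,
    Cosmos2VideoToWorldPipeline,
)
from diffusers.utils import export_to_video

prompt = (
    "Dashcam view at dusk on a rain-soaked highway, heavy truck spray, "
    "brake lights reflecting off wet asphalt"
)

# Stage 1: the text-to-image variant generates the starting frame.
t2i = Cosmos2TextToImagePipeline.from_pretrained(
    "nvidia/Cosmos-Predict2-2B-Text2Image",  # assumed checkpoint ID
    torch_dtype=torch.bfloat16,
).to("cuda")
start_frame = t2i(prompt=prompt).images[0]

# Stage 2: the video-to-world variant conditions a clip on that frame.
v2w = Cosmos2VideoToWorldPipeline.from_pretrained(
    "nvidia/Cosmos-Predict2-2B-Video2World",  # assumed checkpoint ID
    torch_dtype=torch.bfloat16,
).to("cuda")
frames = v2w(image=start_frame, prompt=prompt).frames[0]

export_to_video(frames, "dusk_rain_highway.mp4", fps=16)
```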
Learn how our partners are delivering physically based simulation for safe and efficient autonomous vehicle development.