Autonomous Vehicle Simulation

Explore high-fidelity and diverse sensor simulation for safe autonomous vehicle development.

Workloads

Simulation / Modeling / Design

Industries

Automotive and Transportation

Business Goal

Return on Investment
Risk Mitigation

Products

NVIDIA Omniverse Enterprise
NVIDIA OVX
NVIDIA DGX

Overview

The Need for High-Fidelity AV Simulation

Developing autonomous vehicles (AVs) requires vast amounts of training data that mirrors the real-world diversity they’ll face on the road. Sensor simulation addresses this challenge by rendering physically based sensor data in virtual environments. Conditioned on these physics, world foundation models (WFMs) add variation to sensor simulation—varying lighting, weather, geolocation, and more. With these capabilities, you can train, test, and validate AVs at scale without having to encounter rare and dangerous scenarios in the real world. This precision and diversity in sensor data and environmental interaction are crucial for developing physical AI.

Why AV Simulation Matters:

Safety

Render diverse driving conditions—such as adverse weather, traffic changes, and rare or dangerous scenarios—without having to encounter them in the real world.

Cost Efficiency

Accelerate development and reduce reliance on costly data-collection fleets by generating data to meet model needs.

Scalability and Flexibility

Deploy a virtual fleet to configure new sensors and stacks before physical prototyping.

Accelerate AV Simulation with Neural Reconstruction and World Foundation Models

In this tech blog, we highlight the latest NVIDIA APIs, Cosmos world foundation models, and NIM microservices for developers to kickstart their data pipelines.

Technical Implementation

Running Physically Accurate AV Simulation at Scale

Developers can get started building AV simulation pipelines by taking the following steps.

Reconstruct Real-World Data in Digital Twins and Amplify Data Variation

NVIDIA NuRec provides APIs and tools for neural reconstruction and rendering, allowing developers to turn their sensor data into high-fidelity 3D digital twins, simulate new events, and render datasets from new perspectives.

Cosmos Transfer-1 is conditioned on ground-truth and structured data inputs to generate new lighting, weather, and terrain—turning a single driving scenario into hundreds. Developers can use text prompts as well as sensor data as input to generate different variants of an existing scene.
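As a rough illustration of how conditioning turns one scenario into hundreds, the sketch below enumerates combinations of lighting, weather, and terrain prompts for a single source clip. The axis values, prompt strings, and helper name are illustrative assumptions, not a Cosmos Transfer API.

```python
from itertools import product

# Illustrative variation axes; in practice these would be text prompts
# passed to Cosmos Transfer alongside the source sensor data.
LIGHTING = ["dawn", "noon", "dusk", "night"]
WEATHER = ["clear", "rain", "fog", "snow"]
TERRAIN = ["urban", "suburban", "highway"]

def variant_prompts(scene_id):
    """Build one prompt per (lighting, weather, terrain) combination
    for a single reconstructed driving scene."""
    return [
        f"{scene_id}: {light} lighting, {weather} weather, {terrain} terrain"
        for light, weather, terrain in product(LIGHTING, WEATHER, TERRAIN)
    ]

prompts = variant_prompts("scene_0001")
print(len(prompts))  # 4 * 4 * 3 = 48 variants from one source clip
```

Each prompt would pair with the same ground-truth clip, so one recorded drive yields dozens of conditioned renders.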

Both NuRec and Cosmos Transfer-1 are integrated with CARLA, a leading open-source AV simulator. This integration allows developers to generate sensor data from Gaussian-based reconstructions using ray tracing, and to increase scenario diversity with Cosmos WFMs.

With these tools, developers can:

  • Simulate new trajectories and camera views in reconstructed scenes
  • Use CARLA’s APIs and traffic models to create varied, realistic scenarios
  • Leverage behavioral agent models like ITRA and Foretellix for advanced traffic and behavior diversity

The integration includes a starter pack of pre-reconstructed scenes, enabling rapid creation of diverse, corner-case datasets for AV development.
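A minimal sketch of the first capability above—simulating new camera trajectories in a reconstructed scene. The pure helper shifts a recorded trajectory laterally; the second function shows how such a path could drive the spectator camera through the standard CARLA Python API (`carla.Client`, `carla.Transform`), and assumes a CARLA server is running on the default port.

```python
import math

def offset_trajectory(waypoints, lateral_m):
    """Shift each (x, y, yaw_deg) waypoint sideways by lateral_m meters,
    producing a new camera path through the same reconstructed scene."""
    shifted = []
    for x, y, yaw in waypoints:
        normal = math.radians(yaw + 90.0)  # left normal of the heading
        shifted.append((x + lateral_m * math.cos(normal),
                        y + lateral_m * math.sin(normal),
                        yaw))
    return shifted

def replay_as_spectator(waypoints, host="localhost", port=2000):
    """Move CARLA's spectator camera along the shifted path.
    Requires a running CARLA server; uses the standard CARLA Python API."""
    import carla  # deferred so the pure helper above works without CARLA
    client = carla.Client(host, port)
    client.set_timeout(10.0)
    spectator = client.get_world().get_spectator()
    for x, y, yaw in waypoints:
        spectator.set_transform(carla.Transform(
            carla.Location(x=x, y=y, z=2.0),
            carla.Rotation(yaw=yaw)))

# Example: view a straight recorded pass from 2 m to its left.
recorded = [(float(i), 0.0, 0.0) for i in range(5)]
new_view = offset_trajectory(recorded, lateral_m=2.0)
```

The same shifted waypoints could instead parameterize a NuRec render, since the reconstruction supports novel viewpoints by design.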

Generate Diverse Scenarios with World Foundation Models

Developers can use the latest NVIDIA Cosmos Predict-2 world foundation model to enhance AV development with faster, scalable synthetic data generation. The WFM has two variants:

  • Predict-2 2B: Optimized for speed and lower memory usage
  • Predict-2 14B: Higher-fidelity outputs for complex scene understanding and temporal coherence

Cosmos Predict-2 lets developers generate a starting frame from a text prompt, then use that frame to condition longer video sequences, speeding up scenario design. The model is easily post-trained on specific environments, tasks, or camera systems using curated AV data and tools, enabling tailored outputs for different use cases.

Predict-2’s diffusion-based architecture enables text-to-image and video-to-world generation, balancing speed and realism for scalable scenario design.
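The start-frame-then-video workflow described above can be sketched as a loop that extends a clip chunk by chunk, each chunk conditioned on the last frame generated so far. `generate_start_frame` and `extend_from_frame` are stand-ins for the model calls, not Cosmos function names.

```python
def generate_start_frame(prompt):
    """Stand-in for a text-to-image call (Predict-2's
    diffusion-based text-to-image mode)."""
    return f"frame<{prompt}>"

def extend_from_frame(frame, num_frames):
    """Stand-in for a video-to-world call that continues the scene
    from a single conditioning frame."""
    return [f"{frame}+{i}" for i in range(1, num_frames + 1)]

def generate_sequence(prompt, total_frames, chunk=8):
    """Build a long sequence by repeatedly conditioning on the last
    generated frame, as the workflow above describes."""
    frames = [generate_start_frame(prompt)]
    while len(frames) < total_frames:
        need = min(chunk, total_frames - len(frames))
        frames.extend(extend_from_frame(frames[-1], need))
    return frames

clip = generate_sequence("rainy highway at dusk", total_frames=25)
print(len(clip))  # 25
```

The chunked structure is why the 2B variant's speed matters for scenario design: each extension step is another model invocation.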


Partners

Autonomous Vehicle Simulation Partner Ecosystem

Learn how our partners are delivering physically-based simulation for safe and efficient autonomous vehicle development.

Quickly expand Omniverse Cloud AV Simulation V&V capabilities by connecting to Foretellix's Foretify™ coverage-driven validation platform.

See one of the latest autonomous vehicle safety frameworks for industry-wide deployment.

Tap into a shared ecosystem of compatible, simulation-ready content.

Rapidly import environments into Omniverse Cloud with MathWorks RoadRunner.

Analyze, curate, and evaluate Omniverse data with the FiftyOne platform.

Amplify data variation with Cosmos Transfer and Parallel Domain scene rendering.

FAQs

Please sign up using our interest form to get the latest information.

News

NVIDIA Announces Early Access for Omniverse Sensor RTX

Read how Accenture and Foretellix are accelerating the development of next-generation self-driving cars and robots with high-fidelity, scalable sensor simulation.

NVIDIA Launches Cosmos World Foundation Model Platform to Accelerate Physical AI Development

Companies can accelerate development of physical AI, including robots and self-driving vehicles, with models and data processing pipelines for images and video.

NVIDIA Supercharges Autonomous System Development With Omniverse Cloud APIs

NVIDIA Omniverse Cloud APIs are designed to deliver large-scale, high-fidelity sensor simulation.

NVIDIA Research Wins CVPR Autonomous Grand Challenge for End-to-End Driving

NVIDIA was named an Autonomous Grand Challenge winner at CVPR in the End-to-End Driving at Scale category, outperforming more than 400 entries worldwide.

Use Cases and Demos

Foretellix

Autonomous Vehicle Sensor Simulation, Powered by NVIDIA Omniverse

See how Foretellix uses NVIDIA Omniverse Blueprint for AV Simulation to generate high-fidelity sensor simulation for autonomous vehicle development.

Foretellix

Safely Deploy Autonomous Vehicles

Foretellix, an AV validation tool developer, unlocks sensor simulation with Omniverse Cloud APIs to improve safety while accelerating workflows and reducing costs.

WPP

Enhance 3D Brand Experiences

Produce high-quality content with generative AI tools built on NVIDIA Picasso and publish interactive brand experiences with the NVIDIA Graphics Delivery Network (GDN).

Discover End-to-End Autonomous Vehicle Development

NVIDIA Omniverse Cloud Sensor RTX microservices let you test and validate your workflows in a physically accurate environment before testing in the real world.