Helm.ai Unveils VidGen-1: A Breakthrough in AI-Generated Driving Scene Videos
The new generative AI model creates realistic video sequences for autonomous driving development, leveraging advanced deep neural networks and deep teaching techniques
June 24, 2024 / UKi / Izzy Wood -- Helm.ai, a provider of advanced AI software for high-end ADAS, Level 4 autonomous driving and robotic automation, has unveiled a generative AI model that produces highly realistic video sequences of driving scenes for autonomous driving development and validation.
Credit: Helm.ai
The technology, known as VidGen-1, follows Helm.ai’s announcement of GenSim-1 for AI-generated labeled images, and is useful for both prediction tasks and generative simulation.
Trained on thousands of hours of diverse driving footage, VidGen-1 leverages deep neural network (DNN) architectures and deep teaching, an unsupervised training technology, to create realistic video sequences of driving scenes. These videos – produced at a resolution of 384 x 640, at variable frame rates of up to 30 frames per second, and lasting up to several minutes – can be generated either randomly, without an input prompt, or prompted with a single image or an input video.
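Helm.ai has not published a programmatic interface for VidGen-1, so the sketch below is purely illustrative: the names (`VideoSpec`, `generate_driving_video`) are hypothetical, and a random-noise stub stands in for the actual model. It only encodes the output parameters and the three prompting modes described above (unconditional, single-image prompt, input-video prompt).

```python
# Illustrative sketch only; not Helm.ai's API. A noise stub replaces the model.
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class VideoSpec:
    height: int = 384          # output resolution reported for VidGen-1
    width: int = 640
    fps: int = 30              # variable frame rate, up to 30 fps
    duration_s: float = 10.0   # clips can reportedly run up to several minutes

def generate_driving_video(spec: VideoSpec,
                           prompt: Optional[np.ndarray] = None,
                           seed: int = 0) -> np.ndarray:
    """Stub generator returning frames shaped (T, H, W, 3).

    prompt=None                  -> unconditional (random) generation
    prompt of shape (H, W, 3)    -> continue from a single image
    prompt of shape (T, H, W, 3) -> continue from an input video
    """
    rng = np.random.default_rng(seed)
    n_frames = int(spec.fps * spec.duration_s)
    # Placeholder for the model's decoded frames.
    frames = rng.integers(0, 256, size=(n_frames, spec.height, spec.width, 3),
                          dtype=np.uint8)
    if prompt is not None:
        # Prepend the prompt frame(s) as the conditioning context.
        ctx = prompt if prompt.ndim == 4 else prompt[None, ...]
        frames = np.concatenate([ctx.astype(np.uint8), frames], axis=0)
    return frames

# Example: prompt with a single frame and synthesize ten seconds of video.
first_frame = np.zeros((384, 640, 3), dtype=np.uint8)
video = generate_driving_video(VideoSpec(), prompt=first_frame)
print(video.shape)  # (301, 384, 640, 3): 1 prompt frame + 300 generated
```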
The company says VidGen-1 can generate videos of driving scenes across different geographies and from multiple types of cameras and vehicle perspectives.
The model can produce both highly realistic appearances and temporally consistent object motion, and it learns and reproduces human-like driving behaviors, generating motions of the ego-vehicle and surrounding agents in accordance with traffic rules.
It can simulate realistic video footage of a wide range of scenarios in multiple international cities, encompassing urban and suburban environments; a variety of vehicles; pedestrians; bicyclists; intersections; turns; weather conditions such as rain and fog; illumination effects such as glare and night driving; and accurate reflections on wet road surfaces, reflective building walls and the hood of the ego-vehicle, as sketched below.
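The company has not disclosed how such scene attributes are exposed to users, so the following is a hypothetical parameterization of the conditions listed above, useful mainly to show how a validation harness might enumerate them; the `Scenario` structure and all field names are assumptions.

```python
# Hypothetical scenario parameterization; VidGen-1's actual conditioning
# interface has not been published.
from dataclasses import dataclass
from enum import Enum

class Weather(Enum):
    CLEAR = "clear"
    RAIN = "rain"
    FOG = "fog"

class Lighting(Enum):
    DAY = "day"
    NIGHT = "night"
    GLARE = "glare"

@dataclass
class Scenario:
    city: str                         # one of multiple international cities
    environment: str                  # "urban" or "suburban"
    weather: Weather = Weather.CLEAR
    lighting: Lighting = Lighting.DAY
    wet_road_reflections: bool = False

# Example: a rainy night scene in an urban setting with wet-road reflections.
scene = Scenario(city="Tokyo", environment="urban",
                 weather=Weather.RAIN, lighting=Lighting.NIGHT,
                 wet_road_reflections=True)
print(scene)
```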