Our latest generative AI video model, VidGen-2, is here!
With up to 2X higher resolution than VidGen-1 (696×696), improved realism at 30 FPS, and multi-camera support at 640×384 per camera, VidGen-2 generates highly realistic driving video sequences for autonomous driving development.
Trained on thousands of hours of diverse driving footage using NVIDIA H100 Tensor Core GPUs, VidGen-2 leverages Helm.ai’s innovative generative deep neural network (DNN) architectures and our Deep Teaching™ training technology.
VidGen-2 creates driving scenes across different geographies, camera types, and vehicle perspectives, simulating real-world conditions such as urban and highway driving with pedestrians, vehicles, intersections, and varied weather and lighting. The model also learns human-like driving behaviors and can generate videos of rare corner-case scenarios.
Overall, VidGen-2 delivers even smoother and more detailed AI-generated videos to accelerate the training and validation of autonomous driving systems.
🔗 Learn more about VidGen-2 here: https://lnkd.in/gwEQ-D8G
#autonomousdriving #selfdrivingcars #embodiedai #machinelearning #artificialintelligence #generativeai #computervision #deepteaching #helmai