LidarView
LidarView is an open source platform developed by Kitware for real-time visualization, recording, and processing of 3D LiDAR data. Built atop ParaView, it efficiently renders large point clouds and offers 3D visualization of time-stamped LiDAR returns, a spreadsheet inspector for per-return attributes such as timestamp and azimuth, and the ability to display multiple data frames simultaneously. Users can load data from live sensor streams or recorded .pcap files, apply 3D transformations to point clouds, and manage subsets of laser data. LidarView supports a range of sensors, including models from Velodyne, Hesai, Robosense, Livox, and Leishen, for both live streaming and replay of recorded captures. The platform integrates advanced algorithms for Simultaneous Localization and Mapping (SLAM), enabling accurate environmental reconstruction and sensor localization, and incorporates AI and machine learning capabilities for scene classification.
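The per-return attributes LidarView exposes (timestamp, azimuth, range) are what a viewer turns into 3D points. As a rough illustration of that step, and not of LidarView's own API, the NumPy sketch below converts range, azimuth, and laser elevation angles into Cartesian coordinates using a generic rotating-LiDAR convention.

```python
import numpy as np

def returns_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Convert spherical LiDAR returns to Cartesian points.

    range_m, azimuth_deg, elevation_deg are equal-length 1-D arrays, one
    entry per laser return. The axis convention here is a generic
    rotating-LiDAR one, not any specific sensor's packet format.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    x = range_m * np.cos(el) * np.sin(az)
    y = range_m * np.cos(el) * np.cos(az)
    z = range_m * np.sin(el)
    return np.column_stack((x, y, z))

# Example: three returns from one firing sequence
pts = returns_to_xyz(
    np.array([12.4, 12.6, 40.1]),     # range in meters
    np.array([181.0, 181.2, 181.4]),  # rotation angle in degrees
    np.array([-15.0, 1.0, 15.0]),     # laser elevation in degrees
)
print(pts.shape)  # (3, 3)
```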
Cognata
Cognata delivers full product-lifecycle simulation for ADAS and autonomous vehicle developers. It provides automatically generated 3D environments and realistic AI-driven traffic agents for AV simulation, along with a ready-to-use scenario library and simple authoring tools for creating millions of AV edge cases. Closed-loop testing integrates with minimal effort and offers configurable rules and visualization for autonomous simulation, with performance that is measured and tracked. Its digital-twin-grade 3D environments model roads, buildings, and infrastructure accurately down to the last lane marking, surface material, and traffic light. The architecture is global, cost-effective, and built for the cloud from the beginning, so closed-loop simulation or integration with a CI/CD environment is a few clicks away. Engineers can easily combine control, fusion, and vehicle models with Cognata's environment, scenario, and sensor modeling capabilities.
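Closed-loop testing of the kind described above feeds simulated sensor data to the vehicle stack and the stack's commands back into the simulator on every tick. The sketch below illustrates that loop generically; the Simulator interface, method names, and VehicleCommand fields are hypothetical and are not Cognata's SDK.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class VehicleCommand:
    steering: float  # radians, positive = left
    throttle: float  # 0..1
    brake: float     # 0..1

class Simulator(Protocol):
    """Hypothetical closed-loop simulator interface (illustrative only)."""
    def reset(self, scenario_id: str) -> None: ...
    def sensor_frame(self) -> dict: ...
    def apply(self, cmd: VehicleCommand) -> None: ...
    def step(self, dt: float) -> None: ...
    def collided(self) -> bool: ...

def run_closed_loop(sim: Simulator, av_stack, scenario_id: str,
                    duration_s: float = 30.0, dt: float = 0.05) -> bool:
    """Run one scenario in closed loop; return True if it ends without a collision."""
    sim.reset(scenario_id)
    t = 0.0
    while t < duration_s:
        frame = sim.sensor_frame()  # simulated camera/lidar/radar data for this tick
        cmd = av_stack.plan(frame)  # system under test turns sensor data into a command
        sim.apply(cmd)              # the command drives the simulated vehicle: loop closed
        sim.step(dt)
        if sim.collided():
            return False
        t += dt
    return True
```

In a CI/CD setup, a runner like this could be invoked once per scenario in the regression suite, with the pass/fail result reported back to the pipeline.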
NVIDIA DRIVE Map
NVIDIA DRIVE® Map is a multi-modal mapping platform designed to enable the highest levels of autonomy while improving safety. It combines the accuracy of ground-truth mapping with the freshness and scale of AI-based fleet-sourced mapping. With four localization layers (camera, lidar, radar, and GNSS), DRIVE Map provides the redundancy and versatility required by the most advanced AI drivers. For the highest level of accuracy, the ground-truth map engine builds DRIVE Maps from rich sensor data collected by NVIDIA DRIVE Hyperion vehicles equipped with cameras, radars, lidars, and differential GNSS/IMU. This achieves better than 5 cm accuracy for higher levels of autonomy (L3/L4) in selected settings such as highways and urban areas. DRIVE Map is also designed for near-real-time operation and global scalability. Based on both ground-truth and fleet-sourced data, it represents the collective memory of millions of vehicles.
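Redundant localization layers are useful because their independent estimates can be fused, with each layer weighted by its confidence. As a generic illustration of that idea, and not DRIVE Map's actual algorithm, the sketch below applies inverse-variance weighting to 2-D position estimates from several layers.

```python
import numpy as np

def fuse_position_estimates(estimates):
    """Inverse-variance fusion of 2-D position estimates from several
    localization layers (e.g. camera, lidar, radar, GNSS).

    estimates: list of (position, sigma_m) pairs, where position is a
    length-2 array in meters and sigma_m is that layer's 1-sigma
    uncertainty, assumed isotropic and independent across layers.
    Returns the fused position and its 1-sigma uncertainty.
    """
    positions = np.array([p for p, _ in estimates], dtype=float)
    weights = np.array([1.0 / s**2 for _, s in estimates])
    fused = (weights[:, None] * positions).sum(axis=0) / weights.sum()
    fused_sigma = np.sqrt(1.0 / weights.sum())
    return fused, fused_sigma

# Example: four layers agreeing to within a few centimeters
layers = [
    (np.array([10.02, 5.01]), 0.05),  # lidar layer
    (np.array([10.05, 4.98]), 0.10),  # camera layer
    (np.array([10.00, 5.03]), 0.20),  # radar layer
    (np.array([10.10, 4.95]), 0.30),  # GNSS layer
]
pos, sigma = fuse_position_estimates(layers)
print(pos, sigma)  # fused estimate dominated by the most confident layers
```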
Aurora Driver
Created from industry-leading hardware and software, the Aurora Driver is designed to adapt to a variety of vehicle types and use cases, allowing Aurora to deliver the benefits of self-driving across several industries, including long-haul trucking, local goods delivery, and people movement. The Aurora Driver consists of sensors that perceive the world, software that plans a safe path through it, and the computer that powers and integrates both with the vehicle. It is designed to operate any vehicle type, from a sedan to a Class 8 truck. The Aurora Computer is the central hub that connects the hardware and autonomy software and enables the Aurora Driver to integrate seamlessly with every vehicle type. The custom-designed sensor suite, which includes FirstLight Lidar, long-range imaging radar, and high-resolution cameras, works together to build a 3D representation of the world, giving the Aurora Driver a 360° view of what's happening around the vehicle in real time.
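Building one 360° view from several sensors amounts to transforming each sensor's measurements into a common vehicle frame using its mounting pose and merging the results. The sketch below shows that step for point-like returns; the extrinsics and sensor layout are illustrative, not Aurora's calibration data or code.

```python
import numpy as np

def merge_sensor_clouds(clouds):
    """Merge per-sensor point clouds into one cloud in the vehicle frame.

    clouds: list of (points, R, t) tuples, where points is an (N, 3) array in
    the sensor frame, and R (3x3 rotation) and t (length-3 translation) give
    that sensor's mounting pose (extrinsics) on the vehicle.
    """
    merged = [pts @ R.T + t for pts, R, t in clouds]
    return np.vstack(merged)

# Example: a forward lidar and a rear-facing sensor contributing to a 360° view
forward = (np.array([[5.0, 0.0, 0.2]]), np.eye(3), np.array([1.5, 0.0, 1.8]))
rear_R = np.array([[-1.0,  0.0, 0.0],
                   [ 0.0, -1.0, 0.0],
                   [ 0.0,  0.0, 1.0]])  # yawed 180 degrees
rear = (np.array([[8.0, 0.5, 0.0]]), rear_R, np.array([-1.0, 0.0, 0.9]))
print(merge_sensor_clouds([forward, rear]))  # both points in the vehicle frame
```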