Passengers may not be skittish when self-driving cars are commonplace, but in the early years, most of us will be on high alert. Self-driving system developer Drive.ai employs multiple visualization technologies to reassure passengers, as well as to help company engineers understand what the system is “seeing” and how it performs.
Drive.ai recently outlined, in a post on Medium, the four primary visualization tools it uses for internal study and passenger reassurance. The company described how it uses dashboard displays, 3D data visualization, annotated datasets, and interactive simulations in product development.
Onboard displays
Passenger reassurance and comfort motivate Drive.ai’s dashboard displays. The onboard display combines data from lidar sensors and full-surround cameras to create 3D images as the car drives. By enhancing the image with data from radar, GPS, and an inertial measurement unit (IMU), the system helps passengers understand what the vehicle is about to do, as well as what it picks up with the various sensors.
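Drive.ai hasn’t published the code behind its display, but a minimal sketch of how such a fused display frame might be assembled (every name here, from SensorFrame to build_display_frame to the individual fields, is hypothetical) could look like this:

```python
from dataclasses import dataclass, field

@dataclass
class SensorFrame:
    """One synchronized snapshot of the sensors the article names."""
    lidar_points: list                                # (x, y, z) points from the lidar sweep
    camera_images: dict                               # camera name -> image payload (opaque here)
    radar_tracks: list = field(default_factory=list)  # moving objects detected by radar
    gps_fix: tuple = (0.0, 0.0)                       # (latitude, longitude)
    imu_heading: float = 0.0                          # heading in radians from the IMU

def build_display_frame(frame: SensorFrame, planned_action: str) -> dict:
    """Merge one sensor snapshot with the vehicle's next planned action
    into a single payload the onboard display can render."""
    return {
        "scene_points": frame.lidar_points,    # 3D structure of the scene
        "detections": frame.radar_tracks,      # objects moving around the car
        "pose": {"gps": frame.gps_fix, "heading": frame.imu_heading},
        "intent_banner": planned_action,       # e.g. "Yielding to pedestrian"
    }
```

The last field is the point of the structure: showing the planned action alongside the sensor picture is what tells a passenger both what the car sees and what it is about to do.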
Off-board analysis
Drive.ai’s engineers use real-time data from cars to create 3D visualizations that include mapping, motion planning, perception, and localization and state estimation, plus a host of additional robotics elements. The full assemblage enables the company to dive deeper into self-driving performance.
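The post doesn’t show how these elements are combined, but as a rough illustration (VizLayer and compose_scene are invented names, not Drive.ai’s API), layering them into one scene might look like this:

```python
from dataclasses import dataclass

@dataclass
class VizLayer:
    """One overlay in an off-board 3D visualization."""
    name: str        # e.g. "map", "motion_plan", "perception", "localization"
    geometry: list   # drawable primitives for this layer
    visible: bool = True

def compose_scene(layers: list[VizLayer]) -> list:
    """Flatten the enabled layers into a single draw list, keeping order
    so the map renders underneath plans, detections, and pose estimates."""
    scene = []
    for layer in layers:
        if layer.visible:
            scene.extend(layer.geometry)
    return scene
```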
Synchronizing the timing of the various sensor data signals is a crucial element of successful autonomous vehicle performance. By incorporating a wide range of vehicle sensor, mapping, and traffic network data into a single visualization, the engineers can tweak the various algorithms to enhance the timing coordination. This toolset also facilitates testing and variable analysis.
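The post doesn’t describe Drive.ai’s synchronization method; the sketch below shows one common starting point, nearest-timestamp alignment of every stream against a reference such as the lidar sweeps (all names here are illustrative):

```python
import bisect

def nearest_index(timestamps: list[float], t: float) -> int:
    """Index of the sample whose timestamp is closest to t.
    Assumes timestamps are sorted ascending."""
    i = bisect.bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

def align_streams(reference: list[float], streams: dict[str, list[float]]) -> list[dict]:
    """For each reference timestamp, pick the closest sample from every
    other stream, so one visualization frame shows a consistent moment."""
    frames = []
    for t in reference:
        frame = {"t": t}
        for name, ts in streams.items():
            frame[name] = nearest_index(ts, t)
        frames.append(frame)
    return frames

# Example: a 10 Hz lidar as the reference, with a 30 Hz camera and 100 Hz IMU alongside.
lidar = [0.0, 0.1, 0.2]
frames = align_streams(lidar, {"camera": [i / 30 for i in range(7)],
                               "imu": [i / 100 for i in range(21)]})
```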
Annotated datasets
According to Drive.ai, it takes about 800 human hours to correctly label all the data collected during one hour of driving. Human annotators label the initial datasets; deep-learning AI then applies what it “learns” from the human-annotated data to label additional data quickly and reliably. That frees the human annotators to work on new types of data and to quality-check the machine-labeled data.
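As a hedged sketch of that division of labor (the thresholds, audit rate, and function names below are assumptions for illustration, not Drive.ai’s published pipeline), the routing logic might look like:

```python
import random

def run_labeling_pass(samples, model_label, human_label,
                      confidence_threshold=0.9, audit_rate=0.05):
    """Hybrid labeling loop: the trained model labels familiar data,
    humans label what the model is unsure about, and a random slice
    of machine labels goes back to a human for quality checking."""
    labeled = []
    for sample in samples:
        label, confidence = model_label(sample)   # model returns (label, confidence)
        if confidence < confidence_threshold:
            # Novel or ambiguous data goes to a human annotator.
            label = human_label(sample)
            source = "human"
        elif random.random() < audit_rate:
            # Spot-check: a human verifies the machine's label.
            label = human_label(sample)
            source = "machine-audited"
        else:
            source = "machine"
        labeled.append({"sample": sample, "label": label, "source": source})
    return labeled
```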
Simulation
Working from what the company calls “massive libraries of scenarios,” Drive.ai engineers test and evaluate the company’s autonomous systems by running driving simulations in 3D visualized worlds. With the autonomous system running in the background, the team can change elements such as traffic light patterns and pedestrian behaviors to observe how the self-driving program responds.
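Drive.ai hasn’t released its simulator, but the testing loop it describes, varying scenario elements and observing the system’s response, can be sketched roughly like this (the scenario knobs and names are invented for illustration):

```python
import itertools

# Hypothetical scenario knobs of the kind the article mentions.
LIGHT_PATTERNS = ["normal_cycle", "long_red", "flashing_yellow"]
PEDESTRIANS = ["none", "crossing_legally", "jaywalking"]

def run_scenario(lights, pedestrians, drive_system):
    """Place the autonomy stack in one synthesized world and record its response."""
    world = {"lights": lights, "pedestrians": pedestrians}
    return {"world": world, "response": drive_system(world)}

def sweep_scenarios(drive_system):
    """Exhaustively vary the scenario knobs, the way engineers can tweak
    elements of a simulated world and watch the system react."""
    return [run_scenario(lights, peds, drive_system)
            for lights, peds in itertools.product(LIGHT_PATTERNS, PEDESTRIANS)]

# Example with a stand-in system that always yields to pedestrians:
stub = lambda world: "stop" if world["pedestrians"] != "none" else "proceed"
for record in sweep_scenarios(stub):
    print(record)
```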