To help autonomous cars navigate safely in the rain and other inclement weather, researchers are looking into a new type of radar.
Self-driving vehicles can have trouble “seeing” in the rain or fog, with the car’s sensors potentially blocked by snow, ice or torrential downpours, and their ability to “read” road signs and road markings impaired.
Many autonomous vehicles rely on lidar technology, which works by bouncing laser beams off surrounding objects to produce a high-resolution 3D picture on a clear day, but which does not perform as well in fog, dust, rain or snow, according to a recent report from abc10 of Sacramento, Calif.
“A lot of automatic vehicles these days are using lidar, and these are basically lasers that shoot out and keep rotating to create points for a particular object,” stated Kshitiz Bansal, a computer science and engineering Ph.D. student at the University of California San Diego, in an interview.
The university’s autonomous driving research team is working on a new way to improve the imaging capability of existing radar sensors, so they more accurately predict the shape and size of objects in an autonomous car’s view.
“It’s a lidar-like radar,” stated Dinesh Bharadia, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering, adding that it is an inexpensive approach. “Fusing lidar and radar can also be done with our techniques, but radars are cheap. This way, we don’t need to use expensive lidars.”
The team places two radar sensors on the hood of the car, enabling the system to see more space and detail than a single radar sensor. The team then compared its system’s performance to that of a lidar-based system in tests on clear days and nights, and again in simulated fog; the two-radar system performed better than the lidar-only system.
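As a rough illustration of the multi-radar idea (not the team’s published method), detections from two radars can be transformed into a common vehicle frame and merged into a denser point cloud; the mounting offsets and detections below are hypothetical.

```python
import numpy as np

def to_vehicle_frame(points_xy, mount_xy, yaw_rad):
    """Rotate and translate one radar's 2D detections into the vehicle frame."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rotation = np.array([[c, -s], [s, c]])
    return points_xy @ rotation.T + mount_xy

# Hypothetical detections (x, y in meters) from two hood-mounted radars.
left_radar = np.array([[12.0, 1.5], [30.2, -0.8]])
right_radar = np.array([[12.1, 1.4], [29.9, -0.9]])

# Hypothetical mounting positions and orientations on the hood.
merged = np.vstack([
    to_vehicle_frame(left_radar, mount_xy=np.array([3.5, 0.6]), yaw_rad=0.05),
    to_vehicle_frame(right_radar, mount_xy=np.array([3.5, -0.6]), yaw_rad=-0.05),
])
print(merged)  # combined point cloud as seen from the vehicle frame
```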
“So, for example, a car that has lidar, if it’s going in an environment where there is a lot of fog, it won’t be able to see anything through that fog,” Bansal stated. “Our radar can pass through these bad weather conditions and can even see through fog or snow.”
The team uses millimeter wave radar, a form of radar that uses short-wavelength electromagnetic waves to detect the range, velocity and angle of objects.
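For readers unfamiliar with how such a sensor measures those quantities, the following is a minimal sketch of the standard FMCW (frequency-modulated continuous wave) relations commonly used with millimeter wave radar; all parameters are illustrative assumptions, not the UC San Diego team’s values.

```python
import math

# Assumed 77 GHz automotive-style FMCW parameters (illustrative only).
C = 3.0e8                # speed of light, m/s
CARRIER_HZ = 77e9        # carrier frequency
WAVELENGTH = C / CARRIER_HZ
BANDWIDTH_HZ = 1.0e9     # chirp bandwidth
CHIRP_TIME_S = 50e-6     # chirp duration
SLOPE = BANDWIDTH_HZ / CHIRP_TIME_S  # chirp slope, Hz/s

def range_from_beat(beat_hz):
    """Round-trip delay appears as a beat frequency proportional to range."""
    return C * beat_hz / (2 * SLOPE)

def velocity_from_doppler(doppler_hz):
    """Radial velocity from the Doppler shift of the reflected signal."""
    return doppler_hz * WAVELENGTH / 2

def angle_from_phase(delta_phase_rad, antenna_spacing_m=WAVELENGTH / 2):
    """Angle of arrival from the phase difference across two receive antennas."""
    return math.asin(delta_phase_rad * WAVELENGTH / (2 * math.pi * antenna_spacing_m))

print(range_from_beat(1.0e6))        # ~7.5 m
print(velocity_from_doppler(5.0e3))  # ~9.7 m/s
```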
Enhanced autonomous vehicle vision is also the goal of AI-SEE, a European project involving the startup Algolux, which is cooperating with 20 partners over three years to work toward Level 4 autonomy for mass-market vehicles. Founded in 2014, Algolux is headquartered in Montreal and has raised $31.8 million to date, according to Crunchbase.
The intent is to build a novel, robust sensor system, supported by artificial intelligence-enhanced vehicle vision for low-visibility conditions, to enable safe travel in every relevant weather and lighting condition, such as snow, heavy rain or fog, according to a recent account from AutoMobilSport.
The Algolux technology employs a multisensor data fusion approach, in which the acquired sensor data will be fused and simulated by means of sophisticated AI algorithms tailored to adverse weather perception needs. Algolux plans to provide technology and domain expertise in the areas of deep learning AI algorithms, fusion of data from distinct sensor types, long-range stereo sensing, and radar signal processing.
“Algolux is one of the few companies in the world that is well versed in the end-to-end deep neural networks that are needed to decouple the underlying hardware from our application,” stated Dr. Werner Ritter, consortium lead, of Mercedes-Benz AG. “This, along with the company’s in-depth knowledge of applying their networks for robust perception in bad weather, directly supports our application domain in AI-SEE.”
The project will be co-funded by the National Research Council of Canada Industrial Research Assistance Program (NRC IRAP), the Austrian Research Promotion Agency (FFG), Business Finland, and the German Federal Ministry of Education and Research (BMBF), under the PENTA EURIPIDES label endorsed by EUREKA.
An autonomous car’s ability to detect what is in motion around it is crucial in any weather, and so is its ability to know which objects around it are stationary, suggests a recent post in the Drive Lab series from Nvidia, an engineering look at individual autonomous vehicle challenges. Nvidia is a chipmaker best known for its graphics processing units, which are widely used for developing and deploying applications employing AI techniques.
The Nvidia lab is working on using AI to address the shortcomings of radar signal processing in distinguishing moving and stationary objects, with the aim of improving autonomous vehicle perception.
“We trained a DNN [deep neural network] to detect moving and stationary objects, as well as accurately distinguish between different types of stationary obstacles, using data from radar sensors,” stated Neda Cvijetic, who works on autonomous vehicles and computer vision for Nvidia and authored the blog post. She has held that position for about four years; before that, she worked as a systems architect for Tesla’s Autopilot software.
Ordinary radar processing bounces radar signals off of objects in the environment and analyzes the strength and density of reflections that come back. If a sufficiently strong and dense cluster of reflections comes back, classical radar processing can determine this is likely some kind of large object. If that cluster also happens to be moving over time, then that object is probably a car, the post outlines.
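A toy version of that classical heuristic, with thresholds chosen purely for illustration and not drawn from the Nvidia post, might look like this:

```python
import numpy as np

def classify_cluster(reflections):
    """Classify one cluster of radar reflections.

    reflections: (N, 4) array of rows (x, y, intensity, doppler_mps).
    """
    strong_enough = reflections[:, 2].mean() > 20.0  # assumed intensity threshold
    dense_enough = len(reflections) >= 10            # assumed density threshold
    if not (strong_enough and dense_enough):
        return "clutter / no object"

    moving = abs(reflections[:, 3].mean()) > 0.5     # m/s, assumed motion threshold
    # A strong, dense, moving cluster is probably a vehicle;
    # a strong, dense, stationary cluster is ambiguous.
    return "likely a vehicle" if moving else "large stationary object (type unknown)"
```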
While this approach can work well for inferring a moving vehicle, the same may not be true for a stationary one. In that case, the object produces a dense cluster of reflections that are not moving, which classical radar processing could equally interpret as a railing, a broken-down car, a highway overpass or some other object. “The approach often has no way of distinguishing which,” the author states.
A deep neural network is an artificial neural network with multiple layers between the input and output layers, according to Wikipedia. The Nvidia team trained its DNN to detect moving and stationary objects, and to accurately distinguish between different types of stationary obstacles, using data from radar sensors.
Training the DNN first required overcoming radar data sparsity problems. Since radar reflections can be quite sparse, it is practically infeasible for humans to visually identify and label vehicles from radar data alone. However, lidar data, which can create a 3D image of surrounding objects using laser pulses, can supplement the radar data. “In this way, the ability of a human labeler to visually identify and label cars from lidar data is effectively transferred into the radar domain,” the author states.
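A simplified sketch of that cross-modal labeling idea, assuming lidar-derived 2D bounding boxes and radar detections already expressed in the same vehicle frame (the field names below are hypothetical), could be:

```python
import numpy as np

def label_radar_points(radar_xy, lidar_boxes):
    """Tag each radar detection with the label of the lidar-annotated box it falls in.

    radar_xy:    (N, 2) array of radar detections in the vehicle frame
    lidar_boxes: list of dicts like
                 {"label": "car", "x_min": ..., "x_max": ..., "y_min": ..., "y_max": ...}
    """
    labels = ["background"] * len(radar_xy)
    for i, (x, y) in enumerate(radar_xy):
        for box in lidar_boxes:
            if box["x_min"] <= x <= box["x_max"] and box["y_min"] <= y <= box["y_max"]:
                labels[i] = box["label"]  # human label made on lidar, transferred to radar
                break
    return labels
```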
The approach leads to improved results. “With this additional information, the radar DNN is able to distinguish between different types of obstacles—even if they’re stationary—increase confidence of true positive detections, and reduce false positive detections,” the author stated.
Many stakeholders involved in fielding safe autonomous vehicles find themselves working on similar problems from their individual vantage points. Some of those efforts are likely to result in relevant software becoming available as open source, supporting the shared interest in continuously improving autonomous driving systems.