Deep Learning Based Perception on Radar Data for Autonomous Driving
This thesis investigates perception tasks using automotive radar sensors for autonomous driving. Compared to cameras and lidars, radar sensors offer unique advantages, including robustness to adverse weather and direct velocity measurements. We focus on range-beam-Doppler tensors as input to deep neural networks, a data representation that retains the full radar signal rather than the commonly used sparse radar point clouds. The thesis examines how range-beam-Doppler tensors can be processed by neural networks, using scene classification as an exemplary task, and evaluates the effectiveness of complex-valued convolutions for processing radar data.
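To illustrate what a complex-valued convolution on such data can look like, the following is a minimal sketch, not the thesis implementation: it uses PyTorch and the common split-into-two-real-convolutions formulation (A + iB) * (x + iy) = (Ax - By) + i(Ay + Bx); the class name ComplexConv2d, the tensor sizes, and the channel counts are illustrative assumptions.

```python
# Minimal sketch (assumption, not the thesis code): a complex-valued 2D convolution
# applied to one complex range-beam slice of a range-beam-Doppler tensor.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # Real and imaginary parts of the complex kernel, realized as two real convolutions.
        self.conv_re = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_im = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, x_re, x_im):
        # (A + iB) * (x + iy) = (A*x - B*y) + i(A*y + B*x)
        out_re = self.conv_re(x_re) - self.conv_im(x_im)
        out_im = self.conv_re(x_im) + self.conv_im(x_re)
        return out_re, out_im

# Example with placeholder dimensions: (batch, channels, range bins, beams).
x_re = torch.randn(1, 1, 256, 64)
x_im = torch.randn(1, 1, 256, 64)
layer = ComplexConv2d(1, 8)
y_re, y_im = layer(x_re, x_im)
magnitude = torch.sqrt(y_re**2 + y_im**2)  # magnitude feature map, as used when phase is dropped
```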
Our findings show that scene classification is achievable from radar data alone, that complex-valued convolutions do not yield significant benefits in this context, and that magnitude information alone suffices for this perception task. Furthermore, we investigate how polar radar tensors can be processed efficiently in convolutional neural networks. This leads to the development of a radar 3D object detection model based on graph neural networks, which improves detection performance by more than 8.9% compared to a model without graph convolutions. We show that ego-motion compensation, while more complex for range-beam-Doppler tensors than for point clouds, is essential for accurate object detection, yielding an 8.7% improvement in average precision. We also demonstrate that discrete autoencoders can effectively compress range-beam-Doppler data while retaining sufficient information for multi-sensor fusion tasks. Comparing radar tensor and point cloud approaches, we find that current point-cloud-based models achieve slightly superior detection performance. However, our research significantly narrows the performance gap between the two data representations and suggests that the range-beam-Doppler data level holds considerable promise for future improvements.
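The following is a hedged sketch of one possible formulation of ego-motion compensation on a range-beam-Doppler tensor, not the method used in the thesis: for each beam, the ego velocity projected onto the beam direction is converted into a Doppler-bin shift, and the Doppler axis of that beam is shifted accordingly. The function compensate_ego_motion, the field of view, the bin resolutions, and the sign convention are all illustrative assumptions.

```python
# Minimal sketch (assumed formulation, not the thesis code): per-beam Doppler-axis
# shift that aligns static scatterers with zero Doppler after ego motion.
import numpy as np

def compensate_ego_motion(rbd, beam_angles_rad, v_ego_mps, doppler_res_mps):
    """rbd: magnitude tensor of shape (range_bins, beams, doppler_bins)."""
    out = np.empty_like(rbd)
    for b, theta in enumerate(beam_angles_rad):
        # Radial velocity a static target appears to have due to ego motion
        # (sign convention depends on the sensor and coordinate frame).
        v_radial = -v_ego_mps * np.cos(theta)
        shift_bins = int(round(v_radial / doppler_res_mps))
        # Roll the Doppler axis of this beam; wrap-around at the edges is an
        # approximation of the true cyclic Doppler ambiguity handling.
        out[:, b, :] = np.roll(rbd[:, b, :], -shift_bins, axis=-1)
    return out

# Example with placeholder dimensions and resolutions.
rbd = np.random.rand(256, 64, 128)
angles = np.linspace(-np.pi / 4, np.pi / 4, 64)   # assumed beam directions
compensated = compensate_ego_motion(rbd, angles, v_ego_mps=10.0, doppler_res_mps=0.2)
```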