Adversarial Sensor Attacks on LiDAR-based Cooperative Perception in Autonomous Driving Environments

Project Objective

Cooperative perception in autonomous driving fuses LiDAR-based perception results shared by multiple connected vehicles to improve detection accuracy. However, such a system can be compromised by falsified perception results injected by an attacker. To address this issue, this work studies the security of LiDAR-based cooperative perception in autonomous driving. We will design methods to generate adversarial samples that fool the cooperative perception system into producing wrong predictions. We will also propose defense strategies and conduct simulations to evaluate both the attack methods and the defenses.
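To make the threat model concrete, the sketch below illustrates one simple instance of the attack and defense described above: an attacker participating in late fusion injects a fake detection, and a cross-vehicle consistency check (majority voting over corroborated detections) filters it out. All function names, the voting threshold, and the distance tolerance are illustrative assumptions, not the project's actual method.

```python
# Hypothetical sketch (assumed names/thresholds, not the project's method):
# a falsified-detection attack on naive late-fusion cooperative perception,
# and a simple cross-vehicle consistency (voting) defense.

def count_votes(candidate, all_detections, tol=1.0):
    """Number of vehicles reporting a detection within `tol` meters of candidate."""
    return sum(
        any(all(abs(a - b) <= tol for a, b in zip(candidate, d)) for d in dets)
        for dets in all_detections
    )

def fuse(all_detections, min_votes=1, tol=1.0):
    """Late fusion: keep detections corroborated by >= min_votes vehicles.
    min_votes=1 is naive union fusion; min_votes>=2 is the consistency defense."""
    fused = []
    for dets in all_detections:
        for d in dets:
            if count_votes(d, all_detections, tol) >= min_votes and not any(
                all(abs(a - b) <= tol for a, b in zip(d, f)) for f in fused
            ):
                fused.append(d)
    return fused

# Two benign vehicles both observe an object near (10, 5); the attacker
# (vehicle 3) additionally injects a spoofed obstacle at (30, 0).
benign_a = [(10.0, 5.0)]
benign_b = [(10.2, 4.9)]
attacker = [(10.1, 5.0), (30.0, 0.0)]  # one real detection plus one fake
scene = [benign_a, benign_b, attacker]

print(len(fuse(scene, min_votes=1)))  # naive fusion accepts the fake: 2 objects
print(len(fuse(scene, min_votes=2)))  # consistency check rejects it: 1 object
```

This toy example uses 2-D object centroids rather than raw point clouds; the project's actual evaluation would operate on LiDAR data and learned detectors, where the attacker optimizes perturbations rather than inserting a single fabricated centroid.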

Development Tools Tutorials

Machine Learning Models for Autonomous Vehicles

Reading Material

Week 1 Activities

Get an ORBIT/COSMOS account and familiarize yourself with the testbed procedures

Last modified on Jun 2, 2020, 9:47:09 PM