Improving Resiliency in ML-based Traffic Sign Recognition against Adversarial Attacks for Autonomous Vehicles

Poster Number

8B

Lead Author Affiliation

Business Analytics

Lead Author Status

Masters Student

Second Author Affiliation

Computer Science

Second Author Status

Undergraduate - Junior

Third Author Affiliation

Computer Science

Third Author Status

Faculty

Fourth Author Affiliation

Computer Science

Fourth Author Status

Faculty

Research or Creativity Area

Engineering & Computer Science

Abstract

Autonomous Vehicles (AVs) have emerged as one of the most revolutionary developments in vehicular technology. These smart vehicles significantly improve road safety, accessibility, and environmentally friendly driving practices through advanced sensing capabilities. Unfortunately, this technology can be compromised through adversarial attacks that target various AV applications, such as traffic sign recognition. In this paper, we trace the evolution of AVs across generations. We then survey contemporary adversarial attacks and defenses that have been demonstrated against AV applications such as traffic sign recognition. Finally, we introduce open research areas and opportunities that can be investigated to improve the resiliency of ML-based traffic sign recognition in AVs against adversarial attacks.

Purpose

One of the most captivating interdisciplinary developments of the last decade is the emergence of autonomous vehicles. These vehicles possess advanced sensing capabilities and are designed to significantly improve road safety, enhance accessibility, and promote environmentally friendly driving by leveraging machine learning. Unfortunately, machine learning-enabled applications in autonomous vehicles face safety concerns, as adversarial attacks can compromise these systems. Such attacks deliberately and strategically craft inputs that deceive a machine learning model into making incorrect predictions, putting passengers, drivers, and the public at risk. In this research, we survey the evolution of autonomous vehicles across multiple generations and examine contemporary adversarial attacks on autonomous vehicles that have been studied in the literature. We then introduce open research areas that can be investigated to improve the resiliency of machine learning-enabled applications in vehicles against adversarial attacks. The main purpose of this work is to spark discussion within the research community and introduce potential solutions that protect machine learning frameworks from being exploited by adversarial attacks.
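To make the attack model concrete, the kind of deliberate, strategic input manipulation described above can be illustrated with a minimal Fast Gradient Sign Method (FGSM)-style sketch. This is an assumption-laden toy, not the poster's method: real traffic sign recognizers are deep networks, but the same principle of nudging an input along the loss gradient to flip a prediction applies. The model weights, input, and epsilon below are all hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    # Probability that input x belongs to class 1 under a toy logistic model.
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y_true, eps):
    # For a logistic model, the gradient of the cross-entropy loss with
    # respect to the input x is (p - y_true) * w. FGSM moves the input a
    # small step eps in the sign direction of that gradient, maximizing loss.
    p = predict(w, b, x)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Hypothetical fixed model and a correctly classified clean input.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])

x_adv = fgsm_perturb(w, b, x, y_true=1.0, eps=0.3)

print(predict(w, b, x) > 0.5)      # True: clean input classified as class 1
print(predict(w, b, x_adv) > 0.5)  # False: imperceptibly perturbed input flips
```

The perturbation budget `eps` controls how subtle the change is; in the traffic sign setting, the analogous physical-world perturbation is a sticker or patch a human driver would ignore but the classifier would not.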

Results

The main result of this research is the identification of open research areas that security researchers can investigate to protect machine learning-enabled applications in autonomous vehicles from adversarial attacks. We identify two types of open research solutions: on-device and off-device. On-device solutions are mechanisms deployed onboard the autonomous vehicle that prioritize making the machine learning framework resilient to perturbations; examples include improving perception safety by integrating mapping software and increasing resiliency through reinforcement learning. In contrast, off-device solutions are mechanisms deployed outside the autonomous vehicle that prioritize making the learning environment resilient to inconsistencies; examples include developing resilient roadway infrastructure and using cross-validation to ensure correct predictions.
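The off-device cross-validation idea above can be sketched as a simple consensus check: rather than trusting a single, possibly attacked, onboard model, the vehicle cross-checks predictions from redundant independent sources (e.g., the onboard classifier, a roadside unit, a map database) and defers when they disagree. This is a hedged illustration of the concept, not an implementation from the poster; the source names and threshold are assumptions.

```python
from collections import Counter

def cross_validate_prediction(predictions, min_agreement=2):
    """Return the consensus label among independent prediction sources,
    or None when no label reaches the agreement threshold (fail safe)."""
    label, count = Counter(predictions).most_common(1)[0]
    if count >= min_agreement:
        return label   # consensus reached; safe to act on
    return None        # sources disagree; defer to a fallback behavior

# Hypothetical labels from onboard model, roadside unit, and map database.
print(cross_validate_prediction(["stop", "stop", "speed_30"]))   # stop
print(cross_validate_prediction(["stop", "yield", "speed_30"]))  # None
```

A `None` result would trigger a conservative fallback (e.g., slow down and re-sample), which is what makes the redundancy a resilience mechanism rather than just an ensemble.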

Significance

Adversarial attacks on machine learning-enabled systems in autonomous vehicles are currently a pressing cybersecurity challenge. However, despite being a notable research problem, no standard defensive solution exists to protect these systems. The significance of this research is to introduce prospective defensive solutions that can be investigated. In this way, we can spark discussion within the research community and develop solutions to protect machine learning frameworks from being exploited by adversarial attacks in the context of autonomous vehicles.

Location

Don and Karen DeRosa University Center (DUC) Poster Hall

Start Date

27-4-2024 10:30 AM

End Date

27-4-2024 12:30 PM
