Virtual Driving School

Poster Number

1

Lead Author Affiliation

Engineering Science

Lead Author Status

Masters Student

Second Author Affiliation

Electrical Engineering

Third Author Affiliation

Engineering and Computer Science

Third Author Status

Faculty

Introduction/Abstract

Self-driving vehicles provide clear societal benefits, from reducing fatalities to automating the transport of goods. Automating these vehicles faces a number of challenges in identifying and responding to other vehicles, pedestrians, traffic signs, and other tasks that people handle almost effortlessly. Dealing with these challenges often requires significant data from the actual system operating in the real world: video of a person driving a car around a city, for example. Along with collecting the data, someone must label it, identifying the different objects within it (the people, stop signs, etc.). Researchers currently split their effort between addressing these challenges directly and collecting usable, labeled data.

Purpose

Our project aims to solve automated driving challenges without collecting data from a real system or hand-labeling that data. Our system collects training data in a simulation environment; this data directly reflects the inputs and outputs of the automated system. The input data consists only of camera images, and the output provides steering direction. This data trains a convolutional neural network to determine steering outputs from current camera inputs. Validation of the neural network occurs on a small RC car instrumented for full automation.

The system uses 'Grand Theft Auto V' (GTA V) as its simulation environment. Video games are ideal simulators because they provide low-cost, controllable virtual environments from which the information needed to train a neural network to drive can be collected. Modern video games often employ highly realistic graphics to create immersive experiences for players; the popular action title GTA V is one such game. With this training data, a convolutional neural network learns how to drive. Neural networks are computational systems inspired by the human brain, and they learn in a similar way. Convolutional neural networks extend the capabilities of a traditional neural network by running filters over the input image to help the network identify important patterns. In this scenario, the filters help the network extract information about the road and discard unimportant detail, such as the shape of clouds in the distance.

Finally, to ensure end-to-end operation, a small RC car provides the real-world implementation of the system. The key features the car needs for this project are reasonably continuous steering and throttle control and a platform large enough to mount a camera on. Imprecise steering control would result in erratic, weaving paths. Much like a full-size car, the number of driven wheels does not matter in most situations.
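
For illustration only, the following is a minimal sketch of a convolutional network of this kind, written in PyTorch. The layer sizes, input resolution, and output range are assumptions for the sketch and do not describe the project's actual architecture.

```python
# A hypothetical convolutional network that maps one camera frame to a
# single steering value. Layer sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional filters extract road features from the image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
        )
        # Fully connected layers regress a steering value from the features.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),
            nn.Linear(100, 1), nn.Tanh(),  # steering in [-1, 1] (assumed range)
        )

    def forward(self, x):
        return self.head(self.features(x))

# One 66x200 RGB frame in, one steering value out.
steering = SteeringNet()(torch.randn(1, 3, 66, 200))
```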

Method

The system combines four stages: collecting the training data, training the neural network, validating the network within the simulation environment, and verifying it on the small automated car. To create the training dataset, we recorded in-game video of researchers driving in GTA V. Each frame of the recording is paired with steering, gas, and brake measurements taken at the exact moment the frame was recorded. After recording is complete, the data is passed through a deep convolutional neural network, which learns to produce appropriate steering, gas, and brake control outputs by imitating the actions of the researcher in the recording (a minimal training sketch appears below).

To achieve road-following behavior on the RC car, we mounted a Raspberry Pi and a Raspberry Pi camera on the car. As the car drives, it continuously takes pictures and sends them back to a laptop. The laptop runs the convolutional neural network over each image sent from the car and generates an output steering angle.

To have the RC car respond to computer control, an additional control path was added to the RC system: a Wi-Fi-enabled microcontroller called a Photon was embedded within the RC car's remote control. The stock remote reads user input from potentiometers in the steering wheel and throttle lever, and the voltage output from those potentiometers is easily simulated using the microcontroller's onboard DACs (digital-to-analog converters). A toggle switch selects whether the remote receives its input from the standard potentiometers or from the microcontroller's DACs, and the DAC outputs are set over a TCP connection to the microcontroller (see the control-loop sketch below).
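
The imitation-learning step can be sketched as follows, assuming PyTorch. The random tensors stand in for the real GTA V recordings, and the tiny placeholder model stands in for the actual deep convolutional network; none of this is the project's real training code.

```python
# Hedged sketch: train a network to reproduce the recorded [steering,
# gas, brake] values for each frame (behavioral cloning). Shapes and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

frames = torch.randn(1024, 3, 66, 200)       # stand-in for recorded frames
controls = torch.rand(1024, 3) * 2 - 1       # stand-in [steering, gas, brake]
loader = DataLoader(TensorDataset(frames, controls), batch_size=32, shuffle=True)

model = nn.Sequential(                       # placeholder for the real deep CNN
    nn.Flatten(), nn.Linear(3 * 66 * 200, 3), nn.Tanh()
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)          # penalize deviation from the human
        loss.backward()
        optimizer.step()
```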

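The laptop-side control loop can be sketched as below. The ports, the frame framing (a 4-byte length prefix followed by JPEG bytes), the Photon's address, and the single-line ASCII command format are all assumptions; the source states only that images stream from the Pi and that the Photon's DACs are set over TCP. The prediction function is a hypothetical stand-in for the trained network.

```python
# Hedged sketch of the laptop side: receive frames from the Pi, run the
# network, forward a steering value to the Photon. Wire formats assumed.
import io
import socket
import struct
from PIL import Image

def recv_exact(conn, n):
    """Read exactly n bytes from a socket."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("camera stream closed")
        buf += chunk
    return buf

predict_steering = lambda image: 0.0         # stand-in for the trained CNN

# Accept the image stream from the Raspberry Pi (port assumed).
server = socket.socket()
server.bind(("0.0.0.0", 5000))
server.listen(1)
camera, _ = server.accept()

# Connect to the Photon inside the RC remote (address assumed).
photon = socket.socket()
photon.connect(("192.168.1.50", 6000))

while True:
    (size,) = struct.unpack(">I", recv_exact(camera, 4))   # assumed framing
    frame = Image.open(io.BytesIO(recv_exact(camera, size)))
    angle = predict_steering(frame)
    photon.sendall(f"{angle:.3f}\n".encode())              # assumed command format
```
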
Results

The evaluation of the neural network driver is a two-step process. The first step is to collect another set of driving recordings from within GTA V. The network is then evaluated on how closely it mimics the real driver on this new dataset; thus far the system has displayed an accuracy of approximately 86% at this task. Analysis of the network's ability to control our RC car on a test track is primarily qualitative, but the car has successfully completed several artificial tests. The system has demonstrated the ability to follow a straight road, execute right and left turns, and handle turns of varying sharpness.
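
The source does not specify how the 86% figure is computed. One plausible reading, sketched below, counts a frame as correct when the predicted steering falls within a small tolerance of the researcher's recorded steering; the tolerance value and the arrays are illustrative assumptions, not the project's actual evaluation code.

```python
# Hedged sketch of a within-tolerance steering agreement metric.
import numpy as np

predicted = np.random.uniform(-1, 1, 1000)   # stand-in for network outputs
recorded = np.random.uniform(-1, 1, 1000)    # stand-in for human steering
TOLERANCE = 0.05                             # assumed agreement threshold

accuracy = np.mean(np.abs(predicted - recorded) <= TOLERANCE)
print(f"steering agreement: {accuracy:.1%}")
```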

Significance

Self-driving cars require improved training processes to extend their utility. To provide a breadth of scenarios to the vehicle, and perhaps to have vehicles drive in ways people do not, we need ways to collect training data that minimize human processing of that data. Our system provides such a method and will increase the speed at which self-driving cars gain traction in our society.

Location

DUC Ballroom A&B

Format

Poster Presentation

Poster Session

Afternoon
