2019SpringTeam1

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Team Members

• Ken Shaffer - Computer Engineering

• Benjamin Chang - Electrical Engineering

• Zizhao Gong - Mechanical Engineering

• Chris Cunningham - Aerospace Engineering

Problem Overview

The goal of our project is to build a 1/10 scale autonomous car to showcase the capabilities of a self-driving car.

We use a small, cheap RC car to minimize testing costs. The sensors on the vehicle are a small camera and a LIDAR sensor. Our project goal was to build on the donkeycar autonomous RC car framework by training a deep learning model with LIDAR data, to see whether we could significantly increase the self-driving accuracy of our car.

Following this, our goal was to train our RC car to change road lanes to avoid downscaled roadway obstacles such as cones and water bottles. The ability to consistently avoid obstacles is useful in the real world, since an autonomous vehicle can react faster than a person.

IMG 2439.JPG

Objectives

The car will be able to drive autonomously, switch lanes to avoid obstacles, and stay within a lane when there are no obstacles.

Primary Objectives

Autonomous driving - Stay within road

Train a new model with LIDAR data in addition to camera data

Secondary Objectives

Train model to avoid obstacles

Train model to switch lanes to avoid obstacles

Mechanical Design

Baseplate The baseplate was made from quarter-inch-thick acrylic and cut on a laser cutter. It was cut with a relatively short width and length to give it a thin, sleek aesthetic. A slot was cut out of the middle to route the connections from the motors and battery on the chassis to the electrical components on top of the baseplate.

Baseplate

First Camera Mount Design The initial camera mount was designed to be robust and rigid while remaining easy to adjust. It was given two degrees of freedom: it can rotate about the base mount and slide radially along the camera mount.

First Camera Mount


RPLIDAR and Camera Mount This mount was designed to hold both the camera and the RPLIDAR. Although this design is not nearly as rigid and robust as the initial camera mount, its flexibility allows more options for the position and angle of the camera and RPLIDAR. The design was kept simple so that it could be rapidly produced and reproduced if necessary. All holes were designed with some clearance to allow an M3 screw to pass through.

Camera and RPLIDAR Mount Camera and RPLIDAR Mount

Electrical Design

Components

• Raspberry Pi

• PICAM

• PCA9685 PWM

• Wireless Relay

• ESC

• LiPo Battery

• RGB LED

• Servo

• Motor

• Switches

• Step-down module

• RPLidar A1 (Final Project)

Breadboard Schematic

ECE 148 - Breadboard Schematic.png

(Note - the Lidar was the only component added for our final project; it was connected to the Raspberry Pi through a USB cable.)

Software

We used the default Raspberry Pi setup as outlined in the course Google Doc: a Raspbian OS with various packages (TensorFlow) and frameworks (Donkey Car) installed.


All of the other software changes we made during our project are outlined below.


Most of the software was already available to us because the RPLidar exists as a part in lidar.py. However, small tweaks had to be made in five files: datastore.py, lidar.py, keras.py, manage.py, and train.py.


datastore.py, lidar.py, and keras.py are in the donkeycar parts folder

manage.py and train.py are in the d2t folder

These changes allowed us to get the data we wanted from the lidar and store it in a tub to train on, letting us create a driving model that avoided obstacles.



In datastore.py: function put_record()

elif typ == 'nparray':                         # if array type
    json_data[key] = (np.array(val)).tolist()  # store the input as an array into tub
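
For context, here is a sketch of the type dispatch this branch slots into; the surrounding method body is approximated from the donkeycar Tub class, not copied from it:

import numpy as np

def put_record(self, data):
    """Write one record, serializing each value by its declared type."""
    json_data = {}
    for key, val in data.items():
        typ = self.get_input_type(key)
        if typ in ['str', 'float', 'int']:
            json_data[key] = val
        elif typ == 'image_array':
            ...                                        # images are saved to disk as .jpg files
        elif typ == 'nparray':                         # <-- the branch we added
            json_data[key] = (np.array(val)).tolist()  # store the array as a list in the tub's json
        else:
            raise TypeError('Tub does not know what to do with type %s' % typ)
    self.write_json_record(json_data)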



In lidar.py: function run_threaded() for RPLidar

We took the angle array and distance array output by the lidar, then filtered them in Python. The angle and distance arrays correspond element by element: angle[i] pairs with distance[i].

lowerang = 44
higherang = 136

angs = np.copy(self.angles)
dists = np.copy(self.distances)

filter_angs = angs[(angs > lowerang) & (angs < higherang)]
filter_dist = dists[(angs > lowerang) & (angs < higherang)] # keep only distances whose angle is in range

angles_ind = filter_angs.argsort()          # returns the indexes that sort filter_angs
sorted_distances = filter_dist[angles_ind]  # reorders distances by ascending angle
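
Because the scan size varies (see Useful Knowledge and Challenges below), the network also needs a fixed-size distance array. A minimal sketch of that follow-on step, assuming the 48-value window described in the Challenges section (N_DISTS and the 6000 mm padding value are our placeholder choices):

import numpy as np

N_DISTS = 48  # fixed input size the Keras model expects

# keep the middle N_DISTS readings of the sorted front window
# (roughly the ~52 degrees directly in front of the car)
mid = len(sorted_distances) // 2
dist_array = sorted_distances[max(0, mid - N_DISTS // 2):mid + N_DISTS // 2]

# short scans happen when objects are very close; pad with a
# far-away reading so the output shape is always the same
if len(dist_array) < N_DISTS:
    dist_array = np.pad(dist_array, (0, N_DISTS - len(dist_array)),
                        mode='constant', constant_values=6000.0)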



In keras.py: Any keras model

(Note - the donkey framework lets one create and call a specific model from the config file by typing options like 'categorical' or 'linear'; however, that requires some setup code, so we did not do it given the time needed to dig through the framework. Instead, we changed the default model (Categorical) so that the default training run uses our lidar model.)

Example of what was done

img = Input(shape=image_shape, name='img_in')
x = Cropping2D( ...) (img)
x = Convolution2D( ... )(x)
x = ...
.
.                                # Default model for images until flattened
.

lidar_data = Input(shape=lidar_shape, name='dists_in')
y = LocallyConnected1D( ...)(lidar_data)  # local convolution to maintain and emphasize
y = Dropout(drop)(y)                      # the spatial nature of the data
y = LocallyConnected1D( ...)(y)
y = Dropout(drop)(y)
y = LocallyConnected1D( ...)(y)
y = Flatten(...)(y)
y = Dense(...)(y)                         
y = Dropout(drop)(y)
y = Dense(...)(y)

z = concatenate([x,y])     # here is where the two branches merge
z = Dense(...)(z)
z = Dropout(...)(z)
z = Dense(...)(z)
z = Dropout(...)(z)

angle_out = Dense(...)(z)
throttle_out = Dense(...)(z)
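
For reference, here is a self-contained sketch of the two-branch model described above; all layer sizes and compile settings are plausible placeholders in the style of the stock KerasCategorical model, not the exact values we trained with:

from tensorflow.keras.layers import (Input, Dense, Dropout, Flatten,
                                     Convolution2D, Cropping2D,
                                     LocallyConnected1D, concatenate)
from tensorflow.keras.models import Model

def lidar_categorical_model(image_shape=(120, 160, 3), lidar_shape=(48, 1),
                            drop=0.1):
    # image branch: the stock categorical CNN up to its dense layers
    img = Input(shape=image_shape, name='img_in')
    x = Cropping2D(cropping=((40, 0), (0, 0)))(img)  # crop off the sky
    x = Convolution2D(24, (5, 5), strides=(2, 2), activation='relu')(x)
    x = Convolution2D(32, (5, 5), strides=(2, 2), activation='relu')(x)
    x = Convolution2D(64, (3, 3), strides=(2, 2), activation='relu')(x)
    x = Flatten()(x)
    x = Dense(100, activation='relu')(x)
    x = Dropout(drop)(x)

    # lidar branch: local convolutions keep the scan's spatial ordering
    lidar_data = Input(shape=lidar_shape, name='dists_in')
    y = LocallyConnected1D(8, 3, activation='relu')(lidar_data)
    y = Dropout(drop)(y)
    y = LocallyConnected1D(8, 3, activation='relu')(y)
    y = Dropout(drop)(y)
    y = Flatten()(y)
    y = Dense(32, activation='relu')(y)
    y = Dropout(drop)(y)

    # merge the two branches and predict steering and throttle
    z = concatenate([x, y])
    z = Dense(50, activation='relu')(z)
    z = Dropout(drop)(z)
    angle_out = Dense(15, activation='softmax', name='angle_out')(z)    # binned steering
    throttle_out = Dense(1, activation='relu', name='throttle_out')(z)  # scalar throttle

    model = Model(inputs=[img, lidar_data], outputs=[angle_out, throttle_out])
    model.compile(optimizer='adam',
                  loss={'angle_out': 'categorical_crossentropy',
                        'throttle_out': 'mean_absolute_error'},
                  loss_weights={'angle_out': 0.9, 'throttle_out': 0.001})
    return model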

Also, two lines of code were added to be able to run the trained model.

The run function was adjusted to take two inputs, img_arr and dists:

 
    def run(self, img_arr, dists):

The predict call was made to take in both inputs:

    angle_binned, throttle = self.model.predict([img_arr, dists])
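
Putting the two changes together, the adjusted run() ends up approximately like this (the reshapes add a batch dimension; the steering unbinning is what the stock KerasCategorical part already does):

import donkeycar as dk

def run(self, img_arr, dists):
    # add a batch dimension to each input before predicting
    img_arr = img_arr.reshape((1,) + img_arr.shape)
    dists = dists.reshape((1,) + dists.shape)
    angle_binned, throttle = self.model.predict([img_arr, dists])
    # convert the binned steering output back to a float in [-1, 1]
    angle_unbinned = dk.utils.linear_unbin(angle_binned)
    return angle_unbinned, throttle[0][0]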



In manage.py: function drive()

Snippets of code were added to interface the lidar data with various parts, such as the tub and the Keras model.


The RPLidar part was imported and added to the vehicle:

from donkeycar.parts.lidar import RPLidar
V.add(RPLidar(), inputs=[], outputs=["lidar/dist_array"], threaded=True)

The lidar data was added as an input to the keras model

(Note - you have to search through manage.py, since the default manage.py has if/else statements that decide which inputs go into the model.)

    inputs = ['cam/image_array', 'lidar/dist_array']

The tub part's inputs and types were edited to include the lidar data

(Note - again, you have to search through manage.py for where the tub part is created and added to the vehicle.)

    inputs = ['cam/image_array', 'lidar/dist_array',     # <-- this string was added
              'user/angle', 'user/throttle', 'user/mode']

    types = ['image_array', 'nparray',  # <-- this string was added 
             'float', 'float', 'str']   # nparray type was accounted for in datastore.py
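
Taken together, the relevant drive() wiring looks approximately like the sketch below; V, kl, and cfg (the vehicle, the Keras part, and the config) already exist in the stock manage.py, and the exact surrounding code differs between template versions.

from donkeycar.parts.lidar import RPLidar
from donkeycar.parts.datastore import TubHandler

# the lidar part publishes a filtered distance array each loop
V.add(RPLidar(), inputs=[], outputs=['lidar/dist_array'], threaded=True)

# the keras part now consumes the camera image and the lidar scan
V.add(kl, inputs=['cam/image_array', 'lidar/dist_array'],
      outputs=['pilot/angle', 'pilot/throttle'],
      run_condition='run_pilot')

# the tub records the lidar scan alongside the usual fields
inputs = ['cam/image_array', 'lidar/dist_array',
          'user/angle', 'user/throttle', 'user/mode']
types = ['image_array', 'nparray', 'float', 'float', 'str']
th = TubHandler(path=cfg.DATA_PATH)
tub = th.new_tub_writer(inputs=inputs, types=types)
V.add(tub, inputs=inputs, run_condition='recording')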



In train.py:


function collate_records(...)

This function reads the stored tub data, so more interfacing needs to be done here to read and use the lidar data:

for record_path in records:
    ...
    angle = float(json_data['user/angle'])
    throttle = float(json_data['user/throttle'])
    dists = np.array(json_data['lidar/dist_array']) # <-- this was added to
    ...                                             # retrieve the lidar data
    .                                               # that was stored
    .
    sample['angle'] = angle
    sample['throttle'] = throttle
    sample['dists'] = dists.reshape(some_shape)     # <-- this was added
    ### Make sure that dists has the right shape, or an error will be raised
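
With the 48-reading window produced in lidar.py and the (48, 1) lidar input shape assumed in the model sketch above, the reshape would be:

    sample['dists'] = dists.reshape((48, 1))  # must match the model's lidar Input shape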
                                                  

function generator() in train(...)

While looping, the generator must build an array that stores a batch of lidar distance data, which is then provided to the model as part of the input batch X along with the images:

    ...
    dists = []                          # empty distance batch
    for record in batch_data:           # record = sample in collate_records
        ...
        dists.append(record['dists'])   # creates a batch of distances in dists
        ...
    if ...
    elif ...
    else:
        X = [img_arr, np.array(dists)]  # added the np.array(dists) as input
    ...
    yield X,y                           # yields batches X,y to model                         


Overall, the software implementation was done through trial and error, along with a lot of code reading to understand exactly what needed to be done and how the donkey framework connects and works together. The process we took with the software is outlined below.

Needed data
    so we retrieved the needed Lidar data in lidar.py
Needed to add the Lidar to the vehicle
    so we interfaced the Lidar data to the vehicle in manage.py
Had 2 inputs of different shapes in a Keras model
    so we Googled and looked through other models in keras.py as templates
        for how to create a multi-input Keras model
Encountered errors when interfacing
    so we reduced the number of interface changes made at once in manage.py
    and looked into how exactly the tub was stored (datastore.py)
Got the data stored, but the vehicle couldn't train
    so we looked into train.py to figure out how to incorporate the lidar tub data
    and looked for the generic model that was being run (KerasCategorical)
Created the model, but more errors were encountered
    so the code was debugged and various models were tested and trained to see
        how the Keras model should be structured
Finally, training of obstacle avoidance was done.



During these steps, we became accustomed to how the lidar gets its data and brainstormed what data we should use. Since the angles arrive in a sorted array ranging from ~0 to ~361, we decided the angles themselves would not provide valuable data to the Keras neural network. The rest of the steps were just the implementation of our idea through interfacing the lidar with the vehicle and the various parts of the donkey framework. Since getting the model to train and work took a while, we got the tub writing for the lidar data working first. This let us start training the car early in a variety of obstacle-avoiding situations, such as swerving away from an obstacle when the car was about to crash into it. The initial models didn't work and had a high training loss compared to our regular and final lidar models.

Useful Knowledge

This is information that gave us some trouble and would have been nice to know at the beginning. Most of it deals with the Lidar.


Lidar orientation.png

1.) Information about where exactly 0 degrees is on the lidar was hard to come by; as the image above shows, the triangular portion of the lidar is where we found it to output 0 degrees.

2.) Lidar angle data is not exact integers but floats ranging from 0 to ~361

3.) The Lidar array data does not always start at an angle of 0.XX; it could start at, for example, 359.XX (see the sketch after this list)

4.) The Lidar array data is not always the same size (it can be 333, 331, 330, ...). Also, depending on how close an object is to the lidar, the lidar may reduce its output to sizes like 47, 17, or even 0 when the object is too close.

5.) Training the Lidar model on the GPU cluster is either not currently possible or would require some coding or a workaround, since there is no donkeycar/parts/keras.py on the cluster that you can edit to switch in the new model. (Maybe train 1 epoch on the new model, put it on the GPU cluster, and then transfer the model when training on the cluster.) For our project, all the models were trained overnight or for a couple of hours on our personal computers.
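
As an example of handling points 2 and 3, here is a hypothetical cleanup step inside run_threaded() in lidar.py, applied before the angle-range filter shown earlier:

import numpy as np

# wrap any readings above 360 back into [0, 360) so the
# 44-136 degree window test works regardless of where the
# scan happened to start
angs = np.mod(np.copy(self.angles), 360.0)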

Results

Overall, we achieved the goals we were aiming for. The original goals of completing laps on both the outside and inside tracks were met. Beyond that, we achieved our proposed project of autonomous driving that avoids obstacles, which can be seen in multiple videos below with different configurations of obstacles.



Indoor Autonomous Driving

This video shows the car completing 6 Autonomous Indoor Laps. This was done using the "vanilla" Donkey Car training method, using data received only from the camera.

Outdoor Autonomous Driving

This video shows the car completing 7 Autonomous Outdoor Laps. This was done using the "vanilla" Donkey Car training method, using data received only from the camera.

Obstacle Avoidance

The following videos show the car's ability to avoid obstacles on the indoor race track. This model was trained with the Modified Donkey Training Program, so that it would use data from the camera and RPLIDAR to determine its driving course.

Multiple Turns in Succession

Tight Turns

Recovery From Turning Too Early

Another Different Obstacle Arrangement

Misc Obstacle Arrangement

File:Misc avoidance.mp4

Challenges

1. The default RPLidar part implementation outputs angle and distance arrays of varying sizes (333, 331, 330, etc.). Objects too close to the Lidar reduced the amount of output, causing problems such as crashes.

 Sol: Edited the Lidar part to filter the scan down to the front ~70 degrees of the car, and took only the middle 48 values from that filtered array.
 (This represents ~52 degrees in front of the car.)

2. The Tub writer wasn't able to store arrays of values.

 Sol: A new option was added to the Tub writer to store an array of values (the 'nparray' type above).

3. The Keras neural network did not accept or use the Lidar data.

 Sol: The Keras model was implemented as two branches: the default image model plus a new Lidar branch that merges into it.

4. Large wire connections to parts such as the motor were unreliable.

 Sol: Used heat shrink to keep the wire connections reliable.

5. The LiPo battery often nearly died or lost connection to the cells within the battery.

6. The trained model on the car would occasionally cut its throttle.

 Temp Sol: A slight nudge would get the car started, or the throttle in the config.py file was manually increased a bit to get the car running.

Future Work

There are many more aspects of autonomous vehicles that can be worked on in the future. For our project, some future work we would like to do to improve our vehicle includes using a higher-quality camera for better training images. We could also train on grayscale images so that our driving model focuses less on the type of obstacle detected and more on when an obstacle actually appears in the vehicle's view.

We would also like to train our vehicle to avoid a greater variety of obstacles, and possibly try stereo cameras for depth estimation, since stereo cameras have higher resolution and are cheaper than LIDAR for commercial use.

It would also be interesting to integrate recognition of specific objects. This could enable the car to better determine how to avoid specific obstacles and what to do in special-case scenarios, such as dealing with pedestrians or other vehicles, or distinguishing static from dynamic objects.

Lastly, it would be interesting to test histogram equalization on the camera images to see whether it helps the car drive better in different lighting conditions.

References

• DonkeyCar
https://github.com/tawnkramer/donkey