2021SummerTeam3

From MAE/ECE 148 - Introduction to Autonomous Vehicles
Jump to: navigation, search

Team Members

148Team3Summer2021.jpg

From Left to Right:
Naiwen Shi: Electrical Engineering
Sean Park: Aerospace Engineering
Jeffrey Dungan: Electrical Engineering

Mechanical Design

Chassis Mount
The chassis mount was designed with 3 mm diameter slots across the entire piece. This was to allow for flexibility during prototyping so that components could be easily screwed in and moved around to their desired positions. We also designed a square hole for the receiver antenna to pass through. The chassis is mounted via four 3 mm screw standoffs attached to the car.

ChassisMount.PNG

Camera Mount
The camera mount is designed with two mounting holes to allow for the camera to be pitched at different angles. The mount itself is screwed into the chassis via four screws.

CameraMountFront.PNG CameraMountSide.PNG

Jetson Nano Case
The Jetson Nano case was downloaded from Thingiverse and can be found here. The screw mounts on the bottom part (picture on the right) were added by us in CAD.

NanoCaseTop.PNG NanoCaseBottom.PNG

Electrical Design

List of Components
1. Jetson Nano 2GB

2. Battery

3. USB Camera

4. Servo

5. DC Motor

6. LED

7. ON/OFF Switch

8. DC to DC Converter

9. PWM Board

10. Electronic Speed Controller

11. Relay Module

Wiring Diagram

Team3WiringDiagram.PNG

DonkeyCar Autonomous Laps

Footage of our 3 autonomous laps can be found here. The model was trained on 20 laps.

Donkey Car2.png

Some Challenges we faced:

1. The car may turn less than expected
As the battery drained, the car's effective steering output decreased, causing it to under-steer and run off the track. We dealt with this by increasing the steering value so the car turned more sharply; once the battery is recharged, the steering range should be restored to its original value.
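The workaround above can be sketched as a simple compensation step applied to the steering command. This is only an illustration: the `compensate_steering` helper, the gain of 1.25, and the [-1, 1] steering range are hypothetical values, not the ones we actually calibrated on our car.

```python
def compensate_steering(steering, low_battery, gain=1.25):
    """Boost the steering command when the battery is low, then clamp it.

    The gain is a made-up example value; the real factor would have to be
    tuned against the car's actual low-battery behavior.
    """
    if low_battery:
        steering *= gain  # turn more to offset the weaker servo response
    # Keep the command within the valid [-1, 1] steering range
    return max(-1.0, min(1.0, steering))
```

When the battery is restored, `low_battery` is set back to False and the original steering range applies again, matching the fix described above.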

2. Lighting conditions (Donkey Car training only)
Because our track is outdoors and we collect training data from images taken by the car, lighting conditions must be considered. A model trained only on daytime images would probably fail to follow the track in the evening; likewise, a model trained only on evening images could run off track because daytime shadows were never seen during training. The solution is to collect data under both bright and dark conditions, so the Donkey Car performs well in either.

OpenCV Autonomous Laps

Ros-logo.jpeg

Footage of our robot completing 3 autonomous laps using OpenCV lane detection.

Some Challenges we faced:

OpenCV Logo.png

1. It takes time to make the car focus on the color we need
Our OpenCV approach has the car follow the yellow dotted line in the middle of the road, so the camera pipeline must isolate the yellow color and follow only the yellow lines. Getting this right requires tuning the hue, saturation, and value thresholds along with the gray_thresh parameter, so our team had to test the effect of each of these factors, which took time.

2. Proper steering and throttle values
Since the car steers based on the upcoming yellow dashes it detects, it helps to slow the car down; if it moves too fast, it does not have time to react promptly. Also, whenever the yellow dotted line is not centered in the frame, the detected error steers the car back onto the track. However, if even the smallest error produces a big turn, the car weaves in an "s" shape instead of going straight. To solve this, we decreased the steering value and increased the throttle value so the car is not overly sensitive to every detected error.
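The trade-off above amounts to tuning a proportional gain: a smaller gain maps the same centroid error to a gentler turn. A toy sketch (the gains and error values are made-up numbers, not our calibrated parameters):

```python
def steering_command(error_x, kp):
    """Proportional steering from lane-centroid error, clamped to [-1, 1]."""
    return max(-1.0, min(1.0, kp * error_x))

# The same detected error produces a gentler correction with a smaller gain,
# which is what stopped the car from weaving in an "s" shape.
aggressive = steering_command(0.4, kp=3.0)  # saturates into a hard turn
gentle = steering_command(0.4, kp=1.0)      # mild correction
```

This is the same proportional-control structure used in the lane guidance code below, where the gain is the steering_sensitivity parameter.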

Final Project: Stop Sign Detection

Footage of our robot reacting to a stop sign while running OpenCV lane detection.

The car's camera view while detecting objects

Our final project used our TA Dominic's Docker image and YoloV5 to have the car stop autonomously when it detects a stop sign. The Docker image contains code to autonomously control the car through ROS and OpenCV. YoloV5 is a real-time object detection framework with models already trained to detect various objects, including stop signs. We edited Dominic's preexisting lane guidance code to subscribe to a separate YoloV5 Python script that returns a '1' if a stop sign is detected and a '0' if there are no stop signs. When a stop sign is detected, the throttle is set to zero, stopping the car. Footage of our car stopping can be seen here. The Jetson Nano does not have the power to run Yolo's object detection efficiently, so there is a bit of lag before the car comes to a complete stop; a more powerful single-board computer should be used in future tests.

YoloV5 Object Detection Code

import rospy
import cv2
import numpy as np
from std_msgs.msg import Int32
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
from elements.yolo import OBJ_DETECTION

# Names for the ROS node and topics
STOPSIGN_NODE_NAME = 'stopsign_node'
STOPSIGN_TOPIC_NAME = 'StopSign'
CAMERA_TOPIC_NAME = 'camera_rgb'

# The 80 COCO object classes the pretrained YoloV5 model can detect
Object_classes = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
                  'train', 'truck', 'boat', 'traffic light', 'fire hydrant',
                  'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog',
                  'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra',
                  'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
                  'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
                  'kite', 'baseball bat', 'baseball glove', 'skateboard',
                  'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
                  'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
                  'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog',
                  'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant',
                  'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse',
                  'remote', 'keyboard', 'cell phone', 'microwave', 'oven',
                  'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase',
                  'scissors', 'teddy bear', 'hair drier', 'toothbrush']

# One random display color per class, plus the pretrained detector
Object_colors = list(np.random.rand(80, 3) * 255)
Object_detector = OBJ_DETECTION('weights/yolov5s.pt', Object_classes)


class StopSignDetection:
    def __init__(self):
        # Initialize the node, publish stop-sign flags, subscribe to the camera
        rospy.init_node(STOPSIGN_NODE_NAME, anonymous=False)
        self.StopSign_publisher = rospy.Publisher(STOPSIGN_TOPIC_NAME, Int32, queue_size=1)
        self.camera_subscriber = rospy.Subscriber(CAMERA_TOPIC_NAME, Image, self.detect_stop)
        self.bridge = CvBridge()
        self.stopsign = Int32()

    def detect_stop(self, data):
        # Get the frame from the camera feed and run YoloV5 on it
        frame = self.bridge.imgmsg_to_cv2(data)
        objs = Object_detector.detect(frame)

        # Draw each detection and note whether any is a stop sign
        stop_detected = False
        for obj in objs:
            label = obj['label']
            score = obj['score']
            [(xmin, ymin), (xmax, ymax)] = obj['bbox']
            color = Object_colors[Object_classes.index(label)]
            frame = cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), color, 2)
            frame = cv2.putText(frame, f'{label} ({score})', (xmin, ymin),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.75, color, 1,
                                cv2.LINE_AA)
            if label == 'stop sign' and score > 0.1:
                stop_detected = True

        # Show the annotated frame in a window
        cv2.imshow('stopsign', frame)
        cv2.waitKey(1)

        # Publish 1 if a stop sign was detected in this frame, else 0
        self.stopsign.data = 1 if stop_detected else 0
        self.StopSign_publisher.publish(self.stopsign)


def main():
    StopSign_detector = StopSignDetection()
    rospy.spin()  # spin() blocks until shutdown, so no rate loop is needed


if __name__ == '__main__':
    main()


Edited Lane Guidance Code

#!/usr/bin/python3
import rospy
from std_msgs.msg import Float32, Int32, Int32MultiArray

LANE_GUIDANCE_NODE_NAME = 'lane_guidance_node'
STEERING_TOPIC_NAME = '/steering'
THROTTLE_TOPIC_NAME = '/throttle'
CENTROID_TOPIC_NAME = '/centroid'
STOPSIGN_TOPIC_NAME = 'StopSign'

class PathPlanner:
    def __init__(self):
        # Initialize node and create publishers/subscribers
        self.init_node = rospy.init_node(LANE_GUIDANCE_NODE_NAME, anonymous=False)
        self.steering_publisher = rospy.Publisher(STEERING_TOPIC_NAME, Float32, queue_size=1)
        self.throttle_publisher = rospy.Publisher(THROTTLE_TOPIC_NAME, Float32, queue_size=1)
        self.steering_float = Float32()
        self.throttle_float = Float32()
        self.stopsign = 0  # assume no stop sign until the detector publishes
        self.centroid_subscriber = rospy.Subscriber(CENTROID_TOPIC_NAME, Float32, self.controller)
        self.stopsign_subscriber = rospy.Subscriber(STOPSIGN_TOPIC_NAME,Int32,self.stopsign_detection_callback) # subscribe to the YoloV5 stopsign detection code
        # Getting ROS parameters set from calibration Node
        self.steering_sensitivity = rospy.get_param('steering_sensitivity')
        self.no_error_throttle = rospy.get_param('no_error_throttle')
        self.error_throttle = rospy.get_param('error_throttle')
        self.error_threshold = rospy.get_param('error_threshold')
        self.zero_throttle = rospy.get_param('zero_throttle')

        # Display parameters
        rospy.loginfo(
            f'\nsteering_sensitivity: {self.steering_sensitivity}'
            f'\nno_error_throttle: {self.no_error_throttle}'
            f'\nerror_throttle: {self.error_throttle}'
            f'\nerror_threshold: {self.error_threshold}'
            f'\nzero_throttle: {self.zero_throttle}')

    # Callback that stores the latest stop-sign flag from the YoloV5 node
    def stopsign_detection_callback(self, data):
        self.stopsign = int(data.data)

    def controller(self, data):
        kp = self.steering_sensitivity
        error_x = data.data
        rospy.loginfo(f"{error_x}")
        # If a stop sign was detected, set the throttle to zero; otherwise drive normally
        if self.stopsign == 1:
            throttle_float = self.zero_throttle
        elif error_x <= self.error_threshold:
            throttle_float = self.no_error_throttle
        else:
            throttle_float = self.error_throttle
        # Proportional steering, clamped to the valid [-1.0, 1.0] range
        steering_float = float(kp * error_x)
        if steering_float < -1.0:
            steering_float = -1.0
        elif steering_float > 1.0:
            steering_float = 1.0
        self.steering_float.data = steering_float
        self.throttle_float.data = throttle_float
        self.steering_publisher.publish(self.steering_float)
        self.throttle_publisher.publish(self.throttle_float)

    def clean_shutdown(self):
        print("Shutting Down...")
        self.throttle_float.data = self.zero_throttle
        self.throttle_publisher.publish(self.throttle_float)
        print("Shut down complete")


def main():
    path_planner = PathPlanner()
    rospy.on_shutdown(path_planner.clean_shutdown)  # stop the car on shutdown
    rospy.spin()  # spin() blocks until shutdown, so no rate loop is needed


if __name__ == '__main__':
    main()