Team 2: Follow Me
Project Objective: Identify and follow a person, given two people in the field of view. Tiny YOLO's detections were fed into OpenCV to identify and track targets in a live feed.
Some applications of this project include medical robotics (e.g. medical assistance) and military applications (e.g. target location). This project could also be applied to crowds, collecting data on people through human-computer interaction.
Hardware
Camera mount:
Electronics board design:
Raspberry Pi Camera V2:
Initial Research
The first objective was to identify a person in a live feed. Our project feeds Tiny YOLO's person detections into OpenCV, which we use to follow the person.
Tiny YOLO: real-time object detection. Tiny YOLO (45 MB) is a reduced version of full YOLO (248 MB). It classifies and localizes objects and reports a confidence score with each detection.
Several object classes, such as people, dogs, and cars, are already built into the pretrained network. There are many resources available online, including source code and video tutorials. Online forums suggested Tiny YOLO would run at under 1 Hz on a Raspberry Pi.
Pros: Reports a confidence level with each detection, and can distinguish between classes such as dogs and people.
Cons: Runs at a low frame rate and requires further retraining for custom targets.
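To make the hand-off to OpenCV concrete, here is a minimal sketch of selecting which "person" detection to follow. The detection tuple format and the highest-confidence heuristic are illustrative assumptions, not the exact output of any particular YOLO wrapper:

    # Each detection is assumed to be (class_name, confidence, (x, y, w, h));
    # real YOLO wrappers differ in their exact output format.
    def pick_target(detections, min_confidence=0.4):
        """Return the highest-confidence 'person' detection, or None."""
        people = [d for d in detections
                  if d[0] == 'person' and d[1] >= min_confidence]
        return max(people, key=lambda d: d[1]) if people else None

    # Two people in view: follow the more confident hit, ignore the dog.
    detections = [('person', 0.82, (120, 60, 80, 200)),
                  ('person', 0.55, (300, 70, 70, 190)),
                  ('dog', 0.91, (200, 150, 60, 50))]
    print(pick_target(detections))  # ('person', 0.82, (120, 60, 80, 200))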
OpenCV: a library of functions aimed at real-time computer vision. OpenCV can identify color blobs and their boundaries, and has functions to compute centroids. The centroid gives the location of the person/object in the frame, which we then use to follow the person (see the sketch after the pros/cons below).
Pros: Simple code/logic, easy to follow.
Cons: Not as intricate as YOLO.
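As referenced above, a minimal sketch of the centroid computation OpenCV provides, using cv2.moments on a synthetic binary mask (the real mask would come from the camera pipeline):

    import cv2
    import numpy as np

    # Synthetic binary mask with one white blob standing in for the target.
    mask = np.zeros((240, 320), dtype=np.uint8)
    cv2.circle(mask, (200, 120), 30, 255, -1)

    # Image moments give the centroid: cx = m10/m00, cy = m01/m00.
    M = cv2.moments(mask)
    if M['m00'] > 0:
        cx, cy = int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])
        print('centroid:', (cx, cy))  # approximately (200, 120)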
Issues Encountered
- Source code had fatal errors.
- Running took so long the Raspberry Pi killed the process.
- Image reader choked when opening certain images.
- Source code did not provide an easy interface for Python.
- Certain versions had several configurations, of which exactly one would work.
- Several repositories only worked with exactly one version of YOLO.
- Certain configuration files were mislinked on the website/README.
- Several required file dependencies were not described in documentation and had to be found elsewhere.
- The code itself had a bug that took two days to find. We went to open a pull request, but another UCSD student had already run into the same problem and reported it; the code was still unchanged.
- The only Tiny YOLO configuration file that works with the yolov2-tiny weights (which YAD2K requires) is not listed alongside the valid yolov2-tiny weights.
Process
We ended up using YAD2K (Yet Another Darknet 2 Keras) as our Tiny YOLO library. YAD2K is fast and easy to interface with, since it exposes its functionality through Python.
The original YOLO repository takes about 34 seconds to load the neural network and its weights on the RPi3, while YAD2K loads in about 6 seconds. YAD2K is much faster, but the car still needs to delay driving until the weights are loaded.
Load times using different YOLO libraries
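A minimal sketch of how the load time can be measured and driving gated on it, assuming a YAD2K-converted Keras model saved at a hypothetical path model_data/tiny-yolo.h5:

    import time
    from keras.models import load_model

    # Hypothetical path to the YAD2K-converted Tiny YOLO model.
    MODEL_PATH = 'model_data/tiny-yolo.h5'

    start = time.time()
    model = load_model(MODEL_PATH)  # ~6 s via YAD2K vs ~34 s for the original repo on the RPi3
    print('model loaded in %.1f s' % (time.time() - start))

    # Gate driving on the load: the car stays stopped until the
    # network and weights are fully in memory.
    ready_to_drive = True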
We started off our project by examining Project Fetch's (a previous MAE 148/198 project) GitHub and webpage. Project Fetch used OpenCV. We used a laptop camera to get a live video feed, and changed the RGB limits to fit our target object's color profile. Steering and throttle performance was limited by lighting conditions. Other problems encountered with Project Fetch's code included tuning the erosion function, the distance parameters, the low-pass filters, and the lower/upper HSV limits.
Used Project Fetch's code to explore contour recognition and RGB limits
Project Fetch and OpenCV have various problems, including erosion function modifications and other parameters that need tuning.
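A minimal sketch of the HSV-limit thresholding and contour extraction explored above; the HSV bounds are hypothetical placeholders that would be retuned for each target color and lighting condition:

    import cv2
    import numpy as np

    # Hypothetical HSV bounds for a green target; the real limits were
    # retuned for each target color and lighting condition.
    LOWER_HSV = np.array([40, 70, 70])
    UPPER_HSV = np.array([80, 255, 255])

    # Synthetic frame with a green rectangle standing in for the live feed.
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    cv2.rectangle(frame, (140, 80), (180, 160), (0, 200, 0), -1)

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)

    # [-2] keeps this working on OpenCV 3.x (3 return values) and 4.x (2).
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        target = max(contours, key=cv2.contourArea)  # largest blob wins
        print('target area:', cv2.contourArea(target))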
OpenCV has issues with the live feed connection. cv2.imshow() crashes the Raspberry Pi and the VNC Viewer. cv2.imwrite() can save images to the Raspberry Pi, but does not give access to the live feed. There is no graphical output: the Pi can be connected to a monitor, but feedback still only appears at the command prompt.
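A workaround sketch: since cv2.imshow() crashes the Pi, frames can be dumped to disk with cv2.imwrite() and inspected afterwards. Camera index 0 is an assumption here; the Pi Camera V2 may need its V4L2 driver loaded to appear as a capture device:

    import cv2

    # Camera index 0 is an assumption; the Pi Camera V2 may need the
    # bcm2835-v4l2 driver loaded to appear as /dev/video0.
    cap = cv2.VideoCapture(0)
    for i in range(10):
        ok, frame = cap.read()
        if not ok:
            break
        # cv2.imshow() crashes the Pi over VNC, so write frames to disk instead.
        cv2.imwrite('debug_%04d.jpg' % i, frame)
    cap.release()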
We wanted to use the centroid coordinates (X, Y) to estimate the distance between the person and the Robocar. We modified the code to print the X, Y pixel coordinates, and changed the throttle and steering limits to increase speed when the object is farther away (similar to Project Fetch) and otherwise hold a constant speed.
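A minimal sketch of that centroid-to-command mapping; the frame width, area threshold, and throttle values are hypothetical and would be tuned on the car:

    FRAME_WIDTH = 320    # pixels; must match the capture resolution
    NEAR_AREA = 8000.0   # hypothetical contour area when the person is close

    def steering_and_throttle(cx, contour_area):
        """Map the target centroid to steering/throttle commands in [-1, 1]."""
        # Steer in proportion to the horizontal offset from frame center.
        steering = (cx - FRAME_WIDTH / 2) / (FRAME_WIDTH / 2)
        # A smaller contour means the person is farther away, so speed up;
        # otherwise hold a constant cruising throttle (as in Project Fetch).
        throttle = 0.5 if contour_area < NEAR_AREA else 0.35
        return steering, throttle

    print(steering_and_throttle(cx=240, contour_area=3000))  # (0.5, 0.5)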
We wanted to reduce noise in the contour, so we modified Project Fetch's code to introduce kernels (matrices that expand and reduce pixel regions).
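A minimal sketch of that kernel-based noise reduction: erosion shrinks white pixel regions (removing speckle noise) and dilation grows the surviving regions back; cv2.morphologyEx with MORPH_OPEN does both in one call.

    import cv2
    import numpy as np

    # Noisy synthetic mask: one real blob plus random speckle noise.
    mask = np.zeros((240, 320), dtype=np.uint8)
    cv2.circle(mask, (160, 120), 40, 255, -1)
    mask[np.random.rand(240, 320) < 0.01] = 255

    # A 5x5 kernel: erosion reduces white regions, dilation expands them.
    kernel = np.ones((5, 5), dtype=np.uint8)
    cleaned = cv2.dilate(cv2.erode(mask, kernel), kernel)
    # Equivalent one-liner: cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)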