2018SpringTeam2

From MAE/ECE 148 - Introduction to Autonomous Vehicles

Team 2: Follow Me

Project Objective: Identify and follow a person when two people are in the field of view. Tiny YOLO detections were fed into OpenCV to identify targets in a live video feed.

Some applications of this project include medical robotics (e.g. medical assistance) and military uses (e.g. target location). This project could also be applied to crowds, collecting data on people through human-computer interaction.


Hardware

Camera mount

Electronics board design

Raspberry Pi Camera V2


Initial Research

The first objective was to identify a person in a live video feed. Our project takes Tiny YOLO's person detections and passes them to OpenCV, which we use to locate and follow the person.


Tiny YOLO: real-time object detection. Tiny YOLO (45 MB) is a reduced version of YOLO (248 MB). Tiny YOLO performs classification and bounding-box localization, and reports a confidence level for each object it recognizes.

Several object classes, such as people, dogs, and cars, are already built into the pretrained model. There are many resources available online, including source code and video tutorials. Online forums suggested that the frame rate would be below 1 Hz with Tiny YOLO.

Pros: Recognizes objects with confidence levels, and can distinguish between dogs and people.

Cons: Runs at a lower frame rate and requires further retraining.
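
As a rough illustration of this pipeline, the following is a minimal sketch of loading Tiny YOLO through OpenCV's DNN module and filtering for person detections. The file names, the person class ID of 0 (COCO ordering), and the thresholds are assumptions for illustration, not the team's exact configuration; a reasonably recent OpenCV build is also assumed.

import cv2
import numpy as np

# Hypothetical file names; the actual .cfg/.weights files used may differ.
net = cv2.dnn.readNetFromDarknet("yolov2-tiny.cfg", "yolov2-tiny.weights")

PERSON_CLASS_ID = 0   # "person" is class 0 in the COCO label list (assumed)
CONF_THRESHOLD = 0.5

def detect_people(frame):
    """Return (confidence, center_x, center_y, w, h) for each detected person."""
    H, W = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    people = []
    for output in outputs:
        for det in output:
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if class_id == PERSON_CLASS_ID and confidence > CONF_THRESHOLD:
                # YOLO outputs normalized box coordinates; scale to pixels.
                cx, cy, w, h = det[0:4] * np.array([W, H, W, H])
                people.append((confidence, cx, cy, w, h))
    return people

# Example: run on one frame from the camera (device index 0 assumed).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(detect_people(frame))
cap.release()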


OpenCV: a library of functions aimed at real-time computer vision. OpenCV can identify color blobs and their boundaries, and it has functions to compute centroids. The centroid would be used to locate the person/object in the frame and, subsequently, to follow the person (see the sketch after the pros and cons below).

Pros: Simple code/logic, easy to follow.

Cons: Not as sophisticated as YOLO.
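
Below is a minimal sketch of the centroid-based following idea: compute the centroid of the target region with OpenCV moments and map its horizontal offset to a steering value. The HSV color range, steering gain, and [-1, 1] clamping are illustrative assumptions rather than the team's actual values; in the real project the target region would come from the Tiny YOLO bounding box of the chosen person rather than a simple color threshold.

import cv2
import numpy as np

# Illustrative HSV range for a colored marker on the target (assumed values).
LOWER_HSV = np.array([100, 120, 70])
UPPER_HSV = np.array([130, 255, 255])
STEERING_GAIN = 1.5  # proportional gain, hand-tuned (assumed)

def centroid_of_target(frame):
    """Return the (x, y) centroid of the largest matching color blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    # OpenCV 4 return signature assumed (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def steering_from_centroid(centroid, frame_width):
    """Map horizontal centroid offset to a steering value in [-1, 1]."""
    cx, _ = centroid
    offset = (cx - frame_width / 2) / (frame_width / 2)  # -1 (left) .. +1 (right)
    return float(np.clip(STEERING_GAIN * offset, -1.0, 1.0))

The same offset-to-steering mapping applies if the centroid is taken from a YOLO bounding box instead of a color mask; only the source of the target region changes.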