For my capstone project, I worked on an autonomous, self-navigating electric vehicle. The vehicle (referred to as the "robot" from here on) could navigate around an obstacle course of boxes. The course was laid out on a grid, and the robot navigated from a start cell to a finish cell. Cells in which the robot detected an obstacle were marked on the map and avoided in the planned path. The robot sensed obstacles using feature detection; the calibration target chosen was a Cheerios box. By taking a ratio against a ground-truth image, the robot could estimate an obstacle's depth and horizontal displacement, which were then converted to grid coordinates. With the map updated, the robot replanned its route using the A* search algorithm and executed the route with calls to the motor API, which we had written. The project was thus divided into three subsystems:
1. Vision: Responsible for all sensing and feature detection.
2. Mapping: Responsible for navigation and motion planning.
3. Robot: Responsible for the hardware side, including the camera, the Raspberry Pi interface, and motor control.
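As a minimal sketch of the replanning step: the obstacle course can be modeled as a 2D array of cells, with marked cells excluded from A* expansion. The grid encoding and function names below are illustrative, not our actual code.

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """Plan a path on a 4-connected grid; grid[r][c] == 1 marks an obstacle cell."""
    rows, cols = len(grid), len(grid[0])
    manhattan = lambda cell: abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    tie = itertools.count()        # tie-breaker so the heap never has to compare parents
    frontier = [(manhattan(start), 0, next(tie), start, None)]
    came_from = {}                 # cell -> parent; doubles as the closed set
    best_g = {start: 0}
    while frontier:
        _, g, _, cell, parent = heapq.heappop(frontier)
        if cell in came_from:
            continue               # already expanded via a route at least as cheap
        came_from[cell] = parent
        if cell == goal:           # walk parent links back to reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + manhattan(nxt), ng, next(tie), nxt, cell))
    return None                    # no obstacle-free route exists
```

Each time the vision subsystem marked a new obstacle cell, the map was updated and the planner re-run from the robot's current cell, which is what made on-the-fly replanning possible.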
My particular focus within the group was the robot subsystem. One of the largest tasks I took on was providing the motor API. Since the robot navigated along a grid, the motor interface exposed simple function calls: forwards(), backward(), clockwise90(), and counterclockwise90(), which were invoked by the mapping algorithm. Four independent motors were used, each under PID feedback control and each running on its own thread, to avoid the jerking motion of four motors being updated one at a time.

On the hardware side, the Raspberry Pi drives GPIO pins that control the current into each motor through an H-bridge (L298N chip). The motors are powered by a 9 V battery, while the Raspberry Pi runs off a portable power bank. The camera is the Raspberry Pi Camera Module 2, chosen for ease of integration. Various attachment parts, such as the motor mounts and the motor-wheel adapters, were modeled in CAD and then 3D printed.
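The per-wheel control structure can be sketched roughly as follows. The gains, the first-order "plant" standing in for a real motor, and the helper names are all illustrative assumptions; the actual code reads encoder feedback and writes PWM duty cycles to the GPIO pins rather than updating a simulation.

```python
import threading

class PID:
    """One PID speed loop per wheel (gains here are illustrative, not tuned values)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def run_wheel(pid, setpoint, trace, steps=200, dt=0.01):
    """Drive a toy first-order motor model toward `setpoint` (a stand-in for PWM output
    plus encoder feedback on the real robot)."""
    speed = 0.0
    for _ in range(steps):
        duty = pid.update(setpoint, speed, dt)
        speed += (duty - speed) * dt * 5.0   # crude plant model, not real motor dynamics
        trace.append(speed)

# One thread per wheel, so all four corrections happen concurrently
# instead of the wheels being serviced one after another.
traces = [[] for _ in range(4)]
threads = [threading.Thread(target=run_wheel, args=(PID(2.0, 10.0, 0.0), 2.0, traces[i]))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Running each loop on its own thread is what keeps the four wheels' corrections in step with one another; a single sequential loop would apply each wheel's correction at a slightly different time, producing the jerking motion described above.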