LoCO is a general-purpose, single-person-deployable, vision-guided AUV, rated to a depth of 100 meters. All design details, assembly instructions, and code for this robot will be released under a permissive open-source license. We discuss the open and expandable design of this underwater robot, as well as a simulated version of the robot built with the Gazebo simulation software. Additionally, we explore the platform's preliminary local motion control and state estimation abilities, which enable it to perform maneuvers autonomously. To demonstrate its usefulness across a range of tasks, we implement several of our previously presented human-robot interaction capabilities on LoCO, including gestural control, diver following, and robot communication via motion. Finally, we discuss the practical concerns of deployment and our experiences using this robot in pools, lakes, and the ocean.
Michael is a fourth-year PhD candidate and NSF Graduate Research Fellow, working with Dr. Junaed Sattar in the Interactive Robotics and Vision Lab. His research interests include human-robot interaction (particularly in challenging environments), robot perception, and developing low-cost underwater robots. When not working on robots, Michael enjoys reading, playing video games old and new, and spending quality time with his cats.