Robots have long been used in manufacturing environments as stand-alone machines performing repetitive tasks, with safety guards in place to separate robotic workspaces and protect humans from injury. With advancing technology, robots are moving out of factories and into everyday life, and are now being designed to work collaboratively with humans, blurring the lines between human and robot tasks and workspaces. This has opened up a whole new field of research: Human-Robot Interaction (HRI). In this presentation, I will provide some background on the field of HRI and present my most recent research project, “What’s the Point?”, which develops a computer vision algorithm for robotic perception of pointing gestures in terrestrial settings. The project had three main phases: designing a human study to gather data on natural pointing gestures, preparing the dataset, and developing a Deep Learning algorithm. We will discuss each phase of the project, present the final proof-of-concept algorithm, and examine its limitations along with potential future work.
Andrea Walker is a second-year Master’s student in the Interactive Robotics and Vision Lab (IRVLab). Coming from a background in Physics, her research interests lie in the area of Robotic Perception. Currently, her work focuses on applying Computer Vision and Machine Learning to tackle perception challenges within the domain of underwater Human-Robot Interaction (HRI).