GRaDS talk: Semantically-Aware Strategies for Stereo-Visual Robotic Obstacle Avoidance

by Jungseok Hong on 2021-04-23

[Image: talk participants]

Abstract

Mobile robots in unstructured, mapless environments must rely on an obstacle avoidance module to navigate safely. Standard avoidance techniques estimate the locations of obstacles with respect to the robot but are unaware of the obstacles’ identities. Consequently, the robot cannot take advantage of semantic information about obstacles when deciding how to navigate. We propose an obstacle avoidance module that combines visual instance segmentation with a depth map to classify and localize objects in the scene. The system avoids obstacles differentially based on their identities: for example, it is more cautious in response to unpredictable objects such as humans. It can also navigate closer to harmless obstacles and ignore obstacles that pose no collision danger, enabling more efficient navigation. We validate our approach in two simulated environments: one terrestrial and one underwater. Results indicate that our approach is feasible and can enable more efficient navigation strategies.
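To make the class-conditioned behavior concrete, below is a minimal Python sketch, not the authors’ implementation, of the core idea: combine an instance-segmentation output with a pixel-aligned stereo depth map and apply per-class clearance thresholds. The class names, clearance values, and function names are all illustrative assumptions.

# Minimal sketch of semantically-aware avoidance (illustrative only).
# Assumes a segmentation model that yields (class_name, binary_mask) pairs
# whose masks are aligned pixel-for-pixel with a stereo depth map.

import numpy as np

# Hypothetical per-class minimum clearance in meters: wider margins for
# unpredictable obstacles, smaller for harmless ones, zero to ignore.
CLEARANCE_M = {
    "human": 2.0,  # unpredictable: keep a wide margin
    "rock": 0.5,   # static and harmless: pass close by
    "plant": 0.0,  # poses no collision danger: ignore
}

def obstacles_to_avoid(instances, depth_map):
    """Return (class_name, min_depth) for instances violating their clearance.

    instances : list of (class_name, binary_mask) pairs from segmentation.
    depth_map : HxW array of distances in meters from stereo matching.
    """
    violations = []
    for class_name, mask in instances:
        clearance = CLEARANCE_M.get(class_name, 1.0)  # default margin
        if clearance <= 0.0:
            continue  # ignorable class, e.g., soft vegetation
        depths = depth_map[mask]                      # depths on this object
        depths = depths[np.isfinite(depths) & (depths > 0)]
        if depths.size and depths.min() < clearance:
            violations.append((class_name, float(depths.min())))
    return violations

# Toy example: a "human" and a "rock", both 1.2 m away.
depth = np.full((4, 4), 5.0)
human_mask = np.zeros((4, 4), bool); human_mask[0, 0] = True
rock_mask = np.zeros((4, 4), bool); rock_mask[3, 3] = True
depth[0, 0] = depth[3, 3] = 1.2
print(obstacles_to_avoid([("human", human_mask), ("rock", rock_mask)], depth))
# -> [('human', 1.2)]

In this toy run, the human at 1.2 m violates its 2 m clearance and triggers avoidance, while the rock at the same distance sits within its 0.5 m margin and can be passed closely, which is how semantic identity buys the efficiency gains described above.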

Bio

Jungseok Hong is a Ph.D. candidate in Robotics in the Department of Computer Science and Engineering and the Minnesota Robotics Institute at the University of Minnesota Twin Cities, and a member of the Interactive Robotics and Vision (IRV) Lab, advised by Junaed Sattar. His primary research focuses on enabling underwater robots to autonomously understand their environments. More broadly, he is interested in vision-guided field robotics, visual underwater robotics, and generative models for robot vision.