My research interests span the areas of Robotics, Computer Vision, and Artificial Intelligence.
I am currently focusing on vision-based robot navigation. More specifically, I develop computer vision algorithms and systems that enable robots to navigate autonomously in man-made environments such as indoor and urban scenes.
In the past, I worked on electrocardiogram (ECG) signal denoising and pattern analysis.
Research Projects
Localization and Mapping for Autonomous Driving
NVIDIA's lidar-free autonomous driving is powered by my visual localization work. Check out the demo videos!
Line Segment-based RGB-D Indoor Odometry
Large lighting variations challenge all visual odometry methods, even those using RGB-D cameras. Line segments are abundant in indoor scenes and less sensitive to lighting changes than point features. We propose a line segment-based RGB-D indoor odometry algorithm that is robust to lighting variation.
We also investigate fusing point and line features for RGB-D SLAM/odometry. Project Website
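To illustrate how line features enter an RGB-D pipeline, the sketch below lifts a detected 2-D segment into 3-D using per-pixel depth and a pinhole camera model. The intrinsic values and helper names are assumptions for illustration, not the implementation used in this project.

```python
# Illustrative sketch (not the project's implementation): back-projecting the
# endpoints of a detected 2-D line segment into 3-D with a pinhole model.
# The default intrinsics are typical RGB-D values and are assumed.

def backproject(u, v, z, fx, fy, cx, cy):
    """Map pixel (u, v) with depth z (meters) to a camera-frame 3-D point."""
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

def segment_to_3d(p1, p2, depth_at, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Lift a 2-D segment with endpoints p1, p2 to a 3-D segment,
    sampling per-pixel depth through the depth_at callback."""
    e1 = backproject(*p1, depth_at(*p1), fx, fy, cx, cy)
    e2 = backproject(*p2, depth_at(*p2), fx, fy, cx, cy)
    return e1, e2
```

Matched 3-D segments from consecutive frames can then feed a relative-pose estimate, which is where the lighting robustness of line detection pays off.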
Multilayer Feature Graph for Robot Navigation in Urban Areas
We design a multilayer feature graph (MFG) to facilitate scene understanding and robot navigation in urban areas. Nodes of an MFG are features such as SIFT points, line segments, lines, and planes, while edges of the graph represent different geometric relationships such as adjacency, parallelism, collinearity, and coplanarity. Project Website
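The node/edge organization described above can be mocked up as a small typed graph. The class and method names here are assumptions for illustration, not the MFG implementation.

```python
# Illustrative sketch of a multilayer feature graph: nodes carry a feature
# type (point, segment, line, plane) and edges a geometric relationship.
# Names and structure are assumed for illustration.

class MFG:
    RELATIONSHIPS = {"adjacency", "parallelism", "collinearity", "coplanarity"}

    def __init__(self):
        self.nodes = {}   # node_id -> feature type
        self.edges = []   # (id_a, id_b, relationship)

    def add_node(self, node_id, feature_type):
        self.nodes[node_id] = feature_type

    def add_edge(self, a, b, relationship):
        assert relationship in self.RELATIONSHIPS
        self.edges.append((a, b, relationship))

    def related(self, node_id, relationship):
        """All nodes linked to node_id by the given relationship."""
        out = []
        for a, b, r in self.edges:
            if r != relationship:
                continue
            if a == node_id:
                out.append(b)
            elif b == node_id:
                out.append(a)
        return out
```

Queries such as "all segments parallel to this one" then become simple edge filters, which is what makes a typed graph convenient for scene-understanding constraints.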
Detecting Planar Mirrored Walls
Mobile robots need to recognize objects in their vicinity for navigation and safety. However, highly reflective surfaces, such as glass building exteriors and mirrored walls, challenge almost every type of sensor, including laser range finders, sonar arrays, and cameras, because light and sound signals simply bounce off them. Such surfaces are therefore often invisible to the sensors, and detecting them is necessary to avoid collisions. In this project, we develop algorithms for detecting planar mirrored walls based on two views from an on-board camera.
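One geometric building block behind reasoning about mirrors (illustrative, not this project's algorithm): a camera looking at a planar mirror observes the scene reflected across the mirror plane, so hypothesized real/virtual point pairs must be consistent with reflection across a single plane. The function below computes that reflection for a candidate plane.

```python
# Minimal geometric sketch, not the project's detection algorithm:
# reflect a 3-D point across a candidate mirror plane n·x = d,
# where n is the plane's unit normal and d its offset.

def reflect_point(p, n, d):
    """Reflect point p across the plane n·x = d (n must be a unit vector)."""
    dist = sum(pi * ni for pi, ni in zip(p, n)) - d   # signed distance to plane
    return tuple(pi - 2.0 * dist * ni for pi, ni in zip(p, n))
```

A point lying on the plane maps to itself; a point in front of the mirror maps to its virtual image behind it, which is what the camera actually sees in the reflective region.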
Active Learning for Smartphone-based Human Activity Recognition
In this project, we design a robust smartphone-based human activity recognition system. The system uses the phone's built-in accelerometer as the only sensor to collect signals for classification, and active learning algorithms are exploited to reduce the labor and time cost of labeling large amounts of data.
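The labeling-cost reduction can be sketched as pool-based uncertainty sampling, one standard active-learning strategy. Here `predict_proba` stands in for any probabilistic activity classifier and is an assumption for illustration, not this project's specific method.

```python
# Hedged sketch of pool-based uncertainty sampling: out of an unlabeled pool,
# query the human annotator on the sample the classifier is least sure about.
# predict_proba(x) is an assumed callback returning class probabilities.

def least_confident(pool, predict_proba):
    """Return the index of the pool sample whose top-class probability is
    lowest, i.e. the most informative sample to label next."""
    def confidence(i):
        return max(predict_proba(pool[i]))
    return min(range(len(pool)), key=confidence)
```

Labeling only the queried samples, retraining, and repeating typically reaches a target accuracy with far fewer labels than annotating the whole accelerometer stream.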