I am currently at Autel Robotics working on Visual-Inertial SLAM. Previously, I was at Google AR from 2019 to 2021, where I made two major contributions to ARCore: a 36% decrease in the CPU usage of motion tracking, and a 28% reduction in motion tracking resets, both highlighted at Google I/O 2021.
I received my PhD from Texas A&M University, where I developed a visual odometry system based on heterogeneous landmarks, as well as an RGB-D odometry algorithm based solely on line landmarks, the first of its kind. Before joining Google, I worked at Honda Research, NVIDIA, and Amazon Lab126 on a variety of cool projects. At NVIDIA, we developed a top-notch visual localization solution that showcased the possibility of lidar-free autonomous driving on highways. Check out the amazing demo videos!
Highlights
Visual SLAM for Drones (Evo Max)
Improved ARCore efficiency/robustness
Visual localization for autonomous driving
News
2021-05-20: Two projects I led at Google AR were highlighted at Google I/O 2021.
I developed a feature that decreased the CPU usage of ARCore motion tracking (SLAM) by 36%.
I created a new algorithm that reduced ARCore tracking resets by 28%.