A Learning-based Approach to Cover Short-term Camera Failure in a Monocular Visual Inertial Odometry System

Y. Tian
Embry-Riddle Aeronautical University,
United States

Keywords: LSTM, Sensor Fusion, Localization

Summary:

Localization and navigation rely on several established sensing and fusion methods. Previous studies have used sensors such as the Global Navigation Satellite System (GNSS) and GNSS-aided inertial navigation systems (GINS), which combine GPS with an inertial measurement unit (IMU). Visual Inertial Odometry (VIO) has drawn attention recently; however, one disadvantage is the camera's susceptibility to disturbances such as sudden illumination changes, fast motion, and moving objects. Existing studies usually evaluate their algorithms under the assumption of good camera performance. In this research, we propose a learning-based method to estimate pose during brief periods of camera failure. A Long Short-Term Memory (LSTM) network is trained during periods of good camera operation; once trained, the LSTM provides an alternate pose estimate that is available as soon as camera failure is detected. While the camera provides good visual odometry results, the system fuses visual and inertial odometry inputs using a Kalman Filter, and the LSTM network is simultaneously trained with inertial measurements as inputs and the Kalman Filter pose outputs as targets. We tested our algorithm by removing the visual inputs and comparing the pose estimates from the Kalman Filter, IMU-only dead reckoning, and the pre-trained LSTM. The results indicate that the LSTM increased positioning accuracy by 76.2% and orientation accuracy by 26.5%.
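The summary gives no implementation details, so the following is only a minimal sketch of the training-and-fallback scheme it describes, assuming PyTorch; the window length, the pose-delta representation, and all names (ImuPoseLSTM, training_step, fallback_pose) are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ImuPoseLSTM(nn.Module):
    """Maps a sliding window of IMU samples to a 6-DoF pose increment."""
    def __init__(self, imu_dim=6, hidden=64, pose_dim=6):
        super().__init__()
        self.lstm = nn.LSTM(imu_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, pose_dim)

    def forward(self, imu_window):           # (batch, T, 6): accel + gyro
        out, _ = self.lstm(imu_window)
        return self.head(out[:, -1])         # pose delta from last hidden state

model = ImuPoseLSTM()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def training_step(imu_window, kf_pose_delta):
    """Run while visual odometry is healthy: the Kalman Filter
    pose output serves as the regression target."""
    optim.zero_grad()
    loss = loss_fn(model(imu_window), kf_pose_delta)
    loss.backward()
    optim.step()
    return loss.item()

def fallback_pose(last_pose, imu_window):
    """Run when camera failure is detected: propagate the pose
    with the LSTM (simplified additive pose update)."""
    with torch.no_grad():
        return last_pose + model(imu_window.unsqueeze(0)).squeeze(0)
```

In this sketch the LSTM learns an IMU-to-pose-delta mapping online, so it inherits the Kalman Filter's accuracy while the camera is healthy and can stand in for the visual channel during brief outages.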