Localization of a Lunar Rover Using Monocular Vision Inertial Sensor Fusion
Sejong Heo, Jaehyuck Cha, ChanGook Park*
In this paper, we present a localization method for a lunar rover equipped with a monocular camera and a low-cost IMU (Inertial Measurement Unit). To estimate the pose of a rover carrying only a monocular camera, the well-known Visual Odometry (VO) method is generally used. However, monocular VO estimates only the 5-DOF ego-motion of the camera, leaving the scale ambiguous. By fusing IMU data with the monocular camera, the full 6-DOF ego-motion can be estimated without scale ambiguity. The Multi-State Constraint Kalman Filter (MSCKF) is used to fuse the visual and inertial information based on sliding-window estimation. The main strength of this filter is that it exploits the geometric constraints that arise from observing the same features across multiple images. This paper presents the characteristics and a performance analysis of the proposed monocular visual-inertial fusion method through simulation tests and real-world experiments.
Keywords: visual-inertial fusion, localization, multi-state constraint Kalman filter
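The core idea above, that inertial data resolves the scale ambiguity of monocular VO, can be illustrated with a minimal, idealized sketch. This is not the paper's MSCKF implementation; it assumes a noise-free, bias-free accelerometer over one image interval with hypothetical motion values, and simply scales the VO direction by the metrically integrated IMU displacement (a real MSCKF instead fuses both in a Kalman filter with a window of past camera poses):

```python
import numpy as np

# Hypothetical true camera motion over a 1 s interval (constant acceleration)
a = np.array([0.2, 0.0, 0.1])   # acceleration, m/s^2
v0 = np.array([0.5, 0.1, 0.0])  # initial velocity, m/s
dt = 1.0
true_t = v0 * dt + 0.5 * a * dt**2  # true metric translation

# Monocular VO recovers only the *direction* of translation (scale ambiguity)
vo_dir = true_t / np.linalg.norm(true_t)

# Double-integrating the (idealized) accelerometer gives a metric
# displacement, which fixes the unknown scale of the VO estimate
imu_disp = v0 * dt + 0.5 * a * dt**2
scale = np.linalg.norm(imu_disp)

recovered_t = scale * vo_dir  # metric 3-DOF translation
print(np.allclose(recovered_t, true_t))
```

In practice the accelerometer is noisy and biased, so naive double integration drifts quickly; this is precisely why a tightly coupled filter such as the MSCKF, which constrains the inertial state with multi-frame feature observations, is used instead.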