Fusion of multimodal imaging techniques towards autonomous navigation
File: Helia_Sharif_Fusion_of_Multimodal_Imaging_Techniques_Towards_Autonomous_Navigation.pdf (112.67 MB, Adobe PDF)
Authors: Sharif, Helia
Supervisor: Suppa, Michael
1. Expert: Suppa, Michael
2. Expert: Frese, Udo

Abstract:
“Earth is the cradle of humanity, but one cannot live in a cradle forever.”
-Konstantin E. Tsiolkovsky, an early pioneer of rocketry and astronautics.
Space robotics enables humans to explore beyond our home planet. Traditional techniques for tele-operated robotic guidance make it possible for a driver to direct a rover up to 245.55 million km away. However, relying on manual terrestrial operators for guidance is a key limitation for exploration missions today, as real-time communication between rovers and operators is delayed by long distances and limited uplink opportunities. Moreover, autonomous guidance techniques in use today are generally limited in scope and capacity; for example, some require the application of special markers on targets to enable detection, while others provide autonomous vision-based flight navigation but only at limited altitudes and in ideal visibility conditions. Improving autonomy is thus essential to expanding the scope of viable space missions.
In this thesis, a fusion of monocular visible and infrared imaging cameras is employed to estimate the relative pose of a nearby target while compensating for each spectrum's shortcomings. The robustness of the algorithm was tested in a number of different scenarios by simulating harsh space environments while imaging a subject with characteristics similar to those of a spacecraft in orbit. It is shown that the fusion of visual odometries from the two spectra performs well where knowledge of the target's physical characteristics is limited.
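The fusion idea behind this approach (the keywords name an Extended Kalman Filter) can be illustrated with a minimal sketch: two independent relative-pose measurements, one from visible-band odometry and one from infrared odometry, are folded into a single state estimate by sequential Kalman updates, so each spectrum is weighted by the inverse of its measurement covariance. All numeric values and the identity measurement model below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def kalman_update(x, P, z, R):
    """One Kalman measurement update: fuse the state estimate (x, P)
    with a measurement z of covariance R (identity measurement model)."""
    S = P + R                        # innovation covariance
    K = P @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - x)          # corrected state
    P_new = (np.eye(len(x)) - K) @ P # reduced uncertainty
    return x_new, P_new

# Prior belief about the target's relative position (metres); hypothetical values
x = np.array([10.0, 0.0, 5.0])
P = np.eye(3) * 4.0

# Two independent measurements with assumed covariances: visible-band odometry
# (accurate here) and infrared odometry (noisier here)
z_vis, R_vis = np.array([10.4, 0.1, 5.2]), np.eye(3) * 0.5
z_ir,  R_ir  = np.array([9.5, -0.3, 4.6]), np.eye(3) * 2.0

# Applying both updates in sequence yields the fused estimate; its covariance
# is smaller than that of either sensor alone
x, P = kalman_update(x, P, z_vis, R_vis)
x, P = kalman_update(x, P, z_ir, R_ir)
print(x[0], P[0, 0])
```

Because the update is linear here, the order of the two measurement updates does not change the fused result; a full EKF would additionally linearize a nonlinear pose-measurement model at each step.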
The result of this thesis research is an autonomous, robust vision-based tracking system designed for space applications. This appealing solution can be used onboard most spacecraft and adapted for the specific application of any given mission.
Keywords: fusion of multimodal sensors; visual odometry; monocular imaging; Extended Kalman Filter; autonomous vision-based navigation for space applications
Issue Date: 17-Sep-2021
Type: Dissertation
DOI: 10.26092/elib/1077
URN: urn:nbn:de:gbv:46-elib52813
Institution: Universität Bremen
Faculty: Fachbereich 03: Mathematik/Informatik (FB 03)
Appears in Collections: Dissertationen
checked on Oct 16, 2021
Items in Media are protected by copyright, with all rights reserved, unless otherwise indicated.