Eichhardt, Iván
Fenkart, Maximilian
Florescu, Liviu-Daniel
June 2025
Out of the five basic senses that humans use to experience the world, vision accounts for roughly 80%
of the information our brains process [1]. Naturally, robotics research aims to
replicate this and develop systems that can not only collect high-quality visual data but also
create rich artificial representations of the world, enabling autonomous systems to confidently
reason about their environment.
This work will focus on the use of 3D Light Detection and Ranging (LiDAR) sensors in outdoor
settings. When mounted on an arbitrary mobile base and moved around a target environment,
the sensor collects information about the geometry of the scene, which can be merged in order
to create a general 3D model. However, this process depends on accurate displacement
measurements that are not trivial to obtain. An existing solution relies on Global Navigation
Satellite System (GNSS) localization, corrected using Real-Time Kinematic positioning (RTK),
and orientation from an Inertial Measurement Unit (IMU). Together, these create an Inertial
Navigation System (INS) whose output can be interpreted as the 3D transformation between
sensor poses at discrete time steps. Although this represents the state of the art in outdoor
localization, with centimeter-level position error, its accuracy remains insufficient for
high-quality pointcloud registration. Moreover, this approach is not feasible for all
outdoor scenes, because GNSS accuracy varies heavily depending on surroundings and signal
strength.
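The merging step described above can be sketched in a few lines: each scan is moved into a common map frame using its pose (a rotation matrix R and translation t), then the transformed scans are concatenated. This is a minimal pure-Python illustration; the function names and data layout are ours, not taken from the thesis or any particular library.

```python
import math

def transform_points(points, R, t):
    """Apply a rigid-body transform (3x3 rotation matrix R, translation t)
    to a list of 3D points, mapping them into the global map frame."""
    out = []
    for x, y, z in points:
        out.append((
            R[0][0] * x + R[0][1] * y + R[0][2] * z + t[0],
            R[1][0] * x + R[1][1] * y + R[1][2] * z + t[1],
            R[2][0] * x + R[2][1] * y + R[2][2] * z + t[2],
        ))
    return out

def merge_scans(scans_with_poses):
    """Concatenate scans after moving each into the map frame."""
    global_map = []
    for points, (R, t) in scans_with_poses:
        global_map.extend(transform_points(points, R, t))
    return global_map

# Example: two single-point "scans"; the second was taken after the sensor
# moved 1 m along x and rotated 90 degrees about the z axis.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
Rz = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
merged = merge_scans([
    ([(1.0, 0.0, 0.0)], (I, (0.0, 0.0, 0.0))),
    ([(1.0, 0.0, 0.0)], (Rz, (1.0, 0.0, 0.0))),
])
# The second point lands near (1.0, 1.0, 0.0) in the map frame.
```

In the real pipeline the poses come from the INS (or, in this work, from LiDAR odometry), and any error in them directly misaligns the merged scans, which is precisely the accuracy problem discussed above.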
We will investigate methods that address the limitations of INS-based registration by
exploiting the visual information in the scene, so that the system is less reliant on a sensor with
fluctuating uncertainty. Previous research in this area [2] indicates that visual cues alone should
be enough to achieve reliable displacement estimation, enabling the computation of odometry
from LiDAR data, as well as creating a 3D map of the explored environment. A comparison
between LiDAR odometry and GNSS localization is also within the scope of this work. The
research questions that we aim to answer are the following:
• What metrics exist for measuring the accuracy of pointcloud registration?
• Can methods that use only visual information achieve higher-quality pointcloud
registration (3D mapping) than RTK-based merging?
• To what extent is LiDAR-based odometry an alternative to GNSS localization?
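As an illustration of the first question, one common registration accuracy measure is the root-mean-square error (RMSE) over corresponding point pairs after alignment. The sketch below assumes correspondences are already known (index-matched lists); in practice they would come from nearest-neighbor search. Names and data layout are illustrative, not from the thesis.

```python
import math

def registration_rmse(source, target):
    """RMSE between corresponding points of a registered source cloud
    and a reference target cloud (same length, matched by index)."""
    assert len(source) == len(target) and len(source) > 0
    sq_sum = 0.0
    for (sx, sy, sz), (tx, ty, tz) in zip(source, target):
        sq_sum += (sx - tx) ** 2 + (sy - ty) ** 2 + (sz - tz) ** 2
    return math.sqrt(sq_sum / len(source))

# Example: a cloud registered with a residual 2 cm offset along x.
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
reg = [(x + 0.02, y, z) for x, y, z in ref]
# registration_rmse(reg, ref) is approximately 0.02 m.
```

A metric like this makes the comparison in the second research question concrete: vision-based and RTK-based merging can be scored on the same correspondences against a common reference.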
The work will be carried out in collaboration with Sodex Innovations GmbH, which provides the
sensor rig (LiDAR, RGB cameras, GNSS + RTK + IMU) as well as several existing datasets
consisting of sensor data collected while exploring outdoor rural environments. The project will
span approximately 15 weeks and is tentatively structured as shown below, subject to change
depending on the results obtained at each stage.
application/pdf
http://hdl.handle.net/10256/28366
eng
Universitat de Girona. Institut de Recerca en Visió per Computador i Robòtica
Attribution-NonCommercial-NoDerivatives 4.0 International
http://creativecommons.org/licenses/by-nc-nd/4.0/
Keywords: Optical detectors; Three-dimensional optical sensors; 3D mapping; Digital mapping; Robots -- Navigation systems; LiDAR odometry; Pointcloud registration; GNSS; RTK
LiDAR odometry and mapping beyond RTK accuracy
Master's thesis
DUGiDocs
