Methods and Applications on Object Segmentation for Service Robots
|Other Titles:||Methoden und Anwendungen zur Objekt-Segmentierung für Service-Roboter|
|Authors:||Enjarini, Bashar|
|Supervisor:||Gräser, Axel|
|1. Expert:||Frese, Udo|
|2. Expert:||Anheier, Walter|
|Abstract:||
The importance of service robots is increasing every day. They have recently been deployed to replace human operators in nursing and elderly-care tasks. Working in a human-like environment poses a direct challenge to the service robot in sensing and understanding the surrounding scene. The human environment, unlike the industrial one, is hard to model; it is dynamic, and the locations of objects are constantly changing. Moreover, the robot should be able to autonomously identify objects that are useful for task execution and mark other objects as obstacles to avoid. In that respect, the vision system of a service robot plays a crucial role in understanding the surrounding scene and providing the robot with reliable knowledge through vision sensors. The best way to bring the robot to a higher level of scene understanding is to design a vision system that mimics how humans understand a scene. Humans analyze an unknown scene by focusing on parts of it rather than considering the whole scene at once. Similarly, to understand the surrounding scene, the robot must be able to segment the different objects located in the scene and identify them, which in turn helps the robot focus on the objects that are necessary for executing the task. Hence, object segmentation is one of the key capabilities needed to elevate the robot to more complex levels of cognition and reactivity. Despite the huge developments in the field of object segmentation for service robots, many challenges related to the object segmentation problem remain open to further research and development. One challenge is the ability to segment unknown objects in a cluttered scene. Most state-of-the-art algorithms either use pre-defined models or rely on an offline training phase to guide the segmentation process.
Other methods use special hardware configurations that inevitably add additional constraints to the segmentation process. Another challenging point is the generality of the algorithm: most segmentation algorithms are developed to work in specific scenarios, which raises a redundancy problem among object segmentation algorithms. Last but not least are consistency and reliability in delivering highly accurate object segmentation results. Most developed segmentation algorithms are tested only in laboratory configurations and thus lack a long-term testing phase in a real-life environment. This thesis contributes toward solving the problem of depth-based object segmentation for grasping purposes in indoor scenarios for service robots. The methods and algorithms presented in this thesis address some of the challenges of the object segmentation problem, namely: 1) building a general segmentation algorithm that can be used on different scenes in different applications; 2) developing an intelligent segmentation algorithm that is able to segment new and unknown objects in different support scenarios without prior knowledge or a special hardware configuration; and 3) proposing a robust vision-based robot control for the task of object grasping in a real workplace that delivers highly accurate results and ensures reliable and consistent robot operation in the long term.
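To illustrate the flavor of depth-based planar segmentation named in the keywords (Gradient of Depth, Planar segmentation), the following is a minimal sketch, not the author's actual algorithm: it treats pixels with a large depth-gradient magnitude as boundaries between surfaces and groups the remaining smooth pixels into connected patches, which can serve as candidate planar segments. The function name, the threshold value, and the synthetic depth image are all illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_smooth_regions(depth, grad_thresh=0.02):
    """Split a depth image into smooth (candidate planar) patches.

    Pixels whose depth-gradient magnitude exceeds grad_thresh are
    treated as surface boundaries; the remaining pixels are grouped
    into connected components. Threshold is an illustrative choice.
    """
    gy, gx = np.gradient(depth.astype(np.float64))   # per-pixel depth gradients
    grad_mag = np.hypot(gx, gy)                      # gradient magnitude
    smooth = grad_mag < grad_thresh                  # True inside smooth surfaces
    labels, num = ndimage.label(smooth)              # label connected smooth patches
    return labels, num

# Synthetic example: two fronto-parallel planes separated by a depth step.
depth = np.ones((50, 50))
depth[:, 25:] = 2.0                                  # step edge between the planes
labels, num = segment_smooth_regions(depth)          # num == 2 separate patches
```

In a full pipeline, each smooth patch would then be validated as planar (e.g. by fitting a plane to its 3D points) and combined with color cues, in the spirit of the color-depth integration the thesis keywords mention.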
|Keywords:||Object segmentation, Service robot, Book segmentation, library scenario, concavity description, Gradient of Depth, Planar segmentation, 3D Projection, color-depth integration|
|Issue Date:||4-Nov-2014|
|URN:||urn:nbn:de:gbv:46-00104193-12|
|Institution:||Universität Bremen|
|Faculty:||FB1 Physik/Elektrotechnik|
|Appears in Collections:||Dissertationen|
Items in Media are protected by copyright, with all rights reserved, unless otherwise indicated.