Declarative reasoning about space and motion in visual imagery - theoretical foundations and applications
|File:||doctoral_thesis-jakob_suchan-declarative_reasoning_about_space_and_motion_in_visual_imagery_PDFA.pdf||36.29 MB||Adobe PDF|
|Authors:||Suchan, Jakob||Supervisor:||Bhatt, Mehul||1. Expert:||Bhatt, Mehul||Experts:||Krieg-Brückner, Bernd|
|Abstract:|
Perceptual sensemaking of dynamic visual imagery, e.g., involving semantic grounding, explanation, and learning, is central to a range of tasks where artificial intelligent systems have to make decisions and interact with humans. Towards this, commonsense characterisations of space and motion encompassing spatio-temporal relations, motion patterns, and events provide an abstraction layer to perform semantic reasoning about (embodied) spatio-temporal interactions observed from visuospatial imagery.
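One family of such commonsense spatio-temporal relations is Allen's interval algebra, which classifies how two time intervals relate (before, meets, overlaps, during, etc.). The following is a minimal illustrative sketch in Python — not the thesis's implementation, whose relational models are realised declaratively in systems such as CLP and ASP:

```python
def allen_relation(a, b):
    """Return the Allen interval relation between intervals a and b,
    each a (start, end) pair with start < end. Illustrative sketch only."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:
        return "before"
    if a2 == b1:
        return "meets"
    if a1 < b1 < a2 < b2:
        return "overlaps"
    if a1 == b1 and a2 < b2:
        return "starts"
    if b1 < a1 and a2 < b2:
        return "during"
    if b1 < a1 and a2 == b2:
        return "finishes"
    if a1 == b1 and a2 == b2:
        return "equals"
    # Otherwise a lies on the "later" side of b: report the inverse relation.
    return allen_relation(b, a) + "-inverse"
```

Such qualitative relations, rather than raw coordinates, form the abstraction layer over which semantic reasoning about observed interactions is performed.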
This thesis develops: (1) a general theory about space and motion for representing and reasoning about interactions, founded in declaratively grounded models pertaining to space, time, space-time, motion, and events; and (2) a computational cognitive vision framework for perceptual sensemaking with visuospatial imagery, systematically developed to be compliant with declarative programming methods such as Constraint Logic Programming (CLP), Answer-Set Programming (ASP), and Inductive Logic Programming (ILP).
The thesis provides general tools and methods for declarative reasoning with visuospatial imagery, encompassing question-answering, abduction, and integration of reasoning and learning; contributed publications in this thesis focus on:
1. Grounded Semantic Interpretation and Question-Answering rooted in expressive declarative models of (embodied) visuospatial semantics to characterise (human) interactions with respect to their relational spatio-temporal structure;
2. Visuospatial Abduction, for hypothesising object interactions explaining perceived visuospatial dynamics, tightly integrating low-level (neural) visual processing and high-level (relational) abductive reasoning; and
3. Declarative Explainability and Inductive Generalisation based on declarative formalisations of visuospatial image characteristics grounded in (symbolic and subsymbolic) image elements and (neural) image features, thereby providing a relational abstraction layer suitable for relational (inductive) learning.
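To give a flavour of the abductive setting in contribution 2: when an object disappears from a detection stream, one candidate hypothesis explaining the perceived dynamics is that it became occluded by an object whose bounding box it overlapped. The sketch below is a deliberately simplified, hypothetical Python rendering of that idea — the thesis itself performs such abduction declaratively, tightly coupled with neural visual processing:

```python
def abduce_occlusion(tracks):
    """Given per-frame detections {frame: {obj: (x1, y1, x2, y2)}}, hypothesise
    an 'occluded_by' explanation whenever an object vanishes between two
    consecutive frames while its last known box overlapped another object's box.
    Hypothetical sketch, not the thesis's abduction framework."""
    def boxes_overlap(a, b):
        # Axis-aligned rectangle intersection test.
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    hypotheses = []
    frames = sorted(tracks)
    for f_prev, f_next in zip(frames, frames[1:]):
        for obj, box in tracks[f_prev].items():
            if obj not in tracks[f_next]:  # object vanished in the next frame
                for other, other_box in tracks[f_prev].items():
                    if other != obj and boxes_overlap(box, other_box):
                        hypotheses.append((obj, "occluded_by", other, f_next))
    return hypotheses
```

For example, if a cup's box overlaps a hand's box in frame 0 and the cup is no longer detected in frame 1, the sketch abduces `("cup", "occluded_by", "hand", 1)` as one explanation of the observed scene dynamics.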
These developed representation and reasoning capabilities are demonstrated and evaluated in the context of real-world applications (with requirements such as real-time processing, robustness against noise, etc.), where the processing and semantic interpretation of (potentially large volumes of) highly dynamic visuospatial imagery is central. Example applications included in this thesis encompass cognitive robotics, autonomous vehicles, and assistive technologies for human behaviour research.
|Keywords:||Declarative space and motion; Cognitive vision; Visuospatial sensemaking; Vision and semantics; Commonsense reasoning; Knowledge representation and reasoning; Human-centred AI|
|Issue Date:||22-Jun-2022||Type:||Dissertation|
|DOI:||10.26092/elib/1652||URN:||urn:nbn:de:gbv:46-elib60472|
|Institution:||Universität Bremen||Faculty:||FB3 - Mathematik/Informatik|
|Appears in Collections:||Dissertationen|
checked on Aug 17, 2022
This item is licensed under a Creative Commons License