Citation link:
https://doi.org/10.26092/elib/2739
Learning to improve arguments: automated claim quality assessment and optimization
File | Description | Size | Format
---|---|---|---
Skitalinska23-upd.pdf | | 5.55 MB | Adobe PDF
Authors: Skitalinska, Gabriella
Supervisors: Breiter, Andreas; Wachsmuth, Henning
1. Expert: Wachsmuth, Henning
2. Expert: Lauscher, Anne

Abstract:

Possessing strong argumentative writing skills is a crucial competency for academic and professional success. Such skills enable individuals to articulate their thoughts, beliefs, and opinions effectively while engaging in constructive discourse. Not only do they facilitate personal expression, but they also foster critical thinking and the ability to communicate persuasively. However, argumentation skills are challenging to acquire, especially for novice writers. This prompts the need for scalable computational solutions capable of guiding writers in improving their argumentative writing skills and assisting them in communicating their ideas effectively, regardless of their skill level. Despite recent advances in machine learning and natural language processing, and despite extensive past studies on argument quality, the question of how to automate argumentative writing support remains largely unexplored.

In this thesis, we aim to address this gap and explore the following research question: What makes a good argument, and how can we computationally model this knowledge to develop tools that support individuals in improving their arguments? To do so, we suggest using human revisions of argumentative texts as a basis for understanding and modeling the quality characteristics of arguments. We argue that, akin to how individuals learn through revision to recognize gaps in their reasoning, organize their ideas, and convey arguments clearly and concisely, computational models can be conditioned to develop the same competencies.

We make several contributions to the field of computational argumentation, specifically in automated argument assessment and generation. In particular, we introduce several new tasks that focus on identifying low-quality content, characterizing its flaws, and suggesting types of improvement to increase its quality. The differences between the tasks and their scope allow for a more nuanced and targeted assessment of argument quality, making them applicable to a wide range of content quality control applications in online moderation and education. To enable such assessments with cutting-edge computational methods, we compile the first large-scale corpus of argumentative claim revisions from a popular online debate platform. With this data in hand, we investigate the aspects, inter-dependencies, and attributes that shape the perceived quality of an argument, and we assess the impact of revision processes on the various dimensions of argument quality.

We find that working with revision-based data offers many opportunities and allows us to learn a more general notion of argument quality, one that generalizes well across the topics, aspects, and stances covered in argumentative text. However, it also comes with several challenges related to the representativeness and reliability of the data, topical bias in revision behaviors, appropriate model complexities and architectures, and the need for context when judging claims. In a detailed analysis, we outline the strengths and weaknesses of various approaches and strategies that exploit different types of knowledge specific to text and argument revisions to tackle these challenges. For example, we find that revision distance-based sampling can improve performance when identifying claims that require improvement, and that incorporating contextual information enables more accurate quality assessments.

Finally, keeping in mind the lessons learned from the quality assessment tasks, we address the problem of automatically generating improved versions of argumentative texts. Specifically, we propose a neural approach that first generates a diverse range of candidate claims and then selects the best candidate via a ranking process based on several argument and text quality metrics. We show empirically that our approaches can perform a diverse range of improvement types and successfully revise argumentative texts. Moreover, the results show that the proposed solutions generalize well to other domains, such as instructional texts, news, scientific articles, and encyclopedia entries.

With this work, we take another step towards automatically assessing the quality of argumentative texts and generating improved versions of them. We do so by adopting a new perspective that views argument quality through the lens of revisions. By proposing a set of methods that can guide writers, help them improve their argumentative writing skills, and produce more compelling and persuasive texts, we show that, with the right approach, the art of persuasion becomes an attainable endeavor.
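The generate-then-rank approach summarized above can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the implementation from the dissertation: the model name ("t5-base" stands in for a fine-tuned claim-revision model), the "improve:" task prefix, and the placeholder metrics in quality_score are all assumptions.

```python
# Minimal sketch of a generate-then-rank claim revision pipeline.
# Assumptions: "t5-base" stands in for a fine-tuned revision model,
# and quality_score uses toy signals instead of the argument and
# text quality metrics described in the thesis.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "t5-base"  # hypothetical stand-in for a revision model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def generate_candidates(claim: str, n: int = 8) -> list[str]:
    """Sample a diverse set of candidate revisions for one claim."""
    inputs = tokenizer("improve: " + claim, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,        # stochastic decoding instead of greedy
        top_p=0.95,            # nucleus sampling keeps candidates diverse
        num_return_sequences=n,
        max_new_tokens=64,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

def quality_score(candidate: str, original: str) -> float:
    """Rank candidates with simple placeholder signals: lexical novelty
    (did the text actually change?) balanced against a penalty for
    drifting far from the original length."""
    novelty = len(set(candidate.split()) - set(original.split()))
    drift = abs(len(candidate.split()) - len(original.split()))
    return novelty / (1.0 + drift)

def revise(claim: str) -> str:
    """Generate candidates, then return the highest-scoring one."""
    candidates = generate_candidates(claim)
    return max(candidates, key=lambda c: quality_score(c, claim))

print(revise("School uniform is good because everyone look the same."))
```

Sampling rather than greedy decoding is what makes the ranking step meaningful here: greedy decoding would return near-identical candidates, leaving the quality metrics nothing to choose between.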
Keywords: Computer Science; Computational Argumentation; Natural Language Processing
Issue Date: 13-Dec-2023
Type: Dissertation
DOI: 10.26092/elib/2739
URN: urn:nbn:de:gbv:46-elib76403
Institution: Universität Bremen
Faculty: Fachbereich 03: Mathematik/Informatik (FB 03)
Appears in Collections: Dissertationen
This item is licensed under a Creative Commons License