Users & Machine Learning-based Curation Systems
Users are increasingly interacting with machine learning (ML)-based curation systems. YouTube and Facebook, two of the most visited websites worldwide, utilize such systems to curate content for billions of users. Contemporary challenges such as fake news, filter bubbles, and biased predictions make the understanding of ML-based curation systems an important and timely concern.
Despite their political, social, and cultural importance, practitioners' framing of machine learning and users' understanding of ML-based curation systems have not been investigated systematically. This is problematic because machine learning, as a novel programming paradigm in which a mapping between input and output is inferred from data, poses a variety of open research questions regarding users' understanding.
The first part of this thesis provides the first in-depth investigation of ML-based curation systems as socio-technical systems. The second part of the thesis contributes recommendations on how ML-based curation systems can and should be explained and audited.
The first part analyses practitioners' framing of ML by examining how the term machine learning, as well as ML applications and ML algorithms, is framed in tutorials. The thesis also investigates the beliefs that users hold about YouTube and introduces a framework of user beliefs about ML-based curation systems. Furthermore, it demonstrates how limited users' capabilities are for providing input data to ML-based curation systems. The second part evaluates different explanations of ML-based systems. This evaluation uncovers an explanatory gap between what is available to explain ML-based curation systems and what users need to understand such systems. Informed by this explanatory gap, the second part of the thesis demonstrates that audits of ML systems can be an important alternative to explanations. This demonstration of audits also uncovers a popularity bias enacted by YouTube's ML-based curation system. Based on these findings, the thesis recommends performing audits to ensure that ML-based systems act in the public's interest.
Keywords: Algorithmic Bias; Algorithmic Experience; Algorithmic Transparency; Algorithms; Fake News; Human-Centered Machine Learning; Human-Computer Interaction; Machine Learning; Artificial Intelligence; Recommender Systems; Social Media; Trust; User Beliefs; User Experience; Video Recommendations; YouTube
Fachbereich 03: Mathematik/Informatik (FB 03)