How do we trust AI? A Qualitative Analysis On Aspects Influencing The Perceived Trustworthiness Of AI
As artificial intelligence (AI) becomes increasingly embedded in high- and low-stakes decision-making processes, understanding the factors that shape user trust in these systems is critical.
Dorian Popic, 2025
Type of work: Bachelor Thesis
Commissioned by: University of Applied Sciences and Arts Northwestern Switzerland
Supervising lecturer: Grimberg, Frank
This thesis investigates the aspects influencing the perceived trustworthiness of AI by exploring how users evaluate AI systems across technical, axiological, and contextual dimensions.
The study adopts a twofold methodology: first, a structured literature review using Wolfswinkel et al.’s (2013) five-phase approach synthesizes current academic discourse on AI trust; second, six semi-structured interviews provide empirical insights into user perceptions.
Findings highlight that users rely heavily on technical indicators such as accuracy and explainability but also emphasize axiological aspects like fairness and privacy. Transparency emerged as a double-edged factor: it can support trust when tailored to users' knowledge levels but becomes irrelevant when poorly aligned. Trust judgments were found to be highly context-sensitive and shaped by external validation mechanisms such as institutional trust or social proof. Verification strategies, such as cross-checking AI outputs, play a crucial role in both building and diminishing trust over time, underscoring the dynamic nature of trust. The study concludes that perceived trust in AI is a multifaceted, evolving construct that extends beyond system performance, highlighting the need for socio-technical design and user-centered trust calibration to foster trustworthy AI.
Degree programme: Business Information Technology (Bachelor)
Keywords: Artificial Intelligence, Trust in AI, Perceived Trustworthiness, Adaptive Trust Calibration
Confidentiality: public