Trust by Design: How Glass Box AI Shapes User Acceptance through Explainability

This thesis tests whether transparency and explainability increase user trust in AI. In a healthcare scenario, a transparent “glass-box” system is compared with a non-transparent “black-box” system to assess effects on trust and acceptance.

Danilo Ribeiro Da Silva, 2025

AI is increasingly used in high-stakes domains such as healthcare, yet its decision processes are often opaque. This study asks whether explainability improves user trust and acceptance. Building on Karg's Path Model of Trust in AI, it focuses on process-level explainability as a core design element.
A between-subjects online experiment randomly assigned participants to either (a) a glass-box system (an AI recommendation plus a rule-based explanation grounded in clinical guidelines) or (b) a black-box system (the same recommendation without explanation). Of 115 completed responses, 112 were analyzed (56 per group). Outcomes were measured on five dimensions: Performance, Process, Trust in AI, Intention to Use, and Propensity to Trust. Independent-samples t-tests, with Cohen's d as the effect size, quantified group differences.
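The analysis can be reproduced in outline with standard tools. The sketch below is illustrative only: it uses simulated ratings rather than the study's data, and the group means, standard deviations, and variable names are assumptions. It runs an independent-samples t-test and computes Cohen's d from the pooled standard deviation for two groups of n = 56.

# Minimal sketch of the reported procedure (not the thesis's analysis code).
# The simulated ratings below are illustrative assumptions, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
glass_box = rng.normal(5.4, 1.0, 56)  # simulated trust ratings, glass-box group
black_box = rng.normal(4.6, 1.0, 56)  # simulated trust ratings, black-box group

# Independent-samples t-test; df = 56 + 56 - 2 = 110
t_stat, p_value = stats.ttest_ind(glass_box, black_box)

# Cohen's d: mean difference divided by the pooled standard deviation
n1, n2 = len(glass_box), len(black_box)
pooled_sd = np.sqrt(((n1 - 1) * glass_box.var(ddof=1) +
                     (n2 - 1) * black_box.var(ddof=1)) / (n1 + n2 - 2))
d = (glass_box.mean() - black_box.mean()) / pooled_sd

print(f"t({n1 + n2 - 2}) = {t_stat:.2f}, p = {p_value:.3f}, d = {d:.2f}")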
The glass-box system scored higher on four of the five trust dimensions. The largest difference was in process understanding: participants felt much clearer about how the AI reached its recommendation (t(110) = 5.28, p < .001, d = 0.79). They also showed a stronger intention to use the transparent system (t(110) = 3.15, p = .002, d = 0.60). Smaller but significant effects appeared for perceived performance (t(110) = 2.48, p = .015, d = 0.47) and overall trust in AI (t(110) = 2.05, p = .042, d = 0.39). A small difference was also observed in Propensity to Trust (t(110) = 1.99, p = .048, d = 0.38); since this measures a stable disposition rather than a reaction to the system, the difference most likely reflects random variation between the groups. In sum, clear and relevant explanations made the AI more understandable, increased trust, and raised willingness to adopt it. The study recommends embedding explainability as a strategic element of AI governance, tailored to stakeholders' needs, to build trust and reduce regulatory and reputational risks. Limitations include the scenario-based design and the omission of several questionnaire items that did not suit this context.
Type of Work: Bachelor Thesis
Commissioned by: FHNW
Authors: Danilo Ribeiro Da Silva
Supervising Lecturers: Karg, Jona; Misyura, Ilya
Year of Publication: 2025
Language of the Work: English
Confidentiality: public
Degree Programme: Business Information Technology (Bachelor)
Degree Programme Location: Basel
Keywords: Trust in AI, XAI, Explainable AI, Glass Box AI, Black Box AI