Kim, Seung Kyu: The Explanatory Trap: Structural Incompatibility in Explainable AI
Explainable AI (XAI) is standardly defended as a tool for making intelligent systems transparent and accountable. This paper argues that the demand for explanation imposes a structural cost that standard defenses fail to acknowledge. Drawing on Pearl's distinction between interventional and retrospective causation, we argue that XAI misappropriates the causal apparatus designed for forward-looking intervention and redeploys it for backward-looking narrative justification. These two functions are, we contend, structurally incompatible.