TY - GEN
T1 - A User-Centric Exploration of Axiomatic Explainable AI in Participatory Budgeting
AU - Hashemi, Maryam
AU - Darejeh, Ali
AU - Cruz, Francisco
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/10/5
Y1 - 2024/10/5
N2 - Explainable Artificial Intelligence (XAI) has been widely used to clarify the opaque nature of AI systems. One area where XAI has gained significant attention is Participatory Budgeting (PB). PB mechanisms aim to achieve a proper allocation with respect to both the votes collected based on users' preferences and the budget. An essential criterion for evaluating these mechanisms is their ability to satisfy desired properties known as axioms. However, even though there are complex voting rules that meet some axioms, concerns regarding transparency persist. In this study, we propose an approach to provide explanations in a PB setting by treating axioms as constraints and seeking outcomes that adhere to these constraints. This method enhances system transparency and explainability. Each potential allocation is accepted or rejected based on whether it satisfies the axioms, and the linear nature of the axioms reduces computational complexity. We evaluated our approach with real-world users to assess its effectiveness and helpfulness. Our pilot study shows that users generally find explanations helpful for understanding the system's decisions and perceive the outcomes as fairer. Additionally, users prefer general explanations over counterfactual ones.
AB - Explainable Artificial Intelligence (XAI) has been widely used to clarify the opaque nature of AI systems. One area where XAI has gained significant attention is Participatory Budgeting (PB). PB mechanisms aim to achieve a proper allocation with respect to both the votes collected based on users' preferences and the budget. An essential criterion for evaluating these mechanisms is their ability to satisfy desired properties known as axioms. However, even though there are complex voting rules that meet some axioms, concerns regarding transparency persist. In this study, we propose an approach to provide explanations in a PB setting by treating axioms as constraints and seeking outcomes that adhere to these constraints. This method enhances system transparency and explainability. Each potential allocation is accepted or rejected based on whether it satisfies the axioms, and the linear nature of the axioms reduces computational complexity. We evaluated our approach with real-world users to assess its effectiveness and helpfulness. Our pilot study shows that users generally find explanations helpful for understanding the system's decisions and perceive the outcomes as fairer. Additionally, users prefer general explanations over counterfactual ones.
KW - Explainable Artificial Intelligence
KW - Participatory Budgeting
KW - Social Choice
KW - Users
UR - https://www.scopus.com/pages/publications/85206155892
U2 - 10.1145/3675094.3677599
DO - 10.1145/3675094.3677599
M3 - Conference contribution
AN - SCOPUS:85206155892
T3 - UbiComp Companion 2024 - Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing
SP - 126
EP - 130
BT - UbiComp Companion 2024 - Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing
PB - Association for Computing Machinery, Inc
T2 - 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UbiComp Companion 2024
Y2 - 5 October 2024 through 9 October 2024
ER -