References
Aas, Kjersti, Martin Jullum, and Anders Løland. 2020. “Explaining Individual Predictions When Features Are Dependent: More Accurate Approximations to Shapley Values.” arXiv:1903.10464 [Cs, Stat], February. http://arxiv.org/abs/1903.10464.
Adler, Philip, Casey Falk, Sorelle A. Friedler, Gabriel Rybeck, Carlos Scheidegger, Brandon Smith, and Suresh Venkatasubramanian. 2016. “Auditing Black-Box Models for Indirect Influence.” arXiv:1602.07043 [Cs, Stat], November. http://arxiv.org/abs/1602.07043.
Basu, Debraj. 2020. “On Shapley Credit Allocation for Interpretability.” arXiv:2012.05506 [Cs, Stat], December. http://arxiv.org/abs/2012.05506.
Beckers, Sander. 2022. “Causal Explanations and XAI.” arXiv:2201.13169 [Cs], February. http://arxiv.org/abs/2201.13169.
Castro, Javier, Daniel Gómez, and Juan Tejada. 2009. “Polynomial Calculation of the Shapley Value Based on Sampling.” Computers & Operations Research 36 (5): 1726–30. https://doi.org/10.1016/j.cor.2008.04.004.
Chen, Hugh, Joseph D. Janizek, Scott Lundberg, and Su-In Lee. 2020. “True to the Model or True to the Data?” arXiv:2006.16234 [Cs, Stat], June. http://arxiv.org/abs/2006.16234.
Covert, Ian C. 2020. “Explaining by Removing: A Unified Framework for Model Explanation.” arXiv:2011.14878 [Cs], November. http://arxiv.org/abs/2011.14878.
Datta, Anupam, Shayak Sen, and Yair Zick. 2016. “Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems.” In 2016 IEEE Symposium on Security and Privacy (SP), 598–617. https://doi.org/10.1109/SP.2016.42.
Feldman, Barry. 2005. “The Proportional Value of a Cooperative Game.”
Frye, Christopher, Colin Rowat, and Ilya Feige. 2020. “Asymmetric Shapley Values: Incorporating Causal Knowledge into Model-Agnostic Explainability.” arXiv:1910.06358 [Cs, Stat], October. http://arxiv.org/abs/1910.06358.
Grömping, Ulrike. 2007. “Estimators of Relative Importance in Linear Regression Based on Variance Decomposition.” The American Statistician 61 (2): 139–47. https://www.jstor.org/stable/27643865.
Halpern, Joseph Y., and Judea Pearl. 2005a. “Causes and Explanations: A Structural-Model Approach. Part I: Causes.” The British Journal for the Philosophy of Science 56 (4): 843–87. http://www.jstor.org/stable/3541870.
———. 2005b. “Causes and Explanations: A Structural-Model Approach. Part II: Explanations.” The British Journal for the Philosophy of Science 56 (4): 889–911. https://www.jstor.org/stable/3541871.
Heskes, Tom, Evi Sijben, Ioan Gabriel Bucur, and Tom Claassen. 2020. “Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models.” arXiv:2011.01625 [Cs], November. http://arxiv.org/abs/2011.01625.
Janzing, Dominik, Lenon Minorics, and Patrick Blöbaum. 2019. “Feature Relevance Quantification in Explainable AI: A Causal Problem.” arXiv:1910.13413 [Cs, Stat], November. http://arxiv.org/abs/1910.13413.
Kilbertus, Niki, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Schölkopf. 2018. “Avoiding Discrimination Through Causal Reasoning.” arXiv:1706.02744 [Cs, Stat], January. http://arxiv.org/abs/1706.02744.
Kruskal, William. 1987. “Relative Importance by Averaging Over Orderings.” The American Statistician 41 (1): 6–10. https://doi.org/10.2307/2684310.
Kumar, I. Elizabeth, Suresh Venkatasubramanian, Carlos Scheidegger, and Sorelle Friedler. 2020. “Problems with Shapley-Value-Based Explanations as Feature Importance Measures.” arXiv:2002.11097 [Cs, Stat], June. http://arxiv.org/abs/2002.11097.
Lindeman, Richard, Peter Merenda, and Ruth Gold. 1980. Introduction to Bivariate and Multivariate Analysis. Glenview, IL: Scott, Foresman.
Lipovetsky, Stan, and Michael Conklin. 2001. “Analysis of Regression in Game Theory Approach.” Applied Stochastic Models in Business and Industry 17 (4): 319–30. https://doi.org/10.1002/asmb.446.
Lundberg, Scott, and Su-In Lee. 2017. “A Unified Approach to Interpreting Model Predictions.” arXiv:1705.07874 [Cs, Stat], November. http://arxiv.org/abs/1705.07874.
Merrick, Luke, and Ankur Taly. 2020. “The Explanation Game: Explaining Machine Learning Models Using Shapley Values.” arXiv:1909.08128 [Cs, Stat], June. http://arxiv.org/abs/1909.08128.
Miller, Tim. 2018. “Explanation in Artificial Intelligence: Insights from the Social Sciences.” arXiv:1706.07269 [Cs], August. http://arxiv.org/abs/1706.07269.
Mittelstadt, Brent, Chris Russell, and Sandra Wachter. 2019. “Explaining Explanations in AI.” Proceedings of the Conference on Fairness, Accountability, and Transparency, January, 279–88. https://doi.org/10.1145/3287560.3287574.
Owen, Art B. 2014. “Sobol’ Indices and Shapley Value.” SIAM/ASA Journal on Uncertainty Quantification 2 (1): 245–51. https://doi.org/10.1137/130936233.
Owen, Art B., and Clémentine Prieur. 2017. “On Shapley Value for Measuring Importance of Dependent Inputs.” SIAM/ASA Journal on Uncertainty Quantification 5 (1): 986–1002. https://doi.org/10.1137/16M1097717.
Pearl, Judea. 2009. Causality: Models, Reasoning, and Inference. 2nd ed. Cambridge University Press.
Pearl, Judea, Madelyn Glymour, and Nicholas P. Jewell. 2016. Causal Inference in Statistics: A Primer. Wiley.
Shapley, Lloyd S. 1953. “A Value for n-Person Games.” In Contributions to the Theory of Games, 2:307–17. Princeton, NJ: Princeton University Press.
Singal, Raghav, George Michailidis, and Hoiyi Ng. 2021. “Flow-Based Attribution in Graphical Models: A Recursive Shapley Approach.” SSRN Scholarly Paper ID 3845526. Rochester, NY: Social Science Research Network. https://doi.org/10.2139/ssrn.3845526.
Song, Eunhye, Barry L. Nelson, and Jeremy Staum. 2016. “Shapley Effects for Global Sensitivity Analysis: Theory and Computation.” SIAM/ASA Journal on Uncertainty Quantification 4 (1): 1060–83. https://doi.org/10.1137/15M1048070.
Štrumbelj, Erik, Igor Kononenko, and Marko Robnik-Šikonja. 2009. “Explaining Instance Classifications with Interactions of Subsets of Feature Values.” Data & Knowledge Engineering 68 (10): 886–904. https://doi.org/10.1016/j.datak.2009.01.004.
Štrumbelj, Erik, and Igor Kononenko. 2010. “An Efficient Explanation of Individual Classifications Using Game Theory.” The Journal of Machine Learning Research 11 (March): 1–18.
Štrumbelj, Erik, and Igor Kononenko. 2014. “Explaining Prediction Models and Individual Predictions with Feature Contributions.” Knowledge and Information Systems 41 (3): 647–65. https://doi.org/10.1007/s10115-013-0679-x.
Stufken, John. 1992. “On Hierarchical Partitioning.” The American Statistician 46 (1): 70–71. http://www.jstor.org/stable/2684415.
Sundararajan, Mukund, and Amir Najmi. 2020. “The Many Shapley Values for Model Explanation.” arXiv:1908.08474 [Cs, Econ], February. http://arxiv.org/abs/1908.08474.
Viswanathan, Vignesh, and Yair Zick. 2021. “Model Explanations via the Axiomatic Causal Lens.” arXiv:2109.03890 [Cs], September. http://arxiv.org/abs/2109.03890.
Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2018. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” arXiv:1711.00399 [Cs], March. http://arxiv.org/abs/1711.00399.
Wang, Jiaxuan, Jenna Wiens, and Scott Lundberg. 2021. “Shapley Flow: A Graph-Based Approach to Interpreting Model Predictions.” arXiv:2010.14592 [Cs, Stat], February. http://arxiv.org/abs/2010.14592.
Zhao, Qingyuan, and Trevor Hastie. 2021. “Causal Interpretations of Black-Box Models.” Journal of Business & Economic Statistics 39 (1): 272–81. https://doi.org/10.1080/07350015.2019.1624293.