THE RIGHT TO EXPLANATION IN THE PROCESSING OF PERSONAL DATA WITH THE USE OF AI SYSTEMS

Authors

  • Ria Papadimitriou, Law School, Aristotle University of Thessaloniki

DOI:

https://doi.org/10.54934/ijlcw.v2i2.53

Keywords:

GDPR, AI, explainability, automated decision-making, right to explanation.

Abstract

Transparency is one of the basic principles enshrined in the General Data Protection Regulation (GDPR). Achieving transparency in automated decision-making, especially when artificial intelligence (AI) is involved, is a challenging task in many respects. The opaqueness of AI systems, usually referred to as the "black-box" phenomenon, is the main obstacle to explainable and accountable AI. Computer scientists are working on explainable AI (XAI) techniques in order to make AI more trustworthy. In the same vein, though from a different perspective, the European legislator provides in the GDPR a right to information when automated decision-making takes place. The data subject has the right to be informed about the logic involved and to challenge the automated decision. The GDPR therefore introduces a sui generis right to explanation in automated decision-making. In this light, the paper analyzes the legal basis of this right and the technical barriers involved.

References

Brkan, M., Do Algorithms Rule the World? Algorithmic Decision-Making in the Framework of the GDPR and Beyond (August 1, 2017). A revised version of this paper has been published in International Journal of Law and Information Technology, 11 January 2019, DOI: 10.1093/ijlit/eay017, Available at SSRN: https://ssrn.com/abstract=3124901 or http://dx.doi.org/10.2139/ssrn.3124901, p. 15.

Wachter, S., Mittelstadt, B., Floridi, L., Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation (December 28, 2016). International Data Privacy Law, 2017, Available at SSRN: https://ssrn.com/abstract=2903469 or http://dx.doi.org/10.2139/ssrn.2903469, p. 6.

Wachter, S., Mittelstadt, B., Floridi, L.

Wachter, S., Mittelstadt, B., Floridi, L.

Wachter, S., Mittelstadt, B., Floridi, L., p. 15.

Wachter, S., Mittelstadt, B., Floridi, L., p. 16-19.

Edwards, L., Veale, M., Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For (May 23, 2017). 16 Duke Law & Technology Review 18 (2017), Available at SSRN: https://ssrn.com/abstract=2972855 or http://dx.doi.org/10.2139/ssrn.2972855, p. 51.

Edwards, L., Veale, M., p. 52.

Mendoza, I., Bygrave, L. A., The Right Not to Be Subject to Automated Decisions Based on Profiling (May 8, 2017). Tatiani Synodinou, Philippe Jougleux, Christiana Markou, Thalia Prastitou (eds.), EU Internet Law: Regulation and Enforcement (Springer, 2017, Forthcoming), University of Oslo Faculty of Law Research Paper No. 2017-20, Available at SSRN: https://ssrn.com/abstract=2964855, p. 16-17.

Brkan, M., Do Algorithms Rule the World? Algorithmic Decision-Making in the Framework of the GDPR and Beyond (August 1, 2017), p. 15.

Goodman, B., Flaxman S., European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”, AI MAGAZINE, Vol. 38 No. 3: Fall 2017, p. 50-57.

Kaminski, M. E., The Right to Explanation, Explained (June 15, 2018). U of Colorado Law Legal Studies Research Paper No. 18-24, Berkeley Technology Law Journal, Vol. 34, No. 1, 2019, Available at SSRN: https://ssrn.com/abstract=3196985 or http://dx.doi.org/10.2139/ssrn.3196985.

Kaminski, M. E., p. 215.

Brkan, M., p. 20. The author offers a clear-cut and apt categorization of the obstacles to algorithmic transparency.

Wachter, S., Mittelstadt, B., Russell, C., Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR (October 6, 2017). Harvard Journal of Law & Technology, 31 (2), 2018, Available at SSRN: https://ssrn.com/abstract=3063289 or http://dx.doi.org/10.2139/ssrn.3063289, p. 842.

Wachter, S., Mittelstadt, B., Russell, C., p. 843 and 883.

Wachter, S., Mittelstadt, B., Russell, C.

Wachter, S., Mittelstadt, B., Russell, C., p. 860.

Edwards, L., Veale, M., p. 55-59.

Zhang, C., Cho, S., Vasarhelyi, M., Explainable Artificial Intelligence (XAI) in Auditing (Aug 1, 2022). International Journal of Accounting Information Systems, Available at SSRN: https://ssrn.com/abstract=3981918 or http://dx.doi.org/10.2139/ssrn.3981918.

Ferrario, A., Loi, M., How Explainability Contributes to Trust in AI (January 28, 2022). 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22), Available at SSRN: https://ssrn.com/abstract=4020557 or http://dx.doi.org/10.2139/ssrn.4020557.

Davis, R., Lo, A. W., Mishra, S. et al., Explainable Machine Learning Models of Consumer Credit Risk (January 12, 2022). Available at SSRN: https://ssrn.com/abstract=4006840 or http://dx.doi.org/10.2139/ssrn.4006840. See Desai, D.R., Kroll, J. A., Trust But Verify: A Guide to Algorithms and the Law (April 27, 2017). Harvard Journal of Law & Technology, Forthcoming, Georgia Tech Scheller College of Business Research Paper No. 17-19, Available at SSRN: https://ssrn.com/abstract=2959472.

Desai, D.R., Kroll, J. A., p. 47.

Desai, D.R., Kroll, J. A., p. 48.

Kroll, J. A., Huey, J., Barocas, S. et al., Accountable Algorithms (March 2, 2016). University of Pennsylvania Law Review, Vol. 165, 2017 Forthcoming, Fordham Law Legal Studies Research Paper No. 2765268, Available at SSRN: https://ssrn.com/abstract=2765268, p. 24.

Kroll, J. A., Huey, J., Barocas, S. et al., p. 23.

See Khedkar, S., Subramanian, V., Shinde, G. et al., Explainable AI in Healthcare (April 8, 2019), 2nd International Conference on Advances in Science & Technology (ICAST) 2019 on 8th, 9th April 2019 by K J Somaiya Institute of Engineering & Information Technology, Mumbai, India, Available at SSRN: https://ssrn.com/abstract=3367686 or http://dx.doi.org/10.2139/ssrn.3367686.

Published

2023-12-28