
Administrative and municipal law
Reference:

Ensuring the Autonomy of Decision-Making by Artificial Intelligence for the Purposes of Public Legal Relations.

Atabekov Atabek Rustamovich

PhD in Economics

Associate Professor of the Department of Administrative and Financial Law, Legal Institute, Peoples' Friendship University of Russia

117198, Russia, Moscow, Miklukho-Maklaya str., 6

atabekoff1@mail.ru
DOI: 10.7256/2454-0595.2023.1.39893

EDN: GYFCBG

Received: 27-02-2023

Published: 06-03-2023


Abstract: This article presents a comparative analysis of existing approaches to determining the basic conditions for ensuring the autonomy of AI in the context of public legal relations in Russia and foreign countries. The comparative analysis examines the core problems of AI decision-making transparency in world practice, practical cases of integrating non-transparent AI into public legal relations in foreign countries, and possible compensatory legal measures to ensure the safe integration of AI into public administration in Russia. The subject of the study is the formalization of the actions of artificial intelligence acting on behalf of a government body. The object of the study comprises normative documents, recommendations, and other instruments regulating the implementation of AI autonomy for the purposes of public legal relations in Russia and foreign countries, as well as judicial practice and academic publications on the issues under study. The research methodology combines a complex of modern philosophical, general scientific, and special scientific methods of cognition, including the dialectical, systemic, structural-functional, hermeneutical, comparative legal, and formal legal (dogmatic) methods. Particular emphasis is placed on a comparative legal study of the autonomy of AI performing public functions, drawing on the experience of various states. The measures proposed as a result of the study can be applied in the legislative and law enforcement practice of the authorities responsible for integrating artificial intelligence into the sphere of public relations in Russia.


Keywords:

artificial intelligence, electronic person, comparative legal research of AI, machine learning, counterfactual analysis, AI safety, public law, administrative law, information law, law enforcement practice

This article is an automatic translation of the original Russian text.

Regulating the legal relations arising from the phenomenon of artificial intelligence, and the transparency of its decision-making for the purposes of public legal relations, is strategically important both for Russia and for foreign countries. The positioning of AI within public legal relations plays a distinct role here: the issue remains systemic and complex both at the level of theoretical research and at the level of practical solutions concerning the clarity and transparency of decisions made by AI.

At the level of the German Ministry of Justice [1], the EU Commission [2], and individual US states [3], the need to resolve the "black box" problem is already being systematically considered.

Among representatives of foreign and domestic scholarship, questions also systematically arise, both regarding the technical feasibility of opening this "box" [4] and regarding the fact that the illusion of AI accountability to humans raises the question of the terminological certainty of such concepts as "explainability", "comprehensibility", and "meaningfulness" [5-8].

When considering the technical and legal aspects of the functioning of AI for the purposes of public authority, the first issue is ensuring the transparency of AI, since the processed data have a dynamic structure, which is expressed in the absence of a direct relationship between input data and "output" decisions (outcomes) [9]. The reason is that AI systems (especially those based on machine learning) use a large number of ways of analyzing data and achieving goals, with different algorithmic principles and different approaches to their interpretation [10]. Examples include the interpretation of typewritten text, of a logarithmic formula for assessing damage (provided by one or another subordinate institution), or of the evidence presented by a participant in administrative proceedings, which may include oral explanations using non-standard turns of speech, and the way AI processes such data.

In addition, the speed and constancy of data processing form a conditionally "living organism" with a continuously dynamic data structure, potentially updated with each interaction with the user [11-12].

At the same time, at the level of doctrinal perception of AI for the purposes of public legal relations, it should also be noted that AI and its basic component, machine learning, work from a sample of cases that were previously processed by the relevant official.

It should be understood that the general codification of administrative law and procedure makes it possible, for the most part, to put the activities of employees of the relevant ministries and departments on "algorithmic rails". At the same time, within each case there are both common features (inherent in the particular class of cases provided for by the codes) and distinctive features that form subsets of cases with generalizable properties.

It is necessary to understand that the generalization of administrative cases, as well as their personalization for AI purposes, generates a number of problems, such as the selection effect and embedded bias.

Analyzing the selection effect, we see that an algorithm trained on one database may cease to perform its public function effectively for the purposes of another department, or when generating data in the same segment but from a random sample of the source data [13]. The question of data bias comes down to how widely the applicable norm is actually used in administrative practice and whether a representative sample of facts has been formed, a point systematically highlighted by both domestic and foreign scholars [14-16].
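The selection effect can be illustrated with a minimal toy sketch (the agencies, score ranges, and threshold model below are invented for illustration and are not taken from any real administrative system): a decision cutoff tuned on one agency's labeled case sample degrades when applied to a second agency whose case scores follow a shifted distribution.

```python
# Toy illustration of the selection effect: a cutoff fitted on one
# agency's labeled case sample loses accuracy on another agency whose
# scores are distributed differently. All numbers are invented.
import random

random.seed(0)

def fit_threshold(cases):
    """Pick the cutoff that best separates labeled cases (toy 1-D model)."""
    best_t, best_acc = 0.0, 0.0
    for t in (x / 100 for x in range(101)):
        acc = sum((score > t) == label for score, label in cases) / len(cases)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(cases, t):
    return sum((score > t) == label for score, label in cases) / len(cases)

# Agency A: violations (True) score above 0.6, other cases below 0.5.
agency_a = ([(random.uniform(0.6, 1.0), True) for _ in range(50)]
            + [(random.uniform(0.0, 0.5), False) for _ in range(50)])
# Agency B: same task, but its score scale is shifted downward.
agency_b = ([(random.uniform(0.3, 0.6), True) for _ in range(50)]
            + [(random.uniform(0.0, 0.25), False) for _ in range(50)])

t = fit_threshold(agency_a)
acc_a = accuracy(agency_a, t)  # separates agency A's cases well
acc_b = accuracy(agency_b, t)  # noticeably worse on agency B
print(t, acc_a, acc_b)
```

The point of the sketch is only that nothing in the fitted rule "knows" it was tuned to one department's sampling of cases; transplanted to another department, the same rule silently misclassifies a large share of cases.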

The following three tools, individually or in combination, can provide a basic solution to these problems:

  1. Generation of random situations/samples, which allows the practice of AI decision-making to be formulated most accurately, especially if it is based on the widest possible range of cases handled by various officials within the same process [17].
  2. Counterfactual analysis of the case [18], which generates branches of the decisions made and identifies the segments that influence AI decision-making; the information noise created by this sample and its branches minimizes the number of unconventional decisions made by a person.
  3. Introduction of an "AI + human" pairing, in which AI acts as a means of monitoring random samples, highlighting problem areas to an official for verification and evaluation of the evidence.
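The counterfactual analysis mentioned in item 2 can be sketched minimally as follows. The decision rule, feature names, and weights below are hypothetical illustrations: given a decision model, each input feature is perturbed in turn, and the features whose perturbation flips the decision are the "branches" that actually drive the outcome and therefore merit human scrutiny.

```python
# Minimal sketch of counterfactual analysis of a decision (tool no. 2).
# The decision rule, feature names, and weights are hypothetical.

def decide(case: dict) -> bool:
    """Stand-in for an AI decision model (True = adverse decision)."""
    score = 2.0 * case["missed_deadlines"] + 1.5 * case["income_mismatch"]
    return score > 2.5

def counterfactual_branches(case: dict, deltas: dict) -> dict:
    """Perturb each feature by the given delta and report whether the
    model's decision flips; flipping features are the ones driving
    the decision."""
    baseline = decide(case)
    flips = {}
    for feature, delta in deltas.items():
        perturbed = dict(case)
        perturbed[feature] += delta
        flips[feature] = decide(perturbed) != baseline
    return flips

case = {"missed_deadlines": 1, "income_mismatch": 1}
print(decide(case))  # baseline decision
print(counterfactual_branches(case, {"missed_deadlines": -1,
                                     "income_mismatch": -1}))
```

In this toy case both features flip the decision when removed, so a reviewing official would be shown both as decisive factors rather than an opaque overall score.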

It should be understood that this pairing can still preserve the risks of unethical decision-making, of unreliability of the sample when an AI decision is made (the human factor), and of a potential impact of AI on the basic rights provided for by the Constitution (Part 2 of Article 19). At the same time, these measures will later make human biases and mistakes more noticeable.

In addition, it should be borne in mind that AI technology can be used not only for the benefit of the state but also in opposition to the measures currently provided for by legislation.

The whole complex of measures involved in making an autonomous AI decision can be identified empirically, and, as a result, strategic countermeasures can be formulated consistent with the implementation of the principles of legal certainty [19].

Among these, one can highlight the creation of information noise in the AI databases used (especially those located beyond the perimeter of the relevant authorities), as well as the formation of behavioral adaptation, whereby a monitored subject who is not at risk of being included in the sample becomes less "law-abiding".

These doctrinal studies of the integration of AI into the sphere of public legal relations are reflected in the practical public activities of the authorities of various countries:

The Dutch authorities, as part of combating fraud in the field of social security, used the SyRI system [20], which drew on a large array of data in its work but was not sufficiently transparent in its data processing and in the empirical results it provided. As a result, the authority was banned from using the algorithm because its principles were inconsistent with Article 8 of the ECHR (European Convention on Human Rights) [21].

The Ministry of Labor and Social Policy of Poland used a highly specialized AI that was supposed to categorize potential recipients of unemployment benefits [22]. The AI's binary decisions were subject to verification by the department's employees; however, through negligence, in almost 100% of cases the AI's position was not disputed by the employee. Following an analysis of the AI's actions for transparency, the Constitutional Court of Poland found the product unconstitutional [23].

The Unemployment Insurance Agency of the State of Michigan (USA) used a tool similar to SyRI for detecting fraud related to receiving unemployment benefits. This AI made fully autonomous decisions on the recovery of benefits, without any possibility of appealing a decision within the agency [24], which subsequently led to litigation and corresponding penalties for the AI developer [25].

Based on the above, in order to form a balanced approach to the use of AI for the purposes of public legal relations, the following is proposed:

  1. Identify the sectors of AI application that do not entail a violation of citizens' basic constitutional rights.
  2. Form test data sandboxes within the relevant federal executive bodies (FOIV), with the possibility of rapid response to failures, errors, etc., committed by the AI and by the relevant FOIV employees.
  3. Use AI on the most representative sample of cases, eliminating the risk of predictable prosecution caused by a poor-quality sample of cases or weak administrative practice within the given sector of legal relations.
  4. Introduce counterfactual explanations of AI into the actions of all participants in the process, in order to remove the problems caused by the technological nature of AI.
  5. Control the risks associated with the formation of countermeasures by malicious actors, such as the creation of information noise in external databases or the formation of behavioral adaptation in the target category of citizens.
  6. When implementing fully autonomous AI, take into account the predominantly negative international experience with non-transparent and unconstitutional AI decision-making mechanisms, introducing appropriate compensatory mechanisms into the Russian law enforcement field to ensure the transparency of decision-making and its verification by a relevant FOIV employee.
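The compensatory "AI + human" verification mechanism in item 6 can be sketched as a simple routing rule (the confidence threshold and case structure are hypothetical illustrations, not a prescribed design): a model decision is applied automatically only when its confidence is high; everything else is queued for an official's review.

```python
# Sketch of a compensatory routing mechanism: high-confidence AI
# decisions are auto-applied, the rest go to a human review queue.
# Threshold and case fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: int
    deny: bool          # the model's proposed outcome
    confidence: float   # model's confidence in [0, 1]

def route(decisions, threshold=0.9):
    """Split model decisions into auto-applied and human-review queues."""
    auto, review = [], []
    for d in decisions:
        (auto if d.confidence >= threshold else review).append(d)
    return auto, review

batch = [Decision(1, True, 0.97), Decision(2, False, 0.55),
         Decision(3, True, 0.88)]
auto, review = route(batch)
print([d.case_id for d in auto])    # auto-applied
print([d.case_id for d in review])  # sent to an official
```

The design choice here is that the official sees exactly the borderline cases, which is where the negative foreign experience described above (Poland, Michigan) showed that unverified automation does the most harm.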
References
1. Zwischenbericht der Arbeitsgruppe "Digitaler Neustart" zur Frühjahrskonferenz der Justizministerinnen und Justizminister am 6. und 7. Juni 2018 in Eisenach: [website]. — URL: www.justiz.nrw.de/JM/schwerpunkte/digitaler_neustart/zt_fortsetzung_arbeitsgruppe_teil_2/2018-04-23-Zwischenbericht-F-Jumiko-2018%2D%2D-final.pdf (accessed: 21.02.2023).
2. Proposal for a Regulation on promoting fairness and transparency for business users of online intermediation services (COM(2018) 238 final / 2018/0112 (COD)): [website]. — URL: https://eur-lex.europa.eu/procedure/EN/2018_112 (accessed: 21.02.2023).
3. The initial proposal (Int. 1696–2017) would have added the text cited above to Section 23-502 of the Administrative Code of the City of New York. However, the law that was finally passed only established a task force designated to study how city agencies currently use algorithms: [website]. — URL: legistar.council.nyc.gov/LegislationDetail.aspx?ID=3137815&GUID=437A6A6D-62E1-47E2-9C42-461253F9C6D0 (accessed: 21.02.2023).
4. Burrell J. How the machine 'thinks': Understanding opacity in machine learning algorithms // Big Data & Society. – 2016. – Vol. 3. – No. 1. – Art. 2053951715622512.
5. Ananny M., Crawford K. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability // New Media & Society. – 2018. – Vol. 20. – No. 3. – P. 973-989.
6. Fenster M. The Transparency Fix: Secrets, Leaks, and Uncontrollable Government Information. – Stanford University Press, 2017.
7. Grey C., Costas J. Secrecy at Work: The Hidden Architecture of Organizational Life. – Stanford University Press, 2016.
8. Martynov A. V., Bundin M. V. On the Legal Principles of Exploiting Artificial Intelligence in Executing Control and Supervisory Activities by Executive // Journal of Russian Law. – 2020. – No. 10. – P. 59-75.
9. Leese M. The new profiling: Algorithms, black boxes, and the failure of anti-discriminatory safeguards in the European Union // Security Dialogue. – 2014. – Vol. 45. – No. 5. – P. 494-511.
10. Bundesanstalt für Finanzdienstleistungsaufsicht (2018). Big Data trifft auf künstliche Intelligenz. Herausforderungen und Implikationen für Aufsicht und Regulierung von Finanzdienstleistungen: [website]. — URL: www.bafin.de/SharedDocs/Downloads/DE/dl_bdai_studie.html (accessed: 21.02.2023).
11. Tutt A. An FDA for Algorithms // Administrative Law Review. – 2017. – Vol. 69. – P. 83.
12. IBM. Continuous relevancy training: [website]. — URL: console.bluemix.net/docs/services/discovery/continuous-training.html#crt (accessed: 21.02.2023).
13. Hermstrüwer Y. Artificial intelligence and administrative decisions under uncertainty // Regulating Artificial Intelligence. – 2020. – P. 199-223.
14. Lehr D., Ohm P. Playing with the data: What legal scholars should learn about machine learning // UC Davis Law Review. – 2017. – Vol. 51. – P. 653.
15. Vorobyova I. B. Ethical aspects of the use of artificial intelligence systems in crime investigation // Bulletin of the Saratov State Law Academy. – 2022. – No. 4 (147). – P. 162-172.
16. Kharitonova Yu. S., Savina V. S., Pagnini F. Artificial Intelligence's Algorithmic Bias: Ethical and Legal Issues // Perm University Herald. Juridical Sciences. – 2021. – Issue 53. – P. 488-515. (In Russ.) DOI: 10.17072/1995-4190-2021-53-488-515.
17. Cowgill B., Tucker C. Algorithmic bias: A counterfactual perspective // NSF Trustworthy Algorithms. – 2017.
18. Lewis D. Counterfactuals. – Cambridge, MA: Harvard University Press, 1973.
19. Constitutional and legal protection of entrepreneurship: current aspects (based on the decisions of the Constitutional Court of the Russian Federation in 2018-2020) (approved by the decision of the Constitutional Court of the Russian Federation on 17.12.2020): [website]. — URL: http://www.consultant.ru/document/cons_doc_LAW_374913/ (accessed: 21.02.2023).
20. SyRI legislation in breach of European Convention on Human Rights: [website]. — URL: https://www.rechtspraak.nl/Organisatie-en-contact/Organisatie/Rechtbanken/Rechtbank-Den-Haag/Nieuws/Paginas/SyRI-legislation-in-breach-of-European-Convention-on-Human-Rights.aspx (accessed: 21.02.2023).
21. District Court of The Hague, 6 March 2020, ECLI:NL:RBDHA:2020:865: [website]. — URL: uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2020:1878 (accessed: 21.02.2023).
22. Profiling the Unemployed in Poland: Social and Political Implications of Algorithmic Decision Making: [website]. — URL: https://panoptykon.org/sites/default/files/leadimage-biblioteka/panoptykon_profiling_report_final.pdf (accessed: 21.02.2023).
23. Koniec profilowania bezrobotnych [The end of profiling the unemployed]: [website]. — URL: https://www.prawo.pl/kadry/bezrobotni-nie-beda-profilowani-utrudnialo-to-ich-aktywizacje,394701.html (accessed: 21.02.2023).
24. Michigan's MiDAS Unemployment System: Algorithm Alchemy Created Lead, Not Gold: [website]. — URL: https://spectrum.ieee.org/michigans-midas-unemployment-system-algorithm-alchemy-that-created-lead-not-gold#toggle-gdpr (accessed: 21.02.2023).
25. Cahoo v. SAS Analytics Inc., Nos. 18-1295/1296: [website]. — URL: https://casetext.com/case/cahoo-v-sas-analytics-inc (accessed: 21.02.2023).

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

A REVIEW of the article "Ensuring the autonomy of decision-making by artificial intelligence for the purposes of public legal relations".

The subject of the study. The article under review is devoted to ensuring "... the autonomy of decision-making by artificial intelligence for the purposes of public legal relations." The author has chosen a specific subject of research: the issues are investigated from the standpoint of information and administrative law, and the author notes that "... the issue of AI positioning in the sphere of public legal relations occupies a separate role; the issue remains systemic and complex both at the level of theoretical research and of practical solutions in the field of clarity and transparency of decisions made by AI." Normative legal acts and foreign judicial practice relevant to the purpose of the study are examined. A large body of Russian and foreign (English-language) scientific literature on the stated issues is also studied and summarized; analysis of, and discussion with, the cited authors is present. The author notes, in particular: "When considering the technical and legal aspect of the functioning of AI for the purposes of public authority, the first aspect is to ensure the transparency of AI ...".

Research methodology. The purpose of the study is determined by the title and content of the work: "... the general codification of administrative law and process allows, for the most part, to put the activities of employees of relevant ministries and departments on 'algorithmic rails'"; "... at the level of doctrinal perception of AI for the purposes of public legal relations, it should also be noted that AI and its basic component, machine learning, work from the sample of cases that were previously worked out by the relevant official." The objectives can be characterized as the consideration and resolution of certain problematic aspects related to these issues and the use of relevant experience. Proceeding from the goals and objectives set, the author chose a methodological basis for the study: a set of specific scientific and special legal methods of cognition. In particular, the methods of analysis and synthesis made it possible to generalize approaches to the topic and informed the author's conclusions. The most important role was played by the special legal methods: the author applied the formal legal and comparative legal methods, which made it possible to analyze and interpret the norms of Russian legislation and foreign judicial practice and to compare various documents. Among the resulting conclusions: "It is necessary to identify sectors of AI application that do not entail a violation of the basic constitutional rights of citizens," etc. Thus, the methodology chosen by the author is fully adequate to the purpose of the article and allows many aspects of the topic to be studied.

The relevance of the stated issues is beyond doubt. The topic is important both worldwide and in Russia, and from a legal point of view the work can be considered relevant; the author notes that "... it must be borne in mind that AI technology can be used not only for the benefit of the state, but also in opposition to current measures provided for by legislation." An analysis of the opponents' work duly follows, and the author demonstrates command of the material. Scientific research in the proposed field is therefore only to be welcomed.

Scientific novelty. The scientific novelty of the article is beyond doubt. It is expressed in the author's specific scientific conclusions, among them: "... the issue of generalizing administrative cases, as well as their personalization for AI purposes, raises a number of problems, such as problems of the selection effect and formed bias." These and other theoretical conclusions can be used in further research, so the materials of the article may be of interest to the scientific community.

Style, structure, content. The subject of the article corresponds to the specialization of the journal "Administrative and Municipal Law", as it is devoted to ensuring "... the autonomy of decision-making by artificial intelligence for the purposes of public legal relations." The article contains an analysis of the opponents' scientific works; the author notes that similar questions have already been raised, uses the opponents' materials, and engages in discussion with them. The content of the article corresponds to its title: the author considered the stated problems and achieved the goal of the research. The quality of the presentation of the study and its results should be recognized as good. The subject, objectives, methodology, results, and scientific novelty follow directly from the text of the article. The design of the work meets the requirements for this kind of work; no significant violations were found, apart from minor infelicities of wording such as "as common features" and "predicted persecution due to poor quality".

Bibliography. The bibliography is quite complete and contains the publications to which the author refers, which allows the author to correctly identify problems and put them up for discussion. The quality of the literature used should be highly appreciated: the cited works correspond to the research topic, are sufficient in number, and contribute to the disclosure of many aspects of the topic.

Appeal to opponents. The author conducted a serious analysis of the current state of the problem under study, describes the opponents' different points of view, argues for the position he considers more correct on the basis of their work, and offers solutions to the problems.

Conclusions, interest of the readership. The conclusions are logical and specific: "When implementing fully autonomous AI, it is important to take into account the prevailing negative international experience in terms of non-transparent and unconstitutional AI decision-making mechanisms, with the introduction of appropriate compensatory mechanisms in the Russian law enforcement field to ensure transparency of decision-making and its verification by a relevant FOIV employee," etc. The article in this form may be of interest to the readership given the systematic character of the author's positions on the stated issues. Summing up the positive and negative sides of the article, I recommend it for publication.