
Law and Politics
Reference:

Creation and application of artificial intelligence for public purposes: a comparative legal analysis.

Atabekov Atabek Rustamovich

PhD in Economics

Associate Professor of the Department of Administrative and Financial Law, Legal Institute, Peoples' Friendship University of Russia

117198, Russia, Moscow, Miklukho-Maklaya str., 6

atabekoff1@mail.ru

DOI: 10.7256/2454-0706.2023.6.40848

EDN: IIGGCY

Received: 27-05-2023

Published: 06-06-2023


Abstract: The article presents a comparative analysis of existing approaches to the compliance of artificial intelligence (AI) designed for public purposes in foreign countries and Russia with current national strategies and regulatory approaches. The study identifies the basic problems in the field of transparency of AI decision-making; specifies challenges of an implicit regulatory nature arising for AI in the public sphere from the technical design of AI systems as created by developers; suggests theoretical and practical situations in which artificial intelligence fails to comply with the principles of designing AI on the basis of fundamental legal norms; and outlines possible compensatory legal measures to ensure the safe integration of artificial intelligence into the Russian public sphere. The subject of the study is the influence of the design of artificial intelligence on its subsequent application in the public sphere. The object of the study comprises the normative documents, recommendations and other documents regulating the application of artificial intelligence in public legal relations in Russia and foreign countries, as well as judicial practice, academic publications and analytical reports on the issues under study. The research methodology integrates a complex of modern philosophical, general scientific and special scientific methods of cognition, including the dialectical, systemic, structural-functional, hermeneutical, comparative legal and formal legal (dogmatic) methods. Special emphasis is placed on a comparative legal study of the problems of designing artificial intelligence and its subsequent application within the public law field, in the context of the problems inherent in the basic stage of its creation. Potential scenarios for regulating responsibility for AI actions are then identified. The research results in a set of measures that can be applied in the legislative and law enforcement practice of the authorities integrating artificial intelligence into the sphere of public relations in Russia, as well as in the scientific field, in order to determine subsequent vectors for minimizing AI bias resulting from incorrect technology design in violation of basic legal structures.


Keywords: artificial intelligence, electronic person, comparative legal research of AI, technical design, AI bias, secure AI, public law, administrative law, information law, law enforcement practice


Governments of different countries increasingly rely on technical and information systems in law enforcement practice. For example, government officials use computer systems to sentence defendants, approve or deny state benefits, predict the locations of future crimes, and impose entry bans at the state border [1-2]. In each case, technology is used to make important decisions regarding individual legal rights or the allocation of public resources.

One important insight is that technologies can have "values" (i.e., policies and technical rules) embedded in their design [3]. In this context, we mean the influence of technical design (convenience of the interface, its accessibility, its intelligibility to the user, legal awareness, etc.), which subsequently reproduces those values, including through the formation of certain preferences for specific social subgroups.

In this logic, an interesting starting point is the position of Lessig, who noted that the engineering architecture of a technological system can be considered an analogue of legal regulation (in the context of prohibitions and other legal aspects) [4]. The researcher observed that while laws regulate society explicitly and openly, technical design can similarly shape behavior or "values" for society, but often in a less obvious way.

Internet technology itself, one of the pillars of the current technological order, encountered problems with anonymity in the course of implementing its engineering solutions [5]. On the one hand, anonymity ensured freedom of speech and behavior; on the other, it made it difficult to identify criminals and extremists.

In this context, the issue of technical design becomes relevant: it acts as an implicit regulatory tool that lies beyond the control of the government and is left to developers.

Considering the strategic national documents of France on regulatory policy with respect to artificial intelligence (hereinafter, AI), attention should be paid to the emphasis on developing not regulatory but ethical requirements for AI development, including the following: the introduction of explainable tools for auditing AI systems, additional tools for monitoring the discriminatory impact of AI, safeguarding the exclusive role of human judgment and the impossibility of its substitution by AI, and other public measures (creation of expert councils, etc.) [6].

A different position is presented in the strategic document of the US President, which lacks these provisions; the entire regulatory emphasis is placed on creating the necessary financial conditions for R&D, as well as on the safe integration of AI into public administration from the standpoint of data security and manageability [7].

A separate section on regulatory approaches to this technology appears in the German national AI strategy [8]. In defining Germany's regulatory policy on AI, significant emphasis is placed on the observance of the fundamental rights that underlie the legal system of the Federal Republic of Germany and are enshrined in its Constitution, including, in particular, general freedom of action, protection of citizens' privacy, and citizens' control over their personal data. It is noted that current legislation needs to be revised in light of the technological development of AI and the presence of unresolved issues regarding this technology. Separately, it is proposed to consolidate control over technical design from the standpoint of user safety.

As for the strategy of AI implementation in the sphere of public legal relations in Russia, reference should be made to the Decree of the President of the Russian Federation [9] and the Decree of the Government of the Russian Federation [10].

The provisions of Presidential Decree No. 490 indicate the need for a favorable data access regime, AI testing, removal of administrative barriers for AI products, development of unified standardization systems, stimulation of investment, and development of ethical rules. A flexible AI regulation system is planned by 2030. On the regulation of AI technical design, more detailed reference should be made to the Decree of the Government of the Russian Federation, subsection 4 of Section 1 of which identifies the basic legal problems requiring solutions. Among them are the balance between personal data protection and AI training, the definition of the subject and boundaries of AI regulation, the identification of AI systems when interacting with a person, the legal delegation of decisions to AI, responsibility for AI actions, AI transparency, etc. Among the principles of AI regulation, this document fixes the developers' obligation to build legal compliance into the design of AI, i.e. the use of AI systems should not foreseeably lead to a violation of legal norms.

At the doctrinal level, the implementation of implicit strategies in practical law enforcement is noted by Solon Barocas and Andrew Selbst, who point to the racial or ethnic biases that may arise in algorithmic decision-making [11]. Joshua Kroll, Edward Felten, and Danielle Citron draw attention to the lack of accountability in computer-based decision-making, which is increasingly common in algorithmic government decisions [12-13].

Domestic scholars also identify significant problems arising from technical design and its regulatory and legal features. In his extensive research, D.V. Bakhteev notes the need, already at the design stage of development, to take into account respect for the basic constitutional rights of man and citizen, the competence of the developer, and a number of other requirements aimed at reducing the risks of misuse of AI [14]. V.E. Karpov et al. focus on the absence of ethical verification in the design and development of AI, which produces a practice of correcting AI actions after failures rather than a proactive response on the part of the developer [15]. A.V. Minbaleev [16] and O.A. Yastrebov and M.A. Aksenova [17] raise the further question of the possibility of recognizing a separate legal personality of AI as an "electronic person", which may be of fundamental importance at the conditional stage of its "birth".

Turning to practical examples, consider the US experience with the MiDAS technology. The Unemployment Insurance Agency of the State of Michigan implemented this algorithm in its law enforcement practice for the automatic detection of fraud related to unemployment benefits. A distinctive feature of this deployment was the absence of any procedure for challenging the AI's decision within the agency. The consequence of the algorithm's incorrect operation was a large circle of victims of the agency's actions, who went to court and appealed the decisions wrongfully made by the AI [18]. In the operative part of the court's decision, guilt for the actions of the AI was imputed to its developers.

The second notable example is SyRI, which was likewise used to detect fraud in the field of social security [19]. In this case, however, it was not so much the actions of the AI itself that were challenged in court as the "black box" problem [20]: the algorithm behind the machine-learning decision was not transparent. The reasoning part of the judgment rests on the violation of Article 8 of the European Convention on Human Rights when this technology was used by civil servants.

At the same time, it should be noted that the above cases also rest on a certain predisposition of civil servants and users toward AI as a supposedly more impartial decision-making system. In this regard, it seems appropriate to recall the provisions of the Arbitration Procedure Code of Russia, which explicitly allows the court to make decisions based on its inner conviction (Part 1, Article 71 of the APC of the Russian Federation [21]).

In addition, in discussing this issue, one cannot fail to recall a case in a related field of law in foreign practice involving the COMPAS program, which assesses the risk of recidivism of the defendant [22].

It should be understood that the formally neutral recommendation report actually masks a series of subjective judgments by the system's developers. These subjective choices include: which data sources to use to build a predictive model, which parts of the selected data to include or exclude, how to weight those data, which methods to use for data analysis, etc.
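To make these subjective choices concrete, the following is a minimal, purely hypothetical Python sketch (the feature names, weights, and threshold are illustrative assumptions, not drawn from COMPAS or any other system discussed here) showing how a developer's decisions about which data to include and how to weight them directly determine a "risk score" later presented as an objective number:

from dataclasses import dataclass


@dataclass
class CaseRecord:
    # Subjective choice 1: which data about a person enter the model at all.
    prior_offenses: int
    age: int
    employment_gaps: int  # a proxy feature that can encode social bias


# Subjective choice 2: how each feature is weighted.
WEIGHTS = {
    "prior_offenses": 2.0,
    "age": -0.05,
    "employment_gaps": 1.5,
}


def risk_score(case: CaseRecord) -> float:
    """A linear score that looks 'mathematically objective' but merely
    aggregates the developer's weighting decisions."""
    return (WEIGHTS["prior_offenses"] * case.prior_offenses
            + WEIGHTS["age"] * case.age
            + WEIGHTS["employment_gaps"] * case.employment_gaps)


# Subjective choice 3: where to draw the decision threshold.
HIGH_RISK_THRESHOLD = 5.0

case = CaseRecord(prior_offenses=2, age=25, employment_gaps=3)
score = risk_score(case)
print(score, "high risk" if score >= HIGH_RISK_THRESHOLD else "low risk")

Changing any one of these three choices (features, weights, threshold) changes who is flagged as "high risk", even though the output looks equally precise in every case.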

However, since the recommendation is generated by an automated system through a seemingly mechanistic process and is presented in strict computational form, the result may convey a misleading impression of almost mathematical objectivity within the framework of the law. Because of this aura of mechanistic objectivity, judges and other officials may give more weight to computer recommendations than to comparable human assessments. This human tendency to unreasonably attribute value neutrality to technological decision-making (as compared with people in a similar position) and to rely on the apparent accuracy of mathematical and data-driven analysis should be thoroughly investigated in the context of technological systems that affect the legal evaluation of evidence and judgments.

It should also be borne in mind that legal norms carry a degree of uncertainty both in establishing guilt and in determining its extent.

The Code of Administrative Offences of the Russian Federation, likewise, allows a judge, an authority, or an official to take aggravating and mitigating factors into account when determining the circumstances of an offense, which in some cases directly affects the type of administrative punishment, its amount, and its duration (Chapters 3 and 4 of the Administrative Code of the Russian Federation [23]).

In this regard, one of the main functions of officials is to resolve these uncertainties in applying laws to specific circumstances, taking into account the rule of law itself, the general practice of its application, judicial practice, and state policy. Society often does not know the final answer to such legal uncertainty until a specific legal official makes a binding, final determination, preferring one set of possible arguments and interpretations to others.

At the same time, it should be understood that the digitization of this process will proceed according to exactly the same patterns, including preferences for the "judgments" and arguments of one side over other approaches. It should also be kept in mind that the set of implicit knowledge and indicators to which an official, as a "person", may pay attention is not directly fixed by current legislation, the procedural rules on the evaluation of evidence, or other by-laws.

Based on the above, the following measures are proposed:

1. The introduction of additional administrative punishment tools for developers of AI systems who do not comply with the principles of designed-in legal compliance for AI, in the form of disqualification and a ban on carrying out such activities.

2. The formation and legislative consolidation of a register of AI technologies, within which the user can see the scope of application of the AI, its developer, the volume of users, its technical reliability (mathematical and other tools used, facts and extent of hacking), and its administrative track record from the point of view of law (the number of violations detected, judicial appeals, etc.); a sketch of such a register entry is given after this list.

3. When AI is used for public purposes of supervision and control, it is necessary to ensure the transparency of AI algorithms (for developers), maintain a procedure for challenging AI actions before an official, introduce mandatory additional qualification requirements for the education and knowledge of officials considering such complaints, and ban the use of foreign AI systems for public purposes.
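As a purely illustrative sketch of point 2 above (the field names and example values are the author's assumptions; no register format is prescribed by the documents discussed), such a register entry could be modeled as follows:

from dataclasses import dataclass, field
from typing import List


@dataclass
class AIRegistryEntry:
    system_name: str
    application_domain: str      # scope of application of the AI
    developer: str
    active_users: int            # volume of users
    # technical reliability indicators
    methods_used: List[str] = field(default_factory=list)  # mathematical and other tools
    known_breaches: int = 0      # facts and extent of hacking
    # administrative track record from the point of view of law
    violations_detected: int = 0
    decisions_appealed_in_court: int = 0


entry = AIRegistryEntry(
    system_name="BenefitCheck",  # invented example name
    application_domain="social security fraud screening",
    developer="Example Developer LLC",
    active_users=120_000,
    methods_used=["gradient boosting", "rule-based filters"],
    known_breaches=0,
    violations_detected=4_312,
    decisions_appealed_in_court=57,
)
print(entry)

Publishing such fields in machine-readable form would let both users and supervising officials compare the declared scope of an AI system with its actual administrative record.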

References
1. Citron, D. K. (2007). Technological due process. Wash. UL Rev., 85, 1249.
2. Roth, A. (2015). Trial by machine. Geo. LJ, 104, 1245.
3. Stilgoe, J., & Guston, D. (2016). Responsible research and innovation. MIT Press.
4. Lessig, L. (2006). Code 2.0: Code and other laws of cyberspace.
5. Goldberg, I., & Wagner, D. (1998). TAZ servers and the rewebber network.
6. Villani, C., et al. (2018). For a meaningful artificial intelligence: Towards a French and European strategy. Conseil national du numérique: [Website]. — URL: https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf (Accessed: 17.05.2023).
7. Maintaining American Leadership in Artificial Intelligence. A Presidential Document by the Executive Office of the President on 02/14/2019: [Website]. — URL: https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence (Accessed: 17.05.2023).
8. Germany AI Strategy Report: [Website]. — URL: https://ai-watch.ec.europa.eu/countries/germany/germany-ai-strategy-report_en (Accessed: 17.05.2023).
9. Decree of the President of the Russian Federation of October 10, 2019 No. 490 "On the development of artificial intelligence in the Russian Federation": [Website]. — URL: https://www.garant.ru/products/ipo/prime/doc/72738946/ (Accessed: 17.05.2023).
10. Decree of the Government of the Russian Federation of August 19, 2020 No. 2129-r On approval of the Concept for the development of regulation of relations in the field of artificial intelligence technologies and robotics for the period up to 2024: [Website]. — URL: https://www.garant.ru/products/ipo/prime/doc/74460628/ (Accessed: 17.05.2023).
11. Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California law review, 671-732.
12. Kroll, J. A. (2015). Accountable algorithms (Doctoral dissertation, Princeton University).
13. Citron, D. K. (2007). Technological due process. Wash. UL Rev., 85, 1249.
14. Bakhteev, D. V. (2019). Risks and ethical and legal models of using artificial intelligence systems. Legal Research, (11), 1-11.
15. Karpov, V. E., Gotovtsev, P. M., & Roizenzon, G. V. (2018). On the issue of ethics and artificial intelligence systems. Filosofiia i obshchestvo, 2, 84-105.
16. Minbaleev, A. V. (2018). Problems of regulating artificial intelligence. Bulletin of the South Ural State University. Ser. Law, 18(4), 82-87.
17. Yastrebov, O. A., & Aksenova, M. A. (2022). The law issues of impact of artificial intelligence on the administrative regime for combating money laundering and terrorism financing. Legal Policy and Legal Life, 3, 84-109.
18. Michigan’s MiDAS Unemployment System: Algorithm Alchemy Created Lead, Not Gold. : [Website]. — URL: https://spectrum.ieee.org/michigans-midas-unemployment-system-algorithm-alchemy-that-created-lead-not-gold#toggle-gdpr (Accessed: 17.05.2023).
19. SyRI legislation in breach of European Convention on Human Rights: [Website]. — URL: https://www.rechtspraak.nl/Organisatie-en-contact/Organisatie/Rechtbanken/Rechtbank-Den-Haag/Nieuws/Paginas/SyRI-legislation-in-breach-of-European-Convention-on-Human-Rights.aspx (Accessed: 17.05.2023).
20. District Court of the Hague, 6 March 2020, ECLI:NL:RBDHA:2020:865: [Website]. — URL: uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2020:1878 (Accessed: 17.05.2023).
21. Arbitration Procedure Code of the Russian Federation dated July 24, 2002 No. 95-FZ // Collection of Legislation of the Russian Federation, July 29, 2002, No. 30, Art. 3012.
22. State v. Loomis. Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing.: [Website]. — URL: https://harvardlawreview.org/print/vol-130/state-v-loomis/ (Accessed: 17.05.2023).
23.  "Code of the Russian Federation on Administrative Offenses" dated December 30, 2001 N 195-FZ // "Collected Legislation of the Russian Federation", 01/07/2002, N 1 (part 1), art. 1

First Peer Review


The subject of the study. In the peer-reviewed article "Analysis of the phenomenon of technical design for the purposes of public regulation of artificial intelligence: the experience of foreign countries and the prospects of Russia", the subject of research is the legal norms regulating public relations in the field of the creation and use of artificial intelligence. The author refers to domestic law and also analyzes the law of foreign countries in this field.

Research methodology. The main research method is comparative analysis. In addition, the author used other modern methods of scientific cognition: historical, formal-logical, legal-technical, formal-dogmatic, comparative jurisprudence, etc. The author also employed such scientific methods and techniques as deduction, modeling, systematization, and generalization.

The relevance of the research. The global digitalization of all spheres of human activity raises questions about the need for appropriate legal regulation of new types of public relations, which include the creation and use of artificial intelligence. This study is not only devoted to identifying problems and gaps in the law regulating public relations involving artificial intelligence, but also contains specific proposals for their resolution, which deserve the attention of legislators and law enforcement officers (for example, "the introduction of additional administrative punishment tools for developers of AI systems, if they do not comply with the principles of projected compliance with AI laws, in the form of disqualification and prohibition of carrying out these activities", or "when using AI for public purposes of supervision and control, it is necessary to ensure transparency of AI algorithms (for developers), maintain the procedure for challenging AI actions from an official, introduce mandatory additional qualification requirements for education and knowledge from officials considering these complaints, ban on the use of foreign AI systems for public purposes").

Scientific novelty. The article is characterized by scientific novelty, although the question of the legal regulation of the use of artificial intelligence for public purposes is not completely new to Russian legal science. However, the aspect chosen by the author, the analysis of the phenomenon of technical design for the purposes of public regulation of artificial intelligence on the basis of comparative jurisprudence, has elements of scientific novelty.

Style, structure, content. In general, the writing style of the work can be considered scientific. As a remark, it should be noted that the author uses abbreviations known to a narrow circle of readers (for example, AI, IT), shortens words, and departs from the official names of documents and positions (for example, the code (more correctly, the Code) on administrative offenses, the Decree of the President of Russia (more correctly, of the Russian Federation)). It is not at all clear who the "legal officials" are. Some sentences repeat words with the same root, for example: "However, it should be borne in mind that initially legal norms have a certain amount of uncertainty both in determining guilt and its size." Such sentences should be rephrased. The article is structured (introduction, main part and conclusion).

In the opinion of the reviewer, the topic is disclosed in terms of content: "The creation and application of artificial intelligence for public purposes: a comparative legal analysis." Still, the "phenomenon of technical design" should be explained. The term "phenomenon" is used in the title of the article but is not disclosed at all in the text. The author uses "technical design" and "technological design" as identical categories. Is this acceptable?

Bibliography. The author has studied a sufficient number of sources. However, the references are formatted in violation of the requirements of the bibliographic GOST.

Appeal to opponents. The article presents the points of view of other scientists on the creation and application of artificial intelligence; all appeals to opponents are correct. However, when referring to the author of an opinion in the text of the article, the order of writing is violated (initials should be indicated first, then the surname).

Conclusions, the interest of the readership. The article "Analysis of the phenomenon of technical design for the purposes of public regulation of artificial intelligence: the experience of foreign countries and the prospects of Russia" can be recommended for publication, provided that its title is corrected, its content is finalized, and all noted comments are addressed. The article is written on a relevant topic and may be of interest to specialists in the field of information law and digital law.

Second Peer Review


The subject of the research in the article submitted for review, as its title implies, is the creation and application of artificial intelligence (AI) for public purposes. The author carries out a comparative legal analysis of this problem and, as a result, develops recommendations on the use of AI in domestic law enforcement practice. The stated boundaries of the study are fully respected by the author.

The methodology of the research is not disclosed in the text of the article, but it is obvious that the author used universal dialectical, logical, formal legal and comparative legal research methods.

The relevance of the research topic chosen by the author is justified as follows: "Governments of different countries are increasingly using technical and information systems in the framework of law enforcement practice. For example, government officials use computer systems to sentence defendants, approve or refuse to provide state benefits, predict the locations of future crimes and impose a ban on entry across the state border [1-2]. In each case, technology is used to make important decisions regarding individual legal rights or the allocation of public resources." Additionally, the author should list the leading experts who have studied the problems raised in the article and indicate the degree to which those problems have been studied; this is partially done in a number of provisions of the main part of the work.

The article does not state directly what the scientific novelty of the study is. In fact, it manifests itself in the introduction into domestic legal science of data of direct interest to Russian legislators and law enforcement officers, as well as in the author's recommendations on improving the application of AI systems in practice. Thus, the article deserves the attention of the readership and makes a certain contribution to the development of domestic legal science.

The scientific style of the research is fully sustained by the author. The structure of the work is quite logical. In the introductory part of the article, the author substantiates the relevance of the chosen research topic. In the main part of the work, the author analyzes the features of the creation and application of AI systems in France, the USA, Germany and Russia, identifies the problems that arise, and provides practical examples illustrating certain provisions of the article. The final part of the work contains conclusions based on the results of the study. The content of the article fully corresponds to its title and does not give rise to any particular complaints.

The bibliography of the study comprises 23 sources (normative legal acts, monographs and scientific articles), including sources in English. From a formal and factual point of view, this is quite sufficient. The nature and number of sources used allowed the author to reveal the research topic with the necessary depth and completeness.

The provisions of the work are well-argued. There is an appeal to opponents, both general and particular (C. Villani et al.), and the scientific discussion is conducted correctly. The conclusions based on the results of the research are clearly structured, specific, and deserve the attention of the readership (the author suggests: "1. The introduction of additional administrative punishment tools for developers of AI systems, if they do not comply with the principles of projected compliance with AI laws, in the form of disqualification and a ban on carrying out these activities. 2. Formation and legislative consolidation of the register of AI technologies, within which the user can see the scope of AI, its developer, the volume of users, technical reliability (mathematical and other tools used, facts and volume of hacking), administrative activity from the point of view of law (the number of violations detected, judicial appeal, etc.). 3. When using AI for public purposes of supervision and control, it is necessary to ensure the transparency of AI algorithms (for developers), maintain the procedure for challenging AI actions from an official, introduce mandatory additional qualification requirements for education and knowledge from officials considering these complaints, and ban the use of foreign AI systems for public purposes").

The article needs additional proofreading; it contains spelling, punctuation, and syntactic errors. The article may be of interest primarily to specialists in the field of administrative law and information law, provided that it is slightly improved: the research methodology should be disclosed, the relevance of the chosen topic additionally justified, and violations in the formatting of the article eliminated.