
Legal Studies
Reference:

Artificial Intelligence as an Element of a Digital Relationship

Ryzhkova Ekaterina Aleksandrovna

PhD in Law

Associate Professor, Moscow State Institute of International Relations (MGIMO) of the Ministry of Foreign Affairs of the Russian Federation

119454, Russia, Moscow, Prospekt Vernadskogo, 76

ryjkova@inbox.ru
Ryzhkova Evgeniya Konstantinovna

Master's Degree Student, Department of Administrative and Financial Law, Moscow State Institute of International Relations of the Ministry of Foreign Affairs of Russia

119454, Russia, Moscow, Prospekt Vernadskogo, 76

ryzhkova_e_k@my.mgimo.ru

DOI: 10.25136/2409-7136.2022.8.38546

EDN: EDPPGZ

Received: 29-07-2022

Published: 05-08-2022


Abstract: Today, the essence of digital law largely comes down to regulating a new digital form of old relations. Against this background, artificial intelligence should be regarded as a genuine innovation, the regulation of which is only at an initial stage. Because the training of artificial intelligence has already advanced to the level of self-learning, the classical approach to defining the subject of digital relations appears debatable. The question of endowing artificial intelligence with legal personality will therefore become one of the key issues of the coming decade. The rapid development and application of new technologies demands thorough regulation, above all in matters of liability. The analysis of the development of information technologies in the field of artificial intelligence presented in this paper leads to the conclusion that clear legal regulation is a prerequisite for the development of relations involving artificial intelligence. The evolution of digital technologies, their specificity and their wide application in all spheres of society call for innovations in law. It seems necessary to develop the theoretical and legal foundations of this technical concept and to translate digital algorithms into legal norms in order to protect the rights and freedoms of man and citizen, public and private interests, society and the state.


Keywords:

artificial intelligence, machine learning, algorithm, electronic person, robot, subject of a digital legal relationship, digital law, digital relation, law and IT, information technology

This article has been translated automatically. The original text of the article is available here.

The concept of "artificial intelligence" has penetrated deeply into modern reality. Yet the first mentions of phenomena reflecting its actual content can be found in ancient Greek mythology, above all Talos, Pandora and the group of maidservants created by Hephaestus at the behest of Zeus [1, pp. 7-32, 156-178]. It is notable that the myths in which they appear end badly: machines endowed with intelligence were from the outset perceived negatively by people. A similar attitude towards man-made machines imitating human behavior persisted throughout the Middle Ages, fueled by the church.

We should also note the golem (Hebrew גולם, "embryo"; literally "formless, unshaped matter") [2, p. 104], a notion already found in the Tanakh: "Your eyes have seen me unformed, and all the days that were fashioned are written in Your book, and one of them is (signified) to me" (Psalm 139:16). Later this concept became widely used, first in Jewish mythology and then in the Western European tradition. In Jewish folklore, a golem is a mass of matter in human form into which occultists breathed life by means of a sacred name or by drawing down astral influence from the stars. Eighteenth-century writers called such a "man" a "golem" [3, pp. 36-37].

A significant contribution to the understanding and formation of the philosophical foundations of artificial intelligence was made by the French thinker René Descartes in his mechanistic theory. Descartes formulated the idea that any living organism is a mechanism governed by the soul [4, pp. 3-72], while the soul itself is an independent substance whose only property is the ability to think. One of the questions the theory raised was whether the behavior of a living being can be "programmed". Descartes argued that it can, using the example of hunting dogs trained to run toward the game on hearing a gunshot rather than run away from it, which would be their "natural" behavior. Extrapolating this to humans, far more developed than dogs, he hypothesized that a person can control his own behavior, and that the body alone is responsible for this control, without any participation of the soul. Thus the mechanistic theory put forward, for the first time, the hypothesis that the behavior of a mechanism (or computer) is shaped not so much by its internal structure (or algorithms) as by the influences to which it is subjected.

Science and technology developed in parallel with the myths. As early as around 400 BC the first automaton was created: the flying pigeon of Archytas of Tarentum [5, p. 450]. The works of Mozi and Liezi describe outlandish machines; the inventor Yan Shi is said to have presented the king with a life-size human figure, a mechanical "worker" made of leather, wood and artificial organs [6, p. 4]. A great contribution to the development of robotics was made by the thirteenth-century Arab scholar al-Jazari, author of the treatise "Kitab fi ma'rifat al-hiyal al-handasiyya" (The Book of Knowledge of Ingenious Mechanical Devices), in which he described the design of some fifty mechanisms, along with instructions for building them, including clocks, combination locks and robots, the most famous being a waitress automaton [7, p. 311]. Nor can one pass over the automata of Leonardo da Vinci and Jacques de Vaucanson.

In the nineteenth and early twentieth centuries the robot and the golem appeared on the pages of literary works, in particular Mary Shelley's "Frankenstein" and Karel Čapek's "R.U.R." ("Rossum's Universal Robots"). From that moment on, robotics and artificial intelligence developed rapidly, alongside philosophical foundations originating in the works of science-fiction writers and futurologists. From a legal point of view, Isaac Asimov's "three laws of robotics" deserve particular mention:

1. A robot may not harm a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by a human being, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law [8, p. 5].

Today these three laws remain the primary reference point in matters of regulating the legal status of artificial intelligence. Notably, the European Parliament Resolution of February 16, 2017, 2015/2103(INL), also refers to them. This is a new phenomenon for legal science: a normative act resting on principles formulated in a work of literature and never tested in practice; in other words, on fiction rather than reality. Society has not yet encountered these relations, yet generally accepted norms for their implementation have already been formulated.

As for Russian practice in regulating artificial intelligence, the key act is the Decree of the President of the Russian Federation of October 10, 2019 No. 490 "On the Development of Artificial Intelligence in the Russian Federation". Under paragraph 5(a) of its Article 1, artificial intelligence is understood as "a set of technological solutions that makes it possible to simulate human cognitive functions (including self-learning and the search for solutions without a predetermined algorithm) and, in performing specific tasks, to obtain results at least comparable to the results of human intellectual activity". It is also noted that this set of solutions includes information and communication infrastructure, software (including software using machine-learning methods), and processes and services for data processing and solution search.

Let us now turn directly to the practice of creating artificial intelligence in order to determine its status.

Norman, an artificial intelligence trained to generate text descriptions of images, was created in April 2018 by a team at the Massachusetts Institute of Technology. It is billed as the world's first psychopath AI (Norman. World's first psychopath AI // URL: http://norman-ai.mit.edu/). The purpose of creating Norman was to test the hypothesis that the data used to train artificial-intelligence algorithms can significantly influence their behavior. Norman's training set was a collection of images of people dying (for example, falling from building structures), taken from the social news site Reddit.

As a control, another artificial intelligence was created using the same algorithm as Norman but trained on different images: photos of cats, birds and people.

Upon completion of training, both artificial intelligences were shown Rorschach inkblots, the abstract blot images used in psychology to assess and identify mental disorders. Norman's descriptions of the images were in every case associated with death and murder, while those of the second artificial intelligence were associated with birds, flowers and couples.

According to Iyad Rahwan, a member of the team that developed Norman, "data matters more than the algorithm. It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves" (Jane Wakefield. Are you scared yet? Meet Norman, the psychopathic AI // URL: https://www.bbc.com/news/technology-44040008).

This experiment thus demonstrates that legal regulation is needed not only for the programming algorithms of artificial intelligence but also for the data sets used to train it.
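The mechanism at work in the Norman experiment can be illustrated with a toy sketch: the same learning code, fed two different datasets, produces systematically different "descriptions" of the same ambiguous stimulus. All data, names and the trivial frequency model below are invented for illustration and have no connection to the actual MIT project.

```python
from collections import Counter

STOPWORDS = {"a", "the", "in", "on", "to", "of"}

def train(captions):
    """'Train' a model: collect word frequencies from caption data."""
    model = Counter()
    for caption in captions:
        model.update(w for w in caption.lower().split() if w not in STOPWORDS)
    return model

def describe(model, k=2):
    """'Describe' an ambiguous stimulus with the model's dominant vocabulary."""
    return [word for word, _ in model.most_common(k)]

# Two hypothetical datasets: identical algorithm, different data.
norman_data = ["man falls to his death", "death on the road", "man found dead"]
control_data = ["a bird on a branch", "a cat in the sun", "a bird in flight"]

norman = train(norman_data)
control = train(control_data)

print(describe(norman))   # dominated by morbid vocabulary
print(describe(control))  # dominated by benign vocabulary
```

The point mirrors Rahwan's observation: nothing in `train` or `describe` encodes morbidity; the difference in behavior comes entirely from the data.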

Currently, the Decree of the President of the Russian Federation of October 10, 2019 No. 490 "On the Development of Artificial Intelligence in the Russian Federation" defines a "data set" as "a body of data that has been pre-prepared (processed) in accordance with the requirements of the legislation of the Russian Federation on information, information technologies and information protection and that is necessary for the development of software based on artificial intelligence". At the same time, that legislation regulates only the sphere of data theft, modification of information and its dissemination. Under paragraph 2 of Article 7 of the Federal Law "On Information, Information Technologies and Information Protection", "publicly available information may be used by any person at their discretion, subject to the restrictions established by federal laws with regard to the dissemination of such information".

Thus, to date, the legislation of the Russian Federation imposes no restrictions on the information used in training artificial intelligence.

In addition to Norman, the MIT team created an algorithm that lets artificial intelligence "understand" what makes certain images frightening and then use that knowledge to transform any photograph, however harmless, into a terrifying image. The algorithm can transform both buildings and faces: iconic buildings from around the world appear ruined and distorted, set in dark scenery or amid charred, smoldering landscapes, glimpsed through what looks like murky polluted water or clouds of poisonous gas. The faces produced by the "Nightmare Machine" are no less disturbing; some are almost abstract and barely perceptible, mere hints of empty eyes, bloody shadows and decomposing flesh (David Grossman. 'Nightmare Machine' AI Can Make Any Face Horrifying // URL: https://www.popularmechanics.com/technology/robots/a23637/nightmare-machine-neural-network-ai/).

The main difference between the "Nightmare Machine" and Norman lies in the source of the training data. Norman's data were taken from a publicly available news site, whereas the "Machine" was trained on people's opinions: throughout the learning process, the algorithm's results were corrected by the judgments of the persons who evaluated the images it generated.

According to the developers, the purpose of creating this artificial intelligence was to test whether a computer can "grasp" and visualize what causes fear in people. It must be acknowledged that the goal was fully achieved.

The experiment thus showed that it is necessary to regulate not only the publicly available data used to train artificial intelligence but all training data, and that responsibility must be imposed not only on developers but also on the persons who take part in training artificial intelligence.
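The human-in-the-loop training described for the "Nightmare Machine" can be sketched in a few lines: the algorithm's output is repeatedly corrected by human ratings, and it is those ratings, not the developer's code, that determine what the model ends up producing. The features, ratings and learning rate below are all hypothetical.

```python
# Hypothetical image features and human "scariness" ratings (in [-1, 1]).
features = ["empty_eyes", "dark_sky", "bright_flowers"]
human_ratings = {"empty_eyes": 1.0, "dark_sky": 0.8, "bright_flowers": -1.0}

# Learned weights start neutral: the *code* encodes no notion of "scary".
weights = {f: 0.0 for f in features}

def scariness(image_features):
    """Model's current estimate of how frightening an image is."""
    return sum(weights[f] for f in image_features)

# Training loop: each round of human feedback nudges the weights.
learning_rate = 0.5
for _ in range(10):
    for f in features:
        weights[f] += learning_rate * (human_ratings[f] - weights[f])

print(round(scariness(["empty_eyes", "dark_sky"]), 2))   # → 1.8 (learned "scary")
print(round(scariness(["bright_flowers"]), 2))           # → -1.0
```

This illustrates why the article's conclusion reaches beyond developers: in such a loop, the evaluators supplying `human_ratings` shape the model's behavior at least as much as the people who wrote the update rule.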

The purpose of these experiments was to reveal the influence of training on the worldview of artificial intelligence and on its application. It should be noted that practice, unlike legal science, already allows for the creation of a new subject of legal relations: neither an individual nor a legal entity, but a robot. This idea is embodied in the European Parliament Resolution of February 16, 2017, 2015/2103(INL), which envisages the possibility of granting robots a special legal status in the future, so that at least the most sophisticated autonomous robots could be given the status of electronic persons responsible for the damage they cause whenever they take decisions autonomously or otherwise interact with third parties independently. And here we confront ethical problems, because the formation of artificial intelligence is bound up with the problems of its training.

The problem of training is therefore one of the key issues in holding anyone to account when artificial intelligence violates public order. Given that artificial intelligence is not today recognized as a subject of law, while its actions can nonetheless harm society and individuals, the question arises of identifying the violator. The candidates for this unenviable role are the customer, the developer, the tester and the person who trains the artificial intelligence. And while the customer's responsibility is determined from the outset by the nature of his order (lawful or unlawful), it is harder to single out the culpable party among the programmers. Strict regulation is required here in order to identify, in good time, errors that may lead to negative consequences and harm to third parties. For the present, the rule established in the aforementioned resolution applies, under which "the actions or omissions of robots are causally related to the actions or omissions of specific persons, for example manufacturers, operators, owners or users, and they could have foreseen and avoided the behavior of the robots that caused the damage".

Unlike these experimental "psychopathic" artificial intelligences, many AI systems have been actively deployed and are used as tools for evaluating and predicting various phenomena.

In particular, in judicial and law-enforcement practice in the United States, "predictive justice" has come to the aid of the state in the form of the Public Safety Assessment (PSA) (About the Public Safety Assessment // URL: https://advancingpretrial.org/psa/about/), designed to provide judges with objective information and help them take well-founded decisions regarding suspects. And while the system was well received by the public, the scientific community has expressed serious concerns about inaccuracies that may underlie the algorithms used to assess the risks posed by a particular person under investigation [9, 10, 11].

Predictive policing algorithms used in the US were likewise found to be biased as a result of the historical crime data on which they were trained.

Sometimes the data on which artificial intelligence is trained comes not from developers but from users, people with no connection to its creation. One example is Microsoft's Tay chatbot, released on Twitter in 2016. After interacting with users of the social network, the AI adopted racist views and began calling for genocide and expressing sympathy for Hitler.

One study found that software trained on Google News data became sexist as a result of what it learned. Asked to complete the statement "man is to programmer as woman is to X", the program replied "housewife".
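The "man is to programmer as woman is to X" result arises from the way word-embedding models answer analogies by vector arithmetic: X is the word whose vector lies closest to (programmer - man + woman). The toy 2-dimensional vectors below are invented to illustrate the mechanism; real embeddings such as word2vec have hundreds of dimensions learned from corpus co-occurrence, and the bias comes from that corpus.

```python
import math

# Hypothetical 2-d embeddings. Dimension 0 loosely tracks "gendered contexts",
# dimension 1 loosely tracks "occupation words". A biased corpus places
# "programmer" near "man" and "housewife" near "woman" on dimension 0.
vectors = {
    "man":        (1.0, 0.0),
    "woman":      (-1.0, 0.0),
    "programmer": (1.0, 1.0),
    "housewife":  (-1.0, 1.0),
    "teacher":    (0.0, 0.8),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def analogy(a, b, c):
    """Solve a : b :: c : ? via vector arithmetic (b - a + c), word2vec-style."""
    target = tuple(vb - va + vc for va, vb, vc in
                   zip(vectors[a], vectors[b], vectors[c]))
    candidates = [w for w in vectors if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vectors[w], target))

print(analogy("man", "programmer", "woman"))  # → housewife
```

The answer is not programmed anywhere; it is implied by the geometry of the learned vectors, which is exactly why biased training data yields biased completions.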

Dr Joanna Bryson of the University of Bath's Department of Computer Science has said that the problem of sexist artificial intelligence may stem from the fact that many machines are programmed by "white, single guys from California", and could be solved, at least in part, by diversifying the workforce. She also noted that when people teach machines they inevitably pass on their own biases, and the machine picks up the patterns.

Professor Rahwan, for his part, observed that his experiment with Norman showed that "engineers have to find a way to somehow balance the data", while acknowledging that the ever-expanding and increasingly important world of machine learning cannot be left to programmers alone: "There is a growing belief that machine behaviour can be something you can study in the same way as you study human behaviour."

In light of the above, the training of machines, programs and artificial intelligence requires careful study. This area needs serious control and supervision, especially given that at a certain stage of development artificial intelligence passes to self-learning, drawing on all accumulated data regardless of source. From that moment it becomes unethical to assign responsibility to the developer, the tester and the trainer, since once self-learning begins their contribution no longer determines the behavior of the artificial intelligence itself.

And here the question arises of what artificial intelligence is: an object or a subject of law.

There are several reasons why it is so difficult to define artificial intelligence and determine its status.

Firstly, there is still no clear definition of human intelligence, and without that prism the essence of artificial intelligence cannot be determined.

Secondly, the problem lies in man's very attitude to machines and to the tasks they face. Once a machine copes with a task, people cease to see intellectual effort behind its actions and regard the result simply as what the machine is for. This phenomenon has been called the "AI effect" and was first examined by Pamela McCorduck in "Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence" [12]. The essence of the AI effect is that as soon as artificial intelligence succeeds in solving a problem, the problem is no longer considered part of artificial intelligence. Hence, by Tesler's theorem, artificial intelligence is whatever has not been done yet. As Larry Tesler himself notes: "What I actually said was: 'Intelligence is whatever machines haven't done yet' [13, p. 11]. Many people define humanity partly by our allegedly unique intelligence. Whatever a machine (or an animal) can do must (those people say) be something other than intelligence" (Larry Tesler. CV: Adages & Coinages. // URL: https://www.nomodes.com/Larry_Tesler_Consulting/Adages_and_Coinages.html).

Nor should it be forgotten that artificial intelligence is not a static phenomenon but is in a process of continuous learning, the results of which sometimes exceed initial expectations, as the examples above demonstrate.

It is therefore worth accepting the definition of artificial intelligence offered by Andreas Kaplan and Michael Haenlein: a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation [14].
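The three clauses of this definition, interpreting external data, learning from it, and flexible adaptation, can be made concrete with a minimal sketch. The exponential-moving-average learner below is entirely illustrative and is not drawn from Kaplan and Haenlein's paper.

```python
class AdaptiveSystem:
    """Toy model of the definition: the system interprets external data,
    learns from it, and flexibly adapts its behaviour."""

    def __init__(self, alpha=0.5):
        self.estimate = 0.0   # the system's current "knowledge"
        self.alpha = alpha    # how quickly it adapts to new data

    def observe(self, value):
        error = value - self.estimate          # interpret external data
        self.estimate += self.alpha * error    # learn / adapt

    def predict(self):
        return self.estimate

system = AdaptiveSystem()
for value in [10, 10, 10, 10]:     # regime 1: the system learns from data
    system.observe(value)
print(round(system.predict(), 2))   # → 9.38

for value in [0, 0, 0, 0]:         # regime 2: the environment changes
    system.observe(value)
print(round(system.predict(), 2))   # → 0.59, behaviour has flexibly adapted
```

Even this trivial system exhibits the legally salient property discussed in the article: its final behavior is fixed by the data it has seen, not solely by the code its developer wrote.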

In this connection the question of the status of artificial intelligence naturally arises. How justified is it to treat it only as an object of legal relations? Does independent learning not lead to the formation of something like a personality? Is it only flesh that makes a human being human? And the main question: who is responsible for the actions taken as a result of training?

Where a human being is concerned, reaching a certain level of education, knowledge and experience signals maturity and an expansion of legal personality.

It should be noted that a new subject of law has already appeared in foreign practice: animals [15]. The capacities of this subject are, however, extremely limited; in foreign scholarship the discussion concerns granting animals rights and protection rather than imposing duties on them. In 2017, New Zealand likewise recognized the Whanganui River as a legal person [16]. Artificial intelligence, unlike animals and rivers, is capable of taking decisions that affect a person's life, health and status, features it shares with human beings. Yet society is not ready to recognize the legal personality of artificial intelligence, because, in our view, the key obstacle is the impossibility of identifying, from a human perspective, negative consequences that artificial intelligence could suffer as punishment. Artificial intelligence is presumed to have no fear of death or loss, so there are no psychological levers of influence over it. Pure reason has always frightened man precisely because it cannot be subordinated through the irrational.

Humanity therefore looks for those responsible for the actions of artificial intelligence among people: developers, testers and trainers. This is partly due to fear of technocrats, who possess special knowledge inaccessible to the majority. In the eyes of society they are persons in control of a source of increased danger and are therefore a priori held guilty for the consequences of using artificial intelligence.

However, a ray of hope for the technocrats has crept out of futurology and into law. The European Parliament Resolution of February 16, 2017, 2015/2103(INL), in paragraph 59(f), allows for the granting of a special legal status in the future: specifically, the possible creation of electronic persons capable of bearing responsibility independently.

By way of conclusion on the legal status of artificial intelligence, it is worth recalling that Saudi Arabia granted citizenship to the android robot Sophia (The android robot Sophia became a subject of Saudi Arabia. // URL: https://tass.ru/ekonomika/4680400). That event should not, however, be treated as the starting point of robot legal personality, given the peculiarities of the legal status of women in Saudi Arabia.

Thus, the question of endowing artificial intelligence with legal personality will become one of the key issues of the coming decade. The rapid development and application of new technologies demands thorough regulation, above all in matters of responsibility. Guilty or not guilty, that is the question.

In the future, artificial intelligence will be applied not only to current tasks but on a global scale, with the potential to affect the future of humanity significantly. The reach of artificial intelligence long ago crossed the borders of individual states, which calls for coordinated decisions at the international level and a unified approach to determining its status. It is telling, however, that legal decisions in this sphere are likely to be taken not at the level of international law but within national and corporate legislation, many elements of which will be determined by artificial intelligence itself.

References
1. Mayor, Adrienne. Gods and Robots: Myths, Machines, and Ancient Dreams of Technology. Princeton University Press, 2018.
2. Grafov, A. E. Dictionary of Biblical Hebrew. Moscow: Text, 2019.
3. The Jewish Encyclopedia. Vol. 6. New York and London, 1904.
4. Descartes, R. Meditations on First Philosophy, in Which the Existence of God and the Distinction Between the Human Soul and Body Are Demonstrated // Descartes, R. Works in 2 vols. Vol. 2 / comp., ed. and notes by V. V. Sokolov. Moscow: Mysl, 1994. 633 p.
5. Archytas // Fragments of the Early Greek Philosophers. Part I: From the Epic Cosmogonies to the Rise of Atomism / ed. A. V. Lebedev. Moscow: Nauka, 1989. Pp. 447-459.
6. Singh, Manoj. Robotics: Types & Applications. Neelkanth Prakashan, 2018. 213 p.
7. Hill, Donald. The Book of Knowledge of Ingenious Mechanical Devices (Kitāb fī ma'rifat al-ḥiyal al-handasiyya). Springer Science & Business Media, 2012.
8. Asimov, I. I, Robot. Eksmo, 1950. 140 p.
9. Brittain, Brian; Georges, Leah; Martin, Jim. Examining the Predictive Validity of the Public Safety Assessment // Criminal Justice and Behavior. 2021.
10. DeMichele, Matthew; Baumgartner, Peter; Wenger, Michael; Barrick, Kelle; Comfort, Megan. Public Safety Assessment: Predictive Utility and Differential Prediction by Race in Kentucky // Criminology & Public Policy. 2020.
11. Imai, Kosuke; Jiang, Zhichao; Greiner, James; Halen, Ryan; Shin, Sooahn. Experimental Evaluation of Algorithm-Assisted Human Decision-Making: Application to Pretrial Public Safety Assessment. 2020.
12. McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. A K Peters, 2004. 565 p.
13. Wang, Pei. On Defining Artificial Intelligence // Journal of Artificial General Intelligence. 2019. Vol. 10. Pp. 1-37. DOI: 10.2478/jagi-2019-0002.
14. Kaplan, A. M.; Haenlein, M. Siri, Siri, in My Hand: Who's the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence // Business Horizons. 2019. Vol. 62, No. 1. Pp. 15-25.
15. Epstein, Richard A. Animals as Objects, or Subjects, of Rights // John M. Olin Program in Law and Economics Working Paper No. 171. 2002.
16. Davison, Isaac. Whanganui River Given Legal Status of a Person under Unique Treaty of Waitangi Settlement // The New Zealand Herald. Retrieved March 16, 2017.

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

A REVIEW of the article "Artificial Intelligence as an Element of a Digital Relationship".

The subject of the study. The article under review is devoted to artificial intelligence as an element of the "... digital relationship". The author has chosen a distinctive subject: the issues are examined from the standpoint of legal theory, computer law and philosophy in their historical development, the author noting that "A significant contribution to the understanding and formation of the philosophical foundations of artificial intelligence was laid by the French scientist Rene Descartes in the mechanistic theory." Historical documents, the statements of philosophers and the Russian legislation relevant to the aim of the study are examined. A large body of modern scholarly literature on the stated problems (mainly foreign) is studied and summarized, with analysis of and discussion with opposing authors. The author notes that "... from a legal point of view, the 'three laws of robotics' by Isaac Asimov should be mentioned: 1. A robot cannot harm a human being... 2. The robot must obey all orders given by a human... 3. The robot must take care of its safety... [8, p. 5]", and further that "Today, in matters of regulating the legal status of artificial intelligence, these three laws are the primary one."

Research methodology. The aim of the study follows from the title and content of the work: "... it is necessary to legally regulate not only artificial intelligence programming algorithms, but also data sets used for its training." It may be framed as the examination and resolution of particular problematic aspects of the above issues and of relevant experience. Proceeding from this aim and these tasks, the author has chosen an appropriate methodological basis: a set of general scientific and special legal methods of cognition. The methods of analysis and synthesis made it possible to generalize various approaches to the topic and informed the author's conclusions. The most important role was played by the special legal methods, in particular the formal legal method, which allowed the analysis and interpretation of the norms of current Russian legislation and of the European Parliament resolution. Among the resulting conclusions: "The purpose of creating this artificial intelligence, according to the developers, was to check whether the computer is able to 'realize' and visualize what causes fear in people. It should be noted that the goal was fully achieved," etc. The chosen methodology is thus fully adequate to the purpose of the article and enables the study of the relevant aspects of the topic.

Relevance. The relevance of the stated issues is beyond doubt. The topic is among the most important both worldwide and in Russia; from a legal point of view the proposed work is plainly relevant, the author noting that "... it is necessary to regulate not only the data that is in the public domain and used for artificial intelligence training, but all data used for training, and it is also necessary to impose responsibility not only on developers, but also on those involved in the training of artificial intelligence." The expected analysis of the relevant works follows, and the author shows command of the material. Scholarly research in this field is only to be welcomed.

Scientific novelty. The scientific novelty of the article is beyond doubt; it is expressed in the author's specific conclusions, among them: "... the problem of learning is one of the key issues in bringing artificial intelligence to justice in case of violation of public order. Taking into account the fact that artificial intelligence is not recognized as a subject of law today, but at the same time its actions can harm society and man, the question arises of the need to identify the violator." These and other "theoretical" conclusions can be used in further research, and the materials of the article may be of interest to the scholarly community.

Style, structure, content. The subject of the article corresponds to the specialization of the journal "Legal Studies", being devoted to artificial intelligence as an element of the "... digital relationship". The article analyses the scholarly works of opponents; the author notes that related questions have been raised before, uses the relevant materials and engages with opposing views. The content corresponds to the title: the author has addressed the stated problems and achieved the aim of the research. The quality of the presentation of the study and of its results should be recognized as high: the subject, tasks, methodology, results and scientific novelty follow directly from the text. The formatting of the work generally meets the requirements for works of this kind, with no significant violations found. One small wish: the term "legal relationship" does not appear in the title, although the author writes about it; perhaps the article should be entitled "Artificial Intelligence as an Element of a Digital Legal Relationship".

Bibliography. The quality of the literature presented and used deserves high praise, although the scarcity of Russian literature somewhat narrows the grounding of the author's conclusions. The works cited correspond to the research topic, are sufficient in number, and contribute to the disclosure of many aspects of the topic.

Appeal to opponents. The author has seriously analysed the current state of the problem, describes different points of view, argues for the position considered more correct on the basis of opponents' work, and offers solutions to individual problems.

Conclusions, interest for the readership. The conclusions are logical and specific and are obtained using a generally accepted methodology. In its present form the article may interest readers for the author's systematic positions on the stated issues, as befits legal research. On the basis of the above, weighing the article's strengths and weaknesses, I recommend publication.