
Legal Studies

Transparency of algorithms as a legal principle of automated processing of data relating to a person

Mochalov Artur

ORCID: 0000-0003-2502-559X

PhD in Law

Associate Professor, Department of Constitutional Law, Ural State Law University named after V. F. Yakovlev

620137, Russia, Sverdlovsk region, Yekaterinburg, Komsomolskaya str., 21

artm84@gmail.com

DOI: 10.25136/2409-7136.2023.12.69452

EDN: EIJUVD

Received: 24-12-2023

Published: 31-12-2023


Abstract: The article discusses the problems of implementing the principle of algorithm transparency in the use of information technologies. The computer algorithms underlying software often remain "black boxes" for individuals. The article substantiates the need to enshrine the principle of algorithm transparency in legislation. The principle presupposes the openness and accessibility of information about the data used by an algorithm, the sources from which such data are obtained, the logical schemes and mechanisms of their processing, and the nature of the information produced by the processing, as well as the purposes and methods of its use. While noting positive trends in Russian legislative regulation, the author emphasizes the need to improve legislation further. The study was carried out using the comparative legal method: examples of the legal consolidation of the principle of algorithm transparency in the legislation of foreign countries are considered. With respect to Russian legislation, gaps in regulation are identified on the basis of the formal legal method, and proposals to eliminate them are formulated. The scientific novelty of the research lies in formulating the content of the principle of algorithm transparency. It is argued that the principle of transparency should manifest itself in individuals' realization of their right to receive reliable, comprehensive and understandable information about the functioning of algorithms, about the data used and about the derived data. It is emphasized that a criterion of the comprehensibility of this information for users should be formulated at the level of federal law. The author also argues that where algorithms are used in recommendation systems or targeted advertising, the user should be able to refuse algorithmic processing of data about him or to limit the use of certain personal information by algorithms.


Keywords:

algorithm transparency, algorithm accountability, human rights, personal data, automated data processing, information technologies, digital technologies, digital profiling, constitutional principles, right to know


In the modern world, a significant part of the information concerning individuals is processed automatically on the basis of algorithms, which can be defined as programmed procedures for converting incoming information (input) into a desired result (output) by means of special calculations [1, p. 167]. The algorithms underlying computer programs accompany human activity not only in the online environment but also offline. On the Internet, we constantly encounter the results of algorithms in recommendation systems (on marketplaces, in news aggregators); algorithms underlie targeted advertising, configured to be shown to users who match specified characteristics. In the offline environment, algorithms are used in computer programs that process information about an individual in order to assess his trustworthiness and solvency (for example, when a bank decides whether to grant him a loan), to recognize people from images obtained from surveillance cameras, to track their movement, and so on. In some algorithms the "output" is information on the basis of which a decision is subsequently made by a human (for example, a decision to grant or refuse a loan, depending on information indicating the solvency of the borrower and the probability of repayment). Other algorithms produce the decision itself as the "output", made in an automated mode, i.e. without direct human participation. The term "algocracy", meaning "the power of algorithms", has appeared in the foreign literature [2]. The view has been expressed that society is transitioning from an informational to an algorithmic one [3, p. 32].
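To make the definition concrete, the following minimal Python sketch illustrates an algorithm in the sense used here: a programmed procedure converting input data about a person into an output assessment. The factors, thresholds and labels are invented for illustration and do not reflect any real system.

```python
# A toy "creditworthiness" algorithm: a programmed procedure that
# converts input data about a person into an output assessment.
# All factors and thresholds below are purely illustrative.

def assess_solvency(monthly_income: float, monthly_debt: float,
                    missed_payments: int) -> str:
    """Return an output label derived from the input data."""
    debt_ratio = monthly_debt / monthly_income if monthly_income > 0 else 1.0
    if missed_payments == 0 and debt_ratio < 0.35:
        return "likely solvent"          # output used to support a decision
    if missed_payments <= 2 and debt_ratio < 0.5:
        return "requires manual review"  # a human makes the final decision
    return "high risk"                   # may lead to an automated refusal


print(assess_solvency(monthly_income=80_000, monthly_debt=20_000,
                      missed_payments=0))
```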

The transformation of public relations under the influence of information technologies (which are always based on certain algorithms) requires rethinking the legal regulation of those relations. In recent years, the scientific literature has paid increased attention to such aspects of the use of information technologies as the transparency and accountability of the underlying algorithms. The works of Russian authors often focus on legal means of ensuring the transparency of artificial intelligence algorithms [4],[5],[6]. This trend is probably explained by the fact that transparency is explicitly named among the principles for the development and use of artificial intelligence technologies in the National Strategy for the Development of Artificial Intelligence for the Period up to 2030 (approved by Presidential Decree No. 490 of October 10, 2019).

However, the principle of algorithm transparency is not limited to artificial intelligence technologies. Being universal in character, it can be applied to any information technology that uses human data or is designed to make decisions directly affecting the interests of individuals.

The operation of any algorithm involves the processing and analysis of some initial data set through computer calculations. This may be personal data, data characterizing the environment (for example, time of day, weather, etc.), or technical information characterizing, for example, user devices, communication networks or the software installed on devices. In any case, all this information relates to a person to one degree or another. Combining data of different types and establishing statistically significant correlations between different classes of data makes it possible, by means of so-called predictive analytics, to forecast the likely behavior of subjects. Computer processing of information about an individual and his environment in order to assess his personal, business and professional qualities, state of health, preferences, probable behavior, etc. has come to be called digital profiling. It is in this sense that the term "profiling" is used in Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, repealing Directive 95/46/EC (General Data Protection Regulation, hereinafter the GDPR). The result of algorithms aimed at profiling individuals is always the formation of new (derived) data: knowledge about a particular person (of a definite or probabilistic character) that was not initially known and was obtained by computer processing of the array of information received at the "input".
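A hedged sketch of "derived data" in this profiling sense: several independently innocuous inputs are combined into new, previously unknown inferences about a person. The input fields and inference rules below are invented solely for illustration.

```python
# Digital profiling in miniature: combining data of different types
# ("input") to produce derived knowledge about a person ("output")
# that was never explicitly provided. The rules are made-up examples.

from dataclasses import dataclass

@dataclass
class InputRecord:
    night_activity_share: float   # share of sessions between 0:00 and 6:00
    device_model_year: int        # technical data about the user's device
    grocery_delivery_orders: int  # behavioral data from purchase history

def derive_profile(rec: InputRecord) -> dict:
    """Form derived (probabilistic) attributes from raw inputs."""
    return {
        "probable_shift_worker": rec.night_activity_share > 0.4,
        "probable_low_income": rec.device_model_year < 2015,
        "probable_household_of_2_plus": rec.grocery_delivery_orders > 8,
    }

print(derive_profile(InputRecord(0.55, 2013, 12)))
```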

Of course, certain predictions about a person's behavior can be made without computer calculations. But it is information technology that has made it possible to collect and process huge volumes of structured and unstructured data concerning large numbers of people, quickly, in real time and without direct human involvement. This opens up broad opportunities for governments, private corporations and criminal structures to control the behavior of a significant share of people. Ensuring human rights and freedoms therefore requires legislative measures that would limit arbitrary and disproportionate interference in the sphere of private life. The principle of algorithm transparency can be considered one such measure.

On the one hand, this principle is aimed at securing fundamental rights and freedoms such as the right to privacy, the right to personal and family secrets, the right to dignity, etc. On the other hand, it is a manifestation of the human right to information, including the unconditional right to know what data about a person are used, by whom and how (by what means and for what purposes). The realization of such a "right to know" (or "right to an explanation" [2]), in turn, acts as a guarantee of other fundamental rights and freedoms.

One of the legal problems of using algorithms is known as the "black box" problem [7]. In many cases individuals are deprived of the opportunity to learn how algorithms work and on what principles the computer programs processing their data operate; what source data are used (and from what sources they are obtained); what knowledge about individuals is formed at the "output", how it is connected with the "input" data, and how this new knowledge can be used with respect to an individual (including against his interests). Moreover, subjects are not always aware that information about them is being algorithmically processed at all, and certainly not in all cases have they given specific and informed consent to it. At the same time, as is rightly noted in the literature, the use of algorithms for processing human data (both personal data and other information related to a person and his environment) directly affects the realization of fundamental rights and freedoms [5],[6].

Thus, the recommendation systems used in news aggregators significantly narrow the space in which a person can seek and receive information in all its diversity, since they offer only the content that, in the "opinion" of the algorithm, the addressee user is most likely to view. Another (and perhaps more important) aspect of recommendation systems is that, by determining the interests, needs and inclinations of the user (a potential buyer) and predicting his purchasing behavior, algorithms "invade" the sphere of his private life, "learning" through mathematical calculations information about the person that he would not want to disclose and sought (or hoped) to keep secret. Finally, another threat to human rights stems from the fact that algorithms may embed discriminatory practices. This is especially noticeable when decision-making is automated in areas such as hiring, granting loans or providing public services. Computer programs can "deny" individuals' requests merely because they belong to a certain group (the literature often cites the example of Amazon's recruiting algorithm, which gave preference to men when selecting employees [7]).

The need to ensure the transparency of algorithms in the processing of personal data has been recognized for a long time. Article 8 of the 1981 Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data provides for the right of a data subject "to establish the existence of an automated personal data file, its main purposes, as well as the identity and habitual residence or principal place of business of the controller of the file", and to obtain "confirmation of whether personal data relating to him are stored in the automated data file as well as communication to him of such data in an intelligible form." The 2018 Amending Protocol to the Convention (not yet in force) introduces the principle of transparency of data processing and expands the list of information related to data processing that a subject has the right to receive from controllers, including information about the origin of the data.

The GDPR went somewhat further. Under its Article 15, the data subject has the right to obtain from the controller information about the existence of automated decision-making (including profiling) and, in such cases, "meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject."

The relevant provisions of the GDPR have attracted much criticism in the literature. The main problem is that the wording of the GDPR lacks the necessary specificity and does not make it possible to say unequivocally how the personal data controller should fulfil the corresponding obligation. There are two approaches to interpreting this rule. Some researchers believe that the controller is obliged actually to open the "black box", disclosing to data subjects the logic of automated decision-making, including technical information: for example, which "input" data affect the final decision as factors, what the weight of each factor is, and so on. The other position is that it is enough for the controller to describe only in general terms how a given decision is made by the algorithm [7, pp. 142-143].
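The first reading can be pictured with a hedged sketch: for a simple linear scoring model, the controller could disclose each input factor, its weight and its contribution to the decision. The feature names and weights below are invented; real systems are rarely this simple, which is precisely the source of the interpretive debate.

```python
# Sketch of the "open the black box" reading: for a linear scoring
# model, disclose each input factor and its weight to the data subject.
# All names and numbers are illustrative, not any real controller's model.

WEIGHTS = {
    "monthly_income": 0.45,
    "debt_ratio": -0.80,
    "missed_payments": -1.20,
    "years_at_employer": 0.25,
}

def explain_decision(features: dict) -> list:
    """Produce a per-factor contribution report for the data subject."""
    lines = []
    for name, value in features.items():
        contribution = WEIGHTS[name] * value
        lines.append(f"{name}: value={value}, weight={WEIGHTS[name]}, "
                     f"contribution={contribution:+.2f}")
    return lines

for line in explain_decision({"monthly_income": 1.2, "debt_ratio": 0.3,
                              "missed_payments": 1, "years_at_employer": 4}):
    print(line)
```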

Each of these approaches has drawbacks. The requirement of mandatory disclosure of the "black box", i.e. the technical side of the algorithm's functioning, may constitute a disproportionate interference with the economic interests of business entities, the developers and rights holders of the relevant computer programs, and a violation of their intellectual property rights. Computer programs (including the underlying algorithms for collecting and processing incoming information) are often protected by rights holders as trade secrets. The logic of a program's functioning and its architecture may be a competitive advantage of the developer, justifying his desire to keep such information secret. At the same time, disclosed technical information may not always be useful and understandable to individuals. For example, publication of a program's source code would formally constitute full disclosure of the logic of the computer algorithm, but that logic would be understandable only to technical specialists. Finally, the personal data controller may itself lack information (especially technical information) about the logic of the computer program's functioning, if it is not the developer but has merely acquired a license to use the program. The narrow interpretation of Article 15 of the GDPR, which allows an abstract description of algorithms, makes it possible for personal data controllers to evade the corresponding duty, since it lets them determine independently what information, and to what extent, should be communicated to subjects. In both cases, the disclosure of information about the algorithm risks becoming formal and decorative.

Implementing the requirement to disclose the logical schemes of algorithms becomes even more difficult when the algorithm involves artificial intelligence technologies based on machine learning. A feature of artificial intelligence is that a computer program, independently identifying new significant correlations that were not originally envisaged by the developers and discarding erroneous ones, is able to change the initially created logical schemes and reconfigure its logical operations. As a result, the outcome of an "algorithm multiplied by artificial intelligence" may be unpredictable [4].

In this regard, Yu. S. Kharitonova expresses doubts about the validity of requirements for absolute disclosure of the source code or architecture of the artificial intelligence models used, recognizing that "such disclosure cannot be thorough for a number of reasons" [6, p. 348]. This author also rightly doubts the legal significance of disclosing artificial intelligence algorithms, drawing attention to the fact that artificial intelligence systems, lacking an understanding of ethical values, in any case carry the risk of making "inexplicable biased decisions" based only on mathematical correlations [6, p. 345]. As an alternative to disclosing the "black box", the researcher suggests strengthening the audit of the results obtained, monitoring the outputs of artificial intelligence systems with software created by independent developers. The task of lawyers in this case is to develop criteria for checking the results of artificial intelligence systems for objectivity.

While accepting Yu. S. Kharitonova's argument regarding the problems of transparency of artificial intelligence algorithms, it should be emphasized that this author conflates the concepts of an algorithm's transparency and its control. Although transparency and accountability are interrelated and mutually conditioning, the establishment of control and audit mechanisms over algorithms and their decisions should not replace or cancel the principle of transparency, which presupposes that the algorithm, and the information used in its operation, be explainable and comprehensible to users.

E. V. Talapina notes that the use of algorithms in the public sector is the riskiest from the point of view of human rights violations [4]. For this reason, in a number of European countries, the rule on mandatory disclosure of the logic of the algorithms used is of paramount importance for public authorities and institutions. For example, the French Law for a Digital Republic of 2016 establishes that the rules governing the algorithmic processing of data in a public authority must be published on that authority's website. In the United Kingdom, the Central Digital and Data Office published the Algorithmic Transparency Standard in December 2021[1]; it forms part of the UK National Data Strategy and is designed to assist public sector organizations in providing clear information about the algorithmic tools they use and why they use them. Algorithmic transparency is understood as "openness about how algorithmic tools support decision-making." It includes providing information about algorithmic tools and about decisions made with their assistance in a complete, open, understandable, easily accessible and free format.

Unlike France and the United Kingdom, in the People's Republic of China the requirement of mandatory disclosure of the logic of algorithms is established at the legislative level with respect to recommendation systems, which are used primarily in private business (Regulation on the Management of Algorithmic Recommendations of Internet Information Services of November 16, 2021) [8]. The subjects of this obligation are providers of algorithmic recommendation services, i.e. the persons who directly benefit from the use of a recommendation system. Along with this obligation, the Regulation also establishes mandatory registration of such algorithms with an authorized state agency [9].

In Russia, Federal Law No. 152-FZ "On Personal Data" of July 27, 2006 also contains certain rules concerning the automated processing of personal data (Article 16). In particular, the personal data operator is obliged to explain to the data subject "the procedure for making a decision based solely on the automated processing of his personal data and the possible legal consequences of such a decision." However, the term "decision-making procedure" does not seem identical to a description of the logical scheme of the algorithm used for data processing, so it can be said only with a high degree of conditionality that Article 16 of this Law enshrines the principle of algorithm transparency. The limited scope of Article 16 should also be noted. It applies only to cases of "exclusively automated processing of personal data" resulting in "decisions that give rise to legal consequences for the personal data subject or otherwise affect his rights and legitimate interests." Whether the outputs of recommendation systems or targeted advertising can be considered to sufficiently "affect the rights and legitimate interests" of data subjects remains debatable (and has no clear answer in the law). Moreover, on a literal interpretation, Article 16 does not cover cases where algorithmic data processing merely forms an individual's profile (digital profile), containing information about his personal qualities and a forecast of probable behavior, while the decision based on the analysis of that profile is made by a human (for example, an employee of a bank's security service considering a client's loan application).

The principle of algorithm transparency appeared in Russian legislation in more distinct form in 2023, when Article 10.2-2, governing the provision of information to users by means of recommendation technologies, was introduced into Federal Law No. 149-FZ "On Information, Information Technologies and Information Protection" of July 27, 2006. In part, the regulation is similar to the Chinese one (although without mandatory registration of algorithms). Under Part 1 of that article, the owner of an Internet site that uses recommendation technologies (i.e. "information technologies for providing information based on the collection, systematization and analysis of data related to the preferences of Internet users") must post on the site a notice about the use of recommendation technologies (the text of the notice is unified and approved by Roskomnadzor Order No. 149 of October 6, 2023) and the rules for applying recommendation technologies. The latter must include, in particular, "a description of the processes and methods of collecting, systematizing and analyzing information related to the preferences of Internet users and of providing information on the basis of it, as well as the ways of implementing such processes and methods," and also the types of information used and the sources from which it is obtained.

An analysis of the rules for the use of recommendation technologies posted on individual Internet resources already reveals some obvious gaps in the new regulation. First, resource owners often list the processed information non-exhaustively. For example, the rules of the Zen platform (dzen.ru), after listing the information used, end the list with the wording "as well as similar parameters"[2]. Even more telling is the wording in the rules of the Ozon service (in its description of how the recommendation technology functions): "[We] also learn the characteristics of the user, such as gender and age."[3] Here the user's characteristics are not specified; only examples are given. Second, the legislation provides no criterion of clarity for the description of the algorithm. As a result, descriptions like the one on the Ozon website appear in the rules: "As a model, Ozon uses one of the machine learning methods — gradient boosting of decision trees." How informative such a "description" is for users is questionable. The description on the Zen website is no clearer: "Mathematical models are used (including for forming a feature space) which, through multifactorial analysis, classification and ranking of content features, make it possible to estimate the probability that a specific user will view the information material to the end." Third, and finally, the Law contains no provisions that would allow the user to refuse the use of recommendation technology or to limit (disable) the use in it of particular characteristics relating to his personality. Accordingly, Internet services using recommendation systems offer no such option.
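For readers to whom such wording says little, here is a minimal sketch of what "gradient boosting of decision trees" refers to, using scikit-learn on synthetic data. This is a generic illustration of the technique named in Ozon's rules, not Ozon's actual model, and it shows why the bare term explains little to a user.

```python
# "Gradient boosting of decision trees" in miniature: an ensemble of
# shallow trees fitted sequentially, each correcting the errors of the
# previous ones. Synthetic data; not any platform's real model.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = GradientBoostingClassifier(n_estimators=100, max_depth=3,
                                   random_state=0)
model.fit(X, y)

# Even here "transparency" is nontrivial: the fitted model exposes only
# aggregate feature importances, not a human-readable decision rule.
print(model.predict_proba(X[:1]))
print(model.feature_importances_)
```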

Thus, the principle of algorithm transparency can be defined as a fundamental principle of the automated processing of human data (including personal data and other information related to a person and his environment) presupposing the openness and accessibility of information explaining how the algorithm is used for the automated processing of data about subjects. One can agree with W. Seymour that the principle of algorithm transparency should include both "transparency of input" (disclosure of what data are used for processing and from what sources) and "transparency of output" (what data can be obtained as a result of processing and how they can be used) [10].

Such information revealing the operation of an algorithm should, it seems, include information provided or disseminated by the operator (a machine-readable sketch of such a disclosure follows the list):

- about the existence of algorithmic data processing;

- about the range (list) of processed data and the sources from which they are obtained;

- about the derived data that can be obtained as a result of automated processing, and about the purposes and methods of their further use;

- about the logical scheme(s) underlying the algorithm;

- about decisions made as a result of automated data processing, including decisions made in automated mode and decisions that can be made with direct human participation on the basis of the processed data, and about the legal consequences of such decisions;

- about the risks of erroneous decisions and the ways of mitigating them, including the cancellation or revision of decisions at the subject's request;

- about the ways in which the subject can refuse automated processing of his data.
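One hedged way to picture such a disclosure is as a structured, machine-readable document mirroring the items above. Every field name and value below is invented for illustration; no statute prescribes this format.

```python
# A hypothetical machine-readable "algorithm transparency disclosure"
# mirroring the items listed above. All field names and values are
# invented for illustration only.

import json

disclosure = {
    "algorithmic_processing": True,
    "input_data": ["purchase history", "search queries", "device model"],
    "input_sources": ["site activity", "advertising identifiers"],
    "derived_data": ["interest categories", "predicted purchase intent"],
    "derived_data_uses": ["ranking recommendations", "ad targeting"],
    "logic_summary": "items similar to previously viewed ones are "
                     "ranked higher; no fully automated legal decisions",
    "automated_decisions": [],
    "human_reviewed_decisions": ["account restriction on fraud suspicion"],
    "error_risks_and_remedies": "recommendations may misjudge interests; "
                                "users may request a profile reset",
    "opt_out_method": "settings -> recommendations -> disable",
}

print(json.dumps(disclosure, indent=2, ensure_ascii=False))
```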

At the same time, it is important to enshrine in legislation such criteria of algorithm transparency as the exhaustive character of the information about the "input" data and the sources from which they are obtained, as well as the comprehensibility of the description of the algorithms' mechanisms to persons who are not information technology specialists.

It also seems reasonable that, where automated data processing is not carried out for the purpose of making legally significant decisions based on law (for example, in targeted advertising or recommendation systems), the user should be given the opportunity to refuse automated processing of data about him, or to determine a list of data (or data sources) that he declines to provide for such processing. For example, the user should be able, in a form understandable to him and in a convenient way, to prohibit the use of data such as search query history, geotags, hashtags, purchase history and other similar information for automated processing unrelated to the exercise of public authority functions. Such data, as a rule, relate only to a specific person; they form part of his private life and therefore cannot be used without his consent. With respect to recommendation systems, the proposal to require digital platforms to let the user assess the relevance of recommendations or switch off the corresponding function of the platform altogether deserves attention [9, p. 57].
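A sketch of the per-category opt-out proposed here, assuming a hypothetical preference store; the category names follow the examples in the text (search history, geotags, hashtags, purchase history) and the filtering logic is illustrative only.

```python
# Sketch of a per-category opt-out for algorithmic processing, as the
# text proposes. The preference store and categories are hypothetical.

OPT_OUT_CATEGORIES = {"search_history", "geotags", "hashtags",
                      "purchase_history"}

user_prefs = {"search_history": False,  # user refused this category
              "geotags": False,
              "hashtags": True,
              "purchase_history": True}

def filter_inputs(raw: dict) -> dict:
    """Drop input categories the user has refused, before any processing."""
    return {k: v for k, v in raw.items()
            if k not in OPT_OUT_CATEGORIES or user_prefs.get(k, False)}

raw_inputs = {"search_history": ["..."], "geotags": ["..."],
              "hashtags": ["#sale"], "purchase_history": ["order #1"]}
print(filter_inputs(raw_inputs))  # search_history and geotags removed
```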

Russian legislation, having taken an important step toward regulating the principle of algorithm transparency, is still far from perfect. The current directions for developing the legislation of the Russian Federation in this area are: legal consolidation of the principle itself and extension of its effect to all cases of algorithmic processing of information affecting the interests of individuals; legislative consolidation of the requirement of an understandable description of the mechanisms by which algorithms function; and the requirement to specify in each case an exhaustive list of the information processed. It is also important to establish legal mechanisms allowing subjects to refuse the application of algorithms to them (in particular, in recommendation systems and targeted advertising), as well as to give users the opportunity to refuse to provide certain information or to refuse the use of certain sources from which it is obtained.

[1] Algorithmic transparency standard. URL: https://www.gov.uk/government/collections/algorithmic-transparency-standard (accessed: 13.10.2023).

[2] On the rules for the use of recommendation technologies by the Zen content platform. URL: https://dzen.ru/legal/recommendation-technologies/index.html

[3] The algorithm of recommendations on Ozon. URL: https://docs.ozon.ru/legal/terms-of-use/site/algorithms/recomendation-algorithms.

References
1. Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media Technologies: Essays on Communication, Materiality, and Society (pp. 167–193). Boston: MIT Press.
2. Aneesh, A. (2023). Technologically Coded Authority: The Post-Industrial Decline in Bureaucratic Hierarchies. Retrieved from https://www.researchgate.net/publication/254843955_Technologically_Coded_Authority_The_Post-Industrial_Decline_in_Bureaucratic_Hierarchies
3. Pibaev, I. A., & Simonova, S. V. (2020). Algoritmy v mehanizme realizacii konstitucionnyh prav i svobod: vyzovy cifrovoj jepohi [Algorithms in the Mechanism of Realization of Constitutional Rights and Freedoms: Challenges of the Digital Era]. Sravnitel'noe konstitucionnoe obozrenie, 6, 31–50.
4. Talapina, Je. V. (2020). Algoritmy i iskusstvennyj intellekt skvoz' prizmu prav cheloveka [Algorithms and Artificial Intelligence through the Prism of Human Rights]. Journal of Russian Law, 10, 25–39.
5. Haritonova, Ju. S., Savina, V. S., & Pan'ini, F. (2021). Predvzjatost' algoritmov iskusstvennogo intellekta: voprosy jetiki i prava [Artificial Intelligence's Algorithmic Bias: Ethical and Legal Issues]. Perm University Herald. Juridical Sciences, 53, 488–515.
6. Haritonova, Ju. S. (2023). Pravovye sredstva obespechenija principa prozrachnosti iskusstvennogo intellekta [Legal Means of Realization of the Principle of Artificial Intelligence Transparency]. Journal of Digital Technologies and Law, 1, 337–358.
7. Kutejnikov, D. L., Izhaev, O. A., Zenin, S. S., & Lebedev, V. A. (2020). Algoritmicheskaja prozrachnost' i podotchetnost': pravovye podhody k razresheniju problemy «chernogo jashhika» [Algorithms' Transparency and Accountability: Legal Approaches to Solving the "Black Box" Problem]. Lex russica, 73(6), 139–148.
8. Haritonova, Ju. S., & Jan, T. (2022). Rekomendatel'nye sistemy: vyrabotka pravovyh podhodov v Rossii i Kitae [Recommendation Systems: Elaborating the Legal Approaches in Russia and China]. Predprinimatel'skoe pravo, 2, 50–58.
9. Haritonova, Ju. S., & Tjan'fan, Ja. (2022). Rekomendatel'nye sistemy cifrovyh platform v Kitae: pravovye podhody i praktika obespechenija prozrachnosti algoritmov [Recommendation Systems of the Digital Platforms in China: Legal Approaches and the Practice of Implementation of Algorithms' Transparency]. Zakon, 9, 40–49.
10. Seymour, W. (2023). Detecting Bias: Does an Algorithm Have to Be Transparent in Order to Be Fair? Retrieved from http://ceur-ws.org/Vol-2103/paper_1.pdf

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

The subject of the research in the article submitted for review is, as its title implies, the transparency of algorithms as a legal principle of the automated processing of data relating to a person. The stated boundaries of the study are fully respected by the author. The research methodology is not made explicit in the text of the article, but it is evident that the author used universal dialectical, logical, historical, formal legal and comparative legal research methods.

The relevance of the chosen topic is undeniable and is justified by the author as follows: "The transformation of public relations under the influence of information technologies (which are always based on certain algorithms) requires rethinking the legal regulation of those relations. In recent years, the scientific literature has paid increased attention to such aspects of the use of information technologies as the transparency and accountability of the underlying algorithms. The works of Russian authors often focus on legal means of ensuring the transparency of artificial intelligence algorithms [4],[5],[6]. This trend is probably explained by the fact that transparency is explicitly named among the principles for the development and use of artificial intelligence technologies in the National Strategy for the Development of Artificial Intelligence for the Period up to 2030 (approved by Presidential Decree No. 490 of October 10, 2019). However, the principle of algorithm transparency is not limited to artificial intelligence technologies. Being universal in character, it can be applied to any information technology that uses human data or is designed to make decisions directly affecting the interests of individuals."

The scientific novelty of the work is manifested in a number of the author's conclusions and proposals: "... the principle of algorithm transparency can be defined as a fundamental principle of the automated processing of human data (including personal data and other information related to a person and his environment) presupposing the openness and accessibility of information explaining how the algorithm is used for the automated processing of data about subjects"; "... such information revealing the operation of an algorithm should include information provided or disseminated by the operator: about the existence of algorithmic data processing; about the range (list) of processed data and the sources from which they are obtained; about the derived data that can be obtained as a result of automated processing, and about the purposes and methods of their further use; about the logical scheme(s) underlying the algorithm; about decisions made as a result of automated data processing, including decisions made in automated mode and decisions that can be made with direct human participation on the basis of the processed data, and about the legal consequences of such decisions; about the risks of erroneous decisions and the ways of mitigating them, including the cancellation or revision of decisions at the subject's request; about the ways in which the subject can refuse automated processing of his data. At the same time, it is important to enshrine in legislation such criteria of algorithm transparency as the exhaustive character of the information about the 'input' data and the sources from which they are obtained, as well as the comprehensibility of the description of the algorithms' mechanisms to persons who are not information technology specialists," etc. The article thus makes a definite contribution to the development of domestic legal science and certainly deserves the attention of potential readers.

The scientific style of the research is fully sustained by the author. The structure of the work is logical: in the introductory part of the article, the author substantiates the relevance of the chosen research topic; in the main part, drawing on normative and theoretical material, including foreign sources, the author offers a definition of the concept of "the principle of algorithm transparency," reveals its content in detail, and makes recommendations for improving the relevant Russian information legislation; the final part contains conclusions based on the results of the study. The content of the article corresponds to its title and raises no particular objections. There is a typo in the sentence "This is especially noticeable when decision-making is automated in areas such as hiring, granting loans or providing public services" (in the word "hiring").

The bibliography of the study comprises 10 sources (scientific articles), including works in English. From a formal and factual point of view, this is sufficient. The nature and number of sources used allowed the author to reveal the research topic with the necessary depth and completeness. There is an engagement with opponents, both general and specific (D. L. Kuteynikov, O. A. Izhaev, Yu. S. Kharitonova, etc.), and it is quite sufficient; the scholarly discussion is conducted correctly. The provisions of the work are justified to the necessary extent and illustrated with examples. Conclusions based on the results of the study are present ("Russian legislation, having taken an important step toward regulating the principle of algorithm transparency, is still far from perfect. The current directions for developing the legislation of the Russian Federation in this area are: legal consolidation of the principle itself and extension of its effect to all cases of algorithmic processing of information affecting the interests of individuals; legislative consolidation of the requirement of an understandable description of the mechanisms by which algorithms function; and the requirement to specify in each case an exhaustive list of the information processed. It is also important to establish legal mechanisms allowing subjects to refuse the application of algorithms to them (in particular, in recommendation systems and targeted advertising), as well as to give users the opportunity to refuse to provide certain information or to refuse the use of certain sources from which it is obtained"); they are reliable and well-founded and undoubtedly deserve the attention of the scientific community. The article will be of interest primarily to specialists in the field of information law, provided it is slightly improved: the research methodology should be disclosed and the typos in the text eliminated.