National Security
Safety Rules of Heuristic Behavior Algorithms as a Prototype of the "Morality" of Robotic Systems

Tikhanychev Oleg Vasilyevich

ORCID: 0000-0003-4759-2931

PhD in Technical Sciences

Deputy Head of Department in the Office of Advanced Development, Technoserv Group 

111395, Russia, Moscow, Yunosti str., 13

tow65@yandex.ru
DOI: 10.7256/2454-0668.2022.6.38856

EDN: ONCJRV

Received: 29-09-2022

Published: 30-12-2022


Abstract: The relevance of the chosen subject of research (the use of autonomous robotic systems for various purposes) and of its object (the algorithmic problems arising in controlling the safety of their algorithms) is determined by the existing contradiction between the demand for autonomous robotic systems and the complexity of implementing this requirement safely in software. The study is conducted on the example of robotics applied to autonomous transport systems. The trend towards autonomy in this area generates legal and technological problems related to the safety of the algorithmic support of autonomous systems. These problems manifest themselves as critical application errors that are difficult to identify at the development stage. In robotics, the main potential cause of such errors is that the software of autonomous robotic systems operates outside the framework of the deterministic algorithms that were tested during development. Whereas for conventional technical systems the consequences of application errors are predictable and responsibility for them is assigned, for autonomous robots the situation still requires a solution. Using the general scientific methods of analysis and synthesis, the article examines the main aspects of the current state and prospects of the problem of algorithmizing the use of autonomous systems. Based on a review of the algorithmic problems of using autonomous robots and of the possible consequences and causes of algorithmic errors arising from their use, the article formulates the scientific problem of developing heuristic algorithms whose application is governed by a dynamically formed system of behavioral predicates similar to human morality.


Keywords:

safety of robotic systems, autonomous robotics, control of heuristic algorithms, security control methods, external constraints on behavior, internal limits of behavior, algorithm limitations, algorithm security check, methods for checking algorithms, rules of behavior for robots


Introduction

With the advent of the very first mechanisms, both objective rules for their use and a subjective attitude towards these mechanical amplifiers and substitutes for human functions began to form.

As mechanisms grew more complex, these rules and relationships also became more complex and diverse. Their quintessence can be considered human interaction with robotic systems. With respect to the topic of this article, it can be noted that safety has always been given great importance when organizing interaction with mechanisms, including robotic ones. The measures taken to ensure it were simple at first: kinetic danger was countered by physical separation in space, for example by erecting protective fences around robotic workshops. As the variety and scope of application of robots increased and robotics penetrated more actively into various fields of activity (industry, transport, military affairs), the total number of robots in use grew and, accordingly, so did the probability of damage caused by them.

It is therefore quite natural that a fairly large number of incidents involving industrial robots and robotic vehicles can be cited by now. The most representative sample of such examples comes from the use of robotic systems in military affairs. Initially, robots were used in combat for reconnaissance and mine clearance, or as disposable or reusable means of destruction. With the development of robotic weapons it became possible to create lethal autonomous robotics (LAR), whose use is objectively in demand in modern combat. On October 17, 2001, during the operation of the US and NATO coalition forces in Afghanistan, the first recorded use of an MQ-1 "Predator" unmanned aerial vehicle (UAV) equipped with a guided missile destroyed a self-propelled anti-aircraft installation together with its crew. For the first time it was not a guided munition but a robotic carrier that detected the target and employed weapons against a manned object, although the decision to strike was still issued remotely by a human operator. Then, in March 2021, an event occurred that marked a certain milestone in the history of the relationship between humans and the mechanisms they create: a Kargu-2 attack quadcopter employed its weapons independently, detecting and destroying a target without an operator's command. As later noted in a report to the UN Security Council, this was the first case in history in which an autonomous vehicle carried out an attack by executing an automatic search-and-destroy algorithm without waiting for the operator's permission.

Autonomous unmanned vehicles on public roads, although not designed to cause damage intentionally and, moreover, programmed to prevent it, also pose a certain danger. It stems not only from errors in the robots' own software but also from their reaction to the behavior of drivers and pedestrians, whose actions do not always correspond to the logic of behavior encoded in standard algorithms. The result is road incidents, accidents and even catastrophes. The first fatal accident involving a vehicle under autonomous control was recorded in the US state of Florida in 2016: a Tesla Model S operating under autopilot rammed a truck with a trailer, killing the driver. Several more similar accidents of varying severity have been recorded since. Unfortunately, there is no reason to consider this list closed.

Given this situation, there is an objective need to reliably ensure the safety of robotic systems in all spheres of activity. Since all modern robotic tools are controlled by specialized software, one of the most obvious approaches to reducing errors is to implement safety algorithms within that software.

However, the practical implementation of this obvious approach is hampered by objective difficulties that arise when trying to control the behavior algorithms of robotic systems, especially autonomous ones [1,2]. And any robotic system can be called autonomous to one degree or another: even when it acts under the control of an operator, it still performs some elementary functions independently, and it begins to act fully independently when control is lost.

There is thus a contradiction between the need to implement autonomous systems and the possibility of safely implementing such functionality in the control systems of robotic devices. This contradiction creates a problem at the intersection of legal and technical regulation: all currently used methods of ensuring the safety of robotic systems rest on the principle that the deterministic algorithms embedded in each system are immutable. But if we accept that the algorithm used can be adaptive and therefore modifiable, then it already belongs to the heuristic class; the developer can no longer be fully responsible for the use of a robotic system once what he developed and tested has been modified, and the operator, in turn, cannot guarantee responsibility for the use of a system whose principles of behavior are not fully known to him.

The contradiction between the need to use autonomous robots and the safety problems of their use is determined, first of all, by the lack of reliable methods for controlling their algorithmic support, and it requires a prompt solution. This is what determines the relevance of the topic formulated in this article.

 

1. Analysis of existing problems of ensuring the security of algorithms of robotic systems

Practice shows that the most pressing problems are those of controlling the algorithmic support of autonomous robotic systems in areas where the objective requirement of autonomous operation conflicts with the need to ensure safe operation.

Realizing this, developers of software for controlling robotic systems are trying to implement certain measures, which, for lack of other options, are currently mainly organizational. An example is the concept of using artificial intelligence in military affairs developed by the US Department of Defense, which sets out the basic principles for the use of artificial intelligence in military and dual-use systems [3], along with other foreign regulatory documents [4,5]. The provisions of these documents are implemented in methods of algorithm control: for example, to control developments in the field of artificial intelligence in the US Armed Forces and NATO [6], the TEVV (Testing, Evaluating, Verifying and Validating) program is being implemented.

In Russia, of course, documents regulating the use of artificial intelligence are also being developed, including provisions governing the safety of its algorithms. An example is the "Concept for the Development of Regulation of Relations in the Field of Artificial Intelligence and Robotics Technologies for the Period up to 2024", put into effect by Decree of the Government of the Russian Federation No. 2129-r of August 19, 2020. As part of the implementation of this concept, the "National Code of Ethics in the Field of Artificial Intelligence" was developed in 2021.

But there are nuances in the practical application of such documents.

Firstly, as analysis shows, all adopted laws and regulations, both Russian and foreign, share a common problem: they are self-contained and mainly descriptive, without revealing the mechanisms for implementing the requirements they set out. From the point of view of practical implementation, it is precisely the absence of such mechanisms that prevents solving the problem of the safety of behavior algorithms of robotic systems.

Secondly, none of these documents recognizes artificial intelligence as an actor; they all describe exclusively the responsibility of developers and operators. That is, these concepts are not provisions for the safe development of robotics, but only a basis for regulating the actions of its developers and operators, which is clearly insufficient given the requirement to increase the autonomy of robotic systems.

It can be argued that for systems with rigid algorithms and supervised learning, where knowledge and algorithms change under human control, a solution that ensures safety under existing conditions can still be found, even if only by organizational methods.

The situation with self-learning (unsupervised learning) systems, with so-called "strong" artificial intelligence, is far more critical. Of course, the basic behavior algorithms of such systems are written by the developer, who also sets the rules by which new algorithms are formed during training. But where does developer-controlled modification end, and down to what level can one be absolutely sure of the safety of the modified algorithms? Formally, at some stage the system's connection with the developer may be broken, and the self-learning autonomous system will begin to operate according to rules entirely different from those laid down during its development. How safe will they be? This cannot be determined in advance, yet safety must objectively be ensured. At least at the present stage of development of the subject area, this contradiction can be resolved only by imposing unconditional restrictions that do not depend on the variability of the algorithms, including heuristic ones.

Consequently, to ensure the safe use of autonomous robotic systems, a specific safety mechanism is needed, based not on declarations or broadly worded prohibitions, but on strict mathematical rules with proven effectiveness.

 

2. Possible approaches to the organization of algorithm security control

Currently, the development of algorithms implemented as part of software, including software for robotic systems, is carried out according to rules set by regulatory documentation: international and national standards (for example, in the Russian Federation, GOST R 60.2.2.1-2016/ISO 13482:2014 and GOST R 60.1.2.2-2016/ISO 10218-2:2011), technical regulations, and other similar documents.

But, as analysis of the subject area shows, all existing regulatory documents are focused on the development of systems implementing deterministic algorithms; that is, all the control methods they define rest on the principle that the basic properties of the algorithms do not change. Under this approach, algorithms are analyzed for possible errors by comparative methods involving formal specifications and a set of tools for syntactic analysis and for proving the properties of algorithms.

The effectiveness and safety of algorithms can currently be demonstrated in the following ways:

· heuristically, through analysis of the texts of the programs implementing them;

· by qualitative criteria;

· by quantitative indicators.

The first approach is used for step-by-step (operation-by-operation) analysis of complex algorithms by testers. It is assumed that if all operations are performed correctly and their sequence is predictable, the algorithm is safe. This is the simplest and most subjective of the assessments.

If the results of executing an algorithm can be comprehensively evaluated at least in qualitative form, its correctness is assessed against a predicate formed in advance. This approach is implemented in the TEVV artificial intelligence testing program mentioned above. Under its testing conditions, if the predicate is satisfied over the entire range of possible behavior conditions as the algorithm executes, the algorithm is considered to have passed the test. The problem with this method is that it provides no guaranteed confirmation of the reliability of the result.

And if the result of executing the algorithm can be described by quantitative parameters, the results obtained are compared against a matrix of predicted input-output relationships. In this case the so-called "Hoare triple" is used, describing the preconditions and postconditions that guarantee correct operation of the algorithm under test:

{P} Q {R},

where P is the precondition that must hold before the program Q is started, and R is the postcondition that is true after the program implementing the algorithm completes.
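As an illustration only, the sketch below shows how a Hoare-style contract {P} Q {R} can be checked at run time with assertions; the braking routine and its numeric bounds are hypothetical examples introduced here, not taken from the article.

```python
# Illustrative sketch: a Hoare-style contract {P} Q {R} checked at run time.
# The braking routine and its bounds are hypothetical, introduced only to show
# how pre- and postconditions can frame a program Q.

def brake_to_speed(current_speed: float, target_speed: float) -> float:
    """Program Q: reduce speed towards a target value (toy model)."""
    # Precondition P: speeds are physically meaningful and braking is required.
    assert current_speed >= 0 and target_speed >= 0, "P violated: negative speed"
    assert target_speed <= current_speed, "P violated: target above current speed"

    new_speed = max(target_speed, current_speed - 5.0)  # limited deceleration step

    # Postcondition R: the result never exceeds the initial speed
    # and never drops below the requested target.
    assert target_speed <= new_speed <= current_speed, "R violated"
    return new_speed


if __name__ == "__main__":
    print(brake_to_speed(20.0, 0.0))   # 15.0: one bounded braking step
```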

The problem with this latter method is the need to describe, without exception, all possible behaviors of the system under study, which in most cases is simply impossible, especially for autonomous systems and systems operating in a group.

Indirect control methods also exist and are used, for example analysis of the reliability and consistency of the entire set of mathematical methods employed, which serves as the basis for the hypothesis that there is no reason to distrust a final algorithm built on correct methods. But such approaches have no strict mathematical justification.

Moreover, for systems implementing heuristic algorithms, the testing approaches considered cannot by definition work correctly, primarily because it is practically impossible to form qualitative predicate estimates or the output matrix of the Hoare triple for them. This is because heuristic algorithms cannot be reduced to deterministic finite automata. The Turing test, sometimes mentioned in connection with controlling heuristic algorithms, is also inapplicable here, since its purpose is not to verify the safety of an algorithm but only to establish its nature.

Taking into account these factors, the algorithm control approaches currently used do not ensure the safety of using autonomous robots, both in the field of armed confrontation [7-10] and in the field of peaceful use of artificial intelligence [11,12].

At the same time, when formulating requirements for the systems being developed, customers and operators of autonomous robotic complexes demand such properties of the algorithms as tunability and modifiability (learning and self-learning). This implies the use of heuristic algorithms, but does not remove the requirement that the behavior of robotic systems be safe.

As a result, the task of controlling algorithms embedded in autonomous systems becomes urgent, taking into account their possible modification during operation (training, self-learning).

The task formulated here is essentially similar to one of the classical problems of artificial intelligence: the constraint satisfaction problem. It should be noted that, to date, this problem has not been applied in its direct formulation to controlling the safety of heuristic algorithms implemented in autonomous robotic systems.

Within the task of optimizing the behavior of a robotic system, this problem can be formalized as the following pair of components (illustrated by the sketch after the list):

· an objective function F(x) over the calculated indicators of the control vector x_j with conditionally constant coefficients c_j;

· a system of m behavior restrictions b_i, described as limits on the change of the controlled parameters.
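A minimal sketch of this formulation, assuming a small discrete control space and coefficients and limits invented purely for illustration, might look as follows:

```python
# Minimal sketch of the formulation above: an objective function F(x) with
# conditionally constant coefficients c_j and a system of constraints b_i.
# The coefficients, bounds and grid are invented for illustration only.
from itertools import product

c = [1.0, 2.0]                       # c_j: conditionally constant coefficients
constraints = [                      # b_i: limits on the controlled parameters
    lambda x: x[0] + x[1] <= 10.0,   # e.g. a total resource budget
    lambda x: x[0] <= 6.0,           # e.g. a limit on the first parameter
]

def objective(x):
    """F(x) = sum_j c_j * x_j."""
    return sum(cj * xj for cj, xj in zip(c, x))

def feasible(x):
    return all(g(x) for g in constraints)

# Brute-force search over a small discrete grid of control vectors x_j.
grid = product(range(0, 11), repeat=len(c))
best = max((x for x in grid if feasible(x)), key=objective)
print(best, objective(best))         # (0, 10) 20.0
```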

Analysis of the constraint system b_i allows the set m to be divided into two groups:

· a group of constraints implemented within the algorithms themselves at development time, as a rule for typical behavior situations known in advance;

· a group of constraints formed when calculating the objective function under specific behavior conditions, relative to the current situation and for the system as a whole.

In accordance with the way they are formed, it is logical to call the first group "internal" restrictions and the second "external". Although they seemingly perform the same function, these constraints have a different physical meaning and different principles of formation. For "internal" constraints, the parameters of the objective function and their values can be determined unambiguously and in advance, on the basis of a pre-formed list of typical situations and a description of each of them. For "external" constraints, the set of possible parameter values is too large, and their complete set cannot be reliably determined.

Accordingly, the task arises of understanding the principles by which the different groups of restrictions are applied and how they relate to one another in specific systems.

From the analysis of possible behavior situations, it can be concluded that the "internal" restrictions should be those that concern the technical characteristics of the system itself and can be reliably predicted in advance: maximum speed limits under different driving conditions, limits on maneuvering radii depending on speed and the resulting maximum overload, allowance for the dimensions of the system when passing through narrow sections, and other similar parameters.

"External" restrictions can be formed on situations of interaction with the external environment and other participants of joint actions not provided in advance: rules of divergence on overlapping courses, rules of movement in a group, and the like. Based on this, the "external" constraints include setting the boundaries of the behavior of the system, describing the permissible scope of its application in the form of a set of boundary rules. That is, the restriction is not of individual parameters of the function F(x), and not of particular algorithms for performing operations, but of variants of the final value of the objective function within the space of possible behavioral trajectories (areas).

When using the division into "external" and "internal" restrictions, it must be taken into account that each type has its own characteristics of use.

Thus, the use of "internal" constraints for heuristic algorithms can cause a number of control problems:

Firstly, at the development stage the algorithms are set for a specific form of the objective function, in which only the controlled parameters are expected to change during the search for a solution (task execution). Yet the dynamic modification of behavior algorithms provided for in autonomous robotic systems can change not only the controlled parameters but also the form of the behavioral function itself, rendering the control carried out at the development stage ineffective;

Secondly, the restrictions are formed in advance from the safety conditions of the robotic system itself, and it is against these conditions that the algorithms can be tested for safety during acceptance tests. In a real situation, when the system interacts with a changing external environment and with actors whose behavior is not known in advance, all the indicators against which the algorithms were tested change. Accordingly, the previously conducted safety check loses its validity.

There are as yet no methods for reliably performing such verification, and none are expected in the foreseeable future. That is, the application of "internal" restrictions alone cannot unambiguously guarantee the safety of the behavior of the system as a whole.

The main problems of using "external" constraints are the possible reduction of the functionality of the robotic system by the prescribed behavior boundaries, and the extremely strict requirements on how dynamically the on-board computing facilities of robotic systems must form them.

The analysis of the subject area has shown that the choice between "internal" and "external" constraints of algorithms, as well as constraints automatically generated by the system itself as part of the modification of its behavior algorithms, is one of the most difficult problems of ensuring the safety of autonomous systems.

If the method of "internal" constraints is chosen to ensure the safe use of autonomous robotic systems, well-known approaches implemented as constraints in optimization methods can be used to control the algorithms: search with pruning, penalty function methods, matching methods, genetic algorithms and others. However, for heuristic algorithms, especially in self-learning systems, there are no guarantees of the effectiveness of such methods, and they are unlikely to appear in the foreseeable future.
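For example, the penalty-function idea mentioned above can be sketched as follows, with an "internal" speed limit folded into the objective; all numbers are illustrative assumptions.

```python
# Minimal sketch of the penalty-function idea: an "internal" speed limit is
# folded into the objective as a penalty term, so that ordinary unconstrained
# search avoids unsafe values. All numbers are illustrative.

SPEED_LIMIT = 15.0     # internal constraint: maximum safe speed, m/s
PENALTY_WEIGHT = 1e3   # a large weight makes violations strongly unattractive

def utility(speed: float) -> float:
    """Raw task effectiveness: here, simply 'faster is better' (toy model)."""
    return speed

def penalized_utility(speed: float) -> float:
    violation = max(0.0, speed - SPEED_LIMIT)
    return utility(speed) - PENALTY_WEIGHT * violation ** 2

# Coarse search over candidate speeds: the optimum stays at the internal limit.
candidates = [i * 0.5 for i in range(0, 61)]          # 0.0 .. 30.0 m/s
best = max(candidates, key=penalized_utility)
print(best)                                           # 15.0
```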

When the method of "external" constraints is chosen, the general boundaries of the system's behavior, in space and in time, can be set in the form of a set of allowed (forbidden) spatial and logical variables specified in matrix or vector form. This method is guaranteed to be reliable, but it can restrict the functionality of autonomous systems and significantly increase the requirements on the capabilities of onboard computers.
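A minimal sketch of such an "external" constraint, assuming a grid of allowed and forbidden cells consulted before any planned motion, might look like this (the grid layout and cell size are invented for the example):

```python
# Sketch of "external" constraints given as a matrix of allowed / forbidden
# cells: the system may only plan motion into cells marked 1. The grid layout
# is invented for illustration.

ALLOWED = [          # 1 = allowed, 0 = forbidden (e.g. near people or obstacles)
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 1],
]

def move_allowed(row: int, col: int) -> bool:
    """Check a candidate cell against the externally supplied behavior boundary."""
    inside = 0 <= row < len(ALLOWED) and 0 <= col < len(ALLOWED[0])
    return inside and ALLOWED[row][col] == 1

print(move_allowed(0, 2))   # True: open cell
print(move_allowed(1, 1))   # False: forbidden cell
print(move_allowed(3, 0))   # False: outside the defined behavior area
```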

Thus, both approaches under consideration have their advantages and disadvantages, in terms of ease of implementation, of the extent to which they limit the functionality of the controlled system, and of ensuring guaranteed safety of its use [13].

On this basis, a combined approach seems most logical: where the conditions of algorithm modification allow, simple and dynamic "internal" constraints are used, while the behavior of the system as a whole is controlled by "external" constraints with guaranteed safety. At the same time, the balance of restrictions is likely to be shifted strongly towards the "external" ones, since the "internal" ones can be used only in the simplest situations that can be reliably calculated in advance.

With such a statement of the problem, it is quite logical to draw an analogy with human morality, which forms internal and general rules of behavior for various conditions.

 

3. On the principles of implementation of the proposed approach

If we accept the hypothesis that the proposed approach to assessing the safety of heuristic algorithms is workable, the task arises of describing the principles of its implementation.

Within the framework of the proposed approach, one of the most important tasks to be solved when implementing algorithms based on combined constraints is the formation of the predicates against which the safety of particular actions will be checked. Returning to the analogy with morality, we can say that something similar to Kant's categorical imperative will have to be formed, but for systems controlled by artificial intelligence.

It may be recalled that, in general, the "categorical imperative" rests on the observation that a person's motives do not always lead to actions useful to other people and to society as a whole, since they depend on the object of desire. Therefore each individual needs to control himself, assessing every action by whether it could be accepted as a law of behavior in society, and assessing it independently of the empirical conditions in which the action takes place, thereby imposing on himself a certain imperative, a duty, to observe unconditional rules of behavior. From the point of view of controlling the safety of algorithms of autonomous robotic systems, such an imperative amounts to checking the safety of heuristic algorithms before each iteration of behavior is executed, on the basis of predicting the result of the action and evaluating its consequences. As a result, the outcome is judged not only by the effectiveness the system displays in specific conditions, but also by its compliance with the rules set as the predicate of correct behavior.
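As an illustrative sketch of such a check-before-act loop, the code below predicts the outcome of a candidate action and admits it only if the safety predicates hold; the predicates and the one-step prediction model are assumptions made for the example, not the article's concrete rules.

```python
# Sketch of the behavioral "imperative" described above: before every
# iteration the system predicts the outcome of a candidate action and checks
# it against safety predicates. Predicates and the prediction model are
# illustrative assumptions.

def predict_outcome(state: dict, action: dict) -> dict:
    """Toy one-step forecast of the result of performing the action."""
    return {
        "speed_mps": state["speed_mps"] + action["accel_mps2"],
        "min_distance_m": state["min_distance_m"] - action.get("closing_m", 0.0),
    }

SAFETY_PREDICATES = [
    lambda s: s["speed_mps"] <= 15.0,        # never exceed the speed bound
    lambda s: s["min_distance_m"] >= 2.0,    # never close below a safe distance
]

def act_if_safe(state: dict, action: dict) -> dict:
    predicted = predict_outcome(state, action)
    if all(p(predicted) for p in SAFETY_PREDICATES):
        return predicted                      # action admitted for execution
    return state                              # action rejected, state unchanged

state = {"speed_mps": 14.0, "min_distance_m": 3.0}
print(act_if_safe(state, {"accel_mps2": 2.0}))                    # rejected: too fast
print(act_if_safe(state, {"accel_mps2": 0.5, "closing_m": 0.5}))  # admitted
```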

Analysis shows that the rules required will probably be somewhat broader than Asimov's Three Laws of Robotics.

Firstly, because, sad as it is to admit, in a number of situations a robot will be used against a person, for example to protect people and objects within various forms of armed confrontation. Moreover, the principle of unconditional subordination of a robot to a human, declared by many experts, will not always be observed in the practice of using artificial intelligence. The conflicts of recent decades provide many examples of modern dual-use technologies being put to destructive purposes: the use of communication, navigation and software systems by illegal armed groups in military operations, the use of home-made or purchased UAVs to carry out terrorist attacks, and the like. It is quite logical to assume that protection against such use should be built into the software of robotic systems, including through proactive self-monitoring of the safety of their algorithms. Recall that the existing concepts for the development of artificial intelligence make no provision for such measures.

Secondly, because, as noted earlier, robotic systems in most cases do not operate in isolation: they are used where other unmanned and manned assets may already be present and performing their own tasks, assets whose exact intentions and trajectories are unknown in advance but must be taken into account during interaction. The rules of mutual behavior, for example when autonomous robots maneuver, can be formed dynamically by each participant in events, for instance as allowed vectors or motion matrices built over time with allowance for the predicted motion of the interacting systems. The problem is that with a large number of agents or with spatial constraints, the volume of calculations does not merely grow explosively; worse, at some moment the vector of allowed directions of movement for one or more participants may turn out to be empty, or the allowed directions of several assets may intersect or be mutually exclusive. A decision will then have to be made that cannot be strictly calculated but can only be built on a system of restrictive rules of mutual behavior.
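The following sketch illustrates that situation under invented geometry: each agent's set of allowed headings is pruned by the positions of the others, and an empty set would signal that a rule of mutual behavior, rather than a calculation, must decide.

```python
# Sketch of the situation described above: each agent's set of allowed headings
# is pruned by the forecast positions of the others; when a set becomes empty,
# no strictly calculated maneuver exists and a behavioral rule must decide.
# Geometry, radii and headings are invented for illustration.
import math

HEADINGS = [i * 45 for i in range(8)]          # candidate directions, degrees

def blocked(own_pos, heading_deg, other_pos, radius=1.2, step=1.0):
    """A heading is blocked if one step along it ends too close to another agent."""
    nx = own_pos[0] + step * math.cos(math.radians(heading_deg))
    ny = own_pos[1] + step * math.sin(math.radians(heading_deg))
    return math.dist((nx, ny), other_pos) < radius

def allowed_headings(own_pos, others):
    return [h for h in HEADINGS if not any(blocked(own_pos, h, o) for o in others)]

# Two other agents box the system in from two sides.
others = [(1.0, 0.0), (0.0, 1.0)]
print(allowed_headings((0.0, 0.0), others))
# [180, 225, 270]: only headings away from both agents remain; an empty list
# would mean the decision must fall back on mutual-behavior rules.
```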

Based on these circumstances, within the framework of the formulated task, the control of algorithms can be ensured by a system of rules based on the following provisions:

· the main way to ensure the safety of heuristic algorithms at the current level of technology is a combined approach: setting "external" constraints as a system of rules applied to the current situation, together with pre-prepared "internal" constraints;

· "internal" constraints are formed in advance for known typical behavior situations;

· "external" constraints should be formed dynamically, taking into account the situation developing at each moment of time;

· the "external" constraints set by the situation and the "internal" constraints formed in advance should be as flexible as possible so as not to limit the functionality of robotic systems.

From a formal point of view, the formation of decisions using this system is a discrete forecasting process that evaluates the result of actions before each iteration is performed, based on the objective function and taking dynamic constraints into account. Note that this is a computationally intensive process which, given the multiplicity of possible actions, can lead to a "combinatorial explosion". The problem is complicated by the fact that, when calculating the predicative function, the reaction of the interacting systems must be taken into account at each step of the calculation. However this is done, whether by game theory, scenario analysis or other methods, it increases the dimensionality of the algorithms being evaluated. To counter this problem, well-known mathematical methods of dimensionality reduction can be applied to the forecasting task. The result of the solution is the cutting off of unacceptable options for action that do not satisfy the specified values of the predicative behavior variables (Figure 1).
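A toy sketch of this discrete forecasting step, with an invented state model and predicate, shows how cutting off inadmissible branches at every iteration keeps the option space from exploding:

```python
# Sketch of the discrete forecasting step described above: candidate actions
# for the next iteration are forecast, those violating the behavior predicate
# are cut off, and only admissible options go forward. Pruning at every step
# keeps the otherwise combinatorial option space manageable. The state model,
# actions and predicate are illustrative assumptions.

ACTIONS = [-1.0, 0.0, 1.0]                      # speed change per step, m/s

def admissible(speed: float) -> bool:
    """Behavior predicate: speed must stay within the safe band."""
    return 0.0 <= speed <= 15.0

def safe_plans(start_speed: float, horizon: int):
    """Enumerate action sequences, cutting off any branch whose forecast
    violates the predicate at any step (instead of checking full sequences)."""
    plans = [((), start_speed)]
    for _ in range(horizon):
        plans = [
            (seq + (a,), speed + a)
            for seq, speed in plans
            for a in ACTIONS
            if admissible(speed + a)            # prune unacceptable branches early
        ]
    return plans

print(len(safe_plans(start_speed=14.0, horizon=3)))   # 22 < 27: branches pruned
```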

Fig. 1. The principle of organizing the control of heuristic algorithms

It seems logical that decisions on evaluating the results of executing modifiable algorithms in trivial situations, where existing rules suffice, should be made by the software of the robotic system itself, while the formation and refinement of the system of behavior predicates should be determined by a group decision external to the autonomous system. Data on such decisions can be processed, for example, in some "cloud" service that stores a centrally modified system of rules (Figure 2).

 

Fig. 2. The principle of modifying the system of predicates of behavior

Each robotic system would then query the cloud for solutions to its situation and, if any exist, receive an answer that determines its actions. If no ready-made predicate of behavior exists for the situation, because such a situation has never arisen before, a request for the development of a new rule is generated automatically. This "cloud" storage, physically external to the system, would make it possible to form a set of combined restrictions on algorithms that, in the way they are created and applied, really do resemble morality, and that act regardless of the affiliation of the robotic systems and their field of application.
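Purely as an illustration, the sketch below stands in for such a centralized rule store with an in-memory object; the storage, the situation keys and the rule-request channel are hypothetical, and a real deployment would of course query a networked service rather than this stand-in.

```python
# Sketch of the centralized rule store described above. The storage, situation
# keys and the "request new rule" channel are hypothetical illustrations.
from typing import Callable, Dict, Optional

class CloudRuleStore:
    """In-memory stand-in for a centrally maintained system of behavior predicates."""
    def __init__(self):
        self._rules: Dict[str, Callable[[dict], bool]] = {
            "crossing_courses": lambda s: s["closest_approach_m"] >= 10.0,
        }
        self.pending_requests = []              # situations awaiting a new rule

    def get_rule(self, situation_type: str) -> Optional[Callable[[dict], bool]]:
        rule = self._rules.get(situation_type)
        if rule is None:
            # No ready-made predicate: automatically request development of one.
            self.pending_requests.append(situation_type)
        return rule

store = CloudRuleStore()

# Known situation: the robot receives a predicate that determines its actions.
rule = store.get_rule("crossing_courses")
print(rule({"closest_approach_m": 12.0}))       # True: maneuver admissible

# Unknown situation: no predicate yet, a rule-development request is queued.
print(store.get_rule("group_convoy_merge"))     # None
print(store.pending_requests)                   # ['group_convoy_merge']
```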

The implementation of the proposed methodology is likely to create the prerequisites for the widespread use of combined restrictions with self-regulation, limiting the functionality of robotic systems minimally while providing a high level of safety.

 

Conclusion

In this paper, the task of forming a system of rules for ensuring the safety of the algorithms of autonomous robotic systems has been formulated in general terms.

The problem for whose solution this statement is formulated is extremely relevant for any autonomous systems that pose a potential danger to humans: transport and industrial systems possessing kinetic energy and, of course, combat systems, whose very purpose is destruction [14,15]. The problem requires both an organizational solution, in terms of amending regulatory and guidance documentation, and a technological one, in terms of developing and implementing methods for controlling heuristic algorithms [16,17].

As part of the solution of the formulated problem, it is proposed to introduce an algorithmic analogue of Kant's categorical imperative: to optimize the behavior of a robotic system not with respect to the goal it has been set, but with respect to the dynamics of the state and the potential needs of all interacting systems. This principle can be implemented on the basis of the proposed imperative rules, by introducing them as "internal" and, most importantly, "external" limitations on algorithms, and by renouncing, paradoxical as it may seem, the priority of rational behavior in favor of safe and predictable actions.

The use of the proposed approach will make it possible to form a system that includes:

· a storage system and services for providing, on request, rules corresponding to typical situations;

· means and bodies for identifying non-standard situations and forming new rules;

· means for revising the existing system of rules, refining it and removing outdated rules.

Such a structure will ensure the formation and maintenance of a system of rules for regulating rational behavior and controlling algorithms of autonomous systems.

Taking the above into account, the problem statement formulated in this article serves as a prerequisite for solving the problem of the safety of the algorithms that control the behavior of autonomous robotic systems and, in the longer term, perhaps, for recognizing such systems as responsible subjects in certain fields of activity.

References
1. A. Simulin et al. (2015) Some aspects of the use of robotics in military affairs, Conference Collection Center Sociosphere 27, 67-71.
2. Hover and stare: FCS testing UAVs. Military.com. URL: https://www.military.com/defensetech/2008/05/30/hover-and-stare-fcs-testing-uavs (date of access: 21.11.2021).
3. DOD Adopts Ethical Principles for Artificial Intelligence. US Department of Defense. URL: https://www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence (date of access: 21.11.2021).
4. Recommendation No. 2102 (2017) of the Parliamentary Assembly of the Council of Europe "Technological convergence, artificial intelligence and human rights". URL: http://assembly.coe.int/nw/xml/XRef/Xref-XML2HTML-en.asp?fileid=23726 (date of access: 21.11.2021).
5. Recommendation No. 2069 (2015) of the Parliamentary Assembly of the Council of Europe "Technological convergence, artificial intelligence and human rights". URL: https://www.cilevics.eu/wp-content/uploads/2019/01/EDOC14303.pdf (date of access: 21.11.2021).
6. E. Christie (2020) Artificial Intelligence at NATO: dynamic adoption, responsible use. NATO Review, 24 November 2020. URL: https://www.nato.int/docu/review/articles/2020/11/24/artificial-intelligence-at-nato-dynamic-adoption-responsible-use/index.html (date of access: 29.07.2022).
7. D. Chirov and K. Novak (2018) Promising areas for the development of special-purpose robotic systems, Security Issues 2, 50-59. https://doi.org/10.25136/2409-7543.2018.2.22737.
8. S. Khripunov, I. Blagodaryashchev and D. Chirov (2015) Military robotics: modern trends and development vectors, Trends and Management 4, 410-422.
9. Army Equipment Program in support of President's Budget 2015. US Army G-8.
10. É. Pflimlin (2017) Drones et robots: La guerre des futurs. Levallois-Perret, France.
11. J. Weaver (2013) Abhor a Vacuum: The Status of Artificial Intelligence and AI Drones Under International Law, New Hampshire Bar Journal 2, 14-21.
12. O.V. Tikhanychev (2020) On improving indicators for assessing the decision support systems' software quality, IOP Conference Series: Materials Science and Engineering 919(5), 052009. https://doi.org/10.1088/1757-899x/919/5/052009.
13. O.V. Tikhanychev (2022) Influence of the Problem of Safety Control of Heuristic Algorithms on the Development of Robotics, in Robotics, Machinery and Engineering Technology for Precision Agriculture. Smart Innovation, Systems and Technologies, edited by M. Shamtsyan (Springer, Singapore), pp. 335-345. https://doi.org/10.1007/978-981-16-3844-2_31.
14. D. Johnson (2006) Computer Systems: Moral entities but not moral agents, Ethics and Information Technology 8, 195-204. https://doi.org/10.1007/s10676-006-9111.
15. A. Schuller (2017) At the Crossroads of Control: The Intersection of Artificial Intelligence in Autonomous Weapon Systems with International Humanitarian Law, Harvard National Security Journal 8, 379-425.
16. T.M. Lopatina (2005) Nekotorye aspekty problemy moral'noi otvetstvennosti komp'yuternykh sistem [Some aspects of the problem of moral responsibility of computer systems], Pravovye voprosy svyazi 1, 12-13.
17. O.V. Tikhanychev (2020) Statement of the problem of ensuring the security of intelligent systems algorithms, National Security 5, 45-55. https://doi.org/10.7256/2454-0668.2020.5.33286.

Peer Review

Peer reviewers' evaluations remain confidential and are not disclosed to the public. Only external reviews, authorized for publication by the article's author(s), are made public. Typically, these final reviews are conducted after the manuscript's revision. Adhering to our double-blind review policy, the reviewer's identity is kept confidential.

The article submitted for review discusses safety rules for heuristic behavior algorithms. The research methodology is based on the study and generalization of the literature on the topic and on a visual presentation of the author's proposals for improving the safety of heuristic behavior algorithms of robotic systems. The author rightly links the relevance of the work to the fact that, on the one hand, the need to use autonomous robots is becoming increasingly obvious, while on the other hand their use must be made safe, which depends, among other things, on the availability of reliable methods for controlling their algorithmic support. The study is relevant for any autonomous systems that pose a potential danger to humans: combat, transport and industrial systems possessing kinetic energy.

The scientific novelty of the reviewed study, in the reviewer's opinion, lies in the proposal to introduce an algorithmic analogue of Kant's categorical imperative: to optimize the behavior of a robotic system not with respect to the goal it has been set, but with respect to the dynamics of the state and potential needs of all interacting systems, on the basis of the proposed imperative rules, by introducing them as "internal" and "external" constraints on algorithms and abandoning the priority of rational behavior in favor of safe and predictable actions.

The article is structured into the following sections: Introduction; Analysis of existing problems of ensuring the safety of algorithms of robotic systems; Possible approaches to the organization of algorithm security control; On the principles of implementation of the proposed approach; Conclusion; and Bibliography. The author examines the documents regulating the use of artificial intelligence, including provisions governing the safety of its algorithms, and dwells on the nuances of their practical application; the ways of proving the effectiveness and safety of algorithms (analysis of program texts, qualitative and quantitative criteria); and the methods of "internal" and "external" restrictions for controlling algorithms, their advantages and disadvantages in terms of ease of implementation, the extent to which they limit the functionality of the controlled system, and the guarantees of safe application. The article notes that, from the point of view of controlling the safety of algorithms of autonomous robotic systems, the proposed categorical imperative will be "to control the safety of heuristic algorithms before implementing each iteration of behavior based on calculating the prediction of the result of an action and evaluating its consequences. As a result, the final result will be evaluated not only by the effectiveness of the system, which manifests itself in specific conditions, but also by compliance with the rules set as a predicate of correct behavior." The article is illustrated with two figures, "The principle of organizing the control of heuristic algorithms" and "The principle of modifying the system of predicates of behavior", which clearly reflect the proposed approach. The bibliographic list includes 17 sources (publications by foreign and Russian scholars on the topic of the article, normative materials and Internet resources), and the text contains targeted references to these sources, confirming an engagement with opposing views.
As a remark, it can be noted that when presenting the Hoare triple, which describes the preconditions and postconditions guaranteeing correct operation of the algorithm being tested, the symbols used in the formula differ from those given in the explanation (S and R). The reviewed material corresponds to the scope of the journal "National Security / nota bene", addresses a topical subject, and contains a non-trivial view of the problem of ensuring the safety of algorithms of robotic systems; despite the debatability of the point of view expressed, the material may arouse readers' interest, and it is therefore recommended for publication after some refinement of the notation used in the article.