Analysis of approaches to determining legal liability for the actions of artificial intelligence in the medical field: the experience of the United States and Russia
Abstract: This article presents a comparative analysis of existing approaches to determining liability for the actions of artificial intelligence in public medical relations in the United States and Russia. The comparative analysis identifies the basic problems of transparency in AI decision-making, considers theoretical and practical scenarios for the use of AI in the medical field, and proposes possible compensatory legal measures to ensure the safe integration of AI into the Russian healthcare sector. The subject of the study is the formalization of AI actions in the legal relationship between doctor and patient. The object of the study comprises regulatory documents, recommendations and other instruments governing the use of AI in medical legal relations in Russia and the United States, as well as judicial practice, academic publications and analytical reports on the issues under study. The research methodology integrates a complex of modern philosophical, general scientific and special scientific methods of cognition, including the dialectical, systemic, structural-functional, hermeneutic, comparative legal and formal legal (dogmatic) methods. Special emphasis is placed on a comparative legal study of the autonomy of AI involved in the doctor-patient legal relationship, followed by the identification of potential scenarios for regulating liability for AI actions. The measures proposed as a result of the study can be applied in legislative activity and its implementation by the authorities in charge of integrating AI into public relations in Russia, including the healthcare sector.
Keywords: artificial intelligence, electronic person, comparative legal research of AI, medical law, telemedicine, secure AI, public law, administrative law, information law, law enforcement practice
The status and future role of artificial intelligence (hereinafter, AI) is being actively discussed on various international platforms (OECD, EU, UN). Special attention is paid to the integration of AI into the healthcare sector and the associated legal and public risks; in this regard, it is proposed to examine the US experience in greater detail in order to identify legal constructs acceptable for Russia.
Turning to the legal regulation of AI in the US medical field, the following documents should be noted:
1. Executive Order No. 13859 of February 11, 2019, "Maintaining American Leadership in Artificial Intelligence," which defines the directions of work on AI R&D and the practical application and refinement of AI technologies, as well as the creation of the necessary infrastructure, data sets, and technical and ethical standards in the field of AI.
2. The US Guidance for the Regulation of Artificial Intelligence Applications, which sets out the principles to be taken into account when developing AI technologies.
3. Digital health guidelines and recommendations, which fix the regulator's position on AI-based solutions and the approach to their classification and registration.
4. The Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act. These documents establish the obligation to ensure the privacy of patient data for all participants in the provision of medical services.
Turning to the practical use of artificial intelligence, IBM Watson deserves mention: the technology uses cognitive computing to interpret clinical information about cancer patients and to determine an evidence-based course of treatment.
The Deep Patient AI was trained for medical diagnostics on a database of approximately 700,000 patient records and, when tested on new patient data, showed remarkably good results in predicting diseases. Without any special instructions, the AI found patterns that are not clearly visible to the attending physician and that indicate a wider range of problems (including liver cancer).
In practice, this means that a large array of oncological data is loaded into the program, and recommendations derived from this Big Data are provided to the doctor. Using machine learning, the system analyzes the data as quickly as possible and offers suggestions to the attending physician.
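The pipeline described above can be sketched, purely for illustration, as follows. This is not the actual IBM Watson or Deep Patient code; the function, feature vectors and data are entirely hypothetical. A model trained on historical patient records returns a bare risk score to the doctor, with no human-readable rationale, which is precisely the opacity discussed below.

```python
# Illustrative sketch only (not Watson/Deep Patient): a trained model
# returns a risk suggestion to the doctor without explaining *why*.
from math import dist  # Euclidean distance, Python 3.8+

def predict_risk(records, labels, new_patient, k=3):
    """k-nearest-neighbours risk score: the fraction of the k most
    similar historical patients who developed the disease. The caller
    receives only a number, with no reasoning attached."""
    ranked = sorted(range(len(records)),
                    key=lambda i: dist(records[i], new_patient))
    nearest = ranked[:k]
    return sum(labels[i] for i in nearest) / k

# Hypothetical, hand-made feature vectors (e.g. normalised lab values).
history = [(0.9, 0.8), (0.85, 0.9), (0.1, 0.2), (0.2, 0.1)]
outcomes = [1, 1, 0, 0]  # 1 = disease confirmed, 0 = healthy

risk = predict_risk(history, outcomes, new_patient=(0.88, 0.85))
print(risk)  # high score: the new patient resembles the two sick patients
```

Even in this toy version, the output (a single score) carries no explanation of which features drove it, which is the legal crux of the "black box" problem examined in this article.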
In the author's view, the question of liability naturally arises when an error occurs, whether in the AI's prediction or at the stage of the doctor's decision-making.
In the context of the medical application of AI, this problem is generally referred to as the "black box" of AI in healthcare: the use of opaque computational models for healthcare-related decision-making. The algorithms underlying such models are opaque, and the conclusions they produce cannot be clearly understood by the attending physician, and sometimes cannot even be explicitly formulated.
From a legal standpoint, an indirect factor blurring responsibility for AI-proposed solutions is the discreteness and diffuseness of development: the AI may be built by different teams that do not work in close conjunction or under unified coordination. This complicates determining the responsibility of an AI development team in the medical field (given the multicomponent analytical tools and the depth of patient-data analysis).
The next significant issue in determining legal responsibility for AI actions is the quality of the data supplied for AI analysis: patient data may be provided incorrectly at various stages, by the doctor himself, during laboratory studies, and so on.
In the context of determining liability for AI in the US medical field, it should be noted that courts traditionally do not assign legal liability to AI systems because they are not legal persons. As a result, the question of a separate legal construct for AI arises, since there is a risk of diffusing responsibility among an unlimited number of persons (from the patient and the doctor to the developer(s) or persons influencing the infrastructure and data). American scholars note this as well, pointing to the difficulty of establishing cause-and-effect relationships due to the large number of legal relationships and mechanisms of interaction between humans and AI.
At the same time, experts note that a possible approach to AI-related litigation is the res ipsa loquitur doctrine, which allows negligence to be inferred against a particular defendant when the harm is of a kind that does not ordinarily occur in the absence of negligence. The doctrine rests on the premise that the defendant had exclusive control over the instrumentality that caused the harm.
The next aspect of determining liability for AI in the medical field concerns a doctor's deviation from the standards of medical care, which automatically constitutes a breach of the doctor's duties to the patient.
Under this approach, the doctor's actions are compared with those of a hypothetical colleague of comparable experience, education and technical equipment.
At the same time, the question of how AI should be interpreted as technical equipment, and of its role in the standard of medical care, remains open.
The next aspect concerns vicarious (subsidiary) liability in the application of AI in the medical field and turns on the legal status of AI as an "agent." In this logic, a medical institution exercises some control or authority over an agent (for example, an employee: usually a doctor, but potentially also an AI system).
Scholars propose the theoretical use of a strict liability model, which would automatically assign responsibility solely to the manufacturer or developer of the AI, including through the "single enterprise" construct. Single-enterprise liability implies that each organization in a group of interconnected companies can be jointly and severally liable for the actions of the other organizations in the group. This theory of liability is well suited to AI, since it does not require the companies to function jointly, only to work toward a common goal: to design, program and produce an AI product or its components.
However, even here a question arises from the technical nature of the autonomy and independent development of AI. If a court considers the AI to be completely autonomous, then assigning vicarious liability to the medical institution for any damage caused by the AI will be impossible, since such an autonomous AI is functionally beyond the principal's control.
The next issue scholars note is the unanimous conclusion of the courts that doctors have a legal obligation to inform patients of essential information regarding the proposed course of treatment and other information related to the provision of medical services, thereby strengthening the doctor's role in ensuring that the patient receives adequate information to make an informed decision. Hence the question of what constitutes sufficient patient information, including with respect to the AI used, its decision-making algorithms, and so on.
Turning to the Russian legal regulation of AI in the medical field, the following documents should be noted:
1. Decree of the President of the Russian Federation of 10.10.2019 No. 490 "On the Development of Artificial Intelligence in the Russian Federation", which defines the basic legal guidelines for the development, regulation and further deployment of AI in Russia; this document is supplemented by a list of specific measures in Order of the Government of the Russian Federation of 19.08.2020 No. 2129-r.
2. Federal Law No. 323-FZ of 21.11.2011 "On the Fundamentals of Protecting the Health of Citizens in the Russian Federation". Article 38 of this law establishes the need for state registration of any software used in the medical field (the procedure is fixed by Decree of the Government of the Russian Federation No. 1416 of 27.12.2012).
3. Order of the Ministry of Health of the Russian Federation of 06.06.2012 No. 4n, which approves the classification of software as a medical device, under which AI qualifies as a medical device with a high degree of risk.
4. Related federal laws: Federal Law No. 152-FZ of 27.07.2006 "On Personal Data" (Clause 1 of Part 1 of Article 6 of which requires the subject's consent to the processing of his or her data) and Federal Law No. 123-FZ of 24.04.2020 "On Conducting an Experiment to Establish Special Regulation in Order to Create the Necessary Conditions for the Development and Introduction of Artificial Intelligence Technologies in a Constituent Entity of the Russian Federation" (providing for an experimental regime regulating AI activities with depersonalized data of subjects).
At the level of domestic doctrinal research, a number of scholars (M.A. Lipchanskaya, T.V. Zametina) note a distorted perception of the concepts of telemedicine and AI in the medical field, as well as the need to deepen the legal regulation of the relations that arise between doctor and patient when AI technology is used. E.A. Ostregnova complements this position, noting that in the near future the right to receive medical care may be supplemented by a separate role for AI.
In his voluminous report on the regulation of AI in the medical field, I.V. Ponkin notes the need for breakthrough development of the reference subject-object area of regulatory regulation of digital AI model technologies. He also emphasizes the need to transform the structure and ontology of healthcare regulation, including a synthesis of digital law, medical law and bioethics. In addition, he justifies the need for a large-scale audit of healthcare regulation, including with the use of AI and Big Data technologies.
Investigating the phenomenon of legal responsibility for AI actions in medicine, the position of the practicing lawyer M.S. Varyushin should be noted. In his review article, the author observes that the first basic question is whether the medical organization has properly formalized the procedure for implementing the AI. If the AI is used without registration as a medical device, responsibility remains with the medical organization, which may subsequently bring a claim against the AI developer (depending on the terms and registration procedure of the license agreement or the agreement on alienation of exclusive rights to the AI). If the AI is registered as a medical device, harm caused to the patient will be compensated by the operating medical organization, which may bring a recourse claim against the AI developer (rights holder) if an AI malfunction, rather than a doctor's mistake, is established.
In addition, T.A. Kovelina, A.V. Sobyanin and V.M. Marukhno note in their study that the current legal regulation of AI does not provide for the patient's voluntary consent to the use of artificial intelligence in medical intervention, and that responsibility falls solely on the manufacturer and/or user, although in the future the circle of responsible persons may be expanded.
Thus, on the basis of the above, the following regulatory initiatives in the field of AI application in medicine are recommended:
1. Creation of a unified standard for the provision of medical services using AI, with mandatory notification of the patient that the technology is being used, which requires appropriate amendments to the relevant legislation.
2. Creation of universal rules for distributing responsibility for patient data received by the AI (patient, operator, doctor), for their processing (doctor, developer, external factor), for the interpretation of data (doctor, developer), and for procedures for auditing the results obtained (developer, supervisory authority (including forensic examination), doctor, medical organization).
3. Formation of a public coordination center that balances the capabilities and interests of developers, patients, doctors and regulatory authorities in order to ensure the safe integration of AI into the healthcare sector.