Culturological reconstruction of ChatGPT's socio-cultural threats and information security of Russian citizens
// Philosophy and Culture. – 2023. – № 8. – P. 46-56.
Abstract: The subject of the study is the socio-cultural threats to the information security of Russian citizens associated with ChatGPT (Chat Generative Pre-trained Transformer), a generator of machine-produced text responses that simulates dialogue. The object of the study is the assessment of the balance between the advantages and the threats of generative language models based on machine learning, as reflected in recent (2021-2023) scientific literature (journals of HAC categories K1 and K2 and of Scopus Q1 and Q2). The scientific novelty of the research lies in its culturological approach to analyzing the security threats to Russian citizens associated with the use of ChatGPT as one of the technologies of "artificial intelligence". The problem formulation of the classical Turing test, "distinguishing a person from a machine", is characterized as a scholastic abstraction; in its place a more correct and productive approach is proposed: a socio-cultural assessment of the value of new computer technologies, grounded in cultural axiology. The starting point of the analysis is the determination of the socio-cultural value (or, conversely, the damage) resulting from the use of generative language models based on machine learning. The contribution and responsibility of the various socio-cultural subjects involved in their creation and application are then examined: the user, the creator, and the developer. Applying the proposed approach yields a deconstruction of the discourse of the "philosophy of artificial intelligence" insofar as it uncritically relays developer declarations intended for marketing and attracting financing. This hypertrophied perception, precariously balanced on the edge of utopia and dystopia, is assessed as a risk of incorrectly identifying and ranking threats to information security.
Assumptions about a hypothetical "superweapon of psychological warfare" mask present-day incidents of cross-border leakage of confidential data and the risk of being held liable for publishing deliberately false information or illegal content as a result of using ChatGPT. National security measures are recommended, including restrictive measures, raising users' general civic culture of information security, and orienting domestic developments of solutions of this type toward the traditional values, socio-cultural identity, and interests of Russian citizens.
Keywords: traditional values, information security, socio-cultural threats, digital sovereignty, Turing test, artificial intelligence, generative language model, ChatGPT, socio-cultural identity, disinformation