
Culture and Art

Socio-cultural risks of multimodal large generative models of "artificial intelligence" (GenAI)

Bylevskiy Pavel Gennadievich

ORCID: 0000-0002-0453-526X

PhD in Philosophy

Associate Professor, Department of Information Culture of Digital Transformation, Department of International Information Security, Moscow State Linguistic University

119034, Russia, Moscow, Ostozhenka str., 36, office 106

pr-911@yandex.ru
DOI: 10.7256/2454-0625.2024.6.70926

EDN: DWMERQ

Received: 02-06-2024

Published: 03-07-2024


Abstract: The article examines the conditions for ensuring the information security of Russian citizens when generative "artificial intelligence" technologies are used in the socio-cultural sphere. The topic is relevant because of the rapid development of computer neural networks that generate multimedia content: texts, images, sounds and video. Developers classify generative technologies as "artificial intelligence", position them as a "new nuclear project" capable of radically increasing the productivity of socio-cultural creativity, and receive significant government, corporate and investment financing. The object of the study is modern multimedia generative models; the subject is the possibilities of their use in socio-cultural creativity and the associated information security risks. The purpose of the study is to determine the conditions for ensuring the information security of Russian citizens when multimodal generative technologies are used in the socio-cultural sphere. The materials of the study are scientific publications of recent years (2021–2024) in Russian journals on the HAC list (categories K1, K2) and in international Scopus publications (quartiles Q1, Q2) devoted to critical analysis of the capabilities of multimodal generative models, the associated risks and security tools. A philosophical methodology is applied: theoretical and cultural analysis and synthesis. The scientific novelty of the article lies in applying a philosophical, theoretical-cultural methodology to a critical comparison of developers' declarations with the actual potential of multimodal generative technologies. The result of the study is an assessment of how greatly the risks predicted on the basis of positioning these technologies as "artificial intelligence" are exaggerated.
The real risks are argued to include: development costs incommensurate with the usefulness of the results; a lowering of the cultural level of professional and amateur creativity and a worsening of the tastes of the mass audience; and use in "social engineering", fraud, mass disinformation, fake news, manipulation of public consciousness, "cancel culture", the destruction of traditional values and the substitution of socio-cultural identity. Means of ensuring the security of Russian citizens in the development and use of multimedia generative technologies in the socio-cultural sphere are recommended.


Keywords:

information security, artificial intelligence, multimedia generative models, mass culture, disinformation, fake news, social engineering, cancel culture, traditional values, socio-cultural identity

This article is an automatic translation of the Russian original.

Introduction

Among modern computer-network technologies of "artificial intelligence", an important place is occupied by multimodal large generative models (Generative Artificial Intelligence, GenAI), which automate the creation of texts, sounds, images and video on an industrial scale. Developers position the generated multimedia content as "comparable" to human creativity and predict that it can be produced many times more cheaply, faster and more productively. Compared by their proponents to a "new nuclear project", such developments receive support at the level of national programs and federal projects, along with significant and rapidly growing government, corporate and investment financing. Given the importance of the socio-cultural needs of the mass audience and of professional and amateur creativity, multimodal large generative models are recognized as one of the priorities of state programs for the development of artificial intelligence technologies (see Decree of the President of the Russian Federation No. 124 of 15 February 2024 "On Amendments to Decree of the President of the Russian Federation No. 490 of 10 October 2019 'On the Development of Artificial Intelligence in the Russian Federation' and to the National Strategy approved by this Decree").

However, the emergence, development and mass use of new computer-network technologies in various industries and spheres entails accompanying risks, which should be assessed by relevant specialists. In studying the results and prospects of applying GenAI in the socio-cultural sphere, it is important to avoid uncritically relaying developers' declarations, which are sometimes marketing in nature. For multimodal large generative models, the relevant discipline is philosophical, theoretical and cultural analysis, which makes it possible to assess critically the possibilities of their use in creativity, to consider the socio-cultural risks of their application and perception by the mass audience, and to recommend effective security measures.

1. The derivation of the "humanization" of technology from the mechanistic reduction of man

Research on the philosophical and socio-cultural aspects of the development and application of GenAI published in Russian scientific journals on the HAC list (categories K1, K2) and in the international Scopus database (quartiles Q1, Q2) follows three main narratives: the possibilities of using GenAI in professional and amateur creativity; the influence of GenAI on the socio-cultural perception of the mass audience; and the prospects of recognizing machine generation as creativity and GenAI as an author. Dialectical methodology allows two opposite approaches to be distinguished: the first uncritically relays developers' declarations in the spirit of the "philosophy of artificial intelligence"; the second proceeds from the priority of the specifics of socio-cultural creativity, recognizing technology of any complexity as purely instrumental.

Exaggerated estimates of the capabilities of multimodal large generative models are encouraged by the very fact of their classification as "artificial intelligence" technologies: the recognition of generated texts, sounds, images and video as "comparable" with the results of human activity. The criterion of "comparability" goes back to the classic Turing test and rests solely on the opinion of an expert evaluating the results of generation, an opinion that may be erroneous, illusory or interested [1]. As experimental studies have shown, the perception and evaluation of generated multimedia content depend largely, if not crucially, on what users understand "artificial intelligence" to be [2]. In this area, as in many others, audiences often "see" exactly what they have been convincingly "promised" in accordance with their socio-cultural values [3]. Consumer behavior, user transactions with GenAI services and the degree of satisfaction with them also depend largely on expectations of "superiority over human abilities" [4].

Criticism of "instrumentalism" by means of intersectional feminist discourse analysis of the metaphorical meanings of the term "artificial intelligence" unexpectedly leads to the "humanization" of computer technology. The approach to GenAI as a tool in teaching English is compared with the earlier colonialist attitude toward the indigenous peoples of Australia as objects. A conclusion is drawn by analogy with the acquisition of civil rights by women and colonial peoples: to achieve planetary justice, after overcoming racial and gender inequality, it may become necessary to treat GenAI as something more than a mere tool [5]. If the machine's ability to think and create is recognized, it is formally logical to conclude that man can be "surpassed" with the help of ever more powerful and advanced computing and network solutions.

This assumption, which presupposes a reciprocal reduction of the human being to a machine (an automated mechanism), generates inflated expectations of GenAI's capabilities ranging from utopia to dystopia. The new generation of "artificial intelligence" technologies provokes both excitement and apprehension. Theoretical discussion of the potential, limitations and "pitfalls" of introducing GenAI into creative activity is still scarce, while assessments of powerful sociotechnical models as "a monster causing deep horror and controversy" are common [6]. ChatGPT, a generator of texts alone in response to user requests, appears as a threat to displace humans from entire industries and to eliminate professions, for example in advertising [7]. GenAI sociolinguistic systems may turn into "agents of digital lords" – the owners of Internet services – becoming a third party in their communication with users and forming a new social structure of "algorithmic", "platform" capitalism of "universal surveillance" [8].

The conditional equation of man and machine generates a logical chain: machine generation of multimedia content without understanding its meaning is likened to human creativity, which is carried out without full knowledge of its principles and consequences. Complexity, together with the recognition of "partial superiority over man", makes it possible to perceive GenAI no longer as a tool but as an unpredictable and incomprehensible "assistant" capable of "interacting with society". From here, forecasts approaching the eschatological are formulated: it is supposed that GenAI can acquire the experience of independent existence, launch a cycle of endless self-improvement and self-generation, and transform matter into a digital society of swarm intelligence [9].

2. Modern formulation of the problem of creativity and legal personality of an electronic computer

Exaggerated forecasts of opportunities and, accordingly, of risks arising from the classification of multimodal large generative models as "artificial intelligence" can be conditionally divided into two levels: fundamental and applied. The related applied socio-cultural problems, whose solution requires a philosophical methodology, are these: should machine generation of multimedia content be recognized as creativity, and should "artificial intelligence" be granted legal personality? The second problem is not purely legal, since legal terminology must be correlated with philosophical concepts of society and the human being. Analysis of the opportunities and risks already realized in practice can show whether it is worth considering further the fundamental forecasts of the development and application of GenAI technologies, both utopian and dystopian. The classical philosophical methodology of dialectical materialism makes it possible to answer precisely and unambiguously whether a technology – a computer software and hardware complex for generating multimedia content – can be recognized as a creator and a subject of law.

In accordance with classical political economy, the theoretical and cultural approach makes it possible to identify the material factors of the work of the human creator: the instrument (tool), the material and the result. Just as the pen, the brush and the musical instrument are hand tools, GenAI too is a tool of human creativity, only stationary and automated. From this point of view, the processing of texts, sounds, images and video by automated computer-network software and hardware complexes does not differ fundamentally from the machine production of other industrial and household products. In this case the subject of automated creative activity and the author of the generated content is the human user: it is the user who decides whether to use the service, formulates the text request, and accepts or rejects the result.

Conversely, the mechanistic reduction of the human being to a machine and the "humanization" of electronic computing technology aid the successful marketing of technologies classified as "artificial intelligence" and the obtaining of government support and investment. However, the uncritical transfer of these postulates of the "philosophy of artificial intelligence" into the socio-cultural sphere leads to an incorrect formulation of the problem, verging on one that is obviously insoluble.

Interpreting the use of large generative text models as "cooperation" or "interaction" with a human leads research away from the question of the effectiveness of creating and editing texts for various purposes and toward "dialectical negotiations, joint writing with artificial intelligence using repeated prompts" [10]. The use of multimodal large generative models in the socio-cultural sphere then no longer appears as technology-mediated interaction among creative workers, audiences, creators and service providers. A new era of "creative relations" between artists and technology is predicted, with vast prospects for expanding creative potential through the "symbiosis of man and artificial intelligence" [11].

The uncritical transfer of the postulates of the "philosophy of artificial intelligence" into the socio-cultural sphere is criticized by lawyers who define GenAI as an object, not a subject, of law, including in the field of authorship. To shake their position – for the sake of recognizing at least temporary rights to the result for the owner rather than the user of the generative service – terminological allegories are used: for example, the euphemisms "generators of results comparable to the results of intellectual activity" and "a creator of the result who is not a human being" [12]. The creative nature and originality of the generated result [13] can then be interpreted not as a joint product of the user and the creators of the automated service, but as an argument for granting legal personality to "artificial intelligence" [14].

The question of "machine creativity" bears directly not only on copyright but also on liability for offenses committed with the use of the technology. Multimodal large generative models of "artificial intelligence" are technical devices, automated and largely autonomous from their creators and operators. Recognizing the ability of modern neural networks to create "objects that have hitherto been considered achievements of human creative activity" and classifying digital art as "machine creativity" [15] only masks responsibility, transferring to the tool the responsibility of the person who made the request and published generated content that violates the law.

3. Theoretical and cultural analysis of the creative content of computer generation

The philosophical and cultural analysis of creativity makes it possible to classify multimodal large generative models not as "artificial intelligence" but as automated systems for machine electronic processing of electronic (digital) documents – text, sound, images, video, and potentially other types (digitized smells, tastes, tactile sensations, etc.). Only the work of the human subjects who use, maintain, develop and administer generative services as objects can have a creative character. The basic structure of a multimodal large generative model can be described by analogy with the industrial machine mass production of standardized products, industrial or household. The use of a machine as an automated means of labor consists in processing the object of labor (raw materials, blanks) into a finished product; the machine itself must include an engine, a control transmission and working tools.

The "raw material" for processing, that is, the object of labor, consists of electronic documents – the results of digitizing human actions: writing, drawing, speech, singing, the performance of musical works and of roles in theater and film, participation in mass events, etc. These documents are formalized, indexed according to specified parameters and structured into electronic multimedia libraries – a kind of ordered "warehouse of raw materials" with a catalog. The working tools for processing the "raw material" are specialized programs – templates of operations that imitate human techniques for editing source documents (texts, images, audio and video recordings, etc.) – combined into structured collections.

The role of the automated control transmission (the "steering wheel") is played by search and control algorithms, which determine the "raw" source documents, the processing operations and their sequence that best match the user's request according to formal criteria. The "engine" of multimodal large generative models, as electronic computing software and hardware complexes, is electric generating capacity: hydroelectric plants, thermal power plants, nuclear power plants, etc.

Document generation (text, image, audio, video) at the user's request, up to real-time streaming, proceeds by the following technical means. From a text query describing the desired result, the source documents from the available library that formally match it best are determined automatically, followed by the tools (programs) from the available list and the sequence of their execution. Generation of the result is the automated processing of the source documents ("raw material") by this sequence of operations (software tools).
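The pipeline described above – query, formal selection of source documents, selection of tools, sequential processing – can be sketched schematically. This is an illustrative toy model of the author's conceptual scheme with invented names and a deliberately naive matching criterion, not an actual GenAI implementation:

```python
# Schematic sketch of the author's model of a generative service:
# a query selects "raw" source documents and a sequence of processing
# operations; the "generated" result is their automated transformation.
# All names are illustrative; real generative models differ in detail.

def match_score(query: str, document: str) -> int:
    """Formal relevance criterion: count of words shared by query and document."""
    return len(set(query.lower().split()) & set(document.lower().split()))

def select_sources(query, library, k=2):
    """Pick the k source documents that best match the query formally."""
    return sorted(library, key=lambda d: match_score(query, d), reverse=True)[:k]

# "Working tools": template operations imitating human editing techniques.
def fragment(texts):           # cut the sources into fragments
    return [word for text in texts for word in text.split()]

def compile_fragments(parts):  # recombine the fragments into a "collage"
    return " ".join(parts)

def generate(query, library, tools):
    """The control 'transmission': choose sources, then apply tools in order."""
    result = select_sources(query, library)
    for tool in tools:
        result = tool(result)
    return result

library = ["the cat sat on the mat",
           "a dog barked at the moon",
           "stars over the sea"]
print(generate("cat on a mat", library, [fragment, compile_fragments]))
```

The sketch makes the article's point concrete: every step is the execution of pre-built operations on pre-collected documents, with no understanding of meaning anywhere in the chain.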

Using the generative service, the user, without noticing it, himself selects the most suitable source documents and tool programs from those available and determines the sequence of operations whose execution creates the processed, "generated" document. Generating a result may require many documents; the operations may include fragmentation, compilation of the fragments, application of special effects to the "collage", and final "originalization" (protection against "anti-plagiarism" checks) – none of which changes the essence of the matter. Large multimodal generative models are self-service tools for users, and no more than that, just as a car richly equipped with electronic automation remains a tool for its driver. High expectations and the uncritical perception of GenAI as a potential "artistic superintelligence" are generated by a phenomenon of "mass culture": the modern mythology of "artificial intelligence".

The "instrumental" approach to GenAI appears the most correct and productive, confirmed by the already established practice of its application in the socio-cultural sphere. Creativity remains a human monopoly requiring imagination, talent and skill; digitalization and neural networks free the artist only from the simplest routine repetitions [16]. GenAI is applicable for stimulating and expanding the technical possibilities of human creativity on the basis of automated analysis of large-scale "big data" sets [2],[17]. "Artificial emotional intelligence" can be useful in statistical analytics, in the development of norms and standards, and in large-scale projects in the field of art and culture [18]. The use of multimodal large generative models, free from the illusions of "artificial intelligence", can help strengthen technological and cultural sovereignty, overcome consumerism, and make social creativity the highest value [19],[20].

4. Socio-cultural risks of multimodal large generative models

Based on the definition of multimodal large generative models as an automated technical tool of creativity, various "applied" socio-cultural risks associated with their development and application can be identified and defined. If these risks are realized, damage may be caused to government and corporate interests, to professional and amateur creative workers, and to the citizens and users who make up the mass audience of Internet services.

Government and corporate support is accompanied by the risk that the cost of the resources spent on developing and implementing GenAI will not match the results obtained: inefficient expenditure on expensive equipment, on the purchase of "big personal data" from operators, and on the regular long-term "training" (more precisely, configuration) of GenAI by many people. The vulnerabilities here are the uncritical acceptance of efficiency forecasts deliberately inflated by developers and justified only by the postulates of the "philosophy of artificial intelligence", as well as the insufficient involvement in expert evaluation and development of domestic philosophers, cultural and art theorists and culturologists who are well versed in modern digital technologies.

The risk for professional and amateur creative workers, especially beginners and students, is the illusory perception of multimodal large generative models not as a self-service tool but as an exemplar of "creative superintelligence". GenAI-generated compilations from "raw material" – fragments of works by unknown authors of no artistic value, transformed by automatic special effects and disguised by "anti-plagiarism protection" – can, thanks to their classification as "artificial intelligence", be perceived as high examples of genuinely modern "high-tech" art surpassing the classical masterpieces of culture, and as guidelines for improving one's own skill.

A similar risk of a significant lowering of artistic taste as a result of the digital automation of the "mass culture" industry exists for the wide audience – potentially for all citizens and users. Machine generation of secondary and tertiary "cultural consumer goods" on an industrial scale creates an illusion of genuine creativity that meets the fashion for "artificial intelligence", supported by mass distribution in Internet communications, automated aggressive marketing and the intrusive advertising of "recommendation services".

A whole group of socio-cultural risks for Russian government, commercial and non-profit organizations and for all citizens is associated with attackers exploiting the speed, high productivity and cheapness of convincing falsified multimedia content generated by GenAI. Convincing, high-quality fakes ("deepfakes") of speeches by reputable leaders, executives and experts, forgeries of works of art, of historical, political and other documents, and of citizens' opinions can be generated quickly and inexpensively for broadcast to a mass audience through the press and social networks.

Thus, the possibility of attackers using multimodal large generative models generates a whole group of socio-humanitarian risks associated with the highly productive automation of destructive manipulation of consciousness. GenAI, as a high-performance "machine of illusions and deception", can be used effectively by criminals as a technical means of automating "social engineering" and fraud in social networks and trading services, including fraud committed via mobile phones, Internet messengers, etc. In corporate and interstate conflicts and "hybrid wars", the potential of GenAI can be used by unfriendly countries and organizations for mass disinformation, the production of fake news, "cancel culture" campaigns, the falsification of history, the destruction of traditional values and the substitution of socio-cultural identity.

Conclusion

The result of the study is the conclusion that the "fundamental" risks – that multimodal large generative models classified as "artificial intelligence" will surpass human creativity in quality and render it superfluous – are illusory. When GenAI is understood as an automated, high-performance service for the industrial creation of unoriginal but convincing fake content, more real risks come into view. These include the risk that government and corporate expenditure on the development of multimodal large generative models will be inefficient and incommensurate with the usefulness of the results, as well as a decline in the skill of professional and amateur creative workers and a deterioration of the artistic taste of the mass audience. Criminals, unfriendly countries and organizations can use GenAI as a high-performance, fast-acting technical tool in such methods of violating information security as "social engineering", fraud, manipulation of consciousness, disinformation, fake news, "cancel culture", the destruction of traditional values and the substitution of socio-cultural identity.

To minimize the socio-cultural risks considered and to ensure the information security of Russia's national interests and citizens in the development and application of multimodal large generative models, the following measures can be recommended. First, involving in the development and expert evaluation of Russian GenAI models domestic philosophers, cultural theorists, art historians and others who are simultaneously specialists in socio-cultural issues, modern digital technologies and information security. Second, expanding and deepening scientific research into the socio-cultural aspects of the effectiveness and safety of multimodal large generative models. Third, using the results of such research in education and popularization – through the domestic mass press, social networks, digital platforms and social advertising – to update and raise the information security culture of Russian citizens.

References
1. Bylevskiy, P.G. (2023). Culturological deconstruction of socio-cultural threats ChatGPT information security of Russian citizens. Philosophy and Culture, 8, 46-56.
2. Akin, N., Bergdahl, J., Cvetkovic, A., et al. (2023). Artificial intelligence in fine arts: A systematic review of empirical research. Computers in Human Behavior: Artificial Humans, 1(2), 100004. doi:10.1016/j.chbah.2023.100004
3. Mirin, I.G. (2023). Stereotypes of perception of artificial intelligence discodiffusion works and their differences from human visual creativity. Design. Materials. Technology, 2(70), 59-66. doi:10.46418/1990-8997_2023_2(70)_59_66
4. Hermann, E., & Puntoni, S. (2024). Artificial intelligence and consumer behavior: From predictive to generative AI. Journal of Business Research, 180, 114720. doi:10.1016/j.jbusres.2024.114720
5. McKnight, L., & Shipp, C. (2024). “Just a tool”? Troubling language and power in generative AI writing. English Teaching: Practice & Critique, 23(1), 23-35. doi:10.1108/ETPC-08-2023-0092
6. Edgell, R.A. (2024). A monstrous matter: The three faces of artificial creativity. Journal of Creativity, 34(4), 100075. doi:10.1016/j.yjoc.2024.100075
7. AlRabiah, S., Angell, R., Gao, L., et al. (2024). To ChatGPT, or not to ChatGPT: Navigating the paradoxes of generative AI in the advertising industry. Business Horizons. In Press, Journal Pre-proof. doi:10.1016/j.bushor.2024.05.002
8. Van Dyke, R., & Harrison, S. (2023). Beyond the break, theory on a dramatic scale. Computers and Composition, 69, 102795. doi:10.1016/j.compcom.2023.102795
9. Shatkin, M.A. (2023). Socio-philosophical aspects of the development of generative artificial intelligence. Proceedings of the Saratov University. A new series. Series: Philosophy. Psychology. Pedagogy, 23(4), 414-418. doi:10.18500/1819-7671-2023-23-4-414-418
10. Li, R. (2024). A “Dance of storytelling”: Dissonances between substance and style in collaborative storytelling with AI. Computers and Composition, 71, 102825. doi:10.1016/j.compcom.2024.102825
11. Druzhinina, A.A. (2023). The artist and the neural network: a symbiosis of the future? Decorative art and the subject-spatial environment. Bulletin of the Russian State Pedagogical University named after S.G. Stroganov, 3-3, 39-64. doi:10.37485/1997-4663_2023_3_3_39_64
12. Lutkova, O.V. (2023). Legal regulation of the participation of artificial intelligence and other generators in the process of creating results comparable to works. Bulletin of the O.E. Kutafin University (MGUA), 9(109), 108-117. doi:10.17803/2311-5998.2023.109.9.108-117
13. Fenwick, M., & Jurcys, P. (2023). Originality and the future of copyright in an age of generative AI. Computer Law & Security Review, 51, 105892. doi:10.1016/j.clsr.2023.105892
14. Kukhno, M.O. (2023). Artificial intelligence – a new subject of copyright: the near future or fiction? Journal of the Court of Intellectual Rights, 3(41), 72-86. doi:10.58741/23134852_2023_3_8
15. Belikova, E.K., & Popov, E.A. (2023). Modern problems of the correlation of natural and artificial intelligence in the paradigm of culture. Socio-humanitarian knowledge, 11, 9-13.
16. Safronov, N.S. (2023). Digital evolution in the international cultural environment. International life, 12, 128-131.
17. Bobkov, S.P., Galkin, D.V., & Konovalova, K.V. (2021). On the problem of automation of creativity in the field of art and design: instrumental and generative approaches. Bulletin of Tomsk State University. Cultural studies and art history, 44, 14-24. doi:10.17223/22220836/44/2
18. Ilyin, R.V., & Sandakova, L.G. (2022). Digitalization of the creative process: problems and prospects. Bulletin of the Buryat State University, 2, 64-71. doi:10.18101/1994-0866-2022-2-64-71
19. Karpukhin, O.I., & Makarevich, E.F. (2023). Changing the paradigm of social development in Russia – from a mass consumer society to a society of social creativity in the conditions of transformation of mass communications. Socio-humanitarian knowledge, 6, 11-20. doi:10.34823/SGZ.2023.6.52019
20. Karpukhin, O.I., & Makarevich, E.F. (2023). Changing the paradigm of social development in Russia – from a mass consumer society to a society of social creativity in the conditions of transformation of mass communications. Socio-humanitarian knowledge, 7, 27-36. doi:10.34823/SGZ.2023.7.52019

Peer Review


In the journal Culture and Art the author presents the article "Socio-cultural risks of multimodal large generative models of artificial intelligence (GenAI)", a study of the potential impact of modern technologies on the socio-cultural environment. The author proceeds from the premise that the emergence, development and mass use of new computer-network technologies in various industries and spheres entails accompanying risks, which should be assessed by relevant specialists; therefore, in studying the results and prospects of applying GenAI in the socio-cultural sphere, it is important to avoid uncritically relaying developers' declarations, which are sometimes marketing in nature. The relevance of the study stems from the importance of multimodal large generative models, recognized as one of the priorities of state programs for the development of "artificial intelligence" technologies. The purpose of the study is to evaluate the possibility of using multimodal large generative models in creativity. To achieve this goal the author sets the following tasks: to consider the socio-cultural risks of their application and perception by the mass audience, and to develop recommendations for effective security measures. As methodological justification the author applies an integrated approach including philosophical, theoretical and cultural analysis. The theoretical basis comprises the works of such researchers as O.I. Karpukhin, P.G. Bylevsky, I.G. Mirin and M.A. Shatkin.
Based on an analysis of the scholarly treatment of the problem, the author identifies three main narratives in research on the philosophical and socio-cultural aspects of GenAI development and application: the possibilities of using GenAI in professional and amateur creativity; the influence of GenAI on the socio-cultural perception of the mass audience; and the prospects of recognizing machine generation as creativity and GenAI as an author. The author distinguishes two opposing approaches: the first uncritically relays developers' declarations in the spirit of the "philosophy of artificial intelligence"; the second proceeds from the priority of the specifics of socio-cultural creativity, regarding technology of any complexity as merely instrumental. The author does not share the concerns of some researchers about the possibility of human creativity being replaced by the production of cultural content using artificial intelligence technologies, nor about the humanization of such technologies. The author divides the exaggerated forecasts of opportunities, and accordingly of risks, that arise from classifying multimodal large generative models as "artificial intelligence" into two levels: fundamental and applied. The related applied socio-cultural problems whose solution requires a philosophical methodology are, in the author's view, the following: should machine generation of multimedia content be recognized as creativity, and should "artificial intelligence" be granted legal personality? In the author's view, however, recognizing modern neural networks as capable of creating objects hitherto regarded as achievements of human creative activity, and classifying digital art as machine creativity, merely masks responsibility by transferring it to the tool from the person who made the request and published generated content that violates the law.
Philosophical and cultural analysis of creativity allows the author to classify multimodal large generative models not as "artificial intelligence" but as automated systems for machine electronic computational processing of information of various types. Creativity remains a human monopoly, requiring imagination, talent and skill; digitalization and neural networks free the artist only from repetition of the simplest routine operations. GenAI is applicable for stimulating and expanding the technical capabilities of human creativity on the basis of automated analysis of large-scale "big data" sets. Proceeding from the definition of multimodal large generative models as an automated technical tool of creativity, the author identifies and characterizes various "applied" socio-cultural risks associated with their development and application. To minimize these socio-cultural risks and to ensure the information security of Russia's national interests and citizens in the development and application of multimodal large generative models, the author recommends the following measures. First, involving in the development and expert review of Russian GenAI models domestic philosophers, cultural theorists and art historians who are also specialists in socio-cultural issues, modern digital technologies and information security. Second, expanding and deepening scientific research on the socio-cultural aspects of the effectiveness and safety of multimodal large generative models. Third, applying the results of such research in education and popularization through the domestic mass press, social networks, digital platforms and social advertising, so as to foster and strengthen the information security culture of Russian citizens. The article closes with a conclusion that summarizes the key provisions of the presented material.
The author has addressed issues that are relevant and interesting for modern socio-humanitarian knowledge, choosing a topic whose treatment in scholarly discourse will entail certain changes in the established approaches to, and directions of, analysis of the problem addressed in the article. The results obtained allow us to assert that a comprehensive study of the impact of technological progress on socio-cultural transformations is of undoubted practical interest for cultural studies and can serve as a basis for further research. The material is clearly and logically structured, which aids its comprehension, as does an appropriate choice of methodological base. The bibliography consists of 20 sources, which seems sufficient for generalizing and analyzing the scholarly discourse. The author fulfilled the stated goal, obtained scientific results that made it possible to summarize the material, and demonstrated deep knowledge of the issues studied. It should be noted that the article may be of interest to readers and deserves publication in a reputable scholarly journal.