Software for innovative information technologies
Reference:
Lizneva, Y.S., Kostyukovich, A.E., Kokoreva, E.V. (2024). Analysis of the possibilities of determining location in a Wi-Fi network using neural network algorithms. Software systems and computational methods, 4, 1–12. https://doi.org/10.7256/2454-0714.2024.4.72107
Abstract:
Indoor positioning in a Wi-Fi network belongs to a class of problems in which the dependence of output characteristics on input variables is shaped by many parameters and external factors. When solving such problems, it must be taken into account that not only the static coordinates of an object are of interest, but also the prediction of its movement vector. When the location of an object is determined solely from the power levels of signals received from several Wi-Fi access points, signal attenuation models that account for indoor radio propagation conditions are difficult to apply, since they require reliable information about the materials of walls, floors and ceilings, the presence of fixed and mobile shading objects, and so on. Because the electromagnetic environment inside a room varies with many factors, such models have to be constantly readjusted. Since finding patterns in large amounts of data requires non-standard algorithms, artificial neural networks can be used to solve the positioning problem. It is important to choose a neural network architecture that can account for changes in the signal strength received by a mobile device from Wi-Fi access points. Before the neural network is trained, the statistical data are preprocessed: for example, anomalous cases, in which the device detects signals from fewer than three access points at a measuring point, are excluded from the machine learning dataset. Analysis of the statistical data showed that a uniform distance between measuring points causes the neural network to determine the object's location incorrectly. The paper shows that, to increase positioning accuracy in a complex radio environment, optimal varying distances between measuring points must be determined when compiling radio maps. Experimental studies carried out with the proposed approach to optimizing the distances between measuring points demonstrate that the accuracy of location determination reaches 100% at the vast majority of measuring points.
Keywords:
machine learning, hidden layer, signal strength, neural network, RSSI, measuring point, positioning, Wi-Fi, training sample, training set
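A minimal sketch of the fingerprinting approach the abstract describes, under stated assumptions: the radio map below is synthetic, measurements hearing fewer than three access points are dropped (the paper's preprocessing rule), and a small scikit-learn multilayer perceptron maps RSSI vectors to measuring-point labels. This is an illustration, not the authors' implementation.

```python
# Sketch: RSSI fingerprinting with an MLP (synthetic data, illustrative layout).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_APS, N_POINTS, SAMPLES = 5, 20, 40
SENTINEL = -100.0  # dBm value assigned to access points that were not heard

X, y = [], []
for point in range(N_POINTS):
    base = rng.uniform(-90, -40, N_APS)        # mean RSSI at this measuring point
    for _ in range(SAMPLES):
        rssi = base + rng.normal(0, 3, N_APS)  # measurement noise
        heard = rssi > -85                     # APs actually detected
        if heard.sum() < 3:                    # preprocessing rule from the paper:
            continue                           # drop measurements hearing < 3 APs
        rssi[~heard] = SENTINEL
        X.append(rssi)
        y.append(point)

X, y = np.array(X), np.array(y)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One hidden layer; the architecture and sizes are illustrative assumptions.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"accuracy on held-out measurements: {clf.score(X_te, y_te):.2f}")
```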
Systems analysis, search, analysis and information filtering
Reference:
Dagaev, A.E., Popov, D.I. (2024). Comparison of automatic summarization of texts in Russian. Software systems and computational methods, 4, 13–22. https://doi.org/10.7256/2454-0714.2024.4.69474
Abstract:
The subject of the research in this article is the summarization of texts in Russian using artificial intelligence models. In particular, the authors compare the popular models GigaChat, YaGPT2, ChatGPT-3.5, ChatGPT-4, Bard, Bing AI and YouChat and conduct a comparative study of their performance on Russian texts. Russian-language datasets such as Gazeta, XL-Sum and WikiLingua serve as source material for summarization, and additional English-language datasets, CNN DailyMail and XSum, are used to compare summarization effectiveness. The following metrics are used to assess summary quality: ROUGE, BLEU, BERTScore, METEOR and BLEURT. The research method is a comparative analysis of the data obtained during automatic summarization by the artificial intelligence models. The scientific novelty of the research lies in the comparative analysis of the quality of automatic summarization of texts in Russian and English by various neural network models for natural language processing. The authors focus on the recent models GigaChat, YaGPT2, ChatGPT-3.5, ChatGPT-4, Bard, Bing AI and YouChat, examining and analyzing their effectiveness in the summarization task. The results for Russian show that YouChat achieves the highest scores across the set of metrics, underscoring the model's effectiveness in processing and generating text with more accurate reproduction of key content elements. In contrast, Bard showed the worst results, proving the least capable of generating coherent and relevant text. The data obtained in the comparison will contribute to a deeper understanding of the models under consideration, helping practitioners choose an artificial intelligence model for text summarization tasks and providing a basis for future work.
Keywords:
text compression, YouChat, Bing AI, Bard, ChatGPT-4, ChatGPT-3, YaGPT2, GigaChat, text summarization, natural language processing
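To make the evaluation setup concrete, here is a small sketch of scoring one candidate summary against a reference with two of the metrics named above, using the rouge-score and NLTK packages. The package choice and the toy strings are assumptions; the paper's own evaluation pipeline is not reproduced here.

```python
# Sketch: scoring a candidate summary against a reference (illustrative packages).
from rouge_score import rouge_scorer                    # pip install rouge-score
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the cabinet approved the draft budget for next year"
candidate = "the cabinet approved next year's draft budget"

# ROUGE-1 and ROUGE-L F-measures.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)
print({name: round(s.fmeasure, 3) for name, s in scores.items()})

# Sentence-level BLEU with smoothing (short texts score 0 without it).
bleu = sentence_bleu([reference.split()], candidate.split(),
                     smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {bleu:.3f}")
```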
Models and methods of information security management
Reference:
Bulgakov, V.D., Gvozdevsky, I.N. (2024). Proof of Performance Consensus Model and Algorithm. Software systems and computational methods, 4, 23–48. https://doi.org/10.7256/2454-0714.2024.4.71119
EDN: NAGMFW
Abstract:
The article examines the working principle of the Proof of Performance (PoP) model, based on a consensus algorithm that supports horizontal sharding functions. The PoP model introduces changes to the traditional block structure used in Proof of Stake algorithms and Tendermint-based networks. Horizontal sharding allows transactions to be distributed among multiple nodes (shards), significantly increasing the network's throughput. The main goal of the study is to explore ways to enhance the efficiency and scalability of blockchain networks through dynamic transaction distribution and adaptive node management. An important aspect is the definition of parameters and adjustable characteristics of nodes, such as performance and reliability, to ensure even and fair load distribution within the network. This provides the system with the ability to adapt to changing load conditions. The study employs analytical and formal methods to describe the block structure, transaction distribution mechanism, and the system of penalties and rewards for shards. The research represents an innovative approach to managing blockchain networks, focusing on node performance. The PoP model with horizontal sharding provides higher throughput and scalability compared to traditional consensus algorithms. A system of dynamic load distribution and adaptive weight adjustment of nodes based on their performance is proposed, which contributes to the improvement of the network's efficiency and reliability. The results of the study demonstrate that the Proof of Performance model significantly increases transaction processing speed and overall blockchain network performance. Application examples confirm the model's effectiveness in various types of networks, such as DeFi platforms, supply chain management systems, and IoT networks. The PoP model encourages nodes to maintain high performance, ensuring fair load distribution and enhancing the overall network resilience.
Keywords:
Blockchain, Load distribution, Performance, Horizontal sharding, Proof of Stake, Proof of Performance, Block, Shard, Consensus algorithm, Consensus model
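As a toy illustration of the load-distribution idea, the sketch below assigns incoming transactions to shards in proportion to a per-node performance weight. The weight formula, fields, and all numbers are assumptions made for illustration, not the authors' PoP specification.

```python
# Toy sketch: performance-weighted transaction distribution across shards.
# Weights, fields, and the assignment rule are illustrative assumptions.
import random
from dataclasses import dataclass, field

@dataclass
class Shard:
    name: str
    throughput: float          # measured tx/s (performance)
    reliability: float         # 0..1 (uptime / missed-block history)
    assigned: list = field(default_factory=list)

    @property
    def weight(self) -> float:
        return self.throughput * self.reliability

shards = [Shard("s1", 900, 0.99), Shard("s2", 400, 0.95), Shard("s3", 700, 0.80)]

random.seed(0)
for tx in range(10_000):
    # Sample a shard with probability proportional to its weight.
    pick = random.choices(shards, weights=[s.weight for s in shards])[0]
    pick.assigned.append(tx)

for s in shards:
    print(s.name, f"weight={s.weight:.0f}", f"share={len(s.assigned) / 10_000:.2%}")
```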
Theory, software and languages of concurrent computing
Reference:
Dimitrichenko, D.P. (2024). Analysis of the appropriate behavior of various types of automata in the conditions of the placement game. Software systems and computational methods, 4, 49–65. https://doi.org/10.7256/2454-0714.2024.4.72488
Abstract:
The object of research in this work is homogeneous collectives of automata exhibiting expedient behavior. The subject of the study is a comparison of different designs of such automata under the conditions of the placement game. The aim of the study is to identify the best (or comparable) designs in order to reduce the time and computational costs of more complex machine learning models based on the reinforcement learning principle. In the collectives under consideration, automata perform actions in a given environment with varying degrees of effectiveness. Each automaton, in accordance with its design, responds to an input signal with another action. The effectiveness of an automaton is defined as the sum of the positive signals (rewards) and negative signals (penalties) it receives over the period under consideration. This characteristic depends both on the declared design of the automaton and on the depth of its memory. The task is to determine the simplest automaton designs that reach optimal efficiency in a given environment in the shortest possible way. The formalization of the properties of the environment and of the actions of the automata, as well as the processing of the results, is carried out using the apparatus of game theory: the effectiveness values of the automata are represented as the cumulative winnings and losses of the players. As a result of the research, automaton designs are presented that provide a given efficiency of functioning with minimal memory depth (the least complex design). The result makes it possible to trace the influence of the inertial qualities of automata, realized in the corresponding designs, on the efficiency of functioning in a given environment formalized as a placement game. The automaton with linear tactics and the Krylov automaton form two extreme implementations of an automaton strategy for approaching the optimum: the first owing to the high speed of changing actions, the second owing to a long stay in states close to optimal. The field of application of the results is the further investigation of more complex dynamic environments using the simplest automaton designs, since synchronous collectives of automata are difficult to parallelize in computational implementations, which leads to a significant increase in time and computational costs as the structure of the dynamic environments becomes more complex or the dimension of the optimization tasks grows.
Keywords:
optimal strategy, game theory, memory depth, automaton, reinforcement learning, penalties, rewards, expedient behavior, homogeneous collective of automata, placement game
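For concreteness, a minimal simulation of a single automaton with linear tactics in a stationary random environment: with memory depth n per action, a reward drives the automaton deeper into the current action's states and a penalty pushes it toward switching. The environment's reward probabilities and the depth are invented for the demo; the paper's placement-game setting is richer.

```python
# Sketch: automaton with linear tactics in a two-action stationary environment.
# Reward probabilities and memory depth are illustrative assumptions.
import random

random.seed(1)
REWARD_P = [0.4, 0.8]      # probability of reward for action 0 and action 1
DEPTH = 4                  # memory depth n: states 0..DEPTH-1 per action
STEPS = 100_000

action, state = 0, 0       # start in the shallowest state of action 0
wins = 0

for _ in range(STEPS):
    rewarded = random.random() < REWARD_P[action]
    wins += rewarded
    if rewarded:
        state = min(state + 1, DEPTH - 1)     # reward: move deeper (reinforce)
    elif state > 0:
        state -= 1                            # penalty: retreat one state
    else:
        action = 1 - action                   # penalty in shallowest state: switch

print(f"fraction of rewards: {wins / STEPS:.3f}  (best single action: {max(REWARD_P)})")
```

Increasing DEPTH makes the automaton more inertial: it switches actions more rarely, which is exactly the trade-off between reaction speed and staying near the optimum that the paper examines.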
Computer graphics, image processing and pattern recognition
Reference:
Zakharov, A.A. (2024). A method for detecting objects in images based on neural networks on graphs and a small number of training examples. Software systems and computational methods, 4, 66–75. https://doi.org/10.7256/2454-0714.2024.4.72558
Abstract:
In the presented work, the object of research is computer vision systems. The subject of the study is a method for detecting objects in images based on neural networks on graphs and a small number of training examples. Aspects such as the use of a structural representation of the scene to improve detection accuracy are discussed in detail. It is proposed to combine information about the scene structure, obtained with graph neural networks, with few-shot learning to increase the accuracy of object detection. Relationships between classes are established using external semantic links, for which a knowledge graph is built in advance. The method consists of two stages. At the first stage, objects are detected using few-shot learning. At the second stage, detection accuracy is improved using a graph neural network. The developed method is based on convolution derived from spectral graph theory. Each vertex represents a category in the knowledge graph, and the weight of a graph edge is calculated from a conditional probability. The convolution combines information from neighboring vertices and edges to update the vertex values. The scientific novelty of the developed method lies in the joint use of graph convolutional networks and few-shot learning to increase the accuracy of object detection. A special contribution of the author is the use of a convolutional network over a knowledge graph to improve the results of an object detection method that uses a small number of training examples. The method was evaluated on standard computer vision image sets. Using the PASCAL VOC and MS COCO datasets, it is demonstrated that the proposed method increases object detection accuracy by analyzing structural relationships: the average detection accuracy of the developed method is 1-5% higher than that of few-shot learning without a structural representation.
Keywords:
artificial intelligence, pattern recognition, graph, limited annotation, deep learning, small data set, convolutional networks, object detection, computer vision, structural representation of scenes
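A bare-bones sketch of the graph-convolution update the abstract describes: edge weights form an adjacency matrix (here an invented stand-in for conditional probabilities over a few categories), which is symmetrically normalized and used to mix neighboring vertex features, as in standard spectral GCN layers. All sizes and values are illustrative.

```python
# Sketch: one GCN-style propagation step over a small knowledge graph.
# The adjacency values (stand-ins for conditional probabilities) are invented.
import numpy as np

rng = np.random.default_rng(0)

# 4 categories; A[i, j] plays the role of P(class j | class i) as an edge weight.
A = np.array([[0.0, 0.6, 0.1, 0.0],
              [0.6, 0.0, 0.3, 0.1],
              [0.1, 0.3, 0.0, 0.7],
              [0.0, 0.1, 0.7, 0.0]])

A_hat = A + np.eye(4)                       # add self-loops
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization

H = rng.normal(size=(4, 8))                 # per-category feature vectors
W = rng.normal(size=(8, 8))                 # learnable layer weights (random here)

H_next = np.maximum(A_norm @ H @ W, 0.0)    # propagate neighbors + ReLU
print(H_next.shape)                         # (4, 8): updated category embeddings
```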
Data encryption and data protection
Reference:
Knyazev, M.A., Shabrova, A.S., Kryuchkov, A.A. (2024). An approach to choosing protection mechanisms for personal Internet of Things devices based on a mathematical model with two criteria. Software systems and computational methods, 4, 76–92. https://doi.org/10.7256/2454-0714.2024.4.72839
Abstract:
Existing methods to protect Personal Internet of Things (PIoT) devices require continuous updates to counter new threats and vulnerabilities. A major task is developing a universal, efficient approach that accounts for the resource constraints of consumer electronics manufacturers. This study introduces a mathematical model applying criteria for implementation complexity and versatility of protection mechanisms to rank them. The goal is to enhance portable smart device security while controlling costs and complying with information security requirements. The object of the study is the process of securing PIoT devices under current regulatory and technical constraints, as well as limited resources. Its subject is a set of security mechanisms chosen and ranked using a two-criteria model. Within this research, a detailed analysis of recommendations in international and domestic standards was conducted, and feasibility of their adoption was examined through effective resource allocation guided by the model. This research’s novelty lies in an original method for selecting PIoT security mechanisms based on complexity and versatility. By accounting for emerging threats, regulatory mandates, and development expenses, the model provides an integrated strategy that minimizes operational overhead. It delivers comprehensive coverage of security needs despite manufacturers' resource constraints. Findings confirm that adopting this model is a promising, cost-effective way to address PIoT security challenges. By prioritizing complex yet versatile measures, developers can align with legislative requirements and evolving risks. In conclusion, the proposed approach addresses both emergent vulnerabilities and stringent legal obligations, ensuring efficient use of limited resources. By incorporating these criteria, developers can systematically balance complexity, versatility, and cost, achieving stronger protection for consumer IoT products. By emphasizing these factors, it ensures product compliance with evolving standards. The study shows that a two-criteria mathematical model can steer manufacturers toward robust, practical security solutions for Personal Internet of Things devices.
Keywords:
IoT security, mathematical model, portable smart devices, Internet, Device protection, Smart device protection, security methodology, PIoT, PIoT devices, personal Internet of Things
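A small sketch of ranking protection mechanisms by the two criteria the abstract names: each mechanism gets an implementation-complexity score and a versatility score, and a simple weighted aggregate orders them. The mechanisms, scores, and weights below are invented for illustration; the paper's actual scoring model may differ.

```python
# Sketch: two-criteria ranking of PIoT protection mechanisms.
# Names, scores, and weights are illustrative assumptions.

mechanisms = {
    # name: (complexity 1..10, lower is cheaper; versatility 1..10, higher is broader)
    "secure boot":          (7, 8),
    "TLS for telemetry":    (4, 9),
    "signed OTA updates":   (6, 8),
    "hardware kill switch": (3, 2),
}

W_VERSATILITY, W_COMPLEXITY = 0.6, 0.4

def score(complexity: int, versatility: int) -> float:
    # Reward versatility, penalize complexity; weights reflect manufacturer priorities.
    return W_VERSATILITY * versatility - W_COMPLEXITY * complexity

ranked = sorted(mechanisms.items(), key=lambda kv: score(*kv[1]), reverse=True)
for name, (c, v) in ranked:
    print(f"{name:22s} complexity={c} versatility={v} score={score(c, v):+.1f}")
```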
Models and methods of information security management
Reference:
Khlestkin, A.Y., Raikov, A.V., Kazantsev, A.A., Emelin, D.P., Larin, D.V. (2024). The role of operating systems and shells in cloud computing: analysis of OS and shells used in cloud platforms and their impact on cloud infrastructure. Software systems and computational methods, 4, 93–108. https://doi.org/10.7256/2454-0714.2024.4.70626
EDN: KYNBQH
Abstract:
The main focus of the article is on operating systems, cloud computing and command shells, which have been actively developing for several decades and are already part of the life of both ordinary users and computing professionals. These objects are considered both as separate components of information technology and in their interrelation, together with the results of that interrelation. In cloud servers, operating systems play a managerial role: they manage the resources of the physical servers. In this case the operating system determines several parameters, including how memory and storage are used and allocated among the various virtual machines. A command shell, in turn, is an application that provides the user with a command-line interface in which commands can be entered individually or run as scripts consisting of a list of commands. The research methods include theoretical (classification, comparative analysis, literature analysis) and practical (experiment, modeling) approaches, which allows a comprehensive analysis of the functioning of operating systems and command shells in cloud computing. The scientific novelty of the research is the creation of scripts that perform particular cloud computing tasks on a specific operating system using the command shells described above. The authors provide theoretical data on operating systems and command shells, along with example scripts for the Bash and Bourne Shell (sh) command shells on Linux and for the Command Prompt (cmd.exe) and Windows PowerShell command shells on Microsoft Windows. Based on the analysis of these scripts, a table was compiled showing the impact of the operating system and command shell on cloud computing; analysis of this table allowed the authors to characterize the objects of research and draw the corresponding conclusions.
Keywords:
Hyperfine, Bourne Shell, Bash, Microsoft Windows, Linux, cloud infrastructure, cloud platforms, cloud computing, command shells, operating systems
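Since the authors compare scripts across shells, here is a small illustration in the spirit of that comparison: timing the same one-liner under bash and sh from Python's subprocess module. The command and the naive timing approach are assumptions; the paper itself used shell scripts and, per the keywords, the hyperfine benchmarking tool.

```python
# Sketch: timing the same command under different shells (illustrative approach;
# hyperfine gives statistically sounder numbers). Linux/macOS assumed.
import subprocess
import time

COMMAND = "for i in $(seq 1 1000); do echo $i; done > /dev/null"

for shell in ("bash", "sh"):
    start = time.perf_counter()
    subprocess.run([shell, "-c", COMMAND], check=True)
    elapsed = time.perf_counter() - start
    print(f"{shell}: {elapsed:.3f} s")
```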
Operating systems
Reference:
Smirnov, .A., Podolskiy, E.A., Cherenkov, A.V., Gosudarev, I.B. (2024). A comparative analysis of the performance of JavaScript code execution environments: Node.js, Deno and Bun. Software systems and computational methods, 4, 109–123. https://doi.org/10.7256/2454-0714.2024.4.72206
Abstract:
The subject of the study was the performance of JavaScript program execution in the modern runtime environments Node.js, Deno and Bun. These platforms are used to develop server applications and differ significantly in architecture, functionality and performance. Node.js is the most mature and widespread solution and is actively used in most modern web applications. Deno is a newer environment developed by the creator of Node.js, offering improved security, TypeScript support, and other innovations. Bun is a modern, high-performance alternative focused on the speed of server-side applications. The purpose of the study is to identify the performance differences between the major modern runtime environments (Node.js, Deno and Bun) for their further use in web application development. The study used a computer experiment with Docker containers and process automation with Ansible: the execution time of different scenarios was measured in each of the runtime environments. The scientific novelty of this study lies in the fact that, for the first time, a holistic and valid methodology for measuring and comparing JavaScript code performance in modern runtime environments has been proposed, which will allow researchers to build on the proposed approach in further experiments and extend it to new runtime environments. The results show that Bun delivers the best performance in synchronous computations (sorting, JSON processing) but lags behind Node.js and Deno in prime number checking. Deno showed high performance in asynchronous operations thanks to the use of Rust and the Tokio library. Node.js, despite lower results in synchronous tasks, showed stable performance across the tests and remains a solid choice for large projects. In the course of the study, recommendations were developed for selecting the appropriate server-side JavaScript runtime for various tasks.
Keywords:
Docker, Server, Web, Backend, Computer experiment, Performance, Bun, Deno, Node.js, JavaScript
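A minimal harness in the spirit of the experiment: run the same JavaScript file under each runtime and record wall-clock time. The invocation commands are the runtimes' standard CLIs; the script name, repeat count, and the bare-bones timing are assumptions, far simpler than the authors' Docker/Ansible setup.

```python
# Sketch: wall-clock comparison of JS runtimes on one benchmark script.
# bench.js is a hypothetical benchmark file; the runtimes must be installed.
import statistics
import subprocess
import time

RUNTIMES = {
    "node": ["node", "bench.js"],
    "deno": ["deno", "run", "bench.js"],
    "bun":  ["bun", "run", "bench.js"],
}
REPEATS = 5

for name, cmd in RUNTIMES.items():
    samples = []
    for _ in range(REPEATS):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    print(f"{name}: median {statistics.median(samples):.3f} s over {REPEATS} runs")
```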
Educational software systems
Reference:
Kopysheva, T.N., Mitrofanova, T.V., Smirnova, T.N., Khristoforova, A.V. (2024). The use of artificial intelligence systems for data processing in the educational process. Software systems and computational methods, 4, 124–139. https://doi.org/10.7256/2454-0714.2024.4.71438
EDN: KLWJBM
Abstract:
In the Russian Federation, much attention is paid to the development of end-to-end digital technologies, including artificial intelligence (AI) technologies. Decree of the President of the Russian Federation No. 490 of October 10, 2019 "On the development of artificial intelligence in the Russian Federation" approved the National Strategy for the Development of Artificial Intelligence for the period up to 2030. In accordance with subparagraph (c) of paragraph 51.5 of the Strategy, one of the directions for improving the level of AI competencies and citizens' awareness of AI is developing the skills of graduates of higher education institutions in using AI technologies by including AI modules in every educational program. The object of the study is the use of AI systems in laboratory and practical classes, as well as in students' independent work, in the disciplines "Artificial intelligence systems", "Fundamentals of artificial intelligence" and similar courses. The subject of the research is methods for solving function approximation and data classification problems with the specialized platforms Loginom Community and Neural Network Wizard. Methods of training a neural network and of evaluating the quality of its training samples are considered in detail. The research methodology is based on a combination of theoretical and practical approaches using methods of analysis, comparison, generalization, synthesis, classification, and modeling. Teaching materials have been developed through which students gain theoretical knowledge in function approximation and data classification, become familiar with the basic concepts and methods of artificial intelligence and their application in various fields of data processing, and consolidate practical skills in working with neural networks and with the specialized platforms and tools Loginom Community and Neural Network Wizard. The results of the assessment showed that students in fields of study not related to artificial intelligence developed the skills of using AI technologies at a sufficient level.
Keywords:
regression, training sample, Neural Network Wizard, Loginom Community, model training, data processing, data classification, function approximation, neural networks, artificial intelligence
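A compact sketch of the function-approximation exercise the materials cover, fitting a small neural network to a noisy sine curve. Here scikit-learn stands in for the Loginom Community and Neural Network Wizard platforms, which are GUI tools; the data and architecture are illustrative.

```python
# Sketch: approximating y = sin(x) with a small MLP (illustrative stand-in
# for the GUI platforms used in the course).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(400, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.05, 400)   # noisy samples of sin(x)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(X, y)

X_test = np.linspace(-np.pi, np.pi, 5).reshape(-1, 1)
for x, pred in zip(X_test.ravel(), model.predict(X_test)):
    print(f"x={x:+.2f}  sin(x)={np.sin(x):+.3f}  model={pred:+.3f}")
```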
Methods, languages and forms of human-computer interaction
Reference:
Ishankhonov, A.Y., Pshychenko, D.V., Mozharovskii, E.A., Aluev, A.S. (2024). The Role of LLM in Next-Generation Integrated Development Environments. Software systems and computational methods, 4, 140–150. https://doi.org/10.7256/2454-0714.2024.4.72022
Abstract:
The article examines the role of Large Language Models (LLMs) in new-generation integrated development environments (IDEs). Tools such as GitHub Copilot, IntelliCode and Alice Code Assistant are explored in the context of their use in programming. The authors examine how LLMs enable the automation of key development tasks, including code autocompletion, error detection, refactoring, and code generation, which increases development efficiency and improves code quality. Special emphasis is placed on how LLMs affect developers' cognitive processes, such as problem-solving abilities, creativity, and professional skills. The study reviews existing integrated development environments that utilize large language models and evaluates LLM functionality for code autocompletion, fragment generation, and error detection and correction. Comparative methods were applied to assess the effectiveness of LLMs against traditional development tools. Special attention was paid to analyzing the cognitive load caused by the use of LLMs and assessing their impact on the creative process. The novelty of the research lies in the comprehensive analysis of LLM application in modern IDEs and in revealing their potential for increasing developers' productivity and improving the quality of program code. It is concluded that integrating LLMs into IDEs not only speeds up code creation but also considerably improves its quality through intelligent support and automation of routine tasks. However, while the benefits of integrating LLMs into IDEs are clear, limitations related to cognitive load, ethical issues, data security, and the need to maintain a balance between automation and the development of programmers' skills are also identified.
Keywords:
development process optimization, machine learning, software systems, code completion, artificial intelligence, code improvement, programming automation, Integrated Development Environments, Large Language Models, data analysis
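To ground the code-assistance tasks listed above, here is a minimal example of requesting a code completion from a large language model over the OpenAI chat API. The model name, prompt, and the idea of wiring this into an editor are assumptions; the IDE tools discussed in the article ship such integration built in.

```python
# Sketch: LLM-backed code completion via the OpenAI chat API.
# Model name and prompt are illustrative; requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

snippet = "def median(values: list[float]) -> float:\n    "

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "Complete the Python function. Reply with code only."},
        {"role": "user", "content": snippet},
    ],
)
print(response.choices[0].message.content)
```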
Quality aspects and improving the margin of reliability of software systems
Reference:
Sheinman, V., Starikov, D.D., Tiumentsev, D.V., Vavilov, G.D. (2024). Improving the Efficiency of Software Development Processes: Container Technologies. Software systems and computational methods, 4, 151–161. https://doi.org/10.7256/2454-0714.2024.4.72015
Abstract:
The article discusses the impact of container technologies on software development processes. It focuses on the role of containerization in optimizing the deployment and management of applications, as well as in increasing the flexibility and scalability of software systems. The study analyzes key aspects of containerization, including application isolation, increased software portability between different environments, and reduced operating costs through optimized use of computing resources. Modern tools such as Docker and Kubernetes, which standardize and automate infrastructure deployment and management, are considered. To analyze the effectiveness of container technologies, benchmarking techniques were used to evaluate their impact on infrastructure flexibility and software system performance; the data sources were scientific publications. The novelty of the research lies in considering the application of container technologies in the context of modern software development practices, which significantly accelerates the development, testing and deployment of software products. The results show that containerization improves system performance, simplifies application management, and reduces operational costs. Examples of the practical use of Docker and Kubernetes in large companies demonstrate that containerization significantly increases infrastructure flexibility and the scalability of solutions, allowing developers to adapt easily to changing conditions and market requirements. In conclusion, it is emphasized that container technologies play a key role in modern software development processes, and their further development will bring even more significant improvements in automation and in the infrastructure management of software systems.
Keywords:
platforms, automation, software operation, resource optimization, scalability, Kubernetes, software development, Docker, containerization, process isolation
Systems analysis, search, analysis and information filtering
Reference:
Pekunov, V.V. (2024). Object-transactional models of programs in algorithmic languages. Software systems and computational methods, 4, 162–169. https://doi.org/10.7256/2454-0714.2024.4.69228
Abstract:
This paper is devoted to the representability of programs written in algorithmic languages by formalisms based on the idea of limiting partially transactional memory, which contains a single transactional cell and many ordinary cells. It is assumed that such formalisms rest on the concept of a network of objects representing both the main and auxiliary elements of the problem being solved. The objects function in memory of the specified type, executing methods that contain exclusively branching code devoid of cycles. Cycles in this approach are replaced by repeated special reconciliation of the object network, similar to that implemented in classical transactional memory. Based on the most general ideas about the process of solving a problem in a certain subject area, the concept of an object-transactional model is introduced for the first time and its basic properties are formulated. Methods of discrete mathematics and the theory of algorithms are used to formulate the structure and basic operating principles of object-transactional models. The concept of limiting partially transactional memory containing a single transactional cell with special reconciliation is introduced, and the features of reconciling such memory in the context of the proposed models are described. A hypothesis is put forward that arbitrary algorithms can be realized by object-transactional models. The basic operating principles of such models are described, their basic properties are formulated, and the concepts of limiting non-parallel and parallel models are introduced. It is proved that the limiting non-parallel model is capable of executing an arbitrary Turing-solvable algorithm, and that the limiting parallel model of K+2 nodes is equivalent to a system of K Turing machines running in parallel and is accordingly capable of executing an arbitrary Turing-solvable algorithm comprising K parallel branches. Thus, the hypothesis on the realizability of arbitrary algorithms put forward in the paper is proved.
Keywords:
parallel algorithm, theory, Turing machine, object-transactional model, limit theorems, special reconciliation, transactional memory, program model, sequential algorithm, realizability of algorithms
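As a toy illustration (not Pekunov's formalism) of replacing a cycle with repeated reconciliation, the sketch below runs a loop-free, branch-only step function over state held in a single "transactional" cell until the step reports no further change, the way a transactional retry loop re-runs a conflicting method.

```python
# Toy sketch: a loop-free method body re-run by an external reconciliation
# driver until the state commits (illustrative, not the paper's formal model).

def step(state: dict) -> bool:
    """Branch-only method body: one step of summing 1..n. Returns True if it changed state."""
    if state["i"] > state["n"]:
        return False                      # nothing to do: state is final
    state["acc"] += state["i"]
    state["i"] += 1
    return True

cell = {"n": 10, "i": 1, "acc": 0}        # the single transactional cell

# Reconciliation driver: re-invoke the cycle-free method until it stabilizes.
while step(cell):
    pass

print(cell["acc"])                        # 55 = sum of 1..10
```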
Computer graphics, image processing and pattern recognition
Reference:
Kheyfits, A.E., Yanchus, V.E., Borevich, E.V. (2024). Methodology for conducting an experimental study on the perception of visual information in the field of human peripheral vision. Software systems and computational methods, 4, 170–181. https://doi.org/10.7256/2454-0714.2024.4.44101
Abstract:
This article describes the methodology of an experiment conducted as part of a study of the perception of visual information in the field of human peripheral vision. The authors describe the preparation and design of the stimulus material, the experimental setup, the experimental procedure, and the processing of the data obtained. In designing the experiment, the question is raised of expanding the area from which visual information is read when working with a computer system. Since the analysis of incoming information was carried out with a gradual complication of the task, developing a possible solution became a practical necessity in the course of the work. Any monitor, regardless of size, limits the analysis of visual perception, and incorrectly enlarging or reducing objects within the display format degrades the data. A correctly sized image reduces fatigue when reading information and improves the quality of its perception. The peripheral vision area remains unused in work processes, although it is an important part of human vision; studying the perception of graphic images in the peripheral area can potentially improve the efficiency of interfaces. The results obtained from the experimental data can be put to rational use in the development of human-computer interaction interfaces. The methodology includes the development of the stimulus material and consideration of the factors of color, size and distance in it. The stimulus material is generated by a software module that is responsible for its random and independent placement. An eye-tracking software and hardware complex is used to record the parameters of the viewing pattern.
Keywords:
Visual system, Human-Computer Interaction, Eye Tracker, Intelligent Interface, Statistical analysis, Stimulus material, Eye Tracking, Graphical interface, Peripheral vision, Visual perception
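A small sketch of the kind of random, independent stimulus placement the software module performs: each trial draws a position at a controlled eccentricity from the fixation point. The screen geometry, eccentricity values, and polar placement scheme are assumptions made for illustration.

```python
# Sketch: random stimulus placement at controlled eccentricities from fixation.
# Screen geometry and eccentricity values are illustrative assumptions.
import math
import random

random.seed(42)
SCREEN_W, SCREEN_H = 1920, 1080
CENTER = (SCREEN_W / 2, SCREEN_H / 2)          # fixation point
ECCENTRICITIES_PX = [200, 350, 500]            # distances probing peripheral vision

def place_stimulus() -> tuple[float, float]:
    r = random.choice(ECCENTRICITIES_PX)
    angle = random.uniform(0, 2 * math.pi)     # independent random direction
    return (CENTER[0] + r * math.cos(angle), CENTER[1] + r * math.sin(angle))

for trial in range(3):
    x, y = place_stimulus()
    print(f"trial {trial}: stimulus at ({x:.0f}, {y:.0f})")
```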
Computer graphics, image processing and pattern recognition
Reference:
Damdinova, T.T., Damdinov, Z.S., Prudova, L.Y., Bubeev, I.T. (2024). Modeling irregular-shaped pores based on the brightness of pixels in a digital image. Software systems and computational methods, 4, 182–191. https://doi.org/10.7256/2454-0714.2024.4.72014
Abstract:
One of the current directions in materials science is the study and modeling of processes occurring in capillary-porous bodies. Such bodies play an important role in areas such as oil production, medicine, construction, the production of filters and batteries, and the creation of new materials, where accurate understanding and control of the processes occurring in porous media are required. Modern modeling methods that use computer vision and the high computing power of modern computers can significantly improve the accuracy and efficiency of studies of capillary-porous bodies and the processes within them. For porous bodies of irregular structure, accurate visual observation of the processes occurring inside the object is difficult, yet it is needed, especially for multilevel processes that affect the state of a porous body. Existing simulators and structural models of such objects, which rely on simplifications and abstract models to assess the interaction of substances (diffusion, percolation, etc.), are unsuitable for heterogeneous and irregular structures. To obtain a geometric model of arbitrarily shaped pores, it is proposed to use digital image processing methods. An array of points describing the pore geometry is created, and a surface modeling method based on spline interpolation of the selected points was chosen. To obtain a surface model of an irregularly shaped pore, a technique has been developed for forming an array of points based on the brightness levels of the pixels of its digital image; the brightness levels used to group the initial pore points are determined from the local maxima of the brightness histogram. An algorithm for constructing a surface model of an irregularly shaped pore using spline interpolation of the point arrays has been developed, and the result of its software implementation is presented. The developed method of geometric modeling of irregular pores makes it possible to obtain further information for analysis on pore size and volume and on the porosity of the object as a whole, as well as to analyze processes in porous objects. If necessary, the number of brightness levels used to group points can be increased to improve the accuracy of the shape.
Keywords:
histogram, array of points, geometric modeling, irregular shape, surface modeling, spline interpolation, digital image processing, porosity, porous materials, pixel brightness level
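A condensed sketch of the pipeline the abstract outlines: find local maxima of the brightness histogram, use them as grouping levels for pixel points, and spline-interpolate one group's boundary points. NumPy/SciPy calls stand in for the authors' implementation, and the image here is synthetic.

```python
# Sketch: brightness-level grouping + spline interpolation of pore points.
# Synthetic image and library choices are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)   # stand-in grayscale image

# 1. Brightness histogram and its local maxima -> grouping levels.
hist, _ = np.histogram(img, bins=256, range=(0, 256))
levels, _ = find_peaks(hist, distance=20)
print("grouping levels:", levels[:5])

# 2. Collect pixel coordinates near one brightness level.
level = int(levels[0])
ys, xs = np.nonzero(np.abs(img.astype(int) - level) <= 2)

# 3. Fit a closed parametric spline through a subsample of the points, sorted
#    by angle around their centroid so the curve wraps around the group.
pts = np.stack([xs, ys]).astype(float)
order = np.argsort(np.arctan2(pts[1] - pts[1].mean(), pts[0] - pts[0].mean()))
pts = pts[:, order[::10]]                                      # thin out for stability
tck, _ = splprep(pts, s=len(pts[0]), per=True)                 # closed smoothing spline
x_s, y_s = splev(np.linspace(0, 1, 200), tck)
print("spline boundary points:", len(x_s))
```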
Software for innovative information technologies
Reference:
Zolotukhina, D. (2024). The Efficiency of Distributed Caching Platforms in Modern Backend Architectures: A Comparative Analysis of Redis and Hazelcast. Software systems and computational methods, 4, 192–204. https://doi.org/10.7256/2454-0714.2024.4.72305
Abstract:
The object of this study is two caching and distributed data storage systems — Redis and Hazelcast — which are widely used to accelerate data access in high-load applications. This article presents a comprehensive comparative analysis of these systems based on key aspects important for efficient caching: architectural features, memory management models, clustering approaches, fault tolerance mechanisms, and scalability. Special attention is given to investigating caching patterns and support for SQL-like queries. The aim of the work is to provide an in-depth analysis of the advantages and limitations of Redis and Hazelcast in the context of data caching, as well as to identify their strengths and weaknesses under different loads and usage scenarios. The methodology of the research includes a comparative analysis of Redis and Hazelcast based on key aspects, with results presented in the form of a comparative table. Performance testing of CRUD operations was also conducted using automated tests integrated into a Spring Boot application. The study shows that Redis, being a single-threaded system with fast read and write operations, is more efficient for simple, localized applications, while Hazelcast, which supports multi-threading and dynamic clustering, handles large data volumes and distributed tasks more effectively. The author's contribution to the research is a comprehensive comparative analysis of these systems, considering key characteristics such as performance, scalability, and fault tolerance, along with testing their performance in real-world scenarios. The novelty of the research lies in the detailed examination of Redis and Hazelcast for data caching in high-load applications, which will be valuable for the development and optimization of the infrastructure of high-performance distributed systems that require real-time data caching.
Keywords:
in-memory data storage, clustering, multithreading, fault tolerance, CRUD operations, performance, Hazelcast, Redis, caching, distributed system
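A stripped-down version of the kind of CRUD timing the study performs, using the redis-py and hazelcast-python-client packages instead of the article's Spring Boot tests. The clients, key counts, and local default endpoints are assumptions; both servers are assumed to be running locally.

```python
# Sketch: naive CRUD timing against Redis and Hazelcast (illustrative clients;
# the article's tests were Spring Boot-based). Servers assumed on localhost.
import time
import redis                    # pip install redis
import hazelcast                # pip install hazelcast-python-client

N = 10_000

def bench(put, get, delete) -> float:
    start = time.perf_counter()
    for i in range(N):
        put(f"k{i}", f"v{i}")
    for i in range(N):
        get(f"k{i}")
    for i in range(N):
        delete(f"k{i}")
    return time.perf_counter() - start

r = redis.Redis()                                    # localhost:6379
print(f"redis:     {bench(r.set, r.get, r.delete):.2f} s")

hz = hazelcast.HazelcastClient()                     # localhost:5701
m = hz.get_map("cache").blocking()
print(f"hazelcast: {bench(m.put, m.get, m.remove):.2f} s")
hz.shutdown()
```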