Quality aspects and improving the reliability margin of software systems
Reference:
Revnivykh A.V., Velizhanin A.S.
The study of the disassembled representation of executable files generated by different compilers. Example of buffer overflow vulnerability.
// Cybernetics and programming.
2019. No. 1.
P. 1-17.
DOI: 10.25136/2644-5522.2019.1.28238 URL: https://en.nbpublish.com/library_read_article.php?id=28238
Abstract:
The subject of the study is the potential buffer overflow vulnerability associated with the strcpy function of the standard C/C++ library, together with approaches and methods for finding such vulnerabilities. The object of the study is the machine code produced by compilers when a program is built in different modes. The purpose of the study is to analyze features of the machine code generated by various compilers for Windows and Linux in the Debug and Release modes and, on this basis, to review the buffer overflow vulnerability. Research methods. The paper reviews and develops methods for constructing algorithms that search for buffer overflow vulnerabilities and examines the characteristics of this vulnerability at the level of machine code, using the Visual C++, Intel C++ and g++ compilers as well as the WinDBG and GDB debuggers. Key findings. Building a program in different modes produces differences in the executable code even though it is compiled from identical high-level source code, and these differences manifest themselves in program behavior. When examining software for vulnerabilities, it is important to analyze the machine code in order to identify hidden patterns. The novelty of the study lies in identifying the differences in machine code obtained by assembling the same high-level code and in identifying compiler stamps left when the program is built in different modes. A special contribution of the author is the development of methods for constructing algorithms that search for buffer overflow vulnerabilities.
Keywords:
Debug mode, Compiler stamps, Buffer overflow, Disassembling, Code analysis, Vulnerabilities, Information security, Release mode, Algorithm construction methods, WinDBG debugger
Question at hand
Reference:
Vavilov V.
Analysis of RQ-systems operating in a semi-Markov environment with return of requests
// Cybernetics and programming.
2019. No. 1.
P. 18-36.
DOI: 10.25136/2644-5522.2019.1.28838 URL: https://en.nbpublish.com/library_read_article.php?id=28838
Abstract:
The object of this research is RQ-systems (retrial queueing systems, systems with repeated calls) with a Poisson incoming flow of requests, waiting in orbit, return of requests, and operation in a random (semi-Markov) environment. Such systems model a wide class of real service systems in which a customer, upon completing a successful service, may leave the system permanently or return after some time for repeated service. Examples include banks, where a customer who has repaid a loan may apply for a new one, and employment centers, where clients may return in search of new work. The efficiency of such systems depends on a number of factors whose nature can be described as random (a random semi-Markov environment). In this article, the author presents a mathematical model of the class of systems under study. The research tool is the mathematical apparatus of queueing theory. The proposed mathematical model of RQ-systems with return of requests in a semi-Markov environment is investigated by the method of asymptotic analysis of Markovized systems. The scientific novelty of the work is that a mathematical model of an RQ-system operating in a semi-Markov environment with return of requests is proposed for the first time and its asymptotic analysis is performed. The asymptotic mean of the normalized number of customers in the system and the deviation from this mean are found, and the main probability-time characteristic is obtained: the probability density of the process describing changes of the system states.
Keywords:
diffusion process, diffusion approximation, asymptotic analysis method, semi-Markov process, random environment, orbit, queuing system, Markov chain, variable parameters, Poisson flow of applications
Mathematical models and computer simulation experiment
Reference:
Ipatov Y.A., Kalagin I.V.
Analysis of the dynamic characteristics for target groups of social networks
// Cybernetics and programming.
2019. No. 1.
P. 37-50.
DOI: 10.25136/2644-5522.2019.1.18417 URL: https://en.nbpublish.com/library_read_article.php?id=18417
Abstract:
The object of the research is the dynamic characteristics of target groups in social networks. The subject of the study is the methods and models used to analyze the evolutionary characteristics of large social graphs. The study examines in detail the existing analysis approaches and the quantitative characteristics of graph models, and synthesizes an algorithm for analyzing the dynamic characteristics of target groups in social networks. The experimental results capture the fact of a user joining a group in the subject area of interest and visualize the entire process in real time. The developed software tools can be useful for further work on topics related to social networks. The tasks are solved using the methods of mathematical logic, graph theory, mathematical statistics, mathematical analysis, linear algebra, mathematical modeling, the theory of algorithms, and object-oriented programming techniques. The novelty of the study lies in determining the dynamic characteristics of target groups in social networks and in visualizing the entire process in real time. The main conclusion is that the developed software tool makes it possible to trace cause-and-effect indicators of changes in the social graph. The proposed software prototype will be of interest primarily to marketers, system analysts, and professionals engaged in the analysis and study of social networks.
Keywords:
social graph characteristics, social graph model, network analysis methods, dynamic network analysis, network evolution, social network, graph visualization, group influence, conversion tool, social group dynamics
Mathematical models and computer simulation experiment
Reference:
Sklyar A.
Analysis and elimination of noise components in time series with variable step
// Cybernetics and programming.
2019. No. 1.
P. 51-59.
DOI: 10.25136/2644-5522.2019.1.27031 URL: https://en.nbpublish.com/library_read_article.php?id=27031
Abstract:
The article discusses and justifies a methodology for estimating the noise component in time series with a variable step, and proposes an algorithm for removing noise from the data. The analysis is based on the requirement that the function representing the original data be smooth, with continuous derivatives up to the third order. The algorithm is based on minimizing the deviations of the calculated values from a smooth function, subject to the condition that the deviations from the source data correspond to the noise level. Under this smoothness assumption, the proposed method and algorithms make it possible to reasonably determine both the absolute and the relative noise in the data, regardless of the uniformity of the measurement step or the noise level in the source data, and to remove the noise component from the data. Owing to the smoothness of the data obtained after noise elimination, the resulting data are suitable for detecting both analytical and differential dependencies in them.
Keywords:
numerical simulation, data decomposition, trend, time series, digital noise filtration, relative noise, absolute noise, time series analysis, mathematical model, statistics processing
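One standard way to formalize this kind of constrained smoothing (a sketch of the general idea, not the article's exact formulation) is a penalized fit in which roughness is minimized while the residuals are kept at the estimated noise level:

```latex
\min_{f}\ \int \bigl(f'''(x)\bigr)^{2}\,dx
\quad\text{subject to}\quad
\sum_{i=1}^{n}\bigl(y_i - f(x_i)\bigr)^{2} \le n\,\sigma^{2},
```

where $y_i$ are the measured values at the (possibly non-uniformly spaced) points $x_i$, $\sigma^2$ is the estimated noise variance, and the third-derivative penalty encodes the requirement of continuous derivatives up to the third order.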
Theory, software and languages of concurrent computing
Reference:
Pekunov V.V.
Some properties of procedures with re-entry scheduling. The Planning C language
// Cybernetics and programming.
2019. No. 1.
P. 60-65.
DOI: 10.25136/2644-5522.2019.1.25522 URL: https://en.nbpublish.com/library_read_article.php?id=25522
Abstract:
The article analyzes the descriptive capabilities of procedures and functions with re-entry scheduling. A procedure/function with re-entry scheduling differs from an ordinary procedure/function in having an execution plan that can be updated dynamically, both from inside and from outside. This is a fairly new formalism whose theoretical and practical properties are still poorly covered in the scientific literature. Special attention is paid to the Planning C programming language, which fully implements procedures and functions with re-entry scheduling. The descriptive capabilities of such procedures/functions are considered both theoretically, using extended Turing machines, and constructively, by building equivalents of the basic algorithmic control structures from these procedures. The novelty consists in proving that any sequential and parallel algorithms can be represented using these procedures. It is proposed to use Planning C, which implements such procedures/functions, for solving time-consuming problems of computational mathematics on parallel computing systems. The possibility of its use in training deep neural networks is shown.
Keywords:
programming language, Planning C, algorithmic completeness, parallel algorithms, sequential algorithms, planning re-entry, procedures, hard calculations, deep neural networks, computational mathematics
Knowledge bases, intelligent systems, expert systems, decision support systems
Reference:
Mustafaev A.G.
Neural network techniques for automatic electrocardiogram analysis in the diagnosis of diseases of the cardiovascular system
// Cybernetics and programming.
2019. No. 1.
P. 66-74.
DOI: 10.25136/2644-5522.2019.1.19343 URL: https://en.nbpublish.com/library_read_article.php?id=19343
Abstract:
One of the most important factors in the timely provision of medical care is obtaining information about the patient's state of health quickly and accurately. Electrocardiography (ECG) is a non-invasive method of recording and interpreting the electrical activity of the heart that makes it possible to evaluate the rate and regularity of heart contractions. These data are used to identify damage to and pathologies of the heart. Automatic ECG analysis is a challenging theoretical and practical task. The goal of the work is to use neural networks to detect the characteristic ECG signals that indicate heart rhythm abnormalities and to identify the corresponding heart disease. The artificial neural network was modeled using the Neural Network Toolbox of MATLAB 8.6 (R2015b). The effectiveness of the developed neural network model for ECG analysis was investigated using the MIT-BIH database. The accuracy of detecting and extracting the components of the ECG signal shows that the developed neural network model can be used to detect heart disease in patients. The sensitivity of the model was 71% and the specificity 89%.
Keywords:
sensitivity, data classification, error back propagation, multilayer perceptron, computer diagnostics, artificial neural network, electrocardiogram, specificity, supervised learning, QRS complex
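The reported quality measures are the standard ones, computed from true and false positives (TP, FP) and true and false negatives (TN, FN) on the test records:

```latex
\text{sensitivity} = \frac{TP}{TP + FN} = 0.71,
\qquad
\text{specificity} = \frac{TN}{TN + FP} = 0.89 .
```

Sensitivity measures the share of actually abnormal recordings the model flags; specificity measures the share of normal recordings it correctly leaves unflagged.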
Software for innovative information technologies
Reference:
Stepanov P.P.
Application of group control and machine learning algorithms on the example of the "Battlecode" game
// Cybernetics and programming.
2019. No. 1.
P. 75-82.
DOI: 10.25136/2644-5522.2019.1.23527 URL: https://en.nbpublish.com/library_read_article.php?id=23527
Abstract:
The subject of the research is the task of group control of autonomous agents in a dynamic multi-agent system and the self-learning of the control model. The author examines group interaction using the most effective group control algorithms, such as SWARM, the ant algorithm, the bee algorithm, the firefly algorithm, and the fish school movement algorithm, as well as the training of an artificial neural network by means of reinforcement learning. Various optimal path-finding algorithms are compared. The comparison was made in the "Battlecode" game environment, which dynamically generates a new map for each new round, ensuring the quality of the comparison of the algorithms considered. The author uses statistical methods of data analysis, the selection and analysis of qualitative features, forecasting methods, modeling, and classification. The author shows that Q-learning becomes more effective when the tabular representation of the Q-function is replaced with a neural network. The work demonstrates the effectiveness of the bee algorithm in solving the problem of exploring and patrolling an area. At the same time, the A* path search algorithm proves much more flexible and efficient than Dijkstra's algorithm.
Keywords:
multiagent system, ant algorithm, bee algorithm, game artificial intelligence, reinforcement learning, neural network, group management, Battlecode, modeling, agent
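The tabular Q-learning that the abstract says was replaced with a neural network uses the standard update rule (the textbook formula, not a derivation from the article):

```latex
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \Bigl[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \Bigr],
```

where $\alpha$ is the learning rate and $\gamma$ the discount factor. Replacing the table with a network means approximating $Q(s, a; \theta)$ and training $\theta$ toward the same target $r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a'; \theta)$, which scales to the large state spaces a dynamically generated map produces.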
Computer graphics, image processing and pattern recognition
Reference:
Lobanov A.A.
Conceptual optical schemes of direction-finding computing devices for guiding a space probe to a landing point on small bodies of the solar system
// Cybernetics and programming.
2019. No. 1.
P. 83-89.
DOI: 10.25136/2644-5522.2019.1.28720 URL: https://en.nbpublish.com/library_read_article.php?id=28720
Abstract:
The subject of the research is an optical direction finder for the navigation and guidance of a space probe. The use of an optical correlation computing device is motivated by the need to reduce the load on the space probe's on-board computer system. The paper focuses on optical correlation computing devices for building correlation-extremal direction finders, and describes the principal optical schemes of this class of optical devices. The method of mathematical modeling is used to construct and analyze the optical schemes of the proposed optical correlation computing devices. Both the advantages and the disadvantages of the existing optical schemes are noted. Requirements for the parameters of a special on-board optical correlation computing device for direction finding are proposed. It is shown that such an optical device can potentially be used for navigation and guidance of a space probe during landing on a small body of the solar system.
Keywords:
spatial information, optical computer, autonomous guidance, on-board computing system, image recognition, optical processing, optical scheme, direction finding, autonomous navigation, spatial data