Question at hand
Reference:
Chernyshev Y.O., Ventsov N.N., Pshenichnyi I.S.
A possible method of allocating resources in destructive conditions
// Cybernetics and programming.
2018. No. 5.
P. 1-7.
DOI: 10.25136/2644-5522.2018.5.27626 URL: https://en.nbpublish.com/library_read_article.php?id=27626
Abstract:
The subject of the research is an approach to allocating resources under possible destructive conditions. The object of the research is a model of distributional decision-making processes under possible destructive influences. The authors consider the modeling of resource-flow distribution processes under possible undesirable effects. It is shown that using relative fuzzy estimates of resource-transfer routes is more expedient, in terms of the time complexity of the decision-making process, than modeling the entire resource-allocation area, since route preferences with respect to guaranteed resource transfer under destructive impacts can be determined quickly from statistical and expert assessments. The research method is based on set theory, fuzzy logic, and evolutionary and immune approaches. Fuzzy preference relations reduce the time needed to build a model, while evolutionary and immune methods speed up the search for a solution. The main conclusion of the study is that relative fuzzy estimates of route preferences can be used when organizing the allocation of resources. An algorithm for allocating resources under destructive influences is proposed; its distinctive feature is the use of information about previously implemented resource allocations when forming the set of initial solutions. Verification of the obtained solutions is to be carried out by negative selection, one of the methods for modeling the immune system. Existing solutions are best modified using, for example, the methods of evolutionary modeling.
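As a rough illustration of the algorithm the abstract outlines (seeding from previously implemented allocations, negative-selection verification, evolutionary modification), here is a minimal Python sketch; the routes r1..r3, their fuzzy preference values, and the detector set are assumptions, not data from the paper:

    import random

    # Relative fuzzy preference estimates of transfer routes (membership
    # degrees of "the route guarantees delivery under destructive impacts");
    # the routes and values below are illustrative assumptions.
    route_preference = {"r1": 0.9, "r2": 0.6, "r3": 0.3}

    # Detectors produced by negative selection: route combinations assumed
    # to fail under destructive impacts; matching candidates are rejected.
    detectors = [{"r3"}]

    def verify(solution):
        # Negative-selection check: a solution matching a detector is invalid.
        return not any(d <= set(solution) for d in detectors)

    def fitness(solution):
        # Aggregate fuzzy preference of the chosen routes (min-composition).
        return min(route_preference[r] for r in solution)

    def mutate(solution):
        # Evolutionary modification: swap one route for a random alternative.
        s = list(solution)
        s[random.randrange(len(s))] = random.choice(list(route_preference))
        return sorted(set(s))

    # The set of initial solutions is seeded from previously implemented
    # allocations, as the proposed algorithm prescribes.
    population = [["r1", "r2"], ["r2", "r3"]]
    for _ in range(100):
        candidate = mutate(random.choice(population))
        if candidate and verify(candidate):
            population.append(candidate)

    best = max((s for s in population if verify(s)), key=fitness)
    print(best, round(fitness(best), 2))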
Keywords:
decision making, modeling, adaptation, intellectual method, optimization, distribution, fuzziness, evolution, immune approach, flows
Parallel algorithms for numerical analysis
Reference:
Sechenov P., Olennikov A.A.
Application of NVIDIA CUDA parallel programming technology in the task of melting a spherical particle
// Cybernetics and programming.
2018. No. 5.
P. 8-14.
DOI: 10.25136/2644-5522.2018.5.20345 URL: https://en.nbpublish.com/library_read_article.php?id=20345
Abstract:
The article describes the NVIDIA CUDA parallel programming technology as applied to the task of melting a spherical particle. Modern computers tend to gain power by increasing the number of cores rather than the processor clock frequency, since raising the frequency leads to significant energy consumption and heat generation. Amdahl's law is presented, which estimates the speedup of a program parallelized over N processors. The conditions for increasing algorithm performance when parallelizing tasks are listed. The task of melting iron-ore particles is presented. The features of the CUDA C parallel programming language are considered, and the algorithms for the selected task are presented. A comparative analysis of the task execution time on the CPU (C#) and the GPU (CUDA C) has been made. CUDA parallel programming technology increases the performance of parallelized algorithms of complexity N by up to 60 times. This requires a graphics processor supporting the technology, the CUDA development environment and compiler, knowledge of the CUDA C language, and a good understanding of the task and of how it can be parallelized.
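Amdahl's law referenced in the abstract has the standard form

    S(N) = \frac{1}{(1 - P) + P/N},

where P is the fraction of the program that can be parallelized and N is the number of processors; as N grows, the speedup is bounded above by 1/(1 - P), which is why the listed conditions on the parallelizable share of the task matter.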
Keywords:
Amdahl's law, C# language, CUDA C language, melting particles, CPU, GPU, parallel programming, algorithm performance, lead time, high-level language
Methods, languages and forms of human-computer interaction
Reference:
Kukushkin Y.A., Aivazyan S.A.
Methods for the automated processing of operator control movements in applied research on the reliability of ergatic systems
// Cybernetics and programming.
2018. No. 5.
P. 15-23.
DOI: 10.25136/2306-4196.2018.5.18178 URL: https://en.nbpublish.com/library_read_article.php?id=18178
Abstract:
The subject of the study is the optimization of ergatic control systems on the basis of the psychophysiological capabilities of the operator. The analysis of operator control movements is complicated by the fact that the processes of manipulating the controls of an ergatic system are non-stationary, so digital filtering techniques are needed to remove the low-frequency trend from the analyzed process and pass to the analysis of a stationary random process. A technique for the automated processing of operator control movements in applied research on the reliability of ergatic systems is presented; it was tested during ergonomic research on a hardware-in-the-loop simulator complex. The research methodology combines the techniques of engineering psychology, reliability theory, ergonomics, spectral analysis, mathematical cybernetics, and computational mathematics. The main conclusion of the study is that the analysis of operator control movements must be an integral part of hardware-in-the-loop ergonomic research, because its results allow adequate consideration of the psychophysiological state and reserve capacity of the operator of an ergatic system in managing its operation. This makes it possible to develop and implement a set of measures aimed at ensuring the functional reliability of the professional activity of an ergatic system operator and thereby the safety of its operation.
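The processing chain the abstract describes, removing the low-frequency trend by digital filtering and then spectrally analyzing the remaining stationary process, might look like the following Python sketch; the synthetic record and the filter parameters are assumptions, not values from the study:

    import numpy as np
    from scipy import signal

    # Synthetic stand-in for an operator's control-movement record: a slow
    # non-stationary trend plus a stationary oscillatory component.
    fs = 100.0                         # sampling rate, Hz (assumed)
    t = np.arange(0, 30, 1 / fs)
    x = 0.5 * t + np.sin(2 * np.pi * 2.0 * t) + 0.2 * np.random.randn(t.size)

    # Digital high-pass filtering removes the low-frequency trend, leaving
    # an approximately stationary process suitable for spectral analysis.
    b, a = signal.butter(4, 0.5 / (fs / 2), btype="highpass")
    x_stationary = signal.filtfilt(b, a, x)

    # Spectral analysis of the detrended record (Welch periodogram).
    f, psd = signal.welch(x_stationary, fs=fs, nperseg=1024)
    print("dominant frequency, Hz:", f[np.argmax(psd)])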
Keywords:
ergatic system control, operator functional reliability, operator control movement, spectral analysis of signals, digital low frequency filtering, non-stationary signal processing, psycho-physiological reliability, ergonomic research, software engineering, engineering psychology
Systems analysis, search, analysis and information filtering
Reference:
Myasnikov V.I.
Generation algorithm for high-precision PWM signal
// Cybernetics and programming.
2018. No. 5.
P. 24-31.
DOI: 10.25136/2644-5522.2018.5.18991 URL: https://en.nbpublish.com/library_read_article.php?id=18991
Abstract:
The object of the study is regulators built on the basis of pulse-width modulation (PWM). The popularity of PWM-based regulation is explained by the simplicity of its implementation and the absence of hysteresis in the executive electromagnetic device. The bit width and frequency of the PWM signal affect the quality of regulation. Modern microcontrollers have an integrated PWM module, so implementing a control device on them is simple and inexpensive. Given the required parameters of the PWM signal, frequency and bit depth, the developer often faces difficulties in implementing the controller due to the limited resources of the microcontroller, in particular its clock frequency. The paper analyzes a specified N-bit PWM signal by splitting it into two components, one of which is determined by the frequency and bit width of the processor's PWM module. The required memory size is investigated as a function of the algorithm for splitting into components and of the bit width of the microcontroller used. As a result of the research, the memory costs of a tabular method for generating a PWM signal are determined depending on the bit width of the microcontroller used. The main result of the work is the possibility of increasing the resolution of the controller under limited microcontroller resources. Recommendations on implementing the algorithm depending on the selected microcontroller are given, along with the required memory resources.
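A common way to realize the splitting the abstract analyzes is to drive a k-bit hardware PWM from a lookup table that dithers the coarse value over 2^(N-k) consecutive periods, so that the average duty has N-bit resolution. The following Python sketch shows the table construction and its memory cost; the bit widths are assumptions, and this illustrates the tabular idea rather than the paper's exact algorithm:

    # Split an N-bit PWM setting into a coarse k-bit value for the hardware
    # PWM module and a fine (N - k)-bit remainder realized by dithering the
    # coarse value over 2**(N - k) consecutive periods from a lookup table.
    N, k = 12, 8        # assumed target resolution and hardware PWM width
    M = 1 << (N - k)    # table length: one entry per period of the dither cycle

    def dither_table(duty):
        # Per-period compare values whose average reproduces the N-bit duty.
        coarse, fine = divmod(duty, M)
        return [coarse + 1 if i < fine else coarse for i in range(M)]

    table = dither_table(1447)        # some 12-bit duty value
    assert sum(table) == 1447         # the mean duty is exact over M periods
    print(table, "table bytes:", M * ((k + 7) // 8))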
Keywords:
accuracy, rate, memory size, resolution, filtering, spectrum, algorithm, PWM, microcontroller, frequency
Automated systems of engineering process administration
Reference:
Fedosovsky M.E.
Development of a methodology for constructing control systems for complex technical complexes using the methods of mathematical category theory
// Cybernetics and programming.
2018. No. 5.
P. 32-43.
DOI: 10.25136/2644-5522.2018.5.27561 URL: https://en.nbpublish.com/library_read_article.php?id=27561
Abstract:
The object of research in this work is the control systems of complex technical complexes. The subject of research is a methodology for developing such control systems. The developed methodology for creating a control system for complex technical complexes is based on the idea of generating a sequence of mappings of conceptual models into infological models and, further, into datalogical models. Previously, the author presented conceptual and infological modeling, the mathematical models corresponding to these levels, and the relations between them, that is, mathematical categories. The methodology rests on the methods of mathematical category theory. The categories presented in the datalogical representation have two levels of abstraction. The main findings of the study:
1. A unified description of families of inhomogeneous mathematical models reflecting different levels of abstraction (generalization) at the stage of the datalogical presentation of subject problems makes it possible to formulate a general definition of the models together with a description of their structure.
2. The developed method of datalogical modeling provides everything needed for customization to the specific software and hardware tools that implement a control system for complex technical complexes.
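One compact reading of the methodology's chain of mappings in category-theoretic notation (our sketch of the idea, not notation quoted from the paper) is

    F : \mathbf{Conc} \to \mathbf{Info}, \qquad
    G : \mathbf{Info} \to \mathbf{Data}, \qquad
    G \circ F : \mathbf{Conc} \to \mathbf{Data},

where Conc, Info, and Data stand for the categories of conceptual, infological, and datalogical models, and the functors F and G are the level-to-level mappings whose composite carries a conceptual model down to its datalogical realization.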
Keywords:
computer-aided design, control systems, datalogical modeling, infological modeling, conceptual modeling, abstract levels, mathematical model, mathematical category theory, display, technical complex
Databases
Reference:
Raikhlin V.A., Minyazev R.S., Klassen R.K.
The efficiency of a large conservative type DBMS on a cluster platform
// Cybernetics and programming.
2018. No. 5.
P. 44-62.
DOI: 10.25136/2644-5522.2018.5.22301 URL: https://en.nbpublish.com/library_read_article.php?id=22301
Abstract:
The results of original research on the principles of organization and the operational features of conservative cluster-type DBMSs are discussed. The relevance of the focus on large databases is determined by modern trends in the intelligent processing of large information arrays. Growth in database volume requires hashing the data over cluster nodes, which necessitates a regular query-processing plan with dynamic segmentation of intermediate and temporary relations. A comparative evaluation of the obtained results against the alternative "core-to-query" approach is given, under the condition that the database is replicated across cluster nodes. A significant part of the article is devoted to a theoretical analysis of GPU acceleration for conservative DBMSs with a regular query-processing plan. Experimental studies were carried out on specially developed full-scale models, Clusterix, Clusterix-M, and PerformSys, with MySQL at the executive level. The theoretical analysis of GPU acceleration is performed using the proposed Clusterix-G project as an example. The following are shown: the behavior of the Clusterix DBMS in dynamics and the optimal architectural variant of the system; the manyfold increase in scalability and system performance in the transition to multiclustering (the Clusterix-M DBMS) or to the advanced "core-to-query" technology (PerformSys); and the non-competitiveness of GPU acceleration compared with the "core-to-query" approach for medium-sized databases that do not exceed the cluster's memory but do not fit into the GPU's global memory. For large databases, a hybrid technology (the Clusterix-G project) is proposed in which the cluster is divided into two parts: one performs selection and projection over a database hashed by nodes and compressed, while the other performs "core-to-query" joins. The functions of the GPU accelerators differ between the two parts. Theoretical analysis showed this technology to be more effective than Clusterix-M, but the advisability of using graphics accelerators within this architecture requires further experimental research. It is noted that the Clusterix-M project remains viable in the Big Data field, as does the "core-to-query" approach given the availability of modern, expensive information technologies.
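The two operations the regular query-processing plan relies on, hashing base relations over cluster nodes and dynamically re-segmenting intermediate relations by join key, can be illustrated schematically in Python; the node count, relations, and attribute names below are invented for the sketch:

    # Hash-partitioning of relations over cluster nodes; re-segmenting both
    # join operands on the join attribute puts matching rows on the same
    # node, so each node can join its fragment locally.
    NODES = 4

    def segment(rows, key):
        # Distribute rows to nodes by hashing the segmentation attribute.
        parts = [[] for _ in range(NODES)]
        for row in rows:
            parts[hash(row[key]) % NODES].append(row)
        return parts

    orders = [{"id": i, "cust": i % 7} for i in range(20)]
    customers = [{"cust": c, "name": "c%d" % c} for c in range(7)]

    for node, (o_part, c_part) in enumerate(zip(segment(orders, "cust"),
                                                segment(customers, "cust"))):
        local = [{**o, **c} for o in o_part for c in c_part
                 if o["cust"] == c["cust"]]
        print("node", node, "joined", len(local), "rows")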
Keywords:
Performance, Scalability, Dynamic relationship segmentation, Host hashing, Regular processing plan, Big Data, Conservative DBMS, Multiclusterisation, Advanced technology, GPU-accelerators efficiency
Databases
Reference:
Lobanov A.A., Filgus D.I.
The method of searching for the shortest Hamiltonian path in an arbitrary graph based on the rank approach, which provides high efficiency and small error in solving the problem of organizing the process of managing multiple transactions and queries when they are implemented in network databases
// Cybernetics and programming.
2018. No. 5.
P. 63-75.
DOI: 10.25136/2644-5522.2018.5.26513 URL: https://en.nbpublish.com/library_read_article.php?id=26513
Abstract:
The object of research is the workload-management subsystem of a network database. The subject of research is the management of scheduling the execution of subscriber requests and transactions in a network database. In many cases, existing solutions do not provide the required access time or accuracy of the found solution, so a method for scheduling the execution of user requests and transactions is needed. Particular attention is paid to query-sampling algorithms in network databases, as well as to a conceptual model of the process of managing transactions and queries. Methods of graph theory are used. The effectiveness of the solution was evaluated using a systems approach, systems analysis, and the theory of operations research. Experimental data obtained during the work were processed in accordance with the provisions of mathematical statistics. A method has been developed for finding the shortest Hamiltonian path in an arbitrary graph based on a rank approach; it provides high efficiency and small error in organizing the management of multiple transactions and queries executed in network databases. The developed method minimizes the idle time of computing devices, reduces the volume and time of data transfer from one device to another, increases overall scalability, minimizes data-access time, and so on. An important advantage of the proposed method is the reduction in the number of elementary operations and in the number of vectors processed in a request's operation queue, which significantly reduces the time needed to form the queues of operations within requests.
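The abstract does not reproduce the rank approach itself, so for orientation only, here is the standard greedy nearest-neighbour construction of a short (not provably shortest) Hamiltonian path in a complete weighted graph, the kind of baseline such methods are usually compared against; the distance matrix is invented:

    # Greedy nearest-neighbour heuristic for a Hamiltonian path: from the
    # current vertex, always move to the nearest unvisited vertex.
    def hamiltonian_path_greedy(dist, start=0):
        n = len(dist)
        path, unvisited = [start], set(range(n)) - {start}
        while unvisited:
            nxt = min(unvisited, key=lambda v: dist[path[-1]][v])
            path.append(nxt)
            unvisited.remove(nxt)
        length = sum(dist[a][b] for a, b in zip(path, path[1:]))
        return path, length

    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 3],
            [10, 4, 3, 0]]
    print(hamiltonian_path_greedy(dist))   # -> ([0, 1, 3, 2], 9)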
Keywords:
transaction, stub tree, network database, rank approach, query, Hamiltonian path, graph, rank, optimization, deviation in measurements
Data encryption and data protection
Reference:
Baltaev R.K., Lunegov I.V.
Steganographic method of embedding information using a noise-like sequence and preserving the statistical model of images
// Cybernetics and programming.
2018. No. 5.
P. 76-83.
DOI: 10.25136/2644-5522.2018.5.27634 URL: https://en.nbpublish.com/library_read_article.php?id=27634
Abstract:
The subject of research is a steganographic method of embedding information in digital images. Steganography can hide not only the content of information but also the very fact of its existence. The paper considers one of the most important problems in the development of steganographic methods: the secrecy of the transfer of protected information. Secrecy means not only the visual or auditory indistinguishability of a digital media resource from one with embedded information, but also statistical indistinguishability. Special attention is paid to preserving the spatial statistical dependence between image pixels. The methodological basis of the research is the methods of mathematical statistics and image-processing theory, as well as image-distortion metrics. The novelty of the research lies in the development of a new method of embedding information in static images. The authors consider in detail the use of an autoregressive moving-average process to represent the statistical dependence of image pixels. It is shown that the proposed method embeds information into digital images without significant distortion.
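To make the noise-like-sequence idea concrete, here is a schematic spread-spectrum-style embedding in Python: a keyed ±1 sequence carries each bit over a block of pixels. This illustrates only the noise-like carrier; the ARMA-based preservation of the image's statistical model, which is the paper's actual contribution, is not reproduced here:

    import numpy as np

    # A keyed pseudo-random generator shared by sender and receiver yields
    # the noise-like +/-1 chip sequence (the key 42 is an assumption).
    rng = np.random.default_rng(seed=42)

    def embed(pixels, bits, alpha=1.0):
        # Each bit modulates the sign of the chips over its pixel block.
        chips = rng.choice([-1, 1], size=pixels.size)
        signs = np.repeat([1 if b else -1 for b in bits],
                          pixels.size // len(bits))
        return np.clip(pixels + alpha * signs * chips, 0, 255), chips

    def extract(stego, cover, chips, nbits):
        # Correlating the difference with the chips recovers the bit signs.
        diff = (stego - cover) * chips
        return [int(chunk.sum() > 0) for chunk in np.array_split(diff, nbits)]

    cover = rng.integers(1, 255, size=64).astype(float)
    stego, chips = embed(cover, [1, 0, 1, 1])
    print(extract(stego, cover, chips, 4))   # -> [1, 0, 1, 1]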
Keywords:
information embedding algorithm, digital images, CIEDE2000, hidden communication, information security, image processing, ARMA, steganography, autoregression process, image distortion