Vol 12, No 3 (2025)
ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
Construction of cellular automata using machine learning models
Abstract
The paper develops and studies methods for approximating cellular automata with machine learning models. Cellular automata are models used to study the dynamics of complex systems through simple interaction rules, while machine learning models have in recent years become powerful data-processing tools. The paper examines approaches to predicting cellular automata rules with machine learning models, discusses their advantages and limitations, and proposes metrics for assessing the quality of state predictions, including how prediction quality depends on the number of rule examples supplied for training. The study aims to understand how machine learning models can be used to analyze and model complex systems based on cellular automata, and to outline prospects for developing this approach. Using the proposed metrics, a comparative analysis of the effectiveness of various machine learning models in predicting cellular automata rules is carried out.
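As an illustrative aside, the rule-learning setup this abstract describes can be reproduced in miniature: a minimal sketch, assuming an elementary automaton (Rule 30) and a decision tree as the stand-in machine learning model, neither of which is claimed to match the paper's actual choices.

```python
# Hypothetical sketch: learn an elementary CA rule (Rule 30, chosen only for
# illustration) from observed state transitions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

RULE = 30
rule_bits = np.array([(RULE >> i) & 1 for i in range(8)])

def step(state):
    # Next cell value looked up from the 3-cell neighborhood (periodic boundary).
    l, c, r = np.roll(state, 1), state, np.roll(state, -1)
    return rule_bits[4 * l + 2 * c + r]

rng = np.random.default_rng(0)
state = rng.integers(0, 2, size=512)
X, y = [], []
for _ in range(50):  # collect (neighborhood -> next cell) training pairs
    nxt = step(state)
    X.append(np.stack([np.roll(state, 1), state, np.roll(state, -1)], axis=1))
    y.append(nxt)
    state = nxt
X, y = np.concatenate(X), np.concatenate(y)

model = DecisionTreeClassifier().fit(X, y)
# Evaluate: fraction of correctly predicted next-step cells on a fresh state.
test = rng.integers(0, 2, size=512)
pred = model.predict(np.stack([np.roll(test, 1), test, np.roll(test, -1)], axis=1))
print("per-cell accuracy:", (pred == step(test)).mean())
```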
13-22
Prediction of spatial effects and factors of regional development using machine learning methods
Abstract
When modeling the spatial development of a territory with spatial effects taken into account, it is important to remember that a territory's current development is influenced not only by internal indicators (economic, social, demographic, infrastructural, etc.) but also by processes taking place in neighboring areas. Modeling the spatial development of the Russian Federation must account for spatial heterogeneity, long distances, transport corridors, and climatic conditions; capturing these components requires modeling both inter-regional and intra-regional interaction. The aim of the study is to assess the impact of socio-economic factors on gross regional product (GRP) while accounting for spatial relationships between federal districts and time dynamics. To achieve this goal, the following tasks were solved: 1) a comprehensive analysis of approaches to modeling the spatial development of regions was carried out; 2) an adapted spatial analysis methodology was developed, including a comprehensive system of socio-economic development indicators reflecting the specifics of Siberian regions and a typology of spatial econometric models. Materials and methods. Spatial econometric modeling was used. Conclusions. Spatial econometric models describe socio-economic processes in federal districts more accurately than traditional approaches that ignore the spatial structure of the data.
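For readers unfamiliar with spatial econometrics, the core ingredient is the spatial weight matrix and the spatial lag term it produces. Below is a minimal sketch with a hypothetical five-region adjacency, not the study's actual data or model.

```python
# Hedged sketch: build a row-standardized contiguity matrix W and the spatial
# lag W*y that enters models such as y = rho*W*y + X*beta + eps.
import numpy as np

# Hypothetical adjacency among 5 regions (1 = shares a border).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
W = A / A.sum(axis=1, keepdims=True)   # row-standardize

grp = np.array([120.0, 95.0, 210.0, 80.0, 60.0])  # illustrative GRP values
spatial_lag = W @ grp                  # each region's neighbor-averaged GRP
print(spatial_lag)
```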
23-30
MATHEMATICAL MODELING, NUMERICAL METHODS AND SOFTWARE PACKAGES
Modeling chemical and biological systems using stochastic block cellular automata with Markov neighborhood
Abstract
The article describes a new variation of stochastic block cellular automata, the so-called Markov automata, whose distinctive feature is the dynamic and stochastic formation of blocks. Examples of simple models of physical processes built on this type of automaton are given, and the expressive power of the introduced model is considered. In particular, a comparison with the Turing machine shows that Markov automata are algorithmically universal, which allows them, in theory, to perform arbitrarily complex processing of symbol strings. On the other hand, the presence of a mixing substitution subsystem in the automaton's rule system leads to a different type of behavior, whose dynamics are described by the classical kinetic equations of chemical reaction systems. It is shown that using special separating symbols (membranes) allows several different types of behavior to be combined in different parts of the same automaton and enables information exchange between those parts. This technique opens up the possibility of modeling the simplest biological systems: cells. Using a two-dimensional version of the proposed model, it is shown how the basic one-dimensional model can be extended to higher dimensions.
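A toy sketch of the block mechanism described above, assuming a two-symbol alphabet, one probabilistic reaction rule, and a mixing swap; the authors' substitution system is richer than this.

```python
# Illustrative sketch (not the authors' exact rule system): a 1D stochastic
# block automaton in which a random adjacent pair is rewritten either by a
# "reaction" substitution A B -> B B or by a mixing swap.
import random

def step(cells, p_react=0.5):
    i = random.randrange(len(cells) - 1)          # stochastic block formation
    a, b = cells[i], cells[i + 1]
    if (a, b) == ("A", "B") and random.random() < p_react:
        cells[i], cells[i + 1] = "B", "B"         # reaction substitution
    else:
        cells[i], cells[i + 1] = b, a             # mixing (swap) substitution

cells = ["A"] * 30 + ["B"] * 2
random.shuffle(cells)
for _ in range(5000):
    step(cells)
print("B count:", cells.count("B"))  # B spreads, as in autocatalytic kinetics
```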
31-40
Remote stimulation of QED scenarios in the Jaynes–Cummings–Hubbard model
Abstract
The article addresses the important and relevant task of remotely inducing quantum dynamic scenarios, that is, transferring such scenarios from donor atoms to a target atom. The induction is based on the enhancement of quantum transitions in the presence of multiple photons of the same transition. We use the quantum master equation for the Tavis–Cummings–Hubbard (TCH) model with multiple cavities connected to the target cavity via waveguides. The dependence of scenario-transfer efficiency on the number of donor cavities, the number of atoms in them, and the bandwidth of the waveguides is investigated.
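For orientation, the quantum master equation referred to here is typically taken in Lindblad form; the notation below is generic and the Hamiltonian is a standard multi-cavity TCH form, not necessarily the authors' exact parametrization:

$$
\dot{\rho} = -\frac{i}{\hbar}\,[H,\rho] + \sum_k \gamma_k\!\left(L_k \rho L_k^\dagger - \tfrac{1}{2}\{L_k^\dagger L_k,\rho\}\right),
$$

$$
H = \sum_j \omega_j\, a_j^\dagger a_j + \sum_{j,i}\frac{\Omega_{ji}}{2}\,\sigma_{ji}^z
  + \sum_{j,i} g_{ji}\left(a_j^\dagger \sigma_{ji}^- + a_j \sigma_{ji}^+\right)
  + \sum_{\langle j,k\rangle}\nu_{jk}\left(a_j^\dagger a_k + a_k^\dagger a_j\right),
$$

where $a_j$ are cavity photon modes, $\sigma_{ji}^{\pm}$ act on atom $i$ in cavity $j$, $g_{ji}$ is the atom-field coupling, $\nu_{jk}$ is the waveguide-mediated photon hopping between connected cavities, and the $L_k$ are jump operators for photon leakage with rates $\gamma_k$.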
41-46
CYBERSECURITY
Cybersecurity of the digital ruble platform in the context of increasing information security risks
Abstract
The relevance of the article stems from several circumstances: the need to form a new concept of cybersecurity for the digital ruble amid destructive events in the digital space and a growing number of computer attacks on the credit and financial sector; the growth of means of destructive information impact on financial organizations, which makes identifying and preventing cyber threats in the financial sector's cybersecurity system a pressing task; and the need to protect information in the digital banking space from destructive events such as hacker attacks, viruses, and fraud, alongside the importance of monitoring constructive events, for example process optimization and improving the quality and security of payments on the digital ruble platform. The purpose of the study is a technical solution for ensuring the cybersecurity of the digital ruble platform's infrastructure. The following tasks were completed: the concepts of “trust infrastructure” and “cybersecurity” and its varieties were clarified, and cyberspace tools were analyzed with respect to the cybersecurity of the trust infrastructure within the digital ruble security system. Proposals aimed at improving the cybersecurity of the trust infrastructure in the digital ruble security system were formulated.
47-57
SYSTEM ANALYSIS, INFORMATION MANAGEMENT AND PROCESSING, STATISTICS
A visualization method for metagraphs with complex multi-level hierarchical structures
Abstract
Graph visualization is a crucial tool for the visual analysis of interconnected data; however, traditional methods often fail to efficiently represent nested, multi-level hierarchical structures. This study proposes a method based on classical force-directed layout algorithms, specifically designed for visualizing metagraphs through a planetary interaction model of vertices. The distinctive feature of the proposed approach lies in incorporating the intrinsic weight of each vertex. This adaptation allows for the preservation of nested structures, improved graph readability, and scalability of the visualization as data complexity increases. Simulation results demonstrate the method's effectiveness across various types of nested data, including file systems, organizational hierarchies, and biological ontologies.
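A minimal sketch of the planetary idea as described: one force-directed iteration in which repulsion is scaled by intrinsic vertex weights, so heavy container vertices claim more space. The constants and the toy graph are assumptions, not the paper's algorithm.

```python
# Hedged sketch of a force-directed step with mass-scaled repulsion.
import numpy as np

def layout_step(pos, mass, edges, k=1.0, dt=0.05):
    n = len(pos)
    force = np.zeros_like(pos)
    for i in range(n):                       # pairwise mass-scaled repulsion
        d = pos[i] - pos                     # vectors from every vertex to i
        dist = np.linalg.norm(d, axis=1) + 1e-9
        rep = (k * mass[i] * mass / dist**2)[:, None] * (d / dist[:, None])
        rep[i] = 0                           # no self-force
        force[i] += rep.sum(axis=0)
    for a, b in edges:                       # spring attraction along edges
        d = pos[b] - pos[a]
        force[a] += d
        force[b] -= d
    return pos + dt * force

rng = np.random.default_rng(1)
pos = rng.normal(size=(6, 2))
mass = np.array([5.0, 1, 1, 1, 1, 1])        # vertex 0 is a nested container
edges = [(0, 1), (0, 2), (0, 3), (3, 4), (4, 5)]
for _ in range(200):
    pos = layout_step(pos, mass, edges)
print(pos.round(2))
```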
58-66
Analysis of software code preprocessing methods to improve the effectiveness of using large language models in vulnerability detection tasks
Abstract
As software systems grow in scale and complexity, the need for intelligent methods of vulnerability detection increases. One such method involves the use of large language models trained on source code, which are capable of analyzing and classifying vulnerable code segments at early stages of development. The effectiveness of these models depends on how the code is represented and how the input data is prepared. Preprocessing methods can significantly impact the accuracy and robustness of the model. The purpose of the study: to analyze the impact of various code preprocessing methods on the accuracy and robustness of large language models (CodeBERT, GraphCodeBERT, UniXcoder) in vulnerability detection tasks. The analysis is conducted using source code changes extracted from commits associated with vulnerabilities documented in the CVE database. The research methodology is an experimental analysis based on evaluation of the effectiveness and robustness of CodeBERT, GraphCodeBERT, and UniXcoder in the task of vulnerability classification. The models are assessed based on their performance using Accuracy and F1 score metrics. Research results: estimates of the effectiveness of different code preprocessing methods when applying large language models to vulnerability classification tasks.
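To make the preprocessing notion concrete, here is a simplified sketch of two common variants (comment stripping and identifier normalization) feeding the CodeBERT tokenizer; the paper's exact preprocessing methods may differ, and the keyword list here is deliberately abbreviated.

```python
# Simplified sketch of code preprocessing ahead of CodeBERT tokenization.
import re
from transformers import AutoTokenizer

def strip_comments(code: str) -> str:
    code = re.sub(r"/\*.*?\*/", " ", code, flags=re.S)   # block comments
    return re.sub(r"//[^\n]*", " ", code)                # line comments

def normalize_identifiers(code: str, keywords=("if", "int", "return")) -> str:
    # Replace every non-keyword identifier with VAR0, VAR1, ... so the model
    # sees structure rather than naming. Keyword list is incomplete on purpose.
    names, out = {}, code
    for ident in set(re.findall(r"\b[a-zA-Z_]\w*\b", code)) - set(keywords):
        names.setdefault(ident, f"VAR{len(names)}")
    for ident, repl in names.items():
        out = re.sub(rf"\b{re.escape(ident)}\b", repl, out)
    return out

tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
snippet = "int f(int x) { /* add */ return x + 1; } // caller"
clean = normalize_identifiers(strip_comments(snippet))
print(tok(clean, truncation=True, max_length=512)["input_ids"][:10])
```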
67-79
ELEMENTS OF COMPUTING SYSTEMS
Dynamic routing of signals in on-board computers to increase system reliability
Abstract
The paper presents a module of a device for searching the degree of optimal placement (DOPPS) for onboard computers, implementing hardware-software reorganization of the topology of the onboard computer subsystem responsible for determining airspeed, altitude, and incident-flow pressure under combined interference. The “early cutoff” algorithm on a Kintex-7 FPGA checks up to 1.2·10⁶ routes in 0.55 μs and switches the channel in less than 0.72 μs, which satisfies the 1 μs limit established by DO-178C and ARINC 664. HIL tests showed a 15% decrease in the integral “perturbed cost” ΔL and an increase in the probability of successful transmission Ps to 0.96. At the same time, the dynamic power of the CPU decreased by 1.1 W, and the peak die temperature did not exceed 55 °C. The solution is suitable for serial implementation in UAVs and for modernizing manned systems without modifying certified software.
80-88
Searching for the degree of optimal placement in high-availability multiprocessor systems with directed information transfer
Abstract
This article addresses the search for the degree of optimality of process placement in high-availability clustered multiprocessor systems with directed information transfer. We introduce a hardware–software device that operationalizes a graph-based formulation: a weighted task-interaction graph is mapped onto the processor-topology graph, and the objective minimizes the total inter-processor link length defined as traffic weights multiplied by inter-module distances. The device combines a permutation generator with an evaluation unit operating over an electronic graph model while enforcing channel-bandwidth and processor-load constraints; early-stopping criteria are supported. Experimental evaluation on a fully connected four-processor configuration demonstrated a reduction in total link length from 450 to 320 arbitrary units (–29%) and a decrease in interaction intensity; aggregate system performance increased to 95% versus 80% under the baseline placement. The results indicate that the approach effectively relieves communication bottlenecks, reduces inter-processor traffic, and accelerates reconfiguration in real-time environments. Future work includes scaling to larger topologies, incorporating adaptive heuristics, and integrating with task-scheduling facilities to further enhance the resilience and predictability of high-availability computing platforms.
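The objective described above can be stated compactly in code: a brute-force sketch over a hypothetical four-task instance with a simple early cutoff. The paper's device evaluates the same kind of objective in hardware over an electronic graph model; the numbers below are illustrative.

```python
# Minimal sketch: enumerate placements of 4 tasks onto 4 processors and keep
# the one minimizing sum(traffic * distance), with early cutoff on partial sums.
from itertools import permutations
import numpy as np

traffic = np.array([[0, 8, 2, 1],    # task-interaction weights (illustrative)
                    [8, 0, 5, 0],
                    [2, 5, 0, 7],
                    [1, 0, 7, 0]])
# Illustrative line topology so distances differ; the paper's testbed is a
# fully connected four-processor configuration with its own weights.
dist = np.abs(np.subtract.outer(np.arange(4), np.arange(4)))

best_cost, best_perm = float("inf"), None
for perm in permutations(range(4)):          # perm[task] = processor index
    cost = 0
    for i in range(4):
        for j in range(i + 1, 4):
            cost += traffic[i, j] * dist[perm[i], perm[j]]
        if cost >= best_cost:                # early cutoff: partial sum too big
            break
    if cost < best_cost:
        best_cost, best_perm = cost, perm
print("total link length:", best_cost, "placement:", best_perm)
```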
89-95
AUTOMATION OF MANUFACTURING AND TECHNOLOGICAL PROCESSES
Industrial internet of things as the basis of intelligent production
Abstract
The article studies the current state of the industrial Internet of Things (IIoT), its advantages and disadvantages, and its development prospects. The industrial Internet of Things is fundamentally changing the economic model of supplier-consumer interaction: it makes it possible to automate monitoring and lifecycle management of equipment, organize efficient chains from supplier enterprises to consumer companies, switch to “sharing economy” models, and much more. The article presents a model of a modern 12-layer IIoT architecture. The main advantages of IIoT are highlighted, such as increased efficiency, reduced errors, improved worker safety, and energy savings. It is shown that IIoT-based management of an industrial enterprise allows real-time monitoring of industrial systems, supply chain management, and analysis of large volumes of data, which helps improve productivity and manage inventory and energy consumption more efficiently. At the same time, the risks and problems associated with the widespread use of IIoT are identified, chief among them information security and a shortage of qualified personnel. In conclusion, the main directions for the development of the industrial Internet of Things are defined.
96-104
MATHEMATICAL SUPPORT AND SOFTWARE FOR COMPUTERS, COMPLEXES AND COMPUTER NETWORKS
Hybrid blockchain with dynamic smart contracts for automation of logistics and customs processes
Abstract
The aim of the study is to develop a hybrid blockchain architecture that overcomes the key problems of logistics supply chains: low transparency, insufficient data security, and a high share of manual document flow. The work applies system analysis of existing blockchain platforms (IBM Food Trust, TradeLens, VeChain), assessment of their architectural features and limitations, and the design of a hybrid model based on Hyperledger Fabric with dynamic smart contracts. Results: a multi-level architecture combining private channels for confidential data with a public ledger for verifying key events; a module for dynamically generating smart contracts that reduces development time by 40%; and scenarios for automating logistics processes (document flow, customs clearance, SLA management) that reduce operating costs by 30-50% and accelerate transaction processing to 320 TPS. In conclusion, recommendations for integration with IoT and regulatory systems are formulated, along with directions for further research in the field of data privacy.
105-114
METHODS AND SYSTEMS OF INFORMATION PROTECTION, INFORMATION SECURITY
Digital twin-based method for detecting information security threats in critical information infrastructure objects
Abstract
The article presents a method for detecting information security (IS) threat indicators at critical information infrastructure (CII) facilities using a digital twin (DT) with an adaptive mechanism. It addresses the limitations of traditional IS approaches under conditions of scarce real attack data, the challenges of testing on operational CII facilities, and the difficulty of identifying targeted, evasive threats. A dual-loop method (a DT loop and a CII facility loop) integrated with a three-level adaptation mechanism (operational, tactical, and strategic modes) is proposed. The method encompasses stages of synthetic data generation, model training and testing in the DT, and detection and classification at the facility, and defines adaptation triggers. Key advantages include the ability to safely generate threat scenarios and train models in the virtual DT environment, and automated maintenance of threat detection models. Validation results on a synthetic model of an energy facility control system show a significant improvement in quality metrics after adaptation.
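A schematic sketch of how the three adaptation modes might be triggered from an observed quality metric; the thresholds and the choice of F1 are illustrative assumptions, not the authors' specification.

```python
# Schematic sketch of the dual-loop idea: the facility loop reports detection
# quality; falling quality escalates the adaptation mode handled in the DT.
def adaptation_mode(f1: float) -> str:
    if f1 >= 0.90:
        return "operational"   # keep the current model, light tuning only
    if f1 >= 0.75:
        return "tactical"      # retrain in the digital twin on new scenarios
    return "strategic"         # rebuild threat models and the training corpus

for observed_f1 in (0.94, 0.82, 0.61):
    print(observed_f1, "->", adaptation_mode(observed_f1))
```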
115-122
INFORMATICS AND INFORMATION PROCESSING
Software and analytical complex for supporting balanced development of industrial ecosystems
Abstract
The article presents a concept of a software and analytical complex (SAC) designed to support the balanced development of industrial ecosystems. The architecture of the complex integrates modules for data collection, analytics, visualization, recommendation generation, and scenario analysis, enabling the consolidation of heterogeneous information on enterprises, processes, resources, and interconnections into a unified environment. Special attention is given to mechanisms of data consolidation and the integration of analytical tools that provide a comprehensive assessment of ecosystem sustainability. To ensure semantic consistency and reveal hidden dependencies, the complex incorporates an ontological knowledge model. The proposed SAC is aimed at enhancing interorganizational coordination, optimizing resource utilization, and supporting decision-making under uncertainty and dynamic external conditions.
123-129
Reducing the dimensionality of data for analysis using the principal component method
Abstract
Although modern data mining systems have high computing power, the amount of data to analyze is constantly increasing and can become a critical factor. This makes it relevant to reduce the dimensionality of the source data without degrading the quality of the analysis itself. One method for reducing data dimensionality is the principal component method. The paper considers its application to data analysis in sensor network nodes. An advantage of the method is that it requires no preliminary hypotheses about the state of the object under study. The method is implemented with linear and cyclic operations, which makes it well suited to algorithmic implementation on a computer. The initial data set is a collection of wireless sensor network operation data covering one thousand nodes; for each node, a sample of measurements of the main quality-of-service parameters is provided. The initial data are preprocessed, a covariance matrix is constructed, and its eigenvalues and eigenvectors are found. The result of the method is the set of principal components obtained from the eigenvectors; these components are used for data analysis. The outcome of the work is a reduction in the dimensionality of the data.
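The pipeline in this abstract maps directly onto a few lines of linear algebra; a sketch with synthetic stand-ins for the sensor QoS measurements:

```python
# Sketch of the described pipeline: center the node measurements, build the
# covariance matrix, eigendecompose it, and project onto the leading components.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))            # 1000 nodes x 8 QoS parameters
X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]   # inject correlation to compress

Xc = X - X.mean(axis=0)                   # preprocessing: mean-centering
C = np.cov(Xc, rowvar=False)              # covariance matrix of the parameters
vals, vecs = np.linalg.eigh(C)            # eigenpairs of the symmetric matrix
order = np.argsort(vals)[::-1]            # sort by descending eigenvalue
vals, vecs = vals[order], vecs[:, order]

explained = np.cumsum(vals) / vals.sum()
k = int(np.searchsorted(explained, 0.95) + 1)   # keep 95% of the variance
Z = Xc @ vecs[:, :k]                      # principal-component scores
print(f"reduced from {X.shape[1]} to {k} dimensions")
```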
130-140
Designing a modular automated decision support information system in a digital educational environment
Abstract
In the context of the digitalization of education, there is a growing need for intelligent decision support systems that provide informed, adaptive, and personalized management of the educational process in a digital educational environment. The key areas are timely assessment, forecasting of learning outcomes, and personalization of educational routes. Implementing these directions is impossible without an intelligent automated information system capable of providing adaptive feedback. The purpose of the study is to describe the architecture and modules of an automated decision support information system implementing such feedback. Unlike known implementations of automated information systems, the proposed design uses a modular architecture that integrates an analytical module with adaptive feedback to support decision-making. The modules and the implemented solutions are described and analyzed. The article covers the design stages, including an adaptive assessment sequence diagram, a class diagram, and a deployment diagram, and builds a functional architecture decomposed by levels of representation. The presented system design will provide personalized informational support for the user in the decision-making process in the digital educational environment.
141-151
Recovery of electron density signals beyond the operating range of the measuring instrument
Abstract
Machine learning models have been widely incorporated into control systems aimed at improving the operational efficiency of tokamaks. Training machine learning models requires substantial datasets; however, data collection is limited because experimental campaigns on tokamaks are prolonged. Furthermore, the amount of suitable training data may shrink due to the presence of faulty diagnostic signals, and faulty signals occur more frequently during the initial operation of a new tokamak or of specialized equipment. This work examines the possibility of recovering faulty signals using machine learning techniques, focusing in particular on signals obtained beyond the operating range of the measuring instruments. Recovering such signals should increase the volume of available training data and consequently enhance the efficacy of machine learning-based model training.
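A minimal sketch of the recovery idea under a deliberately simple assumption (the density is a linear function of a few correlated diagnostics): train on in-range shots, then reconstruct the saturated ones. The paper's actual models and diagnostics are not specified here.

```python
# Hedged sketch: fit a regressor on samples where the density signal stayed
# in range, then predict it where the instrument saturated. Synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
aux = rng.normal(size=(2000, 4))                    # correlated diagnostics
density = 2.0 * aux[:, 0] - aux[:, 1] + 0.05 * rng.normal(size=2000)

in_range = density < np.quantile(density, 0.9)      # top 10% = "saturated"
model = LinearRegression().fit(aux[in_range], density[in_range])

recovered = model.predict(aux[~in_range])
mae = np.abs(recovered - density[~in_range]).mean()
print(f"recovered {(~in_range).sum()} saturated samples, MAE = {mae:.3f}")
```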
152-159
Mathematical model of stable task prioritization with dynamically adjustable criteria weights
Abstract
This paper presents a robust mathematical model for task prioritization under conditions of multicriteria complexity, changing input parameters, and partial data incompleteness, which are common challenges in modern distributed and streaming digital environments. The proposed model automatically calculates criterion weights based on statistical variability (e.g., standard deviation) and dynamically adjusts them using feedback from task execution outcomes. Unlike traditional approaches such as AHP and TOPSIS, which require complete data and manual parameter tuning, the model is resistant to missing values, interpretable, and does not rely on retraining or imputation. A compensation mechanism for incomplete data and adaptation to changing feature structures is incorporated, ensuring consistent performance in fragmented and asynchronous information contexts. Comparative evaluation with machine learning models and heuristic methods shows that the proposed approach achieves high ranking accuracy (via Spearman correlation), stability under up to 50% missing data, and linear scalability as the number of tasks and criteria increases. Experimental results on synthetic and semi-real datasets confirm its practical effectiveness. The model is applicable in a wide range of digital platforms, including decision support systems, DevOps, logistics, monitoring, and incident management, especially where adaptability and transparency are critical under uncertainty and dynamic change.
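A condensed sketch of the weighting scheme as the abstract describes it: variability-based weights plus per-task renormalization over observed criteria when values are missing. Everything beyond those two ideas is an assumption.

```python
# Hedged sketch: weights from criterion variability, compensated for NaNs.
import numpy as np

scores = np.array([[0.9, 0.2, np.nan],   # tasks x criteria, NaN = missing
                   [0.4, 0.8, 0.6],
                   [0.7, np.nan, 0.9]])

std = np.nanstd(scores, axis=0)          # variability per criterion
w = std / std.sum()                      # base weights from variability

mask = ~np.isnan(scores)                 # compensation for missing data:
w_eff = mask * w                         # keep only observed criteria per task
w_eff = w_eff / w_eff.sum(axis=1, keepdims=True)

priority = (np.nan_to_num(scores) * w_eff).sum(axis=1)
print("ranking (best first):", np.argsort(priority)[::-1])
```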
160-169
Assessment of the possibilities of using behavioral biometrics: analysis of computer mouse movements to protect remote administration sessions
Abstract
The purpose of this work is to substantiate the use of mouse dynamics as a method of behavioral biometrics for the continuous authentication of system administrators in remote access conditions. The research focuses on the features of mixed (discrete-continuous) data transmission channels and the specifics of the GUI interfaces used in modern administration scenarios. The paper considers formal models for processing behavioral features, suggests an approach to integrating asynchronous and fragmentary signals, and performs a comparative analysis of biometric methods based on stability criteria, applicability in background modes, and the possibility of integration without additional equipment. Particular attention is paid to architectural requirements for continuous authentication systems (CAS), including assessing model adaptability and resistance to data-flow fragmentation. The analysis confirms that mouse dynamics offers balanced characteristics for passive biometric authentication: it is readily available on software and hardware platforms with a graphical interface, requires no specialized sensors, and provides good identification quality with a low level of interference. It is shown that this type of biometric authentication can be applied effectively over an unstable channel while meeting requirements for synchronization, aggregation, and profile adaptation. The proposed recommendations on CAS architecture are focused on real-world application in IT infrastructure without compromising performance or user experience.
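For concreteness, a small sketch of the kind of features mouse-dynamics methods extract from a raw event stream; the (t, x, y) event format and the chosen statistics are assumptions.

```python
# Illustrative feature extraction from a mouse trajectory: velocity,
# acceleration, and curvature statistics commonly used in mouse dynamics.
import numpy as np

events = np.array([[0.00, 100, 100], [0.02, 104, 101], [0.04, 112, 104],
                   [0.06, 118, 110], [0.08, 121, 119], [0.10, 122, 130]])
t, x, y = events[:, 0], events[:, 1], events[:, 2]

dt = np.diff(t)
vx, vy = np.diff(x) / dt, np.diff(y) / dt
speed = np.hypot(vx, vy)
accel = np.diff(speed) / dt[1:]
angle = np.arctan2(vy, vx)
curvature = np.diff(angle) / (speed[1:] * dt[1:])   # turn rate per unit path

profile = {                    # per-segment statistics form the user profile
    "speed_mean": speed.mean(), "speed_std": speed.std(),
    "accel_mean": accel.mean(), "curv_abs_mean": np.abs(curvature).mean(),
}
print(profile)
```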
170-177
Data processing and annotation in a distributed video stream mining system to detect destructive behavior
Abstract
The article discusses a distributed system for intelligent video stream analysis designed to automatically detect destructive behavior in educational institutions. The relevance of the study stems from the need to improve the efficiency of security systems in organizations where existing solutions show significant limitations in response speed and objectivity of assessment. The purpose of this work is to develop the architecture of a distributed system for intelligent video stream analysis based on a three-level video data processing pipeline for identifying destructive behavior. The main focus is on methods for processing and annotating video data within the three-level pipeline, which comprises object detection (YOLO), behavior classification (CNN), and contextual event analysis. A system built on the proposed architecture and pipeline modules detects destructive behavior effectively, demonstrating the promise of neural network technologies for intelligent security systems in organizational structures. Although the article considers deployment in educational institutions, the proposed solution can be adapted to other organizational structures that require prompt detection of aggression, fights, and other forms of destructive behavior in crowded conditions.
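A structural sketch of the three-level pipeline; the stage implementations are stubs standing in for the detector, classifier, and context rules, not the authors' models.

```python
# Skeleton of the three-level processing pipeline described above.
from dataclasses import dataclass

@dataclass
class Event:
    frame_id: int
    label: str
    confidence: float

def detect_objects(frame):
    # Level 1: object detection (YOLO in the paper); stubbed here.
    return [{"bbox": (0, 0, 64, 128), "cls": "person", "conf": 0.91}]

def classify_behavior(frame, detections):
    # Level 2: CNN behavior classification over detected persons; stubbed.
    return [("fight", 0.87) for d in detections if d["cls"] == "person"]

def contextual_filter(frame_id, behaviors, min_conf=0.8):
    # Level 3: contextual analysis, e.g. suppressing low-confidence events.
    return [Event(frame_id, lbl, c) for lbl, c in behaviors if c >= min_conf]

for frame_id, frame in enumerate([None, None]):   # stand-in video stream
    events = contextual_filter(
        frame_id, classify_behavior(frame, detect_objects(frame)))
    for e in events:
        print(f"frame {e.frame_id}: {e.label} ({e.confidence:.2f})")
```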
178-183
NANOTECHNOLOGY AND NANOMATERIALS
Quantum coherence and supertunneling effect: Wave and particle nature of quantum objects in the Mach–Zehnder interferometer
Abstract
The article examines the wave and corpuscular nature of quantum objects using the example of the Mach–Zehnder interferometer and discusses the possibility of the so-called “supertunnel effect”. It is shown how the behavior of a photon in an interferometer is determined not by switching between a wave and a particle, but by the preservation or loss of coherence of its quantum amplitude. Key mechanisms are analyzed: formation of superposition at the beam splitter, interference from coherent amplitude recombination, decoherence induced by path information leakage, and recovery of interference in quantum-eraser schemes. Analogous phenomena for electrons, neutrons, atoms and large molecules are discussed, with attention to dominant decoherence sources (collisions, thermal radiation, internal degrees of freedom) and the shrinking de Broglie wavelength of massive objects. The influence of mass, momentum and barrier parameters on tunneling probability is treated, and practical strategies to enhance tunneling (barrier engineering, resonant tunneling, collective effects, and reducing effective mass) are outlined. The conclusion is that quantum laws are universal: wave-like and particle-like manifestations depend on the experimental context and coherence preservation rather than an intrinsic conversion of the object. The concept of “supertunneling” is framed as potentially realizable only if decoherence and exponential suppression can be overcome, with suggested routes for experimental pursuit.
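The coherence argument can be made quantitative with the standard textbook treatment of a balanced interferometer (50/50 beam splitters, a factor of $i$ on reflection, a phase $\varphi$ in one arm):

$$
|\psi_{\text{out}}\rangle = \frac{1}{2}\left[(e^{i\varphi}-1)\,|D_1\rangle + i\,(e^{i\varphi}+1)\,|D_2\rangle\right],
\qquad P_{D_1}=\sin^2\frac{\varphi}{2},\quad P_{D_2}=\cos^2\frac{\varphi}{2}.
$$

If which-path information leaks out, the cross terms vanish and both detector probabilities collapse to 1/2, which is precisely the loss of coherence, rather than a wave-to-particle conversion, that the article describes.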
184-190
Study of electrophysical properties of a solar cell with nano-heterojunctions on a non-crystalline silicon substrate
Abstract
The electro-optical properties of the materials making up a solar cell based on non-crystalline technical silicon have been investigated, and it has been determined to what extent they are suitable as effective components of a nano-heterojunction for converting radiation energy into electricity. The main factors that have prevented the active use of technical silicon are identified: the absence of free current carriers, weak electrical conductivity, a high degree of structural disorder, and a rather high concentration of deep LDES (local defect energy states). It is concluded that electrons in these deep states can make an important contribution to the emergence of a nano-scale electric contact field. The particular advantages of non-crystalline silicon with a rich LDES content as an effective solar cell material are also revealed. These qualities of non-crystalline silicon, however, manifest themselves in the nano-sized state only in combination with nano-crystalline lead chalcogenides PbX, where X is sulfur (S), selenium (Se), or tellurium (Te). An important conclusion of the work is that similar positive transformative electrophysical properties are characteristic of many semiconductors in the nano-sized state, provided the energy spectrum of their electrons is similar to that of the nano-sized intrinsic crystalline semiconductor. It is shown that the contact field is formed through the self-organizing growth of “islands” (crystalline nano-inclusions of PbX) at sites where c-PbX naturally encounters a silicon nano-crystallite (c-Si) of virtually identical crystalline structure, a hallmark of self-organizing growth, with the subsequent formation of a ⟨c-Si::c-PbX⟩ nano-heterojunction. The contact field parameters are calculated, the number N of electrons forming the contact field is determined, and a numerical analysis of the electrophysical parameters of the nano-heterojunction is carried out.
191-202
Optimizing tilt angle for enhanced solar panel efficiency: A case study in Parkent, Uzbekistan
Abstract
The efficiency of photovoltaic (PV) systems is significantly influenced by the tilt angle of solar panels, especially in regions with varying solar insolation across seasons. This study investigates the optimal tilt angle for a 10 kW solar-powered system installed in the Parkent district of Uzbekistan, a region characterized by a continental climate and high solar irradiance. Based on empirical formulas, the research identifies 33° as the fixed optimal tilt angle for year-round operation. Seasonal adjustments offer marginal gains, with two- and four-season tilt configurations improving performance by up to 4%. The findings highlight the importance of site-specific tilt optimization in maximizing solar energy harvesting, which is particularly relevant for autonomous renewable energy systems used in hydrogen production.
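For intuition only: a noon-beam geometric sweep under clear-sky assumptions. This toy model lands near the site latitude (about 41° for Parkent), whereas insolation-weighted empirical formulas like those used in the study can shift the fixed optimum lower (the paper reports 33°). Every constant below is an assumption.

```python
# Back-of-envelope sketch (not the paper's empirical formulas): for a
# south-facing panel at solar noon, the beam incidence factor is
# cos(phi - delta - beta); sweeping beta over the year approximates a
# fixed-tilt optimum for this simplified geometry.
import numpy as np

phi = np.radians(41.3)                      # approximate latitude of Parkent
days = np.arange(1, 366)
delta = np.radians(23.45) * np.sin(2 * np.pi * (284 + days) / 365)  # declination

betas = np.radians(np.arange(0, 61))
# mean noon incidence factor over the year for each candidate tilt
gain = np.array([np.cos(phi - delta - b).clip(min=0).mean() for b in betas])
print("best fixed tilt ~", np.degrees(betas[gain.argmax()]).round(), "deg")
```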
203-208
LARGE LANGUAGE MODELS IN LEGAL PRACTICE
Evolution of the capabilities of large language models in the legal field: Meta-analysis of four experimental studies
Abstract
This paper presents a meta-analysis of four experimental studies from the Norm! project aimed at systematically assessing the effectiveness of large language models in the legal field. The study includes a comparative analysis of junior and senior models, optimization of system prompts, and testing of multi-agent architectures on tasks in Russian family and civil law. A key finding is a nonlinear relationship between architectural complexity and the quality of results: the transition from simple to complex systems provides a moderate increase in quality (15-40%) at an exponential increase in resource costs (by a factor of 10-15). The flagship models GPT-4.1 and Gemini 2.5 Pro demonstrate superior quality (9.04 and 8.52 points), but junior LLMs, with efficiency coefficients up to 130.3, remain cost-effective. A universal problem area for all architectures is tasks requiring integrative analysis of multiple legal norms. The results yield scientifically grounded recommendations for various implementation scenarios, from mass consulting services to specialized legal applications, and outline prospects for hybrid architectures in legal practice.
209-220