Computational Nanotechnology
Quarterly peer-reviewed journal.
About
The journal “Computational Nanotechnology” publishes peer-reviewed research on the mathematical modeling of processes involved in creating nanostructured materials and devices. The development of nanoelectronic devices and nanoprocesses calls for quantum computing capable of predicting the structure of matter, and work on nanoprocesses in turn requires quantum computers with a fundamentally new architecture.
The journal publishes peer-reviewed scientific articles on the following scientific specialties:
- Computer science
- Artificial intelligence and machine learning
- Mathematical modeling, numerical methods and complex programs
- Theoretical informatics, cybernetics
- Cybersecurity
- Information technology and telecommunications
- System analysis, management and information processing
- Elements of computing systems
- Automation of manufacturing and technological processes
- Management in organizational systems
- Mathematical support and software for computers, complexes and computer networks
- Information security
- Computer modeling and design automation systems
- Informatics and information processing
- Nanotechnology and nanomaterials
Indexing
- Russian Science Citation Index (RSCI)
- East View Information Services
- Ulrichsweb Global Periodicals Directory
- Google Scholar
- Dimensions
- CrossRef
- MathNet
VAK of Russia
In accordance with the decision of the Presidium of the Higher Attestation Commission of the Ministry of Education and Science of Russia dated 29.05.2017, the journal «Computational Nanotechnology» is included in the List of leading peer-reviewed scientific journals and publications in which the main scientific results of dissertations for the degrees of Candidate of Sciences and Doctor of Sciences should be published.
Subject heading list
- Atomistic Simulations - Algorithms and Methods
- Quantum and Molecular Computing, and Quantum Simulations
- Bioinformatics, nanomedicine, and the creation of new drugs and their targeted delivery to the required regions of neurons
- Development of quantum computer architectures based on new principles and creation of new approaches to quantum programming
- Development of new power units based on renewable energy sources
- Problems of synthesizing nanostructured materials to create new ultra-compact circuits for supercomputers
- Peculiarities of the development of devices based on nanostructured materials
- Development of functional nanomaterials based on nanoparticles and polymer nanostructures
- Multiscale modeling for information control and processing
- Information systems for the development of functional nanomaterials
Current Issue



Vol 12, No 2 (2025)
Artificial intelligence and machine learning
Improving time series forecasting by applying the sliding window approach
Abstract
Our primary research involves forecasting the IT job market, where we study trends, residuals, and seasonalities. In this study, we focus on the impact of the sliding window technique on forecasting models. The sliding window approach aims to enhance the accuracy of forecasting models in the machine learning process: it partitions a continuous time series into subsets of consecutive, overlapping periods, which enables the models to track temporal characteristics effectively. The experiment integrates the sliding window with several forecasting algorithms. The technique gives models the flexibility to adapt to changes in data dynamics, which greatly reduces forecasting errors. The study shows that sliding window methods are useful for building dependable, adaptive forecasting models. LSTM, ARIMA, SARIMA, and Holt’s model were applied to a dataset of 1 048 576 rows of job-related information, and the models were evaluated with the MSE, RMSE, and MAE metrics. LSTM proved the most effective owing to its ability to learn complicated patterns and long-term dependencies, improving by 0.248 in MAE, 2.649 in MSE, and 0.162 in RMSE when the sliding window was applied.
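As a rough illustration of the partitioning step described above, the following sketch builds overlapping windows from a one-dimensional series; the window length, horizon, and toy data are assumptions for illustration, not the authors' setup.

```python
import numpy as np

def sliding_windows(series: np.ndarray, window: int, horizon: int = 1):
    """Split a 1-D series into overlapping (input, target) pairs.

    Each input is `window` consecutive points; the target is the
    value(s) `horizon` steps ahead. Adjacent windows overlap by
    window - 1 points.
    """
    X, y = [], []
    for start in range(len(series) - window - horizon + 1):
        X.append(series[start:start + window])
        y.append(series[start + window:start + window + horizon])
    return np.array(X), np.array(y)

# Illustrative use on a toy series (not the paper's job-market data).
series = np.arange(10, dtype=float)
X, y = sliding_windows(series, window=4)
print(X.shape, y.shape)  # (6, 4) (6, 1)
```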



Modification of the method for modeling the thematic environment of terms using the LDA approach
Abstract
Thematic modeling is an essential tool for analyzing large volumes of textual data, enabling the identification of latent semantic patterns. However, conventional approaches such as Latent Dirichlet Allocation (LDA) encounter difficulties with polysemous terms and unigram tokens, resulting in reduced accuracy and clarity of the outcomes. This study aims to develop a technique for constructing a thematic structure based on a refined LDA that incorporates contextual features, vector representations of words, and external vocabularies. The objective is to address terminological ambiguity and enhance the clarity of thematic groups. The paper employs a mathematical model that integrates probabilistic thematic modeling with vector representations, facilitating the differentiation of word meanings and the establishment of precise connections between them. Using a corpus of Dimensions AI and PubMed publications, the study demonstrates an improved distribution of terms within thematic clusters, relying on frequency analysis and vector similarity as essential components of the method. The results emphasize the effectiveness of an integrated approach to dealing with complex linguistic structures in automated text analysis.
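For orientation, a minimal plain-LDA baseline of the kind the study refines can be sketched with scikit-learn; the toy corpus and parameters are illustrative assumptions, and the paper's contextual features, word vectors, and external vocabularies are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for the Dimensions AI / PubMed abstracts.
docs = [
    "gene expression regulation in tumor cells",
    "deep learning models for gene sequence analysis",
    "clinical trial outcomes for tumor therapy",
]

# Bag-of-words counts, the standard LDA input.
counts = CountVectorizer(stop_words="english")
X = counts.fit_transform(docs)

# Plain LDA baseline; the paper's refinement adds vector
# representations and external vocabularies on top of this.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)

# Top words per topic.
terms = counts.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```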



Cybersecurity
Analysis of the effectiveness and robustness of neural networks with early exits in computer vision tasks
Abstract
Many embedded systems and Internet of Things (IoT) devices use neural network algorithms for various information processing tasks. At the same time, developers face a shortage of computing resources for effective functioning, especially in (pseudo-)real-time tasks. A pressing problem is therefore to balance the quality of the results against computational complexity. One way to increase the computational efficiency of neural networks is to use architectures with early exits (for example, BranchyNet), which allow a decision to be made before passing through all layers of the network, depending on the input data and a specified confidence in the results. The purpose of the study is to analyze the applicability, effectiveness, and robustness of neural networks with early exits (BranchyResNet18) in computer vision tasks. The analysis is based on the GTSRB road sign dataset. The research methodology combines an experimental efficiency analysis, based on counting the floating-point operations (FLOPs) needed to obtain results of a given accuracy, with an experimental robustness analysis based on generating various noise effects and adversarial attacks. As a result, estimates of the efficiency of neural networks with early exits and of their robustness to unintended and intentional disturbances have been obtained.
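A minimal sketch of the early-exit idea follows: a toy two-stage network with a confidence-thresholded side classifier, not the BranchyResNet18 model evaluated in the paper; the architecture and threshold are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEarlyExitNet(nn.Module):
    """Minimal BranchyNet-style model: a side classifier after the
    first stage lets confident inputs skip the rest of the network."""

    def __init__(self, num_classes: int = 43):  # GTSRB has 43 classes
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(8))
        self.exit1 = nn.Linear(16 * 8 * 8, num_classes)   # early exit head
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(4))
        self.exit2 = nn.Linear(32 * 4 * 4, num_classes)   # final exit

    def forward(self, x, threshold: float = 0.9):
        h = self.stage1(x)
        logits1 = self.exit1(h.flatten(1))
        conf = F.softmax(logits1, dim=1).max(dim=1).values
        # If the early head is confident enough, stop here and
        # save the FLOPs of the remaining layers.
        if bool((conf >= threshold).all()):
            return logits1
        return self.exit2(self.stage2(h).flatten(1))

model = TinyEarlyExitNet()
out = model(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 43])
```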



System analysis, information management and processing, statistics
Artificial intelligence methods for short-term planning in petroleum products realization
Abstract
In this article, a critical analytical review of the application of artificial intelligence methods in the field of scheduling theory is presented, exemplified by the constraints of the short-term planning problem in the process of petroleum products realization via road transport. The objective of the research was to systematize and evaluate existing approaches to solving planning tasks while considering specific temporal constraints inherent to the petroleum products realization process. During the study, both exact and approximate methods for solving scheduling theory problems were analyzed, including heuristic algorithms and approaches based on artificial neural networks. It was established that existing methods have significant limitations when addressing semi-online planning tasks. The research findings demonstrate the necessity for developing a new method capable of promptly restructuring schedules in response to unpredictable changes that arise during the petroleum products realization process. The results of the study highlight the promising potential for advancing artificial intelligence methods to address short-term planning challenges.



Methods of computational optimization for automated insulin therapy control
Abstract
Automating the control of insulin-dosing technical systems for patients with type 1 diabetes mellitus is a pressing task of biomedical engineering. Advances in computing make it possible to use complex nonlinear predictive models for calculating optimal control actions. Using such models requires efficient methods for numerically solving stiff systems of nonlinear ordinary differential equations, for parametric identification of mathematical models, and for optimizing control actions. The paper presents a set of studies and numerical experiments aimed at formalizing these computational problems, identifying known methods and algorithms for solving them, and experimentally evaluating the efficiency of the selected methods and algorithms. It is demonstrated that the LSODA algorithm is efficient for numerically solving the model equations, applying the Adams method in nonstiff regions and backward differentiation formulas in stiff regions. For parametric identification, the «basin hopping» global optimization method with a Nelder–Mead local minimizer is proposed. For the multidimensional constrained optimization of control actions, the COBYLA method showed the highest efficiency, finding optimal parameters on consumer-grade computers in acceptable time.
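The numerical building blocks named in this abstract are all available in SciPy, and the sketch below wires them together on a toy two-compartment model; the equations, parameters, and constraint are illustrative assumptions rather than the paper's insulin-glucose model.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import basinhopping, minimize

# Toy two-compartment model standing in for the paper's equations.
def rhs(t, y, k1, k2):
    return [-k1 * y[0], k1 * y[0] - k2 * y[1]]

def simulate(params, t_eval):
    k1, k2 = params
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [1.0, 0.0],
                    method="LSODA",        # switches Adams/BDF automatically
                    t_eval=t_eval, args=(k1, k2))
    return sol.y[1]

# Synthetic "measurements" for the identification step.
t = np.linspace(0.0, 10.0, 50)
observed = simulate((0.8, 0.3), t)

# Parametric identification: basin hopping with a Nelder-Mead local step.
loss = lambda p: float(np.sum((simulate(p, t) - observed) ** 2))
fit = basinhopping(loss, x0=[0.5, 0.5], niter=20,
                   minimizer_kwargs={"method": "Nelder-Mead"})
print("identified parameters:", fit.x)

# Constrained optimization of a control action with COBYLA; the
# inequality constraint keeps the (hypothetical) dose non-negative.
dose_obj = lambda u: (simulate((fit.x[0], fit.x[1] + 0.1 * u[0]), t)[-1] - 0.05) ** 2
res = minimize(dose_obj, x0=[0.1], method="COBYLA",
               constraints=[{"type": "ineq", "fun": lambda u: u[0]}])
print("optimal control parameter:", res.x)
```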



Mathematical support and software for computers, complexes and computer networks
Effective data model selection for infological entities in multimodel database systems
Abstract
The article addresses the problem of selecting effective data models for infological entities in the context of designing multimodel databases. The focus is placed on the need for a systematic approach when modeling heterogeneous entities whose structure and behavior require different forms of representation. The study analyzes the characteristics of three widely used models – relational, graph, and multidimensional – in terms of their applicability to various types of infological entities. Key criteria influencing model selection are described, including data structure, interconnectivity, query patterns, mutability, scalability, and consistency requirements. A decision-making algorithm is proposed, based on analyzing entity characteristics and the system’s non-functional requirements. Particular attention is given to the advantages and challenges of multimodel solutions, as well as principles of coordinating different models within a unified architectural framework. The work aims to provide a methodological foundation for rational model selection and for enhancing the adaptability and sustainability of information systems.
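To make the flavor of such a decision procedure concrete, here is a toy rule-based selector; the entity traits and the mapping to models are illustrative assumptions, not the article's formal criteria or algorithm.

```python
from dataclasses import dataclass

@dataclass
class EntityProfile:
    """Illustrative characteristics of an infological entity
    (field names are assumptions, not the article's criteria)."""
    highly_interconnected: bool    # many-to-many links, graph traversals
    analytical_aggregation: bool   # OLAP-style slicing over dimensions
    strict_consistency: bool       # transactional integrity required

def choose_model(e: EntityProfile) -> str:
    """Toy selection rule mapping entity traits to one of the three
    models discussed in the article."""
    if e.highly_interconnected:
        return "graph"
    if e.analytical_aggregation:
        return "multidimensional"
    # Structured, consistency-critical entities default to relational.
    return "relational"

order = EntityProfile(highly_interconnected=False,
                      analytical_aggregation=False,
                      strict_consistency=True)
print(choose_model(order))  # relational
```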



A high-performance implementation of a stochastic TCP model in C++/AVX for performance analysis of distributed systems
Abstract
The reliability of modern distributed systems directly depends on the stability of network connections; however, traditional monitoring methods are unable to adequately assess the stochastic nature of failures at the level of the TCP transport protocol. This paper proposes an approach based on stochastic differential equations (SDEs) to model packet loss probability as a continuous random process, accounting for mean reversion and random fluctuations. A practical implementation of the model is presented in C++ using AVX-512 vector instructions for the numerical solution of the SDE via the Euler–Maruyama method. Experimental evaluation on an Intel Xeon Silver 4410Y server platform showed that the module reaches 30.1 million estimations per second, nearly nine times faster than reference scalar implementations. The results show that the proposed stochastic approach is computationally efficient and can serve as a foundation for real-time monitoring and adaptive control systems capable of predicting TCP performance.
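For orientation, here is a scalar NumPy sketch of the Euler–Maruyama step for a mean-reverting loss process; the paper's implementation is vectorized C++/AVX-512, and the Ornstein–Uhlenbeck-type form and parameter values below are assumptions.

```python
import numpy as np

def euler_maruyama_ou(p0, theta, mu, sigma, dt, n_steps, rng):
    """Simulate a mean-reverting (Ornstein-Uhlenbeck-type) loss process
        dp = theta * (mu - p) dt + sigma dW
    with the explicit Euler-Maruyama scheme."""
    p = np.empty(n_steps + 1)
    p[0] = p0
    noise = rng.standard_normal(n_steps) * np.sqrt(dt)  # dW increments
    for n in range(n_steps):
        p[n + 1] = p[n] + theta * (mu - p[n]) * dt + sigma * noise[n]
    return np.clip(p, 0.0, 1.0)   # keep a valid probability

rng = np.random.default_rng(42)
path = euler_maruyama_ou(p0=0.02, theta=0.5, mu=0.01, sigma=0.05,
                         dt=0.01, n_steps=1000, rng=rng)
print(path.mean())
```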



Computer modeling and design automation systems
Algorithmic methods of event-based predictive quality control of complex data processing systems: integration of system analysis and computational modeling
Abstract
The purpose of this research is to develop an algorithmic framework for event-forecast quality management in complex data processing systems (CDPS) through the integration of system analysis methods and computational modeling. Contemporary approaches to quality assessment, based on the static metrics defined by GOST R 59797–2021, fail to account for dynamic emergent properties and predictive operational scenarios of CDPS. The study proposes a hybrid model that combines multi-level system analysis with L-stable numerical simulation techniques, enabling formalization of the “event-forecast quality level” as a function of temporal system parameters. The developed algorithmic framework includes a three-tier data aggregation architecture with adaptive weighting coefficients, a dynamic quality management system integrated into the CDPS lifecycle, and a neural network module for preventive optimization based on reinforcement learning. Experimental validation on 15 industrial CDPS demonstrated an improvement in critical event prediction accuracy to 89.7% and a reduction in system response time from 15.3 to 2.7 seconds. Implementation within the control loop of a petroleum refinery reduced energy consumption per operation by 33% and increased service intervals by 27%. The originality of the work lies in the synthesis of relational analysis methods with deep learning architectures and of ISO 25010 quality management principles with the predictive analytics of stiff systems, together with real-time dynamic parameter adaptation using a modified (2,1)-method. Practical significance is confirmed by the integration of the algorithm into the design, testing, and operation phases of CDPS, meeting the requirements of GOST R 59797–2021. The research outcomes are applicable to the development of fault-tolerant control systems for mission-critical infrastructure in energy, telecommunications, and finance. Future work includes adapting the algorithm to quantum computing systems and distributed IoT architectures.
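As a loose illustration of adaptive weighting in tiered aggregation, the sketch below gives more weight to tiers whose recent forecasts were more accurate; the tiers, weighting rule, and numbers are assumptions, not the paper's architecture or coefficients.

```python
import numpy as np

def aggregate_quality(tier_scores, tier_errors):
    """Toy tiered aggregation with adaptive weights: tiers with
    smaller recent forecast errors receive larger weights."""
    inv_err = 1.0 / (np.asarray(tier_errors, dtype=float) + 1e-9)
    weights = inv_err / inv_err.sum()          # normalize to sum to 1
    return float(np.dot(weights, tier_scores)), weights

# Hypothetical tiers: sensor level, subsystem level, system level.
score, w = aggregate_quality(tier_scores=[0.92, 0.85, 0.78],
                             tier_errors=[0.02, 0.05, 0.10])
print(round(score, 3), w.round(3))
```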



Overview of simulation modeling capabilities for optimizing seaport operations in AnyLogic: a development algorithm
Abstract
The relevance of the research is due to the need to increase the efficiency and competitiveness of seaports in the context of constantly growing volumes of cargo transportation, as well as the need to develop effective tools for analyzing and optimizing complex processes occurring in the port infrastructure. This article presents the process of developing a simulation model of a seaport using AnyLogic software. The aim of the work is to create a tool for analyzing and optimizing various aspects of the port's operation, including ship handling, loading and unloading operations, warehouse management and traffic flows. The key elements of the model are described, such as agents (ships, cranes, loaders, transport), the logic of their interaction, and parameters that affect system performance. The possibility of using the developed model for making informed management decisions aimed at increasing efficiency and optimizing the operation of the seaport is shown.



Determination of optimal parameters for efficient terminal operation by means of a simulation model in the AnyLogic environment
Abstract
This article presents a study on determining the optimal parameters of terminal operation based on simulation modeling in the AnyLogic environment. The work is motivated by the need to improve the efficiency of terminals under growing cargo flows and limited resources. The purpose of the study is to analyze a simulation model of the terminal that makes it possible to identify the optimal values of key parameters. To ensure a comprehensive assessment of the prospects for developing the port infrastructure, various scenarios of terminal operation have been developed. Each scenario is analyzed twice, with varying SPM performance, which makes it possible to assess the impact of this parameter on the overall efficiency of the terminal. The article presents the results of computational experiments aimed at determining the influence of various parameters on the key performance indicators of the terminal. The results obtained can be used in the design of new terminals and the modernization of existing ones.



Informatics and information processing
Post-processing of medical image segmentation results
Abstract
In modern medical diagnostics, computer vision and deep learning play an increasingly important role, especially in the analysis of complex 3D medical images. A significant obstacle to adopting modern deep learning algorithms in clinical practice is the artifacts and inaccuracies of the primary classification produced by neural networks. In this paper, we systematize the main post-processing methods used in medical image segmentation tasks and review related work on the topic. The aim of the study is to develop post-processing methods that eliminate segmentation errors associated with spatial incoherence and incorrect classification of voxels in 3D images. We propose a post-processing module for CT image segmentation results that effectively handles intersecting and nested pathologies. Three algorithms have been developed and implemented to eliminate fragments of false positive responses of the neural network. Experimental verification has shown that the proposed algorithms yield unified, coherent pathology regions, which improves segmentation quality and simplifies subsequent analysis. The developed post-processing module can be integrated with nnU-Net, an existing neural network framework for medical image segmentation, which will contribute to improving the quality of diagnostics. The results of the study open up prospects for the further development of post-processing methods in medical imaging and can find wide application in medical decision support systems.
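A generic example of such a post-processing step is removing small disconnected fragments by 3-D connected-component analysis, sketched below; the size threshold and toy volume are assumptions, and the paper's three specific algorithms are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def drop_small_components(mask: np.ndarray, min_voxels: int = 50) -> np.ndarray:
    """Remove small connected 3-D fragments, a common source of
    false-positive responses in voxel-wise segmentation."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)   # keep[0] = background stays False
    keep[1:] = sizes >= min_voxels
    return keep[labels]

# Toy volume: one large blob plus an isolated false-positive voxel.
vol = np.zeros((32, 32, 32), dtype=bool)
vol[10:20, 10:20, 10:20] = True
vol[2, 2, 2] = True
cleaned = drop_small_components(vol)
print(vol.sum(), cleaned.sum())  # 1001 -> 1000
```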



Application of artificial intelligence tools in analyzing the problem of increasing the motivation of age groups of students in the system of additional professional education
Abstract
The article investigates the peculiarities of motivation of students of different age groups in the system of additional professional education (APE), with a focus on identifying individual barriers and needs of each category. The authors note that traditional methods of organizing the educational process often do not take into account the specific characteristics of adolescents, youth, adults and the elderly, which leads to a decrease in motivation and ineffective training. The aim of the article is to demonstrate how the use of modern artificial intelligence (AI) tools, in particular chatbots capable of analyzing tone, detecting the emotional state of the user and providing personalized recommendations, can be an effective means of increasing motivation and engagement of learners of all age groups. The research relies on an integrated methodological approach that includes both quantitative data obtained through questionnaires and qualitative results from interviews and practical test runs of the chatbot. This interdisciplinary approach allows building a correlation between emotional factors, information perception characteristics, and learning outcomes. The results of the study confirm that the implementation of adaptive AI solutions contributes to a more flexible and customized educational environment where emotional support, interactive tasks, and pacing adjustments take into account the unique characteristics of each age group. The authors conclude that further development of such technologies has the potential to significantly transform the system of additional professional education, making it more effective, personalized and open to innovation.



Nanotechnology and nanomaterials
Composite films: results of large-scale tests
Abstract
The article discusses the results of testing composite films in greenhouse farming, a key sector of agriculture. These materials create optimal conditions for plant growth, significantly increasing yield and reducing costs. Composite films stabilize the temperature inside greenhouses, which is important in a variable climate, and reduce water evaporation, conserving resources, especially in regions with water scarcity. A key aspect of the article is the results of large-scale testing in collaboration with the Chinese company Shanghai Daodun Technology Co., Ltd. The partnership aims to optimize the composition of the films to improve their strength, UV resistance, and thermal insulation properties. This contributes to the development of innovative technologies and enhances competitiveness. Additionally, the article examines trends in greenhouse farming that highlight the importance of eco-friendly technologies. Modern complexes employ methods that minimize negative environmental impacts. Composite films reduce greenhouse gas emissions and improve air quality. Forecasts for 2030 indicate that eco-friendly technologies will become the standard in greenhouse production, increasing yield and reducing environmental impact.



Intelligent technical systems in manufacturing and industrial practice
Forecasting silicon ore concentrate yield using machine learning methods
Abstract
The article applies machine learning methods to predict the yield of silicon ore concentrate. Controlling silica content is a pressing problem for the mining industry, since the quality of the final product and its cost depend on it [9; 10]. In the study, data obtained from a flotation plant were preprocessed and used to identify the most dynamically changing factors (flotation indicators). Random forest models and recurrent neural networks (LSTM) were trained with different sets of input features. The quality of the models was assessed using the mean squared error (MSE), mean absolute error (MAE), and coefficient of determination (R²) metrics. The experiments showed that instantaneous flotation indicators have a lesser effect on forecast quality, whereas variables taken with different lags lead to an increase in accuracy. The results of the study can be used at enterprises processing silicon ore for fuller automation and optimization of flotation control processes.
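A minimal sketch of the lagged-feature setup with a random forest follows; the column names, lags, and synthetic data are illustrative assumptions, not the flotation dataset or the authors' models.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Toy flotation log; column names are illustrative, not the dataset's.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "air_flow": rng.normal(300, 10, 500),
    "pulp_level": rng.normal(1.5, 0.1, 500),
})
# Synthetic target that depends on lagged process variables.
df["silica"] = (0.01 * df["air_flow"].shift(3)
                + 2.0 * df["pulp_level"].shift(1)
                + rng.normal(0, 0.05, 500))

# Lagged copies of the process variables, echoing the finding that
# variables taken with different lags improve forecast accuracy.
for col in ("air_flow", "pulp_level"):
    for lag in (1, 2, 3):
        df[f"{col}_lag{lag}"] = df[col].shift(lag)
df = df.dropna()

X = df.drop(columns="silica")
y = df["silica"]
split = int(len(df) * 0.8)   # chronological split, no shuffling
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X.iloc[:split], y.iloc[:split])
pred = model.predict(X.iloc[split:])
print("MAE:", mean_absolute_error(y.iloc[split:], pred))
```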


