Analysis of software code preprocessing methods to improve the effectiveness of using large language models in vulnerability detection tasks


Abstract

As software systems grow in scale and complexity, the need for intelligent vulnerability detection methods increases. One such approach uses large language models trained on source code, which can analyze and classify vulnerable code fragments at early stages of development. The effectiveness of these models depends on how the code is represented and how the input data is prepared: preprocessing methods can significantly affect model accuracy and robustness. The purpose of the study is to analyze the impact of various code preprocessing methods on the accuracy and robustness of large language models (CodeBERT, GraphCodeBERT, UniXcoder) in vulnerability detection tasks. The analysis is conducted on source code changes extracted from commits associated with vulnerabilities documented in the CVE database. The research methodology is an experimental evaluation of the effectiveness and robustness of CodeBERT, GraphCodeBERT, and UniXcoder in the vulnerability classification task, with model performance measured by the Accuracy and F1-score metrics. Research results: estimates of the effectiveness of different code preprocessing methods when large language models are applied to vulnerability classification tasks.
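
As a rough illustration of the kind of pipeline the abstract describes, the sketch below (Python) loads a CodeBERT-style encoder as a sequence classifier and scores its predictions with Accuracy and macro F1. This is not the authors' code: the checkpoint name (microsoft/codebert-base), the number of CWE classes, the predict_cwe helper, and the sample data are illustrative assumptions; a real evaluation would use a classification head fine-tuned on code changes from CVE-fix commits.

# Minimal sketch (not the authors' code): a CodeBERT-style classifier for CWE
# categories, scored with Accuracy and macro F1. Checkpoint name, class count,
# and sample data are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import accuracy_score, f1_score

MODEL_NAME = "microsoft/codebert-base"  # GraphCodeBERT / UniXcoder checkpoints could be swapped in
NUM_CWE_CLASSES = 10                    # hypothetical number of CWE categories

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_CWE_CLASSES)
model.eval()

def predict_cwe(code_snippets):
    # Tokenize code fragments and return the most likely CWE class index for each.
    batch = tokenizer(code_snippets, truncation=True, max_length=512,
                      padding=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return logits.argmax(dim=-1).tolist()

# Hypothetical evaluation sample: code fragments from vulnerability-fix commits and their CWE labels.
snippets = ["def run(cmd):\n    os.system(cmd)",
            "query = \"SELECT * FROM users WHERE id=\" + user_id"]
labels = [3, 7]

preds = predict_cwe(snippets)
print("Accuracy:", accuracy_score(labels, preds))
print("Macro F1:", f1_score(labels, preds, average="macro"))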

Full text

Access is closed

About the authors

Valery Charugin

MIREA – Russian Technological University

Corresponding author
Email: charugin_v@mirea.ru
ORCID ID: 0009-0003-4950-7726
SPIN code: 4080-4997

Lecturer, Department of Computer and Information Security, Institute of Artificial Intelligence

Russia, Moscow

Valentin Charugin

MIREA – Russian Technological University

Email: charugin@mirea.ru
ORCID ID: 0009-0001-1450-0714
SPIN code: 7264-9403

Lecturer, Department of Computer and Information Security, Institute of Artificial Intelligence

Russia, Moscow

Alexey Stavtsev

MIREA – Russian Technological University

Email: stavcev@mirea.ru
SPIN code: 4948-2180

Cand. Sci. (Phys.-Math.), Associate Professor, Department of Computer and Information Security, Institute of Artificial Intelligence

Russia, Moscow

Alexander Chesalin

MIREA – Russian Technological University

Email: chesalin_an@mail.ru
ORCID ID: 0000-0002-1154-6151
SPIN code: 4334-5520

Cand. Sci. (Eng.), Associate Professor, Head of the Department of Computer and Information Security, Institute of Artificial Intelligence

Russia, Moscow

Supplementary files

1. JATS XML
2. Fig. 1. Histogram of the distribution of current vulnerability categories for the Python language (204 KB)
3. Fig. 2. Data preprocessing and analysis scheme for CWE classification (298 KB)
4. Fig. 3. Accuracy chart of methods for the UniXcoder model (411 KB)
5. Fig. 4. Accuracy diagram of method combinations for the UniXcoder model (553 KB)

Copyright © Yur-VAK, 2025

Link to the license description: https://www.urvak.ru/contacts/