Post-processing of medical image segmentation results
- Authors: Ermolenko S.V. (1), Kashirina I.L. (1, 2), Starichkova Y.V. (2)
- Affiliations:
- 1. Voronezh State University
- 2. MIREA – Russian Technological University
- Issue: Vol 12, No 2 (2025)
- Pages: 109-118
- Section: Informatics and information processing
- URL: https://journals.eco-vector.com/2313-223X/article/view/678143
- DOI: https://doi.org/10.33693/2313-223X-2025-12-2-109-118
- EDN: https://elibrary.ru/QYVOSN
- ID: 678143
Abstract
In modern medical diagnostics, computer vision and deep learning play an increasingly important role, especially in the analysis of complex 3D medical images. A significant obstacle to introducing modern deep learning algorithms into clinical practice is the artifacts and inaccuracies of the primary classification produced by neural networks. In this paper, we systematize the main post-processing methods used in medical image segmentation tasks and review related work on this topic. The aim of the study is to develop post-processing methods that eliminate segmentation errors associated with spatial incoherence and incorrect classification of 3D image voxels. We propose a post-processing module for CT image segmentation results that effectively handles intersecting and nested pathologies. Three algorithms have been developed and implemented to eliminate fragments of false-positive neural network responses. Experimental verification has shown that the proposed algorithms successfully produce unified, spatially coherent pathology regions, which improves segmentation quality and simplifies subsequent analysis. The developed post-processing module can be integrated with the existing nnU-Net framework for medical image segmentation, contributing to improved diagnostic quality. The results of the study open up prospects for further development of post-processing methods in medical imaging and can find wide application in medical decision support systems.
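The abstract describes connectivity-based cleanup of false-positive fragments built on the SciPy stack cited below; the paper's three algorithms are not reproduced here. As a minimal illustrative sketch, the snippet below shows one common form of such post-processing: removing small disconnected components from a binary 3D mask with scipy.ndimage. The function name remove_small_components and the min_voxels threshold are assumptions for illustration, not the authors' implementation.

    import numpy as np
    from scipy import ndimage  # SciPy (Virtanen et al., 2020) provides 3D connected-component labeling

    def remove_small_components(mask: np.ndarray, min_voxels: int = 100) -> np.ndarray:
        """Suppress spatially incoherent false-positive fragments in a binary 3D mask.

        Hypothetical sketch: keeps only connected components containing at least
        `min_voxels` voxels; the threshold is an assumed parameter, not taken
        from the paper.
        """
        structure = np.ones((3, 3, 3), dtype=bool)        # 26-connected 3D neighborhood
        labeled, num = ndimage.label(mask, structure=structure)
        if num == 0:
            return mask                                   # nothing segmented, nothing to clean
        sizes = ndimage.sum_labels(mask, labeled, index=np.arange(1, num + 1))
        keep = np.flatnonzero(sizes >= min_voxels) + 1    # labels of components to retain
        return np.isin(labeled, keep).astype(mask.dtype)

Applied per pathology channel of a predicted mask, a size filter like this yields the kind of unified, coherent regions the abstract reports; the paper's actual algorithms additionally resolve intersecting and nested pathologies, which a plain size filter does not.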
About the authors
Sergey V. Ermolenko
Voronezh State University
Author for correspondence.
Email: ermolenko@math.vsu.ru
ORCID iD: 0009-0008-5159-0123
SPIN-code: 5931-1617
Postgraduate student
Russian Federation, Voronezh

Irina L. Kashirina
Voronezh State University; MIREA – Russian Technological University
Email: kash.irina@mail.ru
ORCID iD: 0000-0002-8664-9817
SPIN-code: 1299-4820
Professor, Department of Mathematical Methods of Operations Research; Dr. Sci. (Eng.); Professor, Department of Artificial Intelligence Technologies
Russian Federation, Voronezh; Moscow

Yulia V. Starichkova
MIREA – Russian Technological University
Email: starichkova@mirea.ru
SPIN-code: 3001-6791
Cand. Sci. (Eng.); Head, Department of Artificial Intelligence Technologies
Russian Federation, Moscow

References
- Salvi M., Acharya U.R., Molinari F., Meiburger K.M. The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis. Computers in Biology and Medicine. 2021. Vol. 128. P. 104129.
- Câmara G., Assunção R., Carvalho A.X. et al. Bayesian inference for post-processing of remote-sensing image classification. Remote Sensing. 2024. Vol. 16. P. 4572.
- Linkon A.H.M., Labib M.M., Hasan T. et al. Deep learning in prostate cancer diagnosis and Gleason grading in histopathology images: An extensive study. Informatics in Medicine Unlocked. 2021. Vol. 24. P. 100582.
- Karimi D., Nir G., Fazli L. et al. Deep learning-based Gleason grading of prostate cancer from histopathology images – role of multiscale decision aggregation and data augmentation. IEEE J. Biomed. Health Inform. 2019. Vol. 24. Pp. 1413–1426.
- Cipollari S., Guarrasi V., Pecoraro M. et al. Convolutional neural networks for automated classification of prostate multiparametric magnetic resonance imaging based on image quality. J. Magn. Reson. Imaging. 2022. Vol. 55. No. 2. Pp. 480–490.
- Le H., Gupta R., Hou L. et al. Utilizing automated breast cancer detection to identify spatial distributions of tumor infiltrating lymphocytes in invasive breast cancer. Am. J. Pathol. 2020. Vol. 190. No. 7.
- Yan R., Ren F., Wang Z. et al. Breast cancer histopathological image classification using a hybrid deep neural network. Methods. 2020. Vol. 173. Pp. 52–60.
- Li Y., Huang X., Wang Y. et al. U-net ensemble model for segmentation in histopathology images. In: MICCAI 2019 Workshop COMPAY. 2019.
- Wei J.W., Tafe L.J., Linnik Y.A. et al. Pathologist-level classification of histologic patterns on resected lung adenocarcinoma slides with deep neural networks. Sci. Rep. 2019. Vol. 9. Pp. 1–8.
- Leo M., Carcagnì P., Signore L. et al. Convolutional neural networks in the diagnosis of colon adenocarcinoma. AI. 2024. Vol. 5. No. 1. Pp. 324–341.
- Saeedi S., Rezayi S., Keshavarz H. et al. MRI-based brain tumor detection using convolutional deep learning methods and chosen machine learning techniques. BMC Med. Inform. Decis. Mak. 2023. Vol. 23. P. 16.
- Zeng Z., Xie W., Zhang Y., Lu Y. RIC-UNet: An improved neural network based on UNet for nuclei segmentation in histology images. IEEE Access. 2019. Vol. 7. Pp. 21420–21428.
- Survarachakan S., Prasad P.J.R., Naseem R. et al. Deep learning for image-based liver analysis – A comprehensive review focusing on malignant lesions. Artificial Intelligence in Medicine. 2022. Vol. 130. P. 102331.
- Bilic P., Christ P. et al. The Liver Tumor Segmentation Benchmark (LiTS). Medical Image Analysis. 2023. Vol. 84. P. 102680.
- Virtanen P., Gommers R., Oliphant T.E. et al. SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nature Methods. 2020. Vol. 17. No. 3. Pp. 261–272.
- Kulikov A.A., Kashirina I.L., Savkina E.F. Segmentation of liver volumetric lesions in multiphase CT images using the nnU-Net framework. Modeling, Optimization and Information Technology. 2025. Vol. 13. No. 1 (48). (In Rus.)
- Isensee F., Jaeger P.F., Kohl S.A. et al. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nature Methods. 2021. Vol. 18. No. 2. Pp. 203–211.