Analysis of the effectiveness and robustness of neural networks with early exits in computer vision tasks

Abstract

Many embedded systems and Internet of Things (IoT) devices rely on neural network algorithms for information processing. At the same time, developers face a shortage of computing resources, especially in (pseudo-)real-time tasks, and must therefore balance result quality against computational complexity. One way to improve the computational efficiency of neural networks is to use architectures with early exits (for example, BranchyNet), which can produce a decision before the input has passed through all layers of the network, depending on the input data and a required confidence level. The purpose of the study is to analyze the applicability, effectiveness, and robustness of neural networks with early exits (BranchyResNet18) in computer vision tasks. The analysis is based on the GTSRB road sign dataset. The research methodology combines an experimental efficiency analysis, based on counting the floating-point operations (FLOPs) needed to reach a given accuracy, with an experimental robustness analysis based on injecting various noise effects and adversarial attacks. Research results: estimates of the effectiveness of early-exit neural networks and of their robustness to unintentional and intentional disturbances are obtained.
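The early-exit mechanism summarized above can be illustrated with a short sketch. The PyTorch code below is a minimal example, assuming a ResNet18 backbone split after its second residual stage, a single auxiliary branch, and a softmax-entropy threshold of 0.5; the actual BranchyResNet18 configuration evaluated in the article (number of branches, split points, and confidence criterion) may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class BranchyResNet18Sketch(nn.Module):
    """Minimal BranchyNet-style early exit on a ResNet18 backbone (illustrative)."""

    def __init__(self, num_classes: int = 43, entropy_threshold: float = 0.5):
        super().__init__()
        backbone = resnet18(weights=None)
        # Stem plus the first two residual stages feed the early-exit branch.
        self.stage1 = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
            backbone.layer1, backbone.layer2,
        )
        # The remaining stages feed the final (main) exit.
        self.stage2 = nn.Sequential(backbone.layer3, backbone.layer4)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.early_head = nn.Linear(128, num_classes)  # layer2 outputs 128 channels
        self.final_head = nn.Linear(512, num_classes)  # layer4 outputs 512 channels
        self.entropy_threshold = entropy_threshold

    @staticmethod
    def _entropy(logits: torch.Tensor) -> torch.Tensor:
        # Shannon entropy of the softmax distribution; low entropy = confident.
        p = F.softmax(logits, dim=1)
        return -(p * p.clamp_min(1e-12).log()).sum(dim=1)

    def forward(self, x: torch.Tensor):
        feat = self.stage1(x)
        early_logits = self.early_head(self.pool(feat).flatten(1))
        # Stop at the early branch when it is confident enough, skipping the
        # remaining, more expensive layers (and thus most of the FLOPs).
        if self._entropy(early_logits).max() < self.entropy_threshold:
            return early_logits, "early_exit"
        deep_logits = self.final_head(self.pool(self.stage2(feat)).flatten(1))
        return deep_logits, "final_exit"


# Example: single-image inference (the 224x224 input size is an assumption
# made so that the standard ResNet18 stem can be reused unchanged).
model = BranchyResNet18Sketch().eval()
with torch.no_grad():
    logits, exit_used = model(torch.randn(1, 3, 224, 224))
```

In such a per-sample scheme, "easy" inputs terminate at the early branch and consume far fewer FLOPs (measurable, for example, with the ptflops tool cited in the references), while harder or noisier inputs fall through to the final exit.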

About the authors

Alexander N. Chesalin

MIREA – Russian Technological University

Author for correspondence.
Email: chesalin_an@mail.ru
ORCID iD: 0000-0002-1154-6151
SPIN-code: 4334-5520

Cand. Sci. (Eng.), Associate Professor; Head, Department of Computer and Information Security, Institute of Artificial Intelligence

Russian Federation, Moscow

Alexey V. Stavtsev

MIREA – Russian Technological University

Email: stavcev@mirea.ru
SPIN-code: 4948-2180

Cand. Sci. (Phys.-Math.); Associate Professor, Department of Computer and Information Security, Institute of Artificial Intelligence

Russian Federation, Moscow

Nadezhda N. Ushkova

MIREA – Russian Technological University

Email: ushkova@mirea.ru
SPIN-code: 1935-5513

Senior Lecturer, Department of Computer and Information Security, Institute of Artificial Intelligence

Russian Federation, Moscow

Valentin V. Charugin

MIREA – Russian Technological University

Email: charugin_v@mirea.ru
ORCID iD: 0009-0001-1450-0714
SPIN-code: 7264-9403

Lecturer, Department of Computer and Information Security, Institute of Artificial Intelligence

Russian Federation, Moscow

Valery V. Charugin

MIREA – Russian Technological University

Email: charugin@mirea.ru
ORCID iD: 0009-0003-4950-7726
SPIN-code: 4080-4997

Lecturer, Department of Computer and Information Security, Institute of Artificial Intelligence

Russian Federation, Moscow

References

  1. Jacob B., Kligys S., Chen B. et al. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2018. Pp. 2704–2713. URL: https://arxiv.org/abs/1712.05877
  2. Torkunova Yu.V., Milovanov D.V. Optimization of neural networks: Methods and their comparison using the example of text mining. International Journal of Advanced Studies. 2023. Vol. 13. No. 4. Pp. 142–158. (In Rus.). doi: 10.12731/2227-930X-2023-13-4-142-158.
  3. Han S., Pool J., Tran J., Dally W.J. Learning both weights and connections for efficient neural networks. In: Advances in Neural Information Processing Systems (NeurIPS). 2015. URL: https://arxiv.org/abs/1506.02626
  4. Denton E.L., Zaremba W., Bruna J. et al. Exploiting linear structure within convolutional networks for efficient evaluation. In: Advances in Neural Information Processing Systems (NeurIPS). 2014. URL: https://arxiv.org/abs/1404.0736
  5. Hinton G., Vinyals O., Dean J. Distilling the knowledge in a neural network. arXiv Preprint arXiv:1503.02531. 2015. URL: https://arxiv.org/abs/1503.02531
  6. Ivanov E.A., Mamonova T.E. Comparison of neural network model compression methods when used on a microcontroller. In: Youth and modern information technologies. Proceedings of the XXI International Scientific and Practical Conference of Students, Postgraduates and Young Scientists (Tomsk, April 15–18, 2024). Tomsk, 2024. Pp. 176–180.
  7. Ullrich K., Meeds E., Welling M. Soft weight-sharing for neural network compression. In: International Conference on Learning Representations (ICLR). 2017. URL: https://arxiv.org/abs/1702.04008
  8. Scardapane S., Scarpiniti M., Baccarelli E., Uncini A. Why should we add early exits to neural networks? arXiv Preprint arXiv:2004.12814. 2020. URL: https://arxiv.org/abs/2004.12814
  9. Cheng Y., Wang D., Zhou P., Zhang T. A Survey of model compression and acceleration for deep neural networks. arXiv Preprint arXiv:1710.09282. 2017. URL: https://arxiv.org/abs/1710.09282
  10. Bajpai D.J., Hanawal M.K. A survey of early exit deep neural networks in NLP. arXiv Preprint arXiv:2501.07670. 2025. URL: https://arxiv.org/abs/2501.07670
  11. Panda P., Sengupta A., Roy K. Conditional deep learning for energy-efficient and enhanced pattern recognition. In: Design, Automation & Test in Europe Conference (DATE). 2016. Pp. 475–480.
  12. Teerapittayanon S., McDanel B., Kung H.T. BranchyNet: Fast inference via early exiting from deep neural networks. arXiv Preprint arXiv:1709.01686. 2017. URL: https://arxiv.org/abs/1709.01686
  13. Kaya Y., Hong S., Dumitras T. Shallow-deep networks: Understanding and mitigating network overthinking. In: Proceedings of the 36th International Conference on Machine Learning (ICML). 2019. Pp. 3301–3310.
  14. Laskaridis S., Venieris S.I., Almeida M. et al. SPINN: Synergistic progressive inference of neural networks over device and cloud. In: Proceedings of the 26th Annual International Conference on Mobile Computing and Networking (MobiCom’20). 2020. Pp. 1–15.
  15. Huang G., Chen D., Li T. et al. Multi-scale dense networks for resource efficient image classification. In: International Conference on Learning Representations (ICLR). 2018. 14 p.
  16. Kaya Y., Dumitras T. When does dynamic computation help early-exiting neural networks? In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 2021. Pp. 2709–2718.
  17. Venieris S.I., Laskaridis S., Lane N.D. Dynamic neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 2023. Vol. 45. No. 2. Pp. 2076–2098.
  18. Viola P., Jones M. Rapid object detection using a boosted cascade of simple features. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2001. Vol. 1.
  19. Chesalin A.N. Application of cascade classification algorithms for improving intrusion detection systems. Nonlinear World. 2022. Vol. 20. No. 1. Pp. 24–41. (In Rus.)
  20. He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition. arXiv Preprint arXiv:1512.03385. 2015. URL: https://arxiv.org/abs/1512.03385
  21. Sai Abhishek, Allena Venkata. Resnet18 model with sequential layer for computing accuracy on image classification dataset. International Journal of Scientific Research in Computer Science, Engineering and Information Technology. 2022. Vol. 10. Pp. 2320–2882.
  22. Stallkamp J., Schlipsing M., Salmen J., Igel C. The German traffic sign recognition benchmark: A multi-class classification competition. In: IEEE International Joint Conference on Neural Networks (IJCNN). 2011.
  23. Sovrasov V. Ptflops: A flops counting tool for neural networks in Pytorch framework. 2024. URL: https://github.com/sovrasov/flops-counter.pytorch
  24. Gonzalez R., Woods R. Digital image processing. Moscow: Technosphere, 2005. 1072 p.
  25. Kim H. Torchattacks: A PyTorch repository for adversarial attacks. arXiv Preprint arXiv:2010.01950. 2020. URL: https://arxiv.org/abs/2010.01950
  26. Moosavi-Dezfooli S.-M., Fawzi A., Frossard P. DeepFool: A simple and accurate method to fool deep neural networks. arXiv Preprint arXiv:1511.04599. 2015. URL: https://arxiv.org/abs/1511.04599
  27. Rauber J., Brendel W., Bethge M. Foolbox: A Python toolbox to benchmark the robustness of machine learning models. arXiv Preprint arXiv:1707.04131. 2017. URL: https://arxiv.org/abs/1707.04131
  28. Goodfellow I., Shlens J., Szegedy C. Explaining and harnessing adversarial examples. arXiv Preprint arXiv:1412.6572. 2015. URL: https://arxiv.org/abs/1412.6572

Supplementary files

Fig. 1. Architecture of ResNet18
Fig. 2. Architecture of BranchyResNet18
Fig. 3. Number of operations (FLOPs) as a function of the neural network exit used
Fig. 4. Classification accuracy as a function of the number of FLOPs used
Fig. 5. BranchyResNet18 accuracy and efficiency versus noise level: a – Gaussian noise; b – uniform noise; c – impulse noise; d – blur; e – number of operations (FLOPs) versus noise level
Fig. 6. BranchyResNet18 robustness and efficiency versus the level of intentional interference: a – FGSM; b – PGD; c – DeepFool; d – BIM; e – number of operations (FLOPs) versus attack intensity

Copyright (c) 2025 Yur-VAK

License URL: https://www.urvak.ru/contacts/