Methods of removing unwanted objects from aerial photography images using an iterative approach

Cover

How to cite

Abstract

Removing objects from images serves both image quality improvement, for example the restoration of damaged photographs, and safety purposes, such as removing people or cars from aerial remote sensing images of the Earth. Methods for removing unwanted objects usually include two stages: selecting the objects to be removed and restoring the texture in the affected areas of the image. The first stage can be performed manually by users when specific objects need to be selected, or automatically by training a model on different classes of objects. The image restoration problem has been addressed by a variety of methods, the main one of which uses the values of neighboring pixels to paint more distant areas. In recent years, deep learning methods based on convolutional and generative neural networks have shown good results. The aim of this work is to develop a method for removing objects from aerial photography images with manual object selection and texture synthesis in the processed area. The paper reviews modern image restoration methods, among which the most promising are deep learning networks and texture analysis in the restored area. The proposed algorithm is based on an iterative approach: neighboring areas are analyzed and the restored region is gradually painted with texture taken from neighboring pixels, taking into account the weights and the contours of the boundaries.
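
A minimal Python/NumPy sketch of this iterative idea is given below. It is an illustrative reconstruction under stated assumptions (the function iterative_inpaint and its parameters are hypothetical), not the authors' exact algorithm, which additionally takes boundary contours into account: on each pass, pixels on the boundary of the masked region receive a distance-weighted average of their already-known neighbors, so the hole is painted inward layer by layer. Classical methods of this family, the fast marching and Navier-Stokes inpainting listed in references 13 and 14 below, are available in OpenCV as cv2.INPAINT_TELEA and cv2.INPAINT_NS.

import numpy as np

def iterative_inpaint(image, mask, max_iters=500):
    """Illustrative iterative fill.
    image: HxWx3 float array; mask: boolean HxW array, True where pixels must be restored."""
    result = image.astype(np.float64).copy()
    unknown = mask.copy()
    # 8-connected neighborhood; diagonal neighbors get a smaller weight (1/sqrt(2)).
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1),
               (-1, -1), (-1, 1), (1, -1), (1, 1)]
    for _ in range(max_iters):
        if not unknown.any():
            break
        acc = np.zeros_like(result)
        weight = np.zeros(unknown.shape, dtype=np.float64)
        for dy, dx in offsets:
            # np.roll wraps at the image border; acceptable in this sketch for
            # holes that do not touch the edge of the frame.
            shifted = np.roll(np.roll(result, dy, axis=0), dx, axis=1)
            known = ~np.roll(np.roll(unknown, dy, axis=0), dx, axis=1)
            w = known / np.hypot(dy, dx)
            acc += shifted * w[..., None]
            weight += w
        # Boundary of the hole: unknown pixels with at least one known neighbor.
        boundary = unknown & (weight > 0)
        acc[boundary] /= weight[boundary][..., None]
        result[boundary] = acc[boundary]
        unknown[boundary] = False
    return result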
The effectiveness of the proposed method is evaluated on a set of video sequences captured by quadcopters and containing people and natural objects. Both an expert assessment, which showed good visual results, and a comparison of the algorithm with known approaches using the PSNR metric were carried out; the proposed method showed the best results for scenes with complex texture.
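
For reference, the PSNR metric used in the comparison is standard; a minimal sketch is given below. The abstract does not specify whether PSNR is computed over the whole frame or only over the restored region, so that choice is left to the caller.

import numpy as np

def psnr(reference, restored, max_value=255.0):
    """Peak signal-to-noise ratio in dB; higher values mean the restored image
    is closer to the reference."""
    diff = reference.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)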

About the authors

Olga Stroy

Reshetnev Siberian State University of Science and Technology

Author for correspondence.
Email: story_oa@sibsau.ru

student of group MPI20-01

31, Krasnoyarskii rabochii prospekt, Krasnoyarsk, 660037, Russian Federation

Vladimir Buryachenko

Reshetnev Siberian State University of Science and Technology

Email: buryachenko@sibsau.ru

Cand. Sc., Associate Professor

31, Krasnoyarskii rabochii prospekt, Krasnoyarsk, 660037, Russian Federation

References

  1. Remote sensing of the Earth – Roscosmos State Corporation. Available at: https://www.roscosmos.ru/24707/ (accessed: 10.09.2020).
  2. Ibadov R. R., Fedosov V. P., Voronin V. V. et al. [Investigation of a method for synthesizing textures of images of the earth's surface based on a neural network]. Izvestiya YUFU. Tekhnicheskie nauki. 2019, No. 5 (207) (In Russ.). Available at: https://cyberleninka.ru/article/n/issledovanie-metoda-sinteza-tekstur-izobrazheniy-poverhnosti-zemli-na-osnove-neyronnoy-seti (accessed: 11.09.2020).
  3. He K., Zhang X., Ren S., Sun J. Deep Residual Learning for Image Recognition. Available at: https://arxiv.org/abs/1512.03385 (accessed: 16.09.2020).
  4. Arhitektury neirosetei [Neural network architectures]. NIX company blog, Habr (In Russ.). Available at: https://habr.com/ru/company/nix/blog/430524 (accessed: 12.09.2020).
  5. Meraner A., Ebel P., Zhu X. X. et al. Cloud removal in Sentinel-2 imagery using a deep residual neural network and SAR-optical data fusion. ISPRS Journal of Photogrammetry and Remote Sensing. 2020, Vol. 166, P. 333–346.
  6. Girshick R. et al. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. 2014 IEEE Conference on Computer Vision and Pattern Recognition. 2014, P. 580–587.
  7. Uijlings J. R. R. et al. Selective Search for Object Recognition. International Journal of Computer Vision. 2013, Vol. 104, No. 2, P. 154–171.
  8. Getreuer P. Linear methods for image interpolation. Image Processing On Line. 2011, Vol. 1, P. 238–259.
  9. Burlin C., Le Calonnec Y., Duperier L. Deep Image Inpainting. Available at: http://cs231n.stanford.edu/reports/2017/pdfs/328.pdf (accessed: 05.03.2021).
  10. Yeh R., Chen C., Lim T. Y., Hasegawa-Johnson M., Do M. N. Semantic image inpainting with perceptual and contextual losses. arXiv preprint arXiv:1607.07539. 2016.
  11. Liu G., Reda F. A., Shih K. J., Wang T.-C., Tao A., Catanzaro B. Image Inpainting for Irregular Holes Using Partial Convolutions. arXiv preprint arXiv:1804.07723. 2018.
  12. Jiang Y., Xu J., Yang B., Zhu J. Image Inpainting Based on Generative Adversarial Networks. IEEE Access. 2020, Vol. 8, P. 22884–22892.
  13. Telea A. An Image Inpainting Technique Based on the Fast Marching Method. Journal of Graphics Tools. 2004, Vol. 9, P. 23–34.
  14. Bertalmio M., Bertozzi A., Sapiro G. Navier-Stokes, fluid dynamics, and image and video inpainting. Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2001, Vol. 1, P. 355–362.
  15. Drone Videos DJI Mavic Pro Footage in Switzerland. Available at: https://www.kaggle.com/kmader/drone-videos (accessed: 05.05.2021).
  16. Almansa A. Echantillonnage, interpolation et detection: applications en imagerie satellitaire. Doctoral dissertation, Ecole normale superieure de Cachan, 2002.
  17. Bertalmio M. Processing of flat and non-flat image information on arbitrary manifolds using partial differential equations. PhD Thesis, 2001.


Copyright © Stroy O.A., Buryachenko V.V., 2021

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.