Trusted artificial intelligence

Abstract

In this paper we discuss the problem of creating trusted artificial intelligence (AI) technologies. Modern AI is based on machine learning and neural networks and is therefore vulnerable to biases and errors. Efforts are being made to establish standards for the development of trusted AI technologies, but they have not yet succeeded. Trust in AI technologies can be achieved only on the basis of an appropriate scientific and technological foundation, together with corresponding tools and techniques for countering attacks. We present the results of the ISP RAS Trusted AI Research Center and propose a work model that can ensure technological independence and long-term sustainable development in this area.

Full Text

Restricted Access

About the authors

A. I. Avetisyan

Ivannikov Institute for System Programming of the Russian Academy of Sciences

Author for correspondence.
Email: arut@ispras.ru

Academician of the Russian Academy of Sciences

Russian Federation, Moscow

References

  1. Volume of data/information created, captured, copied, and consumed worldwide from 2010 to 2020, with forecasts from 2021 to 2025. https://www.statista.com/statistics/871513/worldwide-data-created/
  2. GOST R 56939-2016 “Information protection. Secure software development. General requirements”. https://docs.cntd.ru/document/1200135525 (In Russ.)
  3. Regulation (EU) 2019/881 of the European Parliament and of the Council on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act). https://eur-lex.europa.eu/eli/reg/2019/881/oj
  4. Act of the Presidium of the Russian Academy of Sciences “On measures to develop system programming as a key direction for countering cyberthreats”. https://www.ras.ru/presidium/documents/directions.aspx?ID=1f5522e9-ff25-4af0-a49a-6a5675525597 (In Russ.)
  5. Order of the Ministry of Science and Higher Education of Russia no. 118 dated February 24, 2021. https://vak.minobrnauki.gov.ru/uploader/loader?type=1&name=91506173002&f=7892 (In Russ.)
  6. Zhang Y., Nauman U. Deep Learning Trends Driven by Temes: A Philosophical Perspective // IEEE Access. January 2020. V. 8. P. 196587−196599. http://dx.doi.org/10.1109/ACCESS.2020.3032143
  7. Kolmogorov A.N. On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition // Dokl. Akad. Nauk SSSR. 1957. V. 114. № 5. P. 953–956. (In Russ.)
  8. Tychonoff A. Ein Fixpunktsatz // Mathematische Annalen. 1935. V. 111. P. 767−776. https://link.springer.com/article/10.1007/BF01472256
  9. Insight – Amazon scraps secret AI recruiting tool that showed bias against women. https://www.reuters.com/article/idUSKCN1MK0AG/
  10. Cruise robotaxi service hid severity of accident, California officials claim. https://www.theguardian.com/business/2023/dec/04/california-cruise-robotaxi-san-francisco-accident-severity
  11. Mushroom pickers urged to avoid foraging books on Amazon that appear to be written by AI. https://www.theguardian.com/technology/2023/sep/01/mushroom-pickers-urged-to-avoid-foraging-books-on-amazon-that-appear-to-be-written-by-ai
  12. Blueprint for an AI Bill of Rights. https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf
  13. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
  14. White Paper on Artificial Intelligence: a European approach to excellence and trust. https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en
  15. EU AI Act. https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf
  16. Hiroshima Process International Code of Conduct for Advanced AI Systems. https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-code-conduct-advanced-ai-systems
  17. Decree of the President of the Russian Federation “On the development of artificial intelligence in the Russian Federation”. National Strategy for the Development of Artificial Intelligence for the period until 2030. http://static.kremlin.ru/media/events/files/ru/AH4x6HgKWANwVtMOfPDhcbRpvd1HCCsv.pdf (In Russ.)
  18. Code of Ethics in the Field of Artificial Intelligence. https://ethics.a-ai.ru/assets/ethics_files/2023/05/12/Кодекс_этики_20_10_1.pdf (In Russ.)
  19. GOST R 59921.2-2021 “Artificial intelligence systems in clinical medicine. Part 2. Program and methodology of technical validation”. https://docs.cntd.ru/document/1200181991 (In Russ.)
  20. Gu T., Dolan-Gavitt B., Garg S. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain. https://arxiv.org/abs/1708.06733
  21. Shafahi A., Huang W., Najibi M. et al. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks. https://arxiv.org/abs/1804.00792v2
  22. Liu K., Dolan-Gavitt B., Garg S. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks. https://arxiv.org/abs/1805.12185
  23. Doan B.G., Abbasnejad E., Ranasinghe D.C. Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems. https://arxiv.org/abs/1908.03369
  24. Temlyakov V. Greedy Approximation. Cambridge University Press, 2011.
  25. Beznosikov A., Horváth S., Richtárik P., Safaryan M. On biased compression for distributed learning // Journal of Machine Learning Research. 2023. V. 24(276). Р. 1−50.
  26. Gorbunov E., Rogozin A., Beznosikov A. et al. Recent theoretical advances in decentralized distributed convex optimization // High-Dimensional Optimization and Probability: With a View Towards Data Science. Cham: Springer International Publishing, 2022. Р. 253−325.
  27. Metelev D., Rogozin A., Kovalev D., Gasnikov A. Is consensus acceleration possible in decentralized optimization over slowly time-varying networks? // International Conference on Machine Learning, 2023. PMLR. P. 24532−24554.
  28. Beznosikov A. et al. Distributed methods with compressed communication for solving variational inequalities, with theoretical guarantees // Advances in Neural Information Processing Systems. 2022. V. 35. P. 14013−14029.
  29. Beznosikov A., Gasnikov A. Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities. https://arxiv.org/abs/2302.07615v1
  30. Yakushev A., Markin Yu., Obydenkov D. et al. Docmarking: Real-Time Screen-Cam Robust Document Image Watermarking // 2022 Ivannikov Ispras Open Conference (ISPRAS), IEEE. P. 142−150.
  31. Sankar Sadasivan V., Kumar A., Balasubramanian S. et al. Can AI-Generated Text be Reliably Detected? https://arxiv.org/pdf/2303.11156.pdf
  32. Yu N., Skripniuk V., Abdelnabi S., Fritz M. Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data. https://openaccess.thecvf.com/content/ICCV2021/papers/Yu_Artificial_Fingerprinting_for_Generative_Models_Rooting_Deepfake_Attribution_in_Training_ICCV_2021_paper.pdf
  33. Adi Y., Baum C., Cisse M. et al. Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring. Proceedings of the 27th USENIX Security Symposium. August 15–17, 2018. Baltimore, MD, USA. P. 1615−1631.
  34. Li Y., Wang H., Barni M. A survey of deep neural network watermarking techniques. https://arxiv.org/pdf/2103.09274.pdf
  35. Li Y., Zhang Z., Bai J. et al. Open-sourced Dataset Protection via Backdoor Watermarking. https://arxiv.org/abs/2010.05821

Supplementary files

1. JATS XML
2. Fig. 1. The lifecycle of secure software development (Microsoft)
3. Fig. 2. Computing resources and big data – engines for the development of artificial intelligence systems
4. Fig. 3. The emergence of vulnerabilities in AI systems
5. Fig. 4. Poisoned data: inserting backdoors
6. Fig. 5. Countering adversarial attacks
7. Fig. 6. An example of a global long-term development model

Copyright (c) 2024 Russian Academy of Sciences