A study of options for the transition from unlimited to limited parallelism, using matrix multiplication as an example

Abstract

Today there are many approaches to developing parallel programs, and it is commonly held that such programs are most efficient when written for a particular computing system. This article proposes instead to set aside the features of any specific computing system and outlines plans for an automated system that improves code efficiency by first developing programs with unlimited parallelism and then exploring whether more efficient programs can be obtained by imposing restrictions on that maximum parallelism. The approach is demonstrated by analyzing various matrix multiplication algorithms. As a mathematical apparatus, the study considers several ways of describing algorithms, including an approach based on unlimited parallelism, and also proposes an approach based on various restrictions on parallelism. Sequential and parallel methods of matrix multiplication, including band (tape) and block algorithms, were studied in detail. Various multiplication methods (sequential, with left and right recursion, and parallel) were compared, and the more effective ones were identified with respect to the resources used and the restrictions imposed on parallelism. A sequential scheme and a cascade summation scheme were analyzed and proposed as possible ways of convolving the partial results obtained after the decomposition stage. A number of programs with different levels of parallelism were developed and implemented in a functional-stream parallel programming language. In the future, such transformations can be carried out formally, relying on a knowledge base and on a language that supports equivalent transformations of the original program in accordance with built-in axioms and an algebra of transformations, as well as the replacement of functions that produce equivalent results but have different levels of parallelism. These results can be used to increase the resource efficiency of programs in many branches of science, including software development for the needs of astronomy and rocketry.
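To make the two convolution schemes mentioned in the abstract concrete, the following is a minimal Python sketch (an illustrative assumption, not the author's functional-stream Pythagoras code; the names sequential_sum, cascade_sum and matmul are hypothetical). The elementwise products feeding each entry of the result matrix are mutually independent, which corresponds to the unlimited-parallelism decomposition stage; the chosen reduction scheme then convolves them either sequentially (depth O(n)) or as a cascade of pairwise additions (depth O(log n) if each level is executed in parallel).

    # Illustrative sketch: one decomposition stage, two reduction ("convolution") schemes.
    from functools import reduce

    def sequential_sum(xs):
        """Sequential reduction: a left fold of depth O(n)."""
        return reduce(lambda acc, x: acc + x, xs, 0)

    def cascade_sum(xs):
        """Cascade (pairwise-tree) reduction: depth O(log n) when the
        additions on each level are performed in parallel."""
        if not xs:
            return 0
        while len(xs) > 1:
            xs = [xs[i] + xs[i + 1] if i + 1 < len(xs) else xs[i]
                  for i in range(0, len(xs), 2)]
        return xs[0]

    def matmul(a, b, reduce_scheme=cascade_sum):
        """C = A * B. The products a[i][k] * b[k][j] are all independent
        (unlimited parallelism); reduce_scheme convolves them into c[i][j]."""
        n, m, p = len(a), len(b), len(b[0])
        return [[reduce_scheme([a[i][k] * b[k][j] for k in range(m)])
                 for j in range(p)]
                for i in range(n)]

    if __name__ == "__main__":
        A = [[1, 2], [3, 4]]
        B = [[5, 6], [7, 8]]
        assert matmul(A, B, sequential_sum) == matmul(A, B, cascade_sum) == [[19, 22], [43, 50]]

Both reduction schemes produce the same result; they differ only in the depth of the summation tree, and therefore in how much of the available parallelism a restricted implementation can actually exploit.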

About the authors

Darya S. Romanova

Siberian Federal University

Author for correspondence.
Email: daryaooo@mail.ru

postgraduate student

Russian Federation, 79, Svobodnyy Av., Krasnoyarsk, 660041



Copyright (c) 2020 Romanova D.S.

This work is licensed under a Creative Commons Attribution 4.0 International License.
