The Effectiveness of the A2C Algorithm in Relation to Classical Models of the Theory of Economic Growth
- Authors: Moiseenko A.M.1, Grineva N.V.2

Affiliations:
- Russian Academy of National Economy and Public Administration under the President of the Russian Federation
- Financial University under the Government of the Russian Federation
- Issue: Volume 11, No. 1 (2024)
- Pages: 68-77
- Section: SYSTEM ANALYSIS, INFORMATION MANAGEMENT AND PROCESSING, STATISTICS
- URL: https://journals.eco-vector.com/2313-223X/article/view/631145
- DOI: https://doi.org/10.33693/2313-223X-2024-11-1-68-77
- ID: 631145
Abstract
The relevance of the study lies in assessing the accuracy of the estimates produced by the A2C algorithm and in the need to verify reinforcement-learning methods when they are applied to the optimization of economic processes. The purpose of the study was to analyze the effectiveness of the A2C algorithm, together with the specifics of its implementation, in solving economic optimization problems. The tasks considered were maximizing consumption in the Solow, Romer, and Schumpeterian models of endogenous economic growth, and maximizing per capita income in the latter two, with respect to the consumption rate (in the latter two, the saving rate) and the share of scientists in the economy, respectively. The results showed that for the deterministic models (the Solow and Romer models) the variance of the parameter estimate is minimal, and with a sufficiently large number of time periods the mean differs from the analytically obtained value by no more than one thousandth. In the stochastic model (the Schumpeterian model), however, a large number of time periods is required for the estimate to match the analytical value, and the resulting estimate, although biased by no more than one thousandth, has high variance.
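The analytic benchmark against which the abstract compares the A2C estimate can be illustrated for the Solow case: in a Cobb-Douglas Solow model, steady-state consumption is maximized at the golden-rule saving rate s* = α. A minimal numerical sketch (not the authors' code; the parameter values are illustrative assumptions):

```python
import numpy as np

def steady_state_consumption(s, alpha=0.3, delta=0.05, n=0.01, g=0.02):
    """Steady-state consumption per effective worker for saving rate s."""
    # Steady state solves s * k^alpha = (n + g + delta) * k
    k_star = (s / (n + g + delta)) ** (1.0 / (1.0 - alpha))
    return (1.0 - s) * k_star ** alpha

# Grid search over saving rates; the maximizer should sit at s* = alpha
s_grid = np.linspace(0.01, 0.99, 981)
c_vals = [steady_state_consumption(s) for s in s_grid]
s_best = float(s_grid[int(np.argmax(c_vals))])
print(f"numerical golden-rule saving rate: {s_best:.3f}")
```

An RL estimate of the optimal saving rate can then be checked against this closed-form optimum, which is the kind of verification the study performs.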
About the authors
Alexander Moiseenko
Russian Academy of National Economy and Public Administration under the President of the Russian Federation
Corresponding author
Email: alex7and7er@gmail.com
ORCID ID: 0009-0001-0380-1693
First-year graduate student, Department of System Analysis
Russia, Moscow

Natalia Grineva
Financial University under the Government of the Russian Federation
Email: ngrineva@fa.ru
ORCID ID: 0000-0001-7647-5967
Cand. Sci. (Econ.), Associate Professor, Department of Data Analysis and Machine Learning
Russia, Moscow

Bibliography
- Aghion P., Howitt P. A model of growth through creative destruction. 1990.
- Atashbar T., Aruhan Shi R. AI and macroeconomic modeling: Deep reinforcement learning in an RBC model. 2023.
- Kakade S.M. A natural policy gradient. In: Advances in neural information processing systems. 2001. Vol. 14.
- Mnih V. et al. Asynchronous methods for deep reinforcement learning. In: International Conference on Machine Learning. PMLR, 2016. Pp. 1928–1937.
- Peters J., Schaal S. Reinforcement learning of motor skills with policy gradients. Neural Networks. 2008. Vol. 21. No. 4. Pp. 682–697.
- Romer P.M. Endogenous technological change. Journal of Political Economy. 1990. Vol. 98. No. 5. Part 2. Pp. S71–S102.
- Solow R.M. A contribution to the theory of economic growth. The Quarterly Journal of Economics. 1956. Vol. 70. No. 1. Pp. 65–94.
- Zheng S. et al. The AI economist: Improving equality and productivity with AI-driven tax policies. arXiv preprint arXiv:2004.13332. 2020.
- Didenko D.V., Grineva N.V. Factors of economic growth in the late USSR in a spatial perspective. Economic Policy. 2022. Vol. 17. No. 2. (In Rus.) Pp. 88–119. EDN: MBEJDX. doi: 10.18288/1994-5124-2022-2-88-119.
- Grineva N.V. Assessment of intellectual capital during the transition to a digital economy. Problems of Economics and Legal Practice. 2022. Vol. 18. No. 2. Pp. 219–227. (In Rus.) EDN: CGWWNJ.
- Krinichansky K., Grineva N. Dynamic approach to the analysis of financial structure: Overcoming the bank-based vs market-based dichotomy. In: 16th International Conference Management of large-scale system development (MLSD). 2023. No. 16. EDN: RSHSND. doi: 10.1109/MLSD58227.2023.10303933.