Journal of Samara State Technical University, Ser. Physical and Mathematical Sciences. ISSN 1991-8615 (print), 2310-7081 (online). Samara State Technical University. DOI: 10.14498/vsgtu1930. Research Article.
<p><strong>Implicit iterative algorithm for solving regularized total least squares problems</strong></p>
<p>Dmitriy V. Ivanov (Cand. Phys. & Math. Sci., Associate Professor, Dept. of Information Systems Security, Dept. of Mechatronics; dvi85@list.ru; https://orcid.org/0000-0002-5021-5259)</p>
<p>Aleksandr I. Zhdanov (Dr. Phys. & Math. Sci., Professor, Dept. of Applied Mathematics & Computer Science; ZhdanovAleksan@yandex.ru; https://orcid.org/0000-0001-6082-9097)</p>
<p>Samara National Research University; Samara State University of Transport; Samara State Technical University</p>
<p>Copyright © 2022, Authors; Samara State Technical University (Compilation, Design, and Layout)</p>
<div class="around-button">The article considers a new iterative algorithm for solving total least squares problems. A new version of the implicit method of simple iterations based on the singular value decomposition is proposed for solving the biased normal system of algebraic equations. Using the implicit method of simple iterations based on the singular value decomposition makes it possible to replace an ill-conditioned problem with a sequence of problems with smaller condition numbers. This significantly increases the computational stability of the algorithm and, at the same time, ensures a high rate of convergence.
Test examples show that the proposed algorithm is more accurate than non-regularized total least squares algorithms, as well as the total least squares solution with Tikhonov regularization.</div>
<p><strong>Keywords:</strong> implicit regularization, total least squares, singular value decomposition, ill-conditioning, iterative regularization methods.</p>
<p><strong>Introduction.</strong> Total least squares (TLS) is widely used for solving systems of linear algebraic equations with inaccurate data on both the left-hand and right-hand sides.</p>
<p>Total least squares is widely used in many applied fields [1], including system identification [2–5], image restoration [6, 7], tomography [8, 9], and speech processing [10, 11].</p>
<p>There are many algorithms for solving total least squares problems. The classical algorithm for solving the total least squares problem is based on the SVD (singular value decomposition) [12]. Solutions of the total least squares problem based on augmented systems are considered in [13, 14]. For large-scale linear systems of equations or linear systems with a sparse matrix, iterative total least squares algorithms are used: the Newton method [15, 16], Rayleigh quotient iterations [17], and Lanczos iterations [18].</p>
<p>Various regularization methods are used to solve severely ill-conditioned total least squares problems. Today, there are two main approaches to the regularization of total least squares problems: the truncated SVD [19] and Tikhonov regularization [20], as well as their modifications [21–24].</p>
<p>One way to improve the accuracy of the solution is to use iterative methods of regularization [25]. In [26], an implicit iterative algorithm for ordinary least squares based on SVD was proposed.</p>
<p>The condition number of the total least squares problem is always greater than that of the ordinary least squares problem. Tikhonov regularization of total least squares makes it possible to bring its condition number close to that of ordinary least squares [20].</p>
<p>This article proposes an implicit iterative algorithm for solving total least squares problems. With the proposed algorithm, the condition number at each iteration turns out to be smaller than that of ordinary least squares. This makes it possible to find the total least squares solution for severely ill-conditioned problems.</p>
<p>It is proposed to use a restriction on the length of the solution vector as the stopping criterion for the iterative algorithm. Simulation results show the high solution accuracy of the proposed implicit iterative algorithm for solving regularized total least squares problems.</p>
<p><strong>1. Problem Statement.</strong> Let the overdetermined system of equations be defined as</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mi>A</mi><mi>x</mi><mo>=</mo><mi>f</mi><mn>,</mn></math> (1)</p>
<p>where <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>A</mi><mo>∈</mo><msup><mi>ℝ</mi><mrow><mi>m</mi><mo>×</mo><mi>n</mi></mrow></msup></math>, <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>f</mi><mo>∈</mo><msup><mi>ℝ</mi><mi>m</mi></msup></math>, <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>m</mi><mo>&gt;</mo><mi>n</mi></math>.</p>
<p>We will assume that the matrix <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>A</mi></math> and the vector <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>f</mi></math> contain errors:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mi>A</mi><mo>=</mo><mover accent="true"><mi>A</mi><mo>~</mo></mover><mo>+</mo><mi>Δ</mi><mo>,</mo><mspace width="1em"/><mi>f</mi><mo>=</mo><mover accent="true"><mi>f</mi><mo>~</mo></mover><mo>+</mo><mi>δ</mi><mo>.</mo></math></p>
<p>It is required to find a solution for the overdetermined system (1) using perturbed data <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>A</mi></math>and <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>f</mi></math>.</p>
<p>To find an approximate solution vector from data with errors, total least squares can be applied [12]. The total least squares approach minimizes the squared errors in the values of both the dependent and independent variables:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><munder><mi>min</mi><mi>x</mi></munder><mo>∥</mo><mfenced open="[" close="]"><mrow><mi>Δ</mi><mo>,</mo><mi>δ</mi></mrow></mfenced><msub><mo>∥</mo><mi>F</mi></msub><mo>,</mo><mtext> s.t. </mtext><mfenced><mrow><mover accent="true"><mi>A</mi><mo>~</mo></mover><mo>+</mo><mi>Δ</mi></mrow></mfenced><mi>x</mi><mo>=</mo><mover accent="true"><mi>f</mi><mo>~</mo></mover><mo>+</mo><mi>δ</mi><mo>,</mo></math></p>
<p>where <math xmlns="http://www.w3.org/1998/Math/MathML"><mfenced open="[" close="]"><mrow><mi>Δ</mi><mo>,</mo><mi>δ</mi></mrow></mfenced></math> is the augmented error matrix and <math xmlns="http://www.w3.org/1998/Math/MathML"><mo>∥</mo><mo>⋅</mo><msub><mo>∥</mo><mi>F</mi></msub></math> is the Frobenius norm.</p>
<p>Solving the total least squares problem reduces to minimizing the objective function:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><munder><mi>min</mi><mrow><mi>x</mi><mo>∈</mo><msup><mi>ℝ</mi><mi>n</mi></msup></mrow></munder><mfrac><mrow><mo>∥</mo><mi>A</mi><mi>x</mi><mo>−</mo><mi>f</mi><msup><mo>∥</mo><mn>2</mn></msup></mrow><mrow><mn>1</mn><mo>+</mo><mo>∥</mo><mi>x</mi><msup><mo>∥</mo><mn>2</mn></msup></mrow></mfrac><mo>,</mo></math> (2)</p>
<p>where <math xmlns="http://www.w3.org/1998/Math/MathML"><mo>∥</mo><mo>⋅</mo><mo>∥</mo><mo>=</mo><mo>∥</mo><mo>⋅</mo><msub><mo>∥</mo><mn>2</mn></msub></math> is the Euclidean norm.</p>
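<p>As a concrete numerical illustration (not part of the article), a known property of (2) is that its minimum equals the square of the smallest singular value of the augmented matrix, attained at the TLS solution. A minimal Python sketch with randomly generated test data:</p>

```python
import numpy as np

def tls_objective(A, f, x):
    # ||A x - f||^2 / (1 + ||x||^2), the quantity minimized in (2)
    r = A @ x - f
    return (r @ r) / (1.0 + x @ x)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 3))
f = A @ np.ones(3) + 1e-2 * rng.standard_normal(40)

# TLS solution from the right singular vector of [A, f] for sigma_{n+1}
Abar = np.column_stack([A, f])
s = np.linalg.svd(Abar, compute_uv=False)
v = np.linalg.svd(Abar)[2][-1]      # last row of V^T
x_tls = -v[:3] / v[3]               # assumes v[3] != 0 (the generic case)

# The minimum of (2) equals sigma_{n+1}([A, f])^2
assert np.isclose(tls_objective(A, f, x_tls), s[-1] ** 2)
```

Any other vector, e.g. the zero vector, gives a larger value of the objective.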
<p>The article proposes an implicit iterative algorithm for the regularized solution of the system of equations (1) from data with errors using total least squares.</p>
<p><strong>2. Implicit iterative algorithm for solving regularized TLS problems.</strong> Using the SVD, an arbitrary matrix can be represented as follows:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mi>A</mi><mo>=</mo><mi>U</mi><mi>Σ</mi><msup><mi>V</mi><mo>⊤</mo></msup><mo>,</mo></math> (3)</p>
<p>where <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>U</mi><mo>=</mo><mfenced><mrow><msub><mi>u</mi><mn>1</mn></msub><mo>…</mo><msub><mi>u</mi><mi>n</mi></msub></mrow></mfenced><mo>∈</mo><msup><mi>ℝ</mi><mrow><mi>m</mi><mo>×</mo><mi>n</mi></mrow></msup></math> and <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>V</mi><mo>=</mo><mfenced><mrow><msub><mi>v</mi><mn>1</mn></msub><mo>…</mo><msub><mi>v</mi><mi>n</mi></msub></mrow></mfenced><mo>∈</mo><msup><mi>ℝ</mi><mrow><mi>n</mi><mo>×</mo><mi>n</mi></mrow></msup></math> are orthogonal matrices; <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>Σ</mi><mo>=</mo><mtext>diag</mtext><mfenced><mrow><msub><mi>σ</mi><mn>1</mn></msub><mfenced><mi>A</mi></mfenced><mo>,</mo><mo>…</mo><mo>,</mo><msub><mi>σ</mi><mi>n</mi></msub><mfenced><mi>A</mi></mfenced></mrow></mfenced></math>; <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>σ</mi><mn>1</mn></msub><mfenced><mi>A</mi></mfenced><mo>≥</mo><mo>⋯</mo><mo>≥</mo><msub><mi>σ</mi><mi>n</mi></msub><mfenced><mi>A</mi></mfenced></math> are the singular values of the matrix <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>A</mi></math>; <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>u</mi><mi>i</mi></msub></math> and <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>v</mi><mi>i</mi></msub></math> are, respectively, the left and right singular vectors of the matrix <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>A</mi></math>.</p>
<p>Let the augmented matrix of the system of equations be defined as</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mover accent="true"><mi>A</mi><mo>¯</mo></mover><mo>=</mo><mfenced><mrow><mi>A</mi><mo>,</mo><mi>f</mi></mrow></mfenced></math>.</p>
<p>A solution to the total least squares problem exists and is unique if the following condition is satisfied [27]:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mo>=</mo><msub><mi>σ</mi><mrow><mi>n</mi><mo>+</mo><mn>1</mn></mrow></msub><mfenced><mover accent="true"><mi>A</mi><mo>¯</mo></mover></mfenced><mo>&lt;</mo><msub><mi>σ</mi><mi>n</mi></msub><mfenced><mi>A</mi></mfenced><mo>.</mo></math> (4)</p>
<p>When condition (4) is satisfied, the solution to problem (2) can be obtained from a biased normal system of equations [27]:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mfenced><mrow><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi><mo>−</mo><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><msub><mi>E</mi><mi>n</mi></msub></mrow></mfenced><mi>x</mi><mo>=</mo><msup><mi>A</mi><mo>⊤</mo></msup><mi>f</mi><mo>.</mo></math> (5)</p>
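<p>The equivalence between the SVD-based TLS solution and the biased normal system (5) is easy to check numerically. The sketch below (illustrative code with synthetic data, not from the article) computes the TLS solution from the SVD of the augmented matrix and verifies that it satisfies (5):</p>

```python
import numpy as np

def tls_svd(A, f):
    """TLS solution via the SVD of the augmented matrix [A, f] (cf. [12]).
    Returns the solution x and sigma_bar = sigma_{n+1}([A, f])."""
    n = A.shape[1]
    s, Vt = np.linalg.svd(np.column_stack([A, f]))[1:]
    v = Vt[-1]                     # right singular vector for sigma_{n+1}
    return -v[:n] / v[n], s[-1]    # assumes v[n] != 0 (the generic case)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
f = A @ np.ones(3) + 1e-3 * rng.standard_normal(50)

x_tls, sbar = tls_svd(A, f)
# x_tls satisfies the biased normal system (A^T A - sbar^2 E_n) x = A^T f
lhs = (A.T @ A - sbar**2 * np.eye(3)) @ x_tls
assert np.allclose(lhs, A.T @ f)
```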
<p>Let <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>ω</mi></math> be a positive constant. Equation (5) is equivalent to the following equation:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mi>ω</mi><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi><mi>x</mi><mo>+</mo><mi>x</mi><mo>=</mo><mi>ω</mi><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><mi>x</mi><mo>+</mo><mi>x</mi><mo>+</mo><mi>ω</mi><msup><mi>A</mi><mo>⊤</mo></msup><mi>f</mi><mo>.</mo></math> (6)</p>
<p>The implicit iterative algorithm for equation (6) has the following form:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mfenced><mrow><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><msub><mi>E</mi><mi>n</mi></msub><mo>+</mo><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi></mrow></mfenced><msub><mi>x</mi><mrow><mi>k</mi><mo>+</mo><mn>1</mn></mrow></msub><mo>=</mo><mfenced><mrow><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></mrow></mfenced><msub><mi>x</mi><mi>k</mi></msub><mo>+</mo><msup><mi>A</mi><mo>⊤</mo></msup><mi>f</mi><mo>.</mo></math> (7)</p>
<p>We write (7) as</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>x</mi><mrow><mi>k</mi><mo>+</mo><mn>1</mn></mrow></msub><mo>=</mo><msup><mfenced><mrow><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><msub><mi>E</mi><mi>n</mi></msub><mo>+</mo><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi></mrow></mfenced><mrow><mo>−</mo><mn>1</mn></mrow></msup><mfenced><mrow><mfenced><mrow><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></mrow></mfenced><msub><mi>x</mi><mi>k</mi></msub><mo>+</mo><msup><mi>A</mi><mo>⊤</mo></msup><mi>f</mi></mrow></mfenced><mo>,</mo></math></p>
<p>or</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>x</mi><mrow><mi>k</mi><mo>+</mo><mn>1</mn></mrow></msub><mo>=</mo><msub><mi>Θ</mi><mi>ω</mi></msub><msub><mi>x</mi><mi>k</mi></msub><mo>+</mo><msub><mi>g</mi><mi>ω</mi></msub><mo>,</mo></math> (8)</p>
<p>where <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>Θ</mi><mi>ω</mi></msub><mo>=</mo><mfenced><mrow><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></mrow></mfenced><msup><mfenced><mrow><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><msub><mi>E</mi><mi>n</mi></msub><mo>+</mo><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi></mrow></mfenced><mrow><mo>−</mo><mn>1</mn></mrow></msup></math>, <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>g</mi><mi>ω</mi></msub><mo>=</mo><msup><mfenced><mrow><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><msub><mi>E</mi><mi>n</mi></msub><mo>+</mo><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi></mrow></mfenced><mrow><mo>−</mo><mn>1</mn></mrow></msup><msup><mi>A</mi><mo>⊤</mo></msup><mi>f</mi></math>.</p>
<p>Using the SVD of the matrix <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>A</mi></math> (3), let us perform the following transformations:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>Θ</mi><mi>ω</mi></msub><mo>=</mo><mfenced><mrow><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></mrow></mfenced><msup><mfenced><mrow><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><msub><mi>E</mi><mi>n</mi></msub><mo>+</mo><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi></mrow></mfenced><mrow><mo>−</mo><mn>1</mn></mrow></msup><mo>=</mo><mfenced><mrow><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></mrow></mfenced><mi>V</mi><msup><mfenced><mrow><msup><mi>Σ</mi><mn>2</mn></msup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><msub><mi>E</mi><mi>n</mi></msub></mrow></mfenced><mrow><mo>−</mo><mn>1</mn></mrow></msup><msup><mi>V</mi><mo>⊤</mo></msup><mo>=</mo><mfenced><mrow><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></mrow></mfenced><munderover><mo>∑</mo><mrow><mi>i</mi><mo>=</mo><mn>1</mn></mrow><mi>n</mi></munderover><mfrac><mrow><msub><mi>v</mi><mi>i</mi></msub><msubsup><mi>v</mi><mi>i</mi><mo>⊤</mo></msubsup></mrow><mrow><msubsup><mi>σ</mi><mi>i</mi><mn>2</mn></msubsup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></mrow></mfrac><mo>;</mo></math></p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>g</mi><mi>ω</mi></msub><mo>=</mo><msup><mfenced><mrow><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><msub><mi>E</mi><mi>n</mi></msub><mo>+</mo><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi></mrow></mfenced><mrow><mo>−</mo><mn>1</mn></mrow></msup><msup><mi>A</mi><mo>⊤</mo></msup><mi>f</mi><mo>=</mo><msup><mfenced open="[" close="]"><mrow><mi>V</mi><mfenced><mrow><msup><mi>Σ</mi><mn>2</mn></msup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><msub><mi>E</mi><mi>n</mi></msub></mrow></mfenced><msup><mi>V</mi><mo>⊤</mo></msup></mrow></mfenced><mrow><mo>−</mo><mn>1</mn></mrow></msup><mi>V</mi><mi>Σ</mi><msup><mi>U</mi><mo>⊤</mo></msup><mi>f</mi><mo>=</mo><mi>V</mi><msup><mfenced><mrow><msup><mi>Σ</mi><mn>2</mn></msup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><msub><mi>E</mi><mi>n</mi></msub></mrow></mfenced><mrow><mo>−</mo><mn>1</mn></mrow></msup><msup><mi>V</mi><mo>⊤</mo></msup><mi>V</mi><mi>Σ</mi><msup><mi>U</mi><mo>⊤</mo></msup><mi>f</mi><mo>=</mo><mi>V</mi><msup><mfenced><mrow><msup><mi>Σ</mi><mn>2</mn></msup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><msub><mi>E</mi><mi>n</mi></msub></mrow></mfenced><mrow><mo>−</mo><mn>1</mn></mrow></msup><mi>Σ</mi><msup><mi>U</mi><mo>⊤</mo></msup><mi>f</mi><mo>=</mo><munderover><mo>∑</mo><mrow><mi>i</mi><mo>=</mo><mn>1</mn></mrow><mi>n</mi></munderover><msub><mi>v</mi><mi>i</mi></msub><mfrac><msub><mi>σ</mi><mi>i</mi></msub><mrow><msubsup><mi>σ</mi><mi>i</mi><mn>2</mn></msubsup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></mrow></mfrac><msubsup><mi>u</mi><mi>i</mi><mo>⊤</mo></msubsup><mi>f</mi><mo>.</mo></math></p>
<p>Then the implicit scheme (8) can be written based on the singular value decomposition in the following form:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>x</mi><mrow><mi>k</mi><mo>+</mo><mn>1</mn></mrow></msub><mo>=</mo><mfenced><mrow><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></mrow></mfenced><munderover><mo>∑</mo><mrow><mi>i</mi><mo>=</mo><mn>1</mn></mrow><mi>n</mi></munderover><mfrac><mrow><msub><mi>v</mi><mi>i</mi></msub><msubsup><mi>v</mi><mi>i</mi><mo>⊤</mo></msubsup></mrow><mrow><msubsup><mi>σ</mi><mi>i</mi><mn>2</mn></msubsup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></mrow></mfrac><msub><mi>x</mi><mi>k</mi></msub><mo>+</mo><munderover><mo>∑</mo><mrow><mi>i</mi><mo>=</mo><mn>1</mn></mrow><mi>n</mi></munderover><mfrac><mrow><msub><mi>σ</mi><mi>i</mi></msub><msub><mi>v</mi><mi>i</mi></msub><msubsup><mi>u</mi><mi>i</mi><mo>⊤</mo></msubsup></mrow><mrow><msubsup><mi>σ</mi><mi>i</mi><mn>2</mn></msubsup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></mrow></mfrac><mi>f</mi><mo>,</mo><mspace width="1em"/><mi>k</mi><mo>=</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>,</mo><mo>…</mo><mo>.</mo></math> (9)</p>
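<p>A minimal Python sketch of iteration (9) with randomly generated test data (illustrative, not from the article): the SVD of A is computed once, after which each iteration costs only matrix–vector products. The fixed point of the iteration is the solution of the biased normal system (5):</p>

```python
import numpy as np

def implicit_tls(A, f, omega_inv, n_iter=200):
    """Implicit iterative scheme (9) built on the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    sbar = np.linalg.svd(np.column_stack([A, f]), compute_uv=False)[-1]
    d = 1.0 / (s**2 + omega_inv)     # diagonal of (Sigma^2 + omega^{-1} E_n)^{-1}
    Utf = U.T @ f
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # x_{k+1} = (sbar^2 + omega^{-1}) V D V^T x_k + V D Sigma U^T f
        x = Vt.T @ ((sbar**2 + omega_inv) * d * (Vt @ x) + d * s * Utf)
    return x, sbar

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 4))
f = A @ np.ones(4) + 1e-3 * rng.standard_normal(60)

x, sbar = implicit_tls(A, f, omega_inv=1e-2)
# The limit of the iteration solves (A^T A - sbar^2 E_n) x = A^T f
x_ref = np.linalg.solve(A.T @ A - sbar**2 * np.eye(4), A.T @ f)
assert np.allclose(x, x_ref)
```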
<p><strong>3. Convergence and conditioning of the implicit iterative algorithm.</strong> The spectral radius of the transition matrix <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>Θ</mi><mi>ω</mi></msub></math> is</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mi>ρ</mi><mfenced><msub><mi>Θ</mi><mi>ω</mi></msub></mfenced><mo>=</mo><mfenced><mrow><mi>ω</mi><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><mo>+</mo><mn>1</mn></mrow></mfenced><msub><mi>λ</mi><mi>max</mi></msub><mfenced open="[" close="]"><msup><mfenced><mrow><msub><mi>E</mi><mi>n</mi></msub><mo>+</mo><mi>ω</mi><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi></mrow></mfenced><mrow><mo>−</mo><mn>1</mn></mrow></msup></mfenced><mo>=</mo><mfrac><mrow><mi>ω</mi><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><mo>+</mo><mn>1</mn></mrow><mrow><msub><mi>λ</mi><mi>min</mi></msub><mfenced><mrow><msub><mi>E</mi><mi>n</mi></msub><mo>+</mo><mi>ω</mi><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi></mrow></mfenced></mrow></mfrac><mo>=</mo><mfrac><mrow><mi>ω</mi><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><mo>+</mo><mn>1</mn></mrow><mrow><mn>1</mn><mo>+</mo><mi>ω</mi><msubsup><mi>σ</mi><mi>n</mi><mn>2</mn></msubsup><mfenced><mi>A</mi></mfenced></mrow></mfrac><mo>,</mo></math></p>
<p>where <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>λ</mi><mi>max</mi></msub></math> and <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>λ</mi><mi>min</mi></msub></math> are the maximum and minimum eigenvalues of the corresponding matrices.</p>
<p>The convergence condition of the implicit method of simple iterations (7) can be written as follows:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mi>ρ</mi><mfenced><msub><mi>Θ</mi><mi>ω</mi></msub></mfenced><mo>=</mo><mfrac><mrow><mi>ω</mi><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><mo>+</mo><mn>1</mn></mrow><mrow><mn>1</mn><mo>+</mo><mi>ω</mi><msubsup><mi>σ</mi><mi>n</mi><mn>2</mn></msubsup><mfenced><mi>A</mi></mfenced></mrow></mfrac><mo>&lt;</mo><mn>1</mn><mo>.</mo></math> (10)</p>
<p>If condition (4) is satisfied and <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>ω</mi><mo>&gt;</mo><mn>0</mn></math>, condition (10) always holds. This means that the iterative algorithm (8) converges in all cases where the biased normal system of equations has a unique solution.</p>
<p>It can be shown that the larger the value of <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>ω</mi></math>, the higher the rate of convergence of the algorithm.</p>
<p>Let us show that algorithms (8) and (9) have different values of the condition numbers. The simple iteration method can be written as follows:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>x</mi><mrow><mi>k</mi><mo>+</mo><mn>1</mn></mrow></msub><mo>=</mo><munder><mi>argmin</mi><mrow><mi>x</mi><mo>∈</mo><msup><mi>ℝ</mi><mi>n</mi></msup></mrow></munder><msubsup><mfenced open="∥" close="∥"><mrow><mfenced><mtable><mtr><mtd><mi>A</mi></mtd></mtr><mtr><mtd><msqrt><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></msqrt><msub><mi>E</mi><mi>n</mi></msub></mtd></mtr></mtable></mfenced><mi>x</mi><mo>−</mo><mfenced><mtable><mtr><mtd><mi>f</mi></mtd></mtr><mtr><mtd><msqrt><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><mo>+</mo><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup></msqrt><msub><mi>x</mi><mi>k</mi></msub></mtd></mtr></mtable></mfenced></mrow></mfenced><mn>2</mn><mn>2</mn></msubsup><mo>.</mo></math> (11)</p>
<p>Formula (11) can be represented in the following form</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>x</mi><mrow><mi>k</mi><mo>+</mo><mn>1</mn></mrow></msub><mo>=</mo><msubsup><mi>A</mi><mi>ω</mi><mo>+</mo></msubsup><msubsup><mi>f</mi><mi>ω</mi><mfenced><mi>k</mi></mfenced></msubsup><mo>,</mo></math></p>
<p>where <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>A</mi><mi>ω</mi></msub><mo>=</mo><mfenced><mtable><mtr><mtd><mi>A</mi></mtd></mtr><mtr><mtd><msqrt><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></msqrt><msub><mi>E</mi><mi>n</mi></msub></mtd></mtr></mtable></mfenced></math>, <math xmlns="http://www.w3.org/1998/Math/MathML"><msubsup><mi>f</mi><mi>ω</mi><mfenced><mi>k</mi></mfenced></msubsup><mo>=</mo><mfenced><mtable><mtr><mtd><mi>f</mi></mtd></mtr><mtr><mtd><msqrt><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><mo>+</mo><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup></msqrt><msub><mi>x</mi><mi>k</mi></msub></mtd></mtr></mtable></mfenced></math>; <math xmlns="http://www.w3.org/1998/Math/MathML"><msubsup><mi>A</mi><mi>ω</mi><mo>+</mo></msubsup></math> is the Moore–Penrose pseudoinverse of <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>A</mi><mi>ω</mi></msub></math>.</p>
<p>Since <math xmlns="http://www.w3.org/1998/Math/MathML"><mtext>rank</mtext><mfenced><msub><mi>A</mi><mi>ω</mi></msub></mfenced><mo>=</mo><mi>n</mi></math>, <math xmlns="http://www.w3.org/1998/Math/MathML"><msubsup><mi>A</mi><mi>ω</mi><mo>+</mo></msubsup></math> can be calculated by the formula</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><msubsup><mi>A</mi><mi>ω</mi><mo>+</mo></msubsup><mo>=</mo><msup><mfenced><mrow><msubsup><mi>A</mi><mi>ω</mi><mo>⊤</mo></msubsup><msub><mi>A</mi><mi>ω</mi></msub></mrow></mfenced><mrow><mo>−</mo><mn>1</mn></mrow></msup><msubsup><mi>A</mi><mi>ω</mi><mo>⊤</mo></msubsup><mo>.</mo></math></p>
<p>In this case, the problem corresponds to the classical form of the implicit method of simple iterations:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>x</mi><mrow><mi>k</mi><mo>+</mo><mn>1</mn></mrow></msub><mo>=</mo><msup><mfenced><mrow><msubsup><mi>A</mi><mi>ω</mi><mo>⊤</mo></msubsup><msub><mi>A</mi><mi>ω</mi></msub></mrow></mfenced><mrow><mo>−</mo><mn>1</mn></mrow></msup><msubsup><mi>A</mi><mi>ω</mi><mo>⊤</mo></msubsup><msubsup><mi>f</mi><mi>ω</mi><mfenced><mi>k</mi></mfenced></msubsup><mo>=</mo><msup><mfenced><mrow><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><msub><mi>E</mi><mi>n</mi></msub></mrow></mfenced><mrow><mo>−</mo><mn>1</mn></mrow></msup><mfenced><mrow><msup><mi>A</mi><mo>⊤</mo></msup><mi>f</mi><mo>+</mo><mfenced><mrow><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><mo>+</mo><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup></mrow></mfenced><msub><mi>x</mi><mi>k</mi></msub></mrow></mfenced><mo>,</mo></math></p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mtext>cond</mtext><mn>2</mn></msub><mfenced><mrow><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><msub><mi>E</mi><mi>n</mi></msub></mrow></mfenced><mo>=</mo><mfrac><mrow><msub><mi>λ</mi><mi>max</mi></msub><mfenced><mrow><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><msub><mi>E</mi><mi>n</mi></msub></mrow></mfenced></mrow><mrow><msub><mi>λ</mi><mi>min</mi></msub><mfenced><mrow><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><msub><mi>E</mi><mi>n</mi></msub></mrow></mfenced></mrow></mfrac><mo>=</mo><mfrac><mrow><msubsup><mi>σ</mi><mn>1</mn><mn>2</mn></msubsup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></mrow><mrow><msubsup><mi>σ</mi><mi>n</mi><mn>2</mn></msubsup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></mrow></mfrac><mo>.</mo></math></p>
<p>For the implicit method based on the SVD, the condition number is equal to the condition number of the matrix <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>A</mi><mi>ω</mi></msub></math>:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mtext>cond</mtext><mn>2</mn></msub><mfenced><msub><mi>A</mi><mi>ω</mi></msub></mfenced><mo>=</mo><msup><mfenced><mfrac><mrow><msubsup><mi>σ</mi><mn>1</mn><mn>2</mn></msubsup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></mrow><mrow><msubsup><mi>σ</mi><mi>n</mi><mn>2</mn></msubsup><mo>+</mo><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></mrow></mfrac></mfenced><mrow><mn>1</mn><mo>/</mo><mn>2</mn></mrow></msup><mo>.</mo></math></p>
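<p>This square-root relationship is straightforward to confirm numerically (an illustrative sketch with synthetic data, not from the article): the stacked matrix has singular values equal to the square roots of the eigenvalues of the shifted normal matrix, so the SVD-based form works with the square root of the normal-equations condition number:</p>

```python
import numpy as np

rng = np.random.default_rng(2)
# Ill-conditioned test matrix with a wide range of singular values
A = rng.standard_normal((100, 5)) @ np.diag([1.0, 0.5, 0.1, 1e-3, 1e-4])
omega_inv = 1e-6

s = np.linalg.svd(A, compute_uv=False)
cond_normal = (s[0]**2 + omega_inv) / (s[-1]**2 + omega_inv)  # A^T A + om^{-1} E_n

A_w = np.vstack([A, np.sqrt(omega_inv) * np.eye(5)])          # stacked matrix
cond_Aw = np.linalg.cond(A_w)

# cond_2 of the stacked matrix is the square root of the normal-equations one
assert np.isclose(cond_Aw, np.sqrt(cond_normal), rtol=1e-6)
```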
<p><strong>4. Stopping rule for the implicit iterative algorithm.</strong> There are a large number of stopping rules for iterative regularization algorithms [28–30]. In this article, to stop the iterative algorithm for solving (5), we use a restriction on the value of the norm of the solution:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mo>∥</mo><msub><mi>x</mi><mrow><mi>k</mi><mo>+</mo><mn>1</mn></mrow></msub><mo>∥</mo><mo>≤</mo><mi>C</mi><mo>,</mo></math> (12)</p>
<p>where <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>C</mi></math> is the maximum allowable value of the Euclidean norm of the solution vector.</p>
<p>In contrast to Tikhonov regularization of total least squares [20], condition (12) is verified directly, without computing auxiliary parameters.</p>
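<p>Combined with iteration (7), rule (12) is cheap to apply: the norm of each new iterate is compared with the allowed bound, and the process stops as soon as the bound is exceeded. A sketch under stated assumptions (the function name, test data, and bound value are illustrative, not from the article):</p>

```python
import numpy as np

def implicit_tls_with_stop(A, f, omega_inv, C, max_iter=10_000):
    """Iterate scheme (7); stop when ||x_{k+1}||_2 would exceed the bound C,
    as in stopping rule (12). C is an assumed user-supplied constant."""
    n = A.shape[1]
    sbar = np.linalg.svd(np.column_stack([A, f]), compute_uv=False)[-1]
    M = A.T @ A + omega_inv * np.eye(n)
    x = np.zeros(n)
    for k in range(max_iter):
        x_next = np.linalg.solve(M, (sbar**2 + omega_inv) * x + A.T @ f)
        if np.linalg.norm(x_next) > C:
            return x, k            # last iterate that still satisfies (12)
        x = x_next
    return x, max_iter

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 3))
f = A @ np.ones(3) + 1e-3 * rng.standard_normal(50)

# With a generous bound the iteration simply converges to the TLS solution
x, k = implicit_tls_with_stop(A, f, omega_inv=1e-2, C=100.0)
sbar = np.linalg.svd(np.column_stack([A, f]), compute_uv=False)[-1]
x_ref = np.linalg.solve(A.T @ A - sbar**2 * np.eye(3), A.T @ f)
assert np.allclose(x, x_ref)
```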
<p><strong>5. Simulation results.</strong> The Regularization Toolbox [31] was used to generate test problems. A matrix <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>A</mi><mrow><mn>2000</mn><mo>×</mo><mn>4</mn></mrow></msub></math> with singular values <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>Σ</mi><mo>=</mo><mtext>diag</mtext><mfenced><mrow><mn>5</mn><mo>⋅</mo><msup><mn>10</mn><mrow><mo>−</mo><mn>4</mn></mrow></msup><mo>,</mo><msup><mn>10</mn><mrow><mo>−</mo><mn>4</mn></mrow></msup><mo>,</mo><msup><mn>10</mn><mrow><mo>−</mo><mn>6</mn></mrow></msup><mo>,</mo><msup><mn>10</mn><mrow><mo>−</mo><mn>7</mn></mrow></msup></mrow></mfenced></math> was generated.</p>
<p>The true vector is <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>x</mi><mtext>true</mtext></msub><mo>=</mo><msup><mfenced><mrow><mn>1</mn><mo>,</mo><mn>1</mn><mo>,</mo><mn>1</mn><mo>,</mo><mn>1</mn></mrow></mfenced><mo>⊤</mo></msup></math>.</p>
<p>The vector <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>f</mi></math> is <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>f</mi><mo>=</mo><msub><mi>A</mi><mrow><mn>2000</mn><mo>×</mo><mn>4</mn></mrow></msub><msub><mi>x</mi><mtext>true</mtext></msub></math>.</p>
<p>Gaussian white noise with zero mean and standard deviation <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>σ</mi><mi>f</mi></msub><mo>=</mo><msub><mi>σ</mi><mi>A</mi></msub><mo>=</mo><msup><mn>10</mn><mrow><mo>−</mo><mn>2</mn></mrow></msup></math> was added to the matrix <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>A</mi><mrow><mn>2000</mn><mo>×</mo><mn>4</mn></mrow></msub></math> and the vector <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>f</mi></math>.</p>
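<p>The test-problem construction can be reproduced as follows (a sketch: the article uses the Regularization Toolbox, while here a matrix with the prescribed singular values is assembled directly from random orthogonal factors; the noise level is the one stated above):</p>

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 2000, 4
sv = np.array([5e-4, 1e-4, 1e-6, 1e-7])            # prescribed singular values

Q1, _ = np.linalg.qr(rng.standard_normal((m, n)))  # orthonormal columns
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
A_exact = (Q1 * sv) @ Q2.T                         # A = U diag(sv) V^T
x_true = np.ones(n)
f_exact = A_exact @ x_true

sigma = 1e-2                      # noise standard deviation from the article
A = A_exact + sigma * rng.standard_normal((m, n))
f = f_exact + sigma * rng.standard_normal(m)

# The noise-free matrix has exactly the prescribed singular values
assert np.allclose(np.linalg.svd(A_exact, compute_uv=False), sv)
```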
<p>The proposed algorithm for solving (5) was compared with the classical SVD-based TLS algorithm [12], the solution based on augmented systems [13], and regularized total least squares [20]:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mfenced><mrow><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi><mo>−</mo><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><msub><mi>E</mi><mi>n</mi></msub><mo>+</mo><mi>λ</mi><msub><mi>E</mi><mi>n</mi></msub></mrow></mfenced><mi>x</mi><mo>=</mo><msup><mi>A</mi><mo>⊤</mo></msup><mi>f</mi><mo>.</mo></math> (13)</p>
<p>The condition number of the matrix <math xmlns="http://www.w3.org/1998/Math/MathML"><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi><mo>−</mo><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><msub><mi>E</mi><mi>n</mi></msub><mo>+</mo><mi>λ</mi><msub><mi>E</mi><mi>n</mi></msub></math> is</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mtext>cond</mtext><mn>2</mn></msub><mfenced><mrow><msup><mi>A</mi><mo>⊤</mo></msup><mi>A</mi><mo>−</mo><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><msub><mi>E</mi><mi>n</mi></msub><mo>+</mo><mi>λ</mi><msub><mi>E</mi><mi>n</mi></msub></mrow></mfenced><mo>=</mo><mfrac><mrow><msubsup><mi>σ</mi><mn>1</mn><mn>2</mn></msubsup><mo>−</mo><mfenced><mrow><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><mo>−</mo><mi>λ</mi></mrow></mfenced></mrow><mrow><msubsup><mi>σ</mi><mi>n</mi><mn>2</mn></msubsup><mo>−</mo><mfenced><mrow><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><mo>−</mo><mi>λ</mi></mrow></mfenced></mrow></mfrac><mo>.</mo></math></p>
<p>The parameter <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>λ</mi></math> was selected from the interval <math xmlns="http://www.w3.org/1998/Math/MathML"><mfenced open="(" close="]"><mrow><mn>0</mn><mo>,</mo><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup></mrow></mfenced></math> with the step <math xmlns="http://www.w3.org/1998/Math/MathML"><msup><mn>10</mn><mrow><mo>−</mo><mn>4</mn></mrow></msup><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup></math>:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>λ</mi><mi>i</mi></msub><mo>=</mo><msup><mn>10</mn><mrow><mo>−</mo><mn>4</mn></mrow></msup><msup><mover accent="true"><mi>σ</mi><mo>¯</mo></mover><mn>2</mn></msup><mi>i</mi><mo>,</mo><mspace width="1em"/><mi>i</mi><mo>=</mo><mn>0</mn><mo>,</mo><mn>1</mn><mo>,</mo><mo>…</mo><mo>,</mo><mn>10000</mn><mo>.</mo></math></p>
<p>The algorithms were compared by the relative root mean square error (RMSE) of the solution:</p>
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mi>δ</mi><msub><mi>x</mi><mi>k</mi></msub><mo>=</mo><mfrac><mrow><mo>∥</mo><msub><mi>x</mi><mi>k</mi></msub><mo>−</mo><msub><mi>x</mi><mtext>true</mtext></msub><msub><mo>∥</mo><mn>2</mn></msub></mrow><mrow><mo>∥</mo><msub><mi>x</mi><mtext>true</mtext></msub><msub><mo>∥</mo><mn>2</mn></msub></mrow></mfrac><mo>⋅</mo><mn>100</mn><mtext>%</mtext><mo>.</mo></math></p>
<p>The simulation results are presented in Table 1. Figure 1 shows the relative root mean square error of solution (8) at the <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>k</mi></math>-th iteration for various values of the parameter <math xmlns="http://www.w3.org/1998/Math/MathML"><msup><mi>ω</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></math>. Figure 2 shows the relative root mean square error of solution (13) depending on the choice of the parameter <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>λ</mi><mi>i</mi></msub></math>.</p>
<p><strong>Table 1:</strong> <strong>RMSE of the solution </strong></p>
<table>
<tbody>
<tr>
<td width="125">
<p>Algorithm for estimating parameters</p>
</td>
<td width="125">
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mi>δ</mi><mi>x</mi></math>, %</p>
</td>
<td width="125">
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mtext>cond</mtext><mn>2</mn></msub></math></p>
</td>
</tr>
<tr>
<td width="125">
<p>Algorithm (5) with <math xmlns="http://www.w3.org/1998/Math/MathML"><msup><mi>μ</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><mo>=</mo><msup><mn>10</mn><mrow><mo>−</mo><mn>1</mn></mrow></msup><mi>σ</mi></math></p>
</td>
<td width="125">
<p>7.53<math xmlns="http://www.w3.org/1998/Math/MathML"><mo>⋅</mo><msup><mn>10</mn><mrow><mo>−</mo><mn>2</mn></mrow></msup></math></p>
</td>
<td width="125">
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mn>2.02</mn><mo>⋅</mo><msup><mn>10</mn><mn>7</mn></msup></math></p>
</td>
</tr>
<tr>
<td width="125">
<p>Algorithm (5) with <math xmlns="http://www.w3.org/1998/Math/MathML"><msup><mi>μ</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><mo>=</mo><msup><mn>10</mn><mrow><mo>−</mo><mn>2</mn></mrow></msup><mi>σ</mi></math></p>
</td>
<td width="125">
<p>0.2045</p>
</td>
<td width="125">
<p>2.20<math xmlns="http://www.w3.org/1998/Math/MathML"><mo>⋅</mo><msup><mn>10</mn><mn>7</mn></msup></math></p>
</td>
</tr>
<tr>
<td width="125">
<p>Algorithm (5) with <math xmlns="http://www.w3.org/1998/Math/MathML"><msup><mi>μ</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><mo>=</mo><msup><mn>10</mn><mrow><mo>−</mo><mn>5</mn></mrow></msup><mi>σ</mi></math></p>
</td>
<td width="125">
<p>8.63<math xmlns="http://www.w3.org/1998/Math/MathML"><mo>⋅</mo><msup><mn>10</mn><mrow><mo>−</mo><mn>2</mn></mrow></msup></math></p>
</td>
<td width="125">
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mn>2.23</mn><mo>⋅</mo><msup><mn>10</mn><mn>7</mn></msup></math></p>
</td>
</tr>
<tr>
<td width="125">
<p>TLS [12]</p>
</td>
<td width="125">
<p>49.51</p>
</td>
<td width="125">
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mn>4.75</mn><mo>⋅</mo><msup><mn>10</mn><mn>9</mn></msup></math></p>
</td>
</tr>
<tr>
<td width="125">
<p>TLS [13]</p>
</td>
<td width="125">
<p>49.51</p>
</td>
<td width="125">
<p>6.34<math xmlns="http://www.w3.org/1998/Math/MathML"><mo>⋅</mo><msup><mn>10</mn><mn>10</mn></msup></math></p>
</td>
</tr>
<tr>
<td width="125">
<p>RTLS [20]</p>
</td>
<td width="125">
<p>17.73</p>
</td>
<td width="125">
<p><math xmlns="http://www.w3.org/1998/Math/MathML"><mn>5.32</mn><mo>⋅</mo><msup><mn>10</mn><mn>16</mn></msup></math></p>
</td>
</tr>
</tbody>
</table>
<p></p>
<center>
<div class="preview fancybox" style="text-align: center;"><a title="Figure 1. RMSE of the solution (8) at the \(k\)-th iteration for various values of the parameter \(\mu ^{-1}\): 1 \(\mu ^{-1}=10^{-5}\sigma \), 2 \(\mu ^{-1}=10^{-1}\sigma \); 3 \(\mu ^{-1}=10^{-2}\sigma\)" href="/files/journals/63/articles/107681/supp/107681-252127-1-SP.jpg" rel="simplebox"><img style="max-height: 300px; max-width: 300px;" src="/files/journals/63/articles/107681/supp/107681-252127-1-SP.jpg" /></a></div>
</center>
<p><strong>Figure 1: RMSE of the solution (8) at the k-th iteration for various values of the parameter <math xmlns="http://www.w3.org/1998/Math/MathML"><msup><mi>μ</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup></math>: <em>1</em> <math xmlns="http://www.w3.org/1998/Math/MathML"><msup><mi>μ</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><mo>=</mo><msup><mn>10</mn><mrow><mo>−</mo><mn>5</mn></mrow></msup><mi>σ</mi></math>, <em>2</em> <math xmlns="http://www.w3.org/1998/Math/MathML"><msup><mi>μ</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><mo>=</mo><msup><mn>10</mn><mrow><mo>−</mo><mn>1</mn></mrow></msup><mi>σ</mi></math>; <em>3</em> <math xmlns="http://www.w3.org/1998/Math/MathML"><msup><mi>μ</mi><mrow><mo>−</mo><mn>1</mn></mrow></msup><mo>=</mo><msup><mn>10</mn><mrow><mo>−</mo><mn>2</mn></mrow></msup><mi>σ</mi></math></strong></p>
<p></p>
<center>
<div class="preview fancybox" style="text-align: center;"><a title="Figure 2. RMSE of the solution (13) for various values of the parameter \(\alpha _i=10^{-4}\sigma ^{2} i\)" href="/files/journals/63/articles/107681/supp/107681-252128-1-SP.jpg" rel="simplebox"><img style="max-height: 300px; max-width: 300px;" src="/files/journals/63/articles/107681/supp/107681-252128-1-SP.jpg" /></a></div>
</center>
<p><strong>Figure 2: RMSE of the solution (13) for various values of the parameter <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>α</mi><mi>i</mi></msub><mo>=</mo><msup><mn>10</mn><mrow><mo>−</mo><mn>4</mn></mrow></msup><msup><mi>σ</mi><mn>2</mn></msup><mi>i</mi></math></strong></p>
<p></p>
<p><strong>Conclusion.</strong> The paper proposes a new implicit iterative algorithm for solving regularized total least squares problems. The simulation showed that the proposed algorithm achieves higher accuracy than the solutions obtained by non-regularized total least squares algorithms, as well as the total least squares solution with Tikhonov regularization.</p>
<p>The proposed implicit iterative algorithm makes it possible to enforce a constraint on the length of the solution vector without solving additional nonlinear equations.</p>
<p>The condition number of the problems solved at each iteration is smaller than the condition number of the systems arising in Tikhonov regularization.</p>
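As a rough, self-contained illustration (the test matrix and the shift value are our own choices; equations (5) and (13) are not reproduced here), the following sketch shows how a diagonal shift of the kind used in Tikhonov-type regularization bounds the spectral condition number of an ill-conditioned normal matrix:

```python
import numpy as np

# Illustrative ill-conditioned matrix (a Hilbert-type matrix, our own choice)
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
G = A.T @ A  # normal matrix: forming A^T A roughly squares the condition number

alpha = 1e-6  # hypothetical regularization shift
cond_plain = np.linalg.cond(G)
cond_shifted = np.linalg.cond(G + alpha * np.eye(n))

# Since G is positive semidefinite, the shift lifts the smallest eigenvalue
# to at least alpha, so the shifted system is far better conditioned.
```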
<p><strong>Competing interests.</strong> We have no competing interests.</p>
<p><strong>Authors' responsibilities.</strong> Each author has participated in the development of the concept of the article and in the writing of the manuscript. The authors take full responsibility for submitting the final manuscript in print. Each author has approved the final version of the manuscript.</p>
<p><strong>Funding.</strong> This work was supported by the Federal Agency of Railway Transport (projects nos. 122022200429-8 and 122022200432-8).</p>
<p><strong>Acknowledgments.</strong> The authors thank the referees for careful reading of the paper and valuable suggestions and comments.</p>
<p><strong>References</strong></p>
<ol>
<li>Markovsky I. Bibliography on total least squares and related methods, Stat. Interface, 2010, vol. 3, no. 3, pp. 329–334. DOI: https://doi.org/10.4310/SII.2010.v3.n3.a6.</li>
<li>Pintelon R., Schoukens J. System Identification: A Frequency Domain Approach. Piscataway, NJ, IEEE Press, 2012, xliv+743 pp. DOI: https://doi.org/10.1002/9781118287422.</li>
<li>Pillonetto G., Chen T., Chiuso A., De Nicolao G., Ljung L. Regularized System Identification. Learning Dynamic Models from Data, Communications and Control Engineering. Cham, Springer, 2022, xxiv+377 pp. DOI: https://doi.org/10.1007/978-3-030-95860-2.</li>
<li>Markovsky I., Willems J. C., Van Huffel S., De Moor B., Pintelon R. Application of structured total least squares for system identification and model reduction, IEEE Trans. Autom. Control, 2005, vol. 50, no. 10, pp. 1490–1500. DOI: https://doi.org/10.1109/TAC.2005.856643.</li>
<li>Ivanov D. V. Identification of linear dynamic systems of fractional order with errors in variables based on an augmented system of equations, Vestn. Samar. Gos. Tekhn. Univ., Ser. Fiz.-Mat. Nauki [J. Samara State Tech. Univ., Ser. Phys. Math. Sci.], 2021, vol. 25, no. 3, pp. 508–518. EDN: RCYACI. DOI: https://doi.org/10.14498/vsgtu1854.</li>
<li>Fu H., Barlow J. A regularized structured total least squares algorithm for high-resolution image reconstruction, Linear Algebra Appl., 2004, vol. 391, pp. 75–98. DOI: https://doi.org/10.1016/S0024-3795(03)00660-8.</li>
<li>Mesarovic V. Z., Galatsanos N. P., Katsaggelos A. K. Regularized constrained total least squares image restoration, IEEE Trans. Image Process., 1995, vol. 4, no. 8, pp. 1096–1108. DOI: https://doi.org/10.1109/83.403444.</li>
<li>Zhu W., Wang Y., Yao Y., Chang J., Graber H. L., Barbour R. L. Iterative total least-squares image reconstruction algorithm for optical tomography by the conjugate gradient method, J. Opt. Soc. Am. A, 1997, vol. 14, no. 4, pp. 799–807. DOI: https://doi.org/10.1364/josaa.14.000799.</li>
<li>Zhu W., Wang Y., Zhang J. Total least-squares reconstruction with wavelets for optical tomography, J. Opt. Soc. Am. A, 1998, vol. 15, no. 10, pp. 2639–2650. DOI: https://doi.org/10.1364/josaa.15.002639.</li>
<li>Lemmerling P., Mastronardi N., Van Huffel S. Efficient implementation of a structured total least squares based speech compression method, Linear Algebra Appl., 2003, vol. 366, pp. 295–315. DOI: https://doi.org/10.1016/S0024-3795(02)00465-2.</li>
<li>Khassina E. M., Lomov A. A. Audio files compression with the STLS-ESM method, St. Petersburg State Polytechnical University Journal. Computer Science. Telecommunications and Control Systems, 2015, vol. 229, no. 5, pp. 88–96. EDN: VAWFWT. DOI: https://doi.org/10.5862/JCSTCS.229.9.</li>
<li>Golub G. H., Van Loan C. F. An analysis of the total least squares problem, SIAM J. Numer. Anal., 1980, vol. 17, no. 6, pp. 883–893. DOI: https://doi.org/10.1137/0717073.</li>
<li>Zhdanov A. I., Shamarov P. A. The direct projection method in the problem of complete least squares, Autom. Remote Control, 2000, vol. 61, no. 4, pp. 610–620. EDN: LGBGAF.</li>
<li>Ivanov D., Zhdanov A. Symmetrical augmented system of equations for the parameter identification of discrete fractional systems by generalized total least squares, Mathematics, 2021, vol. 9, no. 24, 3250. EDN: QFMGJB. DOI: https://doi.org/10.3390/math9243250.</li>
<li>Björck Å. Newton and Rayleigh quotient methods for total least squares problem, In: Recent Advances in Total Least Squares Techniques and Errors-in-Variables Modeling, Proceedings of the Second Workshop on Total Least Squares and Errors-in-Variables Modeling (Leuven, Belgium, August 21–24, 1996). Philadelphia, PA, USA, SIAM, 1997, pp. 149–160.</li>
<li>Björck Å., Heggernes P., Matstoms P. Methods for large scale total least squares problems, SIAM J. Matrix Anal. Appl., 2000, vol. 22, no. 2, pp. 413–429. DOI: https://doi.org/10.1137/S0895479899355414.</li>
<li>Fasino D., Fazzi A. A Gauss–Newton iteration for total least squares problems, BIT Numer. Math., 2018, vol. 58, no. 2, pp. 281–299. DOI: https://doi.org/10.1007/s10543-017-0678-5.</li>
<li>Mohammedi A. Rational–Lanczos technique for solving total least squares problems, Kuwait J. Sci. Eng., 2001, vol. 28, no. 1, pp. 1–12.</li>
<li>Fierro R. D., Golub G. H., Hansen P. C., O'Leary D. P. Regularization by truncated total least squares, SIAM J. Sci. Comp., 1997, vol. 18, no. 4, pp. 1223–1241. DOI: https://doi.org/10.1137/S1064827594263837.</li>
<li>Golub G. H., Hansen P. C., O'Leary D. P. Tikhonov regularization and total least squares, SIAM J. Matrix Anal. Appl., 1999, vol. 21, no. 1, pp. 185–194. DOI: https://doi.org/10.1137/S0895479897326432.</li>
<li>Lampe J., Voss H. Solving regularized total least squares problems based on eigenproblems, Taiwanese J. Math., 2010, vol. 14, no. 3A, pp. 885–909. DOI: https://doi.org/10.11650/twjm/1500405873.</li>
<li>Sima D. M., Van Huffel S., Golub G. H. Regularized total least squares based on quadratic eigenvalue problem solvers, BIT Numer. Math., 2004, vol. 44, no. 4, pp. 793–812. DOI: https://doi.org/10.1007/s10543-004-6024-8.</li>
<li>Lampe J., Voss H. Efficient determination of the hyperparameter in regularized total least squares problems, Appl. Numer. Math., 2012, vol. 62, no. 9, pp. 1229–1241. DOI: https://doi.org/10.1016/j.apnum.2010.06.005.</li>
<li>Zhdanov A. I. Direct recurrence algorithms for solving the linear equations of the method of least squares, Comput. Math. Math. Phys., 1994, vol. 34, no. 6, pp. 693–701. EDN: VKRSPF.</li>
<li>Vainiko G. M., Veretennikov A. Yu. Iteratsionnye protsedury v nekorrektno postavlennykh zadachakh [Iteration Procedures in Ill-Posed Problems]. Moscow, Nauka, 1986, 177 pp.</li>
<li>Zhdanov A. I. Implicit iterative schemes based on singular decomposition and regularizing algorithms, Vestn. Samar. Gos. Tekhn. Univ., Ser. Fiz.-Mat. Nauki [J. Samara State Tech. Univ., Ser. Phys. Math. Sci.], 2018, vol. 22, no. 3, pp. 549–556. EDN: PJITAX. DOI: https://doi.org/10.14498/vsgtu1592.</li>
<li>Zhdanov A. I. The solution of ill-posed stochastic linear algebraic equations by the maximum likelihood regularization method, USSR Comput. Math. Math. Phys., 1988, vol. 28, no. 5, pp. 93–96. DOI: https://doi.org/10.1016/0041-5553(88)90014-6.</li>
<li>Gfrerer H. An a posteriori parameter choice for ordinary and iterated Tikhonov regularization of ill-posed problems leading to optimal convergence rates, Math. Comp., 1987, vol. 49, no. 180, pp. 507–522. DOI: https://doi.org/10.1090/S0025-5718-1987-0906185-4.</li>
<li>Hämarik U., Tautenhahn U. On the monotone error rule for parameter choice in iterative and continuous regularization methods, BIT Numer. Math., 2001, vol. 41, no. 5, pp. 1029–1038. DOI: https://doi.org/10.1023/A:1021945429767.</li>
<li>Tautenhahn U., Hämarik U. The use of monotonicity for choosing the regularization parameter in ill-posed problems, Inverse Probl., 1999, vol. 15, no. 6, pp. 1487–1505. DOI: https://doi.org/10.1088/0266-5611/15/6/307.</li>
<li>Hansen P. C. Regularization tools version 4.0 for Matlab 7.3, Numer. Algorithms, 2007, vol. 46, no. 2, pp. 189–194. DOI: https://doi.org/10.1007/s11075-007-9136-9.</li>
</ol>