Implicit iterative algorithm for solving regularized total least squares problems

Abstract

The article considers a new iterative algorithm for solving total least squares problems. A new version of the implicit method of simple iterations based on the singular value decomposition is proposed for solving the biased normal system of algebraic equations. This approach replaces an ill-conditioned problem with a sequence of problems with smaller condition numbers, which significantly increases the computational stability of the algorithm while ensuring a high rate of convergence. Test examples show that the proposed algorithm has a higher accuracy than non-regularized total least squares algorithms, as well as the total least squares solution with Tikhonov regularization.


Introduction. Total least squares (TLS) is widely used for solving systems of linear algebraic equations whose data contain errors on both the left-hand and right-hand sides.

Total least squares is widely used in many applied fields [1], including system identification [2-5], image restoration [6, 7], tomography [8, 9], and speech processing [10, 11].

There are many algorithms for solving total least squares problems. The classical algorithm for the total least squares problem is based on the SVD (singular value decomposition) [12]. The solution of the total least squares problem based on augmented systems is considered in [13, 14]. For large-scale linear systems of equations, or linear systems with a sparse matrix, iterative total least squares algorithms are used: the Newton method [15, 16], Rayleigh quotient iterations [17], and Lanczos iterations [18].

Various regularization methods are used to solve very ill-conditioned total least squares problems. Today, there are two main approaches to the regularization of total least squares problems: based on the truncated SVD [19] and Tikhonov’s regularization [20], as well as their modifications [21-24].

One way to improve the accuracy of the solution is to use iterative methods of regularization [25]. In [26], an implicit iterative algorithm for ordinary least squares based on SVD was proposed.

The condition number of the total least squares problem is always greater than that of the ordinary least squares problem. Tikhonov regularization of total least squares brings the condition number closer to that of ordinary least squares [20].

This article proposes an implicit iterative algorithm to solve total least squares problems. When using the proposed algorithm, the condition numbers at each iteration turn out to be less than the condition numbers of ordinary least squares. This makes it possible to find the total least squares solution for very ill-conditioned problems.

It is proposed to use a restriction on the length of the solution vector as a stopping criterion for the iterative algorithm. The simulation results showed the high solution accuracy of the proposed implicit iterative algorithm to solve regularized total least squares problems.

1. Problem Statement. Let the overdetermined system of equations be defined as

$$ A x = f, \tag{1} $$

where $A \in \mathbb{R}^{m \times n}$, $f \in \mathbb{R}^{m}$, $m > n$.

We will assume that the matrix $A$ and the vector $f$ contain errors:

$$ A = \tilde{A} + \Xi, \qquad f = \tilde{f} + \xi. $$

It is required to find a solution for the overdetermined system (1) using perturbed data A and f.

To find an approximate solution vector from data with errors, total least squares can be applied [12]. The total least squares approach minimizes the squares of the errors in the values of both the dependent and independent variables:

$$ \min_{x,\,\Xi,\,\xi} \big\| [\Xi, \xi] \big\|_F \quad \text{s.t.} \quad (\tilde{A} + \Xi)\, x = \tilde{f} + \xi, $$

where $[\Xi, \xi]$ is the augmented error matrix and $\|\cdot\|_F$ is the Frobenius norm.

Solving the total least squares problem reduces to finding the minimum of the objective function:

$$ \min_{x \in \mathbb{R}^n} \frac{\| A x - f \|_2^2}{1 + \| x \|_2^2}, \tag{2} $$

where $\|\cdot\|_2$ is the Euclidean norm.

The article proposes an implicit iterative algorithm for the regularized solution of the system of equations (1) according to the data with errors using the total least squares.

2. Implicit iterative algorithm for solving regularized TLS problems. Using the SVD, an arbitrary matrix $A$ can be represented as follows:

$$ A = U \Sigma V^{\top}, \tag{3} $$

where $U = [u_1, \dots, u_n] \in \mathbb{R}^{m \times n}$ and $V = [v_1, \dots, v_n] \in \mathbb{R}^{n \times n}$ have orthonormal columns; $\Sigma = \operatorname{diag}(\sigma_1(A), \dots, \sigma_n(A))$, where $\sigma_1(A) \ge \dots \ge \sigma_n(A)$ are the singular values of the matrix $A$; $u_i$ and $v_i$ are, respectively, the left and right singular vectors of $A$.
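As an illustration, the factorization (3) can be computed with NumPy's economy-size SVD (a minimal sketch; the matrix below is an arbitrary toy example, not data from the paper):

```python
import numpy as np

# Toy tall matrix (assumed example data).
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))

# Economy-size SVD: A = U @ diag(s) @ Vt, matching (3).
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Singular values come back in non-increasing order,
# and the factors reproduce A.
assert np.all(np.diff(s) <= 0)
assert np.allclose(U @ np.diag(s) @ Vt, A)
```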

Let the augmented matrix of the system of equations be defined as

$$ \bar{A} = [A, f]. $$

A solution to the total least squares problem exists and is unique if the following condition is satisfied [27]:

$$ \sigma = \sigma_{n+1}(\bar{A}) < \sigma_n(A). \tag{4} $$
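A minimal sketch of the classical SVD-based TLS solution together with the solvability check (4); the data form an assumed small toy errors-in-variables problem, not the paper's test case:

```python
import numpy as np

# Assumed toy errors-in-variables problem.
rng = np.random.default_rng(1)
n = 3
x_true = np.ones(n)
A = rng.standard_normal((50, n))
f = A @ x_true + 1e-3 * rng.standard_normal(50)
A = A + 1e-3 * rng.standard_normal(A.shape)   # perturb the matrix too

Abar = np.column_stack([A, f])                # augmented matrix [A, f]
s_A = np.linalg.svd(A, compute_uv=False)
_, s_bar, Vt = np.linalg.svd(Abar)

sigma = s_bar[-1]                             # sigma_{n+1}(Abar)
assert sigma < s_A[-1]                        # condition (4): unique TLS solution

v = Vt[-1]                                    # right singular vector for sigma
x_tls = -v[:n] / v[n]                         # classical SVD-based TLS formula [12]
```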

When condition (4) is satisfied, the solution to problem (2) can be obtained from a biased normal system of equations [27]:

$$ (A^{\top} A - \sigma^2 E_n)\, x = A^{\top} f. \tag{5} $$
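Under condition (4), the biased normal system (5) reproduces the classical SVD-based TLS solution, which can be checked numerically (a sketch on assumed toy data):

```python
import numpy as np

# Assumed toy data.
rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((40, n))
f = A @ np.ones(n) + 1e-3 * rng.standard_normal(40)

# Classical TLS solution from the SVD of [A, f].
_, s_bar, Vt = np.linalg.svd(np.column_stack([A, f]))
sigma = s_bar[-1]
x_svd = -Vt[-1, :n] / Vt[-1, n]

# Solution of the biased normal system (A^T A - sigma^2 E_n) x = A^T f.
x_bns = np.linalg.solve(A.T @ A - sigma**2 * np.eye(n), A.T @ f)

assert np.allclose(x_svd, x_bns)
```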

Let μ be a positive constant. Equation (5) is equivalent to the following equation:

$$ \mu A^{\top} A x + x = \mu \sigma^2 x + x + \mu A^{\top} f. \tag{6} $$

The implicit iterative algorithm for equation (6) has the following form:

$$ (\mu^{-1} E_n + A^{\top} A)\, x_{k+1} = (\sigma^2 + \mu^{-1})\, x_k + A^{\top} f. \tag{7} $$

We write (7) as

$$ x_{k+1} = (\mu^{-1} E_n + A^{\top} A)^{-1} \big[ (\sigma^2 + \mu^{-1})\, x_k + A^{\top} f \big], $$

or

$$ x_{k+1} = \Phi_{\mu} x_k + g_{\mu}, \tag{8} $$

where $\Phi_{\mu} = (\sigma^2 + \mu^{-1}) (\mu^{-1} E_n + A^{\top} A)^{-1}$ and $g_{\mu} = (\mu^{-1} E_n + A^{\top} A)^{-1} A^{\top} f$.

Using the SVD of the matrix $A$ (3), let us perform the following transformations:

$$ \Phi_{\mu} = (\sigma^2 + \mu^{-1}) (\mu^{-1} E_n + A^{\top} A)^{-1} = (\sigma^2 + \mu^{-1})\, V (\Sigma^2 + \mu^{-1} E_n)^{-1} V^{\top} = (\sigma^2 + \mu^{-1}) \sum_{i=1}^{n} \frac{v_i v_i^{\top}}{\sigma_i^2 + \mu^{-1}}; $$

$$ g_{\mu} = (\mu^{-1} E_n + A^{\top} A)^{-1} A^{\top} f = V (\Sigma^2 + \mu^{-1} E_n)^{-1} V^{\top} V \Sigma U^{\top} f = V (\Sigma^2 + \mu^{-1} E_n)^{-1} \Sigma U^{\top} f = \sum_{i=1}^{n} \frac{\sigma_i v_i}{\sigma_i^2 + \mu^{-1}}\, u_i^{\top} f. $$

Then, based on the singular value decomposition, the implicit scheme (8) can be written in the following form:

$$ x_{k+1} = (\sigma^2 + \mu^{-1}) \sum_{i=1}^{n} \frac{v_i v_i^{\top}}{\sigma_i^2 + \mu^{-1}}\, x_k + \sum_{i=1}^{n} \frac{\sigma_i v_i u_i^{\top}}{\sigma_i^2 + \mu^{-1}}\, f, \quad k = 0, 1, \dots \tag{9} $$
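The scheme (9) can be sketched as follows: the SVD is computed once, after which each iteration costs only a few matrix-vector products. All problem data and the value of $\mu^{-1}$ below are assumed toy values chosen for illustration:

```python
import numpy as np

# Assumed toy problem.
rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((60, n))
f = A @ np.ones(n) + 1e-4 * rng.standard_normal(60)

# sigma = sigma_{n+1}([A, f]) and the SVD of A, computed once.
sigma = np.linalg.svd(np.column_stack([A, f]), compute_uv=False)[-1]
U, s, Vt = np.linalg.svd(A, full_matrices=False)

mu_inv = 1e-3 * sigma            # mu^{-1}: an assumed tuning choice
d = s**2 + mu_inv                # denominators sigma_i^2 + mu^{-1}
Uf = U.T @ f                     # u_i^T f, fixed over the iterations

x = np.zeros(n)
for k in range(200):             # fixed iteration budget for the sketch
    # x_{k+1} from (9), written in terms of the SVD factors.
    x = Vt.T @ (((sigma**2 + mu_inv) * (Vt @ x) + s * Uf) / d)

# The fixed point of (9) solves the biased normal system (5).
x_ref = np.linalg.solve(A.T @ A - sigma**2 * np.eye(n), A.T @ f)
assert np.allclose(x, x_ref)
```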

3. Convergence and conditioning of the implicit iterative algorithm. The spectral radius of the transition matrix $\Phi_{\mu}$ is

$$ \rho(\Phi_{\mu}) = (\mu \sigma^2 + 1)\, \lambda_{\max}\big( (E_n + \mu A^{\top} A)^{-1} \big) = \frac{\mu \sigma^2 + 1}{\lambda_{\min}(E_n + \mu A^{\top} A)} = \frac{\mu \sigma^2 + 1}{1 + \mu \sigma_n^2(A)}, $$

where $\lambda_{\max}$ and $\lambda_{\min}$ are the maximum and minimum eigenvalues of the corresponding matrices.

The convergence condition of the implicit method of simple iterations (7) can be written as follows:

$$ \rho(\Phi_{\mu}) = \frac{\mu \sigma^2 + 1}{1 + \mu \sigma_n^2(A)} < 1. \tag{10} $$

If condition (4) is satisfied and μ>0, condition (10) is always satisfied. This means that the iterative algorithm (8) converges for all cases where the biased normal system of equations has a unique solution.

It can be shown that the larger the value μ, the higher the rate of convergence of the algorithm.
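The spectral-radius formula used in (10) can be checked numerically against the eigenvalues of $\Phi_{\mu}$ (a sketch; the data and the value of $\mu$ are assumed toy values):

```python
import numpy as np

# Assumed toy data.
rng = np.random.default_rng(7)
n = 3
A = rng.standard_normal((40, n))
f = A @ np.ones(n) + 1e-4 * rng.standard_normal(40)

sigma = np.linalg.svd(np.column_stack([A, f]), compute_uv=False)[-1]
s = np.linalg.svd(A, compute_uv=False)
mu = 1e3                                     # assumed value of mu

# Transition matrix Phi_mu and its spectral radius.
Phi = (sigma**2 + 1/mu) * np.linalg.inv(np.eye(n)/mu + A.T @ A)
rho = np.max(np.abs(np.linalg.eigvals(Phi)))

# Closed form from Section 3.
rho_formula = (mu * sigma**2 + 1) / (1 + mu * s[-1]**2)

assert np.isclose(rho, rho_formula)
assert rho < 1                               # convergence condition (10)
```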

Let us show that algorithms (8) and (9) have different condition numbers. The simple iteration method can be written as follows:

$$ x_{k+1} = \arg\min_{x \in \mathbb{R}^n} \left\| \begin{bmatrix} A \\ \mu^{-1/2} E_n \end{bmatrix} x - \begin{bmatrix} f \\ \mu^{1/2} (\mu^{-1} + \sigma^2)\, x_k \end{bmatrix} \right\|_2^2. \tag{11} $$

Formula (11) can be represented in the following form:

$$ x_{k+1} = A_{\mu}^{+} f_{\mu}^{k}, $$

where $A_{\mu} = \begin{bmatrix} A \\ \mu^{-1/2} E_n \end{bmatrix}$, $f_{\mu}^{k} = \begin{bmatrix} f \\ \mu^{1/2} (\mu^{-1} + \sigma^2)\, x_k \end{bmatrix}$, and $A_{\mu}^{+}$ is the Moore–Penrose pseudoinverse of $A_{\mu}$.

Since $\operatorname{rank} A_{\mu} = n$, the pseudoinverse $A_{\mu}^{+}$ can be calculated by the formula

$$ A_{\mu}^{+} = (A_{\mu}^{\top} A_{\mu})^{-1} A_{\mu}^{\top}. $$

In this case, the problem corresponds to the classical form of the implicit method of simple iterations:

$$ x_{k+1} = (A_{\mu}^{\top} A_{\mu})^{-1} A_{\mu}^{\top} f_{\mu}^{k} = (A^{\top} A + \mu^{-1} E_n)^{-1} \big[ A^{\top} f + (\mu^{-1} + \sigma^2)\, x_k \big], $$

$$ \kappa_2(A^{\top} A + \mu^{-1} E_n) = \frac{\lambda_{\max}(A^{\top} A + \mu^{-1} E_n)}{\lambda_{\min}(A^{\top} A + \mu^{-1} E_n)} = \frac{\sigma_1^2 + \mu^{-1}}{\sigma_n^2 + \mu^{-1}}. $$

For the implicit method based on the SVD, the condition number is equal to the condition number of the matrix $A_{\mu}$:

$$ \kappa_2(A_{\mu}) = \left( \frac{\sigma_1^2 + \mu^{-1}}{\sigma_n^2 + \mu^{-1}} \right)^{1/2}. $$
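The relation between the two condition numbers can be verified directly: the stacked matrix $A_{\mu}$ has singular values $(\sigma_i^2 + \mu^{-1})^{1/2}$, so its condition number is the square root of that of the normal-equations matrix. A sketch on an assumed ill-conditioned toy matrix:

```python
import numpy as np

# Assumed ill-conditioned toy matrix.
rng = np.random.default_rng(4)
A = rng.standard_normal((30, 4)) @ np.diag([1.0, 1e-1, 1e-2, 1e-3])
mu_inv = 1e-6                                # assumed mu^{-1}

s = np.linalg.svd(A, compute_uv=False)
kappa_normal = (s[0]**2 + mu_inv) / (s[-1]**2 + mu_inv)

# A_mu = [A; mu^{-1/2} E_n], stacked by rows.
A_mu = np.vstack([A, np.sqrt(mu_inv) * np.eye(4)])
kappa_A_mu = np.linalg.cond(A_mu)

# The SVD-based form works with the square root of the condition number.
assert np.isclose(kappa_A_mu, np.sqrt(kappa_normal))
```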

4. Stopping rule for the implicit iterative algorithm. There are many stopping rules for iterative regularized algorithms [28-30]. In this article, to stop the iterative algorithm for solving (5) we use a restriction on the norm of the solution:

$$ \| x_{k+1} \|_2 \le \delta, \tag{12} $$

where $\delta$ is the maximum allowable value of the Euclidean norm of the solution vector.

In contrast to Tikhonov regularization of total least squares [20], condition (12) is checked directly, without computing auxiliary parameters.
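A sketch of the iteration with stopping rule (12): the loop terminates as soon as the next iterate would violate the norm bound $\delta$. All data, $\mu^{-1}$, and $\delta$ below are assumed toy values:

```python
import numpy as np

# Assumed toy problem.
rng = np.random.default_rng(5)
n = 3
A = rng.standard_normal((60, n))
f = A @ np.ones(n) + 1e-4 * rng.standard_normal(60)

sigma = np.linalg.svd(np.column_stack([A, f]), compute_uv=False)[-1]
U, s, Vt = np.linalg.svd(A, full_matrices=False)
mu_inv = 1e-3 * sigma            # assumed mu^{-1}
d = s**2 + mu_inv
Uf = U.T @ f

delta = 10.0                     # assumed bound on the solution norm
x = np.zeros(n)
for k in range(1000):
    x_next = Vt.T @ (((sigma**2 + mu_inv) * (Vt @ x) + s * Uf) / d)
    if np.linalg.norm(x_next) > delta:
        break                    # rule (12) would be violated: stop
    x = x_next

assert np.linalg.norm(x) <= delta
```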

5. Simulation results. The Regularization Toolbox [31] was used to generate test cases. A matrix $A \in \mathbb{R}^{2000 \times 4}$ with singular values $\{5 \cdot 10^{-4},\ 10^{-4},\ 10^{-6},\ 10^{-7}\}$ was generated.

The true vector is $x_{\text{true}} = (1, 1, 1, 1)^{\top}$.

The vector $f$ is $f = A x_{\text{true}}$.

Gaussian white noise with zero mean and standard deviation $\sigma_f = \sigma_A = 10^{-2}$ was added to the matrix $A$ and the vector $f$.
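The test-problem construction can be sketched without the toolbox by assembling a matrix with prescribed singular values from two random orthonormal factors (a sketch with assumed reduced sizes: $m = 200$ instead of the paper's $2000$, for speed):

```python
import numpy as np

# Reduced-size sketch of the test problem (assumed m = 200 instead of 2000).
rng = np.random.default_rng(6)
m, n = 200, 4
sv = np.array([5e-4, 1e-4, 1e-6, 1e-7])     # prescribed singular values

# Random orthonormal factors via QR of Gaussian matrices.
Q1, _ = np.linalg.qr(rng.standard_normal((m, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q1 @ np.diag(sv) @ Q2.T

x_true = np.ones(n)
f = A @ x_true

# The constructed matrix has exactly the prescribed singular values.
assert np.allclose(np.linalg.svd(A, compute_uv=False), sv)
```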

Algorithm (5) was compared with the classical SVD-based TLS algorithm [12], the solution based on augmented systems [13], and regularized total least squares [20]:

$$ (A^{\top} A - \sigma^2 E_n + \alpha E_n)\, x = A^{\top} f. \tag{13} $$

The condition number of the matrix $A^{\top} A - \sigma^2 E_n + \alpha E_n$ is

$$ \kappa_2(A^{\top} A - \sigma^2 E_n + \alpha E_n) = \frac{\sigma_1^2 - \sigma^2 + \alpha}{\sigma_n^2 - \sigma^2 + \alpha}. $$

The parameter $\alpha$ was selected from the interval $[0, \sigma^2]$ with step $10^{-4} \sigma^2$:

$$ \alpha_i = 10^{-4} \sigma^2 i, \quad i = 0, 1, \dots, 10000. $$

The algorithms were compared by the relative mean square error (RMSE) of the solution:

$$ \delta_x(k) = \frac{\| x_k - x_{\text{true}} \|_2}{\| x_{\text{true}} \|_2} \cdot 100\,\%. $$
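The error measure is a plain relative Euclidean error in percent; a minimal sketch with assumed toy vectors:

```python
import numpy as np

x_true = np.ones(4)                          # true vector, as in Section 5
x_est = np.array([1.01, 0.99, 1.0, 1.02])    # assumed example estimate

# Relative error of the estimate, in percent.
delta_x = np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true) * 100
```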

The simulation results are presented in Table 1. Figure 1 shows the relative root mean square error of the solution (8) at the $k$-th iteration for various values of the parameter $\mu^{-1}$. Figure 2 shows the relative root mean square error of solution (13) depending on the choice of the parameter $\alpha_i$.

 

Table 1: RMSE of the solution

| Algorithm for estimating parameters | $\delta_x$, % | $\kappa_2$ |
|---|---|---|
| Algorithm (5) with $\mu^{-1} = 10^{-1} \sigma$ | $7.53 \cdot 10^{-2}$ | $2.02 \cdot 10^{7}$ |
| Algorithm (5) with $\mu^{-1} = 10^{-2} \sigma$ | $0.2045$ | $2.20 \cdot 10^{7}$ |
| Algorithm (5) with $\mu^{-1} = 10^{-5} \sigma$ | $8.63 \cdot 10^{-2}$ | $2.23 \cdot 10^{7}$ |
| TLS [12] | $49.51$ | $4.75 \cdot 10^{9}$ |
| TLS [13] | $49.51$ | $6.34 \cdot 10^{10}$ |
| RTLS [20] | $17.73$ | $5.32 \cdot 10^{16}$ |

 

Figure 1: RMSE of the solution (8) at the $k$-th iteration for various values of the parameter $\mu^{-1}$: 1 – $\mu^{-1} = 10^{-5} \sigma$; 2 – $\mu^{-1} = 10^{-1} \sigma$; 3 – $\mu^{-1} = 10^{-2} \sigma$

 

Figure 2: RMSE of the solution (13) for various values of the parameter $\alpha_i = 10^{-4} \sigma^2 i$

 

Conclusion. The paper proposes a new implicit iterative algorithm for solving regularized total least squares problems. The simulation showed that the proposed algorithm has a higher accuracy than the solutions obtained by non-regularized total least squares algorithms, as well as the total least squares solution with Tikhonov regularization.

The proposed implicit iterative algorithm makes it possible to implement a constraint on the length of the solution vector without solving additional nonlinear equations.

The condition number of problems solved at each iteration is less than the condition number of systems with Tikhonov regularization.

Competing interests. We have no competing interests.

Authors' responsibilities. Each author participated in developing the concept of the article and in writing the manuscript. The authors bear full responsibility for submitting the final manuscript for publication. Each author has approved the final version of the manuscript.

Funding. This work was supported by the Federal Agency of Railway Transport (projects nos. 122022200429-8, and 122022200432-8).

Acknowledgments. The authors thank the referees for careful reading of the paper and valuable suggestions and comments.


About the authors

Dmitriy V. Ivanov

Samara National Research University; Samara State University of Transport

Email: dvi85@list.ru
ORCID iD: 0000-0002-5021-5259
SPIN-code: 6672-4830
Scopus Author ID: 22937879800
http://www.mathnet.ru/person42123

Cand. Phys. & Math. Sci., Associate Professor, Dept. of Information Systems Security, Dept. of Mechatronics

Russian Federation, 34, Moskovskoye shosse, Samara, 443086; 2 B, Svobody str., Samara, 443066

Aleksandr I. Zhdanov

Samara State Technical University

Author for correspondence.
Email: ZhdanovAleksan@yandex.ru
ORCID iD: 0000-0001-6082-9097
SPIN-code: 5056-3555
Scopus Author ID: 7102747969
ResearcherId: E-1433-2014
http://www.mathnet.ru/person41724

Dr. Phys. & Math. Sci., Professor, Dept. of Applied Mathematics & Computer Science

Russian Federation, 244, Molodogvardeyskaya st., Samara, 443100

References

  1. Markovsky I. Bibliography on total least squares and related methods, Stat. Interface, 2010, vol. 3, no. 3, pp. 329–334. DOI: https://doi.org/10.4310/SII.2010.v3.n3.a6.
  2. Pintelon R., Schoukens J. System Identification: A Frequency Domain Approach. Piscataway, NJ, IEEE Press, 2012, xliv+743 pp. DOI: https://doi.org/10.1002/9781118287422.
  3. Pillonetto G., Chen T., Chiuso A., De Nicolao G., Ljung L. Regularized System Identification. Learning Dynamic Models from Data, Communications and Control Engineering. Cham, Springer, 2022, xxiv+377 pp. DOI: https://doi.org/10.1007/978-3-030-95860-2.
  4. Markovsky I., Willems J. C., Van Huffel S., Bart De Moor, Pintelon R. Application of structured total least squares for system identification and model reduction, IEEE Trans. Autom. Control, 2005, vol. 50, no. 10, pp. 1490–1500. DOI: https://doi.org/10.1109/TAC.2005.856643.
  5. Ivanov D. V. Identification of linear dynamic systems of fractional order with errors in variables based on an augmented system of equations, Vestn. Samar. Gos. Tekhn. Univ., Ser. Fiz.-Mat. Nauki [J. Samara State Tech. Univ., Ser. Phys. Math. Sci.], 2021, vol. 25, no. 3, pp. 508–518. EDN: RCYACI. DOI: https://doi.org/10.14498/vsgtu1854.
  6. Fu H., Barlow J. A regularized structured total least squares algorithm for high-resolution image reconstruction, Linear Algebra Appl., 2004, vol. 391, pp. 75–98. DOI: https://doi.org/10.1016/S0024-3795(03)00660-8.
  7. Mesarovic V. Z., Galatsanos N. P., Katsaggelos A. K. Regularized constrained total least squares image restoration, IEEE Trans. Image Process., 1995, vol. 4, no. 8, pp. 1096–1108. DOI: https://doi.org/10.1109/83.403444.
  8. Zhu W., Wang Y., Yao Y., Chang J., Graber H. L., Barbour R. L. Iterative total least-squares image reconstruction algorithm for optical tomography by the conjugate gradient method, J. Opt. Soc. Am. A, 1997, vol. 14, no. 4, pp. 799–807. DOI: https://doi.org/10.1364/josaa.14.000799.
  9. Zhu W., Wang Y., Zhang J. Total least-squares reconstruction with wavelets for optical tomography, J. Opt. Soc. Am. A, 1998, vol. 15, no. 10, pp. 2639–2650. DOI: https://doi.org/10.1364/josaa.15.002639.
  10. Lemmerling P., Mastronardi N., Van Huffel S. Efficient implementation of a structured total least squares based speech compression method, Linear Algebra Appl., 2003, vol. 366, pp. 295–315. DOI: https://doi.org/10.1016/S0024-3795(02)00465-2.
  11. Khassina E. M., Lomov A. A. Audio files compression with the STLS-ESM method, St. Petersburg State Polytechnical University Journal. Computer Science. Telecommunications and Control Systems, 2015, vol. 229, no. 5, pp. 88–96. EDN: VAWFWT. DOI: https://doi.org/10.5862/JCSTCS.229.9.
  12. Golub G. H., Van Loan C. F. An analysis of the total least squares problem, SIAM J. Numer. Anal., 1980, vol. 17, no. 6, pp. 883–893. DOI: https://doi.org/10.1137/0717073.
  13. Zhdanov A. I., Shamarov P. A. The direct projection method in the problem of complete least squares, Autom. Remote Control, 2000, vol. 61, no. 4, pp. 610–620. EDN: LGBGAF.
  14. Ivanov D., Zhdanov A. Symmetrical augmented system of equations for the parameter identification of discrete fractional systems by generalized total least squares, Mathematics, 2021, vol. 9, no. 24, 3250. EDN: QFMGJB. DOI: https://doi.org/10.3390/math9243250.
  15. Björck Å. Newton and Rayleigh quotient methods for total least squares problem, In: Recent Advances in Total Least Squares Techniques and Errors in Variables Modeling, Proceedings of the Second Workshop on Total Least Squares and Errors-in-Variables Modeling (Leuven, Belgium, August 21–24, 1996). Philadelphia, PA, USA, SIAM, 1997, pp. 149–160.
  16. Björck Å., Heggernes P., Matstoms P. Methods for large scale total least squares problems, SIAM J. Matrix Anal. Appl., 2000, vol. 22, no. 2, pp. 413–429. DOI: https://doi.org/10.1137/S0895479899355414.
  17. Fasino D., Fazzi A. A Gauss–Newton iteration for total least squares problems, BIT Numer. Math., 2018, vol. 58, no. 2, pp. 281–299. DOI: https://doi.org/10.1007/s10543-017-0678-5.
  18. Mohammedi A. Rational–Lanczos technique for solving total least squares problems, Kuwait J. Sci. Eng., 2001, vol. 28, no. 1, pp. 1–12.
  19. Fierro R. D., Golub G. H., Hansen P. C., O’Leary D. P. Regularization by truncated total least squares, SIAM J. Sci. Comp., 1997, vol. 18, no. 4, pp. 1223–1241. DOI: https://doi.org/10.1137/S1064827594263837.
  20. Golub G. H., Hansen P. C., O’Leary D. P. Tikhonov regularization and total least squares, SIAM J. Matrix Anal. Appl., 1999, vol. 21, no. 1, pp. 185–194. DOI: https://doi.org/10.1137/S0895479897326432.
  21. Lampe J., Voss H. Solving regularized total least squares problems based on eigenproblems, Taiwanese J. Math., 2010, vol. 14, no. 3A, pp. 885–909. DOI: https://doi.org/10.11650/twjm/1500405873.
  22. Sima D. M., Van Huffel S., Golub G. H. Regularized total least squares based on quadratic eigenvalue problem solvers, BIT Numer. Math., 2004, vol. 44, no. 4, pp. 793–812. DOI: https://doi.org/10.1007/s10543-004-6024-8.
  23. Lampe J., Voss H. Efficient determination of the hyperparameter in regularized total least squares problems, Appl. Numer. Math., 2012, vol. 62, no. 9, pp. 1229–1241. DOI: https://doi.org/10.1016/j.apnum.2010.06.005.
  24. Zhdanov A. I. Direct recurrence algorithms for solving the linear equations of the method of least squares, Comput. Math. Math. Phys., 1994, vol. 34, no. 6, pp. 693–701. EDN: VKRSPF.
  25. Vainiko G. M., Veretennikov A. Yu. Iteratsionnye protsedury v nekorrektno postavlennykh zadachakh [Iteration Procedures in Ill-Posed Problems]. Moscow, Nauka, 1986, 177 pp.
  26. Zhdanov A. I. Implicit iterative schemes based on singular decomposition and regularizing algorithms, Vestn. Samar. Gos. Tekhn. Univ., Ser. Fiz.-Mat. Nauki [J. Samara State Tech. Univ., Ser. Phys. Math. Sci.], 2018, vol. 22, no. 3, pp. 549–556. EDN: PJITAX. DOI: https://doi.org/10.14498/vsgtu1592.
  27. Zhdanov A. I. The solution of ill-posed stochastic linear algebraic equations by the maximum likelihood regularization method, USSR Comput. Math. Math. Phys., 1988, vol. 28, no. 5, pp. 93–96. DOI: https://doi.org/10.1016/0041-5553(88)90014-6.
  28. Gfrerer H. An a posteriori parameter choice for ordinary and iterated Tikhonov regularization of ill-posed problems leading to optimal convergence rates, Math. Comp., 1987, vol. 49, no. 180, pp. 507–522. DOI: https://doi.org/10.1090/S0025-5718-1987-0906185-4.
  29. Hämarik U., Tautenhahn U. On the monotone error rule for parameter choice in iterative and continuous regularization methods, BIT Numer. Math., 2001, vol. 41, no. 5, pp. 1029–1038. DOI: https://doi.org/10.1023/A:1021945429767.
  30. Tautenhahn U., Hämarik U. The use of monotonicity for choosing the regularization parameter in ill-posed problems, Inverse Probl., 1999, vol. 15, no. 6, pp. 1487–1505. DOI: https://doi.org/10.1088/0266-5611/15/6/307.
  31. Hansen P. C. Regularization tools version 4.0 for Matlab 7.3, Numer. Algorithms, 2007, vol. 46, no. 2, pp. 189–194. DOI: https://doi.org/10.1007/s11075-007-9136-9.


Copyright (c) 2022 Authors; Samara State Technical University (Compilation, Design, and Layout)

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
