Implicit iterative algorithm for solving regularized total least squares problems
- Authors: Ivanov D.V. (1, 2), Zhdanov A.I. (3)
- Affiliations:
  - (1) Samara National Research University
  - (2) Samara State University of Transport
  - (3) Samara State Technical University
- Issue: Vol 26, No 2 (2022)
- Pages: 311-321
- Section: Mathematical Modeling, Numerical Methods and Software Complexes
- URL: https://journals.eco-vector.com/1991-8615/article/view/107681
- DOI: https://doi.org/10.14498/vsgtu1930
- ID: 107681
Introduction. Total least squares (TLS) is widely used for solving systems of linear algebraic equations whose right-hand side and coefficient matrix both contain errors. TLS is applied in many fields [1], including system identification [2-5], image restoration [6, 7], tomography [8, 9], and speech processing [10, 11].
There are many algorithms for solving TLS problems. The classical algorithm is based on the singular value decomposition (SVD) [12]. The solution of the TLS problem based on augmented systems is considered in [13, 14]. For large-scale linear systems or systems with a sparse matrix, iterative TLS algorithms are used: Newton's method [15, 16], Rayleigh quotient iterations [17], and Lanczos iterations [18].
Various regularization methods are used to solve very ill-conditioned TLS problems. Today there are two main approaches to the regularization of TLS problems: the truncated SVD [19] and Tikhonov regularization [20], as well as their modifications [21-24].
One way to improve the accuracy of the solution is to use iterative regularization methods [25]. In [26], an implicit iterative algorithm for ordinary least squares based on the SVD was proposed.
The condition number of the total least squares problem is always greater than that of the ordinary least squares problem. Tikhonov regularization of total least squares makes it possible to bring its condition number closer to that of ordinary least squares [20].
This article proposes an implicit iterative algorithm for solving total least squares problems. With the proposed algorithm, the condition number at each iteration is smaller than the condition number of the ordinary least squares problem, which makes it possible to find the total least squares solution of very ill-conditioned problems.
It is proposed to use a restriction on the norm of the solution vector as the stopping criterion of the iterative algorithm. Simulation results show the high accuracy of the proposed implicit iterative algorithm for solving regularized total least squares problems.
1. Problem Statement. Let the overdetermined system of equations be defined as
$A x \approx b$,     (1)
where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^{m}$, $x \in \mathbb{R}^{n}$, $m > n$.
We will assume that the matrix and the vector contain errors:
$A = A_0 + \Delta A$,  $b = b_0 + \Delta b$,
where $A_0$ and $b_0$ are the exact data. It is required to find a solution of the overdetermined system (1) from the perturbed data $A$ and $b$.
To find an approximate solution vector $x$ from the data with errors, total least squares can be applied [12]. The total least squares approach minimizes the squares of the errors in the values of both the dependent and independent variables:
$\min_{\Delta A,\,\Delta b} \|[\Delta A \;\; \Delta b]\|_F^2$  subject to  $(A + \Delta A)\,x = b + \Delta b$,
where $C = [A \;\; b]$ is the augmented matrix and $\|\cdot\|_F$ is the Frobenius norm.
Solving the total least squares problem reduces to finding the minimum of the objective function:
$f(x) = \dfrac{\|A x - b\|_2^2}{1 + \|x\|_2^2}$,     (2)
where $\|\cdot\|_2$ is the Euclidean norm.
The article proposes an implicit iterative algorithm for the regularized solution of the system of equations (1) from the data with errors using total least squares.
2. Implicit iterative algorithm for solving regularized TLS problems. Using the SVD, an arbitrary matrix $A \in \mathbb{R}^{m \times n}$ can be represented as follows:
$A = U \Sigma V^{\top} = \sum_{i=1}^{n} \sigma_i u_i v_i^{\top}$,     (3)
where $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ are orthogonal matrices; $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_n \ge 0$ are the singular values of the matrix $A$; $u_i$ and $v_i$ are, respectively, the left and right singular vectors of the matrix $A$.
Let the augmented matrix of the system of equations be defined as
$C = [A \;\; b] \in \mathbb{R}^{m \times (n+1)}$.
A solution to the total least squares problem exists and is unique if the following condition is satisfied [27]:
$\bar{\sigma}_{n+1} < \sigma_n$,     (4)
where $\bar{\sigma}_{n+1}$ is the smallest singular value of the augmented matrix $C$ and $\sigma_n$ is the smallest singular value of $A$.
When condition (4) is satisfied, the solution of problem (2) can be obtained from the biased normal system of equations [27]:
$(A^{\top} A - \bar{\sigma}_{n+1}^{2} I_n)\, x = A^{\top} b$.     (5)
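As a numerical sanity check (a sketch with synthetic data, not taken from the article), one can verify that the classical SVD-based TLS solution [12] coincides with the solution of the biased normal system (5):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 3
A = rng.standard_normal((m, n))
x_true = np.ones(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy right-hand side

# Classical TLS solution via the SVD of the augmented matrix C = [A b]
C = np.column_stack([A, b])
sigma_C = np.linalg.svd(C, compute_uv=False)
v = np.linalg.svd(C)[2][-1]          # right singular vector for sigma_{n+1}
x_tls = -v[:n] / v[n]                # assumes v[n] != 0 (generic case)

# Solution of the biased normal system (A^T A - sigma_{n+1}^2 I) x = A^T b
s2 = sigma_C[-1] ** 2
x_bias = np.linalg.solve(A.T @ A - s2 * np.eye(n), A.T @ b)
```

Both vectors agree to machine precision whenever condition (4) holds.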
Let $\alpha$ be a positive constant. Equation (5) is equivalent to the following equation:
$(A^{\top} A + \alpha I_n)\, x = (\alpha + \bar{\sigma}_{n+1}^{2})\, x + A^{\top} b$.     (6)
The implicit iterative algorithm for equation (6) has the following form:
$(A^{\top} A + \alpha I_n)\, x_{k+1} = (\alpha + \bar{\sigma}_{n+1}^{2})\, x_k + A^{\top} b$,  $k = 0, 1, 2, \dots$     (7)
We write (7) as
$x_{k+1} = (A^{\top} A + \alpha I_n)^{-1}\big[(\alpha + \bar{\sigma}_{n+1}^{2})\, x_k + A^{\top} b\big]$,
or
$x_{k+1} = B\, x_k + d$,     (8)
where $B = (\alpha + \bar{\sigma}_{n+1}^{2})(A^{\top} A + \alpha I_n)^{-1}$, $d = (A^{\top} A + \alpha I_n)^{-1} A^{\top} b$.
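The iteration (8) can be sketched in NumPy as follows (the function name, the starting point $x_0 = 0$, and the defaults are illustrative assumptions, not prescribed by the article):

```python
import numpy as np

def implicit_tls(A, b, alpha, n_iter=500):
    """Implicit scheme (8): (A^T A + alpha I) x_{k+1} = (alpha + sigma^2) x_k + A^T b."""
    n = A.shape[1]
    # sigma_{n+1}^2: squared smallest singular value of the augmented matrix [A b]
    s2 = np.linalg.svd(np.column_stack([A, b]), compute_uv=False)[-1] ** 2
    M = A.T @ A + alpha * np.eye(n)
    Atb = A.T @ b
    x = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(M, (alpha + s2) * x + Atb)
    return x
```

In practice one would factorize M once (e.g., by Cholesky) instead of calling `solve` at every step.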
Using the SVD of the matrix $A$ (3), let us perform the following transformations:
$(A^{\top} A + \alpha I_n)^{-1} = V (\Sigma^{\top}\Sigma + \alpha I_n)^{-1} V^{\top}$;
$A^{\top} b = V \Sigma^{\top} U^{\top} b$.
Then the implicit scheme (8) can be written, based on the singular value decomposition, in the following form:
$x_{k+1} = V (\Sigma^{\top}\Sigma + \alpha I_n)^{-1} \big[(\alpha + \bar{\sigma}_{n+1}^{2})\, V^{\top} x_k + \Sigma^{\top} U^{\top} b\big]$.     (9)
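In the SVD form (9) the factorizations are precomputed once, and every iteration reduces to elementwise divisions by $\sigma_i^2 + \alpha$. A NumPy sketch (illustrative; the iteration is carried out in the coordinates $y = V^{\top} x$):

```python
import numpy as np

def implicit_tls_svd(A, b, alpha, n_iter=500):
    """Implicit scheme (9): each step divides by (sigma_i^2 + alpha) in SVD coordinates."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # thin SVD of A
    s2_bar = np.linalg.svd(np.column_stack([A, b]), compute_uv=False)[-1] ** 2
    sUtb = s * (U.T @ b)                               # Sigma^T U^T b
    y = np.zeros(A.shape[1])                           # y_k = V^T x_k
    for _ in range(n_iter):
        y = ((alpha + s2_bar) * y + sUtb) / (s ** 2 + alpha)
    return Vt.T @ y                                    # back to x = V y
```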
3. Convergence and conditioning of the implicit iterative algorithm. The spectral radius of the transition matrix $B$ is
$\rho(B) = \lambda_{\max}(B) = \dfrac{\alpha + \bar{\sigma}_{n+1}^{2}}{\lambda_{\min}(A^{\top} A) + \alpha} = \dfrac{\alpha + \bar{\sigma}_{n+1}^{2}}{\sigma_n^{2} + \alpha}$,
where $\lambda_{\max}(\cdot)$ and $\lambda_{\min}(\cdot)$ are, respectively, the maximum and minimum eigenvalues of the corresponding matrices.
The convergence condition of the implicit method of simple iterations (7) can be written as follows:
$\rho(B) = \dfrac{\alpha + \bar{\sigma}_{n+1}^{2}}{\sigma_n^{2} + \alpha} < 1$.     (10)
If condition (4) is satisfied and $\alpha > 0$, condition (10) is always satisfied. This means that the iterative algorithm (8) converges in all cases where the biased normal system of equations has a unique solution.
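Condition (10) can be illustrated numerically: the spectral radius of the transition matrix $B$ computed from its eigenvalues matches the closed-form ratio and is below one (a sketch with synthetic data, not from the article):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 4))
b = A @ np.ones(4) + 0.05 * rng.standard_normal(30)
alpha = 0.5

s_A = np.linalg.svd(A, compute_uv=False)   # sigma_1 >= ... >= sigma_n
s2_bar = np.linalg.svd(np.column_stack([A, b]), compute_uv=False)[-1] ** 2

# Transition matrix B of scheme (8) and its spectral radius
B = (alpha + s2_bar) * np.linalg.inv(A.T @ A + alpha * np.eye(4))
rho = float(np.max(np.abs(np.linalg.eigvals(B))))

assert np.isclose(rho, (alpha + s2_bar) / (alpha + s_A[-1] ** 2))
assert rho < 1   # condition (10) holds, so the iteration converges
```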
It can be shown that the rate of convergence increases as $\alpha$ decreases and as the gap $\sigma_n^{2} - \bar{\sigma}_{n+1}^{2}$ grows, since both reduce the spectral radius $\rho(B)$.
Let us show that algorithms (8) and (9) lead to different condition numbers. The simple iteration method can be written as follows:
$x_{k+1} = B\, x_k + (A^{\top} A + \alpha I_n)^{-1} A^{\top} b$.     (11)
Formula (11) can be represented in the following form:
$x_{k+1} = B\, x_k + A_{\alpha}^{+} \tilde{b}$,
where $A_{\alpha} = \begin{pmatrix} A \\ \sqrt{\alpha}\, I_n \end{pmatrix}$, $\tilde{b} = \begin{pmatrix} b \\ 0 \end{pmatrix}$; $A_{\alpha}^{+}$ is the Moore-Penrose pseudoinverse of $A_{\alpha}$.
Since $\operatorname{rank} A_{\alpha} = n$, $A_{\alpha}^{+}$ can be calculated by the formula
$A_{\alpha}^{+} = (A_{\alpha}^{\top} A_{\alpha})^{-1} A_{\alpha}^{\top} = (A^{\top} A + \alpha I_n)^{-1} A_{\alpha}^{\top}$.
In this case, the problem corresponds to the classical form of the implicit method of simple iterations with the matrix $A^{\top} A + \alpha I_n$, whose condition number is
$\operatorname{cond}(A^{\top} A + \alpha I_n) = \dfrac{\sigma_1^{2} + \alpha}{\sigma_n^{2} + \alpha}$.
For the implicit method based on the SVD decomposition, the condition number is equal to the condition number of the matrix $A_{\alpha}$:
$\operatorname{cond} A_{\alpha} = \sqrt{\dfrac{\sigma_1^{2} + \alpha}{\sigma_n^{2} + \alpha}} < \operatorname{cond} A = \dfrac{\sigma_1}{\sigma_n}$.
4. Stopping rule for the implicit iterative algorithm. There is a large number of stopping rules for iterative regularized algorithms [28-30]. In this article, to stop algorithm (8) we use the restriction on the norm of the solution:
$\| x_k \|_2 \le \Delta$,     (12)
where $\Delta$ is the maximum allowable value of the Euclidean norm of the solution vector.
In contrast to Tikhonov regularization of total least squares [20], condition (12) is verified directly, without calculating auxiliary parameters.
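Rule (12) can be built directly into the iteration: the loop terminates as soon as the next iterate would violate the norm bound. A hedged sketch (the function name and the choice to return the last admissible iterate are assumptions, not from the article):

```python
import numpy as np

def implicit_tls_stopped(A, b, alpha, Delta, max_iter=10_000):
    """Run scheme (8) while ||x_k|| <= Delta (stopping rule (12))."""
    n = A.shape[1]
    s2 = np.linalg.svd(np.column_stack([A, b]), compute_uv=False)[-1] ** 2
    M = A.T @ A + alpha * np.eye(n)
    Atb = A.T @ b
    x = np.zeros(n)
    for _ in range(max_iter):
        x_next = np.linalg.solve(M, (alpha + s2) * x + Atb)
        if np.linalg.norm(x_next) > Delta:   # (12) would be violated
            break                            # keep the last admissible iterate
        x = x_next
    return x
```

Starting from $x_0 = 0$, the norms $\|x_k\|_2$ grow monotonically toward the norm of the fixed point, so the rule truncates the iteration exactly when the bound $\Delta$ is reached.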
5. Simulation results. The Regularization Tools package [31] was used to generate test cases: a matrix $A$ with prescribed singular values was generated, along with the true vector $\bar{x}$ and the right-hand side $b$. Gaussian white noise with zero mean and a given standard deviation was added to the matrix $A$ and the vector $b$.
Algorithm (8) was compared with the classical SVD-based TLS algorithm [12], the solution based on augmented systems [13], and regularized (Tikhonov) total least squares [20]:
$x_{\lambda} = \arg\min_{x} \left\{ \dfrac{\|A x - b\|_2^2}{1 + \|x\|_2^2} + \lambda \|x\|_2^2 \right\}$.     (13)
The generated matrix $A$ is very ill-conditioned. The regularization parameter $\lambda$ was selected from a given interval with a fixed step.
The algorithms were compared by the relative root mean square error (RMSE) of the solution:
$\delta = \dfrac{\|\hat{x} - \bar{x}\|_2}{\|\bar{x}\|_2} \cdot 100\,\%$.
The simulation results are presented in Table 1. Figure 1 shows the relative RMSE of solution (8) at the $k$-th iteration for various values of the parameter $\alpha$. Figure 2 shows the relative RMSE of solution (13) depending on the choice of the parameter $\lambda$.
Table 1: RMSE of the solution

Algorithm for estimating parameters | $\delta$, %
Algorithm (8) with $\alpha = \dots$ | 7.53
Algorithm (8) with $\alpha = 0.2045$ | 2.20
Algorithm (8) with $\alpha = \dots$ | 8.63
TLS [12] | 49.51
TLS [13] | 49.51
RTLS [20] | 17.73
Figure 1: RMSE of solution (8) at the $k$-th iteration for three values of the parameter $\alpha$
Figure 2: RMSE of solution (13) for various values of the parameter $\lambda$
Conclusion. The paper proposes a new implicit iterative algorithm for solving regularized total least squares problems. The simulation showed that the proposed algorithm achieves higher accuracy than classical total least squares algorithms and than the total least squares solution with Tikhonov regularization.
The proposed implicit iterative algorithm makes it possible to enforce a constraint on the norm of the solution vector without solving additional nonlinear equations.
The condition number of the problems solved at each iteration is smaller than the condition number of systems with Tikhonov regularization.
Competing interests. We have no competing interests.
Authors' responsibilities. Each author participated in the development of the concept of the article and in the writing of the manuscript. The authors bear full responsibility for submitting the final manuscript for publication. Each author approved the final version of the manuscript.
Funding. This work was supported by the Federal Agency of Railway Transport (projects nos. 122022200429-8, and 122022200432-8).
Acknowledgments. The authors thank the referees for careful reading of the paper and valuable suggestions and comments.
About the authors
Dmitriy V. Ivanov
Samara National Research University; Samara State University of Transport
Email: dvi85@list.ru
ORCID iD: 0000-0002-5021-5259
SPIN-code: 6672-4830
Scopus Author ID: 22937879800
http://www.mathnet.ru/person42123
Cand. Phys. & Math. Sci., Associate Professor, Dept. of Information Systems Security, Dept. of Mechatronics
Russian Federation, 34, Moskovskoye shosse, Samara, 443086; 2 B, Svobody str., Samara, 443066

Aleksandr I. Zhdanov
Samara State Technical University
Author for correspondence.
Email: ZhdanovAleksan@yandex.ru
ORCID iD: 0000-0001-6082-9097
SPIN-code: 5056-3555
Scopus Author ID: 7102747969
ResearcherId: E-1433-2014
http://www.mathnet.ru/person41724
Dr. Phys. & Math. Sci., Professor, Dept. of Applied Mathematics & Computer Science
Russian Federation, 244, Molodogvardeyskaya st., Samara, 443100

References
- Markovsky I. Bibliography on total least squares and related methods, Stat. Interface, 2010, vol. 3, no. 3, pp. 329–334. DOI: https://doi.org/10.4310/SII.2010.v3.n3.a6.
- Pintelon R., Schoukens J. System Identification: A Frequency Domain Approach. Piscataway, NJ, IEEE Press, 2012, xliv+743 pp. DOI: https://doi.org/10.1002/9781118287422.
- Pillonetto G., Chen T., Chiuso A., De Nicolao G., Ljung L. Regularized System Identification. Learning Dynamic Models from Data, Communications and Control Engineering. Cham, Springer, 2022, xxiv+377 pp. DOI: https://doi.org/10.1007/978-3-030-95860-2.
- Markovsky I., Willems J. C., Van Huffel S., Bart De Moor, Pintelon R. Application of structured total least squares for system identification and model reduction, IEEE Trans. Autom. Control, 2005, vol. 50, no. 10, pp. 1490–1500. DOI: https://doi.org/10.1109/TAC.2005.856643.
- Ivanov D. V. Identification of linear dynamic systems of fractional order with errors in variables based on an augmented system of equations, Vestn. Samar. Gos. Tekhn. Univ., Ser. Fiz.-Mat. Nauki [J. Samara State Tech. Univ., Ser. Phys. Math. Sci.], 2021, vol. 25, no. 3, pp. 508–518. EDN: RCYACI. DOI: https://doi.org/10.14498/vsgtu1854.
- Fu H., Barlow J. A regularized structured total least squares algorithm for high-resolution image reconstruction, Linear Algebra Appl., 2004, vol. 391, pp. 75–98. DOI: https://doi.org/10.1016/S0024-3795(03)00660-8.
- Mesarovic V. Z., Galatsanos N. P., Katsaggelos A. K. Regularized constrained total least squares image restoration, IEEE Trans. Image Process., 1995, vol. 4, no. 8, pp. 1096–1108. DOI: https://doi.org/10.1109/83.403444.
- Zhu W., Wang Y., Yao Y., Chang J., Graber H. L., Barbour R. L. Iterative total least-squares image reconstruction algorithm for optical tomography by the conjugate gradient method, J. Opt. Soc. Am. A, 1997, vol. 14, no. 4, pp. 799–807. DOI: https://doi.org/10.1364/josaa.14.000799.
- Zhu W., Wang Y., Zhang J. Total least-squares reconstruction with wavelets for optical tomography, J. Opt. Soc. Am. A, 1998, vol. 15, no. 10, pp. 2639–2650. DOI: https://doi.org/10.1364/josaa.15.002639.
- Lemmerling P., Mastronardi N., Van Huffel S. Efficient implementation of a structured total least squares based speech compression method, Linear Algebra Appl., 2003, vol. 366, pp. 295–315. DOI: https://doi.org/10.1016/S0024-3795(02)00465-2.
- Khassina E. M., Lomov A. A. Audio files compression with the STLS-ESM method, St. Petersburg State Polytechnical University Journal. Computer Science. Telecommunications and Control Systems, 2015, vol. 229, no. 5, pp. 88–96. EDN: VAWFWT. DOI: https://doi.org/10.5862/JCSTCS.229.9.
- Golub G. H., Van Loan C. An analysis of the total least squares problem, SIAM J. Numer. Anal., 1980, vol. 17, no. 6, pp. 883–893. DOI: https://doi.org/10.1137/0717073.
- Zhdanov A. I., Shamarov P. A. The direct projection method in the problem of complete least squares, Autom. Remote Control, 2000, vol. 61, no. 4, pp. 610–620. EDN: LGBGAF.
- Ivanov D., Zhdanov A. Symmetrical augmented system of equations for the parameter identification of discrete fractional systems by generalized total least squares, Mathematics, 2021, vol. 9, no. 24, 3250. EDN: QFMGJB. DOI: https://doi.org/10.3390/math9243250.
- Björck Å. Newton and Rayleigh quotient methods for total least squares problem, In: Recent Advances in Total Least Squares Techniques and Errors in Variables Modeling, Proceedings of the Second Workshop on Total Least Squares and Errors-in-Variables Modeling (Leuven, Belgium, August 21–24, 1996). Philadelphia, PA, USA, SIAM, 1997, pp. 149–160.
- Björck Å., Heggernes P., Matstoms P. Methods for large scale total least squares problems, SIAM J. Matrix Anal. Appl., 2000, vol. 22, no. 2, pp. 413–429. DOI: https://doi.org/10.1137/S0895479899355414.
- Fasino D., Fazzi A. A Gauss–Newton iteration for total least squares problems, BIT Numer. Math., 2018, vol. 58, no. 2, pp. 281–299. DOI: https://doi.org/10.1007/s10543-017-0678-5.
- Mohammedi A. Rational–Lanczos technique for solving total least squares problems, Kuwait J. Sci. Eng., 2001, vol. 28, no. 1, pp. 1–12.
- Fierro R. D., Golub G. H., Hansen P. C., O’Leary D. P. Regularization by truncated total least squares, SIAM J. Sci. Comp., 1997, vol. 18, no. 4, pp. 1223–1241. DOI: https://doi.org/10.1137/S1064827594263837.
- Golub G. H., Hansen P. C., O’Leary D. P. Tikhonov regularization and total least squares, SIAM J. Matrix Anal. Appl., 1999, vol. 21, no. 1, pp. 185–194. DOI: https://doi.org/10.1137/S0895479897326432.
- Lampe J., Voss H. Solving regularized total least squares problems based on eigenproblems, Taiwanese J. Math., 2010, vol. 14, no. 3A, pp. 885–909. DOI: https://doi.org/10.11650/twjm/1500405873.
- Sima D. M., Van Huffel S., Golub G. H. Regularized total least squares based on quadratic eigenvalue problem solvers, BIT Numer. Math., 2004, vol. 44, no. 4, pp. 793–812. DOI: https://doi.org/10.1007/s10543-004-6024-8.
- Lampe J., Voss H. Efficient determination of the hyperparameter in regularized total least squares problems, Appl. Numer. Math., 2012, vol. 62, no. 9, pp. 1229–1241. DOI: https://doi.org/10.1016/j.apnum.2010.06.005.
- Zhdanov A. I. Direct recurrence algorithms for solving the linear equations of the method of least squares, Comput. Math. Math. Phys., 1994, vol. 34, no. 6, pp. 693–701. EDN: VKRSPF.
- Vainiko G. M., Veretennikov A. Yu. Iteratsionnye protsedury v nekorrektno postavlennykh zadachakh [Iteration Procedures in Ill-Posed Problems]. Moscow, Nauka, 1986, 177 pp.
- Zhdanov A. I. Implicit iterative schemes based on singular decomposition and regularizing algorithms, Vestn. Samar. Gos. Tekhn. Univ., Ser. Fiz.-Mat. Nauki [J. Samara State Tech. Univ., Ser. Phys. Math. Sci.], 2018, vol. 22, no. 3, pp. 549–556. EDN: PJITAX. DOI: https://doi.org/10.14498/vsgtu1592.
- Zhdanov A. I. The solution of ill-posed stochastic linear algebraic equations by the maximum likelihood regularization method, USSR Comput. Math. Math. Phys., 1988, vol. 28, no. 5, pp. 93–96. DOI: https://doi.org/10.1016/0041-5553(88)90014-6.
- Gfrerer H. An a posteriori parameter choice for ordinary and iterated Tikhonov regularization of ill-posed problems leading to optimal convergence rates, Math. Comp., 1987, vol. 49, no. 180, pp. 507–522. DOI: https://doi.org/10.1090/S0025-5718-1987-0906185-4.
- Hämarik U., Tautenhahn U. On the monotone error rule for parameter choice in iterative and continuous regularization methods, BIT Numer. Math., 2001, vol. 41, no. 5, pp. 1029–1038. DOI: https://doi.org/10.1023/A:1021945429767.
- Tautenhahn U., Hämarik U. The use of monotonicity for choosing the regularization parameter in ill-posed problems, Inverse Probl., 1999, vol. 15, no. 6, pp. 1487–1505. DOI: https://doi.org/10.1088/0266-5611/15/6/307.
- Hansen P. C. Regularization tools version 4.0 for Matlab 7.3, Numer. Algorithms, 2007, vol. 46, no. 2, pp. 189–194. DOI: https://doi.org/10.1007/s11075-007-9136-9.