Eigenvalues and eigenvectors play an important role in many real-world applications, including control systems modeling, partial differential equations, data mining and clustering, chemistry, and vibration analysis, to name a few.
The main practical difficulty is that an eigenproblem might be ill-conditioned and hard to compute accurately even when the matrix itself is well-conditioned.
This stems from the fact that computing eigenvalues is equivalent to finding the roots of a polynomial, which is a provably and unavoidably ill-conditioned problem, even when the roots are well separated, as shown by J. H. Wilkinson [1-3].
More precisely, the conditioning of an eigenproblem depends on the clustering of the eigenvalues, their magnitudes, and the angles between the eigenvectors. All of these properties are inherent to the problem, and ill-conditioning, if present, cannot be easily alleviated or even detected beforehand. The only case in which the computation is considered stable is finding the eigenvalues of a symmetric matrix.
At the moment, the only reliable way of dealing with ill-conditioned eigenproblems is to use extended-precision computations. There is really no alternative, in contrast to solving systems of linear equations, where various preconditioners can be used to improve the conditioning.
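Wilkinson's own polynomial illustrates just how sensitive root-finding is. The following is a quick NumPy sketch (outside MATLAB, purely for illustration) that perturbs the x^19 coefficient of (x-1)(x-2)···(x-20) by 2^-23, as in Wilkinson's classic experiment [1]:

```python
import numpy as np

# Wilkinson's polynomial: p(x) = (x-1)(x-2)...(x-20), roots are 1..20.
roots_exact = np.arange(1, 21)
coeffs = np.poly(roots_exact)        # expand into monomial coefficients

# Perturb the x^19 coefficient (-210) by 2^-23, a relative change ~6e-10.
perturbed = coeffs.copy()
perturbed[1] -= 2.0 ** -23

computed = np.roots(perturbed)

# Distance of each computed root to the nearest original root.
dist = np.abs(computed[:, None] - roots_exact[None, :]).min(axis=1)
print(dist.max())                    # several roots move by O(1)
```

A perturbation in the tenth significant digit of one coefficient moves some roots off the real axis entirely, which is exactly why eigenvalue algorithms never form the characteristic polynomial explicitly.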
Here we want to show two examples of such problems and how the toolbox solves them in comparison to MATLAB.
Example 1. Grcar Matrix
Let’s consider a classic example of sensitive eigenvalues, the Grcar matrix [4-6]. It is composed purely of ±1 entries and has a special banded Toeplitz structure:

A = gallery('grcar',5)

A =
     1     1     1     1     0
    -1     1     1     1     1
     0    -1     1     1     1
     0     0    -1     1     1
     0     0     0    -1     1
Theory dictates that the Grcar matrix of any dimension has the same eigenvalues as its transpose.
Let’s validate this property using standard double-precision arithmetic in MATLAB, and compare the results with those of the Multiprecision Computing Toolbox:
% MATLAB floating-point precision
A = -gallery('grcar',150);
plot(eig(A),'k.'), hold on, axis equal
plot(eig(A'),'ro')

% Quadruple precision (34 decimal digits) computation.
mp.Digits(34);
A = mp(-gallery('grcar',150));
plot(eig(A),'k.'), hold on, axis equal
plot(eig(A'),'ro')
Although the condition number of the Grcar matrix is low, cond(A) = cond(A') = 3.61, MATLAB’s double-precision routines suffer from accuracy loss.
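The same double-precision failure can be reproduced outside MATLAB. Below is a NumPy sketch; the `grcar` helper is our own construction mirroring `gallery('grcar',n)` (the sign flip used in the MATLAB snippet is irrelevant to conditioning):

```python
import numpy as np

def grcar(n, k=3):
    """Grcar matrix: -1 on the subdiagonal, +1 on the diagonal
    and the first k superdiagonals (mirrors gallery('grcar',n))."""
    A = -np.eye(n, k=-1)
    for d in range(k + 1):
        A = A + np.eye(n, k=d)
    return A

A = grcar(150)
print(np.linalg.cond(A))             # ~3.6, the matrix is well-conditioned

ev  = np.linalg.eigvals(A)
evt = np.linalg.eigvals(A.T)         # identical to ev in exact arithmetic

# Largest distance from an eigenvalue of A.T to the nearest eigenvalue of A:
dist = np.abs(evt[:, None] - ev[None, :]).min(axis=1).max()
print(dist)                          # far above machine epsilon
```

Despite the tiny condition number of the matrix, the two computed spectra visibly disagree, confirming that the eigenvalues themselves, not the matrix, are ill-conditioned.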
Example 2. Noschese-Pasquini-Reichel Matrix
Another interesting case is the family of tridiagonal Toeplitz matrices:

            | δ   τ               |
            | σ   δ   τ           |
T(n;σ,δ,τ) =|     ⋱   ⋱   ⋱       |
            |         σ   δ   τ   |
            |             σ   δ   |

Such matrices arise in numerous applications, including the solution of ordinary and partial differential equations, time series analysis, regularization problems, etc. (see the references in [7] for more details).

The eigenvalues of T lie on a straight line in the complex plane and can be computed explicitly [7]:

λk = δ + 2√(στ)·cos(kπ/(n+1)),   k = 1, …, n
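The explicit formula λk = δ + 2√(στ)·cos(kπ/(n+1)) is easy to verify numerically for a small matrix, where double precision still suffices. A NumPy check (using the δ, σ, τ values from the example below, with n = 8):

```python
import numpy as np

# Tridiagonal Toeplitz matrix T(n; sigma, delta, tau):
# sigma on the subdiagonal, delta on the diagonal, tau on the superdiagonal.
n, sigma, delta, tau = 8, (4 + 3j) / 8, 16 - 3j, -5.0

T = (delta * np.eye(n, dtype=complex)
     + sigma * np.eye(n, k=-1)
     + tau   * np.eye(n, k=1))

computed = np.linalg.eigvals(T)

k = np.arange(1, n + 1)
exact = delta + 2 * np.sqrt(sigma * tau + 0j) * np.cos(k * np.pi / (n + 1))

# Match each exact eigenvalue to the nearest computed one.
err = np.abs(exact[:, None] - computed[None, :]).min(axis=1).max()
print(err)                           # tiny: the formula holds
```

(The choice of square-root branch does not matter, since cos(kπ/(n+1)) takes values symmetric about zero as k runs from 1 to n.)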
As the expression shows, transposition does not change the eigenvalues. However, if we apply a numerical method to find them and rely on finite-precision floating-point arithmetic, the accuracy of the solution might be completely lost.
For example, let’s verify the property eig(T) == eig(T.') for the following matrix, taken from [7]:
% MATLAB standard floating-point precision
n = 100;
T = toeplitz([16-3i, (4+3i)/8, zeros(1,n-2)], [16-3i, -5, zeros(1,n-2)]);
plot(eig(T),'k.'), hold on, axis equal
plot(eig(T.'),'ro')

% Multiprecision toolbox (50 decimal digits) computation.
mp.Digits(50);
n = 100;
T = toeplitz([mp('16-3i'),mp('(4+3i)/8'),zeros(1,n-2)], ...
             [mp('16-3i'),mp('-5'),zeros(1,n-2)]);
plot(eig(T),'k.'), hold on, axis equal
plot(eig(T.'),'ro')
The matrix itself is well-conditioned, cond(T) = cond(T.') = 1.81, but the double-precision results are far off.
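How far off can be measured directly against the explicit formula. A NumPy sketch in double precision (same δ, σ, τ as above, n = 100):

```python
import numpy as np

n, sigma, delta, tau = 100, (4 + 3j) / 8, 16 - 3j, -5.0
T = (delta * np.eye(n, dtype=complex)
     + sigma * np.eye(n, k=-1)
     + tau   * np.eye(n, k=1))

k = np.arange(1, n + 1)
exact = delta + 2 * np.sqrt(sigma * tau + 0j) * np.cos(k * np.pi / (n + 1))

computed = np.linalg.eigvals(T)

# Every computed eigenvalue should land on the line segment of exact ones;
# instead, many stray far from it.
err = np.abs(computed[:, None] - exact[None, :]).min(axis=1).max()
print(err)
```

The departure from the exact spectrum is many orders of magnitude above machine epsilon: the computed eigenvalues trace out a pseudospectral region rather than the true line segment.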
The irony here is that to estimate the conditioning of the eigenvalues we need to compute the eigenvectors, which are even more prone to accuracy loss and require more computational resources:
Figure 5. Condition numbers for the eigenvalues of T, computed with 50 digits of precision by the toolbox (green) and by MATLAB (blue). The difference is almost 30 orders of magnitude!
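For illustration, here is a small NumPy analogue of MATLAB's condeig. It uses the standard identity that, with unit-norm right eigenvectors (the columns of X, as returned by the solver), the condition number of the i-th eigenvalue equals the norm of the i-th row of inv(X). Even for a modest n = 30 the values are enormous:

```python
import numpy as np

def condeig(A):
    """Eigenvalue condition numbers kappa_i = ||y_i||*||x_i|| / |y_i^H x_i|.
    With unit-norm right eigenvectors (columns of X), the left eigenvectors
    are the rows of inv(X) and kappa_i is the norm of the i-th row."""
    _, X = np.linalg.eig(A)
    return np.linalg.norm(np.linalg.inv(X), axis=1)

n, sigma, delta, tau = 30, (4 + 3j) / 8, 16 - 3j, -5.0
T = (delta * np.eye(n, dtype=complex)
     + sigma * np.eye(n, k=-1)
     + tau   * np.eye(n, k=1))

kappa = condeig(T)
print(kappa.max())    # huge already for n = 30; grows exponentially with n
```

In double precision these estimates saturate near 1/eps for larger n, which is precisely why the MATLAB curve in Figure 5 flattens out while the 50-digit curve keeps growing.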
The toolbox provides the following eigen-related functions capable of working with arbitrary precision:
eig, condeig, schur, ordeig, ordqz, ordschur, poly, polyeig, qz, hess, balance, cdf2rdf, rsf2csf.
Every function can be used as a drop-in replacement for the analogous standard routine, as all of them support the same options and computation modes. For illustration, all of the following calling forms of eig are covered:
e = eig(A)
[V,D] = eig(A)
[V,D,W] = eig(A)
[___] = eig(A,balanceOption)
[___] = eig(A,B,algorithm)
[___] = eig(___,eigvalOption)
e = eig(A,B)
[V,D] = eig(A,B)
[V,D,W] = eig(A,B)
Simple usage example:
>> mp.Digits(34);                     % setup working precision
>> A = mp(eye(2));                    % convert matrices to extended precision
>> B = mp([3 6; 4 8]);                % B is singular to make example interesting :)
>> [V,D] = eig(A,B)                   % solve generalized eigenproblem
V =
        -0.75           -1
           -1          0.5
D =
    0.09090909090909090909090909090909092                                        0
                                        0                                      Inf
>> A*V(:,1) - D(1,1)*B*V(:,1)         % check accuracy of the first eigen-pair
ans =
    9.629649721936179265279889712924637e-35
    1.925929944387235853055977942584927e-34
Most of the functions are meta-algorithms which analyze the inputs and apply the numerical method best matching the particular matrix. For example, in the case of the standard eigenproblem, eig() uses the multi-shift QR algorithm [10-12] for non-symmetric matrices, and the MRRR [13] or divide & conquer algorithms for Hermitian, symmetric, and symmetric tridiagonal matrices.
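The payoff of such dispatching is easy to see in any environment. A NumPy sketch (eigvalsh calls LAPACK's dedicated symmetric drivers, analogous to the symmetric path described above) shows that for a symmetric matrix the specialized solver and the general one agree to machine precision, since the symmetric eigenproblem is the well-conditioned case:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((200, 200))
S = (S + S.T) / 2                    # make the matrix symmetric

# General non-symmetric solver vs. the dedicated symmetric driver.
ev_general   = np.sort(np.linalg.eigvals(S).real)
ev_symmetric = np.linalg.eigvalsh(S)  # guaranteed real, backward stable

print(np.abs(ev_general - ev_symmetric).max())  # agreement near machine eps
```

The symmetric driver is also faster and guarantees real eigenvalues, which is why meta-algorithms route symmetric inputs to it automatically.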
References

1. J. H. Wilkinson (1984). The perfidious polynomial. Studies in Numerical Analysis.
2. J. H. Wilkinson (1959). The evaluation of the zeros of ill-conditioned polynomials. Part I. Numerische Mathematik 1:150-166.
3. J. H. Wilkinson (1963). Rounding Errors in Algebraic Processes. Englewood Cliffs, New Jersey: Prentice Hall.
4. J. F. Grcar (1989). Operator coefficient methods for linear equations. Tech. report SAND89-8691, Sandia National Labs.
5. A. Varga (2008). Solving Control Problems – a Numerical Perspective. IEEE International Symposium on Intelligent Control.
6. L. N. Trefethen (1992). Pseudospectra of matrices. "Numerical Analysis 1991" (Dundee, 1991), Longman Sci. Tech., Harlow, 234-266.
7. S. Noschese, L. Pasquini, L. Reichel (2013). Tridiagonal Toeplitz matrices: Properties and novel applications. Numerical Linear Algebra with Applications 20(2):302.
8. G. D. Smith (1978). Numerical Solution of Partial Differential Equations, 2nd ed. Clarendon Press, Oxford.
9. G. H. Golub, C. F. Van Loan (2012). Matrix Computations, 4th ed. Johns Hopkins University Press.
10. K. Braman, R. Byers, R. Mathias (2002). The multi-shift QR algorithm. Part I: Maintaining well-focused shifts and level 3 performance. SIAM Journal on Matrix Analysis and Applications 23:929-947.
11. K. Braman, R. Byers, R. Mathias (2002). The multi-shift QR algorithm. Part II: Aggressive early deflation. SIAM Journal on Matrix Analysis and Applications 23:948-973.
12. D. Kreßner (2004). Numerical methods and software for general and structured eigenvalue problems. PhD thesis, TU Berlin, Institut für Mathematik.
13. I. S. Dhillon, B. N. Parlett, C. Vömel (2006). The design and implementation of the MRRR algorithm. ACM Transactions on Mathematical Software 32:533-560.