by Pavel Holoborodko on October 24, 2016
MATLAB allows flexible control over the visibility of warning messages. Some, or even all, messages can be prevented from appearing on the screen with the warning command.
A little-known fact is that the status of some warnings may be used to change the execution path of algorithms. For example, if the warning 'MATLAB:nearlySingularMatrix' is disabled, then the linear system solver (the MLDIVIDE operator) might skip estimation of the reciprocal condition number, which is used precisely to detect nearly singular matrices. Exploiting this trick allows a 20%-50% boost in solver performance, since rcond estimation is a time-consuming process.
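For illustration, here is a minimal MATLAB-side sketch of querying and toggling the warning status around a solve; the matrix sizes and the timing pattern are placeholders, not taken from the article:
>> s = warning('query', 'MATLAB:nearlySingularMatrix'); % s.state is 'on' or 'off'
>> warning('off', 'MATLAB:nearlySingularMatrix');       % disable the warning
>> A = rand(2000); b = rand(2000,1);                    % example dense system
>> tic; x = A\b; toc                                    % solve with the warning disabled
>> warning(s.state, 'MATLAB:nearlySingularMatrix');     % restore the original state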

Therefore it is important to be able to retrieve the status of warnings in MATLAB, especially in MEX libraries targeted at improved performance. Unfortunately, MATLAB provides no simple way to check the status of a warning message from a MEX module.
This article outlines two workarounds for the issue.
Read More
by Pavel Holoborodko on October 20, 2016
Introduction
Following up on our previous article on the architecture of the linear system solver, we decided to outline the structure of the eigensolver implemented in our toolbox. As with the linear system solver, we have a plethora of algorithms targeted at matrices with specific properties. The toolbox analyses the input matrix and automatically selects the best-matching method to find the eigendecomposition.
Standard eigenproblem, EIG(A)
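As a quick usage illustration (the matrix and the residual check below are our own example, not taken from the post), the overloaded EIG is called with exactly the same syntax as the built-in one:
>> A = mp(rand(500));   % multiprecision random matrix (size is arbitrary)
>> [V,D] = eig(A);      % toolbox eigensolver is dispatched automatically for 'mp' input
>> norm(A*V - V*D, 1)   % residual of the computed eigendecomposition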

Read More
by Pavel Holoborodko on October 7, 2016
The computational complexity of direct algorithms for solving linear systems of equations is O(n^3). The only way to reduce the enormous effect of this cubic growth is to take advantage of various matrix properties and use specialized solvers.
The toolbox follows this strategy by relying on a poly-solver which automatically detects matrix properties and applies the best-matching algorithm for the particular case. In this post we outline our solver architecture for full/dense matrices.
Poly-solver flowchart for dense input matrices:

The toolbox analyses the structure of the input matrix: it computes the bandwidth, checks for symmetry/Hermitian structure, and determines a permutation that converts the matrix to a trivial case (diagonal or triangular) when possible. The solver then selects the most appropriate algorithm depending on the matrix properties. Special attention is paid to n-diagonal (banded) matrices, frequently encountered in the solution of PDEs.
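From the user's perspective all of this stays behind the familiar backslash operator. A minimal sketch (the matrix below is a hypothetical example, chosen only to exercise the triangular branch):
>> n = 1000;                              % arbitrary size
>> A = mp(triu(rand(n))) + n*mp(eye(n));  % well-conditioned upper triangular matrix
>> b = mp(rand(n,1));
>> x = A\b;                               % poly-solver detects the triangular structure automatically
>> norm(A*x - b, 1)/norm(b, 1)            % relative residual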
Read More
by Pavel Holoborodko on September 20, 2016
Recent papers citing the toolbox:
- W. Gaddah, A higher-order finite-difference approximation with Richardson’s extrapolation to the energy eigenvalues of the quartic, sextic and octic anharmonic oscillators, European Journal of Physics, Volume 36, Number 3, 2015.
- M. Rahmanian, R.D. Firouz-Abadi, E. Cigeroglu, Dynamics and stability of conical/cylindrical shells conveying subsonic compressible fluid flows with general boundary conditions, November 11, 2016.
- S. Klickstein, A. Shirin, F. Sorrentino, Energy Scaling of Targeted Optimal Control of Complex Networks, Department of Mechanical Engineering, University of New Mexico, November 9, 2016.
- V. Druskin, S. Güttel, L. Knizhnerman, Compressing variable-coefficient exterior Helmholtz problems via RKFIT, November 2016.
- G. Wright, B. Fornberg, Stable computations with flat radial basis functions using vector-valued rational approximations, arXiv:1610.05374, October 17, 2016.
- J. Reeger, B. Fornberg, L. Watts, Numerical quadrature over smooth, closed surfaces, October 5, 2016.
- B. Fornberg, Fast calculation of Laurent expansions for matrix inverses. Journal of Computational Physics, September 15, 2016.
- L. Yan, J.P. Bouchaud, M. Wyart, Edge Mode Amplification in Disordered Elastic Networks, arXiv:1608.07222, August 25, 2016.
- N. Higham, P. Kandolf, Computing the Action of Trigonometric and Hyperbolic Matrix Functions, arXiv:1607.04012, July 14, 2016.
Previous issues: digest v.7, digest v.6, digest v.5, digest v.4, digest v.3 and digest v.2.
by Pavel Holoborodko on July 21, 2016
One of the main design goals of the toolbox is the ability to run existing scripts in extended precision with minimal changes to the code itself. Thanks to object-oriented programming this goal is accomplished to a great extent. For example, MATLAB decides which function to call (its own or the toolbox's) based on the type of the input parameter, automatically and completely transparently to the user:
>> A = rand(100); % create double-precision random matrix
>> B = mp(rand(100)); % create multi-precision random matrix
>> [U,S,V] = svd(A); % built-in functions are called for double-precision matrices
>> norm(A-U*S*V',1)
ans =
2.35377689561389e-13
>> [U,S,V] = svd(B); % Same syntax, but now functions from toolbox are used
>> norm(B-U*S*V',1)
ans =
3.01016831776648753720608552494953562e-31
The syntax stays the same, allowing researchers to port code to multiprecision almost without modifications.
However, there are several situations which are not handled automatically, and it is not obvious how to avoid manual changes:
- Conversion of constants, e.g. 1/3 -> mp('1/3'), pi -> mp('pi'), eps -> mp('eps').
- Creation of basic arrays, e.g. zeros(...) -> mp(zeros(...)), ones(...) -> mp(ones(...)).
In this post we want to show a technique for handling these situations and writing purely precision-independent code in MATLAB, so that no modifications are required at all. The code will run with the standard numeric types 'double'/'single' as well as with the multiprecision numeric type 'mp' from the toolbox.
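To give the flavour of such precision-independent code, here is a minimal sketch of one common pattern; the function name and the prototype-argument idea are our illustration, not necessarily the exact technique described further in the post:
function s = basel_sum(n, prototype)
% Precision-independent partial sum of 1/k^2.
% All intermediate values inherit the numeric type of 'prototype',
% so the same code runs unchanged for 'double', 'single' and 'mp' inputs.
    one = prototype*0 + 1;   % the value 1 in the working precision
    s = 0*one;
    for k = 1:n
        s = s + one/(k*k);   % division is carried out in the working precision
    end
end
The same function then runs in any precision:
>> basel_sum(1000, 1)        % standard double precision
>> basel_sum(1000, mp(1))    % extended precision, identical code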
Read More
by Pavel Holoborodko on July 13, 2016
Recent works citing the toolbox:
- D. Tkachenko, Global Identification in DSGE Models Allowing for Indeterminacy, July 12, 2016.
- T. Ogita, Y. Kobayashi, Accurate and efficient algorithm for solving ill-conditioned linear systems by preconditioning methods, Nonlinear Theory and Its Applications, IEICE Vol. 7 (2016) No. 3 pp. 374-385, July 1, 2016.
- T. Ogita, K. Aishima, Iterative Refinement for Symmetric Eigenvalue Decomposition Adaptively Using Higher-Precision Arithmetic, The University of Tokyo, June 2016.
Please note that the timing results in the two papers above were obtained with version 3.8.5.9059 of the toolbox. Newer versions (>3.8.8) include a multi-threaded MRRR algorithm for symmetric eigenproblems, which is 4-6 times faster depending on the matrix size. The speed of the linear system solver has also been improved by 2-4 times.
- V. Oryshchenko, Exact mean integrated squared error of kernel distribution function estimators, The University of Manchester, June 22, 2016.
- L. Liu, D. W. Matolak, C. Tao, Y. Li, Sum-Rate Capacity Investigation of Multiuser Massive MIMO Uplink Systems in Semi-Correlated Channels, 2016 IEEE 83rd Vehicular Technology Conference, 15-18 May 2016.
Previous issues: digest v.6, digest v.5, digest v.4, digest v.3 and digest v.2.
by Pavel Holoborodko on July 2, 2016
There seems to be growing interest in how to detect a user interrupt (Ctrl-C) inside a compiled MEX library. The main difficulty is that The MathWorks provides no official API to recognize and handle this situation.
As a workaround, Wotao Yin found the undocumented function utIsInterruptPending, which can be used to check whether Ctrl-C was pressed. The most obvious usage pattern is to include calls to this function in lengthy computational code (loops, etc.) and exit the computation if needed. A collection of various improvements on using utIsInterruptPending can be found in a recent post by Yair Altman.
Unfortunately, despite all these efforts, the most important issue remains unaddressed to date: asynchronous handling of Ctrl-C.
In order to respond to a user interrupt, the source code of the MEX module has to be changed to include utIsInterruptPending calls. Every sub-module, every loop of the code needs to be revised. This is not only a time-consuming and error-prone process, it also makes pure computational code dependent on the MEX API.

Most importantly, such modifications are impossible if the MEX module uses third-party libraries whose source code is not available.
The ideal solution would be to avoid any changes to the computational code in MEX at all. Here we propose one way to do so.
Read More
by Pavel Holoborodko on May 19, 2016
The new version of the toolbox includes many updates released over the course of six months. We focused on speed, better support for sparse matrices, special functions and overall stability. Upgrading to the newest version is highly recommended, as it contains critical updates for all platforms.
Short summary of changes:
- The Core
by Pavel Holoborodko on May 13, 2016
Recent works citing the toolbox:
- J. Rashidinia, G.E. Fasshauer, M. Khasi, A stable method for the evaluation of Gaussian radial basis function solutions of interpolation and collocation problems, Computers & Mathematics with Applications, Available online 21 May 2016.
- M. Aprahamian, Theory and Algorithms for Periodic Functions of Matrices, PhD Thesis, The University of Manchester, May 16, 2016.
- M. Berljafa, S. Güttel, The FEAST algorithm for Hermitian eigenproblems, Accessed on May 12, 2016.
- S. Melkoumian, B. Protas, Drift Due to Two Obstacles in Different Arrangements, Journal of Theoretical and Computational Fluid Dynamics, May 6, 2016, Pages 1–14.
- J. Lehtonen, G. A. Parker, L. Schärer, Why anisogamy drives ancestral sex roles, Evolution: International Journal of Organic Evolution, May 5, 2016.
- M. Rahmanian, R.D. Firouz-Abadi, E. Cigeroglu, Free vibrations of moderately thick truncated conical shells filled with quiescent fluid, Journal of Fluids and Structures, Volume 63, May 2016, Pages 280–301.
- I. S. Klickstein, A. Shirin, F. Sorrentino, Feasible Strategies to Target Complex Networks, Department of Mechanical Engineering, University of New Mexico, April 14, 2016.
- Z. Qu, D. Tkachenko, Global Identification in DSGE Models Allowing for Indeterminacy, April 5, 2016.
Previous issues: digest v.5, digest v.4, digest v.3 and digest v.2.
by Pavel Holoborodko on May 12, 2016
Introduction
In previous posts we studied the accuracy of computing the modified Bessel functions K1(x), K0(x), I0(x) and I1(x). Despite the fact that modified Bessel functions are easy to compute (they are monotonic and do not cross the x-axis), we saw that MATLAB provides accuracy much lower than expected for double precision. Please refer to those pages for more details.
Today we investigate how accurately MATLAB computes the Bessel functions of the first and second kind, Jn(x) and Yn(x), in double precision. Along the way, we also check the accuracy of commonly used open-source libraries.

With extended-precision routines at hand, checking the accuracy of any function takes only a few commands:
% Check accuracy of sine using 1M random points in (0, 16)
>> mp.Digits(34);
>> x = 16*(rand(1000000,1));
>> f = sin(x); % call built-in function for double precision sine
>> y = sin(mp(x)); % compute sine values in quadruple precision
>> z = abs(1-f./y); % element-wise relative error
>> max(z./eps) % max. relative error in terms of 'eps'
ans =
0.5682267303295349594044472141263213
>> -log10(max(z)) % number of correct decimal digits in result
ans =
15.89903811472552788729580391380739
The computed sine values have ~15.9 correct decimal digits and differ from the 'true' function values by ~0.5*eps. This is the highest accuracy we can expect from double-precision floating-point arithmetic.
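The same template carries over to the Bessel functions studied here. For instance, assuming besselj is overloaded for 'mp' arguments (the test interval and sample count below are arbitrary):
>> mp.Digits(34);
>> x = 16*rand(1000000,1);    % random points in (0, 16)
>> f = besselj(0, x);         % built-in double-precision J0
>> y = besselj(0, mp(x));     % reference values in quadruple precision
>> max(abs(1 - f./y)/eps)     % max. relative error in terms of 'eps'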
Read More