## Architecture of linear systems solver

October 7, 2016

The computational complexity of direct algorithms for solving linear systems of equations is O(n³). The only way to reduce the enormous effect of cubic growth is to take advantage of various matrix properties and use specialized solvers.

The toolbox follows this strategy by relying on a poly-solver, which automatically detects matrix properties and applies the best-matching algorithm for the particular case. In this post we outline our solver architecture for full/dense matrices.

Poly-solver flowchart for dense input matrices:

The toolbox analyses the structure of the input matrix: it computes the bandwidth, checks for symmetry/Hermitian structure, and determines a permutation that converts the matrix to a trivial case (diagonal or triangular) if one exists. The solver then selects the most appropriate algorithm for the detected properties. Special attention is paid to n-diagonal (banded) matrices, which are frequently encountered in the solution of PDEs.
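The selection logic described above can be sketched in plain MATLAB. This is an illustrative mock-up, not the toolbox's actual internals; the function name `polysolve` and the chosen fallbacks are assumptions for the sake of the example:

```matlab
% Hypothetical sketch of property-driven solver dispatch.
function x = polysolve(A, b)
    if isdiag(A)                      % trivial case: diagonal matrix
        x = b ./ diag(A);
    elseif istriu(A) || istril(A)     % trivial case: triangular matrix
        x = A \ b;                    % back/forward substitution
    elseif isbanded(A, 1, 1)          % tridiagonal and similar banded cases
        x = A \ b;                    % banded solver (e.g. LAPACK *gbsv)
    elseif ishermitian(A)
        [R, p] = chol(A);             % try Cholesky factorization first
        if p == 0
            x = R \ (R' \ b);         % positive definite: Cholesky solve
        else
            x = A \ b;                % indefinite: symmetric (LDL') solve
        end
    else
        x = A \ b;                    % general case: LU with pivoting
    end
end
```

The real poly-solver additionally analyses bandwidth and permutations before dispatching, as described above; the sketch only shows the overall cascade from cheap checks to the general LU fallback.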

## Citations Digest v.8

September 20, 2016

Recent papers citing the toolbox:

Previous issues: digest v.7, digest v.6, digest v.5, digest v.4, digest v.3 and digest v.2.

## How to write precision independent code in MATLAB

July 21, 2016

One of the main design goals of the toolbox is the ability to run existing scripts in extended precision with minimal changes to the code itself. Thanks to object-oriented programming, this goal is accomplished to a great extent. For example, MATLAB decides which function to call (its own or the toolbox's) based on the type of the input parameter, automatically and completely transparently to the user:

```
>> A = rand(100);     % create double-precision random matrix
>> B = mp(rand(100)); % create multi-precision random matrix

>> [U,S,V] = svd(A);  % built-in functions are called for double-precision matrices
>> norm(A-U*S*V',1)
ans =
     2.35377689561389e-13

>> [U,S,V] = svd(B);  % same syntax, but now functions from toolbox are used
>> norm(B-U*S*V',1)
ans =
     3.01016831776648753720608552494953562e-31
```

The syntax stays the same, allowing researchers to port code to multi-precision almost without modification.

However, there are several situations that are not handled automatically, and it is not obvious how to avoid manual changes:

• Conversion of constants, e.g. `1/3 -> mp('1/3')`, `pi -> mp('pi')`, `eps -> mp('eps')`.
• Creation of basic arrays, e.g. `zeros(...) -> mp(zeros(...))`, `ones(...) -> mp(ones(...))`.

In this post we want to show a technique for handling these situations and writing truly precision-independent code in MATLAB, so that no modifications are required at all. The code will run with the standard numeric types `'double'/'single'` as well as with the multi-precision numeric type `'mp'` from the toolbox.
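One common approach, shown here as a minimal sketch rather than the post's actual recipe, is to pass a prototype scalar of the working precision into the algorithm and derive every constant and array from it through arithmetic, so that type promotion does the rest:

```matlab
% Precision-independent trapezoidal approximation of pi = integral of
% 4/(1+x^2) over [0,1]. The argument 'one' is the value 1 in the working
% precision: 1 (double), single(1) or mp(1).
function p = trap_pi(n, one)
    h = one / n;                % step size, computed in the working precision
    x = (0:n) * h;              % nodes inherit the type through multiplication
    w = 4 ./ (1 + x.^2);        % integrand values
    p = h * (sum(w) - (w(1) + w(end))/2);   % composite trapezoidal rule
end
```

Calling `trap_pi(1000, 1)` runs entirely in double precision, while `trap_pi(1000, mp(1))` runs the identical code in multi-precision: no constants are hard-coded in double, so nothing needs to change.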

## Citations Digest v.7

July 13, 2016

Recent works citing the toolbox:

Previous issues: digest v.6, digest v.5, digest v.4, digest v.3 and digest v.2.

## DevNotes #3. Asynchronous Handling of Ctrl-C in MEX

July 2, 2016

There seems to be growing interest in how to detect a user interrupt (`Ctrl-C`) inside a compiled MEX library. The main difficulty is that The MathWorks provides no official API to recognize and handle the situation.

As a workaround, Wotao Yin found the undocumented function `utIsInterruptPending`, which can be used to check whether `Ctrl-C` was pressed. The most obvious usage pattern is to include calls to this function in lengthy computational code (loops, etc.) and exit the computation if needed. A collection of improvements on using `utIsInterruptPending` can be found in Yair Altman’s recent post.

Unfortunately, despite all these efforts, the most important issue remains unaddressed to date: asynchronous handling of Ctrl-C.

To respond to a user interrupt, the source code of the MEX module has to be changed to include `utIsInterruptPending` calls. Every sub-module, every loop in the code needs to be revised. This is not only a time-consuming and error-prone process; it also makes pure computational code dependent on the MEX API.

Most importantly, such modifications are impossible if the MEX module uses third-party libraries whose source code is not available.

The ideal solution would avoid any changes to the computational code in the MEX module. Here we propose one way to do so.

## Version 3.9.9 Release Notes

May 19, 2016

The new version of the toolbox includes numerous updates released over the course of six months. We focused on speed, better support for sparse matrices, special functions, and stability. Upgrading to the newest version is highly recommended, as it contains critical updates for all platforms.

Short summary of changes:

1. The Core
• All elementary mathematical functions have been heavily optimized for quadruple precision and now support sparse matrices. This results in a serious performance boost:
```
% < 3.9.9:
>> A = 100*mp(rand(1000,1000)-0.5);
>> B = 100*mp(rand(1000,1000)-0.5);

>> tic; atan2(A,B); toc;
Elapsed time is 18.056319 seconds.

>> tic; log(A); toc;
Elapsed time is 31.506463 seconds.
```
```
% 3.9.9
>> A = 100*mp(rand(1000,1000)-0.5);
>> B = 100*mp(rand(1000,1000)-0.5);

>> tic; atan2(A,B); toc;
Elapsed time is 0.262123 seconds. % ~70 times faster

>> tic; C = log(A); toc;
Elapsed time is 0.843535 seconds. % ~37 times faster
```

This improvement covers all elementary functions and the majority of special functions. Please note that all applicable functions handle branch cuts correctly and support signed zeros, in contrast to `MATLAB`.

## Citations Digest v.6

May 13, 2016

Recent works citing the toolbox:

Previous issues: digest v.5, digest v.4, digest v.3 and digest v.2.

## Accuracy of Bessel Functions in MATLAB

May 12, 2016

## Introduction

In previous posts we studied the accuracy of computing the modified Bessel functions `K1(x)`, `K0(x)`, `I0(x)` and `I1(x)`. Despite the fact that modified Bessel functions are easy to compute (they are monotonic and do not cross the x-axis), we saw that MATLAB delivers accuracy much lower than expected for `double` precision. Please refer to those posts for more details.

Today we will investigate how accurately MATLAB computes the Bessel functions of the first and second kind, `Jn(x)` and `Yn(x)`, in double precision. Along the way, we will also check the accuracy of commonly used open-source libraries.

With extended-precision routines at hand, checking the accuracy of any function takes only a few commands:

```
% Check accuracy of sine using 1M random points on (0, 16]
>> mp.Digits(34);
>> x = 16*(rand(1000000,1));

>> f = sin(x);      % call built-in function for double precision sine
>> y = sin(mp(x));  % compute sine values in quadruple precision
>> z = abs(1-f./y); % element-wise relative error

>> max(z./eps)      % max. relative error in terms of 'eps'
ans =
     0.5682267303295349594044472141263213

>> -log10(max(z))   % number of correct decimal digits in result
ans =
     15.89903811472552788729580391380739
```

The computed sine values have ~`15.9` correct decimal digits and differ from the ‘true’ function values by ~`0.5*eps`. This is the highest accuracy we can expect from double-precision floating-point arithmetic.
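The same recipe applies directly to the Bessel functions under study. A sketch, assuming the toolbox provides `besselj` for `mp` arguments so it can serve as the quadruple-precision reference:

```matlab
% Check accuracy of MATLAB's double-precision J0 against a
% quadruple-precision reference, on 1M random points in (0, 16].
mp.Digits(34);
x = 16*rand(1000000, 1);

f = besselj(0, x);        % built-in double-precision J0
y = besselj(0, mp(x));    % quadruple-precision reference values
z = abs(1 - f./y);        % element-wise relative error

max(z./eps)               % max. relative error in units of 'eps'
-log10(max(z))            % number of correct decimal digits
```

Substituting `bessely` (or other orders `n`) into the same template measures the remaining functions discussed in this post.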

## Branch Cuts and Signed Zeros in MATLAB

April 28, 2016

Signed zeros are an important part of the `IEEE 754` standard for floating-point arithmetic[1], commonly supported in hardware and in programming languages (C++[2], C[3], FORTRAN[4], etc.).

One of the areas where signed zeros play an essential role is the correct handling of branch cuts of special functions. In fact, this was one of the reasons why “negative zeros” were introduced in the first place[5].

The very unpleasant surprise is that MATLAB has poor support for this important feature.

I. Technically, MATLAB allows the user to create negative zeros:

```
>> x = -0
x =
     0

>> 1/x
ans =
  -Inf
```

The minus sign is not displayed in front of the zero (a well-known MATLAB display issue), but the second operation shows that x is indeed a negative zero.

II. (Real zeros) The sign of a real zero is ignored in operations where it plays an important role:

```
% MATLAB/Double precision
>> atan(+0-1.5*1i)
ans =
      1.5707963267949 - 0.80471895621705i
>> atan(-0-1.5*1i)
ans =
      1.5707963267949 - 0.80471895621705i
```
```
% MATLAB/Symbolic Math Toolbox (VPA)
>> atan(vpa('+0-1.5*i'))
ans =
-1.570796327 - 0.8047189562 i
>> atan(vpa('-0-1.5*i'))
ans =
-1.570796327 - 0.8047189562 i
```

In both cases the sign of the real zero is ignored, and most remarkably, different parts of MATLAB do not even agree on the default sign of zero (VPA assumes the zero is negative, the core functions assume it is positive).
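Note that the literal expression `-0-1.5*1i` relies on the sign of zero surviving the arithmetic; `complex()` builds the value with an explicit signed-zero real part. A small sketch of what correct branch-cut behavior would look like (expected results are not shown, since in current MATLAB both calls return the same value):

```matlab
% complex() preserves the sign of a zero real part explicitly.
xp = complex(+0, -1.5);   % +0 - 1.5i, right side of the branch cut
xn = complex(-0, -1.5);   % -0 - 1.5i, left side of the branch cut

% atan has branch cuts on the imaginary axis for |imag(z)| > 1.
% With correct signed-zero handling, the two results should differ
% in the sign of their real part (approximately +pi/2 vs -pi/2).
atan(xp)
atan(xn)
```

This is exactly the behavior mandated for branch cuts in languages with proper `IEEE 754` signed-zero support, and the behavior the toolbox implements for its `mp` type.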

## Performance of Elementary Functions

April 27, 2016

UPDATE (April 22, 2017): Timings for Mathematica 11.1 have been added to the table, thanks to a test script contributed by Bor Plestenjak. I suggest taking a look at his excellent toolbox for multiparameter eigenvalue problems – MultiParEig.

UPDATE (November 17, 2016): All timings in the table have been updated to reflect speed improvements in the new version of the toolbox (`4.3.0`). The toolbox now computes elementary functions using multi-core parallelism. We also included timings for the latest version of `MATLAB`, `2016b`.

UPDATE (June 1, 2016): The initial version of this post stated that the newest version of `MATLAB R2016a` uses the `MAPLE` engine for variable-precision arithmetic (instead of `MuPAD` as in previous versions). After more detailed checks we found that this is not true. As it turned out, `MAPLE 2016` silently replaced the `VPA` functionality of `MATLAB` during installation. Thus we unknowingly tested the `MAPLE Toolbox for MATLAB` instead of the `MathWorks Symbolic Math Toolbox`. We apologize for the misinformation. The post now provides correct comparison results with `Symbolic Math Toolbox/VPA`.

Thanks to Nick Higham, Massimiliano Fasi and Samuel Relton for their help in finding this mistake!

From the very beginning we have focused on improving the performance of matrix computations, linear algebra, solvers and other high-level algorithms (e.g. the 3.8.0 release notes).

With time, as the speed of the high-level algorithms increased, elementary functions started appearing near the top of the hot-spot list more and more frequently. For example, the main bottleneck of the multiquadric collocation method in extended precision was the element-wise power function (`.^`).

Thus we decided to polish our library for computing elementary functions. Here we present intermediate results of this work and the traditional comparison with the latest `MATLAB R2016b` (Symbolic Math Toolbox/Variable Precision Arithmetic), `MAPLE 2016` and `Wolfram Mathematica 11.1.0.0`.

Timings of the logarithm and power functions in `3.9.4.10481`:

```
>> mp.Digits(34);
>> A = mp(rand(2000)-0.5);
>> B = mp(rand(2000)-0.5);

>> tic; C = A.^B; toc;
Elapsed time is 67.199782 seconds.

>> tic; C = log(A); toc;
Elapsed time is 22.570701 seconds.
```

Speed of the same functions after optimization, in `4.3.0.12057`:

```
>> mp.Digits(34);
>> A = mp(rand(2000)-0.5);
>> B = mp(rand(2000)-0.5);

>> tic; C = A.^B; toc;  % 130 times faster
Elapsed time is 0.514553 seconds.

>> tic; C = log(A); toc; % 95 times faster
Elapsed time is 0.238416 seconds.
```

The toolbox now computes 4 million logarithms in quadruple precision (including for negative arguments) in less than a second!