Have you considered using compression to solve huge linear systems of equations? How can lossy compression speed up numerical solvers without affecting accuracy? Do you need a compression scheme suitable for GPU(s)?
Well, the good news is that we have developed a matrix compression scheme that you can use on GPUs. We call it VCRS – Very Compressed Row Storage. In this post, we first focus on why we need matrix compression. Then we describe what VCRS is and how to use it for compression. Finally, we give a few examples from real-world applications.
Why matrix compression?
If you are doing modelling or simulation of a physical process, most of the time you end up with differential equations describing this process. Very often we can’t solve these differential equations analytically in continuous space. Therefore, we need to discretise them and solve numerically. Discretisation can be viewed as the representation of your differential equation as a system of linear equations in matrix form

A x = b,

where A is the matrix, x is the solution vector, and b is the right-hand side vector.
Most of the time, the matrix A is huge (millions of rows) and sparse (only a few non-zero elements per row). We can’t just invert this matrix to get the solution, since matrix inversion is very costly in terms of memory and computation. To solve this system of linear equations, we use iterative methods. Very often we can speed up iterative methods using a preconditioner M such that

M A x = M b.

Basically, the preconditioner M is a matrix which is close to the inverse of the matrix A, in other words M ≈ A⁻¹. If M A is the identity matrix, then we automatically obtain the solution of our system of linear equations.
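To make this concrete, here is a minimal sketch of a preconditioned iterative solve, assuming SciPy is available; the Jacobi (diagonal) preconditioner is only a stand-in for whatever approximation of A⁻¹ you actually use:

```python
# Minimal sketch: solve A x = b with preconditioned BiCGSTAB (SciPy).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# 1-D Poisson matrix: 2 on the diagonal, -1 on the off-diagonals
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioner: M v = v / diag(A); any approximation of the
# inverse of A can be plugged in the same way.
inv_diag = 1.0 / A.diagonal()
M = spla.LinearOperator((n, n), matvec=lambda v: inv_diag * v)

x, info = spla.bicgstab(A, b, M=M)
print("converged" if info == 0 else f"bicgstab info = {info}")
```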
Sometimes the original matrix A can be implemented matrix-free, meaning that its elements are calculated on the fly and never stored. However, most of the time we have to store the preconditioner matrix M in memory. The less storage both matrices take, the larger the problems we can compute. This is especially important if we use accelerators, for example a GPU with limited memory.
An important aspect of a preconditioner is that it does not have to be exact: it is acceptable to have a preconditioner that is an approximation of an approximation.
Therefore, lossy compression of the preconditioner enables bigger problems to be computed on accelerators.
There are many ways to compress a matrix. In this blog we suggest the VCRS compression format. This way we catch two pigeons with one bean:
- we speed up the iterative solver with a preconditioner,
- we get a preconditioner suitable for GPU(s).
What is VCRS?
VCRS stands for Very Compressed Row Storage. The method was developed during Hans’s PhD at TU Delft, and the format was inspired by the well-known CSR (Compressed Sparse Row) format.

CSR format versus VCRS format
To illustrate compression, let’s consider a small matrix from a one-dimensional Poisson equation with Dirichlet boundary conditions, see picture above.
The CSR format consists of two integer arrays and one floating point array:
- The non-zero elements of the matrix A are stored consecutively, row by row, in the floating point array data.
- The column index of each element is stored in the integer array cidx.
- The second integer array ridx contains the location of the beginning of each row.
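As a concrete illustration, here is a sketch that builds the three CSR arrays for a 5×5 tridiagonal Poisson matrix with the stencil (−1 2 −1), a plausible stand-in for the pictured example; the row-pointer name ridx is our choice, the text only fixes cidx and data:

```python
import numpy as np

# 5x5 tridiagonal Poisson matrix, stencil (-1 2 -1)
A = np.array([[ 2, -1,  0,  0,  0],
              [-1,  2, -1,  0,  0],
              [ 0, -1,  2, -1,  0],
              [ 0,  0, -1,  2, -1],
              [ 0,  0,  0, -1,  2]], dtype=float)

def to_csr(A):
    data, cidx, ridx = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0.0:
                data.append(v)   # non-zero values, row by row
                cidx.append(j)   # column index of each value
        ridx.append(len(data))   # where the next row starts
    return data, cidx, ridx

data, cidx, ridx = to_csr(A)
print(data)  # [2.0, -1.0, -1.0, 2.0, -1.0, ...]  13 values
print(cidx)  # [0, 1, 0, 1, 2, 1, 2, 3, ...]      13 indices
print(ridx)  # [0, 2, 5, 8, 11, 13]
```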
To take advantage of the redundancy in the column indices of a matrix constructed by a finite-difference or finite-element discretisation on structured meshes, we introduce a new sparse storage format: VCRS.
The VCRS format consists of five integer arrays and one floating point array:
- The first array cidx_first contains the column index of the first non-zero element of each row.
- The second array nnz_row contains the number of non-zero elements per row.
- The third array cidx_uniq represents a unique set of indices per row, calculated as the column indices of the non-zero elements in the row minus cidx_first. Here, the row numbers 0 and 4 in the matrix have the same set of indices, {0 1}, and the row numbers 1 and 2 have the same set of indices as well, {0 1 2}.
- To reduce redundancy, each unique set of indices is stored only once, and a fourth array cidx_ptr contains an index per row, pointing at the starting position of that row’s set in cidx_uniq.
- The same approach is applied to the array data_uniq containing the values of the non-zero elements per row, i.e., each unique set of values is stored only once.
- Therefore, we also need an additional array of pointers per row, data_ptr, pointing at the position of the row’s first non-zero value in data_uniq.
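Putting the six arrays together, here is a sketch of a CSR-to-VCRS conversion (array names as above). Deduplicating with a plain dictionary and exact tuple matching is our simplification; the lossy variant described below relaxes the matching with quantization and row classification:

```python
def csr_to_vcrs(data, cidx, ridx):
    cidx_first, nnz_row = [], []           # per-row: first column, # of non-zeros
    cidx_uniq, cidx_ptr = [], []           # unique index sets + per-row pointers
    data_uniq, data_ptr = [], []           # unique value sets + per-row pointers
    seen_idx, seen_val = {}, {}            # set -> starting position
    for i in range(len(ridx) - 1):
        lo, hi = ridx[i], ridx[i + 1]
        cols, vals = cidx[lo:hi], tuple(data[lo:hi])
        cidx_first.append(cols[0])
        nnz_row.append(hi - lo)
        offsets = tuple(c - cols[0] for c in cols)  # column indices minus cidx_first
        if offsets not in seen_idx:                  # store each index set once
            seen_idx[offsets] = len(cidx_uniq)
            cidx_uniq.extend(offsets)
        cidx_ptr.append(seen_idx[offsets])
        if vals not in seen_val:                     # store each value set once
            seen_val[vals] = len(data_uniq)
            data_uniq.extend(vals)
        data_ptr.append(seen_val[vals])
    return cidx_first, nnz_row, cidx_uniq, cidx_ptr, data_uniq, data_ptr
```

For the 5×5 Poisson example, cidx_uniq shrinks to the two sets {0 1} and {0 1 2} (5 entries instead of 13), and data_uniq to the three value sets (2 −1), (−1 2 −1) and (−1 2) (7 entries instead of 13); the savings grow with matrix size.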
Why is VCRS better than CSR?
At first glance, it seems that the VCRS format is based on more arrays than the CSR format: six versus three. However, the large arrays in the CSR format are cidx and data, and they contain redundant information: repeated indices and values of the matrix. For small matrices the overhead of the extra arrays can be significant, but for large matrices VCRS pays off, especially on GPUs with a limited amount of memory.
Summarising, the following factors favour the VCRS format:
- The CSR format of a large matrix contains a large amount of redundancy, especially if the matrix arises from finite-difference discretisations;
- The amount of redundancy kept in a matrix A can be varied depending on the accuracy and storage requirements, giving the opportunity to use lossy compression;
- The exact representation of the matrix is not required for the preconditioner; an approximation might be sufficient for the convergence of the solver.
VCRS with lossy compression
Of course, you can already use the VCRS format as described above. But to get even more out of it, there are two mechanisms to adjust the data redundancy:
- Quantization
- Row classification
Quantization
Quantization is a lossy compression technique that maps a range of values to a single value; the simplest example is rounding a real number to the nearest integer. It has well-known applications in image processing and digital signal processing.
Of course, we need to make sure that the data loss does not affect the accuracy of the solution.
The quantization technique can be used to make the matrix elements in different rows similar to each other, which improves compression. The mechanism is based on the maximum and minimum values of the matrix and on a number of so-called bins, or sample intervals.
The figure above illustrates the quantization of a matrix with values in the interval [0, 1]. In this example the number of bins is set to 5, meaning there are 5 sample intervals. The matrix entries are normally distributed between 0 and 1, as shown by the black dots connected with the solid line. By applying quantization, the matrix values that fall into a bin are assigned a new value equal to the bin center. Therefore, instead of the whole range of matrix entries, we only get 5 values. Obviously, the larger the number of bins, the more accurate the representation of the matrix entries.
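A minimal sketch of this bin-center quantization, assuming equal-width bins between the smallest and largest value:

```python
import numpy as np

def quantize(values, n_bins):
    """Snap each value to the center of its bin on [min, max]."""
    vmin, vmax = values.min(), values.max()
    width = (vmax - vmin) / n_bins
    # bin index of each value, clipped so the maximum lands in the last bin
    k = np.minimum(((values - vmin) / width).astype(int), n_bins - 1)
    return vmin + (k + 0.5) * width      # bin centers

vals = np.random.rand(1000)              # entries in [0, 1]
print(np.unique(quantize(vals, 5)))      # only 5 distinct values survive,
                                         # roughly [0.1 0.3 0.5 0.7 0.9]
```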
Row classification
Next, we introduce row classification as a mechanism to define similarity of two different matrix rows.
Given a sorted array of rows and a tolerance, we can easily search for rows that are similar within that tolerance. The main assumption for row comparison is that the rows have the same number of non-zero elements.
Let a be the i-th row of the matrix A of length n, and let b be the j-th row of A.
The comparison of two rows is summarized in Algorithm 3 below. If a is not smaller than b and b is not smaller than a, then the rows a and b are “equal within the given tolerance λ”.
Algorithm 4 then describes the comparison of two complex values and Algorithm 5 compares two floating-point numbers.
The figure below illustrates the classification of a complex matrix entry c. Within a distance λ the numbers are assumed to be equal to c. Then, c is smaller than the numbers in the dark gray area, and larger than the numbers in the light gray area.
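The algorithms themselves appear as figures in the original post; the sketch below captures our reading of the idea: two floats compare as “smaller” only when they differ by more than the tolerance λ, complex numbers compare real part first, and rows compare lexicographically.

```python
def less(x, y, lam):
    """Our reading of Algorithm 5: x is smaller than y
    only if the gap exceeds the tolerance lam."""
    return x < y - lam

def less_complex(x, y, lam):
    """Our reading of Algorithm 4: compare real parts first,
    then imaginary parts."""
    if less(x.real, y.real, lam):
        return True
    if less(y.real, x.real, lam):
        return False
    return less(x.imag, y.imag, lam)

def less_row(a, b, lam):
    """Our reading of Algorithm 3: lexicographic comparison of two rows
    with the same number of non-zero elements."""
    for x, y in zip(a, b):
        if less_complex(x, y, lam):
            return True
        if less_complex(y, x, lam):
            return False
    return False

def rows_equal(a, b, lam):
    """Rows are 'equal within tolerance lam' if neither is smaller."""
    return not less_row(a, b, lam) and not less_row(b, a, lam)
```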
Impact of quantization and row classification
The number of bins and the tolerance λ influence:
- the compression: the fewer bins and/or the larger λ, the better the matrix is compressed;
- the accuracy: the more bins and/or the smaller λ, the more accurate the matrix operations (such as matrix-vector multiplication);
- the computational time: the fewer bins and/or the larger λ, the faster the computations;
- the memory usage: the fewer bins and/or the larger λ, the less memory is used;
- the speedup on modern hardware, calculated as the ratio of the computational time of the algorithm using the original matrix to the computational time using the compressed matrix.
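To get a feeling for the bins-versus-accuracy trade-off, here is a small experiment reusing the quantize sketch from above; a dense random matrix stands in for a real preconditioner:

```python
import numpy as np

def quantize(values, n_bins):
    vmin, vmax = values.min(), values.max()
    width = (vmax - vmin) / n_bins
    k = np.minimum(((values - vmin) / width).astype(int), n_bins - 1)
    return vmin + (k + 0.5) * width

rng = np.random.default_rng(0)
A = rng.random((500, 500))
x = rng.random(500)
y = A @ x                                 # exact matrix-vector product
for n_bins in (10, 100, 1000, 10000):
    Aq = quantize(A, n_bins)
    err = np.linalg.norm(Aq @ x - y) / np.linalg.norm(y)
    print(f"bins={n_bins:6d}  distinct values={len(np.unique(Aq)):6d}  "
          f"matvec rel. error={err:.1e}")
```

Fewer bins mean fewer distinct values, and hence better compression of the value array, at the price of a larger matrix-vector error.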
Example: Helmholtz equation
From our previous posts, you might know that one of the problems we had to solve is the Helmholtz equation – the wave equation in the frequency domain.

Real part of the preconditioner

Imaginary part of the preconditioner
Using the VCRS format to store this matrix results in three to four times smaller memory requirements and three to four times faster matrix-vector multiplication, depending on the compression parameters. In general, we observed compression factors between 3 and 20, depending on the matrix.
Based on our experiments, the most reasonable parameter choice is a tolerance λ = 0.1 and 100 000 bins.
Summarising our experiments for the whole solver (Bi-CGSTAB preconditioned with a shifted Laplace matrix-dependent multigrid method), we can conclude that the VCRS format can be used
- to reduce the memory for the preconditioner,
- to increase the performance of the solver on different hardware platforms (CPUs, GPUs),
- with minimal effect on the accuracy of the solver.
Example: Reservoir simulation
The VCRS compression can also be used in other applications where the solvers are based on a preconditioned system of linear equations. For example, an iterative solver for linear equations is also an important part of a reservoir simulator.
It appears within a Newton step when solving the discretised non-linear partial differential equations describing fluid flow in porous media. The basic partial differential equations include a mass-conservation equation, Darcy’s law, and an equation of state relating the fluid pressure to its density. In its original form the values of the matrix are scattered. Although the matrix looks full due to the scattered entries, the most common number of non-zero elements per row is 7, while the maximum number of elements per row is 210.
The distribution of the matrix values is shown in the figure below. Note that the matrix has real-valued entries only. There is a large variety of matrix values, which makes the quantization and row classification effective.
Using the VCRS format to store this matrix results in two to three times smaller memory requirements and two to three times faster matrix-vector multiplication, depending on the compression parameters.
Summary
In this post
- we introduced the VCRS (Very Compressed Row Storage) format,
- we showed that the VCRS format not only reduces the size of the stored matrix but also increases the efficiency of matrix-vector computations,
- we introduced two parameters for lossy compression: the number of bins and the tolerance λ,
- we concluded that, with a proper choice of these compression parameters, the effect of the lossy compression on the whole solver is minimal,
- we used VCRS compression on CPUs and GPUs,
- we applied VCRS to real-world applications.
What matrix compression are you using? Let us know in the comment box below.