
Practical iterative methods


Incomplete LU (ILU) preconditioning


Idea (Varga, Buleev, 1960): derive a preconditioner not from an additive decomposition but from the LU factorization.

  • LU factorization has large fill-in. For a preconditioner, just limit the fill-in to a fixed pattern.

  • Apply the standard LU factorization method, but calculate only those entries of the factors which belong to the prescribed sparsity pattern (see the sketch below the theorem).

  • Result: incomplete LU factors L, U, remainder R:

A = LU - R

  • What about zero pivots which would prevent such an algorithm from being computable?

Theorem (Saad, Th. 10.2): If A is an M-Matrix, then the algorithm to compute the incomplete LU factorization with a given pattern is stable. Moreover, A = LU - R = M - N where M = LU and N = R is a regular splitting.
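The basic computation can be sketched as follows (dense storage for clarity; the function name and the boolean pattern argument are our own, loosely following Saad's IKJ variant of Gaussian elimination):

```julia
# Incomplete LU factorization restricted to a fixed pattern P.
# P[i,j] == true marks entries which are allowed to be nonzero;
# all updates outside the pattern are dropped (they form the remainder R).
function ilu_fixed_pattern!(A::Matrix, P::BitMatrix)
    n = size(A, 1)
    for i = 2:n
        for k = 1:i-1
            P[i, k] || continue
            A[i, k] /= A[k, k]              # multiplier l_ik
            for j = k+1:n
                # update only entries inside the pattern
                P[i, j] && (A[i, j] -= A[i, k] * A[k, j])
            end
        end
    end
    A   # unit lower triangle: L, upper triangle: U
end
```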


Discussion

  • Generally better convergence properties than Jacobi, Gauss-Seidel

  • Block variants are possible

  • ILU Variants:

    • ILUM: ("modified"): add ignored off-diagonal entries to main diagonal

    • ILUT: ("threshold"): zero pattern calculated dynamically based on drop tolerance

    • ILU0: Drop all fill-in

    • Incomplete Cholesky: symmetric variant of ILU

  • Dependence on ordering

  • Can be parallelized using graph coloring

  • Not much theory: experiment for particular systems and see if it works well

  • I recommend it as the default initial guess for a sensible preconditioner


Further approaches to preconditioning

These are based on ideas which are best explained and developed with multidimensional PDEs in mind.

  • Multigrid: indeed achieves O(N) optimal solver complexity in some situations. This is the holy grail method... I will try to discuss it later in the course.

  • Domain decomposition: subdivide the computational domain into a number of subdomains and repeatedly solve the smaller subdomain problems.


Iterative methods in Julia

Julia has some well maintained packages for iterative methods and preconditioning.


Random sparse M-Matrices


We will test the methods with random sparse M-Matrices, so we define a function which returns a random, strictly diagonally dominant M-Matrix which is not necessarily irreducible. For skew=0 it is also symmetric:

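The definition is not part of this export; a minimal sketch of such a function could look as follows (the density parameter p, the skew parameter, and the dominance shift 0.1 are assumptions; for |skew| ≤ 1 the off-diagonal part stays nonpositive):

```julia
using SparseArrays, LinearAlgebra

# Random strictly diagonally dominant M-Matrix: nonpositive
# off-diagonal entries, diagonal larger than the absolute
# off-diagonal row sum. Symmetric for skew=0.
function sprandm(n; p=0.3, skew=0.0)
    B = sprand(n, n, p)                    # random nonnegative entries
    A = -(B + B') - skew * (B - B')        # nonpositive off-diagonal part
    A = A - spdiagm(0 => Vector(diag(A)))  # remove the diagonal
    rowsum = vec(sum(abs, A, dims=2))
    A + spdiagm(0 => rowsum .+ 0.1)        # enforce strict dominance
end
```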
sprandm (generic function with 1 method)

Test the method a bit...

N = 5
The resulting matrix (all entries Float64):

       x1          x2          x3          x4          x5
  1    2.18746    -0.814626   -0.452134   -0.704067    0.0
  2   -0.494782    2.59544    -0.183471    0.0        -0.288585
  3    0.0        -0.795573    1.17374    -0.228419   -0.0102942
  4   -0.993348   -0.985134    0.0         0.933306    0.0
  5   -0.698695    0.0        -0.537345    0.0         0.299452

Up to rounding errors, the inverse is nonnegative, as predicted by the theory. Since the matrix is not necessarily irreducible, zero entries in the inverse are possible. Invertibility is guaranteed by strict diagonal dominance.

Ainv
5×5 Array{Float64,2}:
 199.183  198.919  198.709  198.892  198.531
 134.768  135.099  134.757  134.647  134.829
 166.991  167.226  167.737  167.027  166.924
 354.249  354.317  353.733  354.883  353.62
 764.396  764.203  764.629  763.781  766.096
Its smallest entry is 134.64717151847654, consistent with a strictly positive inverse in this example.
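A possible definition of the spectral radius of the Jacobi iteration matrix (a dense eigenvalue computation, so feasible only for moderate matrix sizes; evaluated on the test problem it yields the value shown below):

```julia
using LinearAlgebra

# Spectral radius of the Jacobi iteration matrix I - D⁻¹A
ρ_jacobi(A) = maximum(abs, eigvals(Matrix(I - Diagonal(A) \ A)))
```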
ρ_jacobi (generic function with 1 method)
0.9993509631121599

Preconditioners

Here, we define two preconditioners which are able to work together with IterativeSolvers.jl.

Jacobi
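A minimal sketch of such a Jacobi preconditioner (the struct name is ours; IterativeSolvers.jl applies preconditioners through the ldiv! methods shown):

```julia
using LinearAlgebra, SparseArrays
import LinearAlgebra: ldiv!

# Jacobi preconditioner: M = D = diag(A), stored as the inverse diagonal
struct JacobiPrecon{T}
    idiag::Vector{T}
end
JacobiPrecon(A::AbstractMatrix) = JacobiPrecon(inv.(Vector(diag(A))))

# Solve M z = r, i.e. scale by the inverse diagonal
ldiv!(z, P::JacobiPrecon, r) = (z .= P.idiag .* r)
ldiv!(P::JacobiPrecon, r) = (r .*= P.idiag)
```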

We can then construct the preconditioner as follows:

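Assuming the test matrix A1 defined further below (cell order does not matter in a reactive notebook):

```julia
preconJacobi = JacobiPrecon(A1)
```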
preconJacobi

ILU0

For this preconditioner, we need to store the matrix, the inverse of a modified diagonal and the indices of the main diagonal entries in the sparse matrix columns.

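A sketch following this storage scheme, in one common realization M = (D̃ + L) D̃⁻¹ (D̃ + U), where L and U are the strict triangular parts of A and D̃ is a modified diagonal. We assume a structurally symmetric sparsity pattern with stored diagonal entries; all names are our own:

```julia
using SparseArrays, LinearAlgebra
import LinearAlgebra: ldiv!

struct ILU0Precon{Tv,Ti}
    A::SparseMatrixCSC{Tv,Ti}
    idiag::Vector{Tv}     # inverse of the modified diagonal D̃
    diagidx::Vector{Int}  # position of A[i,i] in column i of the CSC data
end

function ILU0Precon(A::SparseMatrixCSC{Tv,Ti}) where {Tv,Ti}
    n = size(A, 1)
    diagidx = [findfirst(==(i), A.rowval[A.colptr[i]:A.colptr[i+1]-1]) +
               A.colptr[i] - 1 for i = 1:n]
    idiag = zeros(Tv, n)
    for i = 1:n
        d = A.nzval[diagidx[i]]
        # subtract a_ij * d̃_j⁻¹ * a_ji over the pattern, j < i
        for k = A.colptr[i]:A.colptr[i+1]-1
            j = A.rowval[k]
            j < i && (d -= A[i, j] * idiag[j] * A.nzval[k])
        end
        idiag[i] = one(Tv) / d
    end
    ILU0Precon(A, idiag, diagidx)
end

# Solve M z = r: forward sweep with D̃ + L, scaling by D̃,
# backward sweep with D̃ + U (all column-oriented)
function ldiv!(z, P::ILU0Precon, r)
    A, idiag = P.A, P.idiag
    n = size(A, 1)
    z .= r
    for j = 1:n                   # (D̃ + L) t = r
        z[j] *= idiag[j]
        for k = A.colptr[j]:A.colptr[j+1]-1
            i = A.rowval[k]
            i > j && (z[i] -= A.nzval[k] * z[j])
        end
    end
    z ./= idiag                   # w = D̃ t
    for j = n:-1:1                # (D̃ + U) z = w
        z[j] *= idiag[j]
        for k = A.colptr[j]:A.colptr[j+1]-1
            i = A.rowval[k]
            i < j && (z[i] -= A.nzval[k] * z[j])
        end
    end
    z
end
ldiv!(P::ILU0Precon, r) = ldiv!(r, P, copy(r))
```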

Simple iteration method with an interface similar to IterativeSolvers.jl

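A sketch of such a function (keyword names are chosen to resemble IterativeSolvers.jl; the residual history is returned when log=true, and Pl can be any object supporting ldiv!):

```julia
using LinearAlgebra

# Preconditioned simple (Richardson) iteration: u ← u + M⁻¹(b - Au)
function simple(A, b; Pl=I, tol=1.0e-10, maxiter=100, log=false)
    u = zeros(eltype(b), length(b))
    res = b - A * u
    upd = similar(res)
    history = [norm(res)]
    for iter = 1:maxiter
        ldiv!(upd, Pl, res)       # solve M upd = res
        u .+= upd
        res = b - A * u
        push!(history, norm(res))
        history[end] < tol * history[1] && break
    end
    log ? (u, history) : u
end
```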
simple (generic function with 1 method)

Iterative Method comparison: symmetric problems

N1 = 100

tol = 1.0e-10
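The construction of the symmetric test problem is not shown in this export; a setup consistent with the text could be (the density value and variable names are assumptions):

```julia
# Symmetric, strictly diagonally dominant test matrix (skew=0)
A1 = sprandm(N1; p=0.1, skew=0.0)
preconILU0 = ILU0Precon(A1)   # ILU0 sketch from above
```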

We also create ILU preconditioners from IncompleteLU.jl. These take the drop tolerance τ as a parameter: the larger τ, the more entries of the LU factors are dropped.

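Assuming the test matrix A1 from above, this could look as follows (the τ values are assumptions; IncompleteLU.jl exports the ilu function, and we assume nnz is available for counting the stored entries of the factorization objects):

```julia
using IncompleteLU, SparseArrays

# Smaller drop tolerance τ keeps more fill-in
ilu_τ1 = ilu(A1, τ=0.5)
ilu_τ2 = ilu(A1, τ=0.1)
ilu_τ3 = ilu(A1, τ=0.01)
nnz(A1), nnz(ilu_τ1), nnz(ilu_τ2), nnz(ilu_τ3)
```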
Reported numbers of stored entries (nnz): 2032, 2310, 4860, 6680. The smaller τ, the more entries are kept.

Create a right hand side for testing

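For instance (the actual choice is not shown in this export; a vector of ones is an assumption):

```julia
b1 = ones(N1)
```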
b1

So let us run this with the Jacobi preconditioner. Theory tells us it should converge...

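With the sketches above, such a run could look like:

```julia
u_jac, hist_jac = simple(A1, b1; Pl=preconJacobi, tol=tol, maxiter=100, log=true)
```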

After 100 steps we are still far from the solution, and we would need a lot of steps to converge. So let us have a look at the spectral radius of the iteration matrix and compare it with the residual reduction in the last iteration step:

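For example, reusing the residual history from the run above:

```julia
# Spectral radius vs. observed contraction factor of the last step
ρ_jacobi(A1), hist_jac[end] / hist_jac[end-1]
```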

It seems we have found a simple spectral radius estimator here...


Now for the ILU0 preconditioner:

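Analogously, with the ILU0 preconditioner constructed above:

```julia
u_ilu, hist_ilu = simple(A1, b1; Pl=preconILU0, tol=tol, maxiter=100, log=true)
hist_ilu[end] / hist_ilu[end-1]   # residual contraction in the last step
```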

... the spectral radius estimate is a little bit better...

0.9998541700844135

We have symmetric matrices, so let us try CG:


Without preconditioning:

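A call of the cg method could look as follows (keyword names vary across IterativeSolvers.jl versions; reltol is used in recent releases):

```julia
using IterativeSolvers

# Unpreconditioned CG; log=true additionally returns the convergence history
u_cg, hist_cg = cg(A1, b1; reltol=tol, maxiter=100, log=true)
```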

With Jacobi preconditioning:


With various variants of ILU preconditioners:

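Preconditioners are passed via the Pl keyword; using the objects constructed above (the ilu_τ names come from our sketch):

```julia
u_cg1, h1 = cg(A1, b1; Pl=preconJacobi, reltol=tol, maxiter=100, log=true)
u_cg2, h2 = cg(A1, b1; Pl=preconILU0, reltol=tol, maxiter=100, log=true)
u_cg3, h3 = cg(A1, b1; Pl=ilu_τ2, reltol=tol, maxiter=100, log=true)
```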
  • As we see, all CG variants converge within the given number of iteration steps.

  • Preconditioning helps.

  • The better the preconditioner, the faster the iteration (though this also depends on the initial value)

  • The behaviour of the CG residual is not monotone


Nonsymmetric problems

Here, we skip the simple iteration and look at the performance of some Krylov subspace methods.

N2 = 1000
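A setup consistent with the text could be (the skew and density values and all names are assumptions; skew ≠ 0 makes the matrix nonsymmetric):

```julia
A2 = sprandm(N2; p=0.1, skew=1.0)
b2 = ones(N2)
preconJacobi2 = JacobiPrecon(A2)
preconILU02 = ILU0Precon(A2)
```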

Try CG:


Use the bicgstabl method from IterativeSolvers.jl:

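For example, unpreconditioned and with our ILU0 sketch as preconditioner (the budget of matrix-vector products is an assumption):

```julia
# BiCGStab(l) from IterativeSolvers.jl, here with l=2
u_bi, hist_bi = bicgstabl(A2, b2, 2; reltol=tol, max_mv_products=2000, log=true)
u_bi2, hist_bi2 = bicgstabl(A2, b2, 2; Pl=preconILU02, reltol=tol,
                            max_mv_products=2000, log=true)
```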
  • CG does not converge; this nonsymmetric case is also not covered by the theory.

  • Various preconditioners improve the convergence

  • Is there a bug in my ILU0 implementation?
