# Using near null-space vectors (MPI version)

Let us look at how to use the near null-space vectors in the MPI version of the solver for the elasticity problem (see Using near null-space vectors). The following points need to be kept in mind:

• The near null-space vectors need to be partitioned (and reordered) in the same way as the RHS vector.
• Since the rigid body modes are computed from the coordinates of the discretization grid nodes, and we want to do this locally on each MPI process, the system has to be partitioned so that all DOFs of a single grid node are owned by the same process. In this case this means a block-wise partitioning with $$3\times3$$ blocks.
• It is more convenient to partition the coordinate matrix first and then compute the rigid body modes (the general form of the modes is sketched right after this list).
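
For reference, here is one common convention for the rigid body modes of a 3D elasticity problem. This is only an illustration: `amgcl::coarsening::rigid_body_modes()` may shift the coordinates and normalize the resulting vectors differently. For a grid node located at $$(x, y, z)$$, its three rows of the near null-space matrix $$B$$ contain three translations and three linearized rotations:

$$
B_{\text{node}}(x, y, z) =
\begin{pmatrix}
1 & 0 & 0 & 0 & z & -y \\
0 & 1 & 0 & -z & 0 & x \\
0 & 0 & 1 & y & -x & 0
\end{pmatrix}.
$$

Stacking these blocks for all grid nodes of the current MPI domain gives the six near null-space vectors (the columns of $$B$$) used by the aggregation-based coarsening.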

The listing below shows the complete source code for the MPI elasticity solver (tutorial/5.Nullspace/nullspace_mpi.cpp):

Listing 18 The MPI solution of the elasticity problem
```cpp
#include <vector>
#include <iostream>

#include <amgcl/backend/builtin.hpp>
#include <amgcl/adapter/crs_tuple.hpp>
#include <amgcl/coarsening/rigid_body_modes.hpp>
#include <amgcl/mpi/distributed_matrix.hpp>
#include <amgcl/mpi/make_solver.hpp>
#include <amgcl/mpi/amg.hpp>
#include <amgcl/mpi/coarsening/smoothed_aggregation.hpp>
#include <amgcl/mpi/relaxation/spai0.hpp>
#include <amgcl/mpi/solver/cg.hpp>
#include <amgcl/io/binary.hpp>
#include <amgcl/profiler.hpp>

#if defined(AMGCL_HAVE_PARMETIS)
#  include <amgcl/mpi/partition/parmetis.hpp>
#elif defined(AMGCL_HAVE_SCOTCH)
#  include <amgcl/mpi/partition/ptscotch.hpp>
#endif

int main(int argc, char *argv[]) {
    // The command line should contain the matrix, the RHS, and the coordinate files:
    if (argc < 4) {
        std::cerr << "Usage: " << argv[0] << " <matrix.bin> <rhs.bin> <coo.bin>" << std::endl;
        return 1;
    }

    amgcl::mpi::init mpi(&argc, &argv);
    amgcl::mpi::communicator world(MPI_COMM_WORLD);

    // The profiler:
    amgcl::profiler<> prof("Nullspace");

    // Read the system matrix, the RHS, and the coordinates:
    prof.tic("read");

    // Get the global size of the matrix:
    ptrdiff_t rows = amgcl::io::crs_size<ptrdiff_t>(argv[1]);

    // Split the matrix into approximately equal chunks of rows, and
    // make sure each chunk size is divisible by 3.
    ptrdiff_t chunk = (rows + world.size - 1) / world.size;
    if (chunk % 3) chunk += 3 - chunk % 3;
    ptrdiff_t row_beg = std::min(rows, chunk * world.rank);
    ptrdiff_t row_end = std::min(rows, row_beg + chunk);
    chunk = row_end - row_beg;

    // Read our part of the system matrix, the RHS and the coordinates.
    std::vector<ptrdiff_t> ptr, col;
    std::vector<double> val, rhs, coo;

    amgcl::io::read_crs(argv[1], rows, ptr, col, val, row_beg, row_end);

    ptrdiff_t n, m;
    amgcl::io::read_dense(argv[2], n, m, rhs, row_beg, row_end);
    amgcl::precondition(n == rows && m == 1, "The RHS file has wrong dimensions");

    amgcl::io::read_dense(argv[3], n, m, coo, row_beg / 3, row_end / 3);
    amgcl::precondition(n * 3 == rows && m == 3, "The coordinate file has wrong dimensions");

    prof.toc("read");

    if (world.rank == 0) {
        std::cout
            << "Matrix " << argv[1] << ": " << rows << "x" << rows << std::endl
            << "RHS "    << argv[2] << ": " << rows << "x1" << std::endl
            << "Coords " << argv[3] << ": " << rows / 3 << "x3" << std::endl;
    }

    // Declare the backends and the solver type
    typedef amgcl::backend::builtin<double> SBackend; // the solver backend
    typedef amgcl::backend::builtin<double> PBackend; // the preconditioner backend

    typedef amgcl::mpi::make_solver<
        amgcl::mpi::amg<
            PBackend,
            amgcl::mpi::coarsening::smoothed_aggregation<PBackend>,
            amgcl::mpi::relaxation::spai0<PBackend>
            >,
        amgcl::mpi::solver::cg<SBackend>
        > Solver;

    // The distributed matrix
    auto A = std::make_shared<amgcl::mpi::distributed_matrix<SBackend>>(
            world, std::tie(chunk, ptr, col, val));

    // Partition the matrix, the RHS vector, and the coordinates.
    // If neither ParMETIS nor PT-SCOTCH is available,
    // just keep the current naive partitioning.
#if defined(AMGCL_HAVE_PARMETIS) || defined(AMGCL_HAVE_SCOTCH)
#  if defined(AMGCL_HAVE_PARMETIS)
    typedef amgcl::mpi::partition::parmetis<SBackend> Partition;
#  elif defined(AMGCL_HAVE_SCOTCH)
    typedef amgcl::mpi::partition::ptscotch<SBackend> Partition;
#  endif

    if (world.size > 1) {
        auto t = prof.scoped_tic("partition");
        Partition part;

        // part(A) returns the distributed permutation matrix.
        // Keep the DOFs belonging to the same grid nodes together
        // (use block-wise partitioning with block size 3).
        auto P = part(*A, 3);
        auto R = transpose(*P);

        // Reorder the matrix:
        A = product(*R, *product(*A, *P));

        // Reorder the RHS vector and the coordinates:
        R->move_to_backend();

        std::vector<double> new_rhs(R->loc_rows());
        std::vector<double> new_coo(R->loc_rows());

        amgcl::backend::spmv(1, *R, rhs, 0, new_rhs);
        amgcl::backend::spmv(1, *R, coo, 0, new_coo);

        rhs.swap(new_rhs);
        coo.swap(new_coo);

        // Update the number of the local rows
        // (it may have changed as a result of permutation).
        chunk = A->loc_rows();
    }
#endif

    // Solver parameters:
    Solver::params prm;
    prm.solver.maxiter = 500;
    prm.precond.coarsening.aggr.eps_strong = 0;

    // Convert the coordinates to the rigid body modes.
    // The function returns the number of near null-space vectors
    // (3 in 2D case, 6 in 3D case) and writes the vectors to the
    // std::vector specified as the last argument:
    prm.precond.coarsening.aggr.nullspace.cols = amgcl::coarsening::rigid_body_modes(
            3, coo, prm.precond.coarsening.aggr.nullspace.B);

    // Initialize the solver with the system matrix.
    prof.tic("setup");
    Solver solve(world, A, prm);
    prof.toc("setup");

    // Show the mini-report on the constructed solver:
    if (world.rank == 0) std::cout << solve << std::endl;

    // Solve the system with the zero initial approximation:
    int iters;
    double error;
    std::vector<double> x(chunk, 0.0);

    prof.tic("solve");
    std::tie(iters, error) = solve(*A, rhs, x);
    prof.toc("solve");

    // Output the number of iterations, the relative error,
    // and the profiling data:
    if (world.rank == 0) {
        std::cout
            << "Iters: " << iters << std::endl
            << "Error: " << error << std::endl
            << prof << std::endl;
    }
}
```

First, we split the system into approximately equal chunks of rows, while making sure each chunk size is divisible by 3 (the number of DOFs per grid node). This is a naive partitioning that will be improved a bit later.
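
As a quick sanity check of this arithmetic, here is a small standalone sketch (plain C++, no AMGCL or MPI involved; the values of 81657 rows and 4 processes are taken from the run shown at the end of this section):

```cpp
// A standalone check of the naive chunking logic used in the listing above.
#include <algorithm>
#include <cassert>
#include <cstddef>

int main() {
    const std::ptrdiff_t rows = 81657, nproc = 4;

    // Round the chunk size up to a multiple of 3, so that the 3 DOFs of a
    // grid node never end up on different processes.
    std::ptrdiff_t chunk = (rows + nproc - 1) / nproc; // 20415
    if (chunk % 3) chunk += 3 - chunk % 3;             // 20415 is already a multiple of 3

    for (std::ptrdiff_t rank = 0; rank < nproc; ++rank) {
        std::ptrdiff_t row_beg = std::min(rows, chunk * rank);
        std::ptrdiff_t row_end = std::min(rows, row_beg + chunk);

        // Each local range starts at a node boundary and contains whole nodes.
        assert(row_beg % 3 == 0 && (row_end - row_beg) % 3 == 0);
    }

    // Ranks 0-2 own 20415 rows each; rank 3 owns the remaining 20412 rows.
    assert(rows - 3 * chunk == 20412);
}
```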

We read the parts of the system matrix, the RHS vector, and the grid node coordinates that belong to the current MPI process, declare the backends for the iterative solver and the preconditioner together with the solver type, and create the distributed version of the matrix from the local CRS arrays. After that, we are ready to partition the system using the AMGCL wrapper for either the ParMETIS or the PT-SCOTCH library. Note that the coordinate matrix coo is reordered in the same way as the RHS vector, even though it has three times fewer rows than the system matrix. This works because the coordinate matrix is stored in row-major order and each of its rows holds the three coordinates of a single grid node: the total number of its elements equals the number of elements in the RHS vector, so the block-wise permutation applies to it directly, as the sketch below illustrates.
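
The following minimal sketch (plain C++, no AMGCL; the permutation here is made up for illustration) shows why a DOF permutation that keeps 3-blocks together moves whole rows of the row-major coordinate matrix:

```cpp
// Applying a block-wise DOF permutation to the flattened coordinate array.
#include <cassert>
#include <cstddef>
#include <vector>

int main() {
    // Two grid nodes, three coordinates each, stored row-major:
    // node 0 at (0, 0, 0), node 1 at (1, 2, 3).
    std::vector<double> coo = {0, 0, 0, 1, 2, 3};

    // A DOF permutation that keeps 3-blocks together and swaps the two nodes:
    // new entry i takes the value of old entry perm[i].
    std::vector<std::size_t> perm = {3, 4, 5, 0, 1, 2};

    std::vector<double> new_coo(coo.size());
    for (std::size_t i = 0; i < coo.size(); ++i) new_coo[i] = coo[perm[i]];

    // The node rows moved as whole units: node 1 now comes first.
    assert((new_coo == std::vector<double>{1, 2, 3, 0, 0, 0}));
}
```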

The coordinates for the current MPI domain are converted into rigid body modes with amgcl::coarsening::rigid_body_modes(), after which we are ready to set up the solver and solve the system. Below is the output of the compiled program:

```
$ export OMP_NUM_THREADS=1
$ mpirun -np 4 nullspace_mpi A.bin b.bin C.bin
Matrix A.bin: 81657x81657
RHS b.bin: 81657x1
Coords C.bin: 27219x3
Partitioning[ParMETIS] 4 -> 4
Type:             CG
Unknowns:         19965
Memory footprint: 311.95 K

Number of levels:    3
Operator complexity: 1.53
Grid complexity:     1.10

level     unknowns       nonzeros
---------------------------------
    0        81657        3171111 (65.31%) [4]
    1         7824        1674144 (34.48%) [4]
    2          144          10224 ( 0.21%) [4]

Iters: 104
Error: 9.26388e-09

[Nullspace:       2.833 s] (100.00%)
[ self:           0.070 s] (  2.48%)
[  partition:     0.230 s] (  8.10%)
[  read:          0.009 s] (  0.32%)
[  setup:         1.081 s] ( 38.15%)
[  solve:         1.443 s] ( 50.94%)
```