Backends¶
A backend in AMGCL is a class that defines matrix and vector types together with several operations on them, such as creation, matrix-vector products, elementwise vector operations, inner products, etc. The <amgcl/backend/interface.hpp> file defines an interface that each backend should extend. The AMG hierarchy is moved to the specified backend upon construction, and the solution phase then uses the types and operations defined by the backend. This enables transparent acceleration of the solution phase with OpenMP, OpenCL, CUDA, or any other technology.
In order to use a backend, the user must include its definition from the corresponding file inside the amgcl/backend folder. On the user side of things, only the types of the right-hand side and the solution vectors are affected by the choice of AMGCL backend. Here is an example of using the builtin backend. First, we need to include the appropriate header:
#include <amgcl/backend/builtin.hpp>
Then, we construct the solver and apply it to vectors of the types supported by the backend:
typedef amgcl::backend::builtin<double> Backend;
typedef amgcl::make_solver<
    amgcl::amg<
        Backend,
        amgcl::coarsening::aggregation,
        amgcl::relaxation::spai0
        >,
    amgcl::solver::gmres<Backend>
    > Solver;
Solver solve(A);
std::vector<double> rhs, x; // Initialized elsewhere
solve(rhs, x);
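The snippet above assumes the system matrix A has already been assembled. AMGCL accepts the matrix in the zero-based CRS (compressed row storage) format, typically passed to the solver constructor as a tuple of the matrix size and the three CRS arrays. As an illustration (the helper name poisson1d is ours, not part of AMGCL), a one-dimensional Poisson matrix could be assembled like this:

```cpp
#include <cstddef>
#include <tuple>
#include <vector>

// Assemble the classic 1D Poisson (tridiagonal) matrix of size n in the
// zero-based CRS format. The helper name poisson1d is ours; AMGCL itself
// only needs the resulting (ptr, col, val) arrays.
std::tuple<std::vector<ptrdiff_t>, std::vector<ptrdiff_t>, std::vector<double>>
poisson1d(ptrdiff_t n) {
    std::vector<ptrdiff_t> ptr; ptr.reserve(n + 1);
    std::vector<ptrdiff_t> col; col.reserve(3 * n);
    std::vector<double>    val; val.reserve(3 * n);

    ptr.push_back(0);
    for (ptrdiff_t i = 0; i < n; ++i) {
        if (i > 0) {            // subdiagonal entry
            col.push_back(i - 1);
            val.push_back(-1.0);
        }
        col.push_back(i);       // diagonal entry
        val.push_back(2.0);
        if (i + 1 < n) {        // superdiagonal entry
            col.push_back(i + 1);
            val.push_back(-1.0);
        }
        ptr.push_back(static_cast<ptrdiff_t>(col.size()));
    }
    return std::make_tuple(ptr, col, val);
}
```

The arrays can then be handed to the solver constructor as `Solver solve(std::tie(n, ptr, col, val));`.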
Now, if we want to switch to a different backend, for example, in order to accelerate the solution phase with a powerful GPU, we just need to include another backend header and change the definitions of Backend, rhs, and x. Here is an example of what needs to be done to use the VexCL backend.
Include the correct header:
#include <amgcl/backend/vexcl.hpp>
Change the definition of Backend:
typedef amgcl::backend::vexcl<double> Backend;
Change the definition of the vectors:
vex::vector<double> rhs, x;
That’s it! Well, almost. In case the backend requires some parameters, we also need to provide those. In particular, the VexCL backend should know what VexCL context to use:
// Initialize VexCL context on a single GPU:
vex::Context ctx(vex::Filter::GPU && vex::Filter::Count(1));
// Create backend parameters:
Backend::params backend_prm;
backend_prm.q = ctx;
// Pass the parameters to the solver constructor:
Solver solve(A, Solver::params(), backend_prm);
Builtin¶
#include <amgcl/backend/builtin.hpp>

template <typename ValueType>
struct amgcl::backend::builtin¶

The builtin backend does not have any dependencies and uses OpenMP for parallelization. Matrices are stored in the CRS format, and vectors are instances of std::vector<value_type>. The usual overhead of moving the constructed hierarchy to the backend is absent here, since the builtin backend is used internally during the setup.
VexCL¶
#include <amgcl/backend/vexcl.hpp>

template <typename real, class DirectSolver = solver::vexcl_skyline_lu<real>>
struct amgcl::backend::vexcl¶

The backend uses the VexCL library to accelerate the solution phase on modern GPUs and multicore processors with the help of OpenCL or CUDA. The VexCL backend stores the system matrix as vex::SpMat<real> and expects the right-hand side and the solution vectors to be instances of the vex::vector<real> type.
struct amgcl::backend::vexcl::params¶

The VexCL backend parameters.

Public Members

std::vector<vex::backend::command_queue> amgcl::backend::vexcl<real, DirectSolver>::params::q¶

Command queues that identify the compute devices to use with VexCL.

bool amgcl::backend::vexcl<real, DirectSolver>::params::fast_matrix_setup¶

Do the CSR to ELL conversion on the GPU side. This will result in a faster setup, but will require more GPU memory.
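Both members can be set on the parameter object before it is passed to the solver constructor. A short sketch, continuing the VexCL example above (where ctx, A, Backend, and Solver are assumed to be defined as earlier in this section):

```cpp
// Sketch: fill in the VexCL backend parameters before constructing the
// solver. fast_matrix_setup trades additional GPU memory for a quicker
// CSR-to-ELL conversion during setup.
Backend::params backend_prm;
backend_prm.q = ctx;                  // devices obtained from the VexCL context
backend_prm.fast_matrix_setup = true; // do the format conversion on the GPU

Solver solve(A, Solver::params(), backend_prm);
```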
