Matrices and vectors, with basic linear algebra operations.

The basic data structures in JMP are matrices and vectors, stored in double precision (double). All indices are 0-based, as is typical of C and Java, in contrast to Fortran's 1-based indexing. Complex numbers are not supported.

Basic interface

The most fundamental interfaces are {@link jmp.Matrix Matrix} and {@link jmp.Vector Vector}. By themselves, they say little about the underlying data structure, as they expose only size-information methods.

Extending Matrix is {@link jmp.ElementalAccessMatrix ElementalAccessMatrix}, which has simple assembly and retrieval methods, both elementwise and blockwise. The block operations are typically more efficient. There is a similar interface, {@link jmp.ElementalAccessVector ElementalAccessVector}, which extends the basic vector.
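For illustration, the following sketch assembles a small matrix both elementwise and blockwise. The method names used here (set and add, with block variants taking index arrays and a two-dimensional values array) are assumptions inferred from the description above, as is the constructor; consult the ElementalAccessMatrix Javadoc for the exact signatures:

  // Assumed constructor: number of rows, number of columns
  ElementalAccessMatrix A = new DenseRowMatrix(3, 3);

  // Elementwise assembly: set overwrites, add accumulates
  A.set(0, 0, 2.0);
  A.add(0, 0, 1.0); // entry (0,0) is now 3.0

  // Blockwise assembly of a 2x2 block; typically more efficient
  // than repeated elementwise calls (names/signatures assumed)
  int[] rows = {1, 2};
  int[] cols = {0, 1};
  double[][] block = {{4.0, 5.0},
                      {6.0, 7.0}};
  A.set(rows, cols, block);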

Using these methods, most operations can be accomplished easily. Two more interfaces are of interest, namely {@link jmp.ZeroColumnMatrix ZeroColumnMatrix} and {@link jmp.ZeroRowMatrix ZeroRowMatrix}. Matrices implementing these allow quick zeroing of columns or rows, so that boundary conditions for differential equations can be handled explicitly. For convenience, the interfaces {@link jmp.ElementalAccessZeroRowMatrix ElementalAccessZeroRowMatrix} and {@link jmp.ElementalAccessZeroColumnMatrix ElementalAccessZeroColumnMatrix} extend both the elemental access interface and one of the zeroing interfaces, and can be used in application codes to switch more easily between different underlying storage formats (dense, sparse, etc.).
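For example, Dirichlet boundary conditions can be imposed by zeroing the boundary rows of an assembled system. The sketch below is hypothetical: it assumes a zeroRows(int[] rows, double diagonal) method that zeroes the given rows and places the given value on the diagonal, and a sparse constructor taking a per-row allocation; check the ZeroRowMatrix and class Javadocs for the actual names and signatures:

  int n = 10; // system size
  // Assumed constructor: rows, columns, per-row allocation
  ElementalAccessZeroRowMatrix A = new SparseRowMatrix(n, n, 5);

  // ... assemble the interior equations ...

  // Zero the boundary rows and put 1 on the diagonal, so the
  // boundary equations read x_i = b_i (signature assumed)
  int[] boundary = {0, n - 1};
  A.zeroRows(boundary, 1.0);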

The aforementioned methods are sufficient for most applications, but JMP provides other ways of accessing and changing the data structures. This is typically done by direct access, and is generally not recommended except when developing linear algebra algorithms.
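For instance, a dense structure may expose its backing array. The accessor name below (getData) is purely hypothetical, used only to illustrate the idea of direct access; the actual method, if any, is documented on each concrete class:

  DenseVector x = new DenseVector(10);

  // Hypothetical accessor returning the backing array, not a copy
  double[] data = x.getData();
  data[0] = 1.0; // modifies x directly, bypassing elemental access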

Matrices and vectors

JMP provides ten matrix and three vector implementations. Of the matrices, five are sparse, four are dense, and one is block-structured. Among the sparse and dense matrices there are both row- and column-oriented variants. Choosing the right structure can have a significant impact on both performance and memory usage. Most of the matrices implement ElementalAccessMatrix, and most of the vectors implement ElementalAccessVector. Row-oriented matrices also implement ZeroRowMatrix, while the column-oriented ones implement ZeroColumnMatrix. The following table shows all the concrete types (a short construction sketch follows the notes below):

Name Storage
{@link jmp.DenseColumnMatrix DenseColumnMatrix} double[][], column major
{@link jmp.DenseColumnRowMatrix DenseColumnRowMatrix} double[], column major
{@link jmp.DenseRowMatrix DenseRowMatrix} double[][], row major
{@link jmp.DenseRowColumnMatrix DenseRowColumnMatrix} double[], row major
{@link jmp.SparseColumnMatrix SparseColumnMatrix} int[][]/double[][], column major, growable
{@link jmp.SparseColumnRowMatrix SparseColumnRowMatrix} int[]/double[], column major
{@link jmp.SparseRowMatrix SparseRowMatrix} int[][]/double[][], row major, growable
{@link jmp.SparseRowColumnMatrix SparseRowColumnMatrix} int[]/double[], row major
{@link jmp.BlockMatrix BlockMatrix} int[][]/int[][]/Matrix[]
{@link jmp.BlockVector BlockVector} int[]/Vector[]
{@link jmp.CoordinateMatrix CoordinateMatrix} int[]/int[]/double[]
{@link jmp.DenseVector DenseVector} double[]
{@link jmp.SparseVector SparseVector} int[]/double[], growable

Notes:

  1. Growable sparse structures can exceed their initial bandwidth allocation; non-growable structures have a fixed allocation, and exceeding it causes an error
  2. Column major matrices typically perform better with column-oriented operations, and likewise for row major matrices. Matrix/vector multiplication is a row-oriented operation, while transpose matrix/vector multiplication is column-oriented.
  3. Matrices using one-dimensional storage typically exhibit higher performance than matrices with two-dimensional storage
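
As a rough guide to choosing among these, the sketch below constructs a few of the types. The constructor signatures are assumptions (in particular the bandwidth argument of the sparse constructor); consult the class Javadocs:

  // Dense, one-dimensional row major storage; see note 3
  Matrix dense = new DenseRowColumnMatrix(100, 100);

  // Growable, row-oriented sparse matrix; the third argument is
  // assumed to be the preallocated number of nonzeros per row
  Matrix sparse = new SparseRowMatrix(100, 100, 5);

  // Dense vector of length 100
  Vector x = new DenseVector(100);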

Basic linear algebra operations and parallelization

All the basic linear algebra operations are defined in the interface {@link jmp.BLAS BLAS} (Basic Linear Algebra Subprograms). The focus is on methods for sparse matrices; as such, not all the methods found in the dense Fortran BLAS are present, and some methods have been added. Of particular note is the absence of matrix/matrix multiplication and triangular solvers.

There are two implementations of BLAS, {@link jmp.SequentialBLAS SequentialBLAS} and {@link jmp.ParallelBLAS ParallelBLAS}. {@link jmp.State State} contains a reference to the current BLAS, and users should not need to create their own BLAS objects. Instead, use the method {@link jmp.State#setNumThreads setNumThreads} to set the degree of parallelization:

  // Use two threads for parallelization
  State.setNumThreads(2);

  // Get the current BLAS
  BLAS blas = State.BLAS;

  // Perform computations with blas
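  // For example, compute y = y + A*x. The multAdd name and
  // signature are assumptions; see the jmp.BLAS Javadoc for the
  // operations actually offered (constructors assumed as above)
  Matrix A = new SparseRowMatrix(100, 100, 5);
  Vector x = new DenseVector(100);
  Vector y = new DenseVector(100);
  blas.multAdd(A, x, y);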

For more information on parallelization, see {@link jmp.util.ParallelWorker ParallelWorker}.