Linear Algebraic matrix computations operating on DoubleMatrix2D and DoubleMatrix1D.

Interface  Description 

Blas 
Subset of the BLAS (Basic Linear Algebra Subprograms):
high-quality "building block" routines for performing basic vector and matrix operations.

Matrix2DMatrix2DFunction 
Interface that represents a function object: a function that takes
two arguments and returns a single value.

Class  Description 

Algebra 
Linear algebraic matrix operations operating on
DoubleMatrix2D; concentrates most functionality of this package. 
CholeskyDecomposition 
For a symmetric, positive definite matrix A, the Cholesky decomposition
is a lower triangular matrix L so that A = L*L'.
If the matrix is not symmetric or positive definite, the constructor
returns a partial decomposition and sets an internal flag that may
be queried by the isSymmetricPositiveDefinite() method.
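As a self-contained illustration of what this decomposition computes, the following plain-Java sketch factors a small symmetric positive definite matrix. It is not the Colt implementation, and the class and method names are hypothetical:

```java
// Illustrative Cholesky factorization A = L*L' of a symmetric positive
// definite matrix. Plain-Java sketch, not the Colt implementation.
public class CholeskySketch {
    // Returns the lower triangular factor L of a symmetric positive definite A.
    public static double[][] cholesky(double[][] a) {
        int n = a.length;
        double[][] l = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j <= i; j++) {
                double sum = a[i][j];
                for (int k = 0; k < j; k++) sum -= l[i][k] * l[j][k];
                if (i == j) {
                    if (sum <= 0) throw new IllegalArgumentException("not positive definite");
                    l[i][j] = Math.sqrt(sum);       // diagonal entry
                } else {
                    l[i][j] = sum / l[j][j];        // below-diagonal entry
                }
            }
        }
        return l;
    }
}
```

For example, for A = {{4, 2}, {2, 3}} this yields L = {{2, 0}, {1, √2}}, and L*L' reproduces A.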

EigenvalueDecomposition 
Eigenvalues and eigenvectors of a real matrix A.

LUDecomposition 
For an m x n matrix A with m >= n, the LU decomposition is an m x n
unit lower triangular matrix L, an n x n upper triangular matrix U,
and a permutation vector piv of length m so that A(piv,:) = L*U.
If m < n, then L is m x m and U is m x n.
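The mechanics can be sketched in plain Java for the square case (Doolittle elimination with partial pivoting, storing L and U packed in one array as the Jama-derived classes do). This is an illustration with hypothetical names, not the Colt implementation:

```java
// Illustrative LU decomposition with partial pivoting, so that A(piv,:) = L*U.
// L (unit diagonal implied) is stored below the diagonal of lu, U on and above it.
// Square matrices only, for brevity. Plain-Java sketch, not the Colt implementation.
public class LUSketch {
    public final double[][] lu;
    public final int[] piv;

    public LUSketch(double[][] a) {
        int n = a.length;
        lu = new double[n][];
        for (int i = 0; i < n; i++) lu[i] = a[i].clone();
        piv = new int[n];
        for (int i = 0; i < n; i++) piv[i] = i;
        for (int k = 0; k < n; k++) {
            // Find the row with the largest pivot and swap it into place.
            int p = k;
            for (int i = k + 1; i < n; i++)
                if (Math.abs(lu[i][k]) > Math.abs(lu[p][k])) p = i;
            double[] t = lu[k]; lu[k] = lu[p]; lu[p] = t;
            int ti = piv[k]; piv[k] = piv[p]; piv[p] = ti;
            // Eliminate the entries below the pivot.
            for (int i = k + 1; i < n; i++) {
                lu[i][k] /= lu[k][k];
                for (int j = k + 1; j < n; j++) lu[i][j] -= lu[i][k] * lu[k][j];
            }
        }
    }
}
```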

LUDecompositionQuick 
A low-level version of
LUDecomposition, avoiding unnecessary memory allocation and copying. 
Property 
Tests matrices for linear algebraic properties (equality, tridiagonality, symmetry, singularity, etc.).

QRDecomposition 
For an m x n matrix A with m >= n, the QR decomposition is an m x n
orthogonal matrix Q and an n x n upper triangular matrix R so that
A = Q*R.
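One classical way to compute such a factorization is modified Gram-Schmidt, sketched below in plain Java as an illustration of the definition. The names are hypothetical and this is not the Colt implementation (which, like Jama, uses Householder reflections):

```java
// Illustrative QR factorization via modified Gram-Schmidt: A = Q*R with Q
// having orthonormal columns and R upper triangular. Plain-Java sketch.
public class QRSketch {
    // Returns {Q, R} for an m x n matrix A with linearly independent columns.
    public static double[][][] qr(double[][] a) {
        int m = a.length, n = a[0].length;
        double[][] q = new double[m][];
        double[][] r = new double[n][n];
        for (int i = 0; i < m; i++) q[i] = a[i].clone();
        for (int k = 0; k < n; k++) {
            double norm = 0;
            for (int i = 0; i < m; i++) norm += q[i][k] * q[i][k];
            r[k][k] = Math.sqrt(norm);
            for (int i = 0; i < m; i++) q[i][k] /= r[k][k];   // normalize column k
            for (int j = k + 1; j < n; j++) {
                double dot = 0;                                // project out column k
                for (int i = 0; i < m; i++) dot += q[i][k] * q[i][j];
                r[k][j] = dot;
                for (int i = 0; i < m; i++) q[i][j] -= dot * q[i][k];
            }
        }
        return new double[][][]{q, r};
    }
}
```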

SeqBlas 
Sequential implementation of the Basic Linear Algebra Subprograms.

SingularValueDecomposition 
For an m x n matrix A with m >= n, the singular value decomposition is
an m x n orthogonal matrix U, an n x n diagonal matrix S, and
an n x n orthogonal matrix V so that A = U*S*V'.

SmpBlas 
Parallel implementation of the Basic Linear Algebra Subprograms for symmetric multiprocessing (SMP) machines.

Linear Algebraic matrix computations operating on DoubleMatrix2D and DoubleMatrix1D.
The linalg package provides easy and performant access to compute-intensive linear algebra. Much functionality is concentrated in class Algebra.
Five fundamental matrix decompositions, which consist of pairs or triples of
matrices, permutation vectors, and the like, produce results in five decomposition
classes. These decompositions are accessed by the Algebra class
to compute solutions of simultaneous linear equations, determinants, inverses
and other matrix functions. The five decompositions are the Cholesky, eigenvalue, LU, QR and singular value decompositions.
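To illustrate how a decomposition yields solutions of simultaneous linear equations, the following self-contained plain-Java sketch performs LU-style Gaussian elimination followed by back substitution on a small system. It is an illustration only (no pivoting, hypothetical names); in Colt this functionality is reached through Algebra and the decomposition classes:

```java
// Illustrative solve of A*x = b via Gaussian elimination (an LU-style
// factorization) and back substitution. Plain-Java sketch, no pivoting,
// small well-conditioned square systems only.
public class SolveSketch {
    // Solves A*x = b, overwriting a and b; returns x.
    public static double[] solve(double[][] a, double[] b) {
        int n = a.length;
        for (int k = 0; k < n; k++) {
            for (int i = k + 1; i < n; i++) {
                double f = a[i][k] / a[k][k];              // elimination multiplier
                for (int j = k; j < n; j++) a[i][j] -= f * a[k][j];
                b[i] -= f * b[k];
            }
        }
        double[] x = new double[n];
        for (int i = n - 1; i >= 0; i--) {                 // back substitution
            double s = b[i];
            for (int j = i + 1; j < n; j++) s -= a[i][j] * x[j];
            x[i] = s / a[i][i];
        }
        return x;
    }
}
```

For A = {{2, 1}, {1, 3}} and b = {3, 5}, the solution is x = {0.8, 1.4}.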
This package could be rolled out quickly because it is to a large degree adapted from the interfaces and implementations of the Jama matrix package. See the Jama homepage. Due credit is given to Joe Hicklin, Cleve Moler, Peter Webb, Ronald F. Boisvert, Bruce Miller, Roldan Pozo and Karin Remington, the Jama authors from MathWorks and NIST.
Jama matrices are of type Jama.Matrix, Colt matrices of type cern.colt.matrix.DoubleMatrix1D, cern.colt.matrix.DoubleMatrix2D and cern.colt.matrix.DoubleMatrix3D.
Jama.Matrix is not a general-purpose array class. It is designed for a single special purpose: linear algebra. Because of its limited scope, Jama can combine data structure and algorithms in a single class, Jama.Matrix. In contrast, Colt matrices are general-purpose array classes. Since multi-dimensional matrices (arrays) have many applications, of which only one is linear algebra, the Colt matrix packages are designed to avoid fat interfaces, yet form the basis on top of which a broad set of functionality and applications can be defined (a similar spirit is found in the STL and the IBM Array package). Thus, data structures and special-purpose algorithms are separated. Class Algebra works on DoubleMatrix2D and contains the operations of Jama.Matrix, but holds no data structure. Class DoubleMatrix2D contains an efficient and flexible multi-dimensional array data structure, as well as multi-purpose operations, but (almost) no linear algebraic operations.
As a consequence, a Colt user initially faces some additional complexity, but after getting used to this design will appreciate that logically related functionality is logically separated. For example, a user who is not interested in formatting, sorting, partitioning, statistics, etc. does not see this functionality, because it is defined neither in the linalg package nor in the matrix package, but elsewhere.
Perhaps more importantly, such a design will scale over time, as more and more functionality from many scientific and engineering domains is added. Also see matrix algorithms.
All methods of Jama.Matrix are provided in Algebra, except for some less important convenience methods. Colt matrices (similar to IBM Arrays) are powerful and flexible data structures. Sub-range, slice, dice, flip, selection and sort views are available for Colt matrices, but not for Jama matrices. (They are difficult to implement efficiently with Jama matrices, because those internally use double[][] arrays.)
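The idea behind such views can be sketched in a few lines of plain Java: a view shares the underlying array and only remaps indices, so flipping or taking a sub-range copies nothing. This is an illustration with hypothetical names, not the Colt API:

```java
// Illustrative zero-copy vector "view" in the spirit of Colt's flip and
// sub-range views. The view shares the backing array and remaps indices.
public class VectorView {
    private final double[] data;
    private final int offset, stride, size;

    public VectorView(double[] data, int offset, int stride, int size) {
        this.data = data; this.offset = offset; this.stride = stride; this.size = size;
    }
    public double get(int i) { return data[offset + i * stride]; }
    public int size() { return size; }
    // A flipped view traverses the same cells in reverse order, copying nothing.
    public VectorView viewFlip() {
        return new VectorView(data, offset + (size - 1) * stride, -stride, size);
    }
    // A sub-range view exposes cells [from, from + length) of this view.
    public VectorView viewPart(int from, int length) {
        return new VectorView(data, offset + from * stride, stride, length);
    }
}
```

Views compose: a flipped sub-range of a view is itself just another (offset, stride, size) triple over the same array.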
No extensive performance studies have been carried out so far.
Jama matrices weakly encapsulate a normal double[][] array. Dense Colt
matrices strongly encapsulate a double[] array and use some arithmetic
to address cells in 2-d. Addressing a cell is more expensive with double[][]
arrays, due to bounds checking, null pointer checks, non-contiguous memory, and
the difficulty compilers have optimizing such code. With double[]
arrays, less bounds checking, fewer null pointer checks, better cache locality
and better compiler optimizations can be achieved, often eliminating bounds
and null pointer checks entirely and paving the way for effective pipelining. See the publications
of IBM Watson's Ninja project.
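The addressing arithmetic in question is a single multiply-add per cell. The following plain-Java sketch (hypothetical names, a simplification of the actual Colt internals) shows a dense 2-d matrix backed by one flat double[]:

```java
// Illustrative 2-d cell addressing over a single flat double[], as dense
// Colt matrices do internally: index = row * columns + column.
// Plain-Java sketch; the real Colt classes add offsets and strides for views.
public class FlatMatrix {
    final double[] elements;   // one contiguous block for all cells
    final int rows, columns;

    public FlatMatrix(int rows, int columns) {
        this.rows = rows; this.columns = columns;
        this.elements = new double[rows * columns];
    }
    // One multiply-add replaces the double[][] row lookup with its extra
    // null and bounds checks and non-contiguous row storage.
    public double get(int row, int column) { return elements[row * columns + column]; }
    public void set(int row, int column, double v) { elements[row * columns + column] = v; }
}
```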
To improve performance, matrix computations should use highly optimized kernels in innermost loops. These kernels are not part of class Algebra but part of DoubleMatrix2D and DoubleMatrix1D; otherwise they could not be fully optimized. For example, with some arithmetic (not exposed to the user), a loop over a 1-d or 2-d matrix can internally reduce cell addressing overhead. Some of the most critical types of (innermost) loop operations have a corresponding optimized method in DoubleMatrix2D and DoubleMatrix1D. For example, dot products, multiplications, assign(function) transforms and aggregate methods are such internally specialized kernels. Feedback may result in a few more optimized kernels. Thus, in the name of performance, in a few cases algorithms and data structures are not completely separated.
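The shape of such an aggregate kernel can be sketched in plain Java: the elementwise function g and the reduction f are fused into one pass over the data, so a dot product is just one instantiation. This is an illustration with hypothetical names, not the Colt signatures:

```java
import java.util.function.DoubleBinaryOperator;

// Illustrative aggregate(f, g) kernel: a generalized dot product that fuses
// the elementwise map g with the reduction f in a single loop, in the spirit
// of Colt's aggregate methods. Plain-Java sketch.
public class AggregateSketch {
    // result = f(... f(g(a[0],b[0]), g(a[1],b[1])) ..., g(a[n-1],b[n-1]))
    public static double aggregate(double[] a, double[] b,
                                   DoubleBinaryOperator f, DoubleBinaryOperator g) {
        double acc = g.applyAsDouble(a[0], b[0]);
        for (int i = 1; i < a.length; i++)
            acc = f.applyAsDouble(acc, g.applyAsDouble(a[i], b[i]));
        return acc;
    }
}
```

A dot product is then aggregate(a, b, Double::sum, (x, y) -> x * y); a sum of squared differences is aggregate(a, b, Double::sum, (x, y) -> (x - y) * (x - y)).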
Some internal optimizations have been introduced, in particular for multiplications, the LUDecomposition and the CholeskyDecomposition. The other decomposition classes are almost identical to the corresponding Jama classes; as such they are functional, but not (yet) particularly efficient.
For small matrices, you may be better off using Sun's Java 3D 1.2; see the javax.vecmath spec and the javax.vecmath javadoc.
We are looking for volunteers!
Do you have a background in Matrix Computations?
Do you care about performance and usability?
Are you enthusiastic about Open Source?
With your help, this package could become better and richer!
Contact wolfgang.hoschek@cern.ch
for more info.
Copyright © 2006–2019 SYSTAP, LLC DBA Blazegraph. All rights reserved.