Matrices can be linearly combined just like vectors can
be linearly combined; after all, n-vectors are just n × 1
matrices.
However, multiplication of matrices
is more complicated.
An easy way to multiply two matrices A and B is to think of B as a group of columns and use the matrix-vector product on each column.
The columns of C=AB are just
products of A with the respective columns of B. For this to work
properly, the number of columns of A must be the same as the number of
rows of B. Another way to compute C is element by element; c_ij is
the inner product of row i of A with column j of B.
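To make the two descriptions concrete, here is a short NumPy sketch
(the matrices are made-up examples, not from the text) that builds
C = AB column by column and entry by entry, and checks both against
the built-in product.

    import numpy as np

    # Made-up matrices: A is 2 x 3 and B is 3 x 2, so the number of
    # columns of A matches the number of rows of B and C = AB is 2 x 2.
    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])
    B = np.array([[7.0,  8.0],
                  [9.0, 10.0],
                  [11.0, 12.0]])

    # Column view: column j of C is the matrix-vector product A @ B[:, j].
    C_cols = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])

    # Entry view: c_ij is the inner product of row i of A with column j of B.
    C_entries = np.array([[A[i, :] @ B[:, j] for j in range(B.shape[1])]
                          for i in range(A.shape[0])])

    assert np.allclose(C_cols, A @ B)
    assert np.allclose(C_entries, A @ B)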
An unfortunate fact about the matrix product is that usually AB ≠ BA
(most matrices do not commute). You are probably very used to using
ab = ba when working with scalars a and b, but when you are doing matrix
algebra, it is very important to remember that you cannot assume AB = BA.
One consequence of this is that the transpose of a product of matrices is
the product of the transposes of the individual matrices, but with the
order of the terms reversed: (AB)^T = B^T A^T.
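A quick numerical check of both facts, with made-up random matrices (a
sketch, not anything from the text):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))

    # Most matrices do not commute: AB and BA generally differ.
    print(np.allclose(A @ B, B @ A))          # almost surely False

    # The transpose of a product reverses the order of the factors.
    print(np.allclose((A @ B).T, B.T @ A.T))  # True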
Inverses of matrices can be used to find solutions to linear systems. The
scalar equation ax=b has the solution x=b/a; the correct analogue for
the solution to the linear system Ax = b is x = A^{-1}b.
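A small sketch of the analogy, using a made-up invertible system (NumPy,
not from the text):

    import numpy as np

    # Made-up invertible system A x = b.
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([5.0, 10.0])

    x = np.linalg.inv(A) @ b      # x = A^{-1} b, the analogue of x = b/a
    print(np.allclose(A @ x, b))  # True

    # In numerical work it is usually better to solve directly than to
    # form the inverse explicitly.
    print(np.allclose(x, np.linalg.solve(A, b)))  # True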
An n × n matrix A is invertible (A^{-1}, the unique inverse, exists) if
and only if Ax = b has a unique solution for all b in R^n. The inverse
of a product of square matrices is the product of the inverses of the
terms, with the order of the terms reversed: (AB)^{-1} = B^{-1}A^{-1}.
There is a simple formula for the inverse of a
2 × 2 matrix.
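That formula can be sketched as a small helper; the name inverse_2x2 is
an illustrative choice, not anything from the text:

    import numpy as np

    def inverse_2x2(A):
        """Inverse of a 2 x 2 matrix via the ad - bc formula."""
        a, b = A[0]
        c, d = A[1]
        det = a * d - b * c
        if det == 0:
            raise ValueError("matrix is not invertible")
        return np.array([[ d, -b],
                         [-c,  a]]) / det

    A = np.array([[4.0, 7.0],
                  [2.0, 6.0]])
    print(np.allclose(inverse_2x2(A), np.linalg.inv(A)))  # True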
For larger matrices, elementary row
operations can be used to find inverses. The whole process is equivalent
to the simultaneous solution of a set of n linear systems whose
right-hand sides are the columns of the n × n identity matrix I. In
practice,
you take A and augment it with I. Then row reduce A completely (if
possible), performing the same row operations on I. When you are
finished, the part of the augmented matrix that was originally occupied
by I now contains A^{-1}.
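The whole procedure can be sketched in a few lines of NumPy. This is an
illustrative implementation of the augment-and-row-reduce idea (with row
swaps added for numerical safety), not code from the text:

    import numpy as np

    def inverse_by_row_reduction(A):
        """Row reduce [A | I]; if A reduces to I, the right half is A^{-1}."""
        A = np.array(A, dtype=float)
        n = A.shape[0]
        M = np.hstack([A, np.eye(n)])    # augment A with the identity
        for col in range(n):
            # Swap in the row with the largest available pivot.
            pivot = col + np.argmax(np.abs(M[col:, col]))
            if np.isclose(M[pivot, col], 0.0):
                raise ValueError("matrix is not invertible")
            M[[col, pivot]] = M[[pivot, col]]
            M[col] /= M[col, col]        # scale the pivot row to get a 1
            for row in range(n):         # clear the rest of the column
                if row != col:
                    M[row] -= M[row, col] * M[col]
        return M[:, n:]                  # the part that started out as I

    A = np.array([[2.0, 1.0, 1.0],
                  [1.0, 3.0, 2.0],
                  [1.0, 0.0, 0.0]])
    print(np.allclose(inverse_by_row_reduction(A) @ A, np.eye(3)))  # True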