
Linear Algebra (scipy.linalg)
When SciPy is built using the optimized ATLAS LAPACK and BLAS
libraries, it has very fast linear algebra capabilities. If you dig
deep enough, all of the raw LAPACK and BLAS libraries are available
for your use for even more speed. In this section, some easier-to-use
interfaces to these routines are described.
All of these linear algebra routines expect an object that can be
converted into a 2-dimensional array. The output of these routines is
also a two-dimensional array.
scipy.linalg vs numpy.linalg
scipy.linalg contains all the functions in numpy.linalg,
plus some other more advanced ones not contained in numpy.linalg.
Another advantage of using scipy.linalg over numpy.linalg is that
it is always compiled with BLAS/LAPACK support, while for numpy this is
optional. Therefore, the scipy version might be faster depending on how
numpy was installed.
Therefore, unless you don’t want to add scipy as a dependency to
your numpy program, use scipy.linalg instead of numpy.linalg.
numpy.matrix vs 2D numpy.ndarray
The classes that represent matrices, and basic operations such as
matrix multiplications and transpose are a part of numpy.
For convenience, we summarize the differences between numpy.matrix
and numpy.ndarray here.
numpy.matrix is a matrix class that has a more convenient interface
than numpy.ndarray for matrix operations. This class supports, for
example, MATLAB-like creation syntax via the semicolon, has matrix
multiplication as the default for the * operator, and contains I
and T members that serve as shortcuts for inverse and transpose:
>>> import numpy as np
>>> A = np.mat('[1 2;3 4]')
>>> A
matrix([[1, 2],
        [3, 4]])
>>> A.I
matrix([[-2. ,  1. ],
        [ 1.5, -0.5]])
>>> b = np.mat('[5 6]')
>>> b
matrix([[5, 6]])
>>> b.T
matrix([[5],
        [6]])
>>> A*b.T
matrix([[17],
        [39]])
Despite its convenience, the use of the numpy.matrix class is
discouraged, since it adds nothing that cannot be accomplished
with 2D numpy.ndarray objects, and may lead to a confusion of which class
is being used. For example, the above code can be rewritten as:
>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1,2],[3,4]])
>>> A
array([[1, 2],
       [3, 4]])
>>> linalg.inv(A)
array([[-2. ,  1. ],
       [ 1.5, -0.5]])
>>> b = np.array([[5,6]]) #2D array
>>> b
array([[5, 6]])
>>> b.T
array([[5],
       [6]])
>>> A*b #not matrix multiplication!
array([[ 5, 12],
       [15, 24]])
>>> A.dot(b.T) #matrix multiplication
array([[17],
       [39]])
>>> b = np.array([5,6]) #1D array
>>> b
array([5, 6])
>>> b.T  #not matrix transpose!
array([5, 6])
>>> A.dot(b)  #does not matter for multiplication
array([17, 39])
scipy.linalg operations can be applied equally to
numpy.matrix or to 2D numpy.ndarray objects.
Basic routines
Finding Inverse
The inverse of a matrix \(\mathbf{A}\) is the matrix
\(\mathbf{B}\) such that \(\mathbf{AB}=\mathbf{I}\) where
\(\mathbf{I}\) is the identity matrix consisting of ones down the
main diagonal.
Usually \(\mathbf{B}\) is denoted
\(\mathbf{B}=\mathbf{A}^{-1}\). In SciPy, the matrix inverse of
the NumPy array A is obtained using linalg.inv(A), or
using A.I if A is a Matrix. For example, let
\[\begin{split}\mathbf{A} = \left[\begin{array}{ccc} 1 & 3 & 5\\ 2 & 5 & 1\\ 2 & 3 & 8\end{array}\right]\end{split}\]
then

\[\begin{split}\mathbf{A^{-1}} = \frac{1}{25}
\left[\begin{array}{ccc} -37 & 9 & 22 \\
14 & 2 & -9 \\
4 & -3 & 1
\end{array}\right] = %
\left[\begin{array}{ccc} -1.48 & 0.36 & 0.88 \\
0.56 & 0.08 & -0.36 \\
0.16 & -0.12 & 0.04
\end{array}\right].\end{split}\]
The following example demonstrates this computation in SciPy:
>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1,2],[3,4]])
>>> A
array([[1, 2],
       [3, 4]])
>>> linalg.inv(A)
array([[-2. ,  1. ],
       [ 1.5, -0.5]])
>>> A.dot(linalg.inv(A)) #double check
array([[ 1.,  0.],
       [ 0.,  1.]])
Solving linear system
Solving linear systems of equations is straightforward using the scipy
command linalg.solve. This command expects an input matrix and
a right-hand-side vector. The solution vector is then computed. An
option for entering a symmetric matrix is offered, which can speed up
the processing when applicable.
As an example, suppose it is desired
to solve the following simultaneous equations:
\[ \begin{eqnarray*} x + 3y + 5z & = & 10 \\
2x + 5y + z & = & 8 \\
2x + 3y + 8z & = & 3
\end{eqnarray*}\]

We could find the solution vector using a matrix inverse:
\[\begin{split}\left[\begin{array}{c} x\\ y\\ z\end{array}\right]=\left[\begin{array}{ccc} 1 & 3 & 5\\ 2 & 5 & 1\\ 2 & 3 & 8\end{array}\right]^{-1}\left[\begin{array}{c} 10\\ 8\\ 3\end{array}\right]=\frac{1}{25}\left[\begin{array}{c} -232\\ 129\\ 19\end{array}\right]=\left[\begin{array}{c} -9.28\\ 5.16\\ 0.76\end{array}\right].\end{split}\]
However, it is better to use the linalg.solve command, which can be
faster and more numerically stable. In this case, however, it gives the
same answer, as shown in the following example:
>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1, 2], [3, 4]])
>>> A
array([[1, 2],
       [3, 4]])
>>> b = np.array([[5], [6]])
>>> b
array([[5],
       [6]])
>>> linalg.inv(A).dot(b)  # slow
array([[-4. ],
       [ 4.5]])
>>> A.dot(linalg.inv(A).dot(b)) - b  # check
array([[  8.88178420e-16],
       [  2.66453526e-15]])
>>> np.linalg.solve(A, b)  # fast
array([[-4. ],
       [ 4.5]])
>>> A.dot(np.linalg.solve(A, b)) - b  # check
array([[ 0.],
       [ 0.]])
Finding Determinant
The determinant of a square matrix \(\mathbf{A}\) is often denoted
\(\left|\mathbf{A}\right|\) and is a quantity often used in linear
algebra. Suppose \(a_{ij}\) are the elements of the matrix
\(\mathbf{A}\) and let \(M_{ij}=\left|\mathbf{A}_{ij}\right|\)
be the determinant of the matrix left by removing the
\(i^{\textrm{th}}\) row and \(j^{\textrm{th}}\) column from
\(\mathbf{A}\) . Then for any row \(i,\)
\[\left|\mathbf{A}\right|=\sum_{j}\left(-1\right)^{i+j}a_{ij}M_{ij}.\]
This is a recursive way to define the determinant, where the base case
is defined by accepting that the determinant of a \(1\times1\) matrix
is the only matrix element. In SciPy the determinant can be
calculated with linalg.det . For example, the determinant of
\[\begin{split}\mathbf{A=}\left[\begin{array}{ccc} 1 & 3 & 5\\ 2 & 5 & 1\\ 2 & 3 & 8\end{array}\right]\end{split}\]
\[ \begin{eqnarray*} \left|\mathbf{A}\right| & = & 1\left|\begin{array}{cc} 5 & 1\\ 3 & 8\end{array}\right|-3\left|\begin{array}{cc} 2 & 1\\ 2 & 8\end{array}\right|+5\left|\begin{array}{cc} 2 & 5\\ 2 & 3\end{array}\right|\\
& = & 1\left(5\cdot8-3\cdot1\right)-3\left(2\cdot8-2\cdot1\right)+5\left(2\cdot3-2\cdot5\right)=-25.\end{eqnarray*}\]

In SciPy this is computed as shown in this example:
>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1,2],[3,4]])
>>> A
array([[1, 2],
       [3, 4]])
>>> linalg.det(A)
-2.0
Computing norms
Matrix and vector norms can also be computed with SciPy. A wide range
of norm definitions are available using different parameters to the
order argument of linalg.norm . This function takes a rank-1
(vectors) or a rank-2 (matrices) array and an optional order argument
(default is 2). Based on these inputs a vector or matrix norm of the
requested order is computed.
For vector x , the order parameter can be any real number including
inf or -inf. The computed norm is
\[\begin{split}\left\Vert \mathbf{x}\right\Vert =\left\{ \begin{array}{cc} \max\left|x_{i}\right| & \textrm{ord}=\textrm{inf}\\ \min\left|x_{i}\right| & \textrm{ord}=-\textrm{inf}\\ \left(\sum_{i}\left|x_{i}\right|^{\textrm{ord}}\right)^{1/\textrm{ord}} & \left|\textrm{ord}\right|<\infty.\end{array}\right.\end{split}\]
For matrix \(\mathbf{A}\) the only valid values for the order parameter are \(\pm2,\pm1,\) \(\pm\) inf, and ‘fro’ (or ‘f’). Thus,
\[\begin{split}\left\Vert \mathbf{A}\right\Vert =\left\{ \begin{array}{cc} \max_{i}\sum_{j}\left|a_{ij}\right| & \textrm{ord}=\textrm{inf}\\ \min_{i}\sum_{j}\left|a_{ij}\right| & \textrm{ord}=-\textrm{inf}\\ \max_{j}\sum_{i}\left|a_{ij}\right| & \textrm{ord}=1\\ \min_{j}\sum_{i}\left|a_{ij}\right| & \textrm{ord}=-1\\ \max\sigma_{i} & \textrm{ord}=2\\ \min\sigma_{i} & \textrm{ord}=-2\\ \sqrt{\textrm{trace}\left(\mathbf{A}^{H}\mathbf{A}\right)} & \textrm{ord}=\textrm{'fro'}\end{array}\right.\end{split}\]
where \(\sigma_{i}\) are the singular values of \(\mathbf{A}\) .
>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1, 2], [3, 4]])
>>> A
array([[1, 2],
       [3, 4]])
>>> linalg.norm(A)
5.4772255750516612
>>> linalg.norm(A, 'fro')   # frobenius norm is the default
5.4772255750516612
>>> linalg.norm(A, 1)       # L1 norm (max column sum)
6.0
>>> linalg.norm(A, -1)
4.0
>>> linalg.norm(A, np.inf)  # L inf norm (max row sum)
7.0
Solving linear least-squares problems and pseudo-inverses
Linear least-squares problems occur in many branches of applied
mathematics. In this problem a set of linear scaling coefficients is
sought that allow a model to fit data. In particular it is assumed
that data \(y_{i}\) is related to data \(\mathbf{x}_{i}\)
through a set of coefficients \(c_{j}\) and model functions
\(f_{j}\left(\mathbf{x}_{i}\right)\) via the model
\[y_{i}=\sum_{j}c_{j}f_{j}\left(\mathbf{x}_{i}\right)+\epsilon_{i}\]
where \(\epsilon_{i}\) represents uncertainty in the data. The
strategy of least squares is to pick the coefficients \(c_{j}\) to minimize

\[J\left(\mathbf{c}\right)=\sum_{i}\left|y_{i}-\sum_{j}c_{j}f_{j}\left(x_{i}\right)\right|^{2}.\]
Theoretically, a global minimum will occur when
\[\frac{\partial J}{\partial c_{n}^{*}}=0=\sum_{i}\left(y_{i}-\sum_{j}c_{j}f_{j}\left(x_{i}\right)\right)\left(-f_{n}^{*}\left(x_{i}\right)\right)\]
or

\[ \begin{eqnarray*} \sum_{j}c_{j}\sum_{i}f_{j}\left(x_{i}\right)f_{n}^{*}\left(x_{i}\right) & = & \sum_{i}y_{i}f_{n}^{*}\left(x_{i}\right)\\ \mathbf{A}^{H}\mathbf{Ac} & = & \mathbf{A}^{H}\mathbf{y}\end{eqnarray*}\]

where

\[\left\{ \mathbf{A}\right\} _{ij}=f_{j}\left(x_{i}\right).\]
When \(\mathbf{A^{H}A}\) is invertible, then
\[\mathbf{c}=\left(\mathbf{A}^{H}\mathbf{A}\right)^{-1}\mathbf{A}^{H}\mathbf{y}=\mathbf{A}^{\dagger}\mathbf{y}\]
where \(\mathbf{A}^{\dagger}\) is called the pseudo-inverse of
\(\mathbf{A}.\) Notice that using this definition of
\(\mathbf{A}\) the model can be written
\[\mathbf{y}=\mathbf{Ac}+\boldsymbol{\epsilon}.\]
The command linalg.lstsq will solve the linear least squares
problem for \(\mathbf{c}\) given \(\mathbf{A}\) and
\(\mathbf{y}\). In addition, linalg.pinv or
linalg.pinv2 (which uses a different method based on singular value
decomposition) will find \(\mathbf{A}^{\dagger}\) given
\(\mathbf{A}.\)
The following example and figure demonstrate the use of
linalg.lstsq and linalg.pinv for solving a data-fitting
problem. The data shown below were generated using the model:
\[y_{i}=c_{1}e^{-x_{i}}+c_{2}x_{i}\]
where \(x_{i}=0.1i\) for \(i=1\ldots10\), \(c_{1}=5\),
and \(c_{2}=2\). Noise is added to \(y_{i}\) and the
coefficients \(c_{1}\) and \(c_{2}\) are estimated using
linear least squares.
>>> import numpy as np
>>> from scipy import linalg
>>> import matplotlib.pyplot as plt
>>> c1, c2 = 5.0, 2.0
>>> i = np.r_[1:11]
>>> xi = 0.1*i
>>> yi = c1*np.exp(-xi) + c2*xi
>>> zi = yi + 0.05 * np.max(yi) * np.random.randn(len(yi))
>>> A = np.c_[np.exp(-xi)[:, np.newaxis], xi[:, np.newaxis]]
>>> c, resid, rank, sigma = linalg.lstsq(A, zi)
>>> xi2 = np.r_[0.1:1.0:100j]
>>> yi2 = c[0]*np.exp(-xi2) + c[1]*xi2
>>> plt.plot(xi, zi, 'x', xi2, yi2)
>>> plt.axis([0, 1.1, 3.0, 5.5])
>>> plt.xlabel('$x_i$')
>>> plt.title('Data fitting with linalg.lstsq')
>>> plt.show()
Generalized inverse
The generalized inverse is calculated using the command
linalg.pinv or linalg.pinv2. These two commands differ
in how they compute the generalized inverse.
The first uses the
linalg.lstsq algorithm while the second uses singular value
decomposition. Let \(\mathbf{A}\) be an \(M\times N\) matrix,
then if \(M>N\) the generalized inverse is

\[\mathbf{A}^{\dagger}=\left(\mathbf{A}^{H}\mathbf{A}\right)^{-1}\mathbf{A}^{H}\]

while if \(M<N\) the generalized inverse is

\[\mathbf{A}^{\#}=\mathbf{A}^{H}\left(\mathbf{A}\mathbf{A}^{H}\right)^{-1}.\]

In the case that \(M=N\), then
\[\mathbf{A}^{\dagger}=\mathbf{A}^{\#}=\mathbf{A}^{-1}\]
as long as \(\mathbf{A}\) is invertible.
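For illustration, here is a short numerical check of these formulas with linalg.pinv and linalg.pinv2 (a minimal sketch; the tall matrix below is an arbitrary full-rank example, not from the original text):

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1., 2.], [3., 4.], [5., 6.]])        # M > N, full rank
>>> Adag = linalg.pinv(A)                               # least-squares based
>>> np.allclose(Adag, linalg.inv(A.T.dot(A)).dot(A.T))  # (A^H A)^{-1} A^H
True
>>> np.allclose(Adag.dot(A), np.eye(2))                 # acts as a left inverse
True
>>> np.allclose(Adag, linalg.pinv2(A))                  # SVD-based route agrees
True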
Decompositions
In many applications it is useful to decompose a matrix using other
representations. There are several decompositions supported by SciPy.
Eigenvalues and eigenvectors
The eigenvalue-eigenvector problem is one of the most commonly
employed linear algebra operations. In one popular form, the
eigenvalue-eigenvector problem is to find for some square matrix
\(\mathbf{A}\) scalars \(\lambda\) and corresponding vectors
\(\mathbf{v}\) such that
\[\mathbf{Av}=\lambda\mathbf{v}.\]
For an \(N\times N\) matrix, there are \(N\) (not necessarily
distinct) eigenvalues — roots of the (characteristic) polynomial
\[\left|\mathbf{A}-\lambda\mathbf{I}\right|=0.\]
The eigenvectors, \(\mathbf{v}\) , are also sometimes called right
eigenvectors to distinguish them from another set of left eigenvectors
that satisfy
\[\mathbf{v}_{L}^{H}\mathbf{A}=\lambda\mathbf{v}_{L}^{H}\]

or

\[\mathbf{A}^{H}\mathbf{v}_{L}=\lambda^{*}\mathbf{v}_{L}.\]
With its default optional arguments, the command linalg.eig
returns \(\lambda\) and \(\mathbf{v}.\) However, it can also
return \(\mathbf{v}_{L}\) and just \(\lambda\) by itself
(linalg.eigvals returns just \(\lambda\) as well).
In addition, linalg.eig can also solve the more general eigenvalue problem
\[ \begin{eqnarray*} \mathbf{Av} & = & \lambda\mathbf{Bv}\\ \mathbf{A}^{H}\mathbf{v}_{L} & = & \lambda^{*}\mathbf{B}^{H}\mathbf{v}_{L}\end{eqnarray*}\]

for square matrices \(\mathbf{A}\) and \(\mathbf{B}.\) The
standard eigenvalue problem is an example of the general eigenvalue
problem for \(\mathbf{B}=\mathbf{I}.\) When a generalized
eigenvalue problem can be solved, then it provides a decomposition of
\(\mathbf{A}\) as
\[\mathbf{A}=\mathbf{BV}\boldsymbol{\Lambda}\mathbf{V}^{-1}\]
where \(\mathbf{V}\) is the collection of eigenvectors into
columns and \(\boldsymbol{\Lambda}\) is a diagonal matrix of
eigenvalues.
By definition, eigenvectors are only defined up to a constant scale
factor. In SciPy, the scaling factor for the eigenvectors is chosen so
that \(\left\Vert \mathbf{v}\right\Vert^{2}=\sum_{i}v_{i}^{2}=1.\)
As an example, consider finding the eigenvalues and eigenvectors of
the matrix
\[\begin{split}\mathbf{A}=\left[\begin{array}{ccc} 1 & 5 & 2\\ 2 & 4 & 1\\ 3 & 6 & 2\end{array}\right].\end{split}\]
The characteristic polynomial is
\[ \begin{eqnarray*} \left|\mathbf{A}-\lambda\mathbf{I}\right| & = & \left(1-\lambda\right)\left[\left(4-\lambda\right)\left(2-\lambda\right)-6\right]-\\
 &  & 5\left[2\left(2-\lambda\right)-3\right]+2\left[12-3\left(4-\lambda\right)\right]\\
 & = & -\lambda^{3}+7\lambda^{2}+8\lambda-3.\end{eqnarray*}\]

The roots of this polynomial are the eigenvalues of \(\mathbf{A}\):
\[ \begin{eqnarray*} \lambda_{1} & = & 7.9579\\ \lambda_{2} & = & -1.2577\\ \lambda_{3} & = & 0.2997.\end{eqnarray*}\]

The eigenvectors corresponding to each eigenvalue can then be found
using the original equation \(\mathbf{Av}=\lambda\mathbf{v}\).
>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1, 2], [3, 4]])
>>> la, v = linalg.eig(A)
>>> l1, l2 = la
>>> print(l1, l2)   # eigenvalues
(-0.372281323269+0j) (5.37228132327+0j)
>>> print(v[:, 0])  # first eigenvector
[-0.82456484  0.56576746]
>>> print(v[:, 1])  # second eigenvector
[-0.41597356 -0.90937671]
>>> print(np.sum(abs(v**2), axis=0))  # eigenvectors are unitary
[ 1.  1.]
>>> v1 = np.array(v[:, 0]).T
>>> print(linalg.norm(A.dot(v1) - l1*v1))  # check the computation
3.23682852457e-16
Singular value decomposition
Singular Value Decomposition (SVD) can be thought of as an extension of
the eigenvalue problem to matrices that are not square. Let
\(\mathbf{A}\) be an \(M\times N\) matrix with \(M\) and
\(N\) arbitrary. The matrices \(\mathbf{A}^{H}\mathbf{A}\) and
\(\mathbf{A}\mathbf{A}^{H}\) are square Hermitian matrices of
size \(N\times N\) and \(M\times M\), respectively. The eigenvalues
of these two matrices are real and non-negative (both matrices are
positive semidefinite). In addition, there are at most
\(\min\left(M,N\right)\) identical non-zero eigenvalues of
\(\mathbf{A}^{H}\mathbf{A}\) and \(\mathbf{A}\mathbf{A}^{H}.\)
Define these positive eigenvalues as \(\sigma_{i}^{2}.\) The
square-root of these are called singular values of \(\mathbf{A}.\)
The eigenvectors of \(\mathbf{A}^{H}\mathbf{A}\) are collected by
columns into an \(N\times N\) unitary matrix
\(\mathbf{V}\), while the eigenvectors of
\(\mathbf{A}\mathbf{A}^{H}\) are collected by columns in the
unitary matrix \(\mathbf{U}\). The singular values are collected
in an \(M\times N\) zero matrix
\(\mathbf{\boldsymbol{\Sigma}}\) with main diagonal entries set to
the singular values. Then
\[\mathbf{A=U}\boldsymbol{\Sigma}\mathbf{V}^{H}\]
is the singular-value decomposition of \(\mathbf{A}.\) Every
matrix has a singular value decomposition. Sometimes, the singular
values are called the spectrum of \(\mathbf{A}.\) The command
linalg.svd will return \(\mathbf{U}\) ,
\(\mathbf{V}^{H}\) , and \(\sigma_{i}\) as an array of the
singular values. To obtain the matrix \(\mathbf{\Sigma}\) use
linalg.diagsvd. The following example illustrates the use of
linalg.svd .
>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1,2,3],[4,5,6]])
>>> A
array([[1, 2, 3],
       [4, 5, 6]])
>>> M,N = A.shape
>>> U,s,Vh = linalg.svd(A)
>>> Sig = linalg.diagsvd(s,M,N)
>>> U
array([[-0.3863177 , -0.92236578],
       [-0.92236578,  0.3863177 ]])
>>> Sig
array([[ 9.508032  ,  0.        ,  0.        ],
       [ 0.        ,  0.77286964,  0.        ]])
>>> Vh
array([[-0.42866713, -0.56630692, -0.7039467 ],
       [ 0.80596391,  0.11238241, -0.58119908],
       [ 0.40824829, -0.81649658,  0.40824829]])
>>> U.dot(Sig.dot(Vh)) #check computation
array([[ 1.,  2.,  3.],
       [ 4.,  5.,  6.]])
LU decomposition
The LU decomposition finds a representation for the \(M\times N\)
matrix \(\mathbf{A}\) as
\[\mathbf{A}=\mathbf{P}\,\mathbf{L}\,\mathbf{U}\]
where \(\mathbf{P}\) is an \(M\times M\) permutation matrix (a
permutation of the rows of the identity matrix), \(\mathbf{L}\) is
an \(M\times K\) lower triangular or trapezoidal matrix
(\(K=\min\left(M,N\right)\)) with unit diagonal, and
\(\mathbf{U}\) is an upper triangular or trapezoidal matrix. The
SciPy command for this decomposition is linalg.lu .
Such a decomposition is often useful for solving many simultaneous
equations where the left-hand-side does not change but the right hand
side does. For example, suppose we are going to solve
\[\mathbf{A}\mathbf{x}_{i}=\mathbf{b}_{i}\]
for many different \(\mathbf{b}_{i}\) . The LU decomposition allows this to be written as
\[\mathbf{PLUx}_{i}=\mathbf{b}_{i}.\]
Because \(\mathbf{L}\) is lower-triangular, the equation can be
solved for \(\mathbf{U}\mathbf{x}_{i}\) and finally
\(\mathbf{x}_{i}\) very rapidly using forward- and
back-substitution. An initial time spent factoring \(\mathbf{A}\)
allows for very rapid solution of similar systems of equations in the
future. If the intent for performing LU decomposition is for solving
linear systems then the command linalg.lu_factor should be used
followed by repeated applications of the command
linalg.lu_solve to solve the system for each new
right-hand-side.
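A minimal sketch of this pattern, reusing the coefficient matrix from the linear-system example above (the second right-hand side is an arbitrary illustration):

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1., 3., 5.], [2., 5., 1.], [2., 3., 8.]])
>>> lu, piv = linalg.lu_factor(A)        # factor A once
>>> b1 = np.array([10., 8., 3.])
>>> b2 = np.array([5., 6., 7.])          # a second right-hand side
>>> x1 = linalg.lu_solve((lu, piv), b1)  # cheap solves reusing the factorization
>>> x2 = linalg.lu_solve((lu, piv), b2)
>>> np.allclose(A.dot(x1), b1) and np.allclose(A.dot(x2), b2)
True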
Cholesky decomposition
Cholesky decomposition is a special case of LU decomposition
applicable to Hermitian positive definite matrices. When
\(\mathbf{A}=\mathbf{A}^{H}\) and
\(\mathbf{x}^{H}\mathbf{Ax}\geq0\) for all \(\mathbf{x}\) ,
then decompositions of \(\mathbf{A}\) can be found so that
\[ \begin{eqnarray*} \mathbf{A} & = & \mathbf{U}^{H}\mathbf{U}\\ \mathbf{A} & = & \mathbf{L}\mathbf{L}^{H}\end{eqnarray*}\]

where \(\mathbf{L}\) is lower-triangular and \(\mathbf{U}\) is
upper triangular. Notice that \(\mathbf{L}=\mathbf{U}^{H}.\) The
command linalg.cholesky computes the Cholesky
factorization. For using the Cholesky factorization to solve systems
of equations, there are also linalg.cho_factor and
linalg.cho_solve routines that work similarly to their LU
decomposition counterparts.
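A minimal sketch (the small symmetric positive definite matrix below is an arbitrary example):

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[4., 2.], [2., 3.]])   # symmetric positive definite
>>> L = linalg.cholesky(A, lower=True)
>>> np.allclose(L.dot(L.T), A)           # A = L L^H
True
>>> c, low = linalg.cho_factor(A)
>>> x = linalg.cho_solve((c, low), np.array([1., 2.]))
>>> np.allclose(A.dot(x), [1., 2.])
True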
QR decomposition
The QR decomposition (sometimes called a polar decomposition) works
for any \(M\times N\) array and finds an \(M\times M\) unitary
matrix \(\mathbf{Q}\) and an \(M\times N\) upper-trapezoidal
matrix \(\mathbf{R}\) such that
\[\mathbf{A=QR}.\]
Notice that if the SVD of \(\mathbf{A}\) is known then the QR decomposition can be found
\[\mathbf{A}=\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{H}=\mathbf{QR}\]
implies that \(\mathbf{Q}=\mathbf{U}\) and
\(\mathbf{R}=\boldsymbol{\Sigma}\mathbf{V}^{H}.\) Note, however,
that in SciPy independent algorithms are used to find QR and SVD
decompositions. The command for QR decomposition is linalg.qr.
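A short sketch of linalg.qr applied to an arbitrary tall matrix:

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1., 2.], [3., 4.], [5., 6.]])
>>> Q, R = linalg.qr(A)
>>> Q.shape, R.shape                     # Q is M x M, R is M x N
((3, 3), (3, 2))
>>> np.allclose(Q.dot(R), A)
True
>>> np.allclose(Q.T.dot(Q), np.eye(3))   # Q is unitary (here orthogonal)
True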
Schur decomposition
For a square \(N\times N\) matrix, \(\mathbf{A}\) , the Schur
decomposition finds (not-necessarily unique) matrices
\(\mathbf{T}\) and \(\mathbf{Z}\) such that
\[\mathbf{A}=\mathbf{ZT}\mathbf{Z}^{H}\]
where \(\mathbf{Z}\) is a unitary matrix and \(\mathbf{T}\) is
either upper triangular or quasi-upper triangular, depending on whether
a real Schur form or a complex Schur form is requested. In the real
Schur form, both \(\mathbf{T}\) and \(\mathbf{Z}\) are
real-valued when \(\mathbf{A}\) is real-valued. When
\(\mathbf{A}\) is a real-valued matrix, the real Schur form is only
quasi-upper triangular because \(2\times2\) blocks extrude from
the main diagonal corresponding to any complex-valued
eigenvalues. The command linalg.schur finds the Schur
decomposition while the command linalg.rsf2csf converts
\(\mathbf{T}\) and \(\mathbf{Z}\) from a real Schur form to a
complex Schur form. The Schur form is especially useful in calculating
functions of matrices.
The following example illustrates the Schur decomposition:
>>> import numpy as np
>>> from scipy import linalg
>>> A = np.mat('[1 3 2; 1 4 5; 2 3 6]')
>>> T, Z = linalg.schur(A)
>>> T1, Z1 = linalg.schur(A, 'complex')
>>> T2, Z2 = linalg.rsf2csf(T, Z)
>>> T
array([[ 9.90012467,  1.78947961, -0.65498528],
       [ 0.        ,  0.54993766, -1.57754789],
       [ 0.        ,  0.51260928,  0.54993766]])
>>> abs(T1 - T2).max() > 0   # the two complex forms differ; may vary
True
>>> abs(Z1 - Z2).max() > 0   # may vary
True
>>> T, Z, T1, Z1, T2, Z2 = map(np.mat, (T, Z, T1, Z1, T2, Z2))
>>> np.allclose(A, Z*T*Z.H)    # but each one reconstructs A
True
>>> np.allclose(A, Z1*T1*Z1.H)
True
>>> np.allclose(A, Z2*T2*Z2.H)
True
Interpolative Decomposition
scipy.linalg.interpolative contains routines for computing the
interpolative decomposition (ID) of a matrix. For a matrix \(A
\in \mathbb{C}^{m \times n}\) of rank \(k \leq \min \{ m, n \}\)
this is a factorization

\[\begin{split}A \Pi =
\begin{bmatrix}
 A \Pi_{1} & A \Pi_{2}
\end{bmatrix} =
A \Pi_{1}
\begin{bmatrix}
 I & T
\end{bmatrix},\end{split}\]

where \(\Pi = [\Pi_{1}, \Pi_{2}]\) is a permutation matrix with
\(\Pi_{1} \in \{ 0, 1 \}^{n \times k}\), i.e., \(A \Pi_{2} =
A \Pi_{1} T\). This can equivalently be written as \(A = BP\),
where \(B = A \Pi_{1}\) and \(P = [I, T] \Pi^{\mathsf{T}}\)
are the skeleton and interpolation matrices, respectively.
See scipy.linalg.interpolative for more information.
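As an illustrative sketch (not part of the original example set), the tolerance-based interface can be exercised on a random low-rank test matrix:

>>> import numpy as np
>>> import scipy.linalg.interpolative as sli
>>> np.random.seed(0)
>>> A = np.random.randn(100, 10).dot(np.random.randn(10, 100))  # rank 10
>>> k, idx, proj = sli.interp_decomp(A, 1e-8)    # tolerance-based ID
>>> k                                            # detected numerical rank
10
>>> B = sli.reconstruct_skel_matrix(A, k, idx)   # skeleton: k columns of A
>>> P = sli.reconstruct_interp_matrix(idx, proj) # interpolation matrix
>>> np.allclose(A, B.dot(P))
True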
Matrix Functions
Consider the function \(f\left(x\right)\) with Taylor series expansion
\[f\left(x\right)=\sum_{k=0}^{\infty}\frac{f^{\left(k\right)}\left(0\right)}{k!}x^{k}.\]
A matrix function can be defined using this Taylor series for the
square matrix \(\mathbf{A}\) as
\[f\left(\mathbf{A}\right)=\sum_{k=0}^{\infty}\frac{f^{\left(k\right)}\left(0\right)}{k!}\mathbf{A}^{k}.\]
While this serves as a useful representation of a matrix function, it
is rarely the best way to calculate a matrix function.
Exponential and logarithm functions
The matrix exponential is one of the more common matrix functions. It
can be defined for square matrices as
\[e^{\mathbf{A}}=\sum_{k=0}^{\infty}\frac{1}{k!}\mathbf{A}^{k}.\]
The command linalg.expm3 uses this Taylor series definition to compute the matrix exponential.
Due to poor convergence properties it is not often used.
Another method to compute the matrix exponential is to find an
eigenvalue decomposition of \(\mathbf{A}\) :
\[\mathbf{A}=\mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^{-1}\]
and note that
\[e^{\mathbf{A}}=\mathbf{V}e^{\boldsymbol{\Lambda}}\mathbf{V}^{-1}\]
where the matrix exponential of the diagonal matrix \(\boldsymbol{\Lambda}\) is just the exponential of its elements. This method is implemented in linalg.expm2 .
The preferred method for implementing the matrix exponential is to use
scaling and a Padé approximation for \(e^{x}\) . This algorithm is
implemented as linalg.expm .
The matrix logarithm is defined as the inverse of the matrix
exponential:
\[\mathbf{A}\equiv\exp\left(\log\left(\mathbf{A}\right)\right).\]
The matrix logarithm can be obtained with linalg.logm .
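A minimal sketch of the expm/logm round trip (the matrices are arbitrary examples):

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1., 2.], [3., 4.]])
>>> expA = linalg.expm(A)
>>> np.allclose(linalg.logm(expA), A)    # logm inverts expm here
True
>>> D = np.diag([1., 2.])
>>> np.allclose(linalg.expm(D), np.diag(np.exp([1., 2.])))  # diagonal case
True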
Trigonometric functions
The trigonometric functions \(\sin\) , \(\cos\) , and
\(\tan\) are implemented for matrices in linalg.sinm,
linalg.cosm, and linalg.tanm, respectively. The matrix
sine and cosine can be defined using Euler’s identity as
\[ \begin{eqnarray*} \sin\left(\mathbf{A}\right) & = & \frac{e^{j\mathbf{A}}-e^{-j\mathbf{A}}}{2j}\\ \cos\left(\mathbf{A}\right) & = & \frac{e^{j\mathbf{A}}+e^{-j\mathbf{A}}}{2}.\end{eqnarray*}\]

The tangent is
\[\tan\left(x\right)=\frac{\sin\left(x\right)}{\cos\left(x\right)}=\left[\cos\left(x\right)\right]^{-1}\sin\left(x\right)\]
and so the matrix tangent is defined as
\[\left[\cos\left(\mathbf{A}\right)\right]^{-1}\sin\left(\mathbf{A}\right).\]
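These identities can be checked numerically; a minimal sketch with an arbitrary matrix:

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1., 2.], [3., 4.]])
>>> S, C = linalg.sinm(A), linalg.cosm(A)
>>> np.allclose(S.dot(S) + C.dot(C), np.eye(2))  # sin^2(A) + cos^2(A) = I
True
>>> np.allclose(linalg.tanm(A), linalg.inv(C).dot(S))
True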
Hyperbolic trigonometric functions
The hyperbolic trigonometric functions \(\sinh\) , \(\cosh\) ,
and \(\tanh\) can also be defined for matrices using the familiar
definitions:
\[ \begin{eqnarray*} \sinh\left(\mathbf{A}\right) & = & \frac{e^{\mathbf{A}}-e^{-\mathbf{A}}}{2}\\ \cosh\left(\mathbf{A}\right) & = & \frac{e^{\mathbf{A}}+e^{-\mathbf{A}}}{2}\\ \tanh\left(\mathbf{A}\right) & = & \left[\cosh\left(\mathbf{A}\right)\right]^{-1}\sinh\left(\mathbf{A}\right).\end{eqnarray*}\]

These matrix functions can be found using linalg.sinhm,
linalg.coshm, and linalg.tanhm.
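Again, a quick numerical check against the definitions (arbitrary example matrix):

>>> import numpy as np
>>> from scipy import linalg
>>> A = np.array([[1., 2.], [3., 4.]])
>>> E, Em = linalg.expm(A), linalg.expm(-A)
>>> np.allclose(linalg.sinhm(A), (E - Em) / 2)
True
>>> np.allclose(linalg.coshm(A), (E + Em) / 2)
True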
Arbitrary function
Finally, any arbitrary function that takes one complex number and
returns a complex number can be called as a matrix function using the
command linalg.funm. This command takes the matrix and an
arbitrary Python function. It then implements an algorithm from Golub
and Van Loan’s book “Matrix Computations” to compute the function
applied to the matrix using a Schur decomposition. Note that the
function needs to accept complex numbers as input in order to work
with this algorithm. For example, the following code computes the
zeroth-order Bessel function applied to a matrix.
>>> import numpy as np
>>> from scipy import special, random, linalg
>>> np.random.seed(1234)
>>> A = random.rand(3, 3)
>>> B = linalg.funm(A, lambda x: special.jv(0, x))
>>> A
array([[ 0.19151945,  0.62210877,  0.43772774],
       [ 0.78535858,  0.77997581,  0.27259261],
       [ 0.27646426,  0.80187218,  0.95813935]])
>>> B
array([[ 0.86511146, -0.19676526, -0.13856748],
       [-0.17479869,  0.7259118 , -0.16606258],
       [-0.19212044, -0.32052767,  0.73590704]])
>>> linalg.eigvals(A)
array([ 1.73881510+0.j, -0.20270676+0.j,  0.39352627+0.j])
>>> special.jv(0, linalg.eigvals(A))
array([ 0.37551908+0.j,  0.98975384+0.j,  0.96165739+0.j])
>>> linalg.eigvals(B)
array([ 0.37551908+0.j,  0.98975384+0.j,  0.96165739+0.j])
Note how, by virtue of how matrix analytic functions are defined,
the Bessel function has acted on the matrix eigenvalues.
Special matrices
SciPy and NumPy provide several functions for creating special matrices
that are frequently used in engineering and science.
block diagonal (scipy.linalg.block_diag): Create a block diagonal matrix from the provided arrays.
circulant (scipy.linalg.circulant): Construct a circulant matrix.
companion (scipy.linalg.companion): Create a companion matrix.
Hadamard (scipy.linalg.hadamard): Construct a Hadamard matrix.
Hankel (scipy.linalg.hankel): Construct a Hankel matrix.
Hilbert (scipy.linalg.hilbert): Construct a Hilbert matrix.
Inverse Hilbert (scipy.linalg.invhilbert): Construct the inverse of a Hilbert matrix.
Leslie (scipy.linalg.leslie): Create a Leslie matrix.
Pascal (scipy.linalg.pascal): Create a Pascal matrix.
Toeplitz (scipy.linalg.toeplitz): Construct a Toeplitz matrix.
Van der Monde (numpy.vander): Generate a Van der Monde matrix.
For examples of the use of these functions, see their respective docstrings.
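For instance, a brief sketch of two of these constructors (the inputs are arbitrary):

>>> from scipy import linalg
>>> linalg.toeplitz([1, 2, 3], [1, 4, 5])   # first column, first row
array([[1, 4, 5],
       [2, 1, 4],
       [3, 2, 1]])
>>> linalg.hankel([1, 2, 3])                # last row defaults to zeros
array([[1, 2, 3],
       [2, 3, 0],
       [3, 0, 0]])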
