Given a square matrix $A$, the matrix exponential is defined
as~\cite{citationneeded}
\begin{equation}
\label{eq:matrix-exponential}
e^{A} = \sum_{n=0}^\infty \frac{A^n}{n!}.
\end{equation}
The CRAM approximation to $e^{-x}$ can be applied to approximate the matrix
exponential when the matrix $A$ has eigenvalues on or near the negative real
axis~\cite{pusa2010computing}. This is because the exponential of a matrix is
determined by the exponentials of its eigenvalues, so an approximation to
$e^{-x}$ that is accurate on or near the negative real axis is accurate on
the spectrum of $A$.
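For intuition, the following sketch (illustrative only, not transmutagen
code) checks this eigenvalue connection on a small diagonalizable matrix
using NumPy and SciPy:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# For diagonalizable A = V diag(lam) V^-1, e^A = V diag(exp(lam)) V^-1,
# so an approximation of exp that is accurate near the eigenvalues of A
# is accurate for e^A itself.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])   # eigenvalues -2 and -3
lam, V = np.linalg.eig(A)
expA = V @ np.diag(np.exp(lam)) @ np.linalg.inv(V)
assert np.allclose(expA, expm(A))
\end{verbatim}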
% SymPy to compute Halphen's constant
% nsolve([1/s - exp(-pi*elliptic_k(1 - c**2)/elliptic_k(c**2)), elliptic_k(c**2) - 2*elliptic_e(c**2)], [c, s], [.9, 9])
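The comment above records the system used to compute Halphen's constant; a
self-contained, runnable form of it (same equations, our variable setup) is:
\begin{verbatim}
from sympy import symbols, nsolve, exp, pi, elliptic_k, elliptic_e

c, s = symbols('c s')
# s converges to Halphen's constant, approximately 9.28903, the rate
# constant in the O(s**-k) convergence of the best rational
# approximation to exp(-x) on [0, oo).
c_val, s_val = nsolve(
    [1/s - exp(-pi*elliptic_k(1 - c**2)/elliptic_k(c**2)),
     elliptic_k(c**2) - 2*elliptic_e(c**2)],
    [c, s], [0.9, 9])
\end{verbatim}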
The naive method of computing the matrix exponential $e^A$ via
$\hat{r}_{k, k}(t)=\frac{p_kt^k + \cdots + p_0}{q_kt^k + \cdots + q_1t +
1}=\frac{p}{q}$ is to compute
$\left .(q_kA^k + \cdots + q_1A + I)\middle\backslash(p_kA^k + \cdots + p_0 I)\right.$, where
$\backslash$ denotes a matrix solve. However, this method is numerically
unstable: the coefficients $p_i, q_i$ are quite small (the smallest on the
order of $10^{-2k}$), and taking powers of $A$ is itself unstable. We also
tried applying the Horner scheme to the numerator and denominator,
$\hat{r}_{k, k}(t)=\frac{p_0 + t(p_1 + t(p_2 + \cdots + t(p_{k-1} + tp_k)))}{1 +
t(q_1 + t(q_2 + \cdots + t(q_{k-1} + tq_k)))}$, but found this to be
numerically unstable as well.
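For reference, a dense sketch of this naive formulation (the coefficient
lists are placeholders for the actual CRAM coefficients; this is the
unstable variant, shown only for comparison):
\begin{verbatim}
import numpy as np

def cram_naive(A, b, p, q):
    # Evaluate r(A) b = solve(q(A), p(A) @ b) with coefficient lists
    # p = [p_0, ..., p_k] and q = [1, q_1, ..., q_k].  The true CRAM
    # coefficients are as small as ~10**(-2*k), so explicitly forming
    # p(A) and q(A) from powers of A loses precision badly.
    n = A.shape[0]
    P = np.zeros((n, n))
    Q = np.zeros((n, n))
    Ai = np.eye(n)            # running power A**i
    for p_i, q_i in zip(p, q):
        P += p_i * Ai
        Q += q_i * Ai
        Ai = Ai @ A
    return np.linalg.solve(Q, P @ b)
\end{verbatim}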
A better method is to perform a partial fraction decomposition on $\hat{r}_{k,
k}(t)$~\cite{pusa2010computing}:
\begin{equation}
\label{eq:part-frac}
\hat{r}_{k, k}(t) = \alpha_0 + \sum_{i=1}^k \frac{\alpha_i}{t - \theta_i},
\end{equation}
where $\theta_i$ are the roots of $q$, which are all
nonreal and have multiplicity 1, $\alpha_i$ is the residue of
$\hat{r}_{k, k}(t)$ at $t=\theta_i$, and $\alpha_0$ is the residue at
infinity. When $k$ is even, the roots $\theta_i$ all come in complex conjugate
pairs, so the sum can be reduced to
\begin{equation}
\hat{r}_{k, k}(t) = \alpha_0 + \mathrm{Re}\left(\sum_{i=1}^{k/2}
\frac{\alpha_i}{t - \theta_i}\right).
\end{equation}
Thus, the approximation to $e^Ab$ can be computed via
\begin{equation}
\label{eq:part-frac-matrix-re}
\hat{r}_{k, k}(A)b = \alpha_0b + \mathrm{Re}\left(\sum_{i=1}^{k/2} \left. (A -
\theta_i I)\middle\backslash(\alpha_i b) \right.\right).
\end{equation}
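A dense sketch of this evaluation (function and variable names are ours; in
practice $A$ is sparse and each shifted solve would use a sparse
factorization):
\begin{verbatim}
import numpy as np

def cram_apply(A, b, alpha0, alpha, theta):
    # Evaluate r(A) b via the partial-fraction form of
    # eq. (part-frac-matrix-re): alpha and theta hold one member of
    # each conjugate pair (k/2 residues and poles).
    n = A.shape[0]
    I = np.eye(n)
    acc = np.zeros(n, dtype=complex)
    for a_i, t_i in zip(alpha, theta):
        acc += np.linalg.solve(A - t_i * I, a_i * b)
    return alpha0 * b + acc.real
\end{verbatim}
Each term requires one complex solve with $A - \theta_i I$, which differs
from $A$ only on the diagonal, so no matrix powers or products appear.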
We also attempted to expand the real part of each term via
\begin{equation}
\mathrm{Re}\left(\frac{\alpha_i}{t - \theta_i}\right) = \frac{\mathrm{Re}{(\alpha_{i})}t - \mathrm{Re}{(\alpha_{i})} \mathrm{Re}{(\theta_{i})} - \mathrm{Im}{(\alpha_{i})} \mathrm{Im}{(\theta_{i})}}{\left(t - \mathrm{Re}{(\theta_{i})}\right)^{2} + \mathrm{Im}{(\theta_{i})}^{2}},
\end{equation}
and to compute the real part of each term in Equation~\eqref{eq:part-frac-matrix-re} as
\begin{equation}
\left.\left(\left(A - \mathrm{Re}(\theta_{i})I\right)^{2} +
\mathrm{Im}(\theta_{i})^{2}I\right)\middle\backslash
\left(\left(\mathrm{Re}(\alpha_{i})A - \left(\mathrm{Re}(\alpha_{i})
\mathrm{Re}(\theta_{i}) + \mathrm{Im}(\alpha_{i})
\mathrm{Im}(\theta_{i})\right)I\right)b\right)\right..
\end{equation}
However, this was found to be numerically unstable, most likely because of the
additional matrix power. % TODO: verify this
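For completeness, a sketch of this rejected variant (same placeholder
conventions as above), which makes the extra matrix power explicit:
\begin{verbatim}
import numpy as np

def cram_apply_expanded(A, b, alpha0, alpha, theta):
    # Rejected variant: each conjugate-pair term in real arithmetic.
    # The solve matrix contains (A - Re(theta_i) I)**2; this extra
    # matrix power is the suspected source of the instability.
    n = A.shape[0]
    I = np.eye(n)
    y = alpha0 * b
    for a_i, t_i in zip(alpha, theta):
        S = A - t_i.real * I
        M = S @ S + (t_i.imag ** 2) * I
        rhs = (a_i.real * A
               - (a_i.real * t_i.real + a_i.imag * t_i.imag) * I) @ b
        y += np.linalg.solve(M, rhs)
    return y
\end{verbatim}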
In transmutagen, the roots $\theta_i$ are computed with
\texttt{sympy.\allowbreak{}nsolve} using Newton's method. The residues
$\alpha_i$ are computed from the standard formulas
$\alpha_i = \left.\frac{p(t)}{q(t)/(t - \theta_i)}\right|_{t=\theta_i}$ and
$\alpha_0=\frac{p_k}{q_k}$.
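A sketch of this computation (\texttt{p}, \texttt{q}, and the initial
guesses are placeholders; transmutagen's actual implementation differs in
details such as working precision):
\begin{verbatim}
from sympy import symbols, nsolve, Poly

t = symbols('t')

def poles_and_residues(p, q, guesses):
    # p, q: SymPy expressions for the numerator and denominator
    # (q normalized to constant term 1); guesses: one complex
    # starting point per root of q.
    theta = [nsolve(q, t, g) for g in guesses]
    # alpha_i = p / (q/(t - theta_i)) at t = theta_i; for a simple
    # pole this equals p(theta_i) / q'(theta_i).
    dq = q.diff(t)
    alpha = [p.subs(t, th) / dq.subs(t, th) for th in theta]
    alpha0 = Poly(p, t).LC() / Poly(q, t).LC()  # residue at infinity
    return theta, alpha, alpha0
\end{verbatim}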