In mathematics, a Carleman matrix is a matrix used to convert function composition into matrix multiplication. It is often used in iteration theory to find the continuous iteration of functions which cannot be iterated by pattern recognition alone. Other uses of Carleman matrices occur in the theory of probability generating functions and Markov chains.
The Carleman matrix of an infinitely differentiable function $f(x)$ is defined as:
![{\displaystyle M[f]_{jk}={\frac {1}{k!}}\left[{\frac {d^{k}}{dx^{k}}}(f(x))^{j}\right]_{x=0}~,}](https://wikimedia.org/api/rest_v1/media/math/render/svg/f4403f8a3cf6059a61c85b4e2467c4a85f54b92e)
so as to satisfy the (Taylor series) equation:
![{\displaystyle (f(x))^{j}=\sum _{k=0}^{\infty }M[f]_{jk}x^{k}.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/03463b6cd637cee2f67b82f27d2090ea727f8911)
For instance, the computation of $f(x)$ by
![{\displaystyle f(x)=\sum _{k=0}^{\infty }M[f]_{1,k}x^{k}.~}](https://wikimedia.org/api/rest_v1/media/math/render/svg/dbf258551c796e545bf7fb64d108e36a767bb4de)
simply amounts to the dot product of row 1 of $M[f]$ with the column vector $\left[1,x,x^{2},x^{3},\ldots \right]^{\tau }$.
The entries of $M[f]$ in the next row give the 2nd power of $f(x)$:
![{\displaystyle f(x)^{2}=\sum _{k=0}^{\infty }M[f]_{2,k}x^{k}~,}](https://wikimedia.org/api/rest_v1/media/math/render/svg/de8b11d0f8a91e4c2ced42170a60b0caf19a7384)
and also, in order to have the zeroth power of $f(x)$ in $M[f]$, we adopt the row 0, containing zeros everywhere except the first position, such that
![{\displaystyle f(x)^{0}=1=\sum _{k=0}^{\infty }M[f]_{0,k}x^{k}=1+\sum _{k=1}^{\infty }0\cdot x^{k}~.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/944762d10523027f91b1d573c7bba50b227f4dd1)
Thus, the dot product of $M[f]$ with the column vector $\left[1,x,x^{2},x^{3},\ldots \right]^{\tau }$ yields the column vector $\left[1,f(x),(f(x))^{2},(f(x))^{3},\ldots \right]^{\tau }$, i.e.,
![{\displaystyle M[f]{\begin{bmatrix}1\\x\\x^{2}\\x^{3}\\\vdots \end{bmatrix}}={\begin{bmatrix}1\\f(x)\\(f(x))^{2}\\(f(x))^{3}\\\vdots \end{bmatrix}}.}](https://wikimedia.org/api/rest_v1/media/math/render/svg/6e72b86563449420d361a17b3b9bed8bd3ffd64b)
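This matrix-vector relationship can be sketched numerically. The snippet below (not from the original article) builds a truncated Carleman matrix from Taylor coefficients with SymPy; the helper name `carleman`, the test function $\sin(x)$, and the truncation size $N=8$ are all illustrative choices:

```python
import sympy as sp

x = sp.symbols('x')

def carleman(f, N):
    """N x N truncation of M[f]: entry (j, k) is the coefficient of x^k
    in the Taylor expansion of (f(x))^j around 0."""
    M = sp.zeros(N, N)
    for j in range(N):
        p = sp.series(f**j, x, 0, N).removeO()
        for k in range(N):
            M[j, k] = p.coeff(x, k)
    return M

N = 8
M = carleman(sp.sin(x), N)

# Row 1 holds the Taylor coefficients of sin(x) itself.
assert M[1, 3] == sp.Rational(-1, 6)

# M[f] . [1, x, x^2, ...]^T reproduces [1, f(x), f(x)^2, ...]^T
# up to the truncation order; check row 2 numerically at x0 = 1/10.
x0 = sp.Rational(1, 10)
v = sp.Matrix([x0**k for k in range(N)])
w = M * v
err = abs(float(w[2]) - float(sp.sin(x0))**2)
assert err < 1e-9
```

Truncation introduces an error of order $x^{N}$ in each component, which is why the comparison is made at a small argument rather than exactly.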
A generalization of the Carleman matrix of a function can be defined around any point, such as:
![{\displaystyle M[f]_{x_{0}}=M_{x}[x-x_{0}]M[f]M_{x}[x+x_{0}]}](https://wikimedia.org/api/rest_v1/media/math/render/svg/60545c7d7eebc706af5c420424fc18ead0ffe7cc)
or $M[f]_{x_{0}}=M[g]$, where $g(x)=f(x+x_{0})-x_{0}$. This allows the matrix power to be related as:
![{\displaystyle (M[f]_{x_{0}})^{n}=M_{x}[x-x_{0}]M[f]^{n}M_{x}[x+x_{0}]}](https://wikimedia.org/api/rest_v1/media/math/render/svg/65827d744752dfa269cee519bb7d75f49f94575c)
- Another way to generalize it even further is to think about a general series in the following way:
- Let $h(x)=\sum _{n}c_{n}(h)\cdot \psi _{n}(x)$ be a series approximation of $h(x)$, where $\{\psi _{n}(x)\}_{n}$ is a basis of the space containing $h(x)$.
- Assuming that $\{\psi _{n}(x)\}_{n}$ is also a basis for $h(f(x))$, we can define $G[f]_{mn}$ such that $\psi _{m}(f(x))=\sum _{n}G[f]_{mn}\psi _{n}(x)$; therefore we have $h(f(x))=\sum _{m}c_{m}(h)\cdot \psi _{m}(f(x))=\sum _{n}\left(\sum _{m}c_{m}(h)G[f]_{mn}\right)\psi _{n}(x)$. Now we can prove that $G[g\circ f]=G[g]\cdot G[f]$, if we assume that $\{\psi _{n}(x)\}_{n}$ is also a basis for $g(x)$ and $g(f(x))$.
- Let $g(x)$ be such that $\psi _{l}(g(x))=\sum _{m}G[g]_{lm}\psi _{m}(x)$, where $G[g]_{lm}$ is the corresponding matrix for $g$.
- Now $\psi _{l}(g(f(x)))=\sum _{m}G[g]_{lm}\psi _{m}(f(x))=\sum _{m}\sum _{n}G[g]_{lm}G[f]_{mn}\psi _{n}(x)=\sum _{n}\left(\sum _{m}G[g]_{lm}G[f]_{mn}\right)\psi _{n}(x)=\sum _{n}G[g\circ f]_{ln}\psi _{n}(x)~.$
- Comparing the first and the last term, and from $\{\psi _{n}(x)\}_{n}$ being a basis for $h(x)$, $g(x)$ and $g(f(x))$,
it follows that ![{\displaystyle G[g\circ f]=\sum _{m}G[g]_{lm}G[f]_{mn}=G[g]\cdot G[f]}](https://wikimedia.org/api/rest_v1/media/math/render/svg/da354f5e5f9a1bba6f9fa933fa3a7249dd514e0d)
Rederive (Taylor) Carleman Matrix
If we set $\psi _{n}(x)=x^{n}$, we have the Carleman matrix. Because
![{\displaystyle h(x)=\sum _{n}c_{n}(h)\cdot \psi _{n}(x)=\sum _{n}c_{n}(h)\cdot x^{n}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/1d6b4ddda94ca34c5213143cb6ee850f40f095d3)
then we know that the $n$-th coefficient $c_{n}(h)$ must be the $n$-th coefficient of the Taylor series of $h$, that is, $c_{n}(h)={\frac {1}{n!}}h^{(n)}(0)$. Therefore $G[f]_{mn}={\frac {1}{n!}}\left[{\frac {d^{n}}{dx^{n}}}(f(x))^{m}\right]_{x=0}$, which is the Carleman matrix given above. (It is important to note that this is not an orthonormal basis.)
Carleman Matrix For Orthonormal Basis
If $\{\psi _{n}(x)\}_{n}$ is an orthonormal basis for a Hilbert space with a defined inner product $\langle f,g\rangle $, then $c_{n}(h)$ will be $\langle h(x),\psi _{n}(x)\rangle $, and $G[f]_{mn}=\langle \psi _{m}(f(x)),\psi _{n}(x)\rangle $.
Carleman Matrix for Fourier Series
If $\psi _{n}(x)=e^{inx}$, we have the analogue for Fourier series. Let $\hat{c}_{n}(h)$ and $\hat{G}[f]$ represent the Carleman coefficient and matrix in the Fourier basis. Because the basis is orthogonal, we have $\hat{c}_{n}(h)={\frac {1}{2\pi }}\int _{-\pi }^{\pi }h(x)\,e^{-inx}\,dx$. Then $\hat{G}[f]_{mn}={\frac {1}{2\pi }}\int _{-\pi }^{\pi }\psi _{m}(f(x))\,e^{-inx}\,dx$, which is
![{\displaystyle {\hat {G}}[f]_{mn}={\cfrac {1}{2\pi }}\int _{-\pi }^{\pi }\displaystyle e^{imf(x)}\cdot e^{-inx}dx}](https://wikimedia.org/api/rest_v1/media/math/render/svg/973e624fc5178f701baeff9b96687a3bbb0f4a1e)
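As an illustrative sketch (not from the original article), this integral can be approximated by a Riemann sum. For the shift map $f(x)=x+c$, the Fourier Carleman matrix should come out diagonal, $\hat{G}[f]_{mn}=e^{imc}\delta _{mn}$; the helper name `fourier_carleman` and the grid size are arbitrary:

```python
import numpy as np

def fourier_carleman(f, size, npts=512):
    """Approximate G_hat[f]_{mn} = (1/2pi) * integral of
    e^{i m f(x)} e^{-i n x} over [-pi, pi), via a uniform Riemann sum."""
    xs = np.linspace(-np.pi, np.pi, npts, endpoint=False)
    m = np.arange(size).reshape(-1, 1, 1)
    n = np.arange(size).reshape(1, -1, 1)
    integrand = np.exp(1j * m * f(xs)) * np.exp(-1j * n * xs)
    return integrand.mean(axis=-1)  # mean over the grid = (1/2pi) * integral

c = 0.7
G = fourier_carleman(lambda t: t + c, 6)

# A pure shift is diagonal in the Fourier basis: G_mn = e^{i m c} * delta_mn.
expected = np.diag(np.exp(1j * c * np.arange(6)))
assert np.allclose(G, expected)
```

The uniform Riemann sum is exact here (up to floating point) because the integrand is a pure complex exponential of integer frequency.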
Carleman matrices satisfy the fundamental relationship
![{\displaystyle M[f\circ g]=M[f]M[g]~,}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8e00d6d5242cbe33b61cac226a4616334e0c3764)
which makes the Carleman matrix $M$ a (direct) representation of $f$. Here the term $f\circ g$ denotes the composition of functions $f(g(x))$.
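The composition law can be verified on finite truncations. A sketch (helper name `carleman` and the polynomial choices are illustrative): when $g(0)=0$, the $N\times N$ truncations satisfy $M[f\circ g]=M[f]\,M[g]$ exactly, because $(g(x))^{m}$ contributes nothing below order $x^{m}$:

```python
import sympy as sp

x = sp.symbols('x')

def carleman(f, N):
    """N x N truncation of M[f]_{jk} = coefficient of x^k in (f(x))^j."""
    return sp.Matrix(N, N,
        lambda j, k: sp.series(f**j, x, 0, N).removeO().coeff(x, k))

N = 6
f = x + x**2       # arbitrary formal series with f(0) = 0
g = 2*x + x**3     # g(0) = 0 keeps the truncation exact

lhs = carleman(f.subs(x, g).expand(), N)   # M[f o g]
rhs = carleman(f, N) * carleman(g, N)      # M[f] * M[g]
assert lhs == rhs
```

For functions with $g(0)\neq 0$ the identity still holds for the infinite matrices, but finite truncations only agree approximately.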
Other properties include:

- $M[f^{n}]=M[f]^{n}$, where $f^{n}$ is an iterated function, and
- $M[f^{-1}]=M[f]^{-1}$, where $f^{-1}$ is the inverse function (if the Carleman matrix is invertible).
The Carleman matrix of a constant is:
![{\displaystyle M[a]=\left({\begin{array}{cccc}1&0&0&\cdots \\a&0&0&\cdots \\a^{2}&0&0&\cdots \\\vdots &\vdots &\vdots &\ddots \end{array}}\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/27037f56eb81c02bca3637d7fa1a64a7acf69290)
The Carleman matrix of the identity function is:
![{\displaystyle M_{x}[x]=\left({\begin{array}{cccc}1&0&0&\cdots \\0&1&0&\cdots \\0&0&1&\cdots \\\vdots &\vdots &\vdots &\ddots \end{array}}\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/060db1559fd634af4732397f145102b847ee28d0)
The Carleman matrix of a constant addition is:
![{\displaystyle M_{x}[a+x]=\left({\begin{array}{cccc}1&0&0&\cdots \\a&1&0&\cdots \\a^{2}&2a&1&\cdots \\\vdots &\vdots &\vdots &\ddots \end{array}}\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/4f3fea6f7f68d36e2bd565f790c580d6cf3638c7)
The Carleman matrix of the successor function is equivalent to the Binomial coefficient:
![{\displaystyle M_{x}[1+x]=\left({\begin{array}{ccccc}1&0&0&0&\cdots \\1&1&0&0&\cdots \\1&2&1&0&\cdots \\1&3&3&1&\cdots \\\vdots &\vdots &\vdots &\vdots &\ddots \end{array}}\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/dfd6f916a9d60728ea980fae6cac9e59c61c578a)
![{\displaystyle M_{x}[1+x]_{jk}={\binom {j}{k}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/a24191e059d1693cc283f0e99fb9de0c8f473b0b)
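Since $M[f^{n}]=M[f]^{n}$, iterating the successor function amounts to taking powers of this Pascal matrix, and $M_{x}[n+x]_{jk}={\binom {j}{k}}n^{j-k}$. A stdlib-only sketch (the helper names are arbitrary); the truncation is exact here because the matrix is lower triangular:

```python
from math import comb

N = 6

def matmul(A, B):
    """Plain dense matrix product on nested lists."""
    return [[sum(A[i][m] * B[m][j] for m in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Pascal (lower-triangular) matrix: M_x[1 + x]_{jk} = C(j, k).
P = [[comb(j, k) for k in range(N)] for j in range(N)]

# Squaring it iterates x -> 1 + x twice: M[1+x]^2 = M[2+x],
# whose entries are C(j, k) * 2^(j - k).
P2 = matmul(P, P)
expected = [[comb(j, k) * 2**(j - k) for k in range(N)] for j in range(N)]
assert P2 == expected
```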
The Carleman matrix of the logarithm is related to the (signed) Stirling numbers of the first kind scaled by factorials:
![{\displaystyle M_{x}[\log(1+x)]=\left({\begin{array}{cccccc}1&0&0&0&0&\cdots \\0&1&-{\frac {1}{2}}&{\frac {1}{3}}&-{\frac {1}{4}}&\cdots \\0&0&1&-1&{\frac {11}{12}}&\cdots \\0&0&0&1&-{\frac {3}{2}}&\cdots \\0&0&0&0&1&\cdots \\\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \end{array}}\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/61170871a71104a2460fbca6ffedd2a2d18d37a5)
![{\displaystyle M_{x}[\log(1+x)]_{jk}=s(k,j){\frac {j!}{k!}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/08aa68c6365aea61ce9dfca0b36f57ce265f7aed)
The Carleman matrix of the logarithm is related to the (unsigned) Stirling numbers of the first kind scaled by factorials:
![{\displaystyle M_{x}[-\log(1-x)]=\left({\begin{array}{cccccc}1&0&0&0&0&\cdots \\0&1&{\frac {1}{2}}&{\frac {1}{3}}&{\frac {1}{4}}&\cdots \\0&0&1&1&{\frac {11}{12}}&\cdots \\0&0&0&1&{\frac {3}{2}}&\cdots \\0&0&0&0&1&\cdots \\\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \end{array}}\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/46a8b609d493fb9b990b72713526cb0e214639db)
![{\displaystyle M_{x}[-\log(1-x)]_{jk}=|s(k,j)|{\frac {j!}{k!}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/cc7dc6e5117d3f9472837983f4c5cf3fb0ac7f0d)
The Carleman matrix of the exponential function is related to the Stirling numbers of the second kind scaled by factorials:
![{\displaystyle M_{x}[\exp(x)-1]=\left({\begin{array}{cccccc}1&0&0&0&0&\cdots \\0&1&{\frac {1}{2}}&{\frac {1}{6}}&{\frac {1}{24}}&\cdots \\0&0&1&1&{\frac {7}{12}}&\cdots \\0&0&0&1&{\frac {3}{2}}&\cdots \\0&0&0&0&1&\cdots \\\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \end{array}}\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/60dcc5bc8118a4d12edab88fe4d1b55bdf45750b)
![{\displaystyle M_{x}[\exp(x)-1]_{jk}=S(k,j){\frac {j!}{k!}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/88f9a96182d5979c945be545163e45b0ce248e6d)
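As a hedged sanity check (not part of the original article), the entry formula $M_{x}[\exp(x)-1]_{jk}=S(k,j)\,{\frac {j!}{k!}}$ can be compared against raw Taylor coefficients using SymPy's `stirling` (second kind by default); the helper name `entry` and the range $N=6$ are arbitrary:

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')
N = 6

def entry(j, k):
    """M_x[exp(x) - 1]_{jk}: coefficient of x^k in (exp(x) - 1)^j."""
    return sp.series((sp.exp(x) - 1)**j, x, 0, N).removeO().coeff(x, k)

# Collect any (j, k) where the Taylor coefficient disagrees with
# S(k, j) * j! / k!; the list should stay empty.
mismatches = [(j, k) for j in range(N) for k in range(N)
              if entry(j, k) != stirling(k, j) * sp.factorial(j) / sp.factorial(k)]
assert mismatches == []
```

The same pattern, with `stirling(k, j, kind=1, signed=True)` and $\log(1+x)$, checks the first-kind formula above.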
The Carleman matrix of exponential functions is:
![{\displaystyle M_{x}[\exp(ax)]=\left({\begin{array}{ccccc}1&0&0&0&\cdots \\1&a&{\frac {a^{2}}{2}}&{\frac {a^{3}}{6}}&\cdots \\1&2a&2a^{2}&{\frac {4a^{3}}{3}}&\cdots \\1&3a&{\frac {9a^{2}}{2}}&{\frac {9a^{3}}{2}}&\cdots \\\vdots &\vdots &\vdots &\vdots &\ddots \end{array}}\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/ba38bbc8a6ed091fce516b7d31e2fcf28d0cf561)
![{\displaystyle M_{x}[\exp(ax)]_{jk}={\frac {(ja)^{k}}{k!}}}](https://wikimedia.org/api/rest_v1/media/math/render/svg/b2ee0d7ea2fd8ded162198b085659a006b84cbfc)
The Carleman matrix of a constant multiple is:
![{\displaystyle M_{x}[cx]=\left({\begin{array}{cccc}1&0&0&\cdots \\0&c&0&\cdots \\0&0&c^{2}&\cdots \\\vdots &\vdots &\vdots &\ddots \end{array}}\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/d29dc047a6112f5ff455b1dffd13a54b90102b18)
The Carleman matrix of a linear function is:
![{\displaystyle M_{x}[a+cx]=\left({\begin{array}{cccc}1&0&0&\cdots \\a&c&0&\cdots \\a^{2}&2ac&c^{2}&\cdots \\\vdots &\vdots &\vdots &\ddots \end{array}}\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/9a3518b703f7d0701200e12ef02d74528bb03450)
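By the binomial theorem, this linear-function matrix has the closed form $M_{x}[a+cx]_{jk}={\binom {j}{k}}a^{j-k}c^{k}$, and because it is lower triangular, finite truncations compose exactly. A stdlib-only sketch (helper names and the sample coefficients are arbitrary):

```python
from math import comb

N = 5

def M_affine(a, c):
    """Carleman matrix of a + c*x: (a + c x)^j = sum_k C(j,k) a^(j-k) (c x)^k."""
    return [[comb(j, k) * a**(j - k) * c**k for k in range(N)]
            for j in range(N)]

def matmul(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(N)) for j in range(N)]
            for i in range(N)]

a, c, b, d = 1, 2, 3, 5
# f(x) = a + c x, g(x) = b + d x, so f(g(x)) = (a + c*b) + (c*d) x,
# and M[f o g] = M[f] * M[g].
prod = matmul(M_affine(a, c), M_affine(b, d))
composed = M_affine(a + c * b, c * d)
assert prod == composed
```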
The Carleman matrix of a function $f(x)=\sum _{k=1}^{\infty }f_{k}x^{k}$ is:
![{\displaystyle M[f]=\left({\begin{array}{cccc}1&0&0&\cdots \\0&f_{1}&f_{2}&\cdots \\0&0&f_{1}^{2}&\cdots \\\vdots &\vdots &\vdots &\ddots \end{array}}\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/fa0961a60884bc09a9405dea90f500a8747aea25)
The Carleman matrix of a function $f(x)=\sum _{k=0}^{\infty }f_{k}x^{k}$ is:
![{\displaystyle M[f]=\left({\begin{array}{cccc}1&0&0&\cdots \\f_{0}&f_{1}&f_{2}&\cdots \\f_{0}^{2}&2f_{0}f_{1}&f_{1}^{2}+2f_{0}f_{2}&\cdots \\\vdots &\vdots &\vdots &\ddots \end{array}}\right)}](https://wikimedia.org/api/rest_v1/media/math/render/svg/f14d52bc552668f19232bd0c31b6d90279d1bfb5)
The Bell matrix or the Jabotinsky matrix of a function $f(x)$ is defined as[1][2][3]
![{\displaystyle B[f]_{jk}={\frac {1}{j!}}\left[{\frac {d^{j}}{dx^{j}}}(f(x))^{k}\right]_{x=0}~,}](https://wikimedia.org/api/rest_v1/media/math/render/svg/6edf4d35ab6f9257f7c0341aa0aed08fcb35e32a)
so as to satisfy the equation
![{\displaystyle (f(x))^{k}=\sum _{j=0}^{\infty }B[f]_{jk}x^{j}~,}](https://wikimedia.org/api/rest_v1/media/math/render/svg/8781733a538855e58051ba92c69fe22d63c9c1d0)
These matrices were developed in 1947 by Eri Jabotinsky to represent convolutions of polynomials.[4] The Bell matrix is the transpose of the Carleman matrix and satisfies $B[f\circ g]=B[g]\,B[f]~,$ which makes the Bell matrix $B$ an anti-representation of $f$.
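The order reversal can be sketched directly (helper names and coefficients are illustrative): the Bell matrix of an affine map is the transpose of its Carleman matrix, $B[f]_{jk}={\binom {k}{j}}a^{k-j}c^{j}$ for $f(x)=a+cx$, and the factors compose in reverse:

```python
from math import comb

N = 5

def bell_affine(a, c):
    """Bell matrix of a + c*x: B[f]_{jk} = [x^j] (f(x))^k, i.e. M[f] transposed."""
    return [[comb(k, j) * a**(k - j) * c**j for k in range(N)]
            for j in range(N)]

def matmul(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(N)) for j in range(N)]
            for i in range(N)]

# f(x) = 1 + 2x, g(x) = 3 + 5x, so f(g(x)) = 7 + 10x.
lhs = bell_affine(7, 10)                            # B[f o g]
rhs = matmul(bell_affine(3, 5), bell_affine(1, 2))  # B[g] * B[f] -- reversed order
assert lhs == rhs
```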
- R. Aldrovandi, Special Matrices of Mathematical Physics: Stochastic, Circulant and Bell Matrices, World Scientific, 2001.
- R. Aldrovandi and L. P. Freitas, Continuous Iteration of Dynamical Maps, online preprint, 1997.
- P. Gralewicz and K. Kowalski, Continuous Time Evolution from Iterated Maps and Carleman Linearization, online preprint, 2000.
- K. Kowalski and W.-H. Steeb, Nonlinear Dynamical Systems and Carleman Linearization, World Scientific, 1991.