In the study of linear systems, a familiar relationship is the homogeneous state-space equation $\dot{\mathbf{x}}(t) = A \mathbf{x}(t)$, where $\mathbf{x}$ is an $n$-vector and $A$ is an $n \times n$ matrix. The time-invariant solution (i.e., when $A$ is a constant matrix) is $\mathbf{x}(t) = e^{At} \mathbf{x}_0$. When this subject is first introduced, the solution is often assumed, rather than derived.

The thinking is that since the solution to the homogeneous scalar equation $\dot{x}(t) = a\,x(t)$ is $x(t) = e^{at} x_0$, students will willingly accept a matrix-friendly equivalent that solves the state-space differential equation. So the definition of the matrix exponential, $e^{At} = \mathbf{I}_n + At + \frac{1}{2!}(At)^2 + \dots$, is given and shown to work for the homogeneous case:

$$\begin{aligned} \dot{\mathbf{x}}(t) & = \frac{d}{dt} \left( e^{At} \mathbf{x}_0 \right) \\ & = \frac{d}{dt} \left( e^{At} \right) \mathbf{x}_0 + e^{At} \frac{d}{dt} \left( \mathbf{x}_0 \right) \\ & = A e^{At} \mathbf{x}_0 + e^{At} \left( 0 \right) \\ & = A e^{At} \mathbf{x}_0 \\ & = A \mathbf{x}(t) \end{aligned}$$
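This chain of equalities is easy to check numerically. The sketch below (the matrix $A$ and initial state $\mathbf{x}_0$ are arbitrary examples of mine, not values from this post) compares a finite-difference derivative of $e^{At}\mathbf{x}_0$ against $A\,e^{At}\mathbf{x}_0$ using SciPy's `expm`:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # arbitrary example matrix
x0 = np.array([1.0, 0.0])                  # arbitrary initial state
t, h = 1.0, 1e-6

# Central-difference derivative of x(t) = e^{At} x0 ...
xdot_fd = (expm(A * (t + h)) @ x0 - expm(A * (t - h)) @ x0) / (2 * h)
# ... should match the right-hand side A x(t)
xdot_rhs = A @ (expm(A * t) @ x0)
print(np.allclose(xdot_fd, xdot_rhs, atol=1e-5))  # True
```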

It seems to me, however, that this presentation sequence masks what is really going on with the system: there is an infinite recursion on the initial state, $\mathbf{x}_0$, that converges to a value for $\mathbf{x}(t)$:

$$\begin{aligned} \mathbf{x}(t) & = \mathbf{x}_0 + A \int_0^t \mathbf{x}(\tau_1)\, d\tau_1 \\ & = \mathbf{x}_0 + A \int_0^t \left[ \mathbf{x}_0 + A \int_0^{\tau_1} \mathbf{x}(\tau_2)\, d\tau_2 \right] d\tau_1 \\ & = \mathbf{x}_0 + A \int_0^t \left[ \mathbf{x}_0 + A \int_0^{\tau_1} \left[ \mathbf{x}_0 + A \int_0^{\tau_2} \mathbf{x}(\tau_3)\, d\tau_3 \right] d\tau_2 \right] d\tau_1 \end{aligned}$$

This recursion obviously repeats *ad infinitum*. Carrying out the repeated integrals of the constant term $\mathbf{x}_0$ produces $\frac{1}{k!}\left(At\right)^k \mathbf{x}_0$ at the $k$-th level of nesting, so collecting terms on the right-hand side defines the matrix exponential:

$$\begin{aligned} \mathbf{x}(t) & = \left[ \mathbf{I}_n + At + \frac{1}{2!} \left( At \right)^2 + \dots \right] \mathbf{x}_0 \\ & = e^{At} \mathbf{x}_0 \end{aligned}$$
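The collected series really does converge to $e^{At}$, and quickly. A minimal sketch (again using an arbitrary example matrix of mine, with SciPy's `expm` as the reference):

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, t, terms=30):
    """Sum I + At + (At)^2/2! + ..., truncated after `terms` terms."""
    total = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ (A * t) / k        # build (At)^k / k! incrementally
        total = total + term
    return total

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # arbitrary example matrix
x0 = np.array([1.0, 0.0])
print(np.allclose(expm_series(A, 0.5) @ x0, expm(A * 0.5) @ x0))  # True
```

Note that `expm` itself does not sum this series directly (naive summation is numerically poor for large $\|At\|$), but for modest arguments the truncated series agrees to machine precision.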

Presented in this order, the matrix exponential is developed from the system response, rather than the other way around. This strikes me as being easier to comprehend than “guessing” that some seemingly arbitrary function might solve the problem. Is this conceptually easier for anyone else?
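For concreteness, the recursion itself can be carried out mechanically: each iterate is a polynomial in $t$ with matrix coefficients, and integrating term by term generates the series coefficients $\frac{1}{k!}A^k$ one level at a time. A sketch (the matrix $A$ is an arbitrary example, and `expm` is used only as the reference answer):

```python
import numpy as np
from scipy.linalg import expm

def picard_coeffs(A, n_iter):
    """Iterate x_{k+1}(t) = x0 + A * integral_0^t x_k(tau) dtau, representing
    each iterate as a polynomial in t with matrix coefficients, so that
    x_k(t) = (C_0 + C_1 t + C_2 t^2 + ...) x0.  Returns [C_0, C_1, ...]."""
    n = A.shape[0]
    coeffs = [np.eye(n)]                      # start from x_0(t) = x0
    for _ in range(n_iter):
        # Termwise: the integral of C_j t^j is C_j t^{j+1} / (j+1);
        # premultiply by A and add back the constant x0 term.
        coeffs = [np.eye(n)] + [A @ C / (j + 1) for j, C in enumerate(coeffs)]
    return coeffs

A = np.array([[0.0, 1.0], [-2.0, -3.0]])      # arbitrary example matrix
t = 0.5
approx = sum(C * t**j for j, C in enumerate(picard_coeffs(A, 20)))
print(np.allclose(approx, expm(A * t)))       # the recursion reproduces e^{At}
```

After $k$ iterations the coefficient list is exactly $\mathbf{I}_n, A, \frac{1}{2!}A^2, \dots, \frac{1}{k!}A^k$, which is the truncated exponential series developed above.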

## 2 Comments

Your post looks fine when I visit your web page directly. But when I saw it inside Google Reader, none of the LaTeX source rendered; I saw blocks of LaTeX commands rather than equations rendered as images.

Unfortunately, it looks like most of the sites using MathJax have this problem. There is an option to enclose an image inside a `<span class="MathJax_Preview"> [image] </span>` element, where the image is shown until the LaTeX code is rendered by MathJax. However, this requires converting each equation into an image first, then pasting in the correct tags for each equation. Doable, but time consuming. This method also disables the ability to use $ inline to get in and out of math mode, as far as I can tell.

I’ll keep looking into the matter, to see if I can find a workable solution. Thanks for the info!

[Update:] I’ve reworked this page as best I could to provide images of the equations. Unfortunately, I had real problems with the ampersand in aligned equations being modified. If I used & for ampersands inside equations, then these were properly interpreted by the tex2jax pre-processor. However, this approach does not allow me to hide LaTeX code inside of <script> tags, which has the benefit of keeping the raw LaTeX code from being shown if MathJax fails to render the page. Until I get smarter about how the tex2jax pre-processor works, this will have to be good enough, as I’m not sure what else to do.