As long as $\alpha \ne 0$, there is only one equilibrium $x_e=0$. The equilibrium is stable, i.e., nearby solutions approach the equilibrium, if $\alpha < 0$. The equilibrium is unstable, i.e., some nearby solutions move away from the equilibrium, if $\alpha > 0$.
Our goal is to generalize this result to two-dimensional linear systems so we can characterize their behavior.
Let's start with a simple two-dimensional system which is equivalent to two one-dimensional systems: \begin{align*} \diff{x}{t} &= \alpha x\\ \diff{y}{t} &= \delta y \end{align*} (The character $\delta$ is the Greek letter “delta”.)
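As a quick numerical sanity check (a sketch using NumPy, with hypothetical parameter values $\alpha=-1$, $\delta=0.5$ and hypothetical constants $c_1$, $c_2$), we can verify that $x(t)=c_1 e^{\alpha t}$ and $y(t)=c_2 e^{\delta t}$ satisfy these equations by comparing a finite-difference derivative to the right-hand sides:

```python
import numpy as np

# Hypothetical parameter values for illustration
alpha, delta = -1.0, 0.5
c1, c2 = 2.0, 3.0

t = np.linspace(0.0, 2.0, 2001)
x = c1 * np.exp(alpha * t)   # candidate solution of dx/dt = alpha * x
y = c2 * np.exp(delta * t)   # candidate solution of dy/dt = delta * y

# A finite-difference derivative should match alpha*x and delta*y
dxdt = np.gradient(x, t)
dydt = np.gradient(y, t)
print(np.max(np.abs(dxdt - alpha * x)))  # small discretization error
print(np.max(np.abs(dydt - delta * y)))
```

The residual is only the finite-difference discretization error, which shrinks as the time grid is refined.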
As long as $\alpha \ne 0$ and $\delta \ne 0$, there is only one equilibrium $(x_e, y_e) = (0,0)$. If the solution $\vc{x}(t) = (x(t),y(t))$ always approaches the equilibrium, then the equilibrium is stable. Since we need both $x(t)$ and $y(t)$ to approach zero, the condition for stability is a compound inequality: $\alpha < 0$ and $\delta < 0$.
For the equilibrium to be unstable, the solution $\vc{x}(t)=(x(t),y(t))$ must move away from the equilibrium for some close initial conditions. In this case, it is enough for just one of the variables $x(t)$ or $y(t)$ to move away from zero. The condition for instability is $\alpha > 0$ or $\delta > 0$.
If we let $\vc{x}= (x,y)$, we can write this equation as a matrix-vector multiplication $$\diff{\vc{x} }{t} = A \vc{x}$$ where $A$ is the diagonal matrix (meaning the only non-zero entries are on the diagonal): $$A = \left[\begin{matrix}\alpha & 0\\0 & \delta\end{matrix}\right].$$ The eigenvalues of a diagonal matrix are just the diagonal entries, so they are $\lambda_1 = \alpha$ and $\lambda_2 = \delta$. Therefore, in terms of the eigenvalues, the equilibrium is stable if $\lambda_1 < 0$ and $\lambda_2 < 0$.
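We can confirm this eigenvalue fact numerically; here is a minimal sketch with NumPy, using the hypothetical entries $\alpha=-1$ and $\delta=1$:

```python
import numpy as np

# Diagonal matrix with hypothetical entries alpha = -1, delta = 1
A = np.array([[-1.0, 0.0],
              [ 0.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)   # the diagonal entries, -1 and 1
print(eigvecs)   # columns are the eigenvectors (1,0) and (0,1)
```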
First, rewrite the general solution in terms of $\lambda_1$ and $\lambda_2$ rather than $\alpha$ and $\delta$: $x(t) = c_1 e^{\lambda_1 t}$ and $y(t) = c_2 e^{\lambda_2 t}$.
Next, write this solution in vector form: $\vc{x}(t) = (c_1 e^{\lambda_1 t}, c_2 e^{\lambda_2 t})$.
Now, let's rewrite it in terms of the eigenvectors. If we take the first component, $c_1 e^{\lambda_1 t}$, and multiply it by the first eigenvector $\vc{u}_1=(1,0)$, we get the vector $c_1 e^{\lambda_1 t} \vc{u}_1 = (c_1 e^{\lambda_1 t}, 0)$.
If we take the second component, $c_2 e^{\lambda_2 t}$, and multiply it by the second eigenvector $\vc{u}_2=(0,1)$, we get the vector $c_2 e^{\lambda_2 t} \vc{u}_2 = (0, c_2 e^{\lambda_2 t})$.
The general solution $\vc{x}(t)$ is just the sum of those two vectors. Adding the left sides of those equations gives the general solution formula in terms of the eigenvalues and eigenvectors: $\vc{x}(t) = c_1 e^{\lambda_1 t} \vc{u}_1 + c_2 e^{\lambda_2 t} \vc{u}_2$.
Notice how this general solution is just in terms of the eigenvalues ($\lambda_1$ and $\lambda_2$), the eigenvectors ($\vc{u}_1$ and $\vc{u}_2$), and two arbitrary constants ($c_1$ and $c_2$, which would be determined by the initial conditions). In our case, the eigenvalues and eigenvectors were simple expressions, so the solution had a simple form. It turns out, though, that this formula for the solution works for any two-dimensional linear differential equation of the form $\vc{x}'(t) = A \vc{x}$ (except for the special case when $\lambda_1=\lambda_2$, but we won't worry about that).
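As an illustration of this claim (a rough numerical sketch with hypothetical eigenvalues, eigenvectors, and constants, not part of the original lesson), we can integrate $\diff{\vc{x}}{t} = A\vc{x}$ with a crude forward-Euler scheme and compare the result to the formula $c_1 e^{\lambda_1 t}\vc{u}_1 + c_2 e^{\lambda_2 t}\vc{u}_2$:

```python
import numpy as np

# Hypothetical diagonal system with eigenvalues -1 and 0.5
A = np.array([[-1.0, 0.0],
              [ 0.0, 0.5]])
lam1, lam2 = -1.0, 0.5
u1, u2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
c1, c2 = 2.0, -1.0

dt, steps = 1e-4, 10000           # integrate out to t = 1
x = c1 * u1 + c2 * u2             # initial condition x(0)
for _ in range(steps):
    x = x + dt * (A @ x)          # forward Euler step

t = dt * steps
exact = c1 * np.exp(lam1 * t) * u1 + c2 * np.exp(lam2 * t) * u2
print(np.max(np.abs(x - exact)))  # small numerical integration error
```

The discrepancy is just the first-order error of forward Euler, so the closed-form formula matches the integrated dynamics.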
The right panel shows curves that should be familiar. There, $x$ is plotted versus $t$ on the top axes (solid curve) and $y$ is plotted versus $t$ on the bottom axes (dashed curve). These curves will exhibit exponential growth when $\alpha$ or $\delta$ is positive and will exhibit exponential decay when $\alpha$ or $\delta$ is negative. You can move the slider for $t$ (or press the play button in the lower left of one of the panels) so that red points illustrating $x(t)$ and $y(t)$ move along the curves.
The left panel shows the phase plane with the state variable $x$ plotted versus the state variable $y$. There is no representation of time $t$ in the phase plane. The cyan solution trajectory in the phase plane is the curve that the point $(x(t),y(t))$ traces out as $t$ increases. To help visualize time, you can change the value of $t$ and observe how the red point $(x(t),y(t))$ changes.
In the above applet, set $\alpha=-1$ and $\delta=1$. Move the initial condition $\vc{x}(0)$ (the cyan point in the phase plane) to about $(1,1)$. From this initial condition, the solution trajectory (the thick cyan line) moves upward and to the left, which means that $x(t)$ is decreasing and $y(t)$ is increasing. In other words, $x(t)$ is exhibiting exponential decay and $y(t)$ is exhibiting exponential growth, which you can also verify by computing the solution (or looking at the solution section below the applet).
Unless you move the initial condition exactly along the line $y=0$ (or, equivalently, set the initial condition $y(0)=0$ in the lower right plot), the solution trajectory moves away from the equilibrium at the origin, which means the equilibrium is unstable. Notice how, if you go backward in time, i.e., follow the thin cyan line, the solution also moves away from the origin unless you are on the line $x=0$.
The lines $y=0$ and $x=0$ are special because they are the directions of the eigenvectors $\vc{u}_1=(1,0)$ and $\vc{u}_2=(0,1)$, respectively. You can view the eigenvectors by checking the “show eigenvector” box. The eigenvector $\vc{u}_1$, in green, is shown as pointing toward the origin, which is the direction the solution moves along $\vc{u}_1$ since $\lambda_1=-1$ is negative. The eigenvector $\vc{u}_2$, in blue, is shown as pointing away from the origin, which is the direction the solution moves along $\vc{u}_2$ since $\lambda_2=1$ is positive. You can further explore the behavior of the system when $\alpha=-1$ and $\delta=1$ by moving the initial condition around. Notice how the general behavior of the solution trajectories in the phase plane is predicted just by the directions of the eigenvectors.
If both eigenvalues are negative (e.g., set both $\alpha$ and $\delta$ to negative numbers in the above applet), then the equilibrium is stable. In this case, all directions are stable, and all nearby trajectories move toward the equilibrium. We classify such an equilibrium as a stable node (some folks call it a sink).
If both eigenvalues are positive (e.g., set both $\alpha$ and $\delta$ to positive numbers in the above applet), then the equilibrium is unstable. In this case, all directions are unstable, and all nearby trajectories move away from the equilibrium. We classify such an equilibrium as an unstable node (some folks call it a source).
That takes care of all the possibilities for our simple uncoupled system of equations, unless you set one of the eigenvalues to zero (e.g., set $\alpha$ or $\delta$ to zero in the above applet), in which case the system becomes degenerate, and we get an infinite number of equilibria. We won't concern ourselves with those cases.
As long as $\det A \ne 0$, the dynamical system has only one equilibrium, which is $\vc{x}_e = (0,0)$.
Let's concern ourselves first with the case where the eigenvalues $\lambda_1$ and $\lambda_2$ of $A$ are real and distinct. Then, the system behaves nearly identically to the uncoupled system, the difference being that the eigenvectors $\vc{u}_1$ and $\vc{u}_2$ are no longer parallel to the $x$- and $y$-axes. We aren't going to concern ourselves with deriving the formula for the solution. Instead, we'll just be content to assert that we have the same solution formula as above, in terms of the eigenvalues, eigenvectors, and two arbitrary constants $c_1$ and $c_2$ (which would be determined by initial conditions): $\vc{x}(t) = c_1 e^{\lambda_1 t} \vc{u}_1 + c_2 e^{\lambda_2 t} \vc{u}_2$.
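To back up this assertion numerically (a sketch with a hypothetical coupled matrix whose eigenvalues are $3$ and $-1$, with eigenvectors $(1,1)$ and $(1,-1)$), we can check that the same formula satisfies $\diff{\vc{x}}{t} = A\vc{x}$ even though the eigenvectors are not axis-aligned:

```python
import numpy as np

# Hypothetical coupled system: eigenvalues 3 and -1,
# eigenvectors (1,1) and (1,-1), not parallel to the axes
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
lam1, lam2 = 3.0, -1.0
u1 = np.array([1.0, 1.0])
u2 = np.array([1.0, -1.0])
c1, c2 = 0.5, 1.5

def x(t):
    """General solution x(t) = c1 e^{lam1 t} u1 + c2 e^{lam2 t} u2."""
    return c1 * np.exp(lam1 * t) * u1 + c2 * np.exp(lam2 * t) * u2

def dxdt(t):
    """Its time derivative, differentiated term by term."""
    return c1 * lam1 * np.exp(lam1 * t) * u1 + c2 * lam2 * np.exp(lam2 * t) * u2

# The derivative should equal A x(t) at every time we sample
for t in [0.0, 0.3, 1.0]:
    assert np.allclose(dxdt(t), A @ x(t))
print("same formula works for the coupled system")
```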
For the equilibrium at the origin to be a stable node, i.e., for the solution to approach the equilibrium from all directions, the condition is $\lambda_1 < 0$ and $\lambda_2 < 0$; the eigenvectors and the constants $c_1$ and $c_2$ play no role.
For the equilibrium at the origin to be an unstable node, i.e., for the solution to move away from the equilibrium in all directions, the condition is $\lambda_1 > 0$ and $\lambda_2 > 0$.
For the equilibrium at the origin to be a saddle, the eigenvalues must have opposite signs. (To be concrete, let's assume $\lambda_1 < \lambda_2$, i.e., that $\vc{u}_1$ is the stable direction; then the condition is $\lambda_1 < 0 < \lambda_2$.)
The only one of those three that is stable is the stable node. Therefore, the condition for the equilibrium to be stable is $\lambda_1 < 0$ and $\lambda_2 < 0$.
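The three cases can be collected into a small helper function (a hypothetical illustration, not part of the applet; it assumes the eigenvalues are real, distinct, and nonzero):

```python
def classify(lam1, lam2):
    """Classify the equilibrium of x' = Ax for real, distinct,
    nonzero eigenvalues lam1 and lam2."""
    if lam1 < 0 and lam2 < 0:
        return "stable node"    # all directions stable
    if lam1 > 0 and lam2 > 0:
        return "unstable node"  # all directions unstable
    return "saddle"             # one stable, one unstable direction

print(classify(-1, -2))  # stable node
print(classify(1, 2))    # unstable node
print(classify(-1, 1))   # saddle
```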
The following applet is an extended version of the previous one where you can set all four parameters of the matrix $A$ as well as view its eigenvalues and eigenvectors. You can use it to explore the behavior of the linear systems we'll examine, below. (Checking “show vector” will allow you to see in which direction the solution is heading even if it goes off the screen.)
The applet displays the eigenvalues of the matrix $A$ along with the corresponding eigenvectors. (The applet calculates only real eigenvectors. If the eigenvectors are complex or there is only one eigenvector, it will display a “?” in place of an eigenvector.) For most initial conditions, after a long time the solution moves in the direction of the eigenvector with the larger eigenvalue; the solution moves toward the equilibrium only for initial conditions along eigenvector directions whose eigenvalues are negative.
Before we start discussing how we'll use those complex numbers, we should remember an important point. The parameters $\alpha$, $\beta$, $\gamma$, and $\delta$ are real numbers. Any initial conditions we'll use will be real numbers. If $\diff{\vc{x} }{t}=A\vc{x}$, can the solution $\vc{x}$ take on complex values? No. The differential equation does not involve any complex numbers, and the solution does not involve any complex numbers. We don't even need to involve complex numbers to discuss the dynamics of this system.
So, why are we going to involve complex numbers? Because it allows us to avoid doing any more work. If we use complex numbers, we can use the same solution formula we've been using all along. If $\lambda_1$ and $\lambda_2$ are the eigenvalues of $A$ with eigenvectors $\vc{u}_1$ and $\vc{u}_2$, the general solution of the differential equation is $\vc{x}(t) = c_1 e^{\lambda_1 t} \vc{u}_1 + c_2 e^{\lambda_2 t} \vc{u}_2$. (As before, $c_1$ and $c_2$ are the constants that would be determined from the initial conditions.)
It turns out that this formula makes sense even if the eigenvalues are complex numbers. Of course, there is the problem of knowing what $e^{\lambda_1 t}$ means when $\lambda_1$ is complex. (We'll get back to that in a minute.) Then, there's the other problem that the eigenvectors $\vc{u}_1$ and $\vc{u}_2$ will also be complex. In fact, we are going to have to let the constants $c_1$ and $c_2$ be complex. Lucky for you, we'll never need to compute the eigenvectors or the constants, so we can ignore these last two problems and just accept that $\vc{u}_1$, $\vc{u}_2$, $c_1$, and $c_2$ are complex.
But here's the really strange part. In the formula $\vc{x}(t) = c_1 e^{\lambda_1 t} \vc{u}_1 + c_2 e^{\lambda_2 t} \vc{u}_2$, the components of the solution $x(t)$ and $y(t)$ are real numbers. Since what we are really dealing with is real numbers, the ascent into complex numbers was, well, imaginary. If we did all the computations, which we won't, we'd find that all the imaginary components of the complex numbers cancel out, leaving us with real values for $x(t)$ and $y(t)$. We use complex numbers for the intermediate calculations because it'd be a real big pain to try to figure out the solution without going into complex numbers for the middle part of the journey.
OK, with that fanfare, here's the one thing you need to know about complex numbers: how to take an exponential of a complex number. We're going to use Euler's formula which defines the exponential of an imaginary number. Euler's formula says that, if $t$ is any real number (which means $i\cdot t$ is a purely imaginary number), then $$e^{it} = \cos(t) + i \sin(t).$$ We're not going to worry too much about why that's true. That's left as an exercise! We just need two important properties.
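Python's `cmath` module computes complex exponentials, so we can spot-check Euler's formula directly:

```python
import cmath
import math

# Spot-check Euler's formula: e^{it} = cos(t) + i sin(t)
for t in [0.0, 1.0, math.pi / 2, math.pi]:
    lhs = cmath.exp(1j * t)
    rhs = complex(math.cos(t), math.sin(t))
    assert abs(lhs - rhs) < 1e-12

print(cmath.exp(1j * math.pi))  # approximately -1 (Euler's identity)
```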
First, what's the maximum value of $\sin(t)$ and $\cos(t)$? It's $1$. The minimum value? It's $-1$. Moreover, the magnitude of $e^{it}$ is $|e^{it}| = \sqrt{\sin^2(t)+\cos^2(t)} = 1$. Therefore, $e^{it}$ can never get really large, and it can never get really small. So, if we care about whether something gets really large or really small, we actually don't care about a factor of $e^{it}$. If we multiply a number by $e^{it}$, it doesn't affect the size of the number.
Second, if we think of $t$ as representing time, what is the behavior of the functions $\sin(t)$ and $\cos(t)$ as $t$ increases? They oscillate. That's what we'll get when we multiply a number by the function $e^{it}$ (if we view $e^{it}$ as a function of time). A factor of $e^{it}$ will just introduce oscillations.
If you remember your rules for exponentiation, that's enough to understand what $e^{\lambda t}$ means when $\lambda$ is a complex number. If $\lambda = a + ib$, then $$e^{\lambda t} = e^{at + ibt} = e^{at} \cdot e^{ibt} = e^{at} \cdot \bigl(\cos(bt) + i \cdot \sin(bt)\bigr).$$
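We can verify this decomposition numerically for a hypothetical $\lambda = -0.5 + 2i$:

```python
import cmath
import math

a, b = -0.5, 2.0            # hypothetical real and imaginary parts
lam = complex(a, b)

# Check e^{(a+ib)t} = e^{at} (cos(bt) + i sin(bt)) at several times
for t in [0.0, 0.7, 1.5]:
    lhs = cmath.exp(lam * t)
    rhs = math.exp(a * t) * complex(math.cos(b * t), math.sin(b * t))
    assert abs(lhs - rhs) < 1e-12

print("decomposition verified")
```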
With that formula in mind, the condition on $a$ for $e^{\lambda t}$ to be decreasing in magnitude as $t$ increases is $a < 0$, and the condition for $e^{\lambda t}$ to be increasing is $a > 0$. What happens if $a=0$? Then $e^{\lambda t}$ neither grows nor shrinks as it oscillates.
Rather than using $a$ or $b$, we'll usually talk about the real part of $\lambda$, denoted $\text{Re}(\lambda)$, and the imaginary part of $\lambda$, denoted $\text{Im}(\lambda)$. In that notation, the condition for $e^{\lambda t}$ growing is $\text{Re}(\lambda) > 0$, and the condition for $e^{\lambda t}$ shrinking is $\text{Re}(\lambda) < 0$. Of course, if $\lambda$ is real, then $\text{Re}(\lambda)=\lambda$. That means these conditions apply whether the eigenvalues are real or complex.
We can now give the general criterion for the stability of the equilibrium $(0,0)$ of our linear system $\diff{\vc{x} }{t} = A\vc{x}$. Given that the solution is $\vc{x}(t) = c_1 e^{\lambda_1 t} \vc{u}_1 + c_2 e^{\lambda_2 t} \vc{u}_2$, the equilibrium is stable if both terms are shrinking. The condition for stability, valid regardless of whether the eigenvalues are real or complex, is $\text{Re}(\lambda_1) < 0$ and $\text{Re}(\lambda_2) < 0$.
If the eigenvalues do happen to be complex, then $\lambda_1$ and $\lambda_2$ will be complex conjugates, meaning $\text{Re}(\lambda_1) = \text{Re}(\lambda_2)$ and $\text{Im}(\lambda_1) = - \text{Im}(\lambda_2)$. (You get this result from the quadratic formula. We often say the eigenvalues are of the form $a \pm bi$.) This means that $e^{\lambda_1 t}$ and $e^{\lambda_2 t}$ will be growing or shrinking at the same rate when the eigenvalues are complex.
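A quick check with NumPy (using a hypothetical matrix whose eigenvalues are $-0.5 \pm 2i$) shows the conjugate pair:

```python
import numpy as np

# Hypothetical rotation-plus-decay matrix with eigenvalues -0.5 +/- 2i
A = np.array([[-0.5, -2.0],
              [ 2.0, -0.5]])

eigvals = np.linalg.eig(A)[0]
print(eigvals)

# Complex eigenvalues of a real matrix come as a conjugate pair:
# equal real parts, opposite imaginary parts
assert np.isclose(eigvals[0].real, eigvals[1].real)
assert np.isclose(eigvals[0].imag, -eigvals[1].imag)
```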
When the eigenvalues were real, we classified the equilibrium as a stable node, an unstable node, or a saddle. Complex eigenvalues add three more classifications. In all cases, the imaginary part of the eigenvalues creates oscillations in $e^{\lambda_1 t}$ and $e^{\lambda_2 t}$ due to the presence of the sines and cosines. (Since $\lambda_1$ and $\lambda_2$ will be complex conjugates, all oscillations will be at the same rate.)
Let's explore different examples of systems with complex eigenvalues. You can use the above applet to visualize their behavior. (Or you can open the full version of the applet -- just don't look at the Equilibrium Classification section until after you've classified the equilibrium, or it will short-circuit the learning process.) The applet will also show you the solution where the complex exponentials have been turned into sines and cosines using Euler's formula. You don't need to worry how to calculate such solutions. Notice how the solutions are real; all the imaginary parts of the solution have canceled out. The real part of the eigenvalues appear in the exponentials, determining the growth or decay. The imaginary parts of the eigenvalues appear in the sines and cosines, determining the oscillations.
For any initial condition other than $(x_0,y_0)=(0,0)$, the solution moves closer to the equilibrium at the origin as it rotates around the equilibrium.
For any initial condition other than $(x_0,y_0)=(0,0)$, the solution moves away from the equilibrium at the origin as it rotates around the equilibrium.