-
Let's start by exploring the behavior of the dynamical system. We'll compute the first five values of $x_n$ and $y_n$ for different initial conditions. You can compute them by hand or use a computer program such as R.
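If you'd rather iterate by computer, here is a minimal Python sketch. The matrix `A` below is an assumption on our part: it is reconstructed from the eigenpairs quoted later in this activity ($\lambda_1=0.5$ with $\vc{u}_1=(1,1)$ and $\lambda_2=1.5$ with $\vc{u}_2=(1,-1)$); substitute the matrix of your own system.

```python
import numpy as np

# Assumed iteration matrix, reconstructed from the eigenpairs quoted later
# in this activity (lambda_1 = 0.5 with u_1 = (1,1), lambda_2 = 1.5 with
# u_2 = (1,-1)); substitute the matrix of your own system.
A = np.array([[1.0, -0.5],
              [-0.5, 1.0]])

def iterate(x0, y0, steps=5):
    """Return the states (x_0, y_0), ..., (x_steps, y_steps)."""
    x = np.array([x0, y0], dtype=float)
    states = [x]
    for _ in range(steps):
        x = A @ x          # x_{n+1} = A x_n
        states.append(x)
    return states

for state in iterate(1, 0):
    print(state)
```

Changing the arguments of `iterate` lets you try each of the initial conditions below.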
First, a pretty boring initial condition: $x_0=0$, $y_0=0$
$x_1=$
, $y_1=$
$x_2=$
, $y_2=$
$x_3=$
, $y_3=$
$x_4=$
, $y_4=$
$x_5=$
, $y_5=$
We conclude that if we start at the initial condition $\vc{x}_0 = (0,0)$, we
. This means that the value $(x_n,y_n)=(0,0)$ is an equilibrium of the dynamical system. If the state variables are exactly at an equilibrium, the system stays there forever — that's the definition of an equilibrium.
For linear systems of the form $\vc{x}_{n+1}=A\vc{x}_n$ (like the one we're looking at here), the point $\vc{x}_n=\vc{0}=(0,0)$ (i.e., the origin) is always an equilibrium. After all, if you plug in $\vc{x}_n=\vc{0}$, then $\vc{x}_{n+1}=A\vc{x}_n=\vc{0}$ for any matrix $A$. The system doesn't move away from the origin if you start exactly at the origin, just like you computed, above.
For most cases we'll be dealing with here, the origin will be the only equilibrium. We should get more interesting behavior if we start at any other point. Let's see what happens for some more initial conditions.
$x_0=1$, $y_0=0$
$x_1=$
, $y_1=$
$x_2=$
, $y_2=$
$x_3=$
, $y_3=$
$x_4=$
, $y_4=$
$x_5=$
, $y_5=$
When starting at the initial condition $\vc{x}_0=(1,0)$, does the system move away from or toward the equilibrium $\vc{0}$?
As $n$ increases, in which direction does the system eventually move away from the origin? It eventually moves close to the direction parallel to the vector
(Look at the ratio between $x_n$ and $y_n$ for the larger values of $n$, such as $n=5$.)
Eventually, how fast does the system move away from the origin? Check the ratios $x_5/x_4$, $y_5/y_4$, $x_4/x_3$, and $y_4/y_3$. After each of these later steps, the distance from the origin appears to be increasing by a factor of
. (Round your answer to the nearest tenth.)
Let's try another initial condition, starting in a different direction from the origin.
$x_0=0$, $y_0=1$
$x_1=$
, $y_1=$
$x_2=$
, $y_2=$
$x_3=$
, $y_3=$
$x_4=$
, $y_4=$
$x_5=$
, $y_5=$
When starting at the initial condition $\vc{x}_0=(0,1)$, the system eventually moves
the origin. In which direction? It eventually moves close to the direction parallel to the vector
(Look at the ratio between $x_n$ and $y_n$ for the larger values of $n$. Opposite directions, like $(2,-3)$ and $(-2,3)$, are considered the same direction here.)
Eventually, how fast does the system move away from the origin? At the later steps, the distance from the origin appears to be increasing by a factor of
. (Round your answer to the nearest tenth.)
Does the system always move away from the origin in the same direction and speed? Well, obviously it doesn't if you start right at the equilibrium $(0,0)$, because then it stays there forever. But, excluding that point, does the system always act like these last two examples? You'd have to try an infinite number of initial conditions before you could be sure. Since you might not have enough time to check that many initial conditions, we'll suggest another initial condition to try:
$x_0=1$, $y_0=1$
$x_1=$
, $y_1=$
$x_2=$
, $y_2=$
$x_3=$
, $y_3=$
$x_4=$
, $y_4=$
$x_5=$
, $y_5=$
When starting at the initial condition $\vc{x}_0 = (1,1)$, does the system move toward or away from the equilibrium at the origin?
In what direction does it move?
At each time step, the distance from the origin decreases exactly by a factor of
.
It turns out, only when you start with initial conditions that are multiples of $\vc{x}_0 = (1,1)$ will the system continue to move toward the equilibrium. If you start from any other direction, you'll eventually move away from the origin. (Remember, starting in the opposite direction, such as $\vc{x}_0 = (-1,-1)$ is considered the same direction. The system will still move toward the equilibrium if you start there.)
You shouldn't be surprised by what happens when we start directly in the direction along which solutions were moving away from the origin. Try the initial conditions: $x_0=1$, $y_0=-1$
$x_1=$
, $y_1=$
$x_2=$
, $y_2=$
$x_3=$
, $y_3=$
$x_4=$
, $y_4=$
$x_5=$
, $y_5=$
When starting at the initial condition $\vc{x}_0=(1,-1)$, the system immediately moves
the origin in the direction parallel to the vector
At each time step, the distance from the origin increases exactly by a factor of
.
-
Now, let's analyze the system and determine why we observed the above behavior. Rewrite the discrete dynamical system in matrix form so that we can find the eigenvalues and eigenvectors.
$\displaystyle\begin{bmatrix} x_{n+1}\\ y_{n+1} \end{bmatrix}=$
$\displaystyle\begin{bmatrix} x_{n}\\ y_{n} \end{bmatrix}$
The characteristic equation for the eigenvalues, $\det(A-\lambda I)=0$, is
The eigenvalues, in increasing order, are: $\lambda_1=$
,
$\lambda_2=$
We'll denote the $\lambda_1$ eigenvector as $\vc{u}_1$ and the $\lambda_2$ eigenvector as $\vc{u}_2$. What are the eigenvectors? (Any scalar multiple of each direction is OK.)
$\vc{u}_1=$
, $\vc{u}_2=$
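You can double-check your eigenvalue and eigenvector algebra numerically with NumPy. A sketch, assuming the matrix reconstructed from the eigenpairs this activity quotes ($\lambda_1=0.5$ with $(1,1)$, $\lambda_2=1.5$ with $(1,-1)$); use your own system's matrix in its place.

```python
import numpy as np

# Assumed matrix, consistent with eigenvalues 0.5 and 1.5 and
# eigenvectors (1,1) and (1,-1); substitute your own system's matrix.
A = np.array([[1.0, -0.5],
              [-0.5, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs are eigenvectors
order = np.argsort(eigvals)          # sort into increasing order
eigvals = eigvals[order]
eigvecs = eigvecs[:, order]

print("eigenvalues:", eigvals)
for lam, u in zip(eigvals, eigvecs.T):
    # A u - lam u should be (numerically) zero for each eigenpair
    print(lam, u, A @ u - lam * u)
```

Note that NumPy normalizes the eigenvectors to unit length; any scalar multiple of your hand-computed eigenvectors is the same eigenvector direction.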
-
How do the eigenvalues and eigenvectors relate to what we found in part a?
For most initial conditions, the solution eventually moved away from the equilibrium at the origin in the direction
Does that direction correspond to an eigenvector?
Eventually, the distance from the origin increased by what factor after each time step?
What was the one direction in which the solution moved toward the equilibrium?
Does that direction correspond to an eigenvector?
When starting with an initial condition along that direction, the distance from the origin decreased by what factor after each time step?
When we start with an initial condition exactly along an eigenvector, the situation is fairly simple. If $\vc{x}_0$ is an eigenvector, then we know that $\vc{x}_1=A\vc{x}_0$ points in
direction as $\vc{x}_0$. In other words, $\vc{x}_1$ is a scalar multiple of $\vc{x}_0$, where we multiplied by the eigenvalue $\lambda$. We can keep going since $\vc{x}_1$
also an eigenvector corresponding to the same eigenvalue. The next iterate $\vc{x}_2$
also an eigenvector; it is just $\vc{x}_1$ multiplied by
. In each step, to go from $\vc{x}_{n-1}$ to $\vc{x}_n$, we just multiply by
.
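The claim that each step along an eigenvector just multiplies by the eigenvalue, so that $\vc{x}_n = \lambda^n \vc{x}_0$, can be checked directly. A sketch, again assuming the eigenpairs quoted in this activity and the matrix they determine:

```python
import numpy as np

# Assumed matrix and eigenpairs from this activity:
# lambda_1 = 0.5 with u_1 = (1,1), lambda_2 = 1.5 with u_2 = (1,-1).
A = np.array([[1.0, -0.5],
              [-0.5, 1.0]])

for lam, u in [(0.5, np.array([1.0, 1.0])), (1.5, np.array([1.0, -1.0]))]:
    x = u.copy()
    for n in range(1, 6):
        x = A @ x
        # starting on an eigenvector, each step just multiplies by lambda
        assert np.allclose(x, lam**n * u)
```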
When we started with the initial condition $\vc{x}_0=(1,1)$, we were along the eigenvector $\vc{u}_1$ corresponding to the eigenvalue $\lambda_1=$
. Therefore, at each time step, the solution was multiplied by the factor
.
Similarly, when we started with the initial condition $\vc{x}_0=(1,-1)$, we were along the eigenvector $\vc{u}_2$ corresponding to the eigenvalue $\lambda_2=$
. Therefore, at each time step, the solution was multiplied by the factor
.
If we start with a vector that is not an eigenvector, the effects of multiplying by the matrix aren't quite as simple. We observed that somehow the solution got closer to an eigenvector direction corresponding to one of the eigenvalues. Which one?
This larger eigenvalue is called the dominant eigenvalue of the matrix, as it and its eigenvector eventually dominate the dynamics of the system.
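This convergence toward the dominant eigenvector is the idea behind the power iteration method for estimating a dominant eigenvalue. A sketch, assuming the matrix reconstructed from this activity's eigenpairs:

```python
import numpy as np

# Assumed matrix with eigenpairs (0.5, (1,1)) and (1.5, (1,-1)).
A = np.array([[1.0, -0.5],
              [-0.5, 1.0]])

x = np.array([1.0, 0.0])   # any start not parallel to (1,1)
for n in range(20):
    x_next = A @ x
    ratio = np.linalg.norm(x_next) / np.linalg.norm(x)
    x = x_next

# The step-to-step growth ratio approaches the dominant eigenvalue 1.5,
# and the direction approaches the eigenvector (1,-1).
print(ratio)                    # close to 1.5
print(x / np.linalg.norm(x))    # close to (1,-1)/sqrt(2), up to sign
```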
Hint: Online, enter $\lambda$ as lambda or the symbol λ.
-
Before we go any further with analytic explanations, let's look at this behavior graphically. The following applet is similar to those used to visualize the relationship between eigenvectors and matrix-vector multiplication. In this applet, however, you can increase the value of $n$, which will compute more iterations, multiplying the initial vector by the matrix $A$ a total of $n$ times.
The matrix-vector product of the iteration
Given the matrix
\[{}\]
the iteration to calculate `{}` from `{}` is the product:
\[{}\]
To see the relationship between the matrix iteration and the eigenvectors, check “show eigenvectors.” This displays the eigenvector $\vc{u}_1$ in red and the eigenvector $\vc{u}_2$ in green. Since the eigenvector $\vc{u}_1$ is in the direction $(1,1)$, which is considered the same direction as its opposite $(-1,-1)$, the applet plots two red arrows, one in each direction. For convenience, the applet draws these depictions of $\vc{u}_1$ as long vectors.
For $\vc{u}_2$, it plots green arrows in the direction $(1,-1)$ and its opposite $(-1,1)$.
As you increase the value of $n$, you can visualize how the repeated matrix multiplication transforms the vector closer and closer to the direction of
(the green arrows). Only if the initial condition is exactly parallel to
(the red arrows) does the behavior differ; in that case, multiplication by $A$ preserves the direction and just shrinks the vector by a factor of $\lambda_1=0.5$ each iteration.
To understand what the matrix-multiplication is doing in simple terms, we can break up each vector into two components, a component parallel to $\vc{u}_1$ and a component parallel to $\vc{u}_2$. To see this decomposition, check the “show decompositions” box. (It's probably easiest to see what's going on if you also reduce $n$ to $0$ or $1$ at first.) Each blue vector $\vc{x}_n$ is decomposed into the sum of two vectors: a vector parallel to $\vc{u}_1$ (in red) plus a vector parallel to $\vc{u}_2$ (in green). (For reference, you can see a graphical depiction of vector addition.)
Since each red vector is parallel to
, multiplication by $A$ simply shrinks the red vector by the factor
. Since each green vector is parallel to
, multiplication by $A$ simply increases the length of the green vector by the factor
. The resulting sum of the red and green vectors, the next blue vector, therefore points closer to the direction of
after multiplication by $A$. (Unless, of course, the green arrow started off being $\vc{0}$, in which case multiplication by $\lambda_2=1.5$ won't increase its length.)
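The decomposition the applet displays can be computed by solving a small linear system for the two coefficients. A sketch, assuming the eigenpairs $\vc{u}_1=(1,1)$, $\vc{u}_2=(1,-1)$ from this activity and the matrix they determine:

```python
import numpy as np

# Assumed eigenpairs from this activity: lambda_1 = 0.5 with u_1 = (1,1),
# lambda_2 = 1.5 with u_2 = (1,-1), and the matrix consistent with them.
u1 = np.array([1.0, 1.0])
u2 = np.array([1.0, -1.0])
A = np.array([[1.0, -0.5],
              [-0.5, 1.0]])

def decompose(x):
    """Find s1, s2 with x = s1*u1 + s2*u2 (the red and green components)."""
    s1, s2 = np.linalg.solve(np.column_stack([u1, u2]), x)
    return s1, s2

x = np.array([1.0, 0.0])
s1, s2 = decompose(x)         # components of x_0
t1, t2 = decompose(A @ x)     # components of x_1 = A x_0
# Multiplying by A scales the u1-component by 0.5 and the u2-component by 1.5.
print(s1, s2, t1, t2)
```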
-
Let's see if we can do this with equations. We aren't going to worry about how to decompose a vector into a sum of vectors parallel to the eigenvectors (i.e., determine the red and green arrows from the blue arrows). The important point is to understand the concept of this decomposition. For our examples, we'll just give you the decomposition.
Also, to make these equations look simple, we are going to use for the eigenvectors $\vc{u}_1=\left ( 1, \quad 1\right )$ and $\vc{u}_2=\left ( 1, \quad -1\right )$. It's OK if you wrote different eigenvectors, as long as they pointed in the same direction. But, in what follows, we'll use these choices.
The first example we looked at in part (a) was $\vc{x}_0=(0,0)$. That one was boring because all the vectors were zero and multiplication by $A$ didn't do anything.
The second example was $\vc{x}_0 = (1, 0)$. We claim that this vector is $\frac{1}{2}$ times $\vc{u}_1$ plus $\frac{1}{2}$ times $\vc{u}_2$. Don't worry how we figured that out. Just verify that this is true.
$\frac{1}{2} \vc{u}_1 + \frac{1}{2}\vc{u}_2 = $
$+$
$=$
In the applet, if you made the blue vector $\vc{x}_0$ be $\vc{x}_0=(1, 0)$, the linear combination $\vc{x}_0=\frac{1}{2} \vc{u}_1 + \frac{1}{2}\vc{u}_2$ of the eigenvectors would be the sum of the red vector plus the green vector. Now, we can see what happens when we multiply by the matrix $A$ to calculate $\vc{x}_1=A\vc{x}_0$.
Distributing the multiplication by $A$ across the sum gives this result:
$$\vc{x}_1 = A\vc{x}_0 = A\left(\frac{1}{2} \vc{u}_1\right) + A\left(\frac{1}{2}\vc{u}_2\right)$$
Each term on the right is a multiple of an eigenvector that is multiplied by $A$. Since the vector $\frac{1}{2} \vc{u}_1$ is still an eigenvector for $\lambda_1=0.5$, multiplication by $A$ just multiplies that vector by
. Since the vector $\frac{1}{2} \vc{u}_2$ is still an eigenvector for $\lambda_2=1.5$, multiplication by $A$ just multiplies that vector by
. We've calculated that
$\vc{x}_1=A\vc{x}_0= $
$\cdot \left(\frac{1}{2} \vc{u}_1\right) +$
$\cdot \left(\frac{1}{2}\vc{u}_2\right)$.
This equation captures what we saw graphically. The red vector was multiplied by $0.5$ and the green vector by $1.5$.
We can repeat this process as many times as we like. The above equation gives $\vc{x}_1$ as a linear combination of the eigenvectors. Therefore, when we multiply by $A$ a second time, each term is multiplied by the corresponding eigenvalue a second time. We've now multiplied each term by the eigenvalue squared, i.e., we calculate that
$\vc{x}_2=A\vc{x}_1= $
$\cdot \left(\frac{1}{2} \vc{u}_1\right) +$
$\cdot \left(\frac{1}{2}\vc{u}_2\right)$.
In each step, we get another multiple of $\lambda_1$ and $\lambda_2$, raising the eigenvalues to a higher power. If we repeated $n$ times to calculate $\vc{x}_n$, we would have multiplied each term by the corresponding eigenvalue raised to the power of $n$:
$\vc{x}_n=$
$\cdot \left(\frac{1}{2} \vc{u}_1\right) +$
$\cdot \left(\frac{1}{2}\vc{u}_2\right)$.
As $n$ increases, the first term is
and the second term is
. For large values of $n$, the solution $\vc{x}_n$ is moving
the equilibrium at the origin.
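You can verify this pattern numerically: as stated above, after $n$ steps each term has been multiplied by its eigenvalue raised to the power $n$. A Python sketch, assuming the matrix reconstructed from this activity's eigenpairs and the decomposition $\vc{x}_0=(1,0)=\frac{1}{2}\vc{u}_1+\frac{1}{2}\vc{u}_2$ given in the text:

```python
import numpy as np

# Assumed data: eigenpairs (0.5, (1,1)) and (1.5, (1,-1)), the matrix A
# they determine, and x0 = (1,0) = (1/2)u1 + (1/2)u2 from the text.
A = np.array([[1.0, -0.5],
              [-0.5, 1.0]])
u1 = np.array([1.0, 1.0])
u2 = np.array([1.0, -1.0])

def x_closed(n):
    # each term picks up one factor of its eigenvalue per step
    return 0.5**n * (0.5 * u1) + 1.5**n * (0.5 * u2)

x = np.array([1.0, 0.0])
for n in range(6):
    assert np.allclose(x, x_closed(n))  # closed form matches the iteration
    x = A @ x
```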
-
We'll repeat this calculation for the third initial condition $\vc{x}_0 = (0, 1)$. The decomposition of this initial condition into a linear combination of the eigenvectors is
$$\vc{x}_0=\frac{1}{2} \vc{u}_1 + (- \frac{1}{2})\vc{u}_2.$$
(You can verify this fact on your own.)
Multiplying by $A$ to compute $\vc{x}_1=A\vc{x}_0$ multiplies each term by its eigenvalue. Therefore, after one iteration, we get
$\vc{x}_1=A\vc{x}_0= $
$\cdot \left(\frac{1}{2} \vc{u}_1\right) +$
$\cdot \left(- \frac{1}{2}\vc{u}_2\right)$.
Repeating $n$ times, the general solution is
$\vc{x}_n=$
$\cdot \left(\frac{1}{2} \vc{u}_1\right) +$
$\cdot \left(- \frac{1}{2}\vc{u}_2\right)$.
As $n$ increases, the first term is
and the second term is
. For large values of $n$, the solution $\vc{x}_n$ is moving
the equilibrium at the origin. The only difference from the previous initial condition is that the solution is moving in the opposite direction along the eigenvector $\vc{u}_2$, as seen by the minus sign in front of that term.
-
The same pattern works in general. Any vector $\vc{x}$ can be written as a linear combination of the eigenvectors $\vc{u}_1$ and $\vc{u}_2$. Let's say, using our secret calculation, we found the two numbers $s_1$ and $s_2$ so that $\vc{x}_0 = s_1 \vc{u}_1 + s_2 \vc{u}_2$. Now, just like above, we can find a formula for $\vc{x}_n$:
$\vc{x}_n=$
$\cdot s_1 \vc{u}_1 +$
$\cdot s_2 \vc{u}_2$.
If $s_1=0$ or $s_2=0$, it means we started out with an eigenvector. The solution stays along that direction. If neither $s_1$ nor $s_2$ is zero, which term will be larger once $n$ is large enough?
As long as $s_2 \ne 0$, i.e., as long as we didn't start with a scalar multiple of $\vc{u}_1$, for large $n$, the solution will be moving
the equilibrium at the origin. The solution will be increasing at a rate determined by the dominant eigenvalue $\lambda_2=1.5$ in the direction of its eigenvector $\vc{u}_2$.