Math Insight

Asymptotic behavior of 2x2 discrete linear dynamical systems

Math 2241, Spring 2023
Name:
ID #:
Due date: Feb. 8, 2023, 11:59 p.m.
Table/group #:
Group members:
Total points: 1
  1. Eigenvalues and eigenvectors give us information about the behavior of a matrix. We want to use this to understand the behavior of a discrete dynamical system. Let's start with a simple example: \begin{align*} x_{n+1} &= x_{n} - 0.5 y_{n} \qquad \text{for $n=0,1,2, \ldots$}\\ y_{n+1} &= - 0.5 x_{n} + y_{n} \end{align*} with state variables $x_n$ and $y_n$, that we might also write as the vector $\vc{x}_n = (x_n,y_n)$.
    1. Let's start by exploring the behavior of the dynamical system. We'll compute the first five values of $x_n$ and $y_n$ for different initial conditions. You can compute them by hand or use a computer program like R.
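
      If you'd like to use R, a minimal sketch of the iteration might look like the following (the variable names are just illustrative):

      ```r
      # Iterate x_{n+1} = A x_n and print the first five iterates.
      A <- matrix(c(1, -0.5,
                    -0.5, 1), nrow = 2, byrow = TRUE)
      x <- c(0, 0)  # the initial condition (x_0, y_0); edit it for the other cases
      for (n in 1:5) {
        x <- as.vector(A %*% x)
        cat("n =", n, ": x_n =", x[1], ", y_n =", x[2], "\n")
      }
      ```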

      First, a pretty boring initial condition: $x_0=0$, $y_0=0$
      $x_1=$
      , $y_1=$

      $x_2=$
      , $y_2=$

      $x_3=$
      , $y_3=$

      $x_4=$
      , $y_4=$

      $x_5=$
      , $y_5=$

      We conclude that if we start at the initial condition $\vc{x}_0 = (0,0)$, we
      . This means that the value $(x_n,y_n)=(0,0)$ is an equilibrium of the dynamical system. If the state variables are exactly at an equilibrium, the system stays there forever — that's the definition of an equilibrium.

      For linear systems of the form $\vc{x}_{n+1}=A\vc{x}_n$ (like the one we're looking at here), the point $\vc{x}_n=\vc{0}=(0,0)$ (i.e., the origin) is always an equilibrium. After all, if you plug in $\vc{x}_n=\vc{0}$, then $\vc{x}_{n+1}=A\vc{x}_n=\vc{0}$ for any matrix $A$. The system doesn't move away from the origin if you start exactly at the origin, just like you computed above.

      For most cases we'll be dealing with here, the origin will be the only equilibrium. We should get more interesting behavior if we start at any other point. Let's see what happens for some more initial conditions.

      $x_0=1$, $y_0=0$
      $x_1=$
      , $y_1=$

      $x_2=$
      , $y_2=$

      $x_3=$
      , $y_3=$

      $x_4=$
      , $y_4=$

      $x_5=$
      , $y_5=$

      When starting at the initial condition $\vc{x}_0=(1,0)$, does the system move away from or toward the equilibrium $\vc{0}$?

      As $n$ increases, in which direction does the system eventually move away from the origin? It eventually moves close to the direction parallel to the vector
      (Look at the ratio between $x_n$ and $y_n$ for the larger values of $n$, such as $n=5$.)

      Eventually, how fast does the system move away from the origin? Check the ratios $x_5/x_4$, $y_5/y_4$, $x_4/x_3$, and $y_4/y_3$. After each of these later steps, the distance from the origin appears to be increasing by a factor of
      . (Round your answer to the nearest tenth.)

      Let's try another initial condition starting in a different direction from the origin.

      $x_0=0$, $y_0=1$
      $x_1=$
      , $y_1=$

      $x_2=$
      , $y_2=$

      $x_3=$
      , $y_3=$

      $x_4=$
      , $y_4=$

      $x_5=$
      , $y_5=$

      When starting at the initial condition $\vc{x}_0=(0,1)$, the system eventually moves
      the origin in which direction? It eventually moves close to the direction parallel to the vector
      (Look at the ratio between $x_n$ and $y_n$ for the larger values of $n$. Opposite directions, like $(2,-3)$ and $(-2,3)$, are considered the same direction here.)

      Eventually, how fast does the system move away from the origin? At the later steps, the distance from the origin appears to be increasing by a factor of
      . (Round your answer to the nearest tenth.)

      Does the system always move away from the origin in the same direction and at the same speed? Well, obviously it doesn't if you start right at the equilibrium $(0,0)$, because then it stays there forever. But, excluding that point, does the system always act like these last two examples? You'd have to try an infinite number of initial conditions before you could be sure. Since you might not have enough time to check that many initial conditions, we'll suggest another initial condition to try:

      $x_0=1$, $y_0=1$
      $x_1=$
      , $y_1=$

      $x_2=$
      , $y_2=$

      $x_3=$
      , $y_3=$

      $x_4=$
      , $y_4=$

      $x_5=$
      , $y_5=$

      When starting at the initial condition $\vc{x}_0 = (1,1)$, does the system move toward or away from the equilibrium at the origin?
      In what direction does it move?
      At each time step, the distance from the origin decreases exactly by a factor of
      .

      It turns out, only when you start with initial conditions that are multiples of $\vc{x}_0 = (1,1)$ will the system continue to move toward the equilibrium. If you start from any other direction, you'll eventually move away from the origin. (Remember, starting in the opposite direction, such as $\vc{x}_0 = (-1,-1)$, is considered the same direction. The system will still move toward the equilibrium if you start there.)

      You shouldn't be surprised by what happens when we start directly in the direction along which solutions were moving away from the origin. Try the initial condition: $x_0=1$, $y_0=-1$
      $x_1=$
      , $y_1=$

      $x_2=$
      , $y_2=$

      $x_3=$
      , $y_3=$

      $x_4=$
      , $y_4=$

      $x_5=$
      , $y_5=$

      When starting at the initial condition $\vc{x}_0=(1,-1)$, the system immediately moves
      the origin in the direction parallel to the vector
      At each time step, the distance from the origin increases exactly by a factor of
      .

    2. Now, let's analyze the system and determine why we observed the above behavior. Rewrite the discrete dynamical system in matrix form so that we can find the eigenvalues and eigenvectors.

      $\displaystyle\begin{bmatrix} x_{n+1}\\ y_{n+1} \end{bmatrix}=$




      $\displaystyle\begin{bmatrix} x_{n}\\ y_{n} \end{bmatrix}$

      The characteristic equation for the eigenvalues, $\det (A-\lambda I)=0$, is

      The eigenvalues, in increasing order, are: $\lambda_1=$
      , $\lambda_2=$

      We'll denote the $\lambda_1$ eigenvector as $\vc{u}_1$ and the $\lambda_2$ eigenvector as $\vc{u}_2$. What are the eigenvectors? (Any scalar multiple of these directions is OK.)
      $\vc{u}_1=$
      , $\vc{u}_2=$
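
      If you'd like to check your hand computations, R's built-in eigen function returns both the eigenvalues and the eigenvectors. A quick sketch (note that eigen sorts the eigenvalues by decreasing magnitude and scales each eigenvector to unit length, so your answers may differ from R's by a scalar multiple):

      ```r
      A <- matrix(c(1, -0.5,
                    -0.5, 1), nrow = 2, byrow = TRUE)
      e <- eigen(A)
      e$values   # eigenvalues, sorted by decreasing magnitude
      e$vectors  # corresponding eigenvectors, one per column
      ```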

    3. How do the eigenvalues and eigenvectors relate to what we found in part a?

      For most initial conditions, the solution eventually moved away from the equilibrium at the origin in the direction
      Does that direction correspond to an eigenvector?
      Eventually, the distance from the origin increased by what factor after each time step?

      What was the one direction in which the solution moved toward the equilibrium?
      Does that direction correspond to an eigenvector?
      When starting with an initial condition along that direction, the distance from the origin decreased by what factor after each time step?

      When we start with an initial condition exactly along an eigenvector, the situation is fairly simple. If $\vc{x}_0$ is an eigenvector, then we know that $\vc{x}_1=A\vc{x}_0$ points in
      direction as $\vc{x}_0$. In other words, $\vc{x}_1$ is a scalar multiple of $\vc{x}_0$, where we multiplied by the eigenvalue $\lambda$. We can keep going since $\vc{x}_1$
      also an eigenvector corresponding to the same eigenvalue. The next iterate $\vc{x}_2$
      also an eigenvector; it is just $\vc{x}_1$ multiplied by
      . In each step, to go from $\vc{x}_{n-1}$ to $\vc{x}_n$, we just multiply by
      .
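
      In symbols, this chain of reasoning can be summarized as follows: if $\vc{x}_0$ is an eigenvector of $A$ with eigenvalue $\lambda$, then $$\vc{x}_1 = A\vc{x}_0 = \lambda\vc{x}_0, \qquad \vc{x}_2 = A\vc{x}_1 = \lambda^2\vc{x}_0, \qquad \ldots, \qquad \vc{x}_n = A\vc{x}_{n-1} = \lambda^n\vc{x}_0.$$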

      When we started with the initial condition $\vc{x}_0=(1,1)$, we were along the eigenvector $\vc{u}_1$ corresponding to the eigenvalue $\lambda_1=$
      . Therefore, at each time step, the solution was multiplied by the factor
      .

      Similarly, when we started with the initial condition $\vc{x}_0=(1,-1)$, we were along the eigenvector $\vc{u}_2$ corresponding to the eigenvalue $\lambda_2=$
      . Therefore, at each time step, the solution was multiplied by the factor
      .

      If we start with a vector that is not an eigenvector, the effects of multiplying by the matrix aren't quite as simple. We observed that somehow the solution got closer to an eigenvector direction corresponding to one of the eigenvalues. Which one?

      This larger eigenvalue is called the dominant eigenvalue of the matrix, as this eigenvalue and its eigenvector eventually dominate the dynamics of the system.

    4. Before we go any further with analytic explanations, let's look at this behavior graphically. The following applet is similar to those used to visualize the relationship between eigenvectors and matrix-vector multiplication. In this applet, however, you can increase the value of $n$, which will compute more iterations, multiplying the initial vector by the matrix $A$ a total of $n$ times.

      [Applet: Matrix-vector multiplication]

      To see the relationship between the matrix iteration and the eigenvectors, check “show eigenvectors.” This displays the eigenvector $\vc{u}_1$ in red and the eigenvector $\vc{u}_2$ in green. Since the eigenvector $\vc{u}_1$ is in the direction $(1,1)$, which is considered the same direction as its opposite $(-1,-1)$, the applet plots two red arrows, one in each direction. For convenience, the applet draws these depictions of $\vc{u}_1$ as long vectors. For $\vc{u}_2$, it plots green arrows in the direction $(1,-1)$ and its opposite $(-1,1)$.

      As you increase the value of $n$, you can visualize how the repeated matrix multiplication transforms the vector closer and closer to the direction of
      (the green arrows). Only if the initial condition is exactly parallel to
      (the red arrows) does the behavior differ; in that case, multiplication by $A$ preserves the direction and just shrinks the vector by a factor of $\lambda_1=0.5$ each iteration.

      To understand what the matrix multiplication is doing in simple terms, we can break up each vector into two components, a component parallel to $\vc{u}_1$ and a component parallel to $\vc{u}_2$. To see this decomposition, check the “show decompositions” box. (It's probably easiest to see what's going on if you also reduce $n$ to $0$ or $1$ at first.) Each blue vector $\vc{x}_n$ is decomposed into the sum of two vectors: a vector parallel to $\vc{u}_1$ (in red) plus a vector parallel to $\vc{u}_2$ (in green). (For reference, you can see a graphical depiction of vector addition.)

      Since each red vector is parallel to
      , multiplication by $A$ simply shrinks the red vector by the factor
      . Since each green vector is parallel to
      , multiplication by $A$ simply increases the length of the green vector by the factor
      . The resulting sum of the red and green vectors, the next blue vector, therefore points closer to the direction of
      after multiplication by $A$. (Unless, of course, the green arrow started off being $\vc{0}$, in which case multiplication by $\lambda_2=1.5$ won't increase its length.)

    5. Let's see if we can do this with equations. We aren't going to worry about how to decompose a vector into a sum of vectors parallel to the eigenvectors (i.e., determine the red and green arrows from the blue arrows). The important point is to understand the concept of this decomposition. For our examples, we'll just give you the decomposition.

      Also, to make these equations look simple, we are going to use the eigenvectors $\vc{u}_1=(1, 1)$ and $\vc{u}_2=(1, -1)$. It's OK if you wrote different eigenvectors, as long as they pointed in the same direction. But, in what follows, we'll use these choices.

      The first example we looked at in part (a) was $\vc{x}_0=(0,0)$. That one was boring because all the vectors were zero and multiplication by $A$ didn't do anything.

      The second example was $\vc{x}_0 = (1, 0)$. We claim that this vector is $\frac{1}{2}$ times $\vc{u}_1$ plus $\frac{1}{2}$ times $\vc{u}_2$. Don't worry how we figured that out. Just verify that this is true.
      $\frac{1}{2} \vc{u}_1 + \frac{1}{2}\vc{u}_2 = $
      $+$
      $=$

      In the applet, if you made the blue vector $\vc{x}_0$ be $\vc{x}_0=(1, 0)$, the linear combination $\vc{x}_0=\frac{1}{2} \vc{u}_1 + \frac{1}{2}\vc{u}_2$ of the eigenvectors would be the sum of the red vector plus the green vector. Now, we can see what happens when we multiply by the matrix $A$ to calculate $\vc{x}_1=A\vc{x}_0$.

      When we distribute the $A$, multiplication by $A$ gives this result: $$\vc{x}_1 = A\vc{x}_0 = A\left(\frac{1}{2} \vc{u}_1\right) + A\left(\frac{1}{2}\vc{u}_2\right)$$

      Each term on the right is a multiple of an eigenvector that is multiplied by $A$. Since the vector $\frac{1}{2} \vc{u}_1$ is still an eigenvector for $\lambda_1=0.5$, multiplication by $A$ just multiplies that vector by
      . Since the vector $\frac{1}{2} \vc{u}_2$ is still an eigenvector for $\lambda_2=1.5$, multiplication by $A$ just multiplies that vector by
      . We've calculated that
      $\vc{x}_1=A\vc{x}_0= $
      $\cdot \left(\frac{1}{2} \vc{u}_1\right) +$
      $\cdot \left(\frac{1}{2}\vc{u}_2\right)$.

      This equation captures what we saw graphically. The red vector was multiplied by $0.5$ and the green vector by $1.5$.
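
      Concretely, with $\vc{u}_1=(1,1)$, $\vc{u}_2=(1,-1)$, $\lambda_1=0.5$, and $\lambda_2=1.5$, the calculation works out to $$\vc{x}_1 = 0.5\left(\tfrac{1}{2},\tfrac{1}{2}\right) + 1.5\left(\tfrac{1}{2},-\tfrac{1}{2}\right) = \left(\tfrac{1}{4},\tfrac{1}{4}\right) + \left(\tfrac{3}{4},-\tfrac{3}{4}\right) = \left(1,-\tfrac{1}{2}\right),$$ which should match the value of $(x_1,y_1)$ you computed for this initial condition in part (a).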

      We can repeat this process as many times as we like. The above equation gives $\vc{x}_1$ as a linear combination of the eigenvectors. Therefore, when we multiply by $A$ a second time, each term is multiplied by the corresponding eigenvalue a second time. We've now multiplied each term by the eigenvalue squared, i.e., we calculate that
      $\vc{x}_2=A\vc{x}_1= $
      $\cdot \left(\frac{1}{2} \vc{u}_1\right) +$
      $\cdot \left(\frac{1}{2}\vc{u}_2\right)$.

      In each step, we get another multiple of $\lambda_1$ and $\lambda_2$, raising the eigenvalues to a higher power. If we repeated $n$ times to calculate $\vc{x}_n$, we would have multiplied each term by the corresponding eigenvalue raised to the power of $n$:
      $\vc{x}_n=$
      $\cdot \left(\frac{1}{2} \vc{u}_1\right) +$
      $\cdot \left(\frac{1}{2}\vc{u}_2\right)$.

      As $n$ increases, the first term is
      and the second term is
      . For large values of $n$, the solution $\vc{x}_n$ is moving
      the equilibrium at the origin.
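
      If you want to convince yourself numerically, here is a short R sketch comparing direct iteration against this closed form (under the same eigenvector choices as above):

      ```r
      # Compare direct iteration with the closed form
      # x_n = 0.5^n (1/2) u1 + 1.5^n (1/2) u2 for the initial condition (1, 0).
      A  <- matrix(c(1, -0.5, -0.5, 1), nrow = 2, byrow = TRUE)
      u1 <- c(1, 1); u2 <- c(1, -1)
      x  <- c(1, 0)
      for (n in 1:5) x <- as.vector(A %*% x)             # iterate up to n = 5
      closed <- 0.5^5 * (1/2) * u1 + 1.5^5 * (1/2) * u2  # closed form at n = 5
      cbind(iterated = x, closed.form = closed)          # the two columns agree
      ```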

    6. We'll repeat this calculation for the third initial condition $\vc{x}_0 = (0, 1)$. The decomposition of this initial condition into a linear combination of the eigenvectors is $$\vc{x}_0=\frac{1}{2} \vc{u}_1 + (- \frac{1}{2})\vc{u}_2.$$ (You can verify this fact on your own.)

      Multiplying by $A$ to compute $\vc{x}_1=A\vc{x}_0$ multiplies each term by its eigenvalue. Therefore, after one iteration, we get
      $\vc{x}_1=A\vc{x}_0= $
      $\cdot \left(\frac{1}{2} \vc{u}_1\right) +$
      $\cdot \left(- \frac{1}{2}\vc{u}_2\right)$.

      Repeating $n$ times, the general solution is
      $\vc{x}_n=$
      $\cdot \left(\frac{1}{2} \vc{u}_1\right) +$
      $\cdot \left(- \frac{1}{2}\vc{u}_2\right)$.

      As $n$ increases, the first term is
      and the second term is
      . For large values of $n$, the solution $\vc{x}_n$ is moving
      the equilibrium at the origin. The only difference from the previous initial condition is that the solution is moving in the opposite direction along the eigenvector $\vc{u}_2$, as seen by the minus sign in front of that term.

    7. The same pattern works in general. Any vector $\vc{x}$ can be written as a linear combination of the eigenvectors $\vc{u}_1$ and $\vc{u}_2$. Let's say, using our secret calculation, we found the two numbers $s_1$ and $s_2$ so that $\vc{x}_0 = s_1 \vc{u}_1 + s_2 \vc{u}_2$. Now, just like above, we can find a formula for $\vc{x}_n$:
      $\vc{x}_n=$
      $\cdot s_1 \vc{u}_1 +$
      $\cdot s_2 \vc{u}_2$.

      If $s_1=0$ or $s_2=0$, it means we started out with an eigenvector. The solution stays along that direction. If neither $s_1$ nor $s_2$ is zero, which term will be larger once $n$ is large enough?
      As long as $s_2 \ne 0$, i.e., as long as we didn't start with a scalar multiple of $\vc{u}_1$, for large $n$, the solution will be moving
      the equilibrium at the origin. The solution will be increasing at a rate determined by the dominant eigenvalue $\lambda_2=1.5$ in the direction of its eigenvector $\vc{u}_2$.
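
      We kept the calculation of $s_1$ and $s_2$ secret above, but if you're curious, one standard way to find them (not necessarily the only way) is to solve the $2 \times 2$ linear system $s_1 \vc{u}_1 + s_2 \vc{u}_2 = \vc{x}_0$. A sketch in R, using the initial condition $\vc{x}_0=(1,0)$ from above:

      ```r
      # Find s1, s2 with x0 = s1*u1 + s2*u2 by solving a 2x2 linear system.
      u1 <- c(1, 1); u2 <- c(1, -1)
      x0 <- c(1, 0)
      s  <- solve(cbind(u1, u2), x0)
      s  # should return s1 = 0.5 and s2 = 0.5, matching the decomposition above
      ```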

  2. We want to use eigenvalues and eigenvectors to understand the long-term behavior of an arbitrary linear discrete dynamical system. Let's look at the system \begin{align*} x_{n+1} & = a x_n + b y_n \qquad \text{for $n=0,1,2, \ldots$}\\ y_{n+1} & = c x_n + d y_n \end{align*} Suppose that the matrix $$A=\begin{bmatrix} a & b \\ c & d \end{bmatrix}$$ has two distinct real eigenvalues, which we will call $\lambda_1$ and $\lambda_2$, with $\vc{u}_1$ and $\vc{u}_2$ the respective eigenvectors.

    (The situation is more complicated if the eigenvalues are complex or equal to each other.)

    1. Before we look at this system in detail, let's start by discussing a one-dimensional discrete dynamical system. Consider the system \begin{eqnarray*} x_{n+1} & = & kx_n\\ x_0 & = & s \end{eqnarray*} Because this system is so simple, we can come up with a formula for $x_n$ in terms of $n$, $k$, and $s$. What is the formula? $x_n=$

      Based on this formula, let's examine how the long term behavior of the solution depends on $k$.

      • If $k > 1$, the solution $x_{n}$
        , and the bigger $k$ is, the
        the solution grows.
      • $k=1$ is a peculiar special case. In this case, the solution $x_n$
        no matter what the initial condition is. (We say every value is an equilibrium.)
      • If $0 < k < 1$, the solution $x_{n}$
        , and the bigger $k$ is, the
        the solution decays toward zero.
      • If $k=0$, then $x_{n}=$
        for $n \ge 1$.
      • If $-1 < k < 0$, then the solution
        and, at the same time,
        (in magnitude). The bigger the absolute value of $k$ is, the
        the solution decays toward zero.
      • $k=-1$ is another peculiar special case. In this case, the solution $x_n$ is
        , but its magnitude
        .
      • If $k < -1$, then the solution
        and, at the same time,
        (in magnitude). The bigger the absolute value of $k$ is, the
        the solution grows (in magnitude).

      We're interested in the long time behavior, i.e., for large values of $n$. We're also interested in how large a solution is in magnitude (meaning $-6$ is larger in magnitude than $4$, since $|-6| > |4|$). We could therefore summarize one key conclusion of the dependence of $x_n$ on $k$ as the following:
      The larger the magnitude of $k$, the
      the magnitude of the solution (for $n$ large enough).
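
      To see these cases concretely, a quick R sketch tabulating $x_n = k^n s$ with $s=1$ for a few sample values of $k$ (the particular values are just illustrative):

      ```r
      # Tabulate x_n = k^n for n = 0, ..., 8 and several sample values of k.
      n <- 0:8
      for (k in c(2, 1, 0.5, 0, -0.5, -1, -2)) {
        cat("k =", sprintf("%4.1f", k), ":", round(k^n, 3), "\n")
      }
      ```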

    2. The two-dimensional linear system will behave a lot like the one-dimensional system (at least for real distinct eigenvalues, like we're considering). But, instead of the single number $k$, the long term behavior of the system is going to depend on
      .

      Remember how, in the first example, we decomposed the initial condition vector into a linear combination of the eigenvectors? We'll do the same thing here, setting $\vc{x}_0 = s_1 \vc{u}_1 + s_2 \vc{u}_2$. We aren't going to worry about how we determine the numbers $s_1$ and $s_2$. But, if the eigenvalues are real and distinct, we can always do this.

      If it turned out that $s_2=0$, then the initial condition $\vc{x}_0$ is a multiple of the eigenvector
      . This means that, when you multiply by $A$, the vector $\vc{x}_0$ gets multiplied by
      . Since each iterate $\vc{x}_n$ will still be a multiple of the eigenvector
      , at each time step, the solution gets multiplied by
      . The formula for the solution looks a lot like the one-dimensional case; it is
      $\vc{x}_n = $
      $\cdot s_1 \vc{u}_1$.

      Similarly, if it turned out that $s_1=0$, then the initial condition $\vc{x}_0$ is a multiple of the eigenvector
      , and at each time step, the solution gets multiplied by
      . The formula for the solution for this special case is:
      $\vc{x}_n = $
      $\cdot s_2 \vc{u}_2$.

      In general, we wouldn't expect either $s_1$ or $s_2$ to be zero, so we have to keep track of both terms of the decomposition into a linear combination of the eigenvectors. Still, each term just gets multiplied by the respective eigenvalue at each time step. The solution, which is the general formula of the solution you derived at the end of the first problem, is:
      $\vc{x}_n=$
      $\cdot s_1 \vc{u}_1 +$
      $\cdot s_2 \vc{u}_2$.

      The solution is composed of two terms. One term is parallel to the eigenvector $\vc{u}_1$ and is getting multiplied by $\lambda_1$ at each time step. The other term is parallel to the eigenvector $\vc{u}_2$ and is getting multiplied by $\lambda_2$ at each time step. The question is, after a long time, who wins? For large values of $n$, will the first term or the second term be much larger?

      The answer lies in your conclusion for the one-dimensional case. Let's reword the conclusion in terms of the eigenvalues and eigenvectors:
      The larger the magnitude of the eigenvalue, the
      the term of $\vc{x}_n$ in the direction of its eigenvector (unless one of these terms was zero to begin with).

      If $|\lambda_1| > |\lambda_2|$, then for large $n$, $\lambda_1^n$ is
      in magnitude than $\lambda_2^n$. Hence, for large $n$, as long as $s_1 \ne 0$, the term $\lambda_1^n s_1 \vc{u}_1$ of the solution is so much larger than the term $\lambda_2^n s_2 \vc{u}_2$ that we can
      the second term and approximate the solution $\vc{x}_n$ as being equal to the first term $\lambda_1^n s_1 \vc{u}_1$. After a long time, the solution is effectively a multiple of the eigenvector
      . At that point, the solution is effectively being multiplied by
      at each time step.
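
      One way to see why we can neglect the second term is to factor $\lambda_1^n$ out of the general solution: $$\vc{x}_n = \lambda_1^n s_1 \vc{u}_1 + \lambda_2^n s_2 \vc{u}_2 = \lambda_1^n\left(s_1 \vc{u}_1 + \left(\frac{\lambda_2}{\lambda_1}\right)^n s_2 \vc{u}_2\right).$$ Since $|\lambda_2/\lambda_1| < 1$, the factor $(\lambda_2/\lambda_1)^n$ shrinks to zero as $n$ grows, leaving $\vc{x}_n \approx \lambda_1^n s_1 \vc{u}_1$.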

      In contrast, if $|\lambda_2| > |\lambda_1|$, then which eigenvector wins?
      After a long time, as long as the initial condition had some component in that direction (i.e., as long as $s_2 \ne 0$), the solution will be essentially proportional to that eigenvector and will be multiplied by
      at each time step.

      In summary, to determine the long time behavior of the dynamical system, we have to determine which eigenvalue is
      . This eigenvalue is the dominant eigenvalue of the system. Then, we know that, for large $n$, the solution $\vc{x}_n$ will be almost an eigenvector of the dominant eigenvalue and the growth (or decay) rate of the solution will be determined by the dominant eigenvalue (unless the term of $\vc{x}_n$ corresponding to that eigenvector is exactly zero).
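
      For the problems below, you might adapt a small R helper to check your hand computations. A sketch (this assumes real eigenvalues; eigen sorts eigenvalues by decreasing magnitude, so the dominant one comes first, and your hand-computed eigenvectors may differ from R's by a scalar multiple):

      ```r
      # Return the dominant eigenvalue and its eigenvector for a 2x2 matrix.
      dominant <- function(A) {
        e <- eigen(A)
        list(value = e$values[1], vector = e$vectors[, 1])
      }
      # Example: the matrix from the first problem (dominant eigenvalue 1.5).
      dominant(matrix(c(1, -0.5, -0.5, 1), nrow = 2, byrow = TRUE))
      ```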

  3. For each of the following dynamical systems, determine the long term behavior of the system.
    1. \begin{eqnarray*} x_{n+1} & = & 3 x_{n} + 2 y_{n} \qquad \text{for $n=0,1,2,\ldots$}\\ y_{n+1} & = & x_{n} + 4 y_{n} \\ x_0 & = & 5 \\ y_0 & = & 9 \end{eqnarray*} Write down the matrix $A$ for this system:


      $A=$




      What is the characteristic equation $\det(A-\lambda I)=0$?

      What are the eigenvalues of this system? $\lambda_1=$
      , $\lambda_2=$
      (Enter in increasing order.)
      What are the corresponding eigenvectors? $\vc{u}_1 =$
      , $\vc{u}_2 =$

      What is the dominant eigenvalue?

      Is the initial condition vector $\vc{x}_0=(x_0,y_0)$ a multiple of either eigenvector?

      Therefore, for large $n$, the solution $\vc{x}_n=(x_n,y_n)$ will be (approximately)
      .
      It will be growing by what factor at each time step?

    2. \begin{eqnarray*} x_{t+1} & = & - 2 x_{t} + 5 y_{t} \qquad \text{for $t=0,1,2,\ldots$} \\ y_{t+1} & = & x_{t} - \frac{3 y_{t}}{2} \\ \end{eqnarray*}

      The eigenvalues, in increasing order, are: $\lambda_1=$
      , $\lambda_2=$

      The corresponding eigenvectors are $\vc{u}_1=$
      , $\vc{u}_2 =$

      What is the dominant eigenvalue?

      For the initial condition $x_0=-3$, $y_0=9$, the solution $(x_t,y_t)$ for large time $t$ will be approximately parallel to what vector?

      At that point, the solution will be multiplied by what number at each time step?
      Since this number is
      , the solution will be
      . The magnitude of the solution will be growing by a factor of
      at each time step.

    3. \begin{eqnarray*} a_{n+1} & = & a_{n} + 0.3 b_{n} \qquad \text{for $n=0,1,2,\ldots$} \\ b_{n+1} & = & - a_{n} - 0.1 b_{n} \\ a_0 & = & -10 \\ b_0 & = & 20 \end{eqnarray*}

      What are the eigenvalues of the matrix for this system?
      $\lambda_1=$
      , $\lambda_2=$
      (Enter in increasing order.)
      What are the eigenvectors? $\vc{u}_1=$
      , $\vc{u}_2=$
      .

      After a long time $n$, the solution $(a_n,b_n)$ will be approximately proportional to what vector?
      .
      The solution will then be
      by what factor at each time step?

    4. \begin{eqnarray*} x_{n+1} & = & 0.6 x_{n} + 0.7 y_{n} \qquad \text{for $n=0,1,2,\ldots$} \\ y_{n+1} & = & 2.4 x_{n} + 0.8 y_{n} \end{eqnarray*} For most initial conditions, the solution $(x_n,y_n)$ will eventually be proportional to
      and will eventually be growing by a factor of
      at each time step.
    5. \begin{eqnarray*} r_{t+1} & = & 0.1 r_{t} - 0.5 s_{t} \qquad \text{for $t=0,1,2,\ldots$}\\ s_{t+1} & = & 0.2 r_{t} + 0.8 s_{t} \end{eqnarray*}

      Find an initial condition for which the solution $(r_t,s_t)$ decreases by a factor of $0.3$ for every time step.
      $(r_0,s_0)=$

      Find an initial condition for which the solution $(r_t,s_t)$ decreases by a factor of $0.6$ for every time step.
      $(r_0,s_0)=$

      Find an initial condition for which the solution $(r_t,s_t)$ decreases by a factor of approximately $0.6$ per time step when $t$ is large enough but initially has a different behavior.
      $(r_0,s_0) = $

      If you like, you can use the following applet to explore the behavior of the above systems and confirm the answers you found. This applet is the full version of the above applet, where you can change the matrix in the control panel to see how different systems behave.

      [Applet: Matrix-vector multiplication, with control panel and eigenvalue display]

      To allow you to zoom in, we've configured the applet to allow the initial condition vector to be off screen. If you can't see the black point to change the initial condition, you can still change the initial condition by specifying the coordinates $x_0$ and $y_0$ in the control panel.

      You can use the applet, for example, to see that with the above system, the solution eventually does lie along the eigenvector of the larger eigenvalue, though you may have to zoom in close to the origin to see it.

      If you click the “show eigenvalues” checkbox, the applet superimposes a view of the eigenvalues on top of the plot of the solution vectors. For this plot, it uses the $x$-axis as the number line on which to plot the eigenvalues. (If you change the matrix so that it has complex eigenvalues, then the eigenvalues move off the $x$-axis and their vertical position indicates their imaginary component. But, we'll get back to that later.)

  4. In the multi-dimensional discrete systems worksheet, we introduced a dynamical system of bird populations on two islands, where each year there was reproduction and some migration between the islands. The result was that the dynamics of the population sizes $x_t$ and $y_t$ of islands one and two, respectively, in year $t$ were modeled by the discrete dynamical system: \begin{align*} x_{t+1} &= 0.88x_t + 0.24y_t \qquad \text{for $t=0,1,2,\ldots$}\\ y_{t+1} &= 0.22x_t + 0.96y_t. \end{align*}

    What is the dominant eigenvalue of this system?

    What is its eigenvector?

    After a long time, by what factor should the total bird population increase each year?

    After a long time, the vector $(x_t,y_t)$ should be approximately parallel to what vector?

    This means that eventually the fraction of birds that are on island 1 should be close to what value?

    The fraction of birds that are on island 2 should be close to what value?

    These fractions should be close to what you observed after iterating this system for a while in the multi-dimensional discrete systems worksheet.
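
    If you'd like to confirm these answers numerically, here is a final R sketch iterating the bird model and printing the yearly growth factor and the fraction of birds on island 1 (the starting populations are arbitrary):

    ```r
    # Iterate the two-island bird model; the growth factor and island-1
    # fraction should settle toward the dominant eigenvalue and the
    # corresponding entry of its (normalized) eigenvector.
    A <- matrix(c(0.88, 0.24,
                  0.22, 0.96), nrow = 2, byrow = TRUE)
    x <- c(100, 100)  # arbitrary initial populations on islands 1 and 2
    for (t in 1:20) {
      x.new <- as.vector(A %*% x)
      cat("t =", t,
          " growth factor =", round(sum(x.new) / sum(x), 4),
          " fraction on island 1 =", round(x.new[1] / sum(x.new), 4), "\n")
      x <- x.new
    }
    ```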