Math Insight

Asymptotic behavior of 2x2 discrete linear dynamical systems, further cases

Math 2241, Spring 2023
Name:
ID #:
Due date:
Table/group #:
Group members:
Total points: 1
  1. In the cases we've examined so far, we've discovered that the asymptotic behavior of a two-dimensional linear discrete dynamical system is determined by the eigenvalue with largest magnitude and its eigenvector. Except for special initial conditions, the solution ends up moving along that eigenvector, and its distance along that eigenvector is multiplied by the eigenvalue at each time step.

    With that in mind, let's take a look at the behavior of this dynamical system to see if it fits into the pattern. \begin{eqnarray*} x_{n+1} & = & 0.9 x_{n} - 0.9 y_{n} \qquad \text{for $n=0,1,2,\ldots$}\\ y_{n+1} & = & 0.9 x_{n} + 0.9 y_{n} \end{eqnarray*}

    1. First, explore the behavior of the system with this applet.

      As $n$ increases, what's the behavior of the solution $\vc{x}_n$?

      Something different is definitely going on, as the system isn't approaching a particular direction and growing/shrinking by a constant factor along that direction like we've come to expect. To figure out what's happening, we should examine
      .
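One way to quantify what you see in the applet is to iterate the map yourself and track the solution vector's length and direction at each step. Here's a sketch using NumPy (the worksheet suggests R or another computer program; the initial condition $(1,0)$ is chosen arbitrarily):

```python
import numpy as np

# Coefficient matrix of the system above
A = np.array([[0.9, -0.9],
              [0.9,  0.9]])
x = np.array([1.0, 0.0])
for n in range(8):
    x = A @ x
    angle = np.degrees(np.arctan2(x[1], x[0]))
    # The length changes by the same factor, and the direction turns by
    # the same angle, at every step.
    print(n + 1, x, np.linalg.norm(x), angle)
```

Watching the printed lengths and angles should suggest what to examine: the solution keeps changing direction while its distance from the origin changes by a fixed ratio.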

    2. What is the characteristic equation for the eigenvalues $\lambda$?

      When you use the quadratic formula to solve for $\lambda$, what number do you get inside the square root?
      Since this number is
      , when you take the square root, you get an imaginary number with a factor of $i=\sqrt{-1}$. The result is that you get complex numbers for the eigenvalues. The eigenvalues should look something like $1.7 + 0.3i$ and $1.7 - 0.3i$ (but with different numbers other than $1.7$ and $0.3$).

      What are the eigenvalues? $\lambda_1=$
      , $\lambda_2=$
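If you'd like to check your answers by computer, here's the same calculation sketched in Python with NumPy: compute the characteristic polynomial of the coefficient matrix and its roots, the eigenvalues.

```python
import numpy as np

# Coefficient matrix of the system above
A = np.array([[0.9, -0.9],
              [0.9,  0.9]])

# Characteristic polynomial det(A - lambda*I) = 0, highest power first
coeffs = np.poly(A)
print(coeffs)                 # [1, -1.8, 1.62]

# Its roots are the eigenvalues: a complex-conjugate pair
eigs = np.linalg.eigvals(A)
print(eigs)
```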

    3. The fact that eigenvalues are complex is the reason why the solution is rotating around. Remember, for real distinct eigenvalues, we decomposed the initial condition into a linear combination of the eigenvectors. Each of those eigenvector terms was multiplied by the corresponding eigenvalue at each time step. Although the same thing is true for the case with complex eigenvalues, we aren't going to get into calculating complex eigenvectors. (If you want to see them, you can calculate them with R or another computer program.)

      Rather than worrying about complex eigenvectors, we'll instead just focus on the fact that, if the eigenvalues are complex, then they don't have real eigenvectors. For two-dimensional systems, that means there are no directions in the phase plane (such as the axes graphed above) where the solution is just scaled; i.e., there is no direction that, if the solution points in that direction, it will stay there forever. As a result it just rotates around, always changing direction.

      The next question is, as the solution is rotating around, is it growing or shrinking, i.e., is it getting farther away from or closer to the origin? If we had a real eigenvalue $\lambda$, when we multiply by $\lambda$ each time step, we know the solution will grow if $|\lambda| > 1$ and will shrink if $|\lambda| < 1$. If we really wanted to do the analysis for complex eigenvalues, we'd discover that something similar is happening even in that case (i.e., that multiplication by a complex eigenvalue $\lambda$ is still involved at each time step). We're not going to worry about arithmetic with complex numbers, but only state that the same conclusion is true. The solution will be growing if $|\lambda| > 1$ and will be shrinking if $|\lambda| < 1$.

      What does the absolute value sign mean for complex numbers? It's the “magnitude” or “modulus” of the complex number. The formula for the modulus is the same as that for the magnitude of a two-dimensional vector. If $\lambda = a+ib$ for two numbers $a$ and $b$, then its modulus, or magnitude, is $|\lambda| = \sqrt{a^2+b^2}$.

      You should have found two complex eigenvalues $\lambda_1$ and $\lambda_2$. What are the moduli of the eigenvalues?
      $|\lambda_1|=$
      , $|\lambda_2|=$

      (The fact that they have the same modulus follows from the fact that they differ only in the sign of their imaginary component, i.e., the eigenvalues are complex conjugates of each other.)

      The eigenvalue magnitudes $|\lambda_1|$ and $|\lambda_2|$ are
      . Therefore the solution is
      even as it is rotating.
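As a numerical check of the modulus formula, note that in Python, `abs` of a complex number computes exactly $\sqrt{a^2+b^2}$. A NumPy sketch:

```python
import numpy as np

eigs = np.linalg.eigvals(np.array([[0.9, -0.9],
                                   [0.9,  0.9]]))
moduli = np.abs(eigs)    # sqrt(0.9**2 + 0.9**2) for each eigenvalue
print(moduli)            # both approximately 1.273, which is > 1
```

Since both moduli exceed 1, the rule above says the rotating solution moves farther from the origin at every step, matching what the applet shows.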

  2. Let's continue a tour through the different asymptotic behavior of two-dimensional discrete linear systems and attempt to link the behavior to the eigenvalues and eigenvectors. We're not going to worry too much about the theory. Instead, we just want to get a feel for the different possibilities through these examples.

    You can use this familiar applet to explore the behavior of the system.

    Control panel (Show)
    Matrix-vector multiplication (Show)
    Eigenvalues and eigenvectors (Show)

    To allow you to zoom in, we've configured the applet to allow the initial condition vector to be off screen. If you can't see the black point, you can still change the initial condition by specifying the coordinates $x_0$ and $y_0$ in the control panel.

    1. \begin{align*} x_{n+1} &= 0.4 x_{n} + 0.5 y_{n}, \qquad \text{for $n=0,1,2,\ldots$}\\ y_{n+1} &= 0.1 x_{n} + 0.4 y_{n} \end{align*}

      Since the eigenvalues, $\lambda_1=$
      and $\lambda_2=$
      , are
      , the solution $\vc{x}_n = (x_n,y_n)$
      at each time step. For most initial conditions, as $n$ increases, the solution
      the equilibrium from the direction
      .
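To check the blanks above by computer, here's a NumPy sketch (state vector ordered $(x,y)$) computing the eigenvalues and the eigenvector of the dominant one:

```python
import numpy as np

A = np.array([[0.4, 0.5],
              [0.1, 0.4]])
vals, vecs = np.linalg.eig(A)
print(vals)    # 0.4 ± sqrt(0.05): two real eigenvalues, both in (0, 1)
# Eigenvector of the larger-magnitude eigenvalue: the direction along
# which the solution approaches the equilibrium
print(vecs[:, np.argmax(np.abs(vals))])
```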

    2. \begin{align*} x_{t+1} &= 0.6 x_{t} + 0.5 y_{t}, \qquad \text{for $t=0,1,2,\ldots$}\\ y_{t+1} &= - 0.5 x_{t} + 0.6 y_{t} \end{align*} In this case, the eigenvalues are $\lambda_1=$
      and $\lambda_2=$
      . Since the eigenvalues are
      numbers, the solution $\vc{x}_t = (x_t,y_t)$
      . Since the eigenvalues' magnitude $|\lambda_1| = |\lambda_2| =$
      is
      , the solution
      at each time step.

      Since the applet cannot show complex eigenvectors, the option to view eigenvectors and the decomposition disappears when you enter the matrix of this system into the control panel. For real eigenvalues, the applet displayed the eigenvalues using the $x$-axis as the real line. For complex eigenvalues, when you check the “show eigenvalues” checkbox, the applet plots the real part of the eigenvalues using the $x$-axis and the imaginary part of the eigenvalues using the $y$-axis. When plotting complex numbers this way, we say we are plotting them on the complex plane, which is a generalization of the real line. The gray disk on the complex plane shows the region where the magnitude of a complex number is less than one.

      (The applet plots the eigenvalues using the same axes as the solution vectors just to save space. The axes have completely different meanings for the two plots, so don't get confused. However, notice that, if you make the initial condition point straight to the right, say $\vc{x}_0=(x_0,y_0)=(2,0)$, the vector for $\vc{x}_1$ passes right through the point representing an eigenvalue. Interesting, but we're not going to get into this. This direct match won't always occur, but the direction of the eigenvalues from the positive $x$-axis will indicate the average rotation of the vectors.)
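Here's a NumPy sketch for checking this system. Because the matrix has the rotation–scaling form, every step rotates the solution by the same angle and scales its length by exactly the eigenvalue modulus (the initial condition is arbitrary):

```python
import numpy as np

A = np.array([[0.6, 0.5],
              [-0.5, 0.6]])
vals = np.linalg.eigvals(A)
print(vals, np.abs(vals))    # 0.6 ± 0.5i, modulus sqrt(0.61) ≈ 0.781 < 1

x = np.array([2.0, 0.0])     # arbitrary initial condition
for t in range(5):
    x = A @ x                # rotates and shrinks by sqrt(0.61) each step
    print(t + 1, x, np.linalg.norm(x))
```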

    3. \begin{align*} k_{t+1} &= 0.4 k_{t} + 0.5 p_{t}, \qquad \text{for $t=0,1,2,\ldots$}\\ p_{t+1} &= 0.3 k_{t} - 0.7 p_{t} \end{align*} Since the eigenvalues are
      and one is larger in magnitude than the other, the solution $(k_t,p_t)$ tends toward the direction of the eigenvector of the larger magnitude eigenvalue. The larger magnitude eigenvalue is $\lambda=$
      so the solution tends toward the direction of its eigenvector $\vc{u}= $
      . Since the magnitude of this eigenvalue is
      , the solution
      with each time step. Moreover, since the eigenvalue is
      , the solution
      with each time step.
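A NumPy sketch for checking this part (state vector ordered $(k,p)$); pay attention to the sign of the dominant eigenvalue:

```python
import numpy as np

A = np.array([[0.4, 0.5],
              [0.3, -0.7]])
vals, vecs = np.linalg.eig(A)
i = np.argmax(np.abs(vals))
print(vals[i])     # ≈ -0.823: dominant eigenvalue is negative, with |λ| < 1
print(vecs[:, i])  # its eigenvector, the direction the solution settles into
```

Because the dominant eigenvalue is negative, each step flips the solution to the opposite side of the origin along that eigenvector while shrinking it.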
    4. Here's an interesting case: \begin{align*} w_{n+1} &= 1.2 v_{n} + 1.4 w_{n}, \qquad \text{for $n=0,1,2,\ldots$}\\ v_{n+1} &= - 0.6 v_{n} + 0.3 w_{n} \end{align*} Since the eigenvalues, $\lambda_1=$
      and $\lambda_2=$
      , are
      , you can observe with the applet that the solution
      as we expect. The eigenvalue magnitude, $|\lambda_1| = |\lambda_2| = $
      , is
      ; therefore, we expect that the solution should
      . Does the solution behave that way at each time step?
      How do we reconcile that fact with the eigenvalue magnitude?

  3. One last stop on the tour. Look at this system \begin{align*} x_{n+1} &= 0.9 x_{n} + b y_{n}, \qquad \text{for $n=0,1,2,\ldots$}\\ y_{n+1} &= x_{n} + 0.9 y_{n}, \end{align*} where $b$ is some number that we'll change.
    Control panel (Show)
    Matrix-vector multiplication (Show)
    Eigenvalues and eigenvectors (Show)
    1. First, let $b=0.1$, which is the default value for the above applet.

      How many distinct eigenvalues does the system have?
      How many distinct eigenvectors does the system have?

      The eigenvectors divide the above phase plane into how many sections?
      If the initial condition $(x_0,y_0)$ is in either the left or right section, what's the general behavior of the solution?
      If the initial condition $(x_0,y_0)$ is in either the top or bottom section, what's the general behavior of the solution?

    2. Slowly decrease $b$ from $0.1$ down to $0.01$, or even to $0.001$. How do the eigenvalues change?
      How do the eigenvectors change?
      How do the sections between the eigenvectors change? The left and right sections
      while the top and bottom sections
      . How does the behavior of the solution change in the different sections of the phase plane?
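One way to see what's happening numerically: for this matrix the characteristic equation works out to $(0.9-\lambda)^2 = b$, so the eigenvalues are $0.9 \pm \sqrt{b}$ and approach each other as $b$ shrinks. A NumPy sketch:

```python
import numpy as np

for b in [0.1, 0.01, 0.001, 0.0]:
    A = np.array([[0.9, b],
                  [1.0, 0.9]])
    vals, vecs = np.linalg.eig(A)
    print(b, vals)   # eigenvalues 0.9 ± sqrt(b) squeeze together toward 0.9
    print(vecs)      # the two eigenvector directions collapse together, too
```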
    3. Now set $b$ all the way to $0$. Anything dramatic happen? We end up with only
      distinct eigenvalue and
      eigenvector. What is the eigenvalue? $\lambda_1=\lambda_2=$
      What is the eigenvector?

      What happened to the sections of the phase plane? The top and bottom sections
      so that there are only
      sections. In all sections, what is the general behavior of the solution?
      There's an exception to this general behavior. The solution doesn't rotate if the initial condition is parallel to
      .

      Our way of analyzing the behavior of a discrete linear system is to write the initial condition as a linear combination of the two eigenvectors. Can we do that in this case? What if the initial condition is $(x_0,y_0) = (1,1)$? Can you write this initial condition as the sum of terms that are multiples of the eigenvectors?

      Sorry, we're out of luck. We're just not going to analyze this case.
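A NumPy sketch of this degenerate case ($b=0$) shows why the decomposition fails: the eigenvalue is repeated, and the computed eigenvectors span only a single direction.

```python
import numpy as np

A = np.array([[0.9, 0.0],
              [1.0, 0.9]])
vals, vecs = np.linalg.eig(A)
print(vals)    # [0.9, 0.9]: one repeated eigenvalue
print(vecs)    # both columns are (numerically) multiples of (0, 1)

# Multiples of (0, 1) always have first component 0, so no linear
# combination of the eigenvectors can equal the initial condition (1, 1).
```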

    4. This exceptional case when $b=0$ is on the cusp of a different regime of behavior. When $b=0$, the solution
      in almost all cases. When have we seen systems where the solution always rotates in the same direction? When the eigenvalues are
      .

      Decrease $b$ even further to $b=-0.1$. Indeed, now the eigenvalues are
      and the solution always
      .
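You can confirm this last case by computer as well (a NumPy sketch with $b=-0.1$):

```python
import numpy as np

A = np.array([[0.9, -0.1],
              [1.0,  0.9]])
vals = np.linalg.eigvals(A)
print(vals)          # 0.9 ± sqrt(0.1)i: complex, so the solution always rotates
print(np.abs(vals))  # modulus sqrt(0.91) ≈ 0.954 < 1, so it also spirals inward
```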