# Math Insight

### Taylor polynomials: formulas

Before attempting to illustrate what these funny formulas can be used for, we just write them out. First, some reminders:

The notation $f^{(k)}$ means the $k$th derivative of $f$. The notation $k!$ means $k$-factorial, which by definition is $$k!=1\cdot 2\cdot 3\cdot 4\cdot \ldots\cdot (k-1)\cdot k$$ (with the convention that $0!=1$).
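As a quick computational aside, the factorial can be computed by direct multiplication (a minimal Python sketch; the name `factorial` is ours, and `math.factorial` is the standard-library equivalent):

```python
import math

def factorial(k):
    """Compute k! = 1 * 2 * 3 * ... * k by direct multiplication.
    The empty product for k = 0 gives the convention 0! = 1."""
    result = 1
    for i in range(2, k + 1):
        result *= i
    return result

print(factorial(5))        # 120
print(math.factorial(5))   # 120: the standard-library version agrees
```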

Taylor's Formula with Remainder Term, first somewhat verbal version: Let $f$ be a reasonable function, and fix a positive integer $n$. Then we have \begin{multline*} f(\textit{input})=f(\textit{basepoint})+ {f'(\textit{basepoint}) \over 1!}(\textit{input}-\textit{basepoint})\\ + { f''(\textit{basepoint})\over 2!}(\textit{input}-\textit{basepoint})^2 +{f'''(\textit{basepoint}) \over 3!}(\textit{input}-\textit{basepoint})^3+\ldots\\ \ldots+ \frac{ f^{(n)}(\textit{basepoint})}{n!}(\textit{input}-\textit{basepoint})^n +{f^{(n+1)}(c) \over (n+1)!}(\textit{input}-\textit{basepoint})^{n+1} \end{multline*} for some $c$ between the basepoint and the input.

That is, the value of the function $f$ for some input presumably ‘near’ the basepoint is expressible in terms of the values of $f$ and its derivatives evaluated at the basepoint, with the only mystery being the precise nature of that $c$ between input and basepoint.

Taylor's Formula with Remainder Term, second somewhat verbal version: Let $f$ be a reasonable function, and fix a positive integer $n$. Then we have \begin{multline*} f(\textit{basepoint}+\textit{increment})=f(\textit{basepoint})+{ f'(\textit{basepoint}) \over 1! }(\textit{increment})\\ + { f''(\textit{basepoint})\over 2!}(\textit{increment})^2 +{f'''(\textit{basepoint}) \over 3!}(\textit{increment})^3+\ldots\\ \ldots+ \frac{ f^{(n)}(\textit{basepoint})}{n!}(\textit{increment})^n +{f^{(n+1)}(c) \over (n+1)!}(\textit{increment})^{n+1} \end{multline*} for some $c$ between the basepoint and basepoint + increment.

This version is really the same as the previous, but with a different emphasis: here we still have a basepoint, but are thinking in terms of moving a little bit away from it, by the amount increment.

And to get a more compact formula, we can be more symbolic: let's restate the same result with letters in place of words:

Taylor's Formula with Remainder Term: Let $f$ be a reasonable function, fix an input value $x_o$, and fix a positive integer $n$. Then for input $x$ we have \begin{multline*} f(x)=f(x_o)+{f'(x_o)\over 1!}(x-x_o)+ {f''(x_o)\over 2!}(x-x_o)^2+{f'''(x_o)\over 3!}(x-x_o)^3+\ldots\\ \ldots+ \frac{ f^{(n)}(x_o)}{n!}(x-x_o)^n+{f^{(n+1)}(c) \over (n+1)!}(x-x_o)^{n+1} \end{multline*} for some $c$ between $x_o$ and $x$.

Note that in every version, in the very last term, where all the indices are $n+1$, the input into $f^{(n+1)}$ is not the basepoint $x_o$ but is, instead, that mysterious $c$ about which we know nothing except that it lies between $x_o$ and $x$. The part of this formula without the error term is the degree-$n$ Taylor polynomial for $f$ at $x_o$, and that last term is the error term or remainder term. The Taylor series is said to be expanded at or expanded about or centered at or simply at the basepoint $x_o$.
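To see the polynomial part of the formula computationally, here is a small Python sketch (our own illustration, not part of the text), using $f(x)=e^x$ at basepoint $0$: every derivative of $e^x$ is again $e^x$, so $f^{(k)}(0)=1$ for every $k$, and the degree-$n$ Taylor polynomial is simply the sum of $x^k/k!$:

```python
import math

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of e^x at basepoint 0:
    the sum of x^k / k! for k = 0..n, since f^(k)(0) = 1 for all k."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

# The polynomial value approaches the true value as the degree grows.
for n in (1, 3, 5, 8):
    print(n, taylor_exp(0.5, n), math.exp(0.5))
```

Raising the degree visibly shrinks the gap between the polynomial and `math.exp`, which is exactly the role the remainder term quantifies.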

There are many other possible forms for the error/remainder term. The one here was chosen partly because it resembles the other terms in the main part of the expansion.

Linear Taylor's Polynomial with Remainder Term: Let $f$ be a reasonable function, and fix an input value $x_o$. For any (reasonable) input value $x$ we have $$f(x)=f(x_o)+{f'(x_o)\over 1!}(x-x_o)+{f''(c)\over 2!}(x-x_o)^2$$ for some $c$ between $x_o$ and $x$.

The previous formula is of course a very special case of the first, more general, formula. The reason to include the ‘linear’ case is that without the error term it is the old approximation by differentials formula, which had the fundamental flaw of having no way to estimate the error. Now we have the error estimate.
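As an illustration of that error estimate, here is a Python sketch (the choice of function and basepoint is ours) using $f(x)=\sqrt{x}$ at $x_o=4$. Here $f''(x)=-1/(4x^{3/2})$, which is largest in absolute value at the left end of the interval $[4,\,4.2]$, so evaluating it at $x_o$ gives a worst-case bound on the error term:

```python
import math

# Linear Taylor approximation of f(x) = sqrt(x) at basepoint x0 = 4,
# where f'(x) = 1/(2*sqrt(x)) and f''(x) = -1/(4*x**1.5).
x0, x = 4.0, 4.2
approx = math.sqrt(x0) + (x - x0) / (2 * math.sqrt(x0))   # f(x0) + f'(x0)(x - x0)

# Error term: f''(c)/2! * (x - x0)^2 for some c between x0 and x.
# |f''(c)| is largest at c = x0 on [4, 4.2], giving a worst-case bound.
error_bound = (1 / (4 * x0**1.5)) / 2 * (x - x0)**2

actual_error = abs(math.sqrt(x) - approx)
print(approx, actual_error, error_bound)  # the actual error stays under the bound
```

The point is that, unlike the bare approximation by differentials, the mysterious $c$ can be bounded over the interval, so the error can be bounded too.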

The general idea here is to approximate ‘fancy’ functions by polynomials, especially if we restrict ourselves to a fairly small interval around some given point. (That ‘approximation by differentials’ circus was a very crude version of this idea.)

It is at this point that it becomes relatively easy to ‘beat’ a calculator, in the sense that the methods here can be used to give whatever precision is desired. So at the very least this methodology is not as silly and obsolete as some earlier traditional examples.

But even so, there is more to this than getting numbers out: it ought to be of some intrinsic interest that pretty arbitrary functions can be approximated as well as desired by polynomials, which are so readily computable (by hand or by machine)!

One element under our control is the choice of how high a degree of polynomial to use. Typically, the higher the degree (meaning more terms), the better the approximation will be. (There is nothing comparable to this in the ‘approximation by differentials’.)

Of course, for all this to really be worth anything either in theory or in practice, we do need a tangible error estimate, so that we can be sure that we are within whatever tolerance/error is required. (There is nothing comparable to this in the ‘approximation by differentials’, either.)
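To make the tolerance idea concrete, here is a Python sketch for $e^x$ with $0 \le x \le 1$ (the function name `exp_to_tolerance` and the setup are our own illustration). Since $f^{(n+1)}(c)=e^c \le e < 3$ for any $c$ in that range, the remainder term is at most $3\,x^{n+1}/(n+1)!$, and we can raise the degree until that bound falls below the requested tolerance:

```python
import math

def exp_to_tolerance(x, tol):
    """Compute e^x (for 0 <= x <= 1) to within tol, raising the degree n
    until the remainder bound 3 * x^(n+1) / (n+1)! is below tol.
    (The bound uses e^c <= e < 3 for the mysterious c in [0, x].)"""
    n = 0
    while 3 * x**(n + 1) / math.factorial(n + 1) >= tol:
        n += 1
    value = sum(x**k / math.factorial(k) for k in range(n + 1))
    return value, n

val, degree = exp_to_tolerance(0.5, 1e-12)
print(val, degree, abs(val - math.exp(0.5)))
```

This is the sense in which one can ‘beat’ a calculator: whatever precision is asked for, the error estimate tells us a degree guaranteed to achieve it.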

And at this point it is not at all clear what exactly can be done with such formulas. For one thing, there are choices to make, such as the basepoint and the degree.

#### Exercises

1. Write the first three terms of the Taylor series at 0 of $f(x)=1/(1+x)$.
2. Write the first three terms of the Taylor series at 2 of $f(x)=1/(1-x)$.
3. Write the first three terms of the Taylor series at 0 of $f(x)=e^{\cos x}$.