
CLP-2 Integral Calculus

Section 3.6 Taylor Series

Subsection 3.6.1 Extending Taylor Polynomials

Recall that Taylor polynomials provide a hierarchy of approximations to a given function \(f(x)\) near a given point \(a\text{.}\) Typically, the quality of these approximations improves as we move up the hierarchy.
  • The crudest approximation is the constant approximation \(f(x)\approx f(a)\text{.}\)
  • Then comes the linear, or tangent line, approximation \(f(x)\approx f(a) + f'(a)\,(x-a)\text{.}\)
  • Then comes the quadratic approximation
    \begin{equation*} f(x)\approx f(a) + f'(a)\,(x-a) +\frac{1}{2} f''(a)\,(x-a)^2 \end{equation*}
  • In general, the Taylor polynomial of degree \(n\text{,}\) for the function \(f(x)\text{,}\) about the expansion point \(a\text{,}\) is the polynomial, \(T_n(x)\text{,}\) determined by the requirements that \(f^{(k)}(a) = T_n^{(k)}(a)\) for all \(0\le k \le n\text{.}\) That is, \(f\) and \(T_n\) have the same derivatives at \(a\text{,}\) up to order \(n\text{.}\) Explicitly,
    \begin{align*} f(x) &\approx T_n(x) \\ &= f(a) + f'(a)\,(x-a) +\frac{1}{2} f''(a)\,(x-a)^2 +\cdots+\frac{1}{n!} f^{(n)}(a)\,(x-a)^n\\ &=\sum_{k=0}^n\frac{1}{k!} f^{(k)}(a)\,(x-a)^k \end{align*}
These are, of course, approximations — often very good approximations near \(x=a\) — but still just approximations. One might hope that if we let the degree, \(n\text{,}\) of the approximation go to infinity then the error in the approximation might go to zero. If that is the case then the “infinite” Taylor polynomial would be an exact representation of the function. Let's see how this might work.
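To make the hierarchy concrete, here is a minimal Python sketch (the helper name `taylor_poly` is our own) that evaluates \(T_n(x)\) from a list of derivative values \(f(a),f'(a),\dots,f^{(n)}(a)\text{.}\) As an illustration we take \(f(x)=e^x\) and \(a=0\text{,}\) where every derivative equals \(1\text{.}\)

```python
import math

def taylor_poly(derivs, a, x):
    """Evaluate T_n(x) = sum_{k=0}^n f^(k)(a)/k! * (x-a)^k,
    given derivs = [f(a), f'(a), ..., f^(n)(a)]."""
    return sum(d / math.factorial(k) * (x - a) ** k
               for k, d in enumerate(derivs))

# Illustration: f(x) = e^x about a = 0, where every derivative is 1.
x = 1.0
for n in (0, 1, 2, 5, 10):
    print(n, taylor_poly([1.0] * (n + 1), 0.0, x), math.exp(x))
```

Running it shows the approximations to \(e^1=2.71828\dots\) improving as we move up the hierarchy.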
Fix a real number \(a\) and suppose that all derivatives of the function \(f(x)\) exist. Then, we saw in (3.4.33) of the CLP-1 text that, for any natural number \(n\text{,}\)
\begin{gather*} f(x) = T_n(x) + E_n(x) \tag{3.6.1} \end{gather*}
where \(T_n(x)\) is the Taylor polynomial of degree \(n\) for the function \(f(x)\) expanded about \(a\text{,}\) and \(E_n(x)=f(x)-T_n(x)\) is the error in the approximation \(f(x) \approx T_n(x)\text{.}\) The Taylor polynomial is given by the formula
\begin{gather*} T_n(x) = f(a)+f'(a)\,(x-a)+\cdots+\tfrac{1}{n!}f^{(n)}(a)\,(x-a)^n \tag{3.6.2} \end{gather*}
while the error satisfies
\begin{gather*} E_n(x) = \tfrac{1}{(n+1)!}f^{(n+1)}(c)\,(x-a)^{n+1} \tag{3.6.3} \end{gather*}
for some \(c\) strictly between \(a\) and \(x\text{.}\)
Note that we typically do not know the value of \(c\) in the formula for the error. Instead we use the bounds on \(c\) to find bounds on \(f^{(n+1)}(c)\) and so bound the error.
In order for our Taylor polynomial to be an exact representation of the function \(f(x)\) we need the error \(E_n(x)\) to be zero. This will not happen when \(n\) is finite unless \(f(x)\) is a polynomial. However it can happen in the limit as \(n \to \infty\text{,}\) and in that case we can write \(f(x)\) as the limit
\begin{equation*} f(x)=\lim_{n\rightarrow\infty} T_n(x) =\lim_{n\rightarrow\infty} \sum_{k=0}^n \tfrac{1}{k!}f^{(k)}(a)\, (x-a)^k \end{equation*}
This is really a limit of partial sums, and so we can write
\begin{gather*} f(x)=\sum_{k=0}^\infty \tfrac{1}{k!}f^{(k)}(a)\, (x-a)^k \end{gather*}
which is a power series representation of the function. Let us formalise this in a definition.

Definition 3.6.4. Taylor series.

The Taylor series for the function \(f(x)\) expanded around \(a\) is the power series
\begin{gather*} \sum_{n=0}^\infty \tfrac{1}{n!}f^{(n)}(a)\, (x-a)^n \end{gather*}
When \(a=0\) it is also called the Maclaurin series of \(f(x)\text{.}\) If \(\lim_{n\rightarrow\infty}E_n(x)=0\text{,}\) then
\begin{gather*} f(x)=\sum_{n=0}^\infty \tfrac{1}{n!}f^{(n)}(a)\, (x-a)^n \end{gather*}
Demonstrating that, for a given function, \(\lim_{n\rightarrow\infty}E_n(x)=0\) can be difficult, but for many of the standard functions you are used to dealing with, it turns out to be pretty easy. Let's compute a few Taylor series and see how we do it.

Example 3.6.5.

Find the Maclaurin series for \(f(x)=e^x\text{.}\)
Solution: Just as was the case for computing Taylor polynomials, we need to compute the derivatives of the function at the particular choice of \(a\text{.}\) Since we are asked for a Maclaurin series, \(a=0\text{.}\) So now we just need to find \(f^{(k)}(0)\) for all integers \(k\ge 0\text{.}\)
We know that \(\diff{}{x}e^x = e^x\) and so
\begin{align*} e^x &= f(x) = f'(x) = f''(x) = \cdots = f^{(k)}(x) = \cdots & \text{which gives}\\ 1 &= f(0) = f'(0) = f''(0) = \cdots = f^{(k)}(0) = \cdots. \end{align*}
Equations 3.6.1 and 3.6.2 then give us
\begin{align*} e^x=f(x)&= 1+x+\frac{x^2}{2!}+\cdots+\frac{x^n}{n!}+E_n(x) \end{align*}
We shall see, in the optional Example 3.6.8 below, that, for any fixed \(x\text{,}\) \(\lim\limits_{n\rightarrow\infty}E_n(x)=0\text{.}\) Consequently, for all \(x\text{,}\)
\begin{equation*} e^x=\lim_{n\rightarrow\infty}\Big[1 +x + \frac{1}{2} x^2 +\frac{1}{3!} x^3+\cdots+\frac{1}{n!} x^n\Big] =\sum_{n=0}^\infty \frac{1}{n!}x^n \end{equation*}
We have now seen power series representations for the functions
\begin{align*} \frac{1}{1-x} && \frac{1}{(1-x)^2} && \log(1+x) && \arctan(x) && e^x. \end{align*}
We do not think that you, the reader, will be terribly surprised to see that we develop series for sine and cosine next.

Example 3.6.6.

The trigonometric functions \(\sin x\) and \(\cos x\) also have widely used Maclaurin series expansions (i.e. Taylor series expansions about \(a=0\)). To find them, we first compute all derivatives at general \(x\text{.}\)
\begin{align*} f(x)&=\sin x & f'(x)&=\cos x & f''(x)&=\!-\sin x & f^{(3)}(x)&=\!-\cos x \\ & & f^{(4)}(x)&=\sin x & \cdots\\ g(x)&=\cos x & g'(x)&=\!-\sin x & g''(x)&=\!-\cos x & g^{(3)}(x)&=\sin x \\ & & g^{(4)}(x)&=\cos x & \cdots \end{align*}
Now set \(x=a=0\text{.}\)
\begin{align*} f(x)&=\sin x & f(0)&=0 & f'(0)&=1 & f''(0)&=0 & f^{(3)}(0)&=\!-1 \\ & & f^{(4)}(0)&=0 & \cdots\\ g(x)&=\cos x & g(0)&=1 & g'(0)&=0 & g''(0)&=\!-1 & g^{(3)}(0)&=0 \\ & & g^{(4)}(0)&=1 & \cdots \end{align*}
For \(\sin x\text{,}\) all even numbered derivatives (at \(x=0\)) are zero, while the odd numbered derivatives alternate between \(1\) and \(-1\text{.}\) Very similarly, for \(\cos x\text{,}\) all odd numbered derivatives (at \(x=0\)) are zero, while the even numbered derivatives alternate between \(1\) and \(-1\text{.}\) So, the Taylor polynomials that best approximate \(\sin x\) and \(\cos x\) near \(x=a=0\) are
\begin{align*} \sin x &\approx x-\tfrac{1}{3!}x^3+\tfrac{1}{5!}x^5-\cdots\\ \cos x &\approx 1-\tfrac{1}{2!}x^2+\tfrac{1}{4!}x^4-\cdots \end{align*}
We shall see, in the optional Example 3.6.10 below, that, for both \(\sin x\) and \(\cos x\text{,}\) we have \(\lim\limits_{n\rightarrow\infty}E_n(x)=0\) so that
\begin{align*} f(x)&=\lim_{n\rightarrow\infty}\Big[f(0)+f'(0)\,x+\cdots +\tfrac{1}{n!}f^{(n)}(0)\, x^n\Big]\\ g(x)&=\lim_{n\rightarrow\infty}\Big[g(0)+g'(0)\,x+\cdots +\tfrac{1}{n!}g^{(n)}(0)\, x^n\Big] \end{align*}
Reviewing the patterns we found in the derivatives, we conclude that, for all \(x\text{,}\)
\begin{equation*} \begin{alignedat}{2} \sin x &= x-\tfrac{1}{3!}x^3+\tfrac{1}{5!}x^5-\cdots& &=\sum_{n=0}^\infty(-1)^n\tfrac{1}{(2n+1)!}x^{2n+1}\\ \cos x &= 1-\tfrac{1}{2!}x^2+\tfrac{1}{4!}x^4-\cdots& &=\sum_{n=0}^\infty(-1)^n\tfrac{1}{(2n)!}x^{2n} \end{alignedat} \end{equation*}
and, in particular, both of the series on the right hand sides converge for all \(x\text{.}\)
We could also test for convergence of the series using the ratio test. Computing the ratios of successive terms in these two series gives us
\begin{align*} \left| \frac{A_{n+1}}{A_n} \right| &= \frac{|x|^{2n+3}/(2n+3)!}{|x|^{2n+1}/(2n+1)!} = \frac{|x|^2}{(2n+3)(2n+2)}\\ \left| \frac{A_{n+1}}{A_n} \right| &= \frac{|x|^{2n+2}/(2n+2)!}{|x|^{2n}/(2n)!} = \frac{|x|^2}{(2n+2)(2n+1)} \end{align*}
for sine and cosine respectively. Hence as \(n \to \infty\) these ratios go to zero and consequently both series are convergent for all \(x\text{.}\) (This is very similar to what was observed in Example 3.5.5.)
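The same convergence can be watched numerically. In the sketch below (ours; the helper name `sin_partial` is an invention for this illustration), even for \(x\) far from \(0\) the factorials in the denominators eventually overwhelm the powers of \(x\text{.}\)

```python
import math

def sin_partial(x, N):
    """Partial sum, up to n = N, of sum_n (-1)^n x^(2n+1)/(2n+1)!."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(N + 1))

x = 10.0  # deliberately far from the expansion point a = 0
for N in (5, 10, 20, 30):
    print(N, sin_partial(x, N), math.sin(x))
```

The early partial sums are wildly wrong, but by \(N=30\) they have locked onto \(\sin 10\text{.}\)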
We have developed power series representations for a number of important functions. Here is a theorem that summarizes them.

Theorem 3.6.7.

\begin{align*} e^x &= \sum_{n=0}^\infty \frac{x^n}{n!} && \text{for all } -\infty\lt x\lt\infty\\ \sin x &= \sum_{n=0}^\infty (-1)^n\frac{x^{2n+1}}{(2n+1)!} && \text{for all } -\infty\lt x\lt\infty\\ \cos x &= \sum_{n=0}^\infty (-1)^n\frac{x^{2n}}{(2n)!} && \text{for all } -\infty\lt x\lt\infty\\ \frac{1}{1-x} &= \sum_{n=0}^\infty x^n && \text{for all } -1\lt x\lt 1\\ \frac{1}{(1-x)^2} &= \sum_{n=0}^\infty (n+1)\,x^n && \text{for all } -1\lt x\lt 1\\ \log(1+x) &= \sum_{n=0}^\infty (-1)^n\frac{x^{n+1}}{n+1} && \text{for all } -1\lt x\le 1\\ \arctan x &= \sum_{n=0}^\infty (-1)^n\frac{x^{2n+1}}{2n+1} && \text{for all } -1\le x\le 1 \end{align*}
Notice that the series for sine and cosine sum to something that looks very similar to the series for \(e^x\text{:}\)
\begin{align*} \sin(x)+\cos(x) &= \left(x-\frac{1}{3!}x^3+\frac{1}{5!}x^5-\cdots\right) +\left(1-\frac{1}{2!}x^2+\frac{1}{4!}x^4-\cdots\right)\\ &= 1 + x - \frac{1}{2!}x^2 - \frac{1}{3!}x^3 + \frac{1}{4!}x^4 + \frac{1}{5!}x^5 - \cdots\\ e^x &= 1 + x + \frac{1}{2!}x^2 + \frac{1}{3!}x^3 + \frac{1}{4!}x^4 + \frac{1}{5!}x^5 + \cdots \end{align*}
So both series have coefficients with the same absolute value (namely \(\frac{1}{n!}\)), but there are differences in sign. This is not a coincidence and we direct the interested reader to the optional Section 3.6.3 where we will show how these series are linked through \(\sqrt{-1}\text{.}\)

Example 3.6.8.

We have already seen, in Example 3.6.5, that
\begin{equation*} e^x = 1+x+\frac{x^2}{2!}+\cdots+\frac{x^n}{n!}+E_n(x) \end{equation*}
By (3.6.3)
\begin{equation*} E_n(x) = \frac{1}{(n+1)!}e^c x^{n+1} \end{equation*}
for some (unknown) \(c\) between \(0\) and \(x\text{.}\) Fix any real number \(x\text{.}\) We'll now show that \(E_n(x)\) converges to zero as \(n\rightarrow\infty\text{.}\)
To do this we need to bound the size of \(e^c\text{,}\) and to do so we consider what happens when \(x\) is positive or negative.
  • If \(x \lt 0\) then \(x \leq c \leq 0\) and hence \(e^x \leq e^c \leq e^0=1\text{.}\)
  • On the other hand, if \(x\geq 0\) then \(0\leq c \leq x\) and so \(1=e^0 \leq e^c \leq e^x\text{.}\)
In either case we have that \(0 \leq e^c \leq 1+e^x\text{.}\) Because of this, the error term satisfies
\begin{equation*} |E_n(x)|=\Big|\frac{e^c}{(n+1)!}x^{n+1}\Big| \le [e^x+1]\frac{|x|^{n+1}}{(n+1)!} \end{equation*}
We claim that this upper bound, and hence the error \(E_n(x)\text{,}\) quickly shrinks to zero as \(n \to \infty\text{.}\)
Call the upper bound (except for the factor \(e^x+1\text{,}\) which is independent of \(n\)) \(e_n(x)=\tfrac{|x|^{n+1}}{(n+1)!}\text{.}\) To show that this shrinks to zero as \(n\rightarrow\infty\text{,}\) let's write it as follows.
\begin{align*} e_n(x) &= \frac{|x|^{n+1}}{(n+1)!} = \overbrace{\frac{|x|}{1} \cdot \frac{|x|}{2} \cdot \frac{|x|}{3} \cdots \frac{|x|}{n}\cdot \frac{|x|}{n+1}}^{\text{$n+1$ factors}} \end{align*}

Now let \(k\) be an integer bigger than \(|x|\text{.}\) We can split the product

\begin{align*} e_n(x) &= \overbrace{\left(\frac{|x|}{1} \cdot \frac{|x|}{2} \cdot \frac{|x|}{3} \cdots \frac{|x|}{k} \right)}^{ \text{$k$ factors}} \cdot \left( \frac{|x|}{k+1} \cdots \frac{|x|}{n+1}\right)\\ &\leq \underbrace{\left(\frac{|x|}{1} \cdot \frac{|x|}{2} \cdot \frac{|x|}{3} \cdots \frac{|x|}{k} \right)}_{=Q(x)} \cdot \left( \frac{|x|}{k+1} \right)^{n+1-k}\\ &= Q(x) \cdot \left( \frac{|x|}{k+1} \right)^{n+1-k} \end{align*}
Since \(k\) does not depend on \(n\) (though it does depend on \(x\)), the function \(Q(x)\) does not change as we increase \(n\text{.}\) Additionally, we know that \(|x| \lt k+1\) and so \(\frac{|x|}{k+1} \lt 1\text{.}\) Hence as we let \(n \to \infty\) the above bound must go to zero.
Alternatively, compare \(e_n(x)\) and \(e_{n+1}(x)\text{.}\)
\begin{equation*} \frac{e_{n+1}(x)}{e_n(x)} =\frac{\vphantom{\Big[}\tfrac{|x|^{n+2}}{(n+2)!}} {\vphantom{\Big[}\tfrac{|x|^{n+1}}{(n+1)!}} =\frac{|x|}{n+2} \end{equation*}
When \(n\) is bigger than, for example \(2|x|\text{,}\) we have \(\tfrac{e_{n+1}(x)}{e_n(x)} \lt \half\text{.}\) That is, increasing the index on \(e_n(x)\) by one decreases the size of \(e_n(x)\) by a factor of at least two. As a result \(e_n(x)\) must tend to zero as \(n\rightarrow\infty\text{.}\)
Consequently, for all \(x\text{,}\) \(\lim\limits_{n\rightarrow\infty}E_n(x)=0\text{,}\) as claimed, and we really have
\begin{equation*} e^x=\lim_{n\rightarrow\infty}\Big[1 +x + \frac{1}{2} x^2 +\frac{1}{3!} x^3+\cdots+\frac{1}{n!} x^n\Big] =\sum_{n=0}^\infty \frac{1}{n!}x^n \end{equation*}
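How quickly \(e_n(x)=\frac{|x|^{n+1}}{(n+1)!}\) collapses is easy to see numerically; this little loop (ours) prints the bound for a deliberately large \(x\text{.}\)

```python
import math

x = 10.0  # a deliberately large x
for n in (10, 20, 40, 80):
    print(n, abs(x) ** (n + 1) / math.factorial(n + 1))
```

By \(n=80\) the bound is below \(10^{-40}\text{:}\) the factorial has completely overwhelmed the power.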
There is another way to prove that the series \(\sum_{n=0}^\infty \frac{x^n}{n!}\) converges to the function \(e^x\text{.}\) Rather than looking at how the error term \(E_n(x)\) behaves as \(n \to \infty\text{,}\) we can show that the series satisfies the same simple differential equation and the same initial condition as the function.
We already know, from Example 3.5.5, that the series \(\sum_{n=0}^\infty \frac{1}{n!}x^n\) converges to some function \(f(x)\) for all values of \(x\text{.}\) All that remains to do is to show that \(f(x)\) is really \(e^x\text{.}\) We will do this by showing that \(f(x)\) and \(e^x\) satisfy the same differential equation with the same initial conditions.
\begin{align*} \diff{y}{x} &= y &\text{and} && y(0)=1 \end{align*}
and by Theorem 2.4.4 (with \(a=1\text{,}\) \(b=0\) and \(y(0)=1\)), this is the only solution. So it suffices to show that \(f(x)= \sum_{n=0}^\infty \frac{x^n}{n!}\) satisfies
\begin{align*} \diff{f}{x}&=f(x) &\text{and} && f(0)&=1. \end{align*}
  • By Theorem 3.5.13,
    \begin{align*} \diff{f}{x} &= \diff{}{x}\left\{\sum_{n=0}^\infty \frac{1}{n!}x^n\right\} = \sum_{n=1}^\infty \frac{n}{n!}x^{n-1} = \sum_{n=1}^\infty \frac{1}{(n-1)!}x^{n-1}\\ &= \overbrace{1}^{n=1} + \overbrace{x}^{n=2} + \overbrace{\frac{x^2}{2!}}^{n=3} + \overbrace{\frac{x^3}{3!}}^{n=4} + \cdots\\ &= f(x) \end{align*}
  • When we substitute \(x=0\) into the series we get (see the discussion after Definition 3.5.1)
    \begin{align*} f(0) &= 1 + \frac{0}{1!} + \frac{0}{2!} + \cdots = 1. \end{align*}
Hence \(f(x)\) solves the same initial value problem and we must have \(f(x)=e^x\text{.}\)
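The differentiation step can also be checked coefficient by coefficient with exact arithmetic. In the sketch below (ours), differentiating a power series sends the coefficient list \((c_0,c_1,c_2,\dots)\) to \((1\cdot c_1,\,2\cdot c_2,\,3\cdot c_3,\dots)\text{,}\) and for \(c_k=\frac{1}{k!}\) this reproduces the original list.

```python
from fractions import Fraction
from math import factorial

n_max = 8
c = [Fraction(1, factorial(k)) for k in range(n_max + 1)]  # c_k = 1/k!
dc = [(k + 1) * c[k + 1] for k in range(n_max)]            # coefficients of the derivative
print(dc == c[:n_max])  # True: differentiating the series reproduces it
```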

Example 3.6.10.

We can show that the error terms in Maclaurin polynomials for sine and cosine go to zero as \(n \to \infty\) using very much the same approach as in Example 3.6.8.
Let \(f(x)\) be either \(\sin x\) or \(\cos x\text{.}\) We know that every derivative of \(f(x)\) will be one of \(\pm \sin(x)\) or \(\pm \cos(x)\text{.}\) Consequently, when we compute the error term using equation 3.6.3 we always have \(\big|f^{(n+1)}(c)\big|\le 1\) and hence
\begin{align*} |E_n(x)| &\le \frac{|x|^{n+1}}{(n+1)!}. \end{align*}
In Example 3.6.8, we showed that \(\frac{|x|^{n+1}}{(n+1)!} \to 0\) as \(n \to \infty\) — so all the hard work is already done. Since the error term shrinks to zero for both \(f(x)=\sin x\) and \(f(x)=\cos x\text{,}\) we have
\begin{gather*} f(x)=\lim_{n\rightarrow\infty}\Big[f(0)+f'(0)\,x+\cdots +\tfrac{1}{n!}f^{(n)}(0)\, x^n\Big] \end{gather*}
as required.

Subsubsection 3.6.1.1 Optional — More about the Taylor Remainder

In this section, we fix a real number \(a\) and a natural number \(n\text{,}\) suppose that all derivatives of the function \(f(x)\) exist, and we study the error
\begin{align*} E_n(a,x) &= f(x) - T_n(a,x) \\ \text{where } T_n(a,x) &=f(a)+f'(a)\,(x-a)+\cdots+\tfrac{1}{n!}f^{(n)}(a)\, (x-a)^n \end{align*}
made when we approximate \(f(x)\) by the Taylor polynomial \(T_n(a,x)\) of degree \(n\) for the function \(f(x)\text{,}\) expanded about \(a\text{.}\) We have already seen, in (3.6.3), one formula, probably the most commonly used formula, for \(E_n(a,x)\text{.}\) In the next theorem, we repeat that formula and give a second, commonly used, formula. After an example, we give a second theorem that contains some less commonly used formulae.

Theorem 3.6.11.

  (a) (the integral form of the remainder)
    \begin{gather*} E_n(a,x)=\int_a^x \frac{1}{n!}f^{(n+1)}(t)\, (x-t)^n\,\dee{t} \end{gather*}
  (b) (the Lagrange form of the remainder)
    \begin{gather*} E_n(a,x)=\frac{1}{(n+1)!}\,f^{(n+1)}(c)\, (x-a)^{n+1} \end{gather*}
    for some \(c\) strictly between \(a\) and \(x\text{.}\)
Notice that the integral form of the error is explicit — we could, in principle, compute it exactly. (Of course if we could do that, we probably wouldn't need to use a Taylor expansion to approximate \(f\text{.}\)) This contrasts with the Lagrange form which is an ‘existential’ statement — it tells us that ‘\(c\)’ exists, but not how to compute it.
  1. We will give two proofs. The first is shorter and simpler, but uses some trickery. The second is longer, but is more straightforward. It uses a technique called mathematical induction.
    Proof 1: We are going to use a little trickery to get a simple proof. We simply view \(x\) as being fixed and study the dependence of \(E_n(a,x)\) on \(a\text{.}\) To emphasise that that is what we are doing, we define
    \begin{align*} S(t) &= f(x) - f(t) -f'(t)\,(x-t)-\tfrac{1}{2}f''(t)\,(x-t)^2\\ &\hskip2in -\cdots-\tfrac{1}{n!}f^{(n)}(t)\, (x-t)^n \tag{$*$} \end{align*}
    and observe that \(E_n(a,x) = S(a)\text{.}\)
    So, by the fundamental theorem of calculus (Theorem 1.3.1), the function \(S(t)\) is determined by its derivative, \(S'(t)\text{,}\) and its value at a single point. Finding a value of \(S(t)\) for one value of \(t\) is easy. Substitute \(t=x\) into \((*)\) to yield \(S(x)=0\text{.}\) To find \(S'(t)\text{,}\) apply \(\diff{}{t}\) to both sides of \((*)\text{.}\) Recalling that \(x\) is just a constant parameter,
    \begin{align*} S'(t)&= 0 - {\color{blue}{f'(t)}} - \big[{\color{red}{f''(t)(x\!-\!t)}}-{\color{blue}{f'(t)}}\big]\\ &\hskip0.5in -\big[\tfrac{1}{2}f^{(3)}(t)(x\!-\!t)^2-{\color{red}{f''(t)(x\!-\!t)}}\big]\\ &\hskip0.5in -\cdots-\big[\tfrac{1}{n!} f^{(n+1)}(t)\,(x-t)^n -\tfrac{1}{(n-1)!}f^{(n)}(t)\,(x-t)^{n-1} \big]\\ &=-\tfrac{1}{n!} f^{(n+1)}(t)\,(x-t)^n \end{align*}
    So, by the fundamental theorem of calculus, \(S(x)=S(a)+\int_a^x S'(t)\,\dee{t}\) and
    \begin{align*} E_n(a,x) &= -\big[S(x)-S(a)\big] = - \int_a^x S'(t)\,\dee{t}\\ &=\int_a^x \frac{1}{n!}f^{(n+1)}(t)\, (x-t)^n\,\dee{t} \end{align*}
    Proof 2: The proof that we have just given was short, but also very tricky — almost no one could create that proof without big hints. Here is another much less tricky, but also commonly used, proof.
    • First consider the case \(n=0\text{.}\) When \(n=0\text{,}\)
      \begin{equation*} E_0(a,x) = f(x) - T_0(a,x) = f(x) -f(a) \end{equation*}
      The fundamental theorem of calculus gives
      \begin{equation*} f(x)-f(a) = \int_a^x f'(t)\,\dee{t} \end{equation*}
      so that
      \begin{equation*} E_0(a,x) = \int_a^x f'(t)\,\dee{t} \end{equation*}
      That is exactly the \(n=0\) case of part (a).
    • Next fix any integer \(n\ge 0\) and suppose that we already know that
      \begin{equation*} E_n(a,x)=\int_a^x \frac{1}{n!}f^{(n+1)}(t)\, (x-t)^n\,\dee{t} \end{equation*}
      Apply integration by parts (Theorem 1.7.2) to this integral with
      \begin{align*} u(t)&=f^{(n+1)}(t)\\ \dee{v}&= \frac{1}{n!}(x-t)^n\,\dee{t},\qquad v(t)=- \frac{1}{(n+1)!}(x-t)^{n+1} \end{align*}
      Since \(v(x)=0\text{,}\) integration by parts gives
      \begin{align*} &E_n(a,x)=u(x)v(x)-u(a)v(a)-\int_a^x v(t) u'(t)\,\dee{t} \\ &\quad=\frac{1}{(n+1)!}f^{(n+1)}(a)\, (x-a)^{n+1}\\ &\hskip0.5in +\int_a^x \frac{1}{(n+1)!}f^{(n+2)}(t)\, (x-t)^{n+1}\,\dee{t} \tag{$**$} \end{align*}
      Now, we defined
      \begin{align*} E_n(a,x) &= f(x) - f(a) -f'(a)\,(x-a)-\tfrac{1}{2}f''(a)\,(x-a)^2\\ &\hskip1in -\cdots-\tfrac{1}{n!}f^{(n)}(a)\, (x-a)^n \end{align*}
      so
      \begin{equation*} E_{n+1}(a,x) = E_n(a,x)-\tfrac{1}{(n+1)!}f^{(n+1)}(a)\, (x-a)^{n+1} \end{equation*}
      This formula expresses \(E_{n+1}(a,x)\) in terms of \(E_n(a,x)\text{.}\) That's called a reduction formula. Combining the reduction formula with (\(**\)) gives
      \begin{equation*} E_{n+1}(a,x)=\int_a^x \frac{1}{(n+1)!}f^{(n+2)}(t)\, (x-t)^{n+1}\,\dee{t} \end{equation*}
    • Let's pause to summarise what we have learned in the last two bullets. Use the notation \(P(n)\) to stand for the statement “\(E_n(a,x)=\int_a^x \frac{1}{n!}f^{(n+1)}(t)\, (x-t)^n\,\dee{t}\)”. To prove part (a) of the theorem, we need to prove that the statement \(P(n)\) is true for all integers \(n\ge 0\text{.}\) In the first bullet, we showed that the statement \(P(0)\) is true. In the second bullet, we showed that if, for some integer \(n\ge 0\text{,}\) the statement \(P(n)\) is true, then the statement \(P(n+1)\) is also true. Consequently,
      • \(P(0)\) is true by the first bullet and then
      • \(P(1)\) is true by the second bullet with \(n=0\) and then
      • \(P(2)\) is true by the second bullet with \(n=1\) and then
      • \(P(3)\) is true by the second bullet with \(n=2\)
      • and so on, for ever and ever.
      That tells us that \(P(n)\) is true for all integers \(n\ge 0\text{,}\) which is exactly part (a) of the theorem. This proof technique is called mathematical induction.
  2. We have already seen one proof in the optional Section 3.4.9 of the CLP-1 text. We will see two more proofs here.
    Proof 1: We apply the generalised mean value theorem, which is Theorem 3.4.38 in the CLP-1 text. It says that
    \begin{equation*} \frac{F(b)-F(a)}{G(b)-G(a)} = \frac{F'(c)}{G'(c)} \tag{GMVT} \end{equation*}
    for some \(c\) strictly between \(a\) and \(b\text{.}\) We apply (GMVT) with \(b=x\text{,}\) \(F(t)=S(t)\) and \(G(t)=(x-t)^{n+1}\text{.}\) This gives
    \begin{align*} E_n(a,x) &= -\big[S(x)-S(a)\big] =-\frac{S'(c)}{G'(c)}\big[G(x)-G(a)\big] \\ &=-\frac{ -\frac{1}{n!} f^{(n+1)}(c)\,(x-c)^n}{-(n+1)(x-c)^n}\ \big[0-(x-a)^{n+1}\big]\\ &=\frac{1}{(n+1)!}f^{(n+1)}(c)(x-a)^{n+1} \end{align*}
    Don't forget, when computing \(G'(c)\text{,}\) that \(G\) is a function of \(t\) with \(x\) just a fixed parameter.
    Proof 2: We apply Theorem 2.2.10 (the mean value theorem for weighted integrals). If \(a\lt x\text{,}\) we use the weight function \(w(t) = \frac{1}{n!} (x-t)^n\text{,}\) which is strictly positive for all \(a\lt t\lt x\text{.}\) By part (a) this gives
    \begin{align*} E_n(a,x) &=\int_a^x \frac{1}{n!}f^{(n+1)}(t)\, (x-t)^n\,\dee{t} \\ &= f^{(n+1)}(c) \int_a^x \frac{1}{n!} (x-t)^n\,\dee{t} \qquad \text{for some } a\lt c\lt x\\ &= f^{(n+1)}(c) \left[-\frac{1}{n!}\frac{(x-t)^{n+1}}{n+1}\right]_a^x\\ &= \frac{1}{(n+1)!}f^{(n+1)}(c)\,(x-a)^{n+1} \end{align*}
    If \(x\lt a\text{,}\) we instead use the weight function \(w(t) = \frac{1}{n!} (t-x)^n\text{,}\) which is strictly positive for all \(x\lt t\lt a\text{.}\) This gives
    \begin{align*} E_n(a,x) &=\int_a^x \frac{1}{n!}f^{(n+1)}(t)\, (x-t)^n\,\dee{t} \\ &=-(-1)^n\int^a_x \frac{1}{n!}f^{(n+1)}(t)\, (t-x)^n\,\dee{t} \\ &=(-1)^{n+1} f^{(n+1)}(c) \int_x^a \frac{1}{n!} (t-x)^n\,\dee{t} \qquad \text{for some } x\lt c\lt a \\ &= (-1)^{n+1} f^{(n+1)}(c) \left[\frac{1}{n!}\frac{(t-x)^{n+1}}{n+1}\right]_x^a \\ &= \frac{1}{(n+1)!}f^{(n+1)}(c)\, (-1)^{n+1} (a-x)^{n+1} \\ &= \frac{1}{(n+1)!}f^{(n+1)}(c)\,(x-a)^{n+1} \end{align*}
Theorem 3.6.11 has provided us with two formulae for the Taylor remainder \(E_n(a,x)\text{.}\) The formula of part (b), \(E_n(a,x)=\frac{1}{(n+1)!}\,f^{(n+1)}(c)\, (x-a)^{n+1}\text{,}\) is probably the easiest to use, and the most commonly used, formula for \(E_n(a,x)\text{.}\) The formula of part (a), \(E_n(a,x)=\int_a^x \frac{1}{n!}f^{(n+1)}(t)\, (x-t)^n\,\dee{t}\text{,}\) while a little harder to apply, gives a somewhat better bound than that of part (b) (in the proof of Theorem 3.6.11 we showed that part (b) follows from part (a)). Here is an example in which we use both parts.

Example 3.6.12.

In Theorem 3.6.7 we stated that
\begin{equation*} \log(1+x) = \sum_{n=0}^\infty (-1)^n\frac{x^{n+1}}{n+1} = x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots \tag{S1} \end{equation*}
for all \(-1\lt x\le 1\text{.}\) But, so far, we have not justified this statement. We do so now, using (both parts of) Theorem 3.6.11. We start by setting \(f(x)=\log(1+x)\) and finding the Taylor polynomials \(T_n(0,x)\text{,}\) and the corresponding errors \(E_n(0,x)\text{,}\) for \(f(x)\text{.}\)
\begin{align*} f(x) &= \log(1+x) & f(0) &= \log 1 = 0 \\ f'(x) &= \frac{1}{1+x} & f'(0) &= 1 \\ f''(x) &= \frac{-1}{(1+x)^2} & f''(0) &= -1 \\ f'''(x) &= \frac{2}{(1+x)^3} & f'''(0) &= 2 \\ f^{(4)}(x) &= \frac{-2\times 3}{(1+x)^4} & f^{(4)}(0) &= -3! \\ f^{(5)}(x) &= \frac{2\times 3\times 4}{(1+x)^5} & f^{(5)}(0) &= 4! \\ &\ \ \ \vdots & &\ \ \ \vdots \\ f^{(n)}(x)&=\frac{(-1)^{n+1}(n-1)!}{(1+x)^n} & f^{(n)}(0) &= (-1)^{n+1}(n-1)! \end{align*}
So the Taylor polynomial of degree \(n\) for the function \(f(x)=\log(1+x)\text{,}\) expanded about \(a=0\text{,}\) is
\begin{align*} T_n(0,x) &=f(0)+f'(0)\,x+\cdots+\tfrac{1}{n!}f^{(n)}(0)\, x^n \\ &= x - \frac{1}{2}x^2 + \frac{1}{3}x^3 - \frac{1}{4}x^4 + \frac{1}{5}x^5 +\cdots + \frac{(-1)^{n+1}}{n}x^n \end{align*}
Theorem 3.6.11 gives us two formulae for the error \(E_n(0,x) = f(x) - T_n(0,x)\) made when we approximate \(f(x)\) by \(T_n(0,x)\text{.}\) Part (a) of the theorem gives
\begin{equation*} E_n(0,x) = \int_0^x \frac{1}{n!}f^{(n+1)}(t)\, (x-t)^n\,\dee{t} = (-1)^n \int_0^x \frac{(x-t)^n}{(1+t)^{n+1}}\,\dee{t} \tag{Ea} \end{equation*}
and part (b) gives
\begin{equation*} E_n(0,x)=\frac{1}{(n+1)!}\,f^{(n+1)}(c)\, x^{n+1} = (-1)^n\,\frac{1}{n+1}\,\frac{x^{n+1}}{(1+c)^{n+1}} \tag{Eb} \end{equation*}
for some (unknown) \(c\) between \(0\) and \(x\text{.}\) The statement (S1), that we wish to prove, is equivalent to the statement
\begin{equation*} \lim_{n\rightarrow\infty} E_n(0,x)=0 \qquad\text{for all }-1\lt x\le 1 \tag{S2} \end{equation*}
and we will now show that (S2) is true.
The case \(x=0\text{:}\)
This case is trivial, since, when \(x=0\text{,}\) \(E_n(0,x)=0\) for all \(n\text{.}\)
The case \(0\lt x\le 1\text{:}\)
This case is relatively easy to deal with using (Eb). In this case \(0\lt x\le 1\text{,}\) so that the \(c\) of (Eb) must be positive and
\begin{align*} \left|E_n(0,x)\right| &= \frac{1}{n+1}\frac{x^{n+1}}{(1+c)^{n+1}}\\ &\le \frac{1}{n+1}\frac{1^{n+1}}{(1+0)^{n+1}}\\ &=\frac{1}{n+1} \end{align*}
converges to zero as \(n\rightarrow\infty\text{.}\)
The case \(-1\lt x\lt 0\text{:}\)
When \(-1\lt x\lt 0\) is close to \(-1\text{,}\) (Eb) is not sufficient to show that (S2) is true. To see this, let's consider the example \(x=-0.8\text{.}\) All we know about the \(c\) of (Eb) is that it has to be between \(0\) and \(-0.8\text{.}\) For example, (Eb) certainly allows \(c\) to be \(-0.6\) and then
\begin{align*} &\left|(-1)^n\frac{1}{n+1}\frac{x^{n+1}}{(1+c)^{n+1}} \right|_{\genfrac{}{}{0pt}{}{x=-0.8}{c=-0.6}}\\ &\hskip0.25in=\frac{1}{n+1}\frac{0.8^{n+1}}{(1-0.6)^{n+1}}\\ &\hskip0.25in=\frac{1}{n+1}2^{n+1} \end{align*}
goes to \(+\infty\) as \(n\rightarrow\infty\text{.}\)
Note that, while this does tell us that (Eb) is not sufficient to prove (S2), when \(x\) is close to \(-1\text{,}\) it does not also tell us that \(\lim\limits_{n\rightarrow\infty}|E_n(0,-0.8)|=+\infty\) (which would imply that (S2) is false) — \(c\) could equally well be \(-0.2\) and then
\begin{align*} &\left|(-1)^n\frac{1}{n+1}\frac{x^{n+1}}{(1+c)^{n+1}} \right|_{\genfrac{}{}{0pt}{}{x=-0.8}{c=-0.2}}\\ &\hskip0.25in=\frac{1}{n+1}\frac{0.8^{n+1}}{(1-0.2)^{n+1}}\\ &\hskip0.25in=\frac{1}{n+1} \end{align*}
goes to \(0\) as \(n\rightarrow\infty\text{.}\)
We'll now use (Ea) (which has the advantage of not containing any unknown free parameter \(c\)) to verify (S2) when \(-1\lt x\lt 0\text{.}\) Rewrite the right hand side of (Ea)
\begin{align*} & (-1)^n \int_0^x \frac{(x-t)^n}{(1+t)^{n+1}}\,\dee{t} =-\int_x^0 \frac{(t-x)^n}{(1+t)^{n+1}}\,\dee{t} \\ &=-\int_0^{-x}\frac{s^n}{(1+x+s)^{n+1}}\,\dee{s} \qquad\text{where } s=t-x,\ \dee{s}=\dee{t} \end{align*}
The exact evaluation of this integral is very messy and not very illuminating. Instead, we bound it. Note that, for \(1+x\gt 0\text{,}\)
\begin{align*} \diff{}{s}\left(\frac{s}{1+x+s}\right) &= \diff{}{s}\left(\frac{1+x+s-(1+x)}{1+x+s} \right)\\ &= \diff{}{s}\left(1- \frac{1+x}{1+x+s}\right) \\ &=\frac{1+x}{(1+x+s)^2} \gt 0 \end{align*}
so that \(\frac{s}{1+x+s}\) increases as \(s\) increases. Consequently, the biggest value that \(\frac{s}{1+x+s}\) takes on the domain of integration \(0\le s\le -x=|x|\) is
\begin{equation*} \frac{s}{1+x+s}\bigg|_{s=-x} = -x = |x| \end{equation*}
and the integrand
\begin{align*} 0\le \frac{s^n}{[1+x+s]^{n+1}} &=\left(\frac{s}{1+x+s}\right)^n\frac{1}{1+x+s}\\ &\le \frac{|x|^n}{1+x+s} \end{align*}
Consequently,
\begin{align*} \left|E_n(0,x)\right| &= \left|(-1)^n \int_0^x \frac{(x-t)^n}{(1+t)^{n+1}}\,\dee{t}\right|\\ &=\int_0^{-x}\frac{s^n}{[1+x+s]^{n+1}}\,\dee{s} \\ &\le |x|^n \int_0^{-x}\frac{1}{1+x+s}\,\dee{s}\\ &=|x|^n\Big[\log(1+x+s)\Big]_{s=0}^{s=-x} \\ &= |x|^n [-\log(1+x)] \end{align*}
converges to zero as \(n\rightarrow\infty\) for each fixed \(-1\lt x\lt 0\text{.}\)
So we have verified (S2), as desired.
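Since \(E_n(0,x)=\log(1+x)-T_n(0,x)\) can be computed directly, the bound \(|E_n(0,x)|\le |x|^n\big[-\log(1+x)\big]\) is easy to test numerically. Here is a short check (ours) at the troublesome point \(x=-0.8\text{.}\)

```python
import math

x = -0.8
for n in (5, 10, 20, 40):
    T_n = sum((-1) ** (k - 1) * x ** k / k for k in range(1, n + 1))
    E_n = math.log1p(x) - T_n              # log1p(x) = log(1 + x)
    bound = abs(x) ** n * (-math.log1p(x))
    print(n, E_n, bound)
```

Both columns shrink like \(0.8^n\text{,}\) exactly as the analysis above predicts.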
As we said above, Theorem 3.6.11 gave the two most commonly used formulae for the Taylor remainder. Here are some less commonly used, but occasionally useful, formulae.

Theorem 3.6.13.

Let \(G(t)\) be differentiable, with \(G'(t)\) nonzero, for all \(t\) strictly between \(a\) and \(x\text{.}\) Then

  (a)
    \begin{gather*} E_n(a,x)=\frac{1}{n!} f^{(n+1)}(c)\,\frac{G(x)-G(a)}{G'(c)}\, (x-c)^n \end{gather*}
    for some \(c\) strictly between \(a\) and \(x\text{.}\)
  (b) (the Cauchy form of the remainder)
    \begin{gather*} E_n(a,x)=\frac{1}{n!} f^{(n+1)}(c)\, (x-c)^n(x-a) \end{gather*}
    for some \(c\) strictly between \(a\) and \(x\text{.}\)
As in the proof of Theorem 3.6.11, we define
\begin{equation*} S(t) = f(x) - f(t) -f'(t)\,(x\!-\!t)-\tfrac{1}{2}f''(t)\,(x\!-\!t)^2 -\cdots-\tfrac{1}{n!}f^{(n)}(t)\, (x\!-\!t)^n \end{equation*}
and observe that \(E_n(a,x) = S(a)\) and \(S(x)=0\) and \(S'(t)= -\tfrac{1}{n!} f^{(n+1)}(t)\,(x-t)^n\text{.}\)
  1. Recall that the generalised mean-value theorem, which is Theorem 3.4.38 in the CLP-1 text, says that
    \begin{equation*} \frac{F(b)-F(a)}{G(b)-G(a)} = \frac{F'(c)}{G'(c)} \tag{GMVT} \end{equation*}
    for some \(c\) strictly between \(a\) and \(b\text{.}\) We apply this theorem with \(b=x\) and \(F(t)=S(t)\text{.}\) This gives
    \begin{align*} E_n(a,x) &= -\big[S(x)-S(a)\big] =-\frac{S'(c)}{G'(c)}\big[G(x)-G(a)\big]\\ &=-\frac{ -\frac{1}{n!} f^{(n+1)}(c)\,(x-c)^n}{G'(c)}\ \big[G(x)-G(a)\big]\\ &=\frac{1}{n!} f^{(n+1)}(c)\,\frac{G(x)-G(a)}{G'(c)}\, (x-c)^n \end{align*}
  2. Apply part (a) with \(G(x)=x\text{.}\) This gives
    \begin{align*} E_n(a,x) &=\frac{1}{n!} f^{(n+1)}(c)\,\frac{x-a}{1}\, (x-c)^n \\ &=\frac{1}{n!} f^{(n+1)}(c)\, (x-c)^n(x-a) \end{align*}
    for some \(c\) strictly between \(a\) and \(x\text{.}\)
In Example 3.6.12 we verified that
\begin{equation*} \log(1+x) = \sum_{n=0}^\infty (-1)^n\frac{x^{n+1}}{n+1} = x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots \tag{S1} \end{equation*}
for all \(-1\lt x\le 1\text{.}\) There we used the Lagrange form,
\begin{equation*} E_n(a,x)=\frac{1}{(n+1)!}\,f^{(n+1)}(c)\, (x-a)^{n+1} \end{equation*}
for the Taylor remainder to verify (S1) when \(0\le x\le 1\text{,}\) but we also saw that it is not possible to use the Lagrange form to verify (S1) when \(x\) is close to \(-1\text{.}\) We instead used the integral form
\begin{equation*} E_n(a,x) = \int_a^x \frac{1}{n!}f^{(n+1)}(t)\, (x-t)^n\,\dee{t} \end{equation*}
We will now use the Cauchy form (part (b) of Theorem 3.6.13)
\begin{equation*} E_n(a,x)=\frac{1}{n!}f^{(n+1)}(c)\, (x-c)^n(x-a) \end{equation*}
to verify
\begin{equation*} \lim_{n\rightarrow\infty} E_n(0,x)=0 \tag{S2} \end{equation*}
when \(-1\lt x\lt 0\text{.}\) We have already noted that (S2) is equivalent to (S1).
Write \(f(x)=\log(1+x)\text{.}\) We saw in Example 3.6.12 that
\begin{equation*} f^{(n+1)}(x) = \frac{(-1)^n n!}{(1+x)^{n+1}} \end{equation*}
So, in this example, the Cauchy form is
\begin{equation*} E_n(0,x)=(-1)^n\frac{(x-c)^nx}{(1+c)^{n+1}} \end{equation*}
for some \(x\lt c\lt 0\text{.}\) When \(-1\lt x\lt c \lt 0\text{,}\)
  • \(c\) and \(x\) are negative and \(1+x\text{,}\) \(1+c\) and \(c-x\) are (strictly) positive so that
    \begin{align*} c(1+x)\lt 0 &\implies c \lt -cx \implies c-x \lt -x-xc=|x|(1+c)\\ &\implies \left|\frac{x-c}{1+c}\right| =\frac{c-x}{1+c}\lt |x| \end{align*}
    so that \(\left|\frac{x-c}{1+c}\right|^n \lt |x|^n\) and
  • the distance from \(-1\) to \(c\text{,}\) namely \(c-(-1)=1+c\) is greater than the distance from \(-1\) to \(x\text{,}\) namely \(x-(-1)=1+x\text{,}\) so that \(\frac{1}{1+c}\lt\frac{1}{1+x}\text{.}\)
So, for \(-1\lt x\lt c\lt 0\text{,}\)
\begin{gather*} |E_n(0,x)|=\left|\frac{x-c}{1+c}\right|^n\frac{|x|}{1+c} \lt \frac{|x|^{n+1}}{1+c} \lt \frac{|x|^{n+1}}{1+x} \end{gather*}
goes to zero as \(n\rightarrow\infty\text{.}\)

Subsection 3.6.2 Computing with Taylor Series

Taylor series have a great many applications. (Hence their place in this course.) One of the most immediate of these is that they give us an alternate way of computing many functions. For example, the first definition we see for the sine and cosine functions is in terms of triangles. Those definitions, however, do not lend themselves to computing sine and cosine except at very special angles. Armed with power series representations, however, we can compute them to very high precision at any angle. To illustrate this, consider the computation of \(\pi\) — a problem that dates back to the Babylonians.
There are numerous methods for computing \(\pi\) to any desired degree of accuracy. Many of them use the Maclaurin expansion
\begin{align*} \arctan x &= \sum_{n=0}^\infty (-1)^n\frac{x^{2n+1}}{2n+1} \end{align*}
of Theorem 3.6.7. Since \(\arctan(1)=\frac{\pi}{4}\text{,}\) the series gives us a very pretty formula for \(\pi\text{:}\)
\begin{align*} \frac{\pi}{4} = \arctan 1 &= \sum_{n=0}^\infty \frac{(-1)^n}{2n+1}\\ \pi &= 4 \left( 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots \right) \end{align*}
Unfortunately, this series is not very useful for computing \(\pi\) because it converges so slowly. If we approximate the series by its \(N^\mathrm{th}\) partial sum, then the alternating series test (Theorem 3.3.14) tells us that the error is bounded by the first term we drop. To guarantee that we have 2 decimal digits of \(\pi\) correct, we need to sum about the first 200 terms!
A much better way to compute \(\pi\) using this series is to take advantage of the fact that \(\tan\frac{\pi}{6}=\frac{1}{\sqrt{3}}\text{:}\)
\begin{align*} \pi&= 6\arctan\Big(\frac{1}{\sqrt{3}}\Big) = 6\sum_{n=0}^\infty (-1)^n\frac{1}{2n+1}\ \frac{1}{{(\sqrt{3})}^{2n+1}}\\ &= 2\sqrt{3} \sum_{n=0}^\infty (-1)^n\frac{1}{2n+1}\ \frac{1}{3^n}\\ &=2\sqrt{3}\Big(1-\frac{1}{3\times 3}+\frac{1}{5\times 9}-\frac{1}{7\times 27} +\frac{1}{9\times 81}-\frac{1}{11\times 243}+\cdots\Big) \end{align*}
Again, this is an alternating series and so (via Theorem 3.3.14) the error we introduce by truncating it is bounded by the first term dropped. For example, if we keep ten terms, stopping at \(n=9\text{,}\) we get \(\pi=3.141591\) (to 6 decimal places) with an error between zero and
\begin{equation*} \frac{2\sqrt{3}}{21\times 3^{10}} \lt 3\times 10^{-6} \end{equation*}
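A minimal script (ours) that sums this series confirms that the true error stays below the first dropped term.

```python
import math

def pi_via_sqrt3(N):
    """2*sqrt(3) times the partial sum, up to n = N, of (-1)^n/((2n+1) 3^n)."""
    s = sum((-1) ** n / ((2 * n + 1) * 3 ** n) for n in range(N + 1))
    return 2 * math.sqrt(3) * s

for N in (4, 9, 19):
    approx = pi_via_sqrt3(N)
    bound = 2 * math.sqrt(3) / ((2 * N + 3) * 3 ** (N + 1))  # first dropped term
    print(N, approx, abs(math.pi - approx) <= bound)
```

With \(N=9\) (ten terms) it reproduces \(3.141591\dots\) with an error below \(3\times 10^{-6}\text{,}\) matching the bound above.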
In 1699, the English astronomer/mathematician Abraham Sharp (1653–1742) used 150 terms of this series to compute 72 digits of \(\pi\) — by hand!
This is just one of very many ways to compute \(\pi\text{.}\) Another one, which still uses the Maclaurin expansion of \(\arctan x\text{,}\) but is much more efficient, is
\begin{equation*} \pi= 16\arctan\frac{1}{5}-4\arctan\frac{1}{239} \end{equation*}
This formula was used by John Machin in 1706 to compute \(\pi\) to 100 decimal digits — again, by hand.
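Machin's formula is just as easy to try. The sketch below (ours) feeds \(\frac{1}{5}\) and \(\frac{1}{239}\) into the arctangent series; because both arguments are small, very few terms give machine precision.

```python
import math

def arctan_series(x, N):
    """Partial sum, up to n = N, of the Maclaurin series for arctan x."""
    return sum((-1) ** n * x ** (2 * n + 1) / (2 * n + 1) for n in range(N + 1))

for N in (2, 5, 10):
    machin = 16 * arctan_series(1 / 5, N) - 4 * arctan_series(1 / 239, N)
    print(N, machin, math.pi)
```

Already at \(N=10\) the result agrees with \(\pi\) to the limits of double precision, which is why formulae of this type were so popular with human computers.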
Power series also give us access to new functions which might not be easily expressed in terms of the functions we have been introduced to so far. The following is a good example of this.
The error function
\begin{equation*} \erf(x) =\frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\dee{t} \end{equation*}
is used in computing “bell curve” probabilities. The indefinite integral of the integrand \(e^{-t^2}\) cannot be expressed in terms of standard functions. But we can still evaluate the integral to within any desired degree of accuracy by using the Taylor expansion of the exponential. Start with the Maclaurin series for \(e^x\text{:}\)
\begin{align*} e^x &= \sum_{n=0}^\infty \frac{1}{n!}x^n\\ \end{align*}

and then substitute \(x = -t^2\) into this:

\begin{align*} e^{-t^2} &= \sum_{n=0}^\infty \frac{(-1)^n}{n!}t^{2n} \end{align*}
We can then apply Theorem 3.5.13 to integrate term-by-term:
\begin{align*} \erf(x) &=\frac{2}{\sqrt{\pi}}\int_0^x \left[\sum_{n=0}^\infty \frac{{(-t^2)}^n}{n!}\right]\dee{t}\\ &=\frac{2}{\sqrt{\pi}}\sum_{n=0}^\infty (-1)^n\frac{x^{2n+1}}{(2n+1)n!} \end{align*}
For example, for the bell curve, the probability of being within one standard deviation of the mean is
\begin{align*} &\erf\Big(\frac{1}{\sqrt{2}}\Big) = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty (-1)^n\frac{ {(\frac{1}{\sqrt{2}})}^{2n+1}}{(2n+1)n!} = \frac{2}{\sqrt{2\pi}} \sum_{n=0}^\infty (-1)^n\frac{1}{(2n+1) 2^n n!}\\ &=\sqrt{\frac{2}{\pi}}\Big(1-\frac{1}{3\times 2} +\frac{1}{5\times 2^2\times 2} -\frac{1}{7\times 2^3\times 3!} + \frac{1}{9\times2^4\times 4!}-\cdots \Big) \end{align*}
This is yet another alternating series. If we keep five terms, stopping at \(n=4\text{,}\) we get \(0.68271\) (to 5 decimal places) with, by Theorem 3.3.14 again, an error between zero and the first dropped term, which is minus
\begin{equation*} \sqrt{\frac{2}{\pi}}\ \frac{1}{11\times 2^5\times 5!} \lt 2\times 10^{-5} \end{equation*}
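The whole computation takes a few lines of Python (ours); the standard library's `math.erf` provides an independent check.

```python
import math

def erf_series(x, N):
    """(2/sqrt(pi)) times the partial sum, up to n = N, of
    (-1)^n x^(2n+1) / ((2n+1) n!)."""
    s = sum((-1) ** n * x ** (2 * n + 1) / ((2 * n + 1) * math.factorial(n))
            for n in range(N + 1))
    return 2 / math.sqrt(math.pi) * s

x = 1 / math.sqrt(2)
for N in (4, 8, 12):
    print(N, erf_series(x, N), math.erf(x))
```

Keeping five terms (\(N=4\)) already gives \(0.6827\dots\text{,}\) within the \(2\times 10^{-5}\) bound above.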
Evaluate
\begin{equation*} \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n3^n}\qquad\text{and}\qquad \sum_{n=1}^\infty \frac{1}{n3^n} \end{equation*}
Solution: There are not very many series that can be easily evaluated exactly. But occasionally one encounters a series that can be evaluated simply by realizing that it is exactly one of the series in Theorem 3.6.7, just with a specific value of \(x\text{.}\) The left hand series is
\begin{equation*} \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n}\ \frac{1}{3^n} = \frac{1}{3}-\frac{1}{2}\ \frac{1}{3^2}+\frac{1}{3}\ \frac{1}{3^3} -\frac{1}{4}\ \frac{1}{3^4}+\cdots \end{equation*}
The series in Theorem 3.6.7 that this most closely resembles is
\begin{equation*} \log(1+x) = x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots \end{equation*}
Indeed
\begin{align*} \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n}\ \frac{1}{3^n} &= \frac{1}{3}-\frac{1}{2}\ \frac{1}{3^2}+\frac{1}{3}\ \frac{1}{3^3} -\frac{1}{4}\ \frac{1}{3^4}+\cdots\\ & = \bigg[x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots\bigg]_{x=\frac{1}{3}}\\ & = \Big[\log(1+x) \Big]_{x=\frac{1}{3}}\\ & = \log \frac{4}{3} \end{align*}
The right hand series above differs from the left hand series above only in that the signs of the left hand series alternate while those of the right hand series do not. We can flip every second sign in a power series just by using a negative \(x\text{.}\)
\begin{align*} \Big[\log(1+x) \Big]_{x=-\frac{1}{3}} &=\bigg[x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots \bigg]_{x=-\frac{1}{3}}\\ &= -\frac{1}{3}-\frac{1}{2}\ \frac{1}{3^2}-\frac{1}{3}\ \frac{1}{3^3} -\frac{1}{4}\ \frac{1}{3^4}-\cdots \end{align*}
which is exactly minus the desired right hand series. So
\begin{equation*} \sum_{n=1}^\infty \frac{1}{n3^n} =- \Big[\log(1+x) \Big]_{x=-\frac{1}{3}} =-\log\frac{2}{3} =\log\frac{3}{2} \end{equation*}
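Both evaluations converge geometrically, so a short partial sum confirms them numerically. A quick check (ours):

```python
import math

N = 60  # the terms decay like 3^(-n), so this is plenty
alt = sum((-1) ** (n - 1) / (n * 3 ** n) for n in range(1, N + 1))
pos = sum(1 / (n * 3 ** n) for n in range(1, N + 1))
print(alt, math.log(4 / 3))  # these agree
print(pos, math.log(3 / 2))  # and so do these
```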
Let \(f(x) = \sin(2x^3)\text{.}\) Find \(f^{(15)}(0)\text{,}\) the fifteenth derivative of \(f\) at \(x=0\text{.}\)
Solution: This is a bit of a trick question. We could of course use the product and chain rules to directly apply fifteen derivatives and then set \(x=0\text{,}\) but that would be extremely tedious. There is a much more efficient approach that exploits two pieces of knowledge that we have.
  • From equation 3.6.2, we see that the coefficient of \((x-a)^n\) in the Taylor series of \(f(x)\) with expansion point \(a\) is exactly \(\frac{1}{n!} f^{(n)}(a)\text{.}\) So \(f^{(n)}(a)\) is exactly \(n!\) times the coefficient of \((x-a)^n\) in the Taylor series of \(f(x)\) with expansion point \(a\text{.}\)
  • We know, or at least can easily find, the Taylor series for \(\sin(2x^3)\text{.}\)
Let's apply that strategy.
  • First, we know that, for all \(y\text{,}\)
    \begin{equation*} \sin y = y-\frac{1}{3!}y^3+\frac{1}{5!}y^5-\cdots \end{equation*}
  • Just substituting \(y= 2x^3\text{,}\) we have
    \begin{align*} \sin(2 x^3) &= 2x^3-\frac{1}{3!}{(2x^3)}^3+\frac{1}{5!}{(2x^3)}^5-\cdots\\ &= 2x^3-\frac{8}{3!}x^9+\frac{2^5}{5!}x^{15}-\cdots \end{align*}
  • So the coefficient of \(x^{15}\) in the Taylor series of \(f(x)=\sin(2x^3)\) with expansion point \(a=0\) is \(\frac{2^5}{5!}\)
and we have
\begin{equation*} f^{(15)}(0) = 15!\times \frac{2^5}{5!} = 348{,}713{,}164{,}800 \end{equation*}
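If you have a computer algebra system handy, the answer is straightforward to double check. The sketch below assumes the sympy library is installed; it reads off the coefficient of \(x^{15}\) from the series and, more laboriously, applies fifteen derivatives directly.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(2 * x**3)

# Coefficient of x^15 in the Maclaurin series, times 15!:
coeff = f.series(x, 0, 16).removeO().coeff(x, 15)
print(coeff * sp.factorial(15))      # 348713164800

# Direct, much slower, cross-check: differentiate fifteen times.
print(sp.diff(f, x, 15).subs(x, 0))  # same answer
```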
Back in Example 3.6.8, we saw that
\begin{equation*} e^x =1+x+\tfrac{x^2}{2!}+\cdots+\tfrac{x^n}{n!}+\tfrac{1}{(n+1)!}e^c x^{n+1} \end{equation*}
for some (unknown) \(c\) between \(0\) and \(x\text{.}\) This can be used to approximate the number \(e\text{,}\) with any desired degree of accuracy. Setting \(x=1\) in this equation gives
\begin{equation*} e=1+1+\tfrac{1}{2!}+\cdots+\tfrac{1}{n!}+\tfrac{1}{(n+1)!}e^c \end{equation*}
for some \(c\) between \(0\) and \(1\text{.}\) Even though we don't know \(c\) exactly, we can bound that term quite readily. We do know that \(e^c\) is an increasing function of \(c\text{,}\) and so \(1=e^0 \leq e^c \leq e^1=e\text{.}\) Thus we know that
\begin{gather*} \frac{1}{(n+1)!} \leq e - \left( 1+1+\tfrac{1}{2!}+\cdots+\tfrac{1}{n!} \right) \leq \frac{e}{(n+1)!} \end{gather*}
So we have a lower bound on the error, but our upper bound involves the \(e\) — precisely the quantity we are trying to get a handle on.
But all is not lost. Let's look a little more closely at the right-hand inequality when \(n=1\text{:}\)
\begin{align*} e - (1+1) &\leq \frac{e}{2}& \text{move the $e$'s to one side}\\ \frac{e}{2} & \leq 2 & \text{and clean it up}\\ e & \leq 4. \end{align*}
Now this is a pretty crude bound, but it isn't hard to improve. Try this again with \(n=2\text{:}\)
\begin{align*} e - (1+1+\frac{1}{2}) & \leq \frac{e}{6} & \text{move $e$'s to one side}\\ \frac{5e}{6} & \leq \frac{5}{2}\\ e & \leq 3. \end{align*}
Better. Now we can rewrite our bound:
\begin{gather*} \frac{1}{(n+1)!} \leq e - \left( 1+1+\tfrac{1}{2!}+\cdots+\tfrac{1}{n!} \right) \leq \frac{e}{(n+1)!} \leq \frac{3}{(n+1)!} \end{gather*}
If we set \(n=4\) in this we get
\begin{align*} \frac{1}{120}=\frac{1}{5!} &\leq e - \left(1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} \right) \leq \frac{3}{120} \end{align*}
So the error is between \(\frac{1}{120}\) and \(\frac{3}{120}=\frac{1}{40}\) — this approximation isn't guaranteed to give us the first 2 decimal places. If we ramp \(n\) up to \(9\) however, we get
\begin{align*} \frac{1}{10!} &\leq e - \left(1 + 1 + \frac{1}{2} + \cdots + \frac{1}{9!} \right) \leq \frac{3}{10!} \end{align*}
Since \(10! = 3628800\text{,}\) the upper bound on the error is \(\frac{3}{3628800} \lt \frac{3}{3000000} = 10^{-6}\text{,}\) and we can approximate \(e\) by
\begin{align*} &1+1+\tfrac{1}{2!}+\tfrac{1}{3!}+\tfrac{1}{4!}+\tfrac{1}{5!}+\tfrac{1}{6!}+\tfrac{1}{7!}+\tfrac{1}{8!}+\tfrac{1}{9!}\\ &\quad= 1+1+0.5+0.1\dot 6+0.041\dot 6+0.008\dot 3+0.0013\dot 8+0.0001984+0.0000248+0.0000028\\ &\quad=2.718282 \end{align*}
and it is correct to six decimal places.
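The two-sided bound is easily checked by machine. This snippet (ours) recomputes the partial sums for \(n=4\) and \(n=9\) and verifies that \(e\) minus each partial sum lands between \(\frac{1}{(n+1)!}\) and \(\frac{3}{(n+1)!}\text{.}\)

```python
import math

for n in (4, 9):
    partial = sum(1 / math.factorial(k) for k in range(n + 1))
    lo = 1 / math.factorial(n + 1)
    hi = 3 / math.factorial(n + 1)
    print(n, partial, lo <= math.e - partial <= hi)  # prints True for both
```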

Subsection 3.6.3 Optional — Linking \(e^x\) with trigonometric functions

Let us return to the observation that we made earlier about the Maclaurin series for sine, cosine and the exponential functions:
\begin{align*} \cos x + \sin x &= 1 + x - \frac{1}{2!}x^2 - \frac{1}{3!}x^3 + \frac{1}{4!}x^4 + \frac{1}{5!}x^5 - \cdots\\ e^x &= 1 + x + \frac{1}{2!}x^2 + \frac{1}{3!}x^3 + \frac{1}{4!}x^4 + \frac{1}{5!}x^5 + \cdots \end{align*}
We see that these series are identical except for the differences in the signs of the coefficients. Let us try to make them look even more alike by introducing extra constants \(A, B\) and \(q\) into the equations. Consider
\begin{align*} A \cos x + B \sin x &= A + Bx - \frac{A}{2!}x^2 - \frac{B}{3!}x^3 + \frac{A}{4!}x^4 + \frac{B}{5!}x^5 - \cdots\\ e^{q x} &= 1 + qx + \frac{q^2}{2!}x^2 + \frac{q^3}{3!}x^3 + \frac{q^4}{4!}x^4 + \frac{q^5}{5!}x^5 + \cdots \end{align*}
Let's try to choose \(A\text{,}\) \(B\) and \(q\) so that these two expressions are equal. To do so we must make sure that the coefficients of the various powers of \(x\) agree. Looking just at the coefficients of \(x^0\) and \(x^1\text{,}\) we see that we need
\begin{align*} A&=1 & \text{and}&& B&=q \end{align*}
Substituting this into our expansions gives
\begin{align*} \cos x + q\sin x &= 1 + qx - \frac{1}{2!}x^2 - \frac{q}{3!}x^3 + \frac{1}{4!}x^4 + \frac{q}{5!}x^5 - \cdots\\ e^{q x} &= 1 + qx + \frac{q^2}{2!}x^2 + \frac{q^3}{3!}x^3 + \frac{q^4}{4!}x^4 + \frac{q^5}{5!}x^5 + \cdots \end{align*}
Now the coefficients of \(x^0\) and \(x^1\) agree, but the coefficient of \(x^2\) tells us that we need \(q\) to be a number so that \(q^2 =-1\text{,}\) or
\begin{align*} q &= \sqrt{-1} \end{align*}
We know that no such real number \(q\) exists. But for the moment let us see what happens if we just assume that we can find \(q\) so that \(q^2=-1\text{.}\) Then we will have that
\begin{align*} q^3 &= -q & q^4 &= 1 & q^5 &= q & \cdots \end{align*}
so that the series for \(\cos x + q\sin x\) and \(e^{q x}\) are identical. That is
\begin{align*} e^{qx} &= \cos x + q\sin x \end{align*}
If we now write this with the more usual notation \(q=\sqrt{-1}=i\) we arrive at what is now known as Euler's formula
\begin{align*} e^{ix} &= \cos x + i \sin x \end{align*}
Euler's proof of this formula (in 1740) was based on Maclaurin expansions (much like our explanation above). Euler's formula is widely regarded as one of the most important and beautiful in all of mathematics.
Of course having established Euler's formula one can find slicker demonstrations. For example, let
\begin{align*} f(x) &= e^{-ix} \left(\cos x + i\sin x \right) \end{align*}
Differentiating (with product and chain rules and the fact that \(i^2=-1\)) gives us
\begin{align*} f'(x) &= -i e^{-ix} \left(\cos x + i\sin x \right) + e^{-ix} \left(-\sin x + i\cos x \right)\\ &= 0 \end{align*}
Since the derivative is zero, the function \(f(x)\) must be a constant. Setting \(x=0\) tells us that
\begin{align*} f(0) &= e^0 \left(\cos 0 + i\sin 0 \right) = 1. \end{align*}
Hence \(f(x)=1\) for all \(x\text{.}\) Rearranging then arrives at
\begin{align*} e^{ix} &= \cos x + i \sin x \end{align*}
as required.
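Python's complex arithmetic gives a quick numerical spot check of Euler's formula at a few sample points (a sanity check, of course, not a proof):

```python
import cmath
import math

for x in (0.5, 1.0, math.pi):
    lhs = cmath.exp(1j * x)                  # e^{ix}
    rhs = complex(math.cos(x), math.sin(x))  # cos x + i sin x
    print(x, lhs, abs(lhs - rhs) < 1e-15)
```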
Substituting \(x=\pi\) into Euler's formula we get Euler's identity
\begin{align*} e^{i \pi} &= -1 \end{align*}
which is more often stated
\begin{align*} e^{i \pi} + 1 &= 0 \end{align*}
which links the 5 most important constants in mathematics, \(1,0,\pi,e\) and \(\sqrt{-1}\text{.}\)

Subsection 3.6.4 Evaluating Limits using Taylor Expansions

Taylor polynomials provide a good way to understand the behaviour of a function near a specified point and so are useful for evaluating complicated limits. Here are some examples.

Example 3.6.22.

In this example, we'll start with a relatively simple limit, namely
\begin{equation*} \lim_{x\rightarrow 0}\frac{\sin x}{x} \end{equation*}
The first thing to notice about this limit is that, as \(x\) tends to zero, both the numerator, \(\sin x\text{,}\) and the denominator, \(x\text{,}\) tend to \(0\text{.}\) So we may not evaluate the limit of the ratio by simply dividing the limits of the numerator and denominator. To find the limit, or show that it does not exist, we are going to have to exhibit a cancellation between the numerator and the denominator. Let's start by taking a closer look at the numerator. By Example 3.6.6,
\begin{equation*} \sin x = x-\frac{1}{3!}x^3+\frac{1}{5!}x^5 - \cdots \end{equation*}
Consequently
\begin{equation*} \frac{\sin x}{x}=1-\frac{1}{3!}x^2 + \frac{1}{5!}x^4 - \cdots \end{equation*}
Every term in this series, except for the very first term, is proportional to a strictly positive power of \(x\text{.}\) Consequently, as \(x\) tends to zero, all terms in this series, except for the very first term, tend to zero. In fact the sum of all terms, starting with the second term, also tends to zero. That is,
\begin{equation*} \lim_{x\rightarrow 0}\Big[-\frac{1}{3!}x^2 + \frac{1}{5!}x^4 - \cdots\Big] =0 \end{equation*}
We won't justify that statement here, but it will be justified in the following (optional) subsection. So
\begin{align*} \lim_{x\rightarrow 0}\frac{\sin x}{x} & =\lim_{x\rightarrow 0}\Big[1-\frac{1}{3!}x^2 + \frac{1}{5!}x^4 - \cdots\Big]\\ &=1+\lim_{x\rightarrow 0}\Big[-\frac{1}{3!}x^2 + \frac{1}{5!}x^4 - \cdots\Big]\\ &=1 \end{align*}
The limit in the previous example can also be evaluated relatively easily using l'Hôpital's rule. While the following limit can also, in principle, be evaluated using l'Hôpital's rule, it is much more efficient to use Taylor series.

Example 3.6.23.

In this example we evaluate
\begin{equation*} \lim_{x\rightarrow 0}\frac{\arctan x -x}{\sin x-x} \end{equation*}
Once again, the first thing to notice about this limit is that, as \(x\) tends to zero, the numerator tends to \(\arctan 0 -0\text{,}\) which is \(0\text{,}\) and the denominator tends to \(\sin 0-0\text{,}\) which is also \(0\text{.}\) So we may not evaluate the limit of the ratio by simply dividing the limits of the numerator and denominator. Again, to find the limit, or show that it does not exist, we are going to have to exhibit a cancellation between the numerator and the denominator. To get a more detailed understanding of the behaviour of the numerator and denominator near \(x=0\text{,}\) we find their Taylor expansions. By Example 3.5.21,
\begin{equation*} \arctan x = x-\frac{x^3}{3}+\frac{x^5}{5}-\cdots \end{equation*}
so the numerator
\begin{equation*} \arctan x -x = -\frac{x^3}{3}+\frac{x^5}{5}-\cdots \end{equation*}
By Example 3.6.6,
\begin{equation*} \sin x = x-\frac{1}{3!}x^3+\frac{1}{5!}x^5 - \cdots \end{equation*}
so the denominator
\begin{equation*} \sin x -x = -\frac{1}{3!}x^3+\frac{1}{5!}x^5 - \cdots \end{equation*}
and the ratio
\begin{equation*} \frac{\arctan x -x}{\sin x - x} = \frac{-\frac{x^3}{3}+\frac{x^5}{5}-\cdots} {-\frac{1}{3!}x^3+\frac{1}{5!}x^5 - \cdots} \end{equation*}
Notice that every term in both the numerator and the denominator contains a common factor of \(x^3\text{,}\) which we can cancel out.
\begin{equation*} \frac{\arctan x -x}{\sin x - x} = \frac{-\frac{1}{3}+\frac{x^2}{5}-\cdots} {-\frac{1}{3!}+\frac{1}{5!}x^2 - \cdots} \end{equation*}
As \(x\) tends to zero,
  • the numerator tends to \(-\frac{1}{3}\text{,}\) which is not \(0\text{,}\) and
  • the denominator tends to \(-\frac{1}{3!}=-\frac{1}{6}\text{,}\) which is also not \(0\text{.}\)
so we may now legitimately evaluate the limit of the ratio by simply dividing the limits of the numerator and denominator.
\begin{align*} \lim_{x\rightarrow 0}\frac{\arctan x -x}{\sin x-x} &=\lim_{x\rightarrow 0} \frac{-\frac{1}{3}+\frac{x^2}{5}-\cdots} {-\frac{1}{3!}+\frac{1}{5!}x^2 - \cdots}\\ &=\frac{\lim_{x\rightarrow 0} \big[-\frac{1}{3}+\frac{x^2}{5}-\cdots\big]} {\lim_{x\rightarrow 0} \big[-\frac{1}{3!}+\frac{1}{5!}x^2 - \cdots\big]}\\ &=\frac{-\frac{1}{3}}{-\frac{1}{3!}}\\ &=2 \end{align*}
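The answer \(2\) can also be seen numerically by evaluating the original ratio at a few small values of \(x\) (a quick check, ours).

```python
import math

for x in (0.1, 0.01, 0.001):
    print(x, (math.atan(x) - x) / (math.sin(x) - x))  # tends to 2
```

Beware of taking \(x\) too small, though: numerator and denominator are each roughly a constant times \(x^3\text{,}\) and for tiny \(x\) the subtractions lose most of their significant digits to rounding.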

Subsection 3.6.5 Optional — The Big O Notation

In Example 3.6.22 we used, without justification, that, as \(x\) tends to zero, not only does every term in
\begin{equation*} \frac{\sin x}{x}-1 = -\frac{1}{3!}x^2 + \frac{1}{5!}x^4 - \cdots =\sum_{n=1}^\infty (-1)^n\frac{1}{(2n+1)!}x^{2n} \end{equation*}
converge to zero, but in fact the sum of all infinitely many terms also converges to zero. We did something similar twice in Example 3.6.23; once in computing the limit of the numerator and once in computing the limit of the denominator.
We'll now develop some machinery that provides the justification. We start by recalling, from equation 3.6.1, that if, for some natural number \(n\text{,}\) the function \(f(x)\) has \(n+1\) derivatives near the point \(a\text{,}\) then
\begin{equation*} f(x) =T_n(x) +E_n(x) \end{equation*}
where
\begin{equation*} T_n(x)=f(a)+f'(a)\,(x-a)+\cdots+\tfrac{1}{n!}f^{(n)}(a)\, (x-a)^n \end{equation*}
is the Taylor polynomial of degree \(n\) for the function \(f(x)\) and expansion point \(a\) and
\begin{equation*} E_n(x)=f(x)-T_n(x)=\tfrac{1}{(n+1)!}f^{(n+1)}(c)\, (x-a)^{n+1} \end{equation*}
is the error introduced when we approximate \(f(x)\) by the polynomial \(T_n(x)\text{.}\) Here \(c\) is some unknown number between \(a\) and \(x\text{.}\) As \(c\) is not known, we do not know exactly what the error \(E_n(x)\) is. But that is usually not a problem.
In the present context we are interested in taking the limit as \(x \to a\text{.}\) So we are only interested in \(x\)-values that are very close to \(a\text{,}\) and because \(c\) lies between \(x\) and \(a\text{,}\) \(c\) is also very close to \(a\text{.}\) Now, as long as \(f^{(n+1)}(x)\) is continuous at \(a\text{,}\) as \(x \to a\text{,}\) \(f^{(n+1)}(c)\) must approach \(f^{(n+1)}(a)\) which is some finite value. This, in turn, means that there must be constants \(M,D \gt 0\) such that \(\big|f^{(n+1)}(c)\big|\le M\) for all \(c\)'s within a distance \(D\) of \(a\text{.}\) If so, there is another constant \(C\) (namely \(\tfrac{M}{(n+1)!}\)) such that
\begin{equation*} \big|E_n(x)\big|\le C |x-a|^{n+1}\qquad\hbox{whenever }|x-a|\le D \end{equation*}
There is some notation for this behaviour.

Definition 3.6.24. Big O.

Let \(a\) and \(m\) be real numbers. We say that the function “\(g(x)\) is of order \(|x-a|^m\) near \(a\)” and we write \(g(x)=O\big(|x-a|^m\big)\) if there exist constants \(C,D \gt 0\) such that
\begin{gather} \big|g(x)\big|\le C |x-a|^m\qquad\hbox{whenever }|x-a|\le D\tag{✶} \end{gather}
Whenever \(O\big(|x-a|^m\big)\) appears in an algebraic expression, it just stands for some (unknown) function \(g(x)\) that obeys (✶). This is called “big O” notation.
How should we parse the big O notation when we see it? Consider the following
\begin{align*} g(x) &= O( |x-3|^2 ) \end{align*}
First of all, we know from the definition that the notation only tells us something about \(g(x)\) for \(x\) near the point \(a\text{.}\) The equation above contains “\(O(|x-3|^2)\)” which tells us something about what the function looks like when \(x\) is close to \(3\text{.}\) Further, because it is “\(|x-3|\)” squared, it says that the graph of the function lies below the parabola \(y=C(x-3)^2\) and above the parabola \(y=-C(x-3)^2\) near \(x=3\text{.}\) The notation doesn't tell us anything more than this — we don't know, for example, whether the graph of \(g(x)\) is concave up or concave down. It also tells us that the Taylor expansion of \(g(x)\) around \(x=3\) does not contain any constant or linear term — the first nonzero term in the expansion is of degree at least two. For example, all of the following functions are \(O(|x-3|^2)\text{.}\)
\begin{gather*} 5(x-3)^2 + 6(x-3)^3,\qquad -7(x-3)^2 - 8(x-3)^4,\qquad (x-3)^3,\qquad (x-3)^{\frac{5}{2}} \end{gather*}
In the next few examples we will rewrite a few of the Taylor polynomials that we know using this big O notation.

Example 3.6.25.

Let \(f(x)=\sin x\) and \(a=0\text{.}\) Then
\begin{align*} f(x)&=\sin x & f'(x)&=\cos x & f''(x)&=-\sin x & f^{(3)}(x)&=-\cos x &\\ f(0)&=0 & f'(0)&=1 & f''(0)&=0 & f^{(3)}(0)&=-1 &\\ f^{(4)}(x)&=\sin x & &\cdots\\ f^{(4)}(0)&=0 & &\cdots \end{align*}
and the pattern repeats. So every derivative is plus or minus either sine or cosine and, as we saw in previous examples, this makes analysing the error term for the sine and cosine series quite straightforward. In particular, \(\big|f^{(n+1)}(c)\big|\le 1\) for all real numbers \(c\) and all natural numbers \(n\text{.}\) So the Taylor polynomial of, for example, degree 3 and its error term are
\begin{align*} \sin x &= x-\tfrac{1}{3!}x^3+\tfrac{\cos c}{5!} x^5\\ &= x-\tfrac{1}{3!}x^3+O(|x|^5) \end{align*}
under Definition 3.6.24, with \(C=\tfrac{1}{5!}\) and any \(D \gt 0\text{.}\) Similarly, for any natural number \(n\text{,}\)
\begin{align*} \sin x &= x-\tfrac{1}{3!}x^3+\cdots+(-1)^n\tfrac{1}{(2n+1)!}x^{2n+1} +O\big(|x|^{2n+3}\big)\\ \cos x &= 1-\tfrac{1}{2!}x^2+\cdots+(-1)^n\tfrac{1}{(2n)!}x^{2n} +O\big(|x|^{2n+2}\big) \end{align*}
When we studied the error in the expansion of the exponential function (way back in optional Example 3.6.8), we had to go to some length to understand the behaviour of the error term well enough to prove convergence for all numbers \(x\text{.}\) However, in the big O notation, we are free to assume that \(x\) is close to \(0\text{.}\) Furthermore we do not need to derive an explicit bound on the size of the coefficient \(C\text{.}\) This makes it quite a bit easier to verify that the big O notation is correct.
Let \(n\) be any natural number. Since \(\diff{}{x} e^x = e^x\text{,}\) we know that \(\ddiff{k}{}{x}\left\{ e^x \right\} = e^x\) for every integer \(k \geq 0\text{.}\) Thus
\begin{equation*} e^x=1+x+\tfrac{x^2}{2!}+\tfrac{x^3}{3!}+\cdots+\tfrac{x^n}{n!} +\tfrac{e^c}{(n+1)!} x^{n+1} \end{equation*}
for some \(c\) between \(0\) and \(x\text{.}\) If, for example, \(|x|\le 1\text{,}\) then \(|e^c|\le e\text{,}\) so that the error term
\begin{equation*} \big|\tfrac{e^c}{(n+1)!} x^{n+1}\big| \le C|x|^{n+1}\qquad\hbox{ with } C=\tfrac{e}{(n+1)!}\qquad\hbox{ whenever }|x|\le 1 \end{equation*}
So, under Definition 3.6.24, with \(C=\tfrac{e}{(n+1)!}\) and \(D=1\text{,}\)
\begin{gather*} e^x=1+x+\tfrac{x^2}{2!}+\tfrac{x^3}{3!}+\cdots+\tfrac{x^n}{n!} +O\big(|x|^{n+1}\big) \end{gather*}
You can see that, because we only have to consider \(x\)'s that are close to the expansion point (in this example, \(0\)) it is relatively easy to derive the bounds that are required to justify the use of the big O notation.
Let \(f(x)=\log(1+x)\) and \(a=0\text{.}\) Then
\begin{align*} f'(x)&=\tfrac{1}{1+x} & f''(x)&=-\tfrac{1}{(1+x)^2} & f^{(3)}(x)&=\tfrac{2}{(1+x)^3} &\\ f'(0)&=1 & f''(0)&=-1 & f^{(3)}(0)&=2 &\\ f^{(4)}(x)&=-\tfrac{2\times 3}{(1+x)^4} & f^{(5)}(x)&=\tfrac{2\times 3\times 4}{(1+x)^5}\\ f^{(4)}(0)&=-3! & f^{(5)}(0)&=4! \end{align*}
We can see a pattern for \(f^{(n)}(x)\) forming here — \(f^{(n)}(x)\) is a sign times a ratio with
  • the sign being \(+\) when \(n\) is odd and being \(-\) when \(n\) is even. So the sign is \((-1)^{n-1}\text{.}\)
  • The denominator is \((1+x)^n\text{.}\)
  • The numerator is the product \(2\times 3\times 4\times \cdots \times (n-1) = (n-1)!\text{.}\)
Thus 26, for any natural number \(n\text{,}\)
\begin{align*} f^{(n)}(x)&=(-1)^{n-1}\tfrac{(n-1)!}{(1+x)^n} & \text{which means that}\\ \tfrac{1}{n!}f^{(n)}(0)\,x^n &= (-1)^{n-1}\tfrac{(n-1)!}{n!}x^n = (-1)^{n-1}\tfrac{x^n}{n} \end{align*}
so
\begin{equation*} \log(1+x) = x-\tfrac{x^2}{2}+\tfrac{x^3}{3}-\cdots +(-1)^{n-1}\tfrac{x^n}{n} +E_n(x) \end{equation*}
with
\begin{equation*} E_n(x)=\tfrac{1}{(n+1)!}f^{(n+1)}(c)\, (x-a)^{n+1} = \tfrac{1}{n+1} \cdot \tfrac{(-1)^n}{(1+c)^{n+1}} \cdot x^{n+1} \end{equation*}
If we choose, for example, \(D=\half\text{,}\) then 27 for any \(x\) obeying \(|x|\le D=\half\text{,}\) we have \(|c|\le\half\) and \(|1+c|\ge\half\) so that
\begin{equation*} |E_n(x)|\le \tfrac{1}{(n+1)(1/2)^{n+1}}|x|^{n+1} = O\big(|x|^{n+1}\big) \end{equation*}
under Definition 3.6.24, with \(C=\tfrac{2^{n+1}}{n+1}\) and \(D=\half\text{.}\) Thus we may write
\begin{gather*} \log(1+x) = x-\frac{x^2}{2}+\frac{x^3}{3}-\cdots+(-1)^{n-1}\frac{x^n}{n}+O\big(|x|^{n+1}\big) \tag{3.6.30} \end{gather*}
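Once more, this is easy to spot-check numerically (the script is ours). Taking \(n=2\) in the statement above, we expect \(\big|\log(1+x)-\big(x-\frac{x^2}{2}\big)\big|\le \frac{2^3}{3}|x|^3\) whenever \(|x|\le\half\text{:}\)

import math

# n = 2: |log(1+x) - (x - x^2/2)| should be at most (8/3)|x|^3 on |x| <= 1/2.
C = 2**3 / 3
ok = True
for k in range(-500, 501):
    x = k / 1000
    ok = ok and abs(math.log1p(x) - (x - x**2 / 2)) <= C * abs(x) ** 3
print(ok)  # expect True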

Remark 3.6.31.

The big O notation has a few properties that are useful in computations and taking limits. All follow immediately from Definition 3.6.24; a small worked instance combining them appears after the list.
  (a) If \(p \gt 0\text{,}\) then
    \begin{equation*} \lim\limits_{x\rightarrow 0} O(|x|^p)=0 \end{equation*}
  (b) For any real numbers \(p\) and \(q\text{,}\)
    \begin{equation*} O(|x|^p)\ O(|x|^q)=O(|x|^{p+q}) \end{equation*}
    (This is just because \(C|x|^p\times C'|x|^q= (CC')|x|^{p+q}\text{.}\)) In particular,
    \begin{equation*} ax^m\,O(|x|^p)=O(|x|^{p+m}) \end{equation*}
    for any constant \(a\) and any integer \(m\text{.}\)
  (c) For any real numbers \(p\) and \(q\text{,}\)
    \begin{equation*} O(|x|^p) + O(|x|^q)=O(|x|^{\min\{p,q\}}) \end{equation*}
    (For example, if \(p=2\) and \(q=5\text{,}\) then \(C|x|^2+C'|x|^5 =\big(C+C'|x|^3\big) |x|^2\le (C+C')|x|^2\) whenever \(|x|\le 1\text{.}\))
  (d) For any real numbers \(p\) and \(q\) with \(p \gt q\text{,}\) any function which is \(O(|x|^p)\) is also \(O(|x|^q)\) because \(C|x|^p= C|x|^{p-q}|x|^q\le C|x|^q\) whenever \(|x|\le 1\text{.}\)
  (e) All of the above observations also hold for more general expressions with \(|x|\) replaced by \(|x-a|\text{,}\) i.e. for \(O(|x-a|^p)\text{.}\) The only difference is that in (a) we must take the limit as \(x \to a\) instead of \(x\to 0\text{.}\)
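As a small worked instance of these rules (our addition), here is the sort of bookkeeping that the examples below do repeatedly; the first equality uses (b) and the second uses (c):
\begin{gather*} x^2\,O(|x|^3) + O(|x|^6) = O(|x|^5) + O(|x|^6) = O(|x|^5) \end{gather*}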

Subsection 3.6.6 Optional — Evaluating Limits Using Taylor Expansions — More Examples

Example 3.6.32.

In this example, we'll return to the limit
\begin{equation*} \lim_{x\rightarrow 0}\frac{\sin x}{x} \end{equation*}
of Example 3.6.22 and treat it more carefully. By Example 3.6.25,
\begin{equation*} \sin x = x-\frac{1}{3!}x^3+O(|x|^5) \end{equation*}
That is, for small \(x\text{,}\) \(\sin x\) is the same as \(x-\frac{1}{3!}x^3\text{,}\) up to an error that is bounded by some constant times \(|x|^5\text{.}\) So, dividing by \(x\text{,}\) \(\frac{\sin x}{x}\) is the same as \(1-\frac{1}{3!}x^2\text{,}\) up to an error that is bounded by some constant times \(x^4\) — see Remark 3.6.31(b). That is
\begin{equation*} \frac{\sin x}{x}=1-\frac{1}{3!}x^2+O(x^4) \end{equation*}
But any function that is bounded by some constant times \(x^4\) (for all \(|x|\) smaller than some constant \(D \gt 0\)) necessarily tends to \(0\) as \(x\rightarrow 0\) — see Remark 3.6.31(a). Thus
\begin{equation*} \lim_{x\rightarrow 0}\frac{\sin x}{x} =\lim_{x\rightarrow 0}\Big[1-\frac{1}{3!}x^2+O(x^4)\Big] =\lim_{x\rightarrow 0}\Big[1-\frac{1}{3!}x^2\Big] =1 \end{equation*}
Reviewing the above computation, we see that we did a little more work than we had to. It wasn't necessary to keep track of the \(-\frac{1}{3!}x^3\) contribution to \(\sin x\) so carefully. We could have just said that
\begin{equation*} \sin x = x+O(|x|^3) \end{equation*}
so that
\begin{equation*} \lim_{x\rightarrow 0}\frac{\sin x}{x} =\lim_{x\rightarrow 0}\frac{x+O(|x|^3)}{x} =\lim_{x\rightarrow 0}\big[1+O(x^2)\big] =1 \end{equation*}
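Both forms of this computation are easy to look at numerically; here is a quick check (ours, not part of the text) of the more detailed one:

import math

# sin(x)/x should approach 1, and (sin(x)/x - 1)/x^2 should approach
# -1/3! = -0.1666..., matching sin(x)/x = 1 - x^2/3! + O(x^4).
for x in [0.1, 0.01, 0.001]:
    r = math.sin(x) / x
    print(x, r, (r - 1) / x**2)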
We'll spend a little time in the later, more complicated, examples learning how to choose the number of terms we keep in our Taylor expansions so as to make our computations as efficient as possible.
In this example, we'll use the Taylor polynomial of Example 3.6.29 to evaluate \(\lim\limits_{x\rightarrow 0}\tfrac{\log(1+x)}{x}\) and \(\lim\limits_{x\rightarrow 0}(1+x)^{a/x}\text{.}\) The Taylor expansion of equation 3.6.30 with \(n=1\) tells us that
\begin{equation*} \log(1+x)=x+O(|x|^2) \end{equation*}
That is, for small \(x\text{,}\) \(\log(1+x)\) is the same as \(x\text{,}\) up to an error that is bounded by some constant times \(x^2\text{.}\) So, dividing by \(x\text{,}\) \(\frac{1}{x}\log(1+x)\) is the same as \(1\text{,}\) up to an error that is bounded by some constant times \(|x|\text{.}\) That is
\begin{equation*} \frac{1}{x}\log(1+x)=1+O(|x|) \end{equation*}
But any function that is bounded by some constant times \(|x|\text{,}\) for all \(|x|\) smaller than some constant \(D \gt 0\text{,}\) necessarily tends to \(0\) as \(x\rightarrow 0\text{.}\) Thus
\begin{equation*} \lim_{x\rightarrow 0}\frac{\log(1+x)}{x} =\lim_{x\rightarrow 0}\frac{x+O(|x|^2)}{x} =\lim_{x\rightarrow 0}\big[1+O(|x|)\big] =1 \end{equation*}
We can now use this limit to evaluate
\begin{gather*} \lim_{x\rightarrow 0}(1+x)^{a/x}. \end{gather*}
Now, we could either evaluate the limit of the logarithm of this expression, or we could carefully rewrite the expression as \(e^\mathrm{(something)}\text{.}\) Let us do the latter.
\begin{align*} \lim_{x\rightarrow 0}(1+x)^{a/x} &=\lim_{x\rightarrow 0}e^{\frac{a}{x} \log(1+x) }\\ &=\lim_{x\rightarrow 0}e^{\frac{a}{x}[x+O(|x|^2)]}\\ &=\lim_{x\rightarrow 0}e^{a+O(|x|)} =e^a \end{align*}
Here we have used that if \(F(x)=O(|x|^2)\) then \(\frac{a}{x} F(x) = O(|x|)\) — see Remark 3.6.31(b). We have also used that the exponential is continuous — as \(x\) tends to zero, the exponent of \(e^{a+O(|x|)}\) tends to \(a\text{,}\) so that \(e^{a+O(|x|)}\) tends to \(e^a\) — see Remark 3.6.31(a).
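A quick numerical look (ours), taking \(a=2\) for concreteness, agrees:

import math

# (1+x)^(a/x) with a = 2 should approach e^2 = 7.389056... as x -> 0.
a = 2
for x in [0.1, 0.01, 0.001, 0.0001]:
    print(x, (1 + x) ** (a / x))
print("e^a =", math.exp(a))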
In this example, we'll evaluate 28  the harder limit
\begin{equation*} \lim_{x\rightarrow 0}\frac{\cos x -1 + \half x\sin x}{{[\log(1+x)]}^4} \end{equation*}
The first thing to notice about this limit is that, as \(x\) tends to zero, the numerator
\begin{align*} \cos x -1 + \half x\sin x &\to \cos 0 -1 +\half\cdot 0\cdot\sin 0=0 \end{align*}
and the denominator
\begin{align*} [\log(1+x)]^4 & \to [\log(1+0)]^4=0 \end{align*}
too. So both the numerator and denominator tend to zero and we may not simply evaluate the limit of the ratio by taking the limits of the numerator and denominator and dividing.
To find the limit, or show that it does not exist, we are going to have to exhibit a cancellation between the numerator and the denominator. To develop a strategy for evaluating this limit, let's do a “little scratch work”, starting by taking a closer look at the denominator. By Example 3.6.29,
\begin{gather*} \log(1+x) = x+O(x^2) \end{gather*}
This tells us that \(\log(1+x)\) looks a lot like \(x\) for very small \(x\text{.}\) So the denominator \([x+O(x^2)]^4\) looks a lot like \(x^4\) for very small \(x\text{.}\) Now, what about the numerator?
  • If the numerator looks like some constant times \(x^p\) with \(p \gt 4\text{,}\) for very small \(x\text{,}\) then the ratio will look like the constant times \(\frac{x^p}{x^4}=x^{p-4}\) and, as \(p-4 \gt 0\text{,}\) will tend to \(0\) as \(x\) tends to zero.
  • If the numerator looks like some constant times \(x^p\) with \(p \lt 4\text{,}\) for very small \(x\text{,}\) then the ratio will look like the constant times \(\frac{x^p}{x^4}=x^{p-4}\) and will, as \(p-4 \lt 0\text{,}\) blow up in magnitude as \(x\) tends to zero, so that the limit does not exist.
  • If the numerator looks like \(Cx^4\text{,}\) for very small \(x\text{,}\) then the ratio will look like \(\frac{Cx^4}{x^4}=C\) and will tend to \(C\) as \(x\) tends to zero.
The moral of the above “scratch work” is that we need to know the behaviour of the numerator, for small \(x\text{,}\) up to order \(x^4\text{.}\) Any contributions of order \(x^p\) with \(p \gt 4\) may be put into error terms \(O(|x|^p)\text{.}\)
Now we are ready to evaluate the limit. Because the expressions are a little involved, we will simplify the numerator and denominator separately and then put things together. Using the expansions we developed in Example 3.6.25, the numerator,
\begin{align*} \cos x -1 + \frac{1}{2} x\sin x &= \left( 1 - \frac{1}{2!}x^2 + \frac{1}{4!}x^4 + O(|x|^6) \right)\\ &\phantom{=\ } -1 + \frac{x}{2}\left( x - \frac{1}{3!}x^3 + O(|x|^5) \right) & \text{expand}\\ &= \left( \frac{1}{24}-\frac{1}{12} \right)x^4 + O(|x|^6) + \frac{x}{2} O(|x|^5)\\ \end{align*}

Then by Remark 3.6.31(b)

\begin{align*} &= - \frac{1}{24}x^4 + O(|x|^6) + O(|x|^6)\\ \end{align*}

and now by Remark 3.6.31(c)

\begin{align*} &= - \frac{1}{24}x^4 + O(|x|^6) \end{align*}
Similarly, using the expansion that we developed in Example 3.6.29,
\begin{align*} [ \log(1+x) ]^4 &= \big[ x + O(|x|^2) \big]^4\\ &= \big[x + x O(|x|)\big]^4 & \text{by Remark 3.6.31(b)}\\ &= x^4 [1 + O(|x|)]^4 \end{align*}
Now put these together and take the limit as \(x \to 0\text{:}\)
\begin{align*} \lim_{x \to 0} \frac{\cos x -1 + \half x\sin x}{[\log(1+x)]^4} &= \lim_{x \to 0} \frac{ -\frac{1}{24}x^4 + O(|x|^6)}{x^4 [1+O(|x|)]^4}\\ &= \lim_{x \to 0} \frac{-\frac{1}{24}x^4 + x^4O(|x|^2)}{x^4 [1+O(|x|)]^4} & \text{by Remark 3.6.31(b)}\\ &= \lim_{x \to 0} \frac{-\frac{1}{24} + O(|x|^2)}{[1+O(|x|)]^4}\\ &= -\frac{1}{24} & \text{by Remark 3.6.31(a)}. \end{align*}
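As a sanity check (ours) on all of that bookkeeping:

import math

# The ratio should creep toward -1/24 = -0.041666... as x -> 0.  We stop at
# x = 0.001: the numerator behaves like -x^4/24, so for much smaller x it
# drowns in floating-point cancellation.
for x in [0.1, 0.01, 0.001]:
    num = math.cos(x) - 1 + 0.5 * x * math.sin(x)
    den = math.log1p(x) ** 4
    print(x, num / den)
print("limit:", -1 / 24)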
The next two limits have much the same flavour as those above — expand the numerator and denominator to high enough order, do some cancellations and then take the limit. We have increased the difficulty a little by introducing “expansions of expansions”.
In this example, we'll evaluate another harder limit, namely
\begin{equation*} \lim_{x\rightarrow 0}\frac{\log\big(\frac{\sin x}{x}\big)}{x^2} \end{equation*}
The first thing to notice about this limit is that, as \(x\) tends to zero, the denominator \(x^2\) tends to \(0\text{.}\) So, yet again, to find the limit, we are going to have to show that the numerator also tends to \(0\) and we are going to have to exhibit a cancellation between the numerator and the denominator.
Because the denominator is \(x^2\) any terms in the numerator, \(\log\big(\frac{\sin x}{x}\big)\) that are of order \(x^3\) or higher will contribute terms in the ratio \(\frac{\log(\frac{\sin x}{x})}{x^2}\) that are of order \(x\) or higher. Those terms in the ratio will converge to zero as \(x\rightarrow 0\text{.}\) The moral of this discussion is that we need to compute \(\log\frac{\sin x}{x}\) to order \(x^2\) with errors of order \(x^3\text{.}\) Now we saw, in Example 3.6.32, that
\begin{equation*} \frac{\sin x}{x}=1-\frac{1}{3!}x^2+O(x^4) \end{equation*}
We also saw, in equation 3.6.30 with \(n=1\text{,}\) that
\begin{equation*} \log(1+X) = X +O(X^2) \end{equation*}
Substituting 29  \(X= -\frac{1}{3!}x^2+O(x^4)\text{,}\) and using that \(X^2=O(x^4)\) (by Remark 3.6.31(b,c)), we have that the numerator
\begin{equation*} \log\Big(\frac{\sin x}{x}\Big) =\log(1+X) = X +O(X^2) =-\frac{1}{3!}x^2+O(x^4) \end{equation*}
and the limit
\begin{align*} \lim_{x\rightarrow 0}\frac{\log\big(\frac{\sin x}{x}\big)}{x^2} &=\lim_{x\rightarrow 0}\frac{-\frac{1}{3!}x^2+O(x^4)}{x^2} =\lim_{x\rightarrow 0}\Big[-\frac{1}{3!}+O(x^2)\Big] =-\frac{1}{3!}\\ &=-\frac{1}{6} \end{align*}
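Numerically (our check, not the text's):

import math

# log(sin(x)/x) / x^2 should approach -1/6 = -0.1666... as x -> 0.
for x in [0.1, 0.01, 0.001]:
    print(x, math.log(math.sin(x) / x) / x**2)
print("limit:", -1 / 6)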
Evaluate
\begin{equation*} \lim_{x\rightarrow 0}\frac{e^{x^2}-\cos x}{\log(1+x)-\sin x} \end{equation*}
Solution: Step 1: Find the limit of the denominator.
\begin{equation*} \lim_{x\rightarrow 0}\big[\log(1+x)-\sin x\big] =\log(1+0)-\sin 0 =0 \end{equation*}
This tells us that we can't evaluate the limit just by finding the limits of the numerator and denominator separately and then dividing.
Step 2: Determine the leading order behaviour of the denominator near \(x=0\text{.}\) By equations 3.6.30 and 3.6.26,
\begin{align*} \log(1+x) & = x-\tfrac{1}{2}x^2+\tfrac{1}{3}x^3-\cdots\\ \sin x & = x-\tfrac{1}{3!}x^3+\tfrac{1}{5!}x^5-\cdots \end{align*}
Taking the difference of these expansions gives
\begin{equation*} \log(1+x)-\sin x = -\tfrac{1}{2}x^2+\big(\tfrac{1}{3} +\tfrac{1}{3!}\big)x^3 +\cdots \end{equation*}
This tells us that, for \(x\) near zero, the denominator is \(-\tfrac{x^2}{2}\) (that's the leading order term) plus contributions that are of order \(x^3\) and smaller. That is
\begin{equation*} \log(1+x)-\sin x = -\tfrac{x^2}{2}+ O(|x|^3) \end{equation*}
Step 3: Determine the behaviour of the numerator near \(x=0\) to order \(x^2\) with errors of order \(x^3\) and smaller (just like the denominator). By equation 3.6.28
\begin{equation*} e^X=1+X+O\big(X^2\big) \end{equation*}
Substituting \(X=x^2\)
\begin{align*} e^{x^2} & = 1+x^2 +O\big(x^4\big)\\ \cos x & = 1-\tfrac{1}{2}x^2+O\big(x^4\big) \end{align*}
by equation 3.6.26. Subtracting, the numerator
\begin{equation*} e^{x^2}-\cos x = \tfrac{3}{2}x^2+O\big(x^4\big) \end{equation*}
Step 4: Evaluate the limit.
\begin{align*} \lim_{x\rightarrow 0}\frac{e^{x^2}-\cos x}{\log(1+x)-\sin x} & =\lim_{x\rightarrow 0}\frac{\frac{3}{2}x^2+O(x^4)} {-\frac{x^2}{2}+ O(|x|^3)}\\ & =\lim_{x\rightarrow 0}\frac{\frac{3}{2}+O(x^2)} {-\frac{1}{2}+ O(|x|)}\\ & =\frac{\frac{3}{2}} {-\frac{1}{2}} =-3 \end{align*}
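And, once more, a quick numerical check (ours) of the answer:

import math

# (e^{x^2} - cos x) / (log(1+x) - sin x) should approach -3 as x -> 0.
for x in [0.1, 0.01, 0.001]:
    num = math.exp(x**2) - math.cos(x)
    den = math.log1p(x) - math.sin(x)
    print(x, num / den)
print("limit:", -3)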

Exercises 3.6.8 Exercises

Exercises — Stage 1.

1.
Below is a graph of \(y=f(x)\text{,}\) along with the constant approximation, linear approximation, and quadratic approximation centred at \(a=2\text{.}\) Which is which?
2.
Suppose \(T(x)\) is the Taylor series for \(f(x)=\arctan^3\left(e^x+7\right)\) centred at \(a=5\text{.}\) What is \(T(5)\text{?}\)
3.
Below are a list of common functions, and their Taylor series representations. Match the function to the Taylor series and give the radius of convergence of the series.
function series
A. \(\dfrac{1}{1-x}\) I. \(\displaystyle\sum_{n=0}^\infty(-1)^n\dfrac{x^{n+1}}{n+1}\)
B. \(\log(1+x)\) II. \(\displaystyle\sum_{n=0}^\infty(-1)^n\dfrac{x^{2n+1}}{(2n+1)!}\)
C. \(\arctan x\) III. \(\displaystyle\sum_{n=0}^\infty(-1)^n\dfrac{x^{2n}}{(2n)!}\)
D. \(e^x\) IV. \(\displaystyle\sum_{n=0}^\infty(-1)^n\dfrac{x^{2n+1}}{2n+1}\)
E. \(\sin x\) V. \(\displaystyle\sum_{n=0}^\infty x^n\)
F. \(\cos x\) VI. \(\displaystyle\sum_{n=0}^\infty \frac{x^n}{n!}\)
4.
  1. Suppose \(f(x)=\displaystyle\sum_{n=0}^\infty \frac{n^2}{(n!+1)}(x-3)^n\) for all real \(x\text{.}\) What is \(f^{(20)}(3)\) (the twentieth derivative of \(f(x)\) at \(x=3\))?
  2. Suppose \(g(x)=\displaystyle\sum_{n=0}^\infty \frac{n^2}{(n!+1)}(x-3)^{2n}\) for all real \(x\text{.}\) What is \(g^{(20)}(3)\text{?}\)
  3. If \(h(x)=\dfrac{\arctan(5x^2)}{x^4}\text{,}\) what is \(h^{(20)}(0)\text{?}\) What is \(h^{(22)}(0)\text{?}\)

Exercises — Stage 2.

In Questions 5 through 8, you will create Taylor series from scratch. In practice, it is often preferable to modify an existing series, rather than creating a new one, but you should understand both ways.
5.
Using the definition of a Taylor series, find the Taylor series for \(f(x)=\log(x)\) centred at \(x=1\text{.}\)
6.
Find the Taylor series for \(f(x)=\sin x\) centred at \(a=\pi\text{.}\)
7.
Using the definition of a Taylor series, find the Taylor series for \(g(x)=\dfrac{1}{x}\) centred at \(x=10\text{.}\) What is the interval of convergence of the resulting series?
8.
Using the definition of a Taylor series, find the Taylor series for \(h(x)=e^{3x}\) centred at \(x=a\text{,}\) where \(a\) is some constant. What is the radius of convergence of the resulting series?

Exercise Group.

In Questions 9 through 16, practice creating new Taylor series by modifying known Taylor series, rather than creating your series from scratch.
9. (✳).
Find the Maclaurin series for \(f(x) = \dfrac{1}{2x-1}\text{.}\)
10. (✳).
Let \(\displaystyle\sum\limits_{n=0}^\infty b_nx^n\) be the Maclaurin series for \(\displaystyle f(x) = \frac{3}{x+1} - \frac{1}{2x-1}\text{,}\)
i.e. \(\displaystyle\sum\limits_{n=0}^\infty b_nx^n = \frac{3}{x+1} - \frac{1}{2x-1}\text{.}\)
Find \(b_n\text{.}\)
11. (✳).
Find the coefficient \(c_5\) of the fifth degree term in the Maclaurin series \(\displaystyle\sum_{n=0}^\infty c_nx^n\) for \(e^{3x}\text{.}\)
12. (✳).
Express the Taylor series of the function
\begin{equation*} f(x) = \log(1 + 2x) \end{equation*}
about \(x = 0\) in summation notation.
13. (✳).
The first two terms in the Maclaurin series for \(x^2 \sin(x^3)\) are \(ax^5 + bx^{11}\) , where \(a\) and \(b\) are constants. Find the values of \(a\) and \(b\text{.}\)
14. (✳).
Give the first two nonzero terms in the Maclaurin series for \(\displaystyle{\int \frac{e^{-x^2}-1}{x} \,\dee{x}}\text{.}\)
15. (✳).
Find the Maclaurin series for \(\displaystyle{\int x^4\arctan(2x) \,\dee{x}}\text{.}\)
16. (✳).
Suppose that \(\displaystyle\diff{f}{x}=\frac{x}{1+3x^3}\) and \(f(0)=1\text{.}\) Find the Maclaurin series for \(f(x)\text{.}\)

Exercise Group.

In past chapters, we were only able to exactly evaluate very specific types of series: geometric and telescoping. In Questions 17 through 25, we expand our range by relating given series to Taylor series.
17. (✳).
The Maclaurin series for \(\arctan x\) is given by
\begin{equation*} \arctan x = \sum_{n=0}^\infty (-1)^n\frac{x^{2n+1}}{2n+1} \end{equation*}
which has radius of convergence equal to \(1\text{.}\) Use this fact to compute the exact value of the series below:
\begin{equation*} \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1) 3^n} \end{equation*}
18. (✳).
Evaluate \({\displaystyle\sum_{n=0}^\infty\frac{(-1)^n}{n!}}\,\text{.}\)
19. (✳).
Evaluate \({\displaystyle\sum_{k=0}^\infty\frac{1}{e^k k!}}\,\text{.}\)
20. (✳).
Evaluate the sum of the convergent series \({\displaystyle\sum_{k=1}^\infty\frac{1}{\pi^k k!}}\,\text{.}\)
21. (✳).
Evaluate \({\displaystyle\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n\, 2^n}}\,\text{.}\)
22. (✳).
Evaluate \({\displaystyle\sum_{n=1}^\infty\frac{n+2}{n!}e^n}\,\text{.}\)
23.
Evaluate \(\displaystyle\sum_{n=1}^\infty \frac{2^n}{n}\text{,}\) or show that it diverges.
24.
Evaluate
\begin{equation*} \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!}\left(\frac{\pi}{4} \right)^{2n+1}\left(1+2^{2n+1} \right) \end{equation*}
or show that it diverges.
25. (✳).
(a) Show that the power series \(\displaystyle\sum_{n=0}^\infty \frac{x^{2n}}{(2n)!}\) converges absolutely for all real numbers \(x\text{.}\)
(b) Evaluate \(\displaystyle\sum_{n=0}^\infty \frac{1}{(2n)!}\text{.}\)
26.
  1. Using the fact that \(\arctan(1)=\dfrac{\pi}{4}\text{,}\) how many terms of the Taylor series for arctangent would you have to add up to approximate \(\pi\) with an error of at most \(4\times 10^{-5}\text{?}\)
  2. Example 3.6.15 mentions the formula
    \begin{equation*} \pi=16\arctan\frac15-4\arctan\frac{1}{239} \end{equation*}
    Using the Taylor series for arctangent, how many terms would you have to add up to approximate \(\pi\) with an error of at most \(4\times 10^{-5}\text{?}\)
  3. Assume without proof the following:
    \begin{equation*} \arctan\frac12+\arctan\frac13=\arctan\left(\frac{3+2}{2\cdot3-1}\right) \end{equation*}
    Using the Taylor series for arctangent, how many terms would you have to add up to approximate \(\pi\) with an error of at most \(4\times 10^{-5}\text{?}\)
27.
Suppose you wanted to approximate the number \(\log(1.5)\) as a rational number using the Taylor expansion of \(\log(1+x)\text{.}\) How many terms would you need to add to get 10 decimal places of accuracy? (That is, an absolute error less than \(5\times10^{-11}\text{.}\))
28.
Suppose you wanted to approximate the number \(e\) as a rational number using the Maclaurin expansion of \(e^x\text{.}\) How many terms would you need to add to get 10 decimal places of accuracy? (That is, an absolute error less than \(5\times10^{-11}\text{.}\))
You may assume without proof that \(2 \lt e \lt 3\text{.}\)
29.
Suppose you wanted to approximate the number \(\log(0.9)\) as a rational number using the Taylor expansion of \(\log(1-x)\text{.}\) Which partial sum should you use to get 10 decimal places of accuracy? (That is, an absolute error less than \(5\times10^{-11}\text{.}\))
30.
Define the hyperbolic sine function as
\begin{equation*} \sinh x = \frac{e^{x}-e^{-x}}{2}. \end{equation*}
Suppose you wanted to approximate the number \(\sinh(b)\) using the Maclaurin series of \(\sinh x\text{,}\) where \(b\) is some number in \((-2,1)\text{.}\) Which partial sum should you use to guarantee 10 decimal places of accuracy? (That is, an absolute error less than \(5\times10^{-11}\text{.}\))
You may assume without proof that \(2 \lt e \lt 3\text{.}\)
31.
Let \(f(x)\) be a function with
\begin{equation*} f^{(n)}(x)=\frac{(n-1)!}{2}\left[(1-x)^{-n}+(-1)^{n-1}(1+x)^{-n} \right] \end{equation*}
for all \(n \ge 1\text{.}\)
Give reasonable bounds (both upper and lower) on the error involved in approximating \(f\left(-\frac13 \right)\) using the partial sum \(S_6\) of the Taylor series for \(f(x)\) centred at \(a=\frac12\text{.}\)
Remark: One function with this quality is the inverse hyperbolic tangent function 30 .

Exercises — Stage 3.

32. (✳).
Use series to evaluate \(\displaystyle \lim\limits_{x\rightarrow 0}\frac{1-\cos x}{1+x-e^x}\text{.}\)
33. (✳).
Evaluate \(\displaystyle \lim\limits_{x\rightarrow 0}\frac{\sin x -x +\frac{x^3}{6}}{x^5}\text{.}\)
34.
Evaluate \(\displaystyle \lim\limits_{x\rightarrow 0}\left(1+x+x^2\right)^{2/x}\) using a Taylor series for the natural logarithm.
35.
Use series to evaluate
\begin{equation*} \lim_{x \to \infty} \left(1+\frac{1}{2x}\right)^{x} \end{equation*}
36.
Evaluate the series \(\displaystyle\sum_{n=0}^\infty\frac{ (n+1)(n+2)}{7^n}\) or show that it diverges.
37.
Write the series \(f(x)=\displaystyle\sum_{n=0}^\infty\frac{(-1)^nx^{2n+4}}{(2n+1)(2n+2)}\) as a combination of familiar functions.
38.
  1. Find the Maclaurin series for \(f(x) = (1-x)^{-1/2}\text{.}\) What is its radius of convergence?
  2. Manipulate the series you just found to find the Maclaurin series for \(g(x)=\arcsin x\text{.}\) What is its radius of convergence?
39. (✳).
Find the Taylor series for \(f(x) = \log(x)\) centred at \(a = 2\text{.}\) Find the interval of convergence for this series.
40. (✳).
Let \(\displaystyle I(x)=\int_0^x\frac{1}{1+t^4}\ \dee{t}\text{.}\)
  1. Find the Maclaurin series for \(I(x)\text{.}\)
  2. Approximate \(I(1/2)\) to within \(\pm0.0001\text{.}\)
  3. Is your approximation in (b) larger or smaller than the true value of \(I(1/2)\text{?}\) Explain.
41. (✳).
Using a Maclaurin series, the number \(a = 1/5-1/7+1/18\) is found to be an approximation for \(\displaystyle I = \int_0^1 x^4 e^{-x^2}\,\dee{x}\text{.}\) Give the best upper bound you can for \(|I - a|\text{.}\)
42. (✳).
Find an interval of length \(0.0002\) or less that contains the number
\begin{equation*} I=\int_0^{\frac{1}{2}} x^2 e^{-x^2}\ \dee{x} \end{equation*}
43. (✳).
Let \(\displaystyle I(x)=\int_0^x\frac{e^{-t}-1}{t}\,\dee{t}\text{.}\)
  1. Find the Maclaurin series for \(I(x)\text{.}\)
  2. Approximate \(I(1)\) to within \(\pm0.01\text{.}\)
  3. Explain why your answer to part (b) has the desired accuracy.
44. (✳).
The function \(\Si(x)\) is defined by \(\Si(x)=\displaystyle\int_0^x\frac{\sin t}{t}\,\dee{t}\text{.}\)
  1. Find the Maclaurin series for \(\Si(x)\text{.}\)
  2. It can be shown that \(\Si(x)\) has an absolute maximum which occurs at its smallest positive critical point (see the graph of \(\Si(x)\) below). Find this critical point.
  3. Use the previous information to find the maximum value of \(\Si(x)\) to within \(\pm 0.01\text{.}\)
45. (✳).
Let \(\displaystyle I(x)=\int_0^x\frac{\cos t-1}{t^2}\,\dee{t}\text{.}\)
  1. Find the Maclaurin series for \(I(x)\text{.}\)
  2. Use this series to approximate \(I(1)\) to within \(\pm0.01\)
  3. Is your estimate in (b) greater than \(I(1)\text{?}\) Explain.
46. (✳).
Let \(\displaystyle I(x)=\int_0^x\frac{\cos t+t\sin t-1}{t^2}\,\dee{t}\)
  1. Find the Maclaurin series for \(I(x)\text{.}\)
  2. Use this series to approximate \(I(1)\) to within \(\pm0.001\)
  3. Is your estimate in (b) greater than or less than \(I(1)\text{?}\)
47. (✳).
Define \({\displaystyle f(x) = \int_0^x\frac{1-e^{-t}}{t}\ \dee{t}} \text{.}\)
  1. Show that the Maclaurin series for \(f(x)\) is \(\displaystyle\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n\cdot n!} x^n\text{.}\)
  2. Use the ratio test to determine the values of \(x\) for which the Maclaurin series \(\displaystyle\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n\cdot n!} x^n\) converges.
48. (✳).
Show that \(\displaystyle \int_0^1\frac{x^3}{e^x-1}\,\dee{x}\le\frac{1}{3}\text{.}\)
49. (✳).
Let \(\displaystyle \cosh(x) =\frac{e^x+e^{-x}}{2}\text{.}\)
  1. Find the power series expansion of \(\cosh(x)\) about \(x_0 = 0\) and determine its interval of convergence.
  2. Show that \(3\frac{2}{3}\le \cosh(2) \le 3\frac{2}{3} + 0.1\text{.}\)
  3. Show that \(\cosh(t) \le e^{\frac{1}{2}t^2}\) for all \(t\text{.}\)
50.
The law of the instrument says “If you have a hammer then everything looks like a nail” — it is really a description of the “tendency of jobs to be adapted to tools rather than adapting tools to jobs” 31 . Anyway, this is a long way of saying that just because we know how to compute things using Taylor series doesn't mean we should neglect other techniques.
  1. Using Newton's method, approximate the constant \(\sqrt[3]{2}\) as a root of the function \(g(x)=x^3-2\text{.}\) Using a calculator, make your estimation accurate to within 0.01.
  2. You may assume without proof that
    \begin{equation*} \sqrt[3]{x}=1+\frac{1}{6}(x-1)+\sum_{n=2}^\infty(-1)^{n-1}\frac{(2)(5)(8)\cdots(3n-4)}{3^n\, n!}(x-1)^n. \end{equation*}
    for all real numbers \(x\text{.}\) Using the fact that this is an alternating series, how many terms would you have to add for the partial sum to estimate \(\sqrt[3]{2}\) with an error less than 0.01?
51.
Let \(f(x)=\arctan(x^3)\text{.}\) Write \(f^{(10)}\left(\frac{1}{5} \right)\) as a sum of rational numbers with an error less than \(10^{-6}\) using the Maclaurin series for arctangent.
52.
Consider the following function:
\begin{equation*} f(x)=\begin{cases} e^{-1/x^2} & x \neq 0\\ 0 & x=0 \end{cases} \end{equation*}
  1. Sketch \(y=f(x)\text{.}\)
  2. Assume (without proof) that \(f^{(n)}(0)=0\) for all whole numbers \(n\text{.}\) Find the Maclaurin series for \(f(x)\text{.}\)
  3. Where does the Maclaurin series for \(f(x)\) converge?
  4. For which values of \(x\) is \(f(x)\) equal to its Maclaurin series?
53.
Suppose \(f(x)\) is an odd function, and \(f(x)=\displaystyle\sum_{n=0}^\infty\frac{f^{(n)}(0)}{n!}x^n\text{.}\) Simplify \(\displaystyle\sum_{n=0}^\infty \dfrac{f^{(2n)}(0)}{(2n)!}x^{2n}\text{.}\)
Please review your notes from last term if this material is feeling a little unfamiliar.
Did you take a quick look at your notes?
This is probably the most commonly used formula for the error. But there is another fairly commonly used formula. It, and some less commonly used formulae, are given in the next (optional) subsection “More about the Taylor Remainder”.
The discussion here is only supposed to jog your memory. If it is feeling insufficiently jogged, then please look at your notes from last term.
The reader might ask whether or not we will give the series for other trigonometric functions or their inverses. While the tangent function has a perfectly well defined series, its coefficients are not as simple as those of the series we have seen — they form a sequence of numbers known (perhaps unsurprisingly) as the “tangent numbers”. They, and the related Bernoulli numbers, have many interesting properties, links to which the interested reader can find with their favourite search engine. The Maclaurin series for inverse sine is \(\arcsin(x) = \sum_{n=0}^\infty \frac{4^{-n}}{2n+1}\frac{(2n)!}{(n!)^2} x^{2n+1}\) which is quite tidy, but proving it is beyond the scope of the course.
Warning: antique sign–sine pun. No doubt the reader first saw it many years syne.
Recall, you studied that differential equation in the section on separable differential equations (Theorem 2.4.4 in Section 2.4) as well as wayyyy back in the section on exponential growth and decay in differential calculus.
Recall that when we solve a separable differential equation our general solution will have an arbitrary constant in it. That constant cannot be determined from the differential equation alone and we need some extra data to find it. This extra information is often information about the system at its beginning (for example when position or time is zero) — hence “initial conditions”. Of course the reader is already familiar with this because it was covered back in Section 2.4.
While the use of the ideas of induction goes back over 2000 years, the first recorded rigorous use of induction appeared in the work of Levi ben Gershon (1288–1344, better known as Gersonides). The first explicit formulation of mathematical induction was given by the French mathematician Blaise Pascal in 1665.
In Theorem 3.4.38 in the CLP-1 text, we assumed, for simplicity, that \(a\lt b\text{.}\) To get (GVMT) when \(b\lt a\) simply exchange \(a\) and \(b\) in Theorem 3.4.38.
Note that the function \(G\) need not be related to \(f\text{.}\) It just has to be differentiable with a nonzero derivative.
The computation of \(\pi\) has a very, very long history and your favourite search engine will turn up many sites that explore the topic. For a more comprehensive history one can turn to books such as “A history of Pi” by Petr Beckmann and “The joy of \(\pi\)” by David Blatner.
If you don't know what this means (forgive the pun) don't worry, because it is not part of the course. Standard deviation is a way of quantifying variation within a population.
We could get a computer algebra system to do it for us without much difficulty — but we wouldn't learn much in the process. The point of this example is to illustrate that one can do more than just represent a function with Taylor series. More on this in the next section.
Check the derivative!
The authors hope that by now we all “know” that \(e\) is between 2 and 3, but maybe we don't know how to prove it.
We do not wish to give a primer on imaginary and complex numbers here. The interested reader can start by looking at Appendix B.
It is worth mentioning here that history of this topic is perhaps a little rough on Roger Cotes (1682–1716) who was one of the strongest mathematicians of his time and a collaborator of Newton. Cotes published a paper on logarithms in 1714 in which he states \(ix = \log( \cos x + i \sin x).\) (after translating his results into more modern notation). He proved this result by computing in two different ways the surface area of an ellipse rotated about one axis and equating the results. Unfortunately Cotes died only 2 years later at the age of 33. Upon hearing of his death Newton is supposed to have said “If he had lived, we might have known something.” The reader might think this a rather weak statement, however coming from Newton it was high praise.
We are hiding some mathematics behind this “consequently”. What we are really doing is using our knowledge of Taylor polynomials to write \(f(x) = \sin(x) = x-\frac{1}{3!}x^3+\frac{1}{5!}x^5 + E_5(x)\) where \(E_5(x) = \frac{f^{(6)}(c)}{6!} x^6\) and \(c\) is between 0 and \(x\text{.}\) We are effectively hiding “\(E_5(x)\)” inside the “\(\cdots\)”. Now we can divide both sides by \(x\) (assuming \(x \neq 0\)): \(\frac{\sin(x)}{x} = 1-\frac{1}{3!}x^2+\frac{1}{5!}x^4 + \frac{E_5(x)}{x}\text{,}\) and everything is fine provided the term \(\frac{E_5(x)}{x}\) stays well behaved.
Many of you learned about l'Hôpital's rule in school and all of you should have seen it last term in your differential calculus course.
It takes 3 applications of l'Hôpital's rule and some careful cleaning up of the intermediate expressions. Oof!
Though there were a few comments in a footnote.
It is worth pointing out that our Taylor series must be expanded about the point to which we are limiting — i.e. a. To work out a limit as \(x\to a\) we need Taylor series expanded about \(a\) and not some other point.
To be precise, \(C\) and \(D\) do not depend on \(x\text{,}\) though they may, and usually do, depend on \(m\text{.}\)
Remember that \(n! = 1\times 2\times 3\times \cdots\times n\text{,}\) and that we use the convention \(0!=1\text{.}\)
It is not too hard to make this rigorous using the principle of mathematical induction. The interested reader should do a little search-engine-ing. Induction is a very standard technique for proving statements of the form “For every natural number \(n\text{,}\)…”. For example \(\text{For every natural number $n$, } \sum_{k=1}^n k = \frac{n(n+1)}{2}\text{,}\) or \(\text{For every natural number $n$, } \ddiff{n}{}{x} \left\{ \log(1+x)\right\} = (-1)^{n-1} \frac{(n-1)!}{(1+x)^n} \text{.}\) It was also used by Polya (1887–1985) to give a very convincing (but subtly (and deliberately) flawed) proof that all horses have the same colour.
Since \(|c|\leq \half\text{,}\) \(-\half \leq c \leq \half\text{.}\) If we now add 1 to every term we get \(\half \leq 1+c \leq \frac{3}{2}\) and so \(|1+c| \geq \half\text{.}\) You can also do this with the triangle inequality which tells us that for any \(x,y\) we know that \(|x+y| \leq |x|+|y|\text{.}\) Actually, you want the reverse triangle inequality (which is a simple corollary of the triangle inequality) which says that for any \(x,y\) we have \(|x+y| \geq \big||x|-|y| \big|\text{.}\)
Use of l'Hôpital's rule here could be characterised as a “courageous decision”. The interested reader should search-engine their way to Sir Humphrey Appleby and “Yes Minister” to better understand this reference (and the workings of government in the Westminster system). Discretion being the better part of valour, we'll stop and think a little before limiting (ha) our choices.
In our derivation of \(\log(1+X) = X +O(X^2)\) in Example 3.6.29, we required only that \(|X|\le\frac{1}{2}\text{.}\) So we are free to substitute \(X= -\frac{1}{3!}x^2+O(x^4)\) for any \(x\) that is small enough that \(\big|-\frac{1}{3!}x^2+O(x^4)\big| \lt \frac{1}{2}\text{.}\)
Of course it is! Actually, hyperbolic tangent is \(\mathrm{tanh}(x) = \dfrac{e^x-e^{-x}}{e^x+e^{-x}}\text{,}\) and inverse hyperbolic tangent is its functional inverse.
Quote from Silvan Tomkins's Computer Simulation of Personality: Frontier of Psychological Theory. See also Birmingham screwdrivers.