15

I (a student) am doing a presentation on Taylor series in my class (12th grade, in Germany if this is relevant). I am looking for a good example where you can see when Taylor series might be useful. Something like

Consider this problem with this function. [Some information on how the problem is hard to solve with its current function]. Wouldn't it be practical to have a "polynomial version" of this function to solve the problem? [Some information on why this would make the problem easier]

Of course, I would be open to other examples/applications that an average 12th-grade student would understand.

I looked at What are the practical applications of the Taylor Series? on Math SE, but none of the applications/examples there seem to be showing the practicability of Taylor series. For example, we (unfortunately) haven't done differential equations.

In his video Taylor series | Chapter 11, Essence of calculus (which basically is how I learnt Taylor series), 3blue1brown (Grant Sanderson) tells how he was first confronted with Taylor series:

[...] this cosine function made the problem awkward and unwieldy. But by approximating [using Taylor series], everything fell into place much more easily.

It isn't explained how exactly the approximation made things easier, but I think this is the sort of example/application I'm looking for in my introduction – some problem, for example, where you can easily see that a polynomial approximation is a lot easier to work with.

jng224
  • 1
    Do they know calculus ? If so, recentering a polynomial is an easy thing to start with and it avoids convergence. For example, $f(x)=x^2$ can be written as $f(a)+f'(a)(x-a)+\frac{1}{2}f''(a)(x-a)^2$ for any choice of $a$. Then, extending to nonpolynomial examples brings in the necessity for infinitely many terms. Also, the intuition that it is just the higher order extension of the tangent line. But, again, do they know calculus ? – James S. Cook Jun 12 '21 at 13:24
  • 1
    @JamesS.Cook yes, we've done calculus. – jng224 Jun 12 '21 at 13:38
  • 1
    none of the applications/examples there seem to be showing the practicability of Taylor series --- It's curious that you think this because I thought several of those examples were really nice and they didn't depend on diff. eqs. background. See also Physical applications of higher terms of Taylor series. Some pure math examples I've used with students are given in my answer to What are power series used for? (a reference request) (most were used with honors-calculus level HS students). – Dave L Renfro Jun 12 '21 at 15:07
  • 1
    An easy to carry out and also instructive exploration I've used with students is described in my answer to What does it mean for a polynomial to be the 'best' approximation of a function around a point? – Dave L Renfro Jun 12 '21 at 15:21
  • 3
    The normal curve, used extensively in statistics, involves a constant times $e^{-\frac12 x^2}$. Calculating the area under this from $-1$ to $1$ gives the percentage of the population within one standard deviation of the mean. There is no anti-derivative, unless you replace the function with its Taylor series. – Sue VanHattum Jun 12 '21 at 22:12
  • 1
    @SueVanHattum, you might want to post your comment as an answer. – JRN Jun 13 '21 at 00:50
  • 1
    @SueVanHattum Just to be a bit picky: there is an antiderivative, it just cannot be expressed using the functions students learn about in high school. – Ferenc Beleznay Jun 13 '21 at 04:32
  • @FerencBeleznay, yeah, I know. If you can help me find a wording that skirts that issue, I would turn my comment into an answer. Would "there is no antiderivative (in terms of the functions we're familiar with)..." be correct? – Sue VanHattum Jun 13 '21 at 17:54
  • "12th grade in Germany" might mean more to those who knew what age Germans think of as 1st grade.

    Either way, why would you not check out three or even six accepted text-books then either work on what they best share, or make up for where they all fail?

    What else might matter?

    – Robbie Goodwin Jun 13 '21 at 20:29
  • @SueVanHattum Just acknowledge that it exists but the proof requires techniques that have not been covered in this class or any of the prerequisites. Dodging the issue (or worse, lying about it) is patronizing at best, and could end up being harmful. – Z4-tier Jun 14 '21 at 20:51

7 Answers

25

One practical reason for choosing a Taylor series approximation of a function over the function itself is when you can only compute using the four arithmetic operations. For example, if you are asked to find the cosine of an angle and the only computing device you have is a four-function calculator, then you can get a good approximation of the cosine of the angle using the first few terms of the Taylor series of the cosine function.
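As a small illustration (my own sketch, not part of the original answer), here is how those first few terms could be summed in Python using nothing but addition, subtraction, multiplication, and division; the function name and the choice of four terms are just for the example:

```python
import math

def cos_taylor(x, terms=4):
    """Approximate cos(x) by the first `terms` terms of its Taylor series,
    using only the four basic arithmetic operations."""
    total = 0.0
    sign = 1.0
    power = 1.0      # holds x^(2n), starting with x^0 = 1
    factorial = 1.0  # holds (2n)!, starting with 0! = 1
    for n in range(terms):
        total += sign * power / factorial
        sign = -sign
        power *= x * x
        factorial *= (2 * n + 1) * (2 * n + 2)
    return total

x = 0.5  # angle in radians
print(cos_taylor(x), math.cos(x))  # both roughly 0.87758
```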

JRN
  • 8
    These sorts of approximations are extremely common in Physics, where more often than not you have some complicated differential equation that can't be solved directly. – BlueRaja - Danny Pflughoeft Jun 13 '21 at 00:23
  • 8
    @Jonas, thank you for accepting my answer. But note that you can accept only one answer. You might want to wait for other users to answer your question before accepting an answer. Later, if you feel that another answer is more appropriate to be chosen as "accepted," then you can change your mind. – JRN Jun 13 '21 at 00:54
  • 1
    Isn't that how they used to work out tables in the old days? Except probably even without the calculators. – Bloke Down The Pub Jun 14 '21 at 19:51
19

An excellent introductory example is the exponential function $\exp(x) = e^x$.

By definition, this is the function that is its own derivative, i.e. $\exp'(x) = \exp(x)$. That's all nice and swell from a mathematical standpoint, and it makes it easy to prove interesting properties of the function. But how do you actually compute it? The definition above is totally useless for computing concrete points on this function like $\exp(1)$. Again, mathematicians will frequently simply call this result $e$, but how do you even compute its numerical value of $e = 2.71828182845904523536\ldots$?

That's where the Taylor expansion comes to the rescue: Since we require $\exp(0) = 1$, and since we know that $\exp'(x) = \exp(x)$, we can easily conclude that all derivatives have the value $1$ at $x = 0$. As such, it's trivial to write down the Taylor expansion of $\exp()$:

$$\exp(x) \approx \sum_{n = 0}^{\infty}\frac{x^n}{n!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \dots$$

And this we can compute to any precision we like. In fact, mathematicians have shown that this expansion actually converges to the $\exp()$ function, so the $\approx$ sign above can be replaced with an $=$ sign. For instance, we can now easily give an approximation

$$e = \exp(1) \approx 1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} \approx 2.71$$
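A minimal sketch of that computation in Python (my own illustration, not part of the answer) shows how quickly the partial sums settle down:

```python
from math import factorial

def exp_taylor(x, terms):
    """Partial sum of the Taylor series of exp(x) around 0."""
    return sum(x**n / factorial(n) for n in range(terms))

# e approximated with more and more terms of the series
for terms in (5, 10, 15):
    print(terms, exp_taylor(1, terms))
# 5 terms  -> roughly 2.7083
# 10 terms -> roughly 2.7182815
# 15 terms -> agrees with e to about 12 decimal places
```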

  • 7
    Another awesome thing you can do with the Taylor expansion of $e^x$ is prove Euler's famous $e^{i\pi} + 1 = 0$. Just calculate the Taylor expansion of $e^{i\theta}$ and notice that the real and imaginary parts are the Taylor expansions of $\cos(x)$ and $\sin(x)$ respectively. Then plug in $\theta = \pi$. – BlueRaja - Danny Pflughoeft Jun 13 '21 at 09:21
  • It's a shame that Leibniz's formula for pi converges so slowly; it would be great if you could make a Taylor-series-based argument that allows students to quickly get a good approximation to π. – Michael Seifert Jun 13 '21 at 13:43
  • 1
    @MichaelSeifert: Uh? You can. Use Newton-Raphson to invert $\cos$ so that you just need to compute $\cos$ and $\sin$, which can be obtained from $\exp$, and use argument reduction $\exp(x) = \exp(x/2^k)^{2^k}$. – user21820 Jun 14 '21 at 13:17
  • @user21820: I don't know whether those methods would be terribly accessible to students who were just learning Taylor series (the OP wanted an "introductory example".) Buraian's answer is actually a really good one, though, if you're willing to include integration of Taylor series in your bag of tricks. – Michael Seifert Jun 14 '21 at 13:49
  • 5
    @MichaelSeifert $\tan^{-1}(1/2)+\tan^{-1}(1/3) = \pi/4$, taking the first four terms of each Taylor series gives me three correct digits. I don't know if that is good enough for you. – David E Speyer Jun 14 '21 at 15:25
  • @DavidESpeyer: That one's pretty good! If you were teaching it, you'd have to justify the equation you gave, maybe using the inverse tangent addition formula; but it certainly converges faster than Leibniz's formula. – Michael Seifert Jun 14 '21 at 16:47
  • 2
    @MichaelSeifert There is a quick visual proof: Draw two right triangles, one with vertices at $(0,0)$, $(6,0)$ and $(6,3)$; the other with vertices at $(0,0)$, $(6,3)$ and $(5,5)$. – David E Speyer Jun 14 '21 at 18:08
  • 1
    The definition above is totally useless for computing concrete points on this function like exp(1) - I'd say, the definition gives a rather obvious recipe to compute them - just replace the ODE with a finite difference equation over a fine mesh and calculate recursively. – Kostya_I Jun 14 '21 at 21:52
  • @Kostya_I Whenever you try to integrate a system of differential equations numerically, you are going to make an error. Worse: This error may be systematic, especially when you are extrapolating. In the case of $\exp()$, all numerical integration methods produce such a systematic error, and your estimate of $\exp(1)$ will be way too low, no matter how fine a grid you use. As such, I would never call numerical integration a computation, but rather an estimation, especially in the case of $\exp()$. The Taylor series converges to the true value, and gives an estimate of the remaining error. – cmaster - reinstate monica Jun 15 '21 at 06:29
  • Cutting a Taylor series of $\exp(1)$ after any finite number of steps also produces an error - your result will be systematically too low. And any reasonable numerical integration scheme, however naive, will certainly converge to the actual $\exp(x)$ on a finite interval in the small mesh limit. Explicit errors of these approximations have also been worked out. So, qualitatively, there's no difference between Taylor approximation and finite difference scheme here. – Kostya_I Jun 15 '21 at 07:09
  • @Kostya_I True, both methods have a residual error, and the difference between the methods can be viewed as quantitative. Nevertheless, the quantitative difference is so humongous that it becomes a qualitative difference, imho. Have you ever tried to integrate to $\exp(1)$ with pen and paper? I have. I have even used the Runge-Kutta method with pen and paper. The results are abysmal. Runge-Kutta requires more work for a single step than the five-term approximation I gave in this answer, and its results are nowhere near the precision that the Taylor estimation yields. – cmaster - reinstate monica Jun 15 '21 at 08:02
  • 1
    It is pretty easy to see why Taylor series should beat Euler's method in this case (Runge-Kutta would take more thought). Trying to compute $e^x$ on $[0,1]$ with $n$ steps by Euler's method actually computes the piecewise linear function with $f(k/n) = (1+1/n)^k$, and in particular $f(1) = (1+1/n)^n = \exp(n \log(1+1/n)) = \exp(1-1/(2n)+\cdots) = e (1-1/(2n)+\cdots)$, so the error is $O(1/n)$. Going $n$ steps into the Taylor series gives error $O(1/n!)$. – David E Speyer Jun 15 '21 at 17:37
  • I would bet that computing $\sin(10)$ shows the opposite advantage. It would be interesting to know! – David E Speyer Jun 15 '21 at 17:38
  • @DavidESpeyer I checked numerical integration of $\sin()$ once, and my results were that Taylor beats Euler anytime. – cmaster - reinstate monica Jun 15 '21 at 19:29
12

I find it a bit strange that no one has mentioned it, but a famous example is how Newton quickly estimated $\pi$ using a Taylor series.

Here is a quick sketch:

First note that the area of a quarter circle is given by the integral:

$$ \int_0^1 \sqrt{1-x^2} dx = \frac{\pi}{4} \tag{1}$$

Now look at the left side: within the range of integration we have $x^2 \le 1$... which means... BINOMIAL EXPANSION TIME!

$$ \int_0^1 \sqrt{1-x^2}\, dx = \int_0^1 \left( 1 - \frac12 x^2 - \frac18 x^4 - \text{higher order terms} \right) dx = \left[1 - \frac{1^3}{6} - \frac{1^5}{40} - \text{higher order terms} \right] \tag{2}$$

Now equate (1) and (2), and multiply both sides by $4$:

$$ \pi = \left[ 4 - \frac{2}{3} - \frac{1}{10} - \dots \right]$$

Adding up the terms of the infinite series on the right, we get an approximation for $\pi$. However, Newton went further and tweaked this technique to get even faster convergence, i.e. closer to $\pi$ with fewer terms. Veritasium made a lovely video on this.
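To see the convergence concretely, here is a small sketch in Python (mine, not part of the original answer) that sums the series obtained by integrating the binomial expansion term by term; the helper name is arbitrary:

```python
from math import pi

def newton_pi(terms):
    """pi/4 = integral of sqrt(1 - x^2) from 0 to 1, with the square root
    expanded as a binomial series and integrated term by term."""
    total = 0.0
    coeff = 1.0  # binomial coefficient C(1/2, n), starting at n = 0
    for n in range(terms):
        total += coeff * (-1) ** n / (2 * n + 1)
        coeff *= (0.5 - n) / (n + 1)  # update to the next binomial coefficient
    return 4 * total

for terms in (3, 10, 100):
    print(terms, newton_pi(terms), pi)
# 3 terms give 4 - 2/3 - 1/10 = 3.2333...; convergence is slow, which is
# exactly why Newton tweaked the method as described in the video.
```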


Note: Newton figured out the binomial series by 'experiment', so it is a bit inaccurate to say that Newton used Taylor series, but now that we understand that the binomial series is just a special case of the Taylor series, I suppose it is fine. Ref

Even before the Veritasium video, a friend had shown me this proof in their calculus book. I am grateful to them. ;D

  • 2
    I wrote a more detailed post about Taylor series as a whole in this blog post here – tryst with freedom Jun 13 '21 at 19:58
  • Hi @user615 I sadly don't have a reference for this, but I took the information from the Veritasium video. One point I do know is that Newton figured out the binomial series empirically, that is, by experiment. I think I'll add that second point in. – tryst with freedom Jun 14 '21 at 23:12
6

I suggest looking at the derivation of the equation of motion of a simple pendulum. While a differential equation is involved, it is very simple. And it is made simple by the fact that one can safely replace, for small enough $\theta$, the sine $\sin\theta$ with $\theta$.
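To make "small enough $\theta$" concrete, here is a tiny Python check (my own addition, not part of the answer) of how close the first Taylor term $\theta$ is to $\sin\theta$:

```python
import math

# How good is the small-angle replacement sin(theta) ≈ theta?
for deg in (1, 5, 10, 20):
    theta = math.radians(deg)
    rel_error = abs(math.sin(theta) - theta) / math.sin(theta)
    print(f"{deg:>2} degrees: relative error {rel_error:.1e}")
# about 5e-5 at 1 degree and still only about 0.5% at 10 degrees
```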

Martin Argerami
5

Following the link you have in your post, I found an answer mentioning combinatorics. Formal power series (generating functions) are often used in probability and combinatorics. After a bit of searching on the internet, I found an example that is interesting (at least for me).

The coefficients of the expansion of $f(x)=\dfrac{x}{1-x-x^2}$ are the numbers in the Fibonacci sequence. This is not really along the lines you mention in your question (it is not a polynomial approximation of a function); rather, it is a function which, in some sense, carries all the information about the Fibonacci sequence.
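Here is a quick sketch (my own, not part of the answer) that recovers those coefficients by plain power-series long division, without any special library; the helper name is just for this example:

```python
def series_coefficients(n_terms):
    """Coefficients of the power series of x / (1 - x - x^2) around 0,
    computed by long division of the coefficient lists."""
    num = [0, 1] + [0] * n_terms   # numerator: x
    den = [1, -1, -1]              # denominator: 1 - x - x^2
    coeffs = []
    remainder = num[:]
    for k in range(n_terms):
        c = remainder[k] / den[0]
        coeffs.append(c)
        for i, d in enumerate(den):       # subtract c * x^k * den(x)
            if k + i < len(remainder):
                remainder[k + i] -= c * d
    return coeffs

print(series_coefficients(10))
# [0.0, 1.0, 1.0, 2.0, 3.0, 5.0, 8.0, 13.0, 21.0, 34.0] -- the Fibonacci numbers
```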

Ferenc Beleznay
  • A great book on generating functions is "generatingfunctionology" which is available for free here: https://www2.math.upenn.edu/~wilf/DownldGF.html – TomKern Sep 26 '21 at 13:14
3

Here is one more example: a student is often taught to memorize l'Hôpital's rule: when plugging the numbers into the limit gives you $\frac{0}{0}$, then, algorithmically like a computer, you take the derivative of the numerator and of the denominator, and then you have the true value of the limit.

OK, but why does that work? A rigorous answer is difficult but Taylor's theorem can shed light:

$$ \lim_{x \to a} \frac{P(x)}{Q(x)} = \lim_{x \to a} \frac{ P(a) + \frac{P'(a)}{1!} (x-a) + \frac{P''(a)}{2!} (x-a)^2 + O\big((x-a)^3\big)}{ Q(a) + \frac{Q'(a)}{1!}(x-a) + \frac{Q''(a)}{2!} (x-a)^2 + O\big((x-a)^3\big)}$$

In the case of $\frac{0}{0}$ from directly plugging in numbers, that just means $P(a)=Q(a)=0$, and we are left with the ratio of the first derivatives... ahhh, so that is what it meant all along: l'Hôpital just says that to find the limit of a ratio that goes to $\frac{0}{0}$ on direct substitution, look at the ratio at a point close to $a$ instead... and what is the value near that point controlled by? Answer: the derivatives.
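For a concrete instance (my own example, not the answerer's), take the classic limit $\lim_{x\to 0}\frac{\sin x}{x}$. Expanding the numerator,

$$\lim_{x \to 0} \frac{\sin x}{x} = \lim_{x \to 0} \frac{x - \frac{x^3}{6} + O(x^5)}{x} = \lim_{x \to 0}\left(1 - \frac{x^2}{6} + O(x^4)\right) = 1,$$

which is exactly the ratio of first derivatives, $\frac{\cos 0}{1}$, that l'Hôpital's rule prescribes.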

Also, if someone wasn't paying attention in algebra class and never figured out how to use the remainder theorem, then Taylor's theorem gives an easy proof of it as well. Ref
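For what it's worth, here is a one-line sketch of that remainder-theorem argument (my paraphrase of the idea; the answer itself only links a reference): for a polynomial $P$ of degree $n$, the Taylor expansion at $a$ terminates, so

$$P(x) = P(a) + P'(a)(x-a) + \dots + \frac{P^{(n)}(a)}{n!}(x-a)^n = P(a) + (x-a)\,Q(x)$$

for some polynomial $Q$, and therefore the remainder on dividing $P(x)$ by $(x-a)$ is exactly $P(a)$.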

2

Another Newton example:

Physics is full of differential equations, and one may ask how Newton dealt with them when most of the tips and tricks we know today had not yet been discovered. Well, with his mighty series methods, of course.

In most introductory Physics classes, simple harmonic motion is introduced as:

$$ \frac{d^2 x}{dt^2 } = -k x$$

For simplicity's sake, let's assume $k=1$. Then, to illustrate the series method on the above, assume $x = a_0 + a_1 t + a_2 t^2 + \dots = \sum_i a_i t^i$ and plug this into the DE:

$$ (2a_2 + 3 \cdot 2\, a_3 t + 4 \cdot 3\, a_4 t^2 + \dots) = -(a_0 + a_1 t + a_2 t^2 + \dots)$$

Rearrange like terms:

$$ (2a_2 + a_0) + (3 \cdot 2\, a_3 + a_1)\, t + (4 \cdot 3\, a_4 + a_2)\, t^2 + \dots = 0$$

The initial condition $x(0)=0$ [physically, imagine giving the mass a kick from its equilibrium position] forces $a_0 = 0$, and the recurrence below then makes every even coefficient vanish, so only odd terms survive:

$$ (3 \cdot 2\, a_3 + a_1)\, t + (5 \cdot 4\, a_5 + a_3)\, t^3 + \dots = 0$$

We can write the above as a summation:

$$ \sum_{k=1}^{\infty} \left[ (2k+1)(2k)\, a_{2k+1} + a_{2k-1} \right] t^{2k-1} = 0$$

For this to hold for all $t$, every coefficient has to be zero, and from that we get a recurrence:

$$ a_{2k+1} = -\frac{a_{2k-1} }{(2k+1)(2k)}$$

Taking $a_1 = 1$ (a unit initial velocity), the solution of this recurrence is $a_{2k+1} = \frac{(-1)^{k}}{(2k+1)!}$. Plug this back into our initial series expression for $x(t)$:

$$ x(t) = \sum_{i=0}^{\infty} (-1)^i \frac{t^{2i+1}}{(2i+1)!},$$

which is exactly the Taylor series of $\sin t$, the familiar solution of simple harmonic motion.
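As a quick numerical check (my own sketch, not part of the answer), the recurrence can be turned into a few lines of Python and compared against $\sin t$:

```python
import math

def x_series(t, terms):
    """Partial sum of x(t) = sum_k (-1)^k t^(2k+1) / (2k+1)!,
    built directly from the recurrence a_{2k+1} = -a_{2k-1} / ((2k+1)(2k)) with a_1 = 1."""
    a = 1.0          # a_1, the coefficient of t
    total = a * t
    for k in range(1, terms):
        a = -a / ((2 * k + 1) * (2 * k))   # the recurrence derived above
        total += a * t ** (2 * k + 1)
    return total

t = 2.0
print(x_series(t, 10), math.sin(t))  # both roughly 0.9092974
```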