
What's with e^(-1/x)? On smooth non-analytic functions: part I

When you first learned about the Taylor series, your intuition probably went something like this: you have f(x); the derivative at this point tells you how f changes from x to x+dx (which tells you f(x+dx)); the second derivative tells you how f' changes from x to x+dx, which recursively tells you f(x+2 dx); the third derivative tells you f(x+3 dx), and so on -- so if you have an infinite number of derivatives, you know how each derivative changes, and you should be able to predict the full global behaviour of the function, assuming it is infinitely differentiable (smooth) throughout.

Everything is nice and dandy in this picture. But then you come across two disastrous, life-changing facts that make you cry for those good old days:
  1. Taylor series have radii of convergence -- If I can predict the behaviour of a function up until a certain point, why can't I predict it a bit afterwards? It makes sense if the function becomes rough at that point, like if it jumps to infinity, but even functions like $1/(1+x^2)$ have this problem. Sure, we've heard the explanation involving complex numbers, but why should we care about the complex singularities (here's a question: do we care about quaternion singularities?)? Worse still, a Taylor series may have a zero radius of convergence. Points around which a Taylor series has a zero radius of convergence are called Pringsheim points.
  2. Weird crap -- Like $e^{-1/x}$. Here, the Taylor series does converge, but it converges to the wrong thing -- in this case, to zero. Points at which the Taylor series converges but doesn't equal the function on any neighbourhood are called Cauchy points.
In this article, we'll address the weird crap -- $e^{-1/x}$ (or "$e^{-1/x}$ for $x>0$, $0$ for $x\le 0$" if you want to be annoyingly formal about it) will be the example we'll use throughout, so if you haven't already seen this, go plot it on Desmos and get a feel for how it looks near the origin.
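If you'd rather poke at it numerically than graphically, here's a minimal Python sketch (the name `f` is just for illustration) of the piecewise definition, comparing it against a high power of $x$ to show just how flat it is near $0$:

```python
import math

def f(x):
    # e^(-1/x) for x > 0, and 0 otherwise
    return math.exp(-1.0 / x) if x > 0 else 0.0

# Compare against x^10 near the origin: f vanishes much faster.
for x in [0.5, 0.2, 0.1, 0.05, 0.02]:
    print(f"x = {x:<5}  f(x) = {f(x):.3e}  x^10 = {x**10:.3e}")
```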

Terminology: We'll refer to smooth non-analytic functions as defective functions.




The thing to realise about $e^{-1/x}$ is that the Taylor series -- $0 + 0x + 0x^2 + \dots$ -- isn't wrong. The truncated Taylor series of degree $n$ is the best degree-$n$ polynomial approximation to the function near zero, and none of that logic fails for $e^{-1/x}$. There is honestly no other polynomial that better approximates the shape of the function as $x \to 0^+$.

If you think about it this way, it isn't too surprising that such a function exists -- what we have is a function that goes to zero as $x \to 0^+$ faster than any polynomial does, i.e. a function $g(x)$ such that
$$\forall n, \quad \lim_{x\to 0^+} \frac{g(x)}{x^n} = 0$$
This is not fundamentally any weirder than a function that escapes to infinity faster than all polynomials. In fact, such functions are quite directly connected. Given a function f(x) satisfying:
$$\forall n, \quad \lim_{x\to\infty} \frac{x^n}{f(x)} = 0$$
We can make the substitution $x \mapsto 1/x$ to get
$$\forall n, \quad \lim_{x\to 0^+} \frac{1}{x^n f(1/x)} = 0$$
So $\frac{1}{f(1/x)}$ is a valid $g(x)$. Indeed, we can generate plenty of the standard smooth non-analytic functions this way: $f(x)=e^x$ gives $g(x)=e^{-1/x}$, $f(x)=x^x$ gives $g(x)=x^{1/x}$, $f(x)=x!$ gives $g(x)=\frac{1}{(1/x)!}$, etc.
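As a quick numerical sanity check (a rough Python sketch, with `make_g` a made-up helper name), here is the $x \mapsto 1/x$ trick applied to $f(x) = e^x$, verifying that $g(x)/x^n$ heads to $0$ as $x \to 0^+$ -- though for larger $n$ you have to go to quite small $x$ before the ratio starts shrinking:

```python
import math

def make_g(f):
    # Turn a function that outgrows every polynomial at infinity
    # into one that vanishes faster than every polynomial at 0+.
    return lambda x: 1.0 / f(1.0 / x)

g = make_g(math.exp)  # g(x) = e^(-1/x)

for n in [1, 4, 10]:
    for x in [0.1, 0.02, 0.005]:
        print(f"n = {n:2d}  x = {x:<6} g(x)/x^n = {g(x) / x**n:.3e}")
```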



To better study what exactly is going on here, consider Taylor expanding $e^{-1/x}$ around some point other than $0$, or equivalently, expanding $e^{-1/(x+\varepsilon)}$ around $0$. One can see that:
$$f(0) = e^{-1/\varepsilon} \qquad f'(0) = \frac{1}{\varepsilon^2}\,e^{-1/\varepsilon} \qquad f''(0) = \frac{-2\varepsilon+1}{\varepsilon^4}\,e^{-1/\varepsilon} \qquad f'''(0) = \frac{6\varepsilon^2-6\varepsilon+1}{\varepsilon^6}\,e^{-1/\varepsilon}$$
Or, ignoring higher-order terms in $\varepsilon$ for our purposes,
$$f^{(N)}(0) \approx \left(\frac{1}{\varepsilon}\right)^{2N} e^{-1/\varepsilon}$$
Each derivative $\frac{e^{-1/\varepsilon}}{\varepsilon^{2N}} \to 0$ as $\varepsilon \to 0$, but each one approaches zero more slowly than the previous derivative, and somehow that is enough to give the sequence of derivatives the "kick" they need in the domino effect that follows -- from somewhere at $N=\infty$ (putting it non-rigorously) -- to make the function grow as $x$ leaves zero, even though all the derivatives were zero at $x=0$.
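If you don't feel like chain-ruling this by hand, the pattern above can be checked symbolically -- a small sketch using sympy (assuming you have it installed):

```python
import sympy as sp

x, eps = sp.symbols('x epsilon', positive=True)
f = sp.exp(-1 / (x + eps))

# The n-th derivative at x = 0 comes out as (polynomial in eps)/eps^(2n) * e^(-1/eps);
# the dominant factor as eps -> 0 is eps^(-2n) * e^(-1/eps).
for n in range(4):
    print(f"f^({n})(0) =", sp.simplify(sp.diff(f, x, n).subs(x, 0)))
```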



But we can still make it work -- by letting $N$, the upper limit of the summation, approach $\infty$ first, before $\varepsilon \to 0$. In other words, instead of directly computing the derivatives $f^{(n)}(0)$, we consider the finite-difference quotients
$$f^{(0)}_\varepsilon(0) = f(0), \qquad f^{(1)}_\varepsilon(0) = \frac{f(\varepsilon)-f(0)}{\varepsilon}, \qquad f^{(2)}_\varepsilon(0) = \frac{f(2\varepsilon)-2f(\varepsilon)+f(0)}{\varepsilon^2}, \qquad f^{(3)}_\varepsilon(0) = \frac{f(3\varepsilon)-3f(2\varepsilon)+3f(\varepsilon)-f(0)}{\varepsilon^3}, \ \dots$$
And write the generalised Hille-Taylor series as:
$$f(x) = \lim_{\varepsilon\to 0^+} \sum_{n=0}^{\infty} \frac{x^n}{n!}\, f^{(n)}_\varepsilon(0)$$
Then $N\to\infty$ before $\varepsilon\to 0$, so you "reach" $N=\infty$ first (or rather, you get the $n$th differences for arbitrarily large $n$) before $\varepsilon$ gets to $0$.

Another way of thinking about it is that the "local determines global" idea makes sense for predicting the value of the function at $N\varepsilon$ for countable $N$, but it's a stretch to talk about points uncountably many $\varepsilon$s away, which is what a finite neighbourhood is. With the difference operators in the Hille-Taylor series, though, every point of a neighbourhood is only a finite multiple of $\varepsilon$ away, so the differences do determine $f$.
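Here's a rough numerical illustration of the claim (a Python sketch; `hille_taylor` is just an illustrative name, and the forward-difference sums become numerically unstable if you push $\varepsilon$ much smaller than this): every term of the ordinary Taylor series at $0$ is zero, but summing the difference-quotient series out to a large $N$ first and only then shrinking $\varepsilon$ gets you closer and closer to the actual value of $e^{-1/x}$.

```python
import math

def f(x):
    # e^(-1/x) for x > 0, and 0 otherwise
    return math.exp(-1.0 / x) if x > 0 else 0.0

def hille_taylor(f, x, eps, N):
    # sum_{n=0}^{N} x^n/n! * (n-th forward difference of f at 0, step eps) / eps^n
    diffs = [f(k * eps) for k in range(N + 1)]  # diffs[k] = n-th difference of f at k*eps (n = 0 so far)
    total, coeff = 0.0, 1.0                     # coeff = x^n / (n! * eps^n)
    for n in range(N + 1):
        total += coeff * diffs[0]
        diffs = [diffs[k + 1] - diffs[k] for k in range(len(diffs) - 1)]
        coeff *= x / ((n + 1) * eps)
    return total

x = 0.5
print("f(0.5) itself:         ", f(x))
print("ordinary Taylor series: ", 0.0)  # every derivative at 0 vanishes
for eps in [0.2, 0.1, 0.05]:
    print(f"Hille-Taylor, eps = {eps}: ", hille_taylor(f, x, eps, N=60))
```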


Very simple (but fun to plot on Desmos) exercise: use $e^{-1/x}$ or another defective function to construct a "bump function", i.e. a smooth function that is $0$ outside $(0,1)$ but takes non-zero values everywhere in that range.

Similarly, construct a "transition function", i.e. a smooth function that is $0$ for $x\le 0$ and $1$ for $x\ge 1$. (Hint: think of a transition as going from a state with "none of the fraction" to "all of the fraction".)

If you're done, play around with this (but no peeking): desmos.com/calculator/ccf2goi9bj
