\[f(b) - f(a) = \int_a^b f'(x)\,dx\]
\[\exists\, c \in (a,b),\,\, f(b) - f(a) = f'(c)(b - a)\]
And naturally so: the fundamental theorem of calculus tells us that the boundary term $f(b) - f(a)$ is related to $f'(x)$ on the interior -- specifically, it equals the integral (the "sum") of $f'$ over $[a,b]$, while the mean value theorem speaks of the mean of $f'$, which is just that sum divided by the length $b - a$.
One may wonder: if the Stokes-type theorems (Kelvin-Stokes, Divergence, Green's, etc.) are the generalizations of the fundamental theorem of calculus, can we make a "Stokes' theorem" version of the mean value theorem?
Actually, we can do better: by equating the two equations above, the relationship between the mean value theorem and the fundamental theorem of calculus can be "suppressed" to reveal the key, new, general insight provided by the mean value theorem, which is that a continuous function attains its average value on a compact, connected domain:
\[\exists\, c \in [a,b],\,\, g(c) = \frac{1}{b - a}\int_a^b g(x)\,dx \]
where we have replaced $f'$ with an arbitrary continuous $g$. This theorem generalizes easily to higher dimensions, for continuous $g$ on a compact, connected region $R$:
\[\exists\, c \in R,\,\, g(c) = \frac{1}{\left| R \right|}\int_R g(x)\,dx \]
Equating this with the various Stokes-type theorems then yields the appropriate generalizations.
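For instance (a sketch, under the usual hypotheses that the vector field $F$ is continuously differentiable and $R$ is a compact, connected region with piecewise-smooth boundary $\partial R$), equating the statement above with the divergence theorem produces a point at which the divergence equals the average flux density through the boundary:
\[\exists\, c \in R,\,\, (\nabla \cdot F)(c) = \frac{1}{\left| R \right|}\oint_{\partial R} F \cdot n \,dS\]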
Why does this not work for vector-valued functions? What does this tell you about the topology of $\mathbb{R}$ vs $\mathbb{R}^n$? What is the "best" generalization you can make to vector-valued functions?
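For the first question, one standard counterexample to keep in mind: take $f(t) = (\cos t, \sin t)$ on $[0, 2\pi]$. Then $f(2\pi) - f(0) = 0$, yet $f'(t) = (-\sin t, \cos t)$ never vanishes, so no $c$ satisfies $f(2\pi) - f(0) = f'(c)\,(2\pi - 0)$.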