A curious infinite sum arising from an elementary geometric argument

A well-known elementary geometric argument for the sum of an infinite geometric progression proceeds as follows: consider a Euclidean triangle $\Delta ABC$ with angles $A=\alpha$, $B=\beta$, $C=2\beta$, and bisect the angle at $C$, the bisector meeting $AB$ at a point $C'$. Then $\Delta ABC \sim \Delta ACC'$. Add the area of $\Delta C'BC$ to a counter. Repeat the same bisection on the similar piece, producing points $C''$, $C'''$, ad infinitum, each time adding to the counter the area of the piece that isn't similar to the parent triangle and bisecting the piece that is.


Suppose the area of the original triangle $\Delta ABC$ is 1, and the piece $ACC'$ has area $x$ (so each succeeding similar copy has $x$ times the area of the preceding triangle). Then the total value of our counter, which approaches 1, is:

$$(1-x)+x(1-x)+x^2(1-x)+...=1$$
$$1+x+x^2+...=\frac1{1-x}$$
Where $x$ depends on the angle $\beta$ (explicitly, $x=\frac{\sin\beta}{\sin\alpha+\sin\beta}$, as the general computation below will show).
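
As a quick numerical sanity check, here is a minimal Python sketch of the process (the value $\beta=0.5$ is an arbitrary choice; any $0<\beta<\pi/3$ works):

```python
import math

# Simulate the bisection for a triangle with angles alpha, beta, 2*beta
# (so alpha = pi - 3*beta) and total area 1.  The similar piece kept at
# each step is the fraction x = sin(beta) / (sin(alpha) + sin(beta)) of
# its parent; the other piece, the fraction 1 - x, goes into the counter.
beta = 0.5                         # arbitrary choice, 0 < beta < pi/3
alpha = math.pi - 3 * beta
x = math.sin(beta) / (math.sin(alpha) + math.sin(beta))

counter = 0.0
remaining = 1.0                    # area of the piece still being split
for _ in range(60):
    counter += remaining * (1 - x)     # non-similar piece: add to counter
    remaining *= x                     # similar piece: split it next

print(counter)                                   # ~ 1.0
print(sum((1 - x) * x**k for k in range(60)))    # the same geometric series
```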

It is interesting, however, to consider the case of a general scalene triangle $\Delta ABC$ where $C$ is not necessarily twice $B$. Here the successive triangles are no longer similar to one another, so we are no longer dealing with a geometric series.

Let the angles of $\Delta ABC$ be $A=\alpha$, $B=\beta$, $C=\pi-\alpha-\beta$. We bisect angle $C$, as before, adding to our counter the piece that contains the angle $B$. The remaining triangle has angles $\alpha$, $\frac{\pi-\alpha-\beta}{2}$ and $\pi-\alpha-\frac{\pi-\alpha-\beta}{2}$.

We keep repeating the process, each time bisecting the angle that is neither $\alpha$ nor the half-angle produced by the previous bisection, adding to our counter the area of the piece that does not contain the vertex $A$, and splitting the piece that does.

To keep track of the angles in each successive triangle, we define three series:

$$\begin{gathered}
{\alpha _n} = \alpha\\
{\beta _n} = {\gamma _{n - 1}}/2\\
{\gamma _n} = \pi - {\alpha _n} - {\beta _n}\\
\end{gathered}$$
These are defined recursively, of course, so we calculate the explicit form by substituting $\gamma_{n-1}$ into $\beta_n$ to get the one-variable recursion $\beta_n = \frac{\pi - \alpha - \beta_{n-1}}{2}$ -- then with the simple initial-value conditions $\alpha_0=\alpha$, $\beta_0=\beta$, $\gamma_0=\pi-\alpha-\beta$ we get:

$$\begin{gathered}
{\alpha _n} = \alpha\\
{\beta _n} = \frac{{\pi - \alpha }}{3} + {\left( { - \frac{1}{2}} \right)^n}\left( {\beta - \frac{{\pi - \alpha }}{3}} \right)\\
{\gamma _n} = \frac{{2(\pi - \alpha )}}{3} - {\left( { - \frac{1}{2}} \right)^n}\left( {\beta - \frac{{\pi - \alpha }}{3}} \right)\\
\end{gathered}$$
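The closed form for $\beta_n$ comes from noticing that $\frac{\pi-\alpha}{3}$ is the fixed point of the recursion, and that the deviation from it flips sign and halves at every step:
$$\beta_n - \frac{\pi - \alpha}{3} = \frac{\pi - \alpha - \beta_{n-1}}{2} - \frac{\pi - \alpha}{3} = -\frac{1}{2}\left(\beta_{n-1} - \frac{\pi - \alpha}{3}\right)$$
while $\gamma_n$ is simply $\pi - \alpha - \beta_n$.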
At each stage, the piece we add to the counter is a fraction $\frac{{\sin {\alpha _n}}}{{\sin {\alpha _n} + \sin {\beta _n}}}$ of the triangle being split (by the angle-bisector theorem together with the law of sines), while the piece we keep splitting is the remaining fraction $\frac{{\sin {\beta _n}}}{{\sin {\alpha _n} + \sin {\beta _n}}}$; therefore the convergence of the sum of the counted areas to 1 implies:

$$\begin{aligned}
&\frac{\sin \alpha }{\sin \alpha + \sin \beta } + \frac{\sin \beta }{\sin \alpha + \sin \beta }\cdot\frac{\sin \alpha }{\sin \alpha + \sin \beta_1 } \\
&\qquad + \frac{\sin \beta }{\sin \alpha + \sin \beta }\cdot\frac{\sin \beta_1 }{\sin \alpha + \sin \beta_1 }\cdot\frac{\sin \alpha }{\sin \alpha + \sin \beta_2 } + \dots = 1
\end{aligned}$$
Or more compactly:

$$\sum\limits_{k = 0}^\infty {\left[ \left(1-x_k(\alpha,\beta)\right)\prod\limits_{j = 0}^{k - 1} {x_j(\alpha,\beta)} \right]} = 1$$
Where:

$${x_k}(\alpha ,\beta ) = \frac{{\sin \left( {\frac{{\pi - \alpha }}{3} + {{\left( { - \frac{1}{2}} \right)}^k}\left( {\beta - \frac{{\pi - \alpha }}{3}} \right)} \right)}}{{\sin \alpha + \sin \left( {\frac{{\pi - \alpha }}{3} + {{\left( { - \frac{1}{2}} \right)}^k}\left( {\beta - \frac{{\pi - \alpha }}{3}} \right)} \right)}}$$
This holds for any $\alpha$ and $\beta$ that are legitimate angles of a triangle, i.e. $\alpha, \beta > 0$ with $\alpha + \beta < \pi$.
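
As a numerical illustration, here is a minimal Python sketch that evaluates the partial sums of this series for an arbitrary choice of angles and watches them approach 1:

```python
import math

alpha, beta = 0.7, 1.1    # arbitrary angles with alpha, beta > 0 and alpha + beta < pi

def x_k(alpha, beta, k):
    """Fraction of the k-th triangle that is kept for further splitting."""
    beta_k = (math.pi - alpha) / 3 + (-0.5) ** k * (beta - (math.pi - alpha) / 3)
    return math.sin(beta_k) / (math.sin(alpha) + math.sin(beta_k))

total, product = 0.0, 1.0   # product accumulates x_0 * x_1 * ... * x_{k-1}
for k in range(40):
    total += (1 - x_k(alpha, beta, k)) * product
    product *= x_k(alpha, beta, k)
    if k in (0, 4, 9, 39):
        print(k, total)     # partial sums creep up to 1
```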



Well, have we truly discovered something new?

Turns out, no. It doesn't even matter what $x_k(\alpha,\beta)$ is, really -- the identity $\sum\limits_{k = 0}^\infty {\left[ \left(1-x_k(\alpha,\beta)\right)\prod\limits_{j = 0}^{k - 1} {x_j(\alpha,\beta)} \right]} = 1$ will always be true. Indeed, it is a telescoping sum:

$$\begin{gathered}
(1 - x_0) \;+ \\
(1 - x_1)\,x_0 \;+ \\
(1 - x_2)\,x_0 x_1 \;+ \\
(1 - x_3)\,x_0 x_1 x_2 \;+ \\
\dots \;=\; 1
\end{gathered}$$
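Summing the first $N+1$ of these terms, everything cancels except the leading 1 and the leftover product:
$$\sum\limits_{k = 0}^{N} {\left[ \left(1-x_k\right)\prod\limits_{j = 0}^{k - 1} {x_j} \right]} = 1 - \prod\limits_{j = 0}^{N} {x_j}$$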
All that is required is that the leftover product, $x_0x_1x_2x_3...x_k$, approaches 0 as $k\to\infty$ -- this ensures that the sum converges to 1. (So I suppose I was not completely right when I said it doesn't matter what $x_k$ is -- but considering renormalisation and stuff, I kinda was.)

This raises two interesting questions:
  1. How would this "telescoping sum" argument work for the simple geometric series?
  2. Can we get interesting "incorrect" sums (renormalisations, perhaps?) by choosing an $x_k$ sequence whose product doesn't approach zero?

Well, for the geometric series we had $\beta  = (\pi  - \alpha )/3$, so that ${x_k}(\alpha ,\beta ) = x(\alpha,\beta)=\frac{{\sin \beta }}{{\sin \alpha  + \sin \beta }}$ for every $k$. Indeed, setting $x_0=x_1=x_2=\dots=x$ turns the sum into $(1-x)(1+x+x^2+\dots)$, and the telescoping argument above becomes the standard proof of the geometric series, where we multiply the sum by $x$, subtract this from the original sum, and so on.
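
Concretely, with every $x_k$ equal to the same $x$, the telescoped partial sum reads
$$\sum\limits_{k = 0}^{N} {(1-x)\,x^k} = 1 - x^{N+1}$$
which, after dividing by $1-x$, is exactly the usual formula for the partial sums of the geometric series.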

As for the second question -- consider, for example, $x_k=k+1$. It gives you the "sum" $1!\cdot1+2!\cdot2+3!\cdot3+...=-1$. Of course, this is just the identity $n\cdot n!=(n+1)!-n!$, and the telescope doesn't really cancel out, so you're left with $\infty!-1$.
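
Explicitly, the $k$-th term of the telescoping sum is $\left(1-(k+1)\right)\cdot k! = -k\cdot k!$, and the leftover product after $N+1$ terms is $(N+1)!$, so the partial sums read
$$\sum\limits_{k = 0}^{N} {\left[ \left(1-x_k\right)\prod\limits_{j = 0}^{k - 1} {x_j} \right]} = 1 - (N+1)!\,, \qquad \text{i.e.} \qquad \sum\limits_{k = 1}^{N} {k \cdot k!} = (N+1)! - 1.$$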
