I've seen some claims online that the golden ratio $\phi$ or $\phi-1$ is the "most irrational number". Now this may be true in a certain sense, but the version of the claim I've seen is in terms of irrational orbits on the circle $\mathbb{R}/\mathbb{Z}$.
As you know, for rational $\alpha$ the orbit $\{n\alpha\mod 1\}$ is stupid (it revisits the same finite set of points forever), but for irrational $\alpha$ the orbit never returns to the same point twice, and in fact is uniformly distributed across the circle (this is Weyl's equidistribution theorem).
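Equidistribution is easy to check numerically: bin the first $N$ orbit points and every bin should collect roughly $N/\text{bins}$ of them. A quick sketch (the choice of $\alpha=\sqrt{2}-1$ and all the constants are just examples of mine):

```python
import math

# Numerical check of equidistribution: bin the first N orbit points of an
# irrational rotation and see that every bin gets roughly N/bins points.
alpha = math.sqrt(2) - 1  # an example irrational
N, bins = 100_000, 10
counts = [0] * bins
for k in range(N):
    x = (k * alpha) % 1.0
    counts[min(int(x * bins), bins - 1)] += 1  # guard against float edge cases
```

For this $N$ every bin count lands close to $10{,}000$, while a rational $\alpha$ would leave most bins empty.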
Now the claim is that $\alpha=\phi-1\approx0.618\dots$ is especially good as an irrational number for this purpose because, vaguely, "it maximizes the amount of space each new point gets". And that this is useful for e.g. plants growing new leaves, ensuring that each new leaf gets the maximum amount of uncovered space (at least that makes sense if the newest leaf always grows at the bottom of the plant).
How might we formalize this claim?
Let $a_n:=n\alpha\mod 1$ be our sequence. We might define a "score" of the $n$th point based on its distances from the previous points:
$$s_n=f(d(a_n,a_{n-1}),d(a_n,a_{n-2}),\dots,d(a_n,a_0))$$
where $d(x,y)=\min(|x-y|,1-|x-y|)$.
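A minimal sketch of these definitions in Python (the function and variable names are my own):

```python
import math

def circle_dist(x, y):
    """d(x, y) = min(|x - y|, 1 - |x - y|): arc distance on R/Z."""
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def orbit(alpha, n):
    """The points a_0, ..., a_{n-1} with a_k = k * alpha mod 1."""
    return [(k * alpha) % 1.0 for k in range(n)]

phi_minus_1 = (math.sqrt(5) - 1) / 2  # the alpha the claim is about
```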
If you want to maximize the distance to the closest point, then this function would be the $\min$. But perhaps we care more about distances to other points as well -- maybe we care more for distances to more recent points than to older ones, because older leaves are likely to wither soon, etc. In general this function could be any weighted generalized mean of the distances:
$$s_n=\left[\frac{\sum_{i=0}^{n-1}{w(n-i)\,d(a_n,a_i)^p}}{\sum_{i=0}^{n-1}{w(n-i)}}\right]^{1/p}$$
Here $w(i)$ is some (usually non-increasing) weight function, e.g. $w(i)=i^{-q}$ -- so $q\ge 0$ is a measure of "how much do we prioritize distance to recent points over distance to old points?" and $p\in\mathbb{R}$ is roughly a measure of "how much do we prioritize distance from far points over distance from near points?" The usual formulation of the problem is $p=-\infty$ (we care only about the nearest point) and $q=0$ (we don't care about recency).
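Here is one way this score could be coded up (a sketch; the $p=\pm\infty$ and $p=0$ limits are handled as separate cases, since the power-mean formula breaks down there):

```python
import math

def circle_dist(x, y):
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def score(points, n, p, q):
    """s_n: weighted power mean of the distances from a_n to a_0..a_{n-1},
    with weights w(n - i) = (n - i)^(-q)."""
    dists = [circle_dist(points[n], points[i]) for i in range(n)]
    if p == float('-inf'):
        return min(dists)  # limit p -> -inf: only the nearest point matters
    if p == float('inf'):
        return max(dists)  # limit p -> +inf: only the farthest point matters
    ws = [(n - i) ** (-q) for i in range(n)]
    if p == 0:  # limit p -> 0 is the weighted geometric mean
        return math.exp(sum(w * math.log(d) for w, d in zip(ws, dists)) / sum(ws))
    return (sum(w * d ** p for w, d in zip(ws, dists)) / sum(ws)) ** (1 / p)
```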
Then we want to maximize
$$S_\infty=\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^N{s_n}$$
We can graph $S_\infty$ as a function of $\alpha$ for various $p,q$.
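To reproduce such a graph one can approximate $S_\infty$ by a finite average $S_N$. A sketch for the classic $p=-\infty$, $q=0$ setting ($N$ and any grid resolution over $\alpha$ are arbitrary choices of mine):

```python
import math

def circle_dist(x, y):
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def avg_score(alpha, N):
    """Approximate S_infinity by (1/N) * (s_1 + ... + s_N), where s_n is
    the p = -inf, q = 0 score: distance to the nearest earlier point."""
    pts = [(k * alpha) % 1.0 for k in range(N + 1)]
    return sum(min(circle_dist(pts[n], pts[i]) for i in range(n))
               for n in range(1, N + 1)) / N

phi_minus_1 = (math.sqrt(5) - 1) / 2
# Rational alphas collapse onto finitely many points, so nearly all their
# s_n are zero and they score badly compared to phi - 1.
```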
A weird thing I've observed is that the prettiest patterns, the ones that confirm the golden ratio claim, seem to emerge for settings like $p=-H,q=H$ with $H$ large. E.g. here's $p=-100,q=100$:
You'd think a large value of $q$ would mean caring only about the distance to the last point, which would favour $\alpha=0.5$ -- but apparently a large negative value of $p$ (prioritizing distances to close points over far ones) is just right to counteract it. Bringing $p$ closer to $0$ does make the optimal value kiss $0.5$:
$p=-10,q=20$:
$p=-3,q=20$:
$p=0,q=20$ (or really any $p\ge 0$ with $q\gg p$):
$p\ge 0$ with $q<p$ starts looking like a vampiric building. This is when we prioritize distance from far points, but do not sufficiently prioritize recent points.
$p\ge 0, q=0$ is uniform (or at least would be if we had taken enough iterations), which makes sense: if you're prioritizing far points and not prioritizing recent points, any $\alpha$ ultimately gives you similar density and thus similar average distances:
Now what is most odd is the $p<0$, $q\ll|p|$ range. You'd expect the pretty $\phi$-confirming patterns from earlier to hold up, but they shrink in height, and $\phi-1$ no longer holds a distinct advantage. Here's $p=-100,q=50$:
And here's $p=-100,q=0$:
This is kinda surprising, because this ($p=-\infty,q=0$) is the setting of the original problem where we expected to see the golden ratio. I thought it might be a numerical issue, but it isn't: you can see the same behavior at low iteration counts.
Well, at least the case that does work is a reasonable model of plants growing leaves, so I guess that explains that phenomenon.
https://www.desmos.com/calculator/khvmbw2q7a
Here's another question: how would a plant actually learn to have $\alpha=\phi-1$? I could imagine a simple computational graph like this for the plant to learn the parameter $\alpha$, but given how non-differentiable the score is as a function of $\alpha$ (and how rough it is even with a finite number of iterations), how does the plant avoid getting stuck in local optima?
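I don't know the biological answer, but one gradient-free possibility is a population-level search: mutate $\alpha$ slightly across offspring and let selection keep the fitter values, which sidesteps the non-differentiability entirely. A toy sketch, where fitness is the $p=-\infty$, $q=0$ score (all names and constants here are my own invention, not a model from the literature):

```python
import math, random

def circle_dist(x, y):
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def fitness(alpha, N=40):
    """Average distance-to-nearest-earlier-point (the p=-inf, q=0 score)."""
    pts = [(k * alpha) % 1.0 for k in range(N + 1)]
    return sum(min(circle_dist(pts[n], pts[i]) for i in range(n))
               for n in range(1, N + 1)) / N

random.seed(0)
pop = [random.random() for _ in range(20)]        # initial candidate alphas
for _ in range(30):                               # mutate-and-select loop
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                            # keep the fitter half
    children = [min(max(a + random.gauss(0, 0.02), 0.0), 1.0) for a in parents]
    pop = parents + children
best = max(pop, key=fitness)
```

This reliably escapes bad $\alpha$ values like $0.5$, though whether it homes in on exactly $\phi-1$ depends on $N$, the mutation scale, and the roughness of the score landscape.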