
All matrices can be diagonalised over R[X]/(X^n)

This post follows from my answer to the Math StackExchange question "What kind of matrices are non-diagonalisable?"


Non-diagonalisable 2 by 2 matrices can be diagonalised over the dual numbers -- and the "weird cases" like the Galilean transformation are not fundamentally different from the nilpotent matrices.

The intuition here is that the Galilean transformation is sort of a "boundary case" between real-diagonalisability (skews) and complex-diagonalisability (rotations), which you can sort of think of in terms of discriminants. In the case of the Galilean transformation $\begin{bmatrix}1 & v \\ 0 & 1\end{bmatrix}$, it's a small perturbation away from being diagonalisable, i.e. it sort of has "repeated eigenvectors" (you can visualise this with MatVis). So one may imagine that the two eigenvectors are only an "epsilon" away, where $\varepsilon$ is the unit dual satisfying $\varepsilon^2 = 0$ (called the "soul"). Indeed, its characteristic polynomial is:

$$(\lambda - 1)^2 = 0$$
whose solutions among the dual numbers are $\lambda = 1 + k\varepsilon$ for real $k$. So one may "diagonalise" the Galilean transformation over the dual numbers as e.g.:

$$\begin{bmatrix}1 & 0 \\ 0 & 1 + v\varepsilon\end{bmatrix}$$
Granted, this is not unique: this particular diagonalisation comes from the change-of-basis matrix $\begin{bmatrix}1 & 1 \\ 0 & \varepsilon\end{bmatrix}$, but any vector of the form $(1, k\varepsilon)$ is a valid eigenvector. You could, if you like, consider this a canonical or "principal value" of the diagonalisation, and in general each diagonalisation corresponds to a limit you can take of real/complex-diagonalisable transformations. Another way of thinking about this is that there is an entire eigenspace spanned by $(1, 0)$ and $(1, \varepsilon)$ in that little gap of multiplicity. In this sense, the geometric multiplicity is forced to be equal to the algebraic multiplicity*.
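To make this concrete, here's a minimal sketch of dual-number arithmetic in Python -- the `Dual` class and `matmul` helper are my own constructions, not from any library -- checking that $AP = PD$ for the Galilean transformation:

```python
class Dual:
    """A dual number a + b*eps, where the 'soul' eps satisfies eps**2 = 0."""
    def __init__(self, a, b=0):
        self.a, self.b = a, b
    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    def __eq__(self, other):
        return self.a == other.a and self.b == other.b
    def __repr__(self):
        return f"{self.a} + {self.b}*eps"

def matmul(X, Y):
    """Multiply square matrices with Dual entries."""
    n = len(X)
    Z = []
    for i in range(n):
        row = []
        for j in range(n):
            acc = Dual(0)
            for k in range(n):
                acc = acc + X[i][k] * Y[k][j]
            row.append(acc)
        Z.append(row)
    return Z

v = 3
eps = Dual(0, 1)
A = [[Dual(1), Dual(v)], [Dual(0), Dual(1)]]     # the Galilean transformation
P = [[Dual(1), Dual(1)], [Dual(0), eps]]         # eigenvector columns (1, 0) and (1, eps)
D = [[Dual(1), Dual(0)], [Dual(0), Dual(1, v)]]  # diag(1, 1 + v*eps)
assert matmul(A, P) == matmul(P, D)              # A*P == P*D
```

Note the check is $AP = PD$ rather than $D = P^{-1}AP$, since $\det P = \varepsilon$ is a zero divisor and $P$ has no inverse.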

Then a nilpotent matrix with characteristic polynomial $\lambda^2 = 0$ has solutions $\lambda = k\varepsilon$, and is simply diagonalised as:

$$\begin{bmatrix}0 & 0 \\ 0 & \varepsilon\end{bmatrix}$$
(Think about this.) Indeed, the resulting matrix has minimal polynomial $\lambda^2 = 0$, and the eigenvectors are as before.



What about higher dimensional matrices? Consider:

$$\begin{bmatrix}0 & v & 0 \\ 0 & 0 & w \\ 0 & 0 & 0\end{bmatrix}$$
This is a nilpotent matrix $A$ satisfying $A^3 = 0$ (but not $A^2 = 0$). The characteristic polynomial is $\lambda^3 = 0$. Although $\varepsilon$ might seem like a sensible choice, it doesn't really do the trick -- if you try a diagonalisation of the form $\operatorname{diag}(0, v\varepsilon, w\varepsilon)$, it has minimal polynomial $\lambda^2 = 0$, which is wrong. Indeed, you won't be able to find three linearly independent eigenvectors to diagonalise the matrix this way -- they'll all take the form $(a + b\varepsilon, 0, 0)$.
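You can check this failure directly. In the sketch below (my own encoding of dual numbers as `(real, eps-coefficient)` pairs), any diagonal candidate $\operatorname{diag}(0, v\varepsilon, w\varepsilon)$ squares to zero over the duals, while $A^2 \neq 0$:

```python
# Dual numbers as (real, eps-coefficient) pairs; eps**2 terms drop out.
def dmul(x, y):
    return (x[0] * y[0], x[0] * y[1] + x[1] * y[0])

def dadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

def matmul(X, Y):
    n = len(X)
    Z = []
    for i in range(n):
        row = []
        for j in range(n):
            acc = (0.0, 0.0)
            for k in range(n):
                acc = dadd(acc, dmul(X[i][k], Y[k][j]))
            row.append(acc)
        Z.append(row)
    return Z

v, w = 2.0, 3.0
zero = (0.0, 0.0)
A = [[zero, (v, 0.0), zero],
     [zero, zero, (w, 0.0)],
     [zero, zero, zero]]
D = [[zero, zero, zero],
     [zero, (0.0, v), zero],   # v*eps
     [zero, zero, (0.0, w)]]   # w*eps
assert any(e != zero for row in matmul(A, A) for e in row)  # A^2 != 0
assert all(e == zero for row in matmul(D, D) for e in row)  # but D^2 == 0 over the duals
```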

Instead, you need to consider a generalisation of the dual numbers, sometimes called (in computational mathematics and non-standard analysis) the "hyperdual numbers", with the soul satisfying $\varepsilon^n = 0$. Then the diagonalisation takes, for instance, the form:

$$\begin{bmatrix}0 & 0 & 0 \\ 0 & v\varepsilon & 0 \\ 0 & 0 & w\varepsilon\end{bmatrix}$$


*Over the reals and complexes, when one defines algebraic multiplicity (as "the multiplicity of the corresponding factor in the characteristic polynomial"), there is a single eigenvalue corresponding to that factor. This is of course no longer true over the hyperdual numbers, because they are not a field, and $ab = 0$ no longer implies "$a = 0$ or $b = 0$".

In general, if you want to prove things about these numbers, the way to formalise them is by constructing them as the quotient $\mathbb{R}[X]/(X^n)$, so you actually have something clear to work with.
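Following that construction, here's a sketch that implements $\mathbb{R}[X]/(X^3)$ as truncated coefficient lists and checks $AP = PD$ for the $3\times 3$ example above. The particular eigenvector matrix $P$ is one choice I've worked out by hand, assuming $v, w \neq 0$:

```python
N = 3  # work in R[X]/(X^3): elements are length-3 coefficient lists [c0, c1, c2]

def pmul(p, q):
    """Multiply in R[X]/(X^N): terms X^(i+j) with i + j >= N vanish in the quotient."""
    out = [0.0] * N
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < N:
                out[i + j] += a * b
    return out

def padd(p, q):
    return [a + b for a, b in zip(p, q)]

def const(c):
    return [float(c)] + [0.0] * (N - 1)

def matmul(X, Y):
    n = len(X)
    Z = []
    for i in range(n):
        row = []
        for j in range(n):
            acc = const(0)
            for k in range(n):
                acc = padd(acc, pmul(X[i][k], Y[k][j]))
            row.append(acc)
        Z.append(row)
    return Z

v, w = 2.0, 4.0
eps = [0.0, 1.0, 0.0]   # the class of X: eps**3 = 0 but eps**2 != 0
eps2 = pmul(eps, eps)

A = [[const(0), const(v), const(0)],
     [const(0), const(0), const(w)],
     [const(0), const(0), const(0)]]
# Eigenvector columns: (1, 0, 0) for 0; (1, eps, (v/w)*eps^2) for v*eps;
# (1, (w/v)*eps, (w/v)*eps^2) for w*eps.
P = [[const(1), const(1),                 const(1)],
     [const(0), eps,                      pmul(const(w / v), eps)],
     [const(0), pmul(const(v / w), eps2), pmul(const(w / v), eps2)]]
D = [[const(0), const(0),            const(0)],
     [const(0), pmul(const(v), eps), const(0)],
     [const(0), const(0),            pmul(const(w), eps)]]
assert matmul(A, P) == matmul(P, D)  # A*P == P*D, even though P is not invertible
```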

(Perhaps relevant: Grassmann numbers as eigenvalues of nilpotent operators -- the Hyperdual numbers are not the same as the Grassmann numbers, and the algebra of the Grassmann numbers is definitely different from that of nilpotent and shear matrices, but go see if you can make sense of it.)

Something important to note is that the diagonalisation is not of the form $D = P^{-1}AP$, as the eigenvector matrices are not invertible. However, it is still true that $PD = AP$. Nonetheless, this limitation prevents this formalism from being any good for e.g. dealing with polynomial-ish differential equations with repeated roots, as far as I can see. The infinitesimal-perturbation/"take a limit" approach we talked about in Limiting Cases II: repeated roots of a differential equation is still the right approach for that.
