Posts (2) containing the 'math.DG' (Differential Geometry) tag:
In this post we will look at one of the many remarkable findings by Roger W. Brockett.
Consider a Linear Program (LP)
$$\max_{x \in P}\ \langle c, x \rangle,$$
parametrized by the compact set $P = \{x \in \mathbb{R}^n : Ax \leq b\}$ and a suitable triple $(A, b, c)$.
As a solution to the LP can always be found at a vertex of $P$, a smooth method to solve it seems somewhat awkward.
We will see that one can construct a so-called isospectral flow that does the job.
Here we will follow Dynamical systems that sort lists, diagonalize matrices and solve linear programming problems by Roger W. Brockett (CDC 1988) and the book Optimization and Dynamical Systems by Uwe Helmke and John B. Moore (Springer, 2nd ed., 1996).
Let $P$ have $k$ vertices; then one can always find a map $\pi: \Delta^{k-1} \to P$, mapping the simplex $\Delta^{k-1} = \{\lambda \in \mathbb{R}^k_{\geq 0} : \sum_i \lambda_i = 1\}$ onto $P$.
Indeed, with some abuse of notation, let $V \in \mathbb{R}^{n \times k}$ be a matrix defined as $V e_i = v_i$, for $v_1, \dots, v_k$ the vertices of $P$, such that $\pi(\lambda) = V\lambda$.
Before we continue, we need to establish some differential geometric results. Given the special orthogonal group $SO(n) = \{\Theta \in \mathbb{R}^{n \times n} : \Theta^{\top}\Theta = I_n,\ \det(\Theta) = 1\}$, the tangent space at $\Theta$ is given by $T_{\Theta}SO(n) = \{\Theta\Omega : \Omega \in \mathbb{R}^{n \times n},\ \Omega^{\top} = -\Omega\}$. Note, this is the explicit formulation, which is indeed equivalent to shifting (left-translating) the corresponding Lie algebra $\mathfrak{so}(n)$.
The easiest way to compute this is to look at the kernel of the map defining the underlying manifold.
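Concretely (a short computation worth spelling out): writing $SO(n)$ locally as the zero set of $\Phi(\Theta) = \Theta^{\top}\Theta - I_n$, any smooth curve $t \mapsto \Theta(t)$ in $SO(n)$ with $\Theta(0) = \Theta$ and $\dot{\Theta}(0) = \xi$ satisfies
$$D\Phi(\Theta)[\xi] = \xi^{\top}\Theta + \Theta^{\top}\xi = 0,$$
that is, $\Theta^{\top}\xi$ is skew-symmetric, so $\xi = \Theta\Omega$ for some $\Omega^{\top} = -\Omega$.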
Now, following Brockett, consider the function $f: SO(n) \to \mathbb{R}$ defined by $f(\Theta) = \mathrm{tr}(N \Theta^{\top} Q \Theta)$ for some $Q, N \in \mathcal{S}^n$ with $N$ diagonal. This gradient-flow approach is not needed for the full construction, but it allows for more intuition and more explicit computations.
To construct the corresponding gradient flow, recall that the (Riemannian) gradient at $\Theta$ is defined via $\langle \mathrm{grad}\, f(\Theta), \xi \rangle = Df(\Theta)[\xi]$ for all $\xi \in T_{\Theta}SO(n)$. Using the explicit tangent space representation, we know that $\xi = \Theta\Omega$ with $\Omega^{\top} = -\Omega$.
Then, see that by using $H = \Theta^{\top} Q \Theta$ and the cyclic property of the trace,
$$Df(\Theta)[\Theta\Omega] = \mathrm{tr}(N \Omega^{\top} \Theta^{\top} Q \Theta) + \mathrm{tr}(N \Theta^{\top} Q \Theta \Omega) = \mathrm{tr}\big((NH - HN)\Omega\big),$$
we obtain the gradient via
$$\langle \mathrm{grad}\, f(\Theta), \Theta\Omega \rangle = \mathrm{tr}\big([N, H]\Omega\big)\ \ \forall\, \Omega^{\top} = -\Omega \quad \Longrightarrow \quad \mathrm{grad}\, f(\Theta) = \Theta[\Theta^{\top} Q \Theta, N].$$
This means that (the minus is missing in the paper) the (a) gradient flow becomes
$$\dot{\Theta} = \Theta[\Theta^{\top} Q \Theta, N].$$
Consider the standard commutator bracket $[A, B] = AB - BA$ and see that for $H = \Theta^{\top} Q \Theta$ one obtains from the equation above (typo in the paper)
$$\dot{H} = [H, [H, N]].$$
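Indeed, writing out $\dot{H}$ along $\dot{\Theta} = \Theta[H, N]$ (a step worth making explicit):
$$\dot{H} = \dot{\Theta}^{\top} Q \Theta + \Theta^{\top} Q \dot{\Theta} = -[H, N]H + H[H, N] = [H, [H, N]],$$
where we used that $[H, N]$ is skew-symmetric, as $H$ and $N$ are symmetric.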
Hence, $\dot{H} = [H, [H, N]]$ can be seen as a reparametrization of a gradient flow.
It turns out that this flow has a variety of remarkable properties. First of all, see that $H(t) = \Theta(t)^{\top} Q \Theta(t)$ preserves the eigenvalues of $H(0)$, whence the name isospectral.
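To see these properties in action, here is a minimal numerical sketch (my own, not from the references), assuming NumPy and SciPy; it integrates the flow for a random symmetric $Q$ and a diagonal $N$ with distinct entries:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: integrate the double bracket flow dH/dt = [H, [H, N]] and check
# that the spectrum of H(t) is preserved while H(t) becomes diagonal.
n = 5
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
Q = (A + A.T) / 2                      # symmetric initial data, H(0) = Q
N = np.diag(np.arange(1.0, n + 1.0))   # distinct diagonal entries

def bracket(X, Y):
    return X @ Y - Y @ X

def flow(t, h):
    H = h.reshape(n, n)
    return bracket(H, bracket(H, N)).ravel()

sol = solve_ivp(flow, (0.0, 50.0), Q.ravel(), rtol=1e-9, atol=1e-12)
H_inf = sol.y[:, -1].reshape(n, n)

print(np.sort(np.linalg.eigvalsh(Q)))                   # spectrum of H(0)
print(np.round(np.diag(H_inf), 6))                      # same values, sorted like N
print(np.linalg.norm(H_inf - np.diag(np.diag(H_inf))))  # off-diagonal part ~ 0
```

The flow drives $H$ to a diagonal matrix whose entries are ordered like those of $N$; this is the sorting property discussed below.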
Also, observe the relation between extremizing $f$ and the linear cost $\lambda \mapsto \langle c, V\lambda \rangle$ over the simplex. The idea to handle LPs is now that the limit $H(\infty)$ will relate to putting weight one on the correct vertex to get the optimizer; $\mathrm{diag}(H(\infty))$ gives you this weight, as $N$ will contain the corresponding (largest) costs.
In fact, the matrix $H = \Theta^{\top} Q \Theta$ can be seen as an element of the set $M(Q) = \{\Theta^{\top} Q \Theta : \Theta \in SO(n)\}$.
This set is in fact a $C^{\infty}$-smooth compact manifold, as it can be written as the orbit space corresponding to the group action $\sigma: SO(n) \times M(Q) \to M(Q)$,
$\sigma(\Theta, H) = \Theta^{\top} H \Theta$; one can check that this map satisfies the group-action properties. Hence, to extremize $H \mapsto \mathrm{tr}(NH)$ over $M(Q)$, it appears to be appealing to look at Riemannian optimization tools indeed. When doing so, it is convenient to understand the tangent space of $M(Q)$. Consider the map defining the manifold, $\sigma_H: SO(n) \to M(Q)$, $\sigma_H(\Theta) = \Theta^{\top} H \Theta$. Then by the construction of $T_{\Theta}SO(n)$, see that differentiating $t \mapsto \sigma_H(e^{t\Omega})$ at $t = 0$ yields the relation $T_H M(Q) = \{[H, \Omega] : \Omega^{\top} = -\Omega\}$ for any $H \in M(Q)$.
For the moment, let $Q, N \in \mathcal{S}^n$ be such that $N = \mathrm{diag}(\mu_1, \dots, \mu_n)$ and $H(0) = \Theta_0^{\top} Q \Theta_0 \in M(Q)$.
First we consider the convergence of $H(t)$.
Let $N$ have only distinct (diagonal) eigenvalues; then $H_{\infty} := \lim_{t \to \infty} H(t)$ exists and is diagonal.
Using the objective from before, consider $t \mapsto \mathrm{tr}(N H(t))$ and see that by using the skew-symmetry of $[H, N]$ one recovers the following:
$$\frac{d}{dt}\mathrm{tr}(NH) = \mathrm{tr}\big(N[H, [H, N]]\big) = \mathrm{tr}\big([N, H][H, N]\big) = \|[H, N]\|_F^2 \geq 0.$$
This means the cost monotonically increases, but by compactness converges, such that $H(t)$ approaches some point $H_{\infty}$. By construction, this point must satisfy $[H_{\infty}, N] = 0$. As $N$ has distinct eigenvalues, this can only be true if $H_{\infty}$ itself is diagonal.
More can be said about $H_{\infty}$: let $\lambda_1, \dots, \lambda_n$ be the eigenvalues of $Q$, that is, they correspond to the eigenvalues of $H(t)$ as defined above.
Then, as the flow preserves the eigenvalues of $H(0)$ (isospectrality), we must have $H_{\infty} = P^{\top} \Lambda P$, with $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$, for $P$ a permutation matrix.
This also tells us there is just a finite number of (diagonal) equilibrium points (a finite number of permutations). We will sometimes write these as $H_{\pi} = \mathrm{diag}(\lambda_{\pi(1)}, \dots, \lambda_{\pi(n)})$.
Now, as $H_{\infty}$ is one of those points, when does $H(t)$ converge to a particular $H_{\pi}$? To start this investigation, we look at the linearization of $\dot{H} = [H, [H, N]]$, which at an equilibrium point $H_{\pi}$ becomes
$$\dot{\xi}_{ij} = -(\lambda_{\pi(i)} - \lambda_{\pi(j)})(\mu_i - \mu_j)\,\xi_{ij}, \quad i \neq j,$$
for $\xi \in T_{H_{\pi}} M(Q)$. As we work with matrix-valued vector fields, this might seem like a daunting computation. However, at equilibrium points one does not need a connection and can again use the directional-derivative approach, in combination with the construction of $T_H M(Q)$, to figure out the linearization. The beauty is that from there one can see that the $H_{\pi}$ whose diagonal is ordered like that of $N$ is the only asymptotically stable equilibrium point of the flow. Differently put, almost all initial conditions will converge to this sorted $H_{\pi}$, with the rate captured by the spectral gaps in $Q$ and $N$. If $Q$ does not have distinct eigenvalues and we do not impose any eigenvalue ordering on $N$, one sees that an asymptotically stable equilibrium point must have the same eigenvalue ordering as $N$. This is the sorting property of the isospectral flow and this is of use for the next and final statement.
Theorem:
Consider the LP with $\langle c, v_i \rangle \neq \langle c, v_j \rangle$ for all $i \neq j$; then, there exist diagonal matrices $N = \mathrm{diag}(\langle c, v_1 \rangle, \dots, \langle c, v_k \rangle)$ and $Q = \mathrm{diag}(1, 0, \dots, 0)$
such that $H(t)$ converges for almost any $H(0) \in M(Q)$ to $H_{\infty} = e_{i^{\star}} e_{i^{\star}}^{\top}$, with the optimizer of the LP being $x^{\star} = V \mathrm{diag}(H_{\infty}) = v_{i^{\star}}$, $i^{\star} = \mathrm{arg\,max}_i \langle c, v_i \rangle$.
Proof:
Global convergence is prohibited by the topology of $M(Q)$ (a smooth flow on a compact manifold cannot have a globally asymptotically stable equilibrium point).
Let $N = \mathrm{diag}(\langle c, v_1 \rangle, \dots, \langle c, v_k \rangle)$ and let $Q = \mathrm{diag}(1, 0, \dots, 0)$. Then, the isospectral flow will converge from almost everywhere to the diagonal $H_{\infty}$ carrying the eigenvalue $1$ at the entry with the largest cost (only $\lambda_{i^{\star}} = 1$), such that $V \mathrm{diag}(H_{\infty}) = v_{i^{\star}} = x^{\star}$.
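As a sanity check of this construction, a minimal numerical sketch (again my own; the vertex matrix V and cost c below are hypothetical illustration data):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: solve max_{x in P} <c, x> over a polytope given by its vertices,
# by running dH/dt = [H, [H, N]] with N = diag(<c, v_i>), Q = diag(1, 0, ..., 0).
V = np.array([[0.0, 1.0, 0.0, 2.0],
              [0.0, 0.0, 1.0, 1.0]])      # hypothetical vertices v_i as columns
c = np.array([1.0, 0.5])                  # distinct costs: 0, 1, 0.5, 2.5
k = V.shape[1]

N = np.diag(c @ V)                         # vertex costs on the diagonal
Q = np.zeros((k, k)); Q[0, 0] = 1.0        # rank-one projection

rng = np.random.default_rng(1)
Theta0, _ = np.linalg.qr(rng.standard_normal((k, k)))
H0 = Theta0.T @ Q @ Theta0                 # generic starting point on M(Q)

def flow(t, h):
    H = h.reshape(k, k)
    HN = H @ N - N @ H
    return (H @ HN - HN @ H).ravel()

sol = solve_ivp(flow, (0.0, 200.0), H0.ravel(), rtol=1e-9, atol=1e-12)
lam = np.diag(sol.y[:, -1].reshape(k, k))  # limiting weights, ~ e_{i*}
print(np.round(lam, 4))                    # one entry ~ 1, the rest ~ 0
print(V @ lam)                             # ~ the optimal vertex, here (2, 1)
```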
Please consider the references for more on the fascinating structure of $M(Q)$.
Previously we looked at $\dot{x} = (Q - x^{\top} Q x\, I_n)x$ - and its resulting flow - as the result from mapping $\dot{z} = Qz$, $Q \in \mathcal{S}^n$, to the sphere $S^{n-1}$. However, we saw that for almost every initial condition this flow converges to $\pm v_1$, with $v_1$ such that for $Q v_i = \lambda_i v_i$ we have $\lambda_1 > \lambda_2 \geq \cdots \geq \lambda_n$. Hence, it is interesting to look at the flow from an optimization point of view: $\max_{x \in S^{n-1}} x^{\top} Q x$.
A fair question would be: is our flow not simply Riemannian gradient ascent for this problem? As is common with these kinds of problems, such a hypothesis is something you just feel.
Now, using the tools from (CU1994, p. 311), we can compute the gradient of $f(x) = x^{\top} Q x$ (on the sphere) via $\mathrm{grad}\, f(x) = \nabla f(x) - \langle \nabla f(x), n(x) \rangle n(x)$, where $n(x)$ is a unit vector field normal to $S^{n-1}$, e.g., $n(x) = x$.
From there we obtain
$$\mathrm{grad}\, f(x) = 2Qx - 2(x^{\top} Q x)x = 2(Q - x^{\top} Q x\, I_n)x.$$
To make our life a bit easier, use $\tfrac{1}{2} x^{\top} Q x$ instead of the map $x \mapsto x^{\top} Q x$. Moreover, set $g = \tfrac{1}{2} f$. Then it follows that
$$\mathrm{grad}\, g(x) = (Q - x^{\top} Q x\, I_n)x.$$
Of course, to make the computation cleaner, we changed $f$ to $\tfrac{1}{2} f$, but the relation between $\dot{z} = Qz$ and $\mathrm{grad}\, g$ is beautiful. Somehow, mapping trajectories of $\dot{z} = Qz$, for some $Q \in \mathcal{S}^n$, to the sphere corresponds to (Riemannian) gradient ascent applied to the problem $\max_{x \in S^{n-1}} \tfrac{1}{2} x^{\top} Q x$.
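A quick numerical illustration of this claim (a sketch under the same NumPy/SciPy assumptions as before):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: dx/dt = (Q - (x'Qx) I) x on the sphere, i.e., Riemannian gradient
# ascent for (1/2) x'Qx, converges to the dominant eigenvector of Q.
n = 6
rng = np.random.default_rng(2)
A = rng.standard_normal((n, n))
Q = (A + A.T) / 2

def flow(t, x):
    # The sphere is invariant: d/dt ||x||^2 = 0 whenever ||x|| = 1.
    return Q @ x - (x @ Q @ x) * x

x0 = rng.standard_normal(n)
x0 /= np.linalg.norm(x0)
sol = solve_ivp(flow, (0.0, 100.0), x0, rtol=1e-9, atol=1e-12)
x_inf = sol.y[:, -1]

w, U = np.linalg.eigh(Q)       # eigenvalues in ascending order
v1 = U[:, -1]                  # dominant eigenvector
print(abs(v1 @ x_inf))         # ~ 1: aligned with v1 up to sign
```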
Now, we formalize the previous analysis a bit and show how fast we converge. Assume that the eigenvectors are ordered such that eigenvector $v_1$ corresponds to the largest eigenvalue $\lambda_1$ of $Q$. Then, the solution to $\dot{z} = Qz$, $z(0) = z_0$, is given by
$$z(t) = e^{Qt} z_0 = \sum\nolimits_{i=1}^n e^{\lambda_i t} \langle v_i, z_0 \rangle v_i.$$
Let $z_0 = \sum_{i=1}^n c_i v_i$ be expressed in eigenvector coordinates, with all $v_i$ normalized. Moreover, assume all eigenvalues are distinct. Then, to measure if $z(t)/\|z(t)\|$ is near $\pm v_1$, we compute $\langle v_1, z(t)/\|z(t)\| \rangle^2$, which is $1$ if and only if $z(t)$ is parallel to $v_1$. To simplify the analysis a bit, we look at $\langle v_1, z(t)/\|z(t)\| \rangle^2 \geq 1 - \varepsilon$, for some perturbation $\varepsilon > 0$; this yields
$$\frac{c_1^2 e^{2\lambda_1 t}}{\sum_{i=1}^n c_i^2 e^{2\lambda_i t}} \geq 1 - \varepsilon.$$
Next, take the (natural) logarithm on both sides:
$$2\lambda_1 t + \log(c_1^2) - \log\Big(\sum\nolimits_{i=1}^n c_i^2 e^{2\lambda_i t}\Big) \geq \log(1 - \varepsilon).$$
These log-sum-exp terms are hard to deal with, but we can apply the so-called ‘‘log-sum-exp trick’’:
$$\log \sum\nolimits_i e^{y_i} = y^{\star} + \log \sum\nolimits_i e^{y_i - y^{\star}}, \quad y^{\star} = \max_i y_i.$$
In our case, we set $y_i = 2\lambda_i t + \log(c_i^2)$ (such that, for $t$ sufficiently large, $y^{\star} = 2\lambda_1 t + \log(c_1^2)$) and obtain
$$-\log\Big(1 + \sum\nolimits_{i=2}^n \frac{c_i^2}{c_1^2} e^{-2(\lambda_1 - \lambda_i)t}\Big) \geq \log(1 - \varepsilon).$$
We clearly observe that for $t \to \infty$ the LHS approaches $0$ from below, which means that $\varepsilon$ can be taken to $0$ from above, as intended. Of course, we also observe that the mentioned method is not completely general; we already assume distinct eigenvalues, but there is more. We also do not converge when $c_1 = \langle v_1, z_0 \rangle = 0$, which is however a set of measure zero on the sphere $S^{n-1}$.
More interestingly, we see that the convergence rate is largely dictated by the ‘‘spectral gap/eigengap’’ $\lambda_1 - \lambda_2$. Specifically, to have a particular projection error $\varepsilon$, such that $\langle v_1, z(t)/\|z(t)\| \rangle^2 \geq 1 - \varepsilon$, it suffices to have
$$t \geq \frac{1}{2(\lambda_1 - \lambda_2)} \log\Big(\frac{1 - \varepsilon}{\varepsilon} \sum\nolimits_{i=2}^n \frac{c_i^2}{c_1^2}\Big).$$
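A quick numerical check of this sufficient time (my own sketch; the spectrum lam below is hypothetical):

```python
import numpy as np

# Check: project z(t) = exp(Qt) z0 onto v1 at the sufficient time t* derived
# above and verify that the projection error is indeed at most eps.
rng = np.random.default_rng(3)
lam = np.array([3.0, 2.5, 1.0, -0.5])   # hypothetical distinct spectrum, gap 0.5
c = rng.standard_normal(lam.size)       # coordinates of z0 in the eigenbasis
eps = 1e-6

S = np.sum(c[1:] ** 2) / c[0] ** 2
t_star = np.log((1 - eps) / eps * S) / (2 * (lam[0] - lam[1]))

w = c * np.exp(lam * t_star)            # eigenbasis coordinates of z(t*)
err = 1 - w[0] ** 2 / np.sum(w ** 2)    # 1 - <v1, z/||z||>^2
print(t_star, err, err <= eps)          # the bound holds: err <= eps
```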
Comparing this to the flow resulting from $f$ itself (without the factor $\tfrac{1}{2}$), $\dot{x} = 2(Q - x^{\top} Q x\, I_n)x$, we see that we have the same flow, but with $Q$ replaced by $2Q$.
This is interesting, since $Q$ and $2Q$ have the same eigenvectors, yet a different (scaled) spectrum. With respect to the convergence rate, we have to compare $\lambda_1 - \lambda_2$ and $2(\lambda_1 - \lambda_2)$ for any $Q$ with $\lambda_1 > \lambda_2$ (the additional scaling of the flow itself is not so interesting).
It is obvious what will happen; the crux is: is the new spectral gap larger or smaller than $\lambda_1 - \lambda_2$? Can we immediately extend this to a Newton-type algorithm? Well, this fails (globally) since we work on $S^{n-1}$ instead of purely with $\mathbb{R}^n$. To be concrete, $\dim S^{n-1} = n - 1$; we never have $n$ degrees of freedom.
Of course, these observations turned out to be far from new, see for example (AMS2008, sec. 4.6).
(AMS2008) P.-A. Absil, R. Mahony and R. Sepulchre: ‘‘Optimization Algorithms on Matrix Manifolds’’, Princeton University Press, 2008.
(CU1994) Constantin Udriste: ‘‘Convex Functions and Optimization Methods on Riemannian Manifolds’’, Kluwer Academic Publishers, 1994.