On Critical Transitions in Nature and Society | 29 January 2023
Figure 1: The manifold
In this section we highlight a few (not all) of the most elegant topological results in stability theory. We focus on results that do not appeal to the specific, possibly only partially known, vector field.
This line of work finds its roots in work by Bhatia, Szegö, Wilson, Sontag and many others.
Theorem [The domain of attraction is contractible, Theorem 21]
Let the flow be continuous and consider an asymptotically stable equilibrium point. Then, its domain of attraction is contractible.
This theorem by Sontag states that the underlying topology of a dynamical system, better yet, of the space on which the dynamical system evolves, dictates what is possible. Recall that linear manifolds are simply hyperplanes, which are contractible; hence, as we all know, global asymptotic stability is possible for linear systems. However, if the manifold is not contractible, there does not exist a globally asymptotically stabilizable continuous-time system on it; take for example any sphere.
The next example shows that one should not underestimate ‘‘linear systems’’ either.
Example [Dynamical Systems on ]
Recall that
is a smooth
-dimensional manifold (Lie group).
Then, consider for some
,
the (right-invariant) system
Indeed, the solution to (1) is given by . Since this group consists of two disjoint components, there does not exist a matrix
, or continuous nonlinear map for that matter, which can make a vector field akin to (1) globally asymptotically stable. This should be contrasted with the closely related ODE
,
. Even for the path connected component
, the theorem by Sontag obstructs the existence of a continuous globally asymptotically stable vector field. This is because the group is not simply connected for
(This follows most easily from establishing the homotopy equivalence between
and
via (matrix) Polar Decomposition), hence the fundamental group is non-trivial and contractibility is out of the picture. Observe that if one picks
to be stable (Hurwitz), then for
we have
, however
.
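The component argument above is easy to verify numerically. Below is a small sketch (in Python/NumPy rather than the Julia used later in this blog) with a hypothetical Hurwitz matrix A: by Liouville's formula, det(e^{At}) = e^{t·tr(A)} > 0, so the flow of (1) can never move an initial condition between the two components of the general linear group.

```python
import numpy as np

def expm(M, terms=40):
    # Truncated Taylor series of the matrix exponential (fine for small ||M||).
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # hypothetical Hurwitz matrix
X0 = np.array([[1.0, 0.0], [0.0, 1.0]])    # initial condition, det = +1 component
Y0 = np.array([[0.0, 1.0], [1.0, 0.0]])    # initial condition, det = -1 component

for t in np.linspace(0.0, 2.0, 5):
    Phi = expm(A * t)
    # Liouville: det(e^{At}) = e^{t tr A} > 0, so the flow never swaps components,
    # although both trajectories approach the boundary of the group (det -> 0).
    assert np.sign(np.linalg.det(Phi @ X0)) == 1.0
    assert np.sign(np.linalg.det(Phi @ Y0)) == -1.0
```

In particular, the trajectory started in the det-negative component decays towards the singular matrices but never reaches the det-positive component, so no continuous vector field on the group can be globally asymptotically stable.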
Following up on Sontag, Bhat and Bernstein revolutionized the field by deriving some very important, and readily applicable, ramifications. The key observation is the next lemma (which follows from intersection-theory arguments, even for non-orientable manifolds).
Lemma [Proposition 1] Compact, boundaryless manifolds are never contractible.
Clearly, we have to ignore -dimensional manifolds here. Important examples of this lemma are the
-sphere
and the rotation group
as they make a frequent appearance in mechanical control systems, like a robotic arm.
Note that the boundaryless assumption is especially critical here.
Now the main generalization by Bhat and Bernstein with respect to the work of Sontag is to use a (vector) bundle construction of .
Loosely speaking, given a base
and total manifold
,
is a vector bundle when for the surjective map
and each
we have that the fiber
is a finite-dimensional vector space (see Section 3.4 of the great book by Abraham, Marsden and Ratiu, and/or the figure below).
Figure 2: A prototypical vector bundle, in this case the cylinder
Intuitively, vector bundles can be thought of as manifolds with vector spaces attached at each point, think of a cylinder living in , where the base manifold is
and each fiber is a line through
extending in the third direction, exactly as the figure shows. Note, however, that this triviality only needs to hold locally.
A concrete example is a rigid body with , for which a large number of papers claim to have designed continuous globally asymptotically stable controllers. The fact that this obstruction was overlooked for so long makes the topological result not only elegant from a mathematical point of view, it also shows its practical value.
The motivation of this post is precisely to convey this message: topological insights can be very fruitful.
Now we can state their main result.
Theorem [Theorem 1]
Let be a compact, boundaryless, manifold, being the base of the vector bundle
with
, then, there is no continuous vector field on
with a globally asymptotically stable equilibrium point.
Indeed, compactness of can also be relaxed to
not being contractible, as we saw in the example above; however, compactness is in most settings more tangible to work with.
Example [The Grassmannian manifold]
The Grassmannian manifold, denoted by , is the set of all
-dimensional subspaces
. One can identify
with the Stiefel manifold
, the manifold of all orthogonal
-frames, in the bundle sense of before, that is
such that the fiber
represent all
-frames generating the subspace
. In its turn,
can be identified with the compact Lie group
(via a quotient), such that indeed
is compact.
This manifold shows up in several optimization problems, and from our continuous-time point of view we see that one can never find, for example, a globally converging gradient-flow-like algorithm.
Example [Tangent Bundles]
From a Lagrangian viewpoint, many mechanical systems are of a second-order nature; this means they are defined on the tangent bundle of some , that is,
. However, then, if the configuration manifold
is compact, we can again appeal to the Theorem by Bhat and Bernstein. For example, Figure 2 can relate to the manifold over which the pair
is defined.
One might wonder, given a vector field over a manifold
, since we need contractibility of
for
to have a globally asymptotically stable equilibrium point, how interesting are these manifolds? As it turns out, for
, which some might call the high-dimensional topological regime, contractible manifolds have a particular structure.
A topological space being simply connected is equivalent to the corresponding fundamental group being trivial. However, to highlight the work by Stallings, we need a slightly less well-known notion.
Definition [Simply connected at infinity]
The topological space is simply connected at infinity, when for any compact subset
there exists a
where
such that
is simply connected.
This definition is rather complicated to parse; however, we can give a more tangible description. Let be a non-compact topological space, then,
is said to have one end when for any compact
there is a
where
such that
is connected. So,
, as expected, fails to have one end, while
does have one end.
Now, Proposition 2.1 shows that if
and
are simply connected and both have one end, then
is simply connected at infinity. This statement somewhat clarifies why dimension
and above are easier to handle in the context of topology. A similar statement can be made about the cylindrical space
.
Lemma [Product representation Proposition 2.4]
Let and
be manifolds of dimension greater or equal than
. If
is contractible and
, then,
is simply connected at infinity.
Now, most results are usually stated in the context of piecewise linear (PL) topology. By appealing to celebrated results on triangulation (by Whitehead) we can state the following.
Theorem [Global diffeomorphism Theorem 5.1]
Let be a contractible
,
,
-dimensional manifold which is simply connected at infinity. If
, then,
is diffeomorphic to
.
Figure 3: The manifold
You might say that this is all obvious; as in Figure 3, we can always do it, also in lower dimensions. However, there is a famous counterexample by Whitehead, which provided lots of motivation for the community:
‘‘There is an open, -dimensional manifold which is contractible, but not homeomorphic to
.’’
In fact, this example was part of the now notorious program to solve the Poincaré conjecture. All in all, this shows that contractible manifolds are not that interesting from a nonlinear dynamical systems point of view; compact manifolds are a class providing more of a challenge.
So far, the discussion has been on the stabilization of (equilibrium) points; now what if we want to stabilize a curve or any other desirable set, that is, a non-trivial set . It seems like the shape of these objects, with respect to the underlying topology of
, must again indicate whether this is possible. Let
denote an open neighbourhood of some
. In what follows, the emphasis is on
and
having the same topology. We will be brief and follow Moulay and Bhat.
Lemma [Theorem 5]
Consider a closed-loop dynamical system giving rise to a continuous flow. Suppose that the compact set is asymptotically stable with domain of attraction
. Then,
is a weak deformation retract of
.
In rather loose terms, being a weak deformation retract of
means that one can continuously ‘‘retract’’ points from
and morph them into the set
. So, this works for
and
, due to
being punctured. See p.200 for more information on retracts. In Figure 4 below we show that since this lemma is mostly concerned with the local topology, the global topology is less important. For
, the set
is not a deformation retract of the set
such that
can never be a region of attraction, this should be contrasted with the setting on the Torus
.
Figure 4: The topological relation between
The beauty of all of this is that even if you are not so sure about the exact dynamical system at hand, the underlying topology can already provide some answers; in particular, it can determine what can, or cannot, be done.
This post came about during a highly recommended course at EPFL.
There was a bit of radio silence, but as indicated here, some interesting stuff is on its way.
In this post we highlight the 1976 paper by Mullis and Roberts on ''Synthesis of Minimum Roundoff Noise Fixed Point Digital Filters''.
Let us be given some single-input single-output (SISO) dynamical system
It is known that the input-output behaviour of any , that is, the map from
to
is invariant under a similarity transform.
To be explicit, the tuples
and
, which correspond to the change of coordinates
for some
, give rise to the same input-output behaviour.
Hence, one can define the equivalence relation
by imposing that the input-output maps of
and
are equivalent.
By doing so we can consider the quotient
. However, in practice, given a
, is any
such that
equally useful?
For example, the following
and
are similar, but clearly,
is preferred from a numerical point of view:
In what follows, we highlight the approach from Mullis and Roberts and conclude with how to optimally select . Norms will be interpreted in the
sense, that is, for
,
. Also, in what follows we assume that
plus that
is stable, which will mean
and that
corresponds to a minimal realization.
The first step is to quantify the error. Assume we have allocated bits at our disposal to represent the
component of
. These values for
can differ, but we constrain the average by
for some
. Let
be a 'step-size’, such that our dynamic range of
is bounded by
(these are our possible representations). Next we use the modelling choices from Mullis and Roberts; of course they are debatable, but still, we will infer some nice insights.
First, to bound the dynamic range, consider solely the effect of an input, that is, define by
,
. Then we will impose the bound
on
. In combination with the step size (quantization), this results in
. Let
be a sequence of i.i.d. sampled random variables from
. Then we see that
. Hence, one can think of
as a scaling parameter related to the probability with which this dynamic range bound holds.
Next, assume that all the round-off errors are independent and have a variance of .
Hence, the worst-case variance of computing
is
. To evaluate the effect of this error on the output, assume for the moment that
is identically
.
Then,
. Similar to
, define
as the
component of
. As before we can compute the variance, this time of the full output signal, which yields
. Note that these expressions hinge on the independence assumption.
Next, define the (infamous) matrices ,
by
If we assume that the realization is not just minimal, but that is a controllable pair and that
is an observable pair, then,
and
.
Now the key observation is that
and similarly
.
Hence, we can write
as
and indeed
.
Using these Lyapunov equations we can say goodbye to the infinite-dimensional objects.
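To make that concrete, the two Lyapunov (Stein) equations can be solved with a few lines of NumPy; this is a sketch (the blog's own experiment later is in Julia), and the stable realization (A, b, c) below is a hypothetical stand-in.

```python
import numpy as np

def dlyap(A, Q, steps=12):
    # Solves X = A X A^T + Q for Schur-stable A via the doubling trick:
    # after k steps, X equals the partial sum over j < 2^k of A^j Q (A^T)^j.
    X, Ak = Q.astype(float).copy(), A.astype(float).copy()
    for _ in range(steps):
        X = X + Ak @ X @ Ak.T
        Ak = Ak @ Ak
    return X

# Hypothetical stable, minimal SISO realization (A, b, c).
A = np.array([[0.5, 0.2], [0.0, 0.3]])
b = np.array([[1.0], [1.0]])
c = np.array([[1.0, 0.0]])

W = dlyap(A, b @ b.T)    # controllability Gramian: W = A W A^T + b b^T
K = dlyap(A.T, c.T @ c)  # observability Gramian:   K = A^T K A + c^T c
assert np.allclose(W, A @ W @ A.T + b @ b.T)
assert np.allclose(K, A.T @ K @ A + c.T @ c)
```

The doubling trick replaces the infinite sum defining the Gramians by a finite computation, which is precisely the point of passing to the Lyapunov equations.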
To combine these error terms, let us say we apply a coordinate transformation , for some
.
Specifically, let
be diagonal and defined by
.
Then one can find that
,
. The latter expression allows us to express the output error (variance) by
Now we want to minimize over all
plus some optimal selection of
. At this point it looks rather painful. To make it work we first reformulate the problem using the well-known arithmetic-geometric mean inequality
for non-negative sequences
:
This inequality yields
See that the right term is independent of , hence this is a lower-bound with respect to minimization over
. To achieve this (to make the inequality from
an equality), we can select
Indeed, as remarked in the paper, is not per se an integer. Nevertheless, by this selection we find the clean expression from
to minimize over systems equivalent to
, that is, over some transformation matrix
. Define a map
by
It turns out that if and only if
is diagonal. This follows
from Hadamard's inequality.
We can use this map to write
Since the term is invariant under a transformation
, we can only optimize
over a structural change in realization tuple
, that is, we need to make
and
simultaneously diagonal! It turns out that this numerically 'optimal' realization, denoted
, is what is called a principal axis realization.
To compute it, diagonalize ,
and define
. Next, construct the diagonalization
. Then our desired transformation is
. First, recall that under any
the pair
becomes
. Plugging in our map
yields the transformed matrices
, which are indeed diagonal.
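The two-stage diagonalization just described can be sketched numerically. The Gramians below are hypothetical symmetric positive definite matrices, and the helper name is our own; the point is only that the constructed transformation renders both Gramians simultaneously diagonal.

```python
import numpy as np

def principal_axis_transform(W, K):
    # Sketch: build T so that W' = T^{-1} W T^{-T} and K' = T^T K T
    # are simultaneously diagonal.
    s, U = np.linalg.eigh(W)
    T1 = U @ np.diag(np.sqrt(s))          # whitens W: T1^{-1} W T1^{-T} = I
    d, V = np.linalg.eigh(T1.T @ K @ T1)  # diagonalize K in whitened coordinates
    return T1 @ V

# Hypothetical controllability/observability Gramians.
W = np.array([[2.0, 0.5], [0.5, 1.0]])
K = np.array([[1.5, -0.2], [-0.2, 0.8]])

T = principal_axis_transform(W, K)
Ti = np.linalg.inv(T)
Wp = Ti @ W @ Ti.T   # transformed controllability Gramian
Kp = T.T @ K @ T     # transformed observability Gramian

# Off-diagonal entries vanish (up to round-off).
assert np.allclose(Wp - np.diag(np.diag(Wp)), 0.0, atol=1e-10)
assert np.allclose(Kp - np.diag(np.diag(Kp)), 0.0, atol=1e-10)
```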
At last, we do a numerical test in Julia. Consider a linear system . To simulate numerical problems, we round the maps
: To show this AM-GM inequality, we can use Jensen's inequality for concave functions, that is,
. We can use the logarithm as our prototypical concave function on
and find
. Then, the result follows.
: The inequality attributed to Hadamard is slightly more difficult to show. In its general form the statement is that
, for
the
column of
. The inequality becomes an equality when the columns are mutually orthogonal. The intuition is clear if one interprets the determinant as the signed volume spanned by the columns of
. In case
, we know that there is
such that
, hence, by this simple observation it follows that
Equality holds when the columns are orthogonal, so must be diagonal, but
must also hold, hence,
must be diagonal, and thus
must be diagonal, which is the result we use.
Lately, linear discrete-time control theory has started to appear in areas far beyond the ordinary. As a byproduct, papers have started to surface which claim that stability of linear discrete-time systems
is characterized by .
The confusion is complete when this object is called the spectral norm of a matrix
.
Indeed, stability of
is not characterized by
, but by
.
If
, then all eigenvalues
of are strictly smaller than one in absolute value, and hence for
,
.
This follows from the Jordan form of
in combination with the simple observation that for
,
.
To continue, define two sets: and
.
Since
we have
. Now, the main question is, how much do we lose by approximating
with
? Motivation to do so is given by the fact that
is a norm and hence a convex function
, i.e., when given a convex polytope
with vertices
, if
, then
. Note that
is not a norm,
can be
without
(consider an upper-triangular matrix with zeros on the main diagonal); the triangle inequality can easily fail as well. For example, let
Then , but
, hence
fails to hold.
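Since the post's matrices are not reproduced here, a quick NumPy sketch with a standard pair of nilpotent matrices (our own hypothetical choice) makes both failures explicit: the spectral radius vanishes on nonzero matrices and violates the triangle inequality.

```python
import numpy as np

def rho(M):
    # Spectral radius: largest eigenvalue in absolute value.
    return max(abs(np.linalg.eigvals(M)))

# Nilpotent matrices: rho = 0 without the matrix being 0, so rho is not a norm.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

assert rho(A) == 0.0 and rho(B) == 0.0
# A + B has eigenvalues +1 and -1: the triangle inequality fails, 1 > 0 + 0.
assert abs(rho(A + B) - 1.0) < 1e-12
```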
A setting where the aforementioned property of
might help is Robust Control, say we want to assess if some algorithm rendered a compact convex set
, filled with
's, stable.
As highlighted before, we could just check if all extreme points of
are members of
, which might be a small and finite set.
Thus, computationally, it appears to be attractive to consider
over the generic
.
As a form of critique, one might say that is a lot larger than
.
Towards such an argument, one might recall that
. Indeed,
if
, but
. Therefore, it seems like the set of
for which considering
over
is reasonable, is negligibly small.
To say a bit more, since
we see that we can always find a ball with non-zero volume fully contained in
.
Hence,
is at least locally dense in
. The same holds for
(which is more obvious).
So in principle we could try to investigate
. For
, the sets agree, which degrades asymptotically.
However, is this the way to go? Let's say we consider the set
. Clearly, the sets
and
are different, even in volume, but for sufficiently large
, should we care? The behaviour they parametrize is effectively the same.
We will stress that by approximating with
, regardless of their volumetric difference, we are ignoring a full class of systems and missing out on a non-negligible set of behaviours.
To see this, any system described by
is contractive in the sense that
, while systems in
are merely asymptotically stable. They might wander off before they return, i.e., there is no reason why for all
we must have
. We can do a quick example, consider the matrices
Then ,
and both
and
have
.
We observe that indeed
is contractive, for any initial condition on
, we move strictly inside the sphere, whereas for
, when starting from the same initial condition, we take a detour outside of the sphere before converging to
. So although
and
have the same spectrum, they parametrize different systems.
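The same contrast can be sketched in a few lines of NumPy; the matrices below are hypothetical stand-ins for the pair in the post (one contractive, one merely Schur stable with equal spectral radius up to choice).

```python
import numpy as np

rho = lambda M: max(abs(np.linalg.eigvals(M)))   # spectral radius
specnorm = lambda M: np.linalg.norm(M, 2)        # spectral norm

# Hypothetical stand-ins: A is contractive, B is only asymptotically stable.
A = np.array([[0.5, 0.0], [0.0, 0.5]])   # ||A|| < 1
B = np.array([[0.5, 2.0], [0.0, 0.5]])   # rho(B) < 1 but ||B|| > 1

assert rho(A) < 1 and rho(B) < 1
assert specnorm(A) < 1 < specnorm(B)

x = np.array([0.0, 1.0])                  # initial condition on the unit sphere
norms_A = [np.linalg.norm(np.linalg.matrix_power(A, k) @ x) for k in range(20)]
norms_B = [np.linalg.norm(np.linalg.matrix_power(B, k) @ x) for k in range(20)]

assert all(n2 <= n1 for n1, n2 in zip(norms_A, norms_A[1:]))  # never leaves the ball
assert max(norms_B) > 1.0   # takes a detour outside the sphere ...
assert norms_B[-1] < 1e-2   # ... yet still converges to the origin
```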
In our deterministic setting this would mean that we confine our state space to a (solid) sphere with radius , instead of
.
Moreover, in linear optimal control, the resulting closed-loop system is usually not contractive.
Think of the infamous pendulum on a cart. Being energy efficient usually has nothing to do with taking the shortest path in the Euclidean sense.
: Recall,
. Then, let
be some eigenvector of
. Now we have
. Since this eigenvalue is arbitrary it follows that
.
: Let
then
. This follows from the
being a norm.
: Clearly, if
, we have
. Now, when
, does this imply that
? The answer is no, consider
Then, , yet,
. For the full set of conditions on
such that
see this paper by Goldberg and Zwas.
: Recall that
. This expression is clearly larger or equal to
.
This short note is inspired by the beautiful visualization techniques from Duistermaat and Kolk (DK1999).
Let's say we have a -dimensional linear discrete-time dynamical system
, which preserves the volume of the cube
under the dynamics, i.e.
for any
.
Formally put, this means that is part of a certain group, specifically, consider some field
and
define the Special Linear group by
Now, assume that we are only interested in matrices such that the cube
remains bounded under the dynamics, i.e.,
is bounded. In this scenario we restrict our attention to
. To see why this is needed, consider
and
both with determinant
:
If we look at the image of under both these maps (for several iterations), we see that volume is preserved, but also, clearly, that the set extends indefinitely in the horizontal direction.
To have a somewhat interesting problem, assume we are given with
while it is our task to find a
such that
, hence, find feedback which not only preserves the volume, but keeps any set of initial conditions (say
) bounded over time.
Towards a solution, we might ask, given any what is the nearest (in some norm) rotation matrix? This turns out to be a classic question, starting from orthogonal matrices, the solution to
is given by
, where
follows from the SVD of
,
.
Differently put, when we use a polar decomposition of
,
, then the solution is given by
. See this note for a quick multiplier proof.
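The SVD recipe above is easy to sketch in NumPy (the test matrix below is our own hypothetical choice): the polar factor of A is the closest orthogonal matrix in the Frobenius norm.

```python
import numpy as np

def nearest_orthogonal(A):
    # Polar factor: for A = U S V^T the closest orthogonal matrix
    # (in the Frobenius norm) is R = U V^T.
    U, _, Vt = np.linalg.svd(A)
    return U @ Vt

A = np.array([[2.0, 0.3], [-0.4, 1.0]])   # hypothetical matrix, det(A) > 0
R = nearest_orthogonal(A)
assert np.allclose(R.T @ R, np.eye(2))    # R is orthogonal

# Spot-check optimality against a sample of rotations.
rng = np.random.default_rng(0)
for _ in range(100):
    th = rng.uniform(0.0, 2.0 * np.pi)
    Q = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    assert np.linalg.norm(A - R) <= np.linalg.norm(A - Q) + 1e-12
```

Since det(A) > 0 here, the polar factor is itself a rotation, which foreshadows the sign-checking discussion below.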
An interesting side note: problem
is well-defined since
is compact. To see this, recall that for any
we have
, hence the set is bounded, plus
is the pre-image of
under
,
, hence the set is closed as well.
This is great; however, we would like to optimize over instead. To do so, one usually resorts to simply checking the sign and, if necessary, flipping it by finding the closest matrix with a change in determinant sign.
We will see that by selecting appropriate coordinates we can find the solution without this checking of signs.
For today we look at and largely follow (DK1999). For such a
-dimensional matrix, the determinant requirement translates to
. Under the following (invertible) linear change of coordinates
becomes
, i.e., for any pair
the point
runs over a circle with radius
. Hence, we obtain the diffeomorphism
. We can, however, use a prettier diffeomorphism between the sphere and an open set in
. To that end, use the diffeomorphism:
for (formal way of declaring that
should not be any real number) and the pair
being part of
(the open unit-disk).
To see this, recall that the determinant is not a linear operator.
Since we have a
-dimensional example we can explicitly compute the eigenvalues of any
(plug
into the characteristic equation) and obtain:
At this point, the only demand on the eigenvalues is that . Therefore, when we would consider
within the context of a discrete linear dynamical system
,
is either a saddle-point, or we speak of a marginally stable system (
).
We can visualize the set of all marginally stable
, called
, defined by all
satisfying
(DK1999) J.J. Duistermaat and J.A.C. Kolk : ‘‘Lie Groups’’, 1999 Springer.
Previously we looked at ,
- and its resulting flow - as the result from mapping
to the sphere. However, we saw that for
this flow converges to
, with
such that for
we have
. Hence, it is interesting to look at the flow from an optimization point of view:
.
A fair question would be: is our flow not simply Riemannian gradient ascent for this problem? As is common with these kinds of problems, such a hypothesis is something you just feel.
Now, using the tools from (CU1994, p.311) we can compute the gradient of (on the sphere) via
, where
is a vector field normal to
, e.g.,
.
From there we obtain
To make our life a bit easier, use instead of the map
. Moreover, set
. Then it follows that
Of course, to make the computation cleaner, we changed to
, but the relation between
and
is beautiful. Somehow, mapping trajectories of
, for some
, to the sphere corresponds to (Riemannian) gradient ascent applied to the problem
.
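This correspondence can be checked numerically; the NumPy sketch below (a stand-in for the blog's Julia experiments, with a hypothetical symmetric A) runs a forward-Euler discretization of the sphere-projected field x' = Ax - (x^T A x)x and confirms it climbs to the top eigenvector.

```python
import numpy as np

# Hypothetical symmetric matrix defining the quadratic objective x -> x^T A x.
A = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.1],
              [0.0, 0.1, 0.5]])

x = np.array([0.2, -0.5, 0.8])
x = x / np.linalg.norm(x)

# Euler steps of the projected flow x' = A x - (x^T A x) x.
for _ in range(4000):
    x = x + 0.01 * (A @ x - (x @ A @ x) * x)
    x = x / np.linalg.norm(x)   # re-normalize to counter Euler drift

evals, evecs = np.linalg.eigh(A)
v_top = evecs[:, -1]            # eigenvector of the largest eigenvalue
assert abs(abs(x @ v_top) - 1.0) < 1e-6   # x aligns with the top eigenvector
assert abs(x @ A @ x - evals[-1]) < 1e-6  # objective reaches the top eigenvalue
```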
To do an example, let . We can compare
Now, we formalize the previous analysis a bit and show how fast we converge. Assume that the eigenvectors are ordered such that eigenvector
. Then, the solution to
is given by
Let be
expressed in eigenvector coordinates, with all
(normalized). Moreover, assume all eigenvalues are distinct. Then, to measure if
is near
, we compute
, which is
if and only if
is parallel to
. To simplify the analysis a bit, we look at
, for some perturbation
, this yields
Next, take the (natural) logarithm on both sides:
These log-sum-exp terms are hard to deal with, but we can apply the so-called ''log-sum-exp trick'':
In our case, we set and obtain
We clearly observe that for the LHS approaches
from below, which means that
from above, as intended. Of course, we also observe that the mentioned method is not completely general; we already assume distinct eigenvalues, but there is more. We also do not converge when
, which is however a set of measure
on the sphere
.
More interestingly, we see that the convergence rate is largely dictated by the ‘‘spectral gap/eigengap’’ . Specifically, to have a particular projection error
, such that
, we need
Comparing this to the resulting flow from ,
, we see that we have the same flow, but with
.
This is interesting, since
and
have the same eigenvectors, yet a different (scaled) spectrum. With respect to the convergence rate, we have to compare
and
for any
with
(the additional
is not so interesting).
It is obvious what will happen; the crux is whether is larger or smaller than
? Can we immediately extend this to a Newton-type algorithm? Well, this fails (globally) since we work in
instead of purely with
. To be concrete,
, we never have
degrees of freedom.
Of course, these observations turned out to be far from new, see for example (AMS2008, sec. 4.6).
(AMS2008) P.A. Absil, R. Mahony and R. Sepulchre: ‘‘Optimization Algorithms on Matrix Manifolds’’, 2008 Princeton University Press.
(CU1994) Constantin Udriste: ‘‘Convex Functions and Optimization Methods on Riemannian Manifolds’’, 1994 Kluwer Academic Publishers.
In this post we start a bit with control of systems living on compact metric spaces. This is interesting since we have to rethink what we want.
To start, consider a linear dynamical system ,
, for which the solution is given by
. We can map this solution to the sphere via
.
The corresponding vector field for
is given by
The beauty is the following: to find the equilibria, consider . Of course,
is not valid since
. However, take any eigenvector
of
, then
is still an eigenvector, plus it lives on
. If we plug such a scaled
(from now on consider all eigenvectors to live on
) into
we get
, which is rather cool. The eigenvectors of
are (at least) the equilibria of
. Note, if
, then so is
; each eigenvector gives rise to two equilibria.
We say at least, since if
is non-trivial, then for any
, we get
.
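A short NumPy check of this observation (the blog's own figures come from Julia; the matrix below is a hypothetical one with real eigenvalues): plugging any normalized eigenvector, or its antipode, into the projected field gives zero.

```python
import numpy as np

def f(x, A):
    # Vector field on the sphere obtained by projecting x' = A x.
    return A @ x - (x @ A @ x) * x

A = np.array([[1.0, 2.0], [0.0, -1.0]])   # hypothetical, real eigenvalues +1, -1
evals, evecs = np.linalg.eig(A)

for i in range(2):
    v = np.real(evecs[:, i])
    v = v / np.linalg.norm(v)             # place the eigenvector on the sphere
    # Each normalized eigenvector, and its antipode, is an equilibrium of f.
    assert np.allclose(f(v, A), 0.0, atol=1e-12)
    assert np.allclose(f(-v, A), 0.0, atol=1e-12)
```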
Let , then what can be said about the stability of the equilibrium points?
Here, we must look at
, where the
-dimensional system is locally a
-dimensional linear system. To do this, we start by computing the standard Jacobian of
, which is given by
As said before, we do not have a -dimensional system. To that end, we assume
with diagonalization
(otherwise perform a Gram-Schmidt procedure) and perform a change of basis
. Then if we are interested in the qualitative stability properties of
(the
eigenvector of
), we can simply look at the eigenvalues of the matrix resulting from removing the
row and column in
, denoted
.
We do two examples. In both figures, the initial conditions are indicated with a dot, while the equilibrium points are stars.
First, let . Clearly, for
Consider an unknown dynamical system parametrized by
.
In this short note we look at a relatively simple, yet neat method to assess stability of
via finite calls to a particular oracle. Here, we will follow the notes by Roman Vershynin (RV2011).
Assume we have no knowledge of , but we are able to obtain
for any desired
. It turns out that we can use this oracle to bound
from above.
First, recall that since is symmetric,
. Now, the common definition of
is
. The constraint set
is not per se convenient from a computational point of view. Instead, we can look at a compact constraint set:
, where
is the sphere in
.
The latter formulation has an advantage: we can discretize these compact constraint sets and tame functions over the individual chunks.
To do this formally, we can use -nets, which are convenient in quantitative analysis on compact metric spaces
. Let
be a
-net for
if
. Hence, we measure the number of balls needed to cover
.
For example, take as our metric space where we want to bound
(the cardinality, i.e., the number of balls). Then, to construct a minimal set of balls, consider an
-net of balls
, with
. Now the insight is simply that
.
Luckily, we understand the volume of
-dimensional Euclidean balls well enough to establish
This is indeed sharper than the bound from Lemma 5.2 in (RV2011).
Next, we recite Lemma 5.4 from (RV2011). Given an -net for
with
we have
This can be shown using the Cauchy–Schwarz inequality. Now, it would be interesting to see how sharp this bound is.
To that end, we do a -dimensional example (
). Note, here we can easily find an optimal grid using polar coordinates. Let the unknown system be parametrized by
such that . In the figures below we compute the upper-bound to
for several nets
.
Indeed, for finer nets, we see the upper-bound approaching
. However, we see that the convergence is slow while being inversely related to the exponential growth in
.
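The experiment can be reproduced in spirit with a short NumPy sketch; the symmetric matrix below is a hypothetical stand-in for the post's unspecified system, the net is the polar-coordinate grid on the circle, and the bound is the (1 - 2ε)^{-1}-inflated maximum of the quadratic-form oracle over the net.

```python
import numpy as np

# Hypothetical symmetric "unknown" system, queried only through x -> <Ax, x>.
A = np.array([[0.8, 0.1], [0.1, 0.6]])
true_norm = np.linalg.norm(A, 2)

def net_bound(A, n_points):
    # eps-net bound (after (RV2011), Lemma 5.4): for symmetric A and an eps-net
    # of the unit circle, ||A|| <= (1 - 2 eps)^{-1} max over the net of |<Ax, x>|.
    ts = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    eps = np.pi / n_points   # every circle point lies within arc eps of the net
    assert eps < 0.5
    quad = max(abs(np.array([np.cos(t), np.sin(t)]) @ A
                   @ np.array([np.cos(t), np.sin(t)])) for t in ts)
    return quad / (1.0 - 2.0 * eps)

for n in (16, 64, 1024):
    assert net_bound(A, n) >= true_norm        # always a valid upper bound
assert net_bound(A, 1024) <= true_norm * 1.01  # and it tightens with finer nets
```

Consistent with the figures, the bound is valid for every net and only slowly approaches the true norm as the number of oracle calls grows.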
Overall, we see that after about 100 calls to our oracle we can be sure about being exponentially stable.
Here we just highlighted an example from (RV2011), at a later moment we will discuss its use in probability theory.