Posts (11) containing the ‘math.DS’ (Dynamical Systems) tag:

On Critical Transitions in Nature and Society |29 January 2023|
tags: math.DS

The aim of this post is to highlight the 2009 book Critical Transitions in Nature and Society by Prof. Scheffer, which is perhaps more timely than ever. I strongly recommend this book to essentially anyone.

Far too often we (maybe not you) assume that systems are simply reversible, e.g. based on overly simplified models (think of a model corresponding to a flow). The crux of this book is that for many ecosystems such a reversal might be very hard or even practically impossible. Evidently, this observation has many serious ramifications with respect to understanding and tackling climate change.

Mathematically, this can be understood via bifurcation theory, in particular via folds and cusps, which introduce a form of hysteresis. The hysteresis is important here, as it means that simply restoring some parameters of an ecosystem does not imply that the state of the ecosystem will return to the state it previously occupied under those parameters.
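This hysteresis is easy to see in the textbook fold normal form. Below is a minimal sketch (not from the book; the model dx/dt = r + x - x^3 and all numerical parameters are my own ad hoc choices), slowly sweeping the parameter r up and then down with a crude forward-Euler integration: at r = 0 the system sits on a different branch depending on its history.

```python
import numpy as np

def simulate_sweep(r_values, x0, dt=0.01, steps_per_r=2000):
    """Track the quasi-static state of dx/dt = r + x - x**3 while the
    parameter r drifts slowly; the state settles onto a stable branch."""
    x = x0
    trace = []
    for r in r_values:
        for _ in range(steps_per_r):
            x += dt * (r + x - x**3)
        trace.append(x)
    return np.array(trace)

r_up = np.linspace(-1.0, 1.0, 41)
up = simulate_sweep(r_up, x0=-1.0)          # sweep up, start on the lower branch
down = simulate_sweep(r_up[::-1], x0=1.0)   # sweep down, start on the upper branch

# At r = 0 the state depends on history: lower branch (~ -1) on the way up,
# upper branch (~ +1) on the way down.
i = np.argmin(np.abs(r_up - 0.0))
print(up[i], down[::-1][i])
```

Restoring r to 0 after pushing it past a fold does not restore the original state; one has to push r well past the opposite fold point first, which is exactly the point about ecosystem restoration.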

The book makes a good point that catastrophe and chaos theory (old friends of bifurcation theory) themselves went through a critical transition, that is, popular media (and some authors) blew those theories out of proportion. To that end, see the following link for a crystal clear introduction to catastrophe theory by Zeeman (including some controversy), as well as this talk by Ralph Abraham on the (local) history of chaos theory. One might argue that some popular fields these days are trapped in a similar situation.

Scheffer stays away from drawing overly strong conclusions and throughout the book one can find many discussions on how, and if, to relate these models to reality. Overall, Scheffer promotes system-theoretic thinking. For example, at first sight we might be inclined to identify a big (stochastic) event, e.g. a meteor hitting the earth, as the cause of some disruptive event, while actually something else might have been slowly degrading, e.g. some basin of attraction shrunk, and this stochastic event was just the final push. This makes the book very timely: climate change is slowly changing ecosystems around us; we are just waiting for a detrimental push. To give another example, in Chapter 5 we find an interesting discussion on the relation between biodiversity and stability. Scheffer highlights two points: (i) robustness (there is plenty of back-up) and (ii) complementation (if many can perform a task, some of them will be very good at it). Overall, the discussions on ‘‘why we have so many animals’’ are very interesting.

One of the main concepts in the book is that of resilience, which is (re)defined several times (as part of the storyline of the book). In particular, on page 103 we find it defined as ‘‘the capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks.’’ Qualitatively, this is precisely the combination of having an attractor and that attractor being structurally stable. However, the goal here is to quantify this structural stability to some extent. Indeed, one enters bifurcation theory, or if you like, topological dynamical systems theory.

Throughout, Scheffer provides a great many examples of inspiration to anyone in dynamical systems and control theory, e.g. the basin of attraction, multistability and time-scale separations are recurring themes. My favourite simple example (of positive feedback) is the ice-albedo feedback: (too) simply put, less ice means less reflection of light and hence more heat absorption, resulting in even less ice (hence the positive feedback). More details can be found in Chapter 8. Another detailed series of examples is contained in Chapter 7 on shallow lakes. This chapter explains through a qualitative lens why restoring lakes is inherently difficult. Again, (too) simply put, as with ice, if plants disappear from lakes, turbidity is promoted, making it more difficult for plants to return (they need sunlight and thus prefer clear water). For this lake model one can find some equations in Appendix 12, which is essentially a Lotka-Volterra model. As such, some readers might be unsatisfied when it comes to the mathematical content*. Moreover, at times Scheffer himself describes bifurcations as exotic dynamical systems jargon, and throughout the book one might get the feeling that the dynamical systems community has no clue how to handle time-varying models. Luckily, the latter is not completely true. Perhaps the crux here is that we could use more cross-contamination between scientific communities. Especially recognizing these bifurcations (critical transitions), not just in models but in reality, is still a big and pressing open problem.

*A fantastic and easy-going short book on catastrophe theory was written, of course, by Arnold.

The aim of that book was to be a widely accessible mathematical reference. Its introduction reads: ‘‘This booklet explains what catastrophe theory is about and why it arouses such controversy. It also contains non-controversial results from the mathematical theories of singularities and bifurcation.’’

whereas Arnold ends with: ‘‘A qualitative peculiarity of Thom's papers on catastrophe theory is their original style: as one who presenses the direction of future investigations, Thom not only does not have proofs available, but does not even dispose of the precise formulations of his results. Zeeman, an ardent admirer of this style, observes that the meaning of Thom's words becomes clear only after inserting 99 lines of your own between every two of Thom's.’’

On ‘‘the art of the state’’ |29 December 2022|
tags: math.DS

In the last issue of the IEEE Control Systems Magazine, Prof. Sepulchre wrote an exhilarating short piece on the art of the state, or as he (also) puts it, ‘‘the definite victory of computation over modeling in science and engineering’’. As someone with virtually no track record, I am happy to also shed some light on the matter.

We owe dynamical systems theory as we know it today largely to Henri Poincaré (1854-1912), who started this field in part motivated by describing the solar system. Better yet, the goal was to eventually understand its stability. However, we can be a bit more precise regarding what was driving all this work. In the introduction of Les méthodes nouvelles de la mécanique céleste (EN: The new methods of celestial mechanics) - Vol I by Poincaré, published in 1892 (HP92), we find (loosely translated): ‘‘The ultimate goal of Celestial Mechanics is to resolve this great question of whether Newton's law alone explains all astronomical phenomena; the only way to achieve this is to make observations as precise as possible and then compare them with the results of the calculation.’’ and then after a few words on divergent series: ‘‘these developments should attract the attention of geometers, first for the reasons I have just explained and also for the following: the goal of Celestial Mechanics is not reached when one has calculated more or less approximate ephemerides without being able to realize the degree of approximation obtained. If we notice, in fact, a discrepancy between these ephemerides and the observations, we must be able to recognize if Newton's law is at fault or if everything can be explained by the imperfection of the theory.’’ It is perhaps interesting to note that his 1879 thesis, entitled Sur les propriétés des fonctions définies par les équations aux différences partielles (EN: On the properties of functions defined by partial difference equations), was said to lack examples (p.109 FV12).

In the three volumes and well over 1000 pages of the new celestial mechanics, Poincaré develops a host of tools towards the goal sketched above. Indeed, this line of work contributed to popularizing the infamous equation

 frac{mathrm{d}}{mathrm{d}t}x(t) = f(x(t)),quad (1)

or as used in control theory: x^+=f(x,u), for (cdot)^+ some ‘‘update’’ operator. However, I would like to argue that the main issue with Equation (1) is not that it is frequently used, but how it is used, moving from the qualitative to the quantitative.

Unfortunately we cannot ask him, but I believe Poincaré would wholeheartedly agree with Sepulchre. Inspired by the beautiful book by Prof. Verhulst (FV12) I will try to briefly illustrate this claim. As put by Verhulst (p.80 FV12): ‘‘In his writings, Poincaré was interested primarily in obtaining insight into physical reality and mathematical reality — two different things.’’ In fact, the relation between analysis and physics was also the subject of Poincaré’s plenary lecture at the first International Congress of Mathematicians, in Zurich (1897). (p.91 FV12).

In what follows we will have a closer look at La Valeur de la Science (EN: The value of science) by Poincaré, published in 1905 (HP05). This is one of his more philosophical works. We already saw how Poincaré was largely motivated by a better understanding of physics. We can further illustrate how he — being one of the founders of many abstract mathematical fields — thought of physics. To that end, consider the following two quotes:

‘‘when physicists ask of us the solution of a problem, it is not a duty-service they impose upon us, it is on the contrary we who owe them thanks.’’ (p.82 HP05)

‘‘Fourier's series is a precious instrument of which analysis makes continual use, it is by this means that it has been able to represent discontinuous functions; Fourier invented it to solve a problem of physics relative to the propagation of heat. If this problem had not come up naturally, we should never have dared to give discontinuity its rights; we should still long have regarded continuous functions as the only true functions.’’ (p.81 HP05)

What is more, this monograph contains a variety of discussions of the form ‘‘If we assume A, B and C, what does this imply?’’, both mathematically and physically. For instance, after a discussion on Carnot's theorem (principle) — or if you like, the ramifications of a global flow — Poincaré writes the following:

‘‘On this account, if a physical phenomenon is possible, the inverse phenomenon should be equally so, and one should be able to reascend the course of time. Now, it is not so in nature, …’’ (p.96 HP05)

However, Poincaré continues and argues that although notions like friction seem to lead to an irreversible process this is only so because we look at the process through the wrong lens.

‘‘For those who take this point of view, Carnot's principle is only an imperfect principle, a sort of concession to the infirmity of our senses; it is because our eyes are too gross that we do not distinguish the elements of the blend; it is because our hands are too gross that we can not force them to separate; the imaginary demon of Maxwell, who is able to sort the molecules one by one, could well constrain the world to return backward.’’ (p.97 HP05)

Indeed, the section is concluded by stating that perhaps we should look through the lens of statistical mechanics. All to say that if physical reality is to be described, this must be done on the basis of solid principles. Let me highlight another principle: conservation of energy. Poincaré writes the following:

‘‘I do not say: Science is useful, because it teaches us to construct machines. I say: Machines are useful, because in working for us, they will some day leave us more time to make science.’’ (p.88 HP05)

‘‘Well, in regard to the universe, the principle of the conservation of energy is able to render us the same service. The universe is also a machine, much more complicated than all those of industry, of which almost all the parts are profoundly hidden from us; but in observing the motion of those that we can see, we are able, by the aid of this principle, to draw conclusions which remain true whatever may be the details of the invisible mechanism which animates them.’’ (p.94 HP05)

Then, a few pages later, his brief story on radium starts as follows:

‘‘At least, the principle of the conservation of energy yet remained to us, and this seemed more solid. Shall I recall to you how it was in its turn thrown into discredit?’’ (p.104 HP05)

Poincaré's viewpoint is to some extent nicely captured by the following passage in the book by Verhulst:

‘‘Consider a mature part of physics, classical mechanics (Poincaré 02). It is of interest that there is a difference in style between British and continental physicists. The former consider mechanics an experimental science, while the physicists in continental Europe formulate classical mechanics as a deductive science based on a priori hypotheses. Poincaré agreed with the English viewpoint, and observed that in the continental books on mechanics, it is never made clear which notions are provided by experience, which by mathematical reasoning, which by conventions and hypotheses.’’ (p.89 FV12)

In fact, for both mathematics and physics Poincaré spoke of conventions, to be understood as a convenience in physics. Overall, we see that Poincaré's attitude towards applying mathematical tools to practical problems is significantly more critical than what we see nowadays in some communities. For instance:

‘‘Should we simply deduce all the consequences, and regard them as intangible realities? Far from it; what they should teach us above all is what can and what should be changed.’’ (p.79 HP05)

A clear example of this viewpoint appears in his 1909 lecture series at Göttingen on relativity:

‘‘Let us return to the old mechanics that admitted the principle of relativity. Instead of being founded on experiment, its laws were derived from this fundamental principle. These considerations were satisfactory for purely mechanical phenomena, but the old mechanics didn’t work for important parts of physics, for instance optics.’’ (p.209 FV12)

Now it is tempting to believe that Poincaré would have liked to get rid of ‘‘the old’’, but this is far from the truth.

‘‘We should not have to regret having believed in the principles, and even, since velocities too great for the old formulas would always be only exceptional, the surest way in practise would be still to act as if we continued to believe in them. They are so useful, it would be necessary to keep a place for them. To determine to exclude them altogether would be to deprive oneself of a precious weapon.’’ (p.111 HP05)

The previous was about the ‘‘old’’ and ‘‘new’’ mechanics and how differences in mathematical and physical realities might emerge. Most of all, the purpose of the above is to show that one of the main figures in abstract dynamical systems theory — someone who partially founded the field motivated by a computational question — greatly cared about correct modelling.

This is one part of the story. The second part entails the recollection that Poincaré's work was largely qualitative. Precisely this qualitative viewpoint allows for imperfections in the modelling process. However, this also means conclusions can only be qualitative. To provide two examples: based on the Hartman-Grobman theorem and the genericity of hyperbolic equilibrium points, it is perfectly reasonable to study linear dynamical systems, however, only to understand local qualitative behaviour of the underlying nonlinear dynamical system. The aim to draw quantitative conclusions is often futile and simply not the purpose of the tools employed. Perhaps more interesting and closer to Poincaré, the Lorenz system, a simplified mathematical model for atmospheric convection, is well known for being chaotic. Weather forecasters do use this model, not to predict the weather, but to elucidate why their job is hard!
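The qualitative-versus-quantitative point about the Lorenz system can be made concrete in a few lines. Below is a minimal sketch (my own; forward Euler with the classical parameters, step size and horizon are ad hoc choices): two trajectories starting 10^{-8} apart end up macroscopically far apart, while both remain on the same bounded attractor — the qualitative picture survives, the quantitative forecast does not.

```python
import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz equations.
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])    # perturb one coordinate by 10^-8
for _ in range(20000):                # integrate up to t = 20
    a, b = lorenz_step(a), lorenz_step(b)

# The separation has grown by many orders of magnitude, yet both states
# remain bounded (they stay on the attractor).
print(np.linalg.norm(a - b), np.linalg.norm(a))
```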

Summarizing, I believe the editorial note by Prof. Sepulchre is very timely, as some have forgotten that the initial dynamical systems work by Poincaré and others was meant to gain a better understanding, to address questions, to learn from, not to end with. Equation (1) is not magical, but a tool to build the bridge, a tool to see if and where the bridge can be built.

A corollary |6 June 2022|
tags: math.DS

The lack of mathematically oriented people in government is an often observed phenomenon |1|. In this short note we clarify that this is a corollary to ‘‘An application of Poincaré's recurrence theorem to academic administration’’, by Kenneth R. Meyer. In particular, since the governmental system is qualitatively the same as that of academic administration, it is also conservative. However - due to well-known results also pioneered by Poincaré - this implies the following.

Corollary: The government is non-exact.

Solving Linear Programs via Isospectral flows |05 September 2021|
tags: math.OC, math.DS, math.DG

In this post we will look at one of the many remarkable findings by Roger W. Brockett. Consider a Linear Program (LP)

 mathrm{maximize}_{xin X}quad langle c,x ranglequad (1)

parametrized by the compact set X={xin mathbf{R}^n:Axleq b} and a suitable triple (A,b,c). As a solution to (1) can always be found at a vertex of X, a smooth method to solve (1) seems somewhat awkward. We will see that one can construct a so-called isospectral flow that does the job. Here we will follow Dynamical systems that sort lists, diagonalize matrices and solve linear programming problems, by Roger W. Brockett (CDC 1988) and the book Optimization and Dynamical Systems by Uwe Helmke and John B. Moore (Springer 2nd ed. 1996). Let X have k vertices; then one can always find a map T:mathbf{R}^kto mathbf{R}^n mapping the simplex S={xin mathbf{R}_{geq 0}^k:sum_{j}x_j=1} onto X. Indeed, with some abuse of notation, let T be the matrix T=(v_1,dots,v_k), with {v_j}_{j=1}^k the vertices of X.

Before we continue, we need to establish some differential geometric results. Given the Special Orthogonal group mathsf{SO}(n)={Qin mathbf{R}^{ntimes n}:Q^{mathsf{T}}Q=I_n}cap mathsf{GL}^+(n,mathbf{R}), the tangent space is given by T_Q mathsf{SO}(n)={QOmega : Omegain mathrm{skew}(n,mathbf{R})}. Note, this is the explicit formulation, which is indeed equivalent to shifting the corresponding Lie Algebra. The easiest way to compute this is to look at the kernel of the map defining the underlying manifold.

Now, following Brockett, consider the function f:mathsf{SO}(n)to mathbf{R} defined by f:Thetamapsto mathrm{Tr}(QTheta NTheta^{mathsf{T}}) for some Q,Nin mathsf{Sym}(n). This approach is not needed for the full construction, but it allows for more intuition and more computations. To construct the corresponding gradient flow, recall that the (Riemannian) gradient at Thetain mathsf{SO}(n) is defined via df(Theta)[V]=langle mathrm{grad},f(Theta), Vrangle_{Theta} for all Vin T_{Theta} mathsf{SO}(n). Using the explicit tangent space representation, we know that V=Theta Omega with Omega = -Omega^{mathsf{T}}. Then, see that by using

 f(Theta+tV) = mathrm{Tr}(QTheta(I_n+tOmega)N(I_n-tOmega)Theta^{mathsf{T}})

we obtain the gradient via

 df(Theta)[V]=lim_{tdownarrow 0}frac{f(Theta+tV)-f(Theta)}{t} = langle QTheta N, Theta Omega rangle - langle Theta N Theta^{mathsf{T}} Q Theta, Theta Omega rangle.

This means that the (a) gradient flow becomes (note that the minus sign is missing in the paper)

 dot{Theta} = mathrm{grad},f(Theta) = QTheta N-Theta NTheta^{mathsf{T}}QTheta, quad Theta(0)in mathsf{SO}(n).
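This gradient expression is easy to sanity-check numerically. The following sketch (my own; random symmetric Q, N and a random tangent direction V = Theta Omega are ad hoc choices) compares a finite-difference approximation of the directional derivative of f with the inner product against the gradient above:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_sym(n):
    # Random symmetric matrix.
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2

n = 4
Q, N = random_sym(n), random_sym(n)
Theta, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a point on SO(n)/O(n)

f = lambda T: np.trace(Q @ T @ N @ T.T)                # the objective f(Theta)
grad = Q @ Theta @ N - Theta @ N @ Theta.T @ Q @ Theta # the gradient above

# Tangent vector V = Theta @ Omega with Omega skew-symmetric:
Omega = rng.standard_normal((n, n))
Omega = (Omega - Omega.T) / 2
V = Theta @ Omega

t = 1e-6
fd = (f(Theta + t * V) - f(Theta)) / t    # finite-difference df(Theta)[V]
ip = np.trace(grad.T @ V)                 # <grad f(Theta), V> (Frobenius)
print(abs(fd - ip))                       # small
```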

Consider the standard commutator bracket [A,B]=AB-BA and see that for H(t)=Theta(t)^{mathsf{T}}QTheta(t) one obtains from the equation above (typo in the paper)

 dot{H}(t) = [H(t),[H(t),N]],quad H(0)=Theta^{mathsf{T}}QThetaquad (2).

Hence, (2) can be seen as a reparametrization of a gradient flow. It turns out that (2) has a variety of remarkable properties. First of all, see that H(t) preserves the eigenvalues of H(0), and hence of Q. Also, observe the relation between extremizing f and the function g defined via g:H=Theta^{mathsf{T}}QTheta mapsto -frac{1}{2}|N-H|_F^2. The idea for handling LPs is now that the limiting H(t) will relate to putting weight on the correct vertex to get the optimizer; N provides this weight as it will contain the corresponding largest costs.
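The flow (2) is also easy to simulate directly. A minimal sketch (forward Euler; the choices of Q, N, step size and horizon are my own, not from the paper): the eigenvalues of H(0) are (approximately) preserved along the flow, and the diagonal of the limit is sorted in the same order as the diagonal of N.

```python
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

rng = np.random.default_rng(0)
N = np.diag([1.0, 2.0, 3.0])               # distinct, ascending diagonal
Q = np.diag([5.0, 0.0, -5.0])
Theta, _ = np.linalg.qr(rng.standard_normal((3, 3)))
H = Theta.T @ Q @ Theta                     # H(0), same spectrum as Q

dt, steps = 2e-4, 50_000                    # integrate (2) up to t = 10
for _ in range(steps):
    H = H + dt * bracket(H, bracket(H, N))  # forward-Euler step of (2)

# The spectrum {-5, 0, 5} is preserved (up to discretization error) and H
# has become diagonal, with its diagonal ordered like that of N (ascending).
print(np.round(H, 4))
```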

In fact, the matrix H can be seen as an element of the set mathsf{M}(Q)={Theta^{mathsf{T}}QTheta:Thetain mathsf{O}(n)}. This set is in fact a C^{infty}-smooth compact manifold as it is the orbit of Q under the group action sigma:mathsf{O}(n)times mathbf{R}^{ntimes n}to mathbf{R}^{ntimes n}, sigma:(Theta,Q)mapsto Theta^{mathsf{T}}QTheta; one can check that this map satisfies the group action properties. Hence, to extremize g over mathsf{M}(Q), it appears appealing to look at Riemannian optimization tools indeed. When doing so, it is convenient to understand the tangent space of mathsf{M}(Q). Consider the map defining the manifold h:mathsf{O}(n)to mathsf{M}(Q), h:Theta mapsto Theta^{mathsf{T}}QTheta. Then by the construction of T_Theta mathsf{O}(n), see that dh(Theta)[V]=0 yields the relation [H,Omega]=0 for any Omegain mathrm{skew}(n,mathbf{R}).

For the moment, let Q=mathrm{diag}(lambda_{1}I_{n_1},dots,lambda_{r}I_{n_r})in mathsf{Sym}(n) such that lambda_{1}>cdots>lambda_{r} and sum_{i}n_i=n. First we consider the convergence of (2). Let N have only distinct eigenvalues; then H_{infty}:=lim_{tto infty}H(t) exists and is diagonal. Using the objective f from before, consider f(H)=mathrm{Tr}(HN) and see that by using the skew-symmetry one recovers the following

 frac{mathrm{d}}{mathrm{d}t}mathrm{Tr}(H(t)N) = mathrm{Tr}(N[H,[H,N]]) = -mathrm{Tr}((HN-NH)^2) = |HN-NH|_F^2.

This means the cost monotonically increases and, by compactness, H(t) converges to some point H_{infty}. By construction, this point must satisfy [H_{infty},N]=0. As N has distinct eigenvalues, this can only be true if H_{infty} itself is diagonal.

More can be said about H_{infty}. Let (lambda_1,dots,lambda_n) be the eigenvalues of H(0), that is, they correspond to the eigenvalues of Q as defined above. Then as H(t) preserves the eigenvalues of H(0), we must have H_{infty}=pi Q pi^{mathsf{T}}, for pi a permutation matrix. This also tells us there is just a finite number of equilibrium points (a finite number of permutations). We will sometimes write this as H_{infty}=mathrm{diag}(lambda_{pi(1)},dots,lambda_{pi(n)}).

Now as Q is one of those points, when does H(t) converge to Q? To start this investigation, we look at the linearization of (2), which at an equilibrium point H_{infty} becomes

 dot{xi}_{ij} = -(lambda_{pi(i)}-lambda_{pi(j)})(mu_i-mu_j)xi_{ij}

for xiin T_{H_{infty}}mathsf{M}(Q). As we work with matrix-valued vector fields, this might seem like a daunting computation. However, at equilibrium points one does not need a connection and can again use the directional derivative approach, in combination with the construction of T_{H_{infty}}mathsf{M}(Q), to figure out the linearization. The beauty is that from there one can see that Q is the only asymptotically stable equilibrium point of (2). Differently put, almost all initial conditions H(0)in mathsf{M}(Q) will converge to Q, with the rate captured by spectral gaps in Q and N. If Q does not have distinct eigenvalues and we do not impose any eigenvalue ordering on N, one sees that an asymptotically stable equilibrium point H_{infty} must have the same eigenvalue ordering as N. This is the sorting property of the isospectral flow, which is of use for the next and final statement.

Theorem: Consider the LP (1) with langle c, v_i-v_jrangle neq 0 for all i,jin [k], then, there exist diagonal matrices Q and N such that (2) converges for almost any H(0)in mathsf{M}(Q) to H_{infty}=mathrm{diag}(d) with the optimizer of (1) being x^{star}=Td.

Proof: Global convergence is prohibited by the topology of mathsf{M}(Q). Let N=mathrm{diag}(T^{mathsf{T}}c) and let Q=mathrm{diag}(1,0,dots,0)in mathsf{Sym}(k). Then, the isospectral flow will converge from almost everywhere to H_{infty}=mathrm{diag}(0,dots,0,1,0,dots,0) (only (H_{infty})_{ii}=1, with i the index of the largest entry of N), such that x^{star}=v_i.
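To illustrate the recipe from the proof, here is a minimal sketch on a toy LP (maximizing langle c,x rangle over the unit square; the vertices, the cost c and the integration parameters are my own ad hoc choices): with N=mathrm{diag}(T^{mathsf{T}}c) and Q=mathrm{diag}(1,0,dots,0), the flow puts all weight on the optimal vertex.

```python
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

# Vertices of the unit square as columns of T, cost c = (1, 2);
# the optimal vertex is (1, 1) with cost 3.
T = np.array([[0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
c = np.array([1.0, 2.0])

N = np.diag(T.T @ c)                 # vertex costs (0, 1, 3, 2) on the diagonal
Q = np.diag([1.0, 0.0, 0.0, 0.0])

rng = np.random.default_rng(1)
Theta, _ = np.linalg.qr(rng.standard_normal((4, 4)))
H = Theta.T @ Q @ Theta              # generic H(0) in M(Q)

dt, steps = 1e-3, 30_000             # forward-Euler integration of (2)
for _ in range(steps):
    H = H + dt * bracket(H, bracket(H, N))

d = np.diag(H)                       # limit is diag(d) with a single entry ~1
x_star = T @ d                       # recover the optimizer of the LP
print(np.round(x_star, 3))
```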

Please consider the references for more on the fascinating structure of (2).

Topological Considerations in Stability Analysis |17 November 2020|
tags: math.DS

In this post we discuss very classical, yet highly underappreciated, topological results.

We will look at dynamical control systems of the form

 dot{x}=f(x,u),quad uin mathcal{U},quad xin mathsf{M}subseteq mathbf{R}^n,

where mathsf{M} is some finite-dimensional embedded manifold.

As it turns out, topological properties of mathsf{M} encode a great deal of fundamental control limitations, most notably, whether a continuous globally asymptotically stabilizing control law can exist or not (usually, one can only show the negative). In this post we work towards two things. First, we show that a technically interesting nonlinear setting is that of a dynamical system evolving on a compact manifold; there, the desirable continuous feedback law can never exist, yet the boundedness implied by compactness does allow for some computations. Secondly, if one is determined to work on systems for which a continuous globally asymptotically stabilizing feedback law exists, then a wide variety of sufficient conditions for existence can be found, yet the setting is effectively Euclidean, which is the most basic nonlinear setting.

To start the discussion, we need one important topological notion.

Definition [Contractibility]. Let mathsf{X} be a topological manifold. If the identity map mathsf{id}_{mathsf{X}}:xmapsto x is null-homotopic, then mathsf{X} is contractible.

It is wrong to say that a space is contractible precisely when it has no holes (genus 0); take for example the sphere. The converse is true: any finite-dimensional space with a hole cannot be contractible. See Figure 1 below.


Figure 1: The manifold mathsf{M} is contractible, but mathbf{S}^2 and mathbf{T}^1 are not.

Topological Obstructions to Global Asymptotic Stability

In this section we highlight a few (not all) of the most elegant topological results in stability theory. We focus on results without appealing to the given or partially known vector field f. This line of work finds its roots in work by Bhatia, Szegö, Wilson, Sontag and many others.

Theorem [The domain of attraction is contractible, Theorem 21] Let the flow varphi(t,x) be continuous and let x^{star} be an asymptotically stable equilibrium point. Then, the domain of attraction {xin mathsf{M};:;lim_{tto infty}varphi (t,x)=x^{star}} is contractible.

This theorem by Sontag states that the underlying topology of a dynamical system, better yet, of the space on which the dynamical system evolves, dictates what is possible. Recall that linear manifolds are simply hyperplanes, which are contractible, and hence, as we all know, global asymptotic stability is possible for linear systems. However, if mathsf{M} is not contractible, then no continuous dynamical system on mathsf{M} admits a globally asymptotically stable equilibrium point; take for example any sphere.

The next example shows that one should not underestimate ‘‘linear systems’’ either.

Example [Dynamical Systems on mathsf{GL}(n,mathbf{R})] Recall that mathsf{GL}(n,mathbf{R}):={Xin mathbf{R}^{ntimes n}:mathrm{det}(X)neq 0} is a smooth n^2-dimensional manifold (Lie group). Then, consider for some Ain mathbf{R}^{ntimes n}, n>1 the (right-invariant) system

     dot{X}=AX,quad X_0in mathsf{GL}(n,mathbf{R}) quad (1)

Indeed, the solution to (1) is given by X(t)=mathrm{exp}(tcdot A)X_0 in mathsf{GL}(n,mathbf{R})=:mathsf{M}. Since this group consists of two disjoint components, there does not exist a matrix A, or continuous nonlinear map for that matter, which can make a vector field akin to (1) globally asymptotically stable. This should be contrasted with the closely related ODE dot{x}=Ax, xin mathbf{R}^n=:mathsf{M}. Even for the path-connected component mathsf{GL}^+(n,mathbf{R}):={Xin mathbf{R}^{ntimes n}:mathrm{det}(X)>0}, the theorem by Sontag obstructs the existence of a continuous globally asymptotically stable vector field. This is because the group is not simply connected for n>1 (which follows most easily from establishing the homotopy equivalence between mathsf{GL}^+(n,mathbf{R}) and mathsf{SO}(n) via (matrix) polar decomposition); hence the fundamental group is non-trivial and contractibility is out of the picture. See that if one picks A to be stable (Hurwitz), then for X_0in mathsf{GL}^+(n,mathbf{R}) we have X(t)in mathsf{GL}^+(n,mathbf{R}), yet lim_{tto infty}X(t)notin mathsf{GL}^+(n,mathbf{R}).
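A quick numerical illustration of that last point (my own sketch; the Jordan-block choice of A and the crude Euler discretization are ad hoc): with a Hurwitz A the determinant stays positive for all finite t, so X(t) never leaves mathsf{GL}^+(2,mathbf{R}), and yet X(t) decays towards the zero matrix, which lies outside the group.

```python
import numpy as np

A = np.array([[-1.0, 2.0],
              [0.0, -1.0]])          # Hurwitz: both eigenvalues equal -1
X = np.eye(2)                        # X_0 in GL+(2,R), det(X_0) = 1

dt = 1e-3
dets = []
for _ in range(10_000):              # forward-Euler integration up to t = 10
    X = X + dt * (A @ X)
    dets.append(np.linalg.det(X))

# det(X(t)) = exp(t * tr(A)) * det(X_0) > 0 along the exact flow, and the
# discretization keeps it positive too; nevertheless X(t) -> 0, a limit
# that is not an element of GL+(2,R).
print(min(dets) > 0.0, np.linalg.norm(X))
```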

Following up on Sontag, Bhat and Bernstein revolutionized the field by figuring out some very important ramifications (which are easy to apply). The key observation is the next lemma (that follows from intersection theory arguments, even for non-orientable manifolds).

Lemma [Proposition 1] Compact, boundaryless, manifolds are never contractible.

Clearly, we have to ignore 0-dimensional manifolds here. Important examples for this lemma are the sphere mathbf{S}^{n-1} and the rotation group mathsf{SO}(3), as they make a frequent appearance in mechanical control systems, like a robotic arm. Note that the boundaryless assumption is especially critical here.

Now the main generalization by Bhat and Bernstein with respect to the work of Sontag is to use a (vector) bundle construction of mathsf{M}. Loosely speaking, given a base mathsf{Q} and total manifold mathsf{M}, mathsf{M} is a vector bundle when for the surjective map pi:mathsf{M}to mathsf{Q} and each qin mathsf{Q} we have that the fiber pi^{-1}(q) is a finite-dimensional vector space (see the great book by Abraham, Marsden and Ratiu Section 3.4 and/or the figure below).


Figure 2: A prototypical vector bundle, in this case the cylinder mathsf{M}.

Intuitively, vector bundles can be thought of as manifolds with vector spaces attached at each point; think of a cylinder living in mathbf{R}^3, where the base manifold is mathbf{S}^1subset mathbf{R}^2 and each fiber is a line through mathbf{S}^1 extending in the third direction, exactly as the figure shows. Note, however, that this triviality only needs to hold locally.

A concrete example is a rigid body with mathsf{M}=mathsf{SO}(3)times mathbf{R}^3, for which a large number of papers claim to have designed continuous globally asymptotically stable controllers. The fact that this was so widely overlooked makes this topological result not only elegant from a mathematical point of view, but also shows its practical value. The motivation of this post is precisely to convey this message: topological insights can be very fruitful.

Now we can state their main result:

Theorem [Theorem 1] Let mathsf{K} be a compact, boundaryless manifold with mathrm{dim}(mathsf{K})geq 1, being the base of the vector bundle pi:mathsf{M}to mathsf{K}; then there is no continuous vector field on mathsf{M} with a globally asymptotically stable equilibrium point.

Indeed, compactness of mathsf{K} can also be relaxed to mathsf{K} not being contractible, as we saw in the example above; however, compactness is in most settings more tangible to work with.

Example [The Grassmannian manifold] The Grassmannian manifold, denoted by mathsf{GR}_{k,n}, is the set of all k-dimensional subspaces mathcal{S}subseteq mathbf{R}^n. One can relate mathsf{GR}_{k,n} to the Stiefel manifold mathsf{GF}_{k,n}, the manifold of all orthogonal k-frames, in the bundle sense of before, that is, pi:mathsf{GF}_{k,n}to mathsf{GR}_{k,n} such that the fiber pi^{-1}(mathcal{S}) represents all k-frames generating the subspace mathcal{S}. In its turn, mathsf{GF}_{k,n} can be identified with the compact Lie group mathsf{O}(n) (via a quotient), such that indeed mathsf{GR}_{k,n} is compact. This manifold shows up in several optimization problems, and from our continuous-time point of view we see that one can never find, for example, a globally converging gradient-flow-like algorithm.

Example [Tangent Bundles] By means of a Lagrangian viewpoint, many mechanical systems are of a second-order nature, meaning that they are defined on the tangent bundle of some configuration manifold mathsf{Q}, that is, pi:Tmathsf{Q}to mathsf{Q}. If mathsf{Q} is compact, we can again appeal to the Theorem by Bhat and Bernstein. For example, Figure 2 can relate to the manifold over which the pair (theta,dot{theta}) is defined.

Global Isomorphisms

One might wonder: given a vector field X over a manifold mathsf{M}, since we need contractibility of mathsf{M} for X to possibly have a globally asymptotically stable equilibrium point, how interesting are these manifolds? As it turns out, for ngeq 5, which some might call the high-dimensional topological regime, contractible manifolds have a particular structure.

A topological space mathsf{X} being simply connected is equivalent to its fundamental group being trivial. However, to highlight the work by Stallings, we need a slightly less well-known notion.

Definition [Simply connected at infinity] The topological space mathsf{X} is simply connected at infinity, when for any compact subset mathcal{K}subseteq mathsf{X} there exists a mathcal{K}' where mathcal{K}subseteq mathcal{K}'subseteq mathsf{X} such that mathsf{X}-mathcal{K}' is simply connected.

This definition is rather complicated to parse, but we can give a more tangible description. Let mathsf{X} be a non-compact topological space; then, mathsf{X} is said to have one end when for any compact mathcal{K} there is a mathcal{K}' with mathcal{K}subseteq mathcal{K}'subseteq mathsf{X} such that mathsf{X}-mathcal{K}' is connected. So mathbf{R}, as expected, fails to have one end, while mathbf{R}^2 does have one end. Now, Proposition 2.1 shows that if mathsf{X} and mathsf{Y} are simply connected and both have one end, then mathsf{X}times mathsf{Y} is simply connected at infinity. This statement somewhat clarifies why dimensions 4 and above are easier to handle in this topological context. A similar statement can be made about the cylindrical space mathsf{X}times mathbf{R}.

Lemma [Product representation Proposition 2.4] Let mathsf{X} and mathsf{Y} be manifolds of dimension at least 1. If mathsf{M}:=mathsf{X}times mathsf{Y} is contractible and mathrm{dim}(mathsf{M})geq 3, then mathsf{M} is simply connected at infinity.

Now, most results are usually stated in the context of piecewise linear (PL) topology. By appealing to celebrated results on triangulation (by Whitehead) we can state the following.

Theorem [Global diffeomorphism Theorem 5.1] Let mathsf{M} be a contractible C^r, rgeq 1, n-dimensional manifold which is simply connected at infinity. If ngeq 5, then, mathsf{M} is diffeomorphic to mathbf{R}^n.


Figure 3: The manifold mathsf{M} being diffeomorphic to mathbf{R}^2. The benefit of this theorem is that in this setting dynamical systems can be studied without appealing to local charts; describing the curve gamma_2 is usually much easier than dealing with gamma_1 directly.

You might say that this is all obvious: as in Figure 3, we can always do it, also in lower dimensions. However, there is a famous counterexample by Whitehead, which provided a lot of motivation for the community:

‘‘There is an open, 3-dimensional manifold which is contractible, but not homeomorphic to mathbf{R}^3.’’

In fact, this example was part of the now notorious program to solve the Poincaré conjecture. All in all, this shows that contractible manifolds are not that interesting from a nonlinear dynamical systems point of view; compact manifolds are a class providing for more of a challenge.

Remark on the Stabilization of Sets

So far, the discussion has been on the stabilization of (equilibrium) points; now what if we want to stabilize a curve or any other desirable set, that is, some non-trivial set mathcal{X}? It seems like the shape of these objects, with respect to the underlying topology of mathsf{M}, must again indicate if this is possible. Let U_x denote an open neighbourhood of some xin mathcal{X}. In what follows, the emphasis is on U_{mathcal{X}}:=cup_{xin mathcal{X}}U_x and mathcal{X} having the same topology. We will be brief and follow Moulay and Bhat.

Lemma [Theorem 5] Consider a closed-loop dynamical system giving rise to a continuous flow. Suppose that the compact set mathcal{K}subseteq mathsf{M} is asymptotically stable with domain of attraction mathcal{A}subseteq mathsf{M}. Then, mathcal{K} is a weak deformation retract of mathcal{A}.

In rather loose terms, mathcal{K} being a weak deformation retract of mathcal{A} means that one can continuously ‘‘retract’’ points from mathcal{A} and morph them into the set mathcal{K}. For example, this works for mathbf{S}^{n-1} and mathbf{R}^nsetminus{0}, precisely because mathbf{R}^n is punctured. See p.200 for more information on retracts. In Figure 4 below we show that this lemma is mostly concerned with the local topology; the global topology is less important. For mathsf{M}, the set mathcal{X} is not a deformation retract of the set U_{mathcal{X}}, such that U_{mathcal{X}} can never be a domain of attraction; this should be contrasted with the setting on the torus mathbf{T}^1.


Figure 4: The topological relation between mathcal{X} and U_{mathcal{X}} dictates the possibility of U_{mathcal{X}} being a domain of attraction for mathcal{X}.

The beauty of all of this is that even if you are not so sure about the exact dynamical system at hand, the underlying topology can already provide some answers, in particular, to determine what can, or cannot, be done.

This post came about during a highly recommended course at EPFL.

Optimal coordinates |24 May 2020|
tags: math.LA, math.OC, math.DS

There was a bit of radio silence, but as indicated here, some interesting stuff is on its way.

In this post we highlight this 1976 paper by Mullis and Roberts on ’Synthesis of Minimum Roundoff Noise Fixed Point Digital Filters’.

Let us be given some single-input single-output (SISO) dynamical system sigma in Sigma

 sigma :left{ begin{array}{ll} x(t+1) &= Ax(t) + bu(t)  y(t) &= cx(t) + du(t)  end{array}right. .

It is known that the input-output behaviour of any sigma, that is, the map from u(t) to y(t), is invariant under a similarity transform. To be explicit, the tuples (A,b,c,d) and (TAT^{-1},Tb,cT^{-1},d), which correspond to the change of coordinates z(t)=Tx(t) for some Tin mathsf{GL}_n, give rise to the same input-output behaviour. Hence, one can define the equivalence relation sigma sim sigma' by imposing that the input-output maps of sigma and sigma' are equivalent. By doing so we can consider the quotient Sigma / mathsf{GL}_n. However, in practice, given a sigma, is any sigma' such that sigma'sim sigma equally useful? For example, the following A and A' are similar, but clearly, A' is preferred from a numerical point of view:

 A = left[begin{array}{ll} 0.5 & 10^9 0 & 0.5  end{array}right],quad  A' = left[begin{array}{ll} 0.5 & 1 0 & 0.5  end{array}right].
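This invariance is easy to check numerically. Below is a minimal sketch in Python (NumPy); the system and the transformation T are hypothetical, chosen only for illustration. The Markov parameters d, cb, cAb, ..., which determine the input-output map, are unchanged under T:

```python
import numpy as np

# hypothetical SISO system (A, b, c, d) and a change of coordinates T
A = np.array([[0.5, 0.2], [0.1, 0.3]])
b = np.array([[1.0], [0.0]])
c = np.array([[1.0, 2.0]])
d = 0.0
T = np.array([[2.0, 1.0], [0.0, 1.0]])
Ti = np.linalg.inv(T)
At, bt, ct = T @ A @ Ti, T @ b, c @ Ti  # (TAT^{-1}, Tb, cT^{-1}, d)

def markov(A, b, c, d, N):
    """Markov parameters h(0)=d, h(k)=c A^{k-1} b for k=1..N."""
    h, Akb = [d], b
    for _ in range(N):
        h.append(float(c @ Akb))
        Akb = A @ Akb
    return np.array(h)

# identical input-output behaviour, despite different state-space data
assert np.allclose(markov(A, b, c, d, 20), markov(At, bt, ct, d, 20))
```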

In what follows, we highlight the approach from Mullis and Roberts and conclude by how to optimally select Tin mathsf{GL}_n. Norms will be interpreted in the ell_2 sense, that is, for {x(t)}_{tin mathbf{N}}, |x|^2=sum^{infty}_{t=0}|x(t)|_2^2. Also, in what follows we assume that x(0)=0, that A is stable, meaning rho(A)<1, and that sigma corresponds to a minimal realization.

The first step is to quantify the error. Assume we have allocated m_i bits at our disposal to represent the i^{mathrm{th}} component of x(t). These values for m_i can differ per component, but we constrain the average by sum^n_{i=1}m_i = nm for some m. Let mu ll 1 be a 'step-size’, such that our dynamic range of x_i(t) is bounded by pm mu 2^{m_i-1} (these are our possible representations). Next we use the modelling choices from Mullis and Roberts; of course, they are debatable, but still, we will infer some nice insights.

First, to bound the dynamic range, consider solely the effect of an input, that is, define f_i(k) via x(t)=sum^{infty}_{k=1}A^{k-1}b u(t-k), such that x_i(t)=sum^{infty}_{k=1}f_i(k)u(t-k). Then we will impose the bound pm delta |f_i| on x_i(t). In combination with the step size (quantization), this results in delta^2 |f_i|^2 = (mu 2^{m_i-1})^2. Let u be a sequence of i.i.d. random variables sampled from mathcal{N}(0,1). Then we see that mathrm{var}big(x_i(t)big)= |f_i|^2. Hence, one can think of delta as a scaling parameter related to the probability with which this dynamic range bound holds.

Next, assume that all the round-off errors are independent and have a variance of mu^2. Hence, the worst-case variance of computing x_i(t+1) is mu^2(n+1). To evaluate the effect of this error on the output, assume for the moment that u(t) is identically 0. Then, y(t) = cA^t x(0). Similar to f_i(k), define g_i(t) as the i^{mathrm{th}} component of cA^t. As before we can compute the variance, this time of the full output signal, which yields sigma_y^2:=mathrm{var}(y) = mu^2(n+1)sum^n_{i=1}|g_i|^2. Note, these expressions hinge on the independence assumption.

Next, define the (infamous) matrices W^{(c)}, W^{(o)} by

 begin{array}{lll} W^{(c)} &= AW^{(c)}A^{top} + bb^{top} &= sum^{infty}_{k=0} A^k bb^{top} (A^k)^{top},  W^{(o)} &= A^{top}W^{(o)} A + c^{top}c &=sum^{infty}_{k=0} (A^k)^{top}c^{top}c A^k.  end{array}

If we assume that the realization is not just minimal, but that (A,b) is a controllable pair and that (A,c) is an observable pair, then W^{(c)}succ 0 and W^{(o)}succ 0. Now the key observation is that W^{(c)}_{ij} = langle f_i, f_j rangle and similarly W^{(o)}_{ij} = langle g_i, g_j rangle. Hence, we can write delta^2 |f_i|^2 = (mu 2^{m_i-1})^2 as delta^2 W^{(c)}_{ii} = (mu 2^{m_i-1})^2 and indeed sigma_y^2 = mu^2(n+1)sum^n_{i=1}W^{(o)}_{ii}. Using these Lyapunov equations we can say goodbye to the infinite-dimensional objects.
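The identity between the infinite series and the Lyapunov fixed point is easy to confirm numerically; a sketch with a hypothetical stable pair (A,b), truncating the series:

```python
import numpy as np

A = np.array([[0.5, 0.2], [0.0, -0.4]])  # hypothetical A with rho(A) < 1
b = np.array([[1.0], [0.5]])

# truncated series: W^(c) ~= sum_{k=0}^{K} A^k b b^T (A^k)^T
Wc, Akb = np.zeros((2, 2)), b.copy()
for _ in range(200):
    Wc += Akb @ Akb.T
    Akb = A @ Akb

# the sum is (to machine precision) a fixed point of W = A W A^T + b b^T
assert np.allclose(Wc, A @ Wc @ A.T + b @ b.T)
# controllability of (A, b) makes W^(c) positive definite
assert np.all(np.linalg.eigvalsh(Wc) > 0)
```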

To combine these error terms, say we apply a coordinate transformation x'=Tx for some Tin mathsf{GL}_n. Specifically, let T be diagonal and defined by T_{ii}=2delta sqrt{W^{(c)}_{ii}}/(mu 2^{m_i}). Then one finds that W^{(c)'}_{ii}=W^{(c)}_{ii}/T_{ii}^2 and W^{(o)'}_{ii}=W^{(o)}_{ii}T_{ii}^2, which allows us to express the output error (variance) as

 sigma_y^2 = (n+1)delta^2 sum^n_{i=1}frac{W^{(c)}_{ii}W^{(o)}_{ii}}{(2^{m_i})^2}.

Now we want to minimize sigma_y^2 over all sigma'sim sigma plus some optimal selection of m_i. At this point it looks rather painful. To make it work we first reformulate the problem using the well-known arithmetic-geometric mean inequality^{[1]} for non-negative sequences {a_i}_{iin [n]}:

 frac{1}{n} sum^n_{i=1}a_i geq left( prod^n_{i=1}a_i right)^{1/n}.

This inequality yields

 sigma_y^2 = n(n+1)delta^2 frac{1}{n}sum^n_{i=1}frac{W^{(c)}_{ii}W^{(o)}_{ii}}{(2^{m_i})^2}geq n(n+1)left(frac{delta}{2^m}right)^2left(prod^n_{i=1}{W^{(c)}_{ii}W^{(o)}_{ii}}right)^{1/n}qquad (1).

Observe that the right-hand side is independent of m_i; hence, it is a lower bound with respect to minimization over m_i. To achieve this lower bound (to make the inequality in (1) an equality), we can select

 m_i = m + frac{1}{2} left(log_2 W_{ii}^{(c)}W_{ii}^{(o)} - frac{1}{n}sum^n_{j=1}log_2 W_{jj}^{(c)}W_{jj}^{(o)} right).

Indeed, as remarked in the paper, m_i is not necessarily an integer. Nevertheless, by this selection we find the clean expression from (1) to minimize over systems equivalent to sigma, that is, over some transformation matrix T. Define a map f:mathcal{S}^n_{succ 0}to (0,1] by

 f(P) = left( frac{mathrm{det}(P)}{prod^n_{i=1}P_{ii}}right)^{1/2}.

It turns out that f(P)=1 if and only if P is diagonal. This follows^{[2]} from Hadamard's inequality. We can use this map to write

 sigma_y^2 = n(n+1) left(frac{delta}{2^m} right)^2 frac{mathrm{det}(W^{(c)}W^{(o)})^{1/n}}{big(f(W^{(c)})f(W^{(o)})big)^{2/n}} geq n(n+1) left(frac{delta}{2^m}right)^2 mathrm{det}(W^{(c)}W^{(o)})^{1/n}.
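A quick numerical sanity check of this map (a sketch; the positive definite matrices below are hypothetical):

```python
import numpy as np

def f(P):
    """f(P) = (det(P) / prod_i P_ii)^(1/2) on positive definite P."""
    return float((np.linalg.det(P) / np.prod(np.diag(P))) ** 0.5)

P_diag = np.diag([2.0, 3.0, 1.5])
P_full = np.array([[2.0, 0.5, 0.0],
                   [0.5, 3.0, 0.2],
                   [0.0, 0.2, 1.5]])

assert abs(f(P_diag) - 1.0) < 1e-12   # diagonal: f(P) = 1
assert 0.0 < f(P_full) < 1.0          # off-diagonal terms: f(P) < 1 (Hadamard)
```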

Since the term mathrm{det}(W^{(c)}W^{(o)}) is invariant under a transformation T, we can only optimize sigma_y^2 over a structural change in the realization tuple (A,b,c,d), that is, we need to make W^{(c)} and W^{(o)} simultaneously diagonal! It turns out that this numerically 'optimal’ realization, denoted sigma^{star}, is what is called a principal axis realization.

To compute it, diagonalize W^{(o)}, W^{(o)}=Q_o Lambda_o Q_o^{top} and define T_1^{-1}:=Lambda_o^{1/2}Q_o^{top}. Next, construct the diagonalization T_1^{-1}W^{(c)}T_1^{-top}=Q_{1,c}Lambda_{1,c}Q_{1,c}^{top}. Then our desired transformation is T:=T_1 Q_{1,c}. First, recall that under any Tin mathsf{GL}_n the pair (W^{(c)},W^{(o)}) becomes (T^{-1}W^{(c)}T^{-top},T^{top}W^{(o)}T). Plugging in our map T yields the transformed matrices (Lambda_{1,c},I_n), which are indeed diagonal.
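The construction above can be sketched in a few lines of Python (NumPy); the system is hypothetical and the Gramians are obtained by truncating their series:

```python
import numpy as np

A = np.array([[0.5, 0.1], [0.0, 0.3]])  # hypothetical stable A
b = np.array([[1.0], [1.0]])            # (A, b) controllable
c = np.array([[1.0, 0.0]])              # (A, c) observable

def gramians(A, b, c, K=300):
    """Truncated-series controllability/observability Gramians."""
    Wc, Wo = np.zeros_like(A), np.zeros_like(A)
    Mb, Mc = b.copy(), c.copy()
    for _ in range(K):
        Wc += Mb @ Mb.T
        Wo += Mc.T @ Mc
        Mb, Mc = A @ Mb, Mc @ A
    return Wc, Wo

Wc, Wo = gramians(A, b, c)
# step 1: T1^{-1} = Lambda_o^{1/2} Q_o^T, which whitens W^(o)
lo, Qo = np.linalg.eigh(Wo)
T1 = Qo @ np.diag(lo ** -0.5)
# step 2: diagonalize the transformed controllability Gramian
W1 = np.linalg.inv(T1) @ Wc @ np.linalg.inv(T1).T
l1, Q1 = np.linalg.eigh(W1)
T = T1 @ Q1
Ti = np.linalg.inv(T)

Wc_new = Ti @ Wc @ Ti.T  # = Lambda_{1,c}
Wo_new = T.T @ Wo @ T    # = I_n
assert np.allclose(Wo_new, np.eye(2))
assert np.allclose(Wc_new, np.diag(np.diag(Wc_new)))
```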


At last, we do a numerical test in Julia. Consider a linear system sigma defined by the tuple (A,b,c,d):

 A = left[begin{array}{ll} 0.8 & 0.001  0 & -0.5 end{array} right],quad b = left[begin{array}{l} 10  0.1 end{array} right],
 c=left[begin{array}{ll} 10 & 0.1 end{array} right],quad d=0.

To simulate numerical problems, we round the maps f(x,u)=Ax+bu and h(x)=cx to the closest integer, where u(t) is a noisy sinusoidal input. We compare a perfect (no numerical errors) realization sigma to the rounded (denoted by [cdot]) naive realization [sigma] and to the optimized one [sigma^{star}]. We do 100 experiments and show the mean plus the full confidence interval (of the input-output behaviour); the optimized representation is remarkably better. Numerically we observe that for sigma^{star} we have A^{star}_{21}neq 0, which is precisely where the naive A struggles.

[1]: To show this AM-GM inequality, we can use Jensen's inequality for concave functions, that is, mathbf{E}[g(x)]leq g(mathbf{E}[x]). We can use the logarithm as our prototypical concave function on mathbf{R}_{>0} and find logbig(frac{1}{n}sum^n_{i=1}x_ibig)geq frac{1}{n}sum^n_{i=1}log(x_i) = logbig((prod^n_{i=1}x_i)^{1/n} big). Then, the result follows.
[2]: The inequality attributed to Hadamard is slightly more difficult to show. In its general form the statement is that |mathrm{det}(A)|leq prod^n_{i=1}|a_i|_2, for a_i the i^{mathrm{th}} column of A, with equality when the columns are mutually orthogonal. The intuition is clear if one interprets the determinant as the signed volume spanned by the columns of A. In case Ain mathcal{S}^n_{succ 0}, we know that there is an L such that A=L^{top}L, hence, by this simple observation it follows that

 mathrm{det}(A)=mathrm{det}(L)^2 leq prod^n_{i=1}|ell_i|_2^2 = prod^n_{i=1}a_{ii}.

Equality holds when the columns ell_i of L are mutually orthogonal, which is precisely when L^{top}L=A is diagonal, which is the result we use.

Spectral Radius vs Spectral Norm |28 Febr. 2020|
tags: math.LA, math.DS

Lately, linear discrete-time control theory has started to appear in areas far beyond the ordinary. As a byproduct, papers start to surface which claim that stability of linear discrete-time systems

 x_{k+1} = Ax_k

is characterized by |A|_2<1. The confusion is complete by calling this object the spectral norm of a matrix Ain mathbf{R}^{ntimes n}. Indeed, stability of A is not characterized by |A|_2:=sup_{xin mathbf{S}^{n-1}}|Ax|_2, but by rho(A):=max_{iin [n]}|lambda_i(A)|. If rho(A)<1, then all eigenvalues of A are strictly smaller than one in absolute value, and hence for x_k = A^k x_0, lim_{kto infty}x_k = 0. This follows from the Jordan form of A in combination with the simple observation that for lambda in mathbf{C}, lim_{kto infty} lambda^k = 0 iff |lambda|<1.
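To make the distinction concrete, here is a small numerical sketch (the matrix is hypothetical): rho(A)<1<|A|_2, the state norm grows at first, and yet x_kto 0.

```python
import numpy as np

A = np.array([[0.5, 10.0], [0.0, 0.5]])
rho = np.max(np.abs(np.linalg.eigvals(A)))  # spectral radius: 0.5
nrm = np.linalg.norm(A, 2)                  # spectral norm: > 10

x = np.array([1.0, 1.0])
norms = []
for _ in range(200):
    x = A @ x
    norms.append(np.linalg.norm(x))

assert rho < 1 < nrm
assert max(norms) > np.linalg.norm([1.0, 1.0])  # transient growth...
assert norms[-1] < 1e-20                        # ...but asymptotically stable
```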

To continue, define two sets: mathcal{A}_{rho}:={Ain mathbf{R}^{ntimes n};:;rho(A)<1} and mathcal{A}_{ell_2}:={Ain mathbf{R}^{ntimes n};:;|A|_2<1}. Since^{[1]} rho(A)leq |A|_p, we have mathcal{A}_{ell_2}subset mathcal{A}_{rho}. Now, the main question is: how much do we lose by approximating mathcal{A}_{rho} with mathcal{A}_{ell_2}? Motivation to do so is given by the fact that |cdot|_2 is a norm and hence a convex function^{[2]}, i.e., given a convex polytope mathcal{P} with vertices A_i, if {A_i}_isubset mathcal{A}_{ell_2}, then mathcal{P}subset mathcal{A}_{ell_2}. Note that rho(A) is not a norm: rho(A) can be 0 without A=0 (consider an upper-triangular matrix with zeros on the main diagonal), and the triangle inequality can easily fail as well. For example, let

 A_1 = left[begin{array}{ll} 0 & 1 0 & 0  end{array}right], quad A_2 = left[begin{array}{ll} 0 & 0 1 & 0  end{array}right],

Then rho(A_1)=rho(A_2)=0, but rho(A_1+A_2)=1, hence rho(A_1+A_2)leq rho(A_1)+rho(A_2) fails to hold. A setting where the aforementioned property of |cdot|_2 might help is robust control: say we want to assess if some algorithm rendered a compact convex set mathcal{K}, filled with A's, stable. As highlighted before, we could just check if all extreme points of mathcal{K} are members of mathcal{A}_{ell_2}, which might be a small and finite set. Thus, computationally, it appears attractive to consider mathcal{A}_{ell_2} over the generic mathcal{A}_{rho}.
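Both claims are easy to confirm numerically; a sketch (the matrices B_1, B_2 are hypothetical members of mathcal{A}_{ell_2}):

```python
import numpy as np

def rho(A):
    """Spectral radius."""
    return float(np.max(np.abs(np.linalg.eigvals(A))))

A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
A2 = A1.T

# rho vanishes on nonzero (nilpotent) matrices and violates the triangle inequality
assert abs(rho(A1)) < 1e-12 and abs(rho(A2)) < 1e-12
assert abs(rho(A1 + A2) - 1.0) < 1e-12

# |.|_2 is a norm, so {A : |A|_2 < 1} is convex: convex combinations stay inside
B1, B2 = 0.9 * np.eye(2), np.array([[0.0, 0.8], [-0.8, 0.0]])
for th in np.linspace(0.0, 1.0, 11):
    assert np.linalg.norm(th * B1 + (1 - th) * B2, 2) < 1.0
```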

As a form of critique, one might say that mathcal{A}_{rho} is a lot larger than mathcal{A}_{ell_2}. Towards such an argument, one might recall that |A|_2=sqrt{lambda_{mathrm{max}}(A^{top}A)}. Indeed, |A|_2=rho(A) if^{[3]} Ain mathsf{Sym}_n, but mathsf{Sym}_n simeq mathbf{R}^{n(n+1)/2}. Therefore, it seems like the set of A for which considering |cdot|_2 over rho(cdot) is reasonable is negligibly small. To say a bit more, since^{[4]} |A|_2leq |A|_F we see that we can always find a ball with non-zero volume fully contained in mathcal{A}_{rho}; hence mathcal{A}_{rho} has non-empty interior in mathbf{R}^{ntimes n}. The same holds for mathcal{A}_{ell_2} (which is more obvious). So in principle we could try to investigate mathrm{vol}(mathcal{A}_{ell_2})/mathrm{vol}(mathcal{A}_{rho}). For n=1 the sets agree, but this ratio degrades as n grows. However, is this the way to go? Let's say we consider the set widehat{mathcal{A}}_{rho,N}:={Ain mathbf{R}^{ntimes n};:;rho(A)<N/(N+1)}. Clearly, the sets widehat{mathcal{A}}_{rho,N} and mathcal{A}_{rho} are different, even in volume, but for sufficiently large Nin mathbf{N}, should we care? The behaviour they parametrize is effectively the same.

We will stress that by approximating mathcal{A}_{rho} with mathcal{A}_{ell_2}, regardless of their volumetric difference, we ignore a full class of systems and miss out on a non-negligible set of behaviours. To see this, any system described by mathcal{A}_{ell_2} is contractive in the sense that |Ax_k|_2leq |x_k|_2, while systems in mathcal{A}_{rho} are merely asymptotically stable. They might wander off before they return, i.e., there is no reason why for all kin mathbf{N} we must have |x_{k+1}|_pleq |x_{k}|_p. We can do a quick example; consider the matrices

 A_2 = left[begin{array}{lll} 0 & -0.9 & 0.1 0.9 & 0 & 0  0 & 0 & -0.9 end{array}right], quad A_{rho} = left[begin{array}{lll} 0.1 & -0.9 & 1 0.9 & 0 & 0  0 & 0 & -0.9 end{array}right].

Then A_2in mathcal{A}_{ell_2}, A_{rho}in mathcal{A}_{rho}setminus{mathcal{A}_{ell_2}}, and both A_2 and A_{rho} have |lambda_i|=0.9 for all i. We observe that indeed A_2 is contractive: for any initial condition on mathbf{S}^2, we move strictly inside the sphere, whereas for A_{rho}, when starting from the same initial condition, we take a detour outside of the sphere before converging to 0. So although A_2 and A_{rho} have the same spectrum, they parametrize different systems.
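Simulating both matrices from the initial condition e_3 (a hypothetical choice on mathbf{S}^2) shows the difference; a sketch:

```python
import numpy as np

A2 = np.array([[0.0, -0.9, 0.1], [0.9, 0.0, 0.0], [0.0, 0.0, -0.9]])
Ar = np.array([[0.1, -0.9, 1.0], [0.9, 0.0, 0.0], [0.0, 0.0, -0.9]])

def traj_norms(A, x0, K=150):
    """Norms |A^k x0| along the trajectory, k = 1..K."""
    x, out = x0.copy(), []
    for _ in range(K):
        x = A @ x
        out.append(np.linalg.norm(x))
    return np.array(out)

x0 = np.array([0.0, 0.0, 1.0])  # a point on the unit sphere
n2, nr = traj_norms(A2, x0), traj_norms(Ar, x0)

assert np.all(n2 <= 1.0)   # contractive: never leaves the ball
assert nr.max() > 1.0      # detour outside the sphere...
assert nr[-1] < 1e-2       # ...before converging to 0
```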

Simulated link 

In our deterministic setting, contractivity would mean that we confine our statespace to a (solid) sphere with radius |x_0|_2, instead of mathbf{R}^n. Moreover, in linear optimal control, the resulting closed-loop system is usually not contractive. Think of the famous pendulum on a cart: being energy efficient usually has nothing to do with taking the shortest path in the Euclidean sense.

[1] : Recall, |A|_p:=sup_{xin mathbf{S}^{n-1}}|Ax|_p. Then, let xin mathbf{S}^{n-1} be some eigenvector of A. Now we have |A|_pgeq |Ax|_p = |lambda x|_p = |lambda |. Since this eigenvalue is arbitrary it follows that |A|_pgeq rho(A).
[2] : Let A_1,A_2in mathcal{A}_{ell_2} and theta in [0,1], then |theta A_1 + (1-theta)A_2|_2 leq theta|A_1|_2 + (1-theta)|A_2|_2 < 1. This follows from |cdot|_2 being a norm.
[3] : Clearly, if Ain mathsf{Sym}_n, we have sqrt{lambda_{mathrm{max}}(A^{top}A)}=sqrt{lambda_{mathrm{max}}(A^2)}=rho(A). Now, when rho(A)=|A|_2, does this imply that Ain mathsf{Sym}_n? The answer is no; consider

 A' = left[begin{array}{lll} 0.9 & 0 & 0 0 & 0.1 & 0  0 & 0.1 & 0.1 end{array}right].

Then, A'notin mathsf{Sym}_n, yet rho(A')=|A'|_2=0.9. For the full set of conditions on A such that |A|_2=rho(A), see this paper by Goldberg and Zwas.
[4] : Recall that |A|_F = sqrt{mathrm{Tr}(A^{top}A)}=sqrt{sum_{i=1}^n lambda_i(A^{top}A)}. This expression is clearly larger or equal to sqrt{lambda_{mathrm{max}}(A^{top}A)}=|A|_2.

A Special Group, Volume Preserving Feedback |16 Nov. 2019|
tags: math.LA, math.DS

This short note is inspired by the beautiful visualization techniques from Duistermaat and Kolk (DK1999). Let's say we have a 2-dimensional linear discrete-time dynamical system x_{k+1}=Ax_k, which preserves the volume of the cube [-1,1]^2 under the dynamics, i.e. mathrm{Vol}([-1,1]^2)=mathrm{Vol}(A^k[-1,1]^2) for any kin mathbf{Z}.

Formally put, this means that A is part of a certain group, specifically, consider some field F and define the Special Linear group by

     mathsf{SL}(n,F):={ Ain mathsf{GL}(n,F);|; mathrm{det}(A)=1 }.

Now, assume that we are only interested in matrices Ain mathsf{SL}(2,mathbf{R}) such that the cube [-1,1]^2 remains bounded under the dynamics, i.e., lim_{kto infty}A^k[-1,1]^2 is bounded. In this scenario we restrict our attention to mathsf{SO}(2,mathbf{R})subset mathsf{SL}(2,mathbf{R}). To see why this is needed, consider A_1 and A_2 both with determinant 1:

   A_1 = left[   begin{array}{ll}   2 & 0    0 & frac{1}{2}    end{array}right], quad  A_2 = left[   begin{array}{ll}   1 & 2    0 & 1    end{array}right],

If we look at the image of [-1,1]^2 under both these maps (for several iterations), we see that volume is preserved, but also clearly that the set is extending indefinitely in the horizontal direction.

Linear maps 

To have a somewhat interesting problem, assume we are given x_{k+1}=(A+BK)x_k with Ain mathsf{SL}, while it is our task to find a K such that A+BKin mathsf{SO}, that is, find feedback which not only preserves the volume, but also keeps any set of initial conditions (say [-1,1]^2) bounded over time.

Towards a solution, we might ask: given any Ain mathsf{SL}(2,mathbf{R}), what is the nearest (in some norm) rotation matrix? This turns out to be a classic question. Starting from orthogonal matrices, the solution to mathcal{P}:min_{widehat{A}in mathsf{O}(n,mathbf{R})}|A-widehat{A}|_F^2 is given by widehat{A}=UV^{top}, where (U,V) follows from the SVD of A, A=USigma V^{top}. Differently put, when we use a polar decomposition of A, A=widetilde{U}P, then the solution is given by widetilde{U}. See this note for a quick multiplier proof. An interesting side note: problem mathcal{P} is well-defined since mathsf{O}(n,mathbf{R}) is compact. To see this, recall that for any Qin mathsf{O}(n,mathbf{R}) we have |Q|_F = sqrt{n}, hence the set is bounded; moreover, mathsf{O}(n,mathbf{R}) is the pre-image of I_n under varphi:Amapsto A^{top}A, Ain mathsf{GL}(n,mathbf{R}), hence the set is closed as well.
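A minimal sketch of this projection in Python (NumPy); the test matrix is arbitrary, and the sampled comparison is only a sanity check, not a proof of optimality:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))

# nearest orthogonal matrix in Frobenius norm: A = U Sigma V^T  ->  A_hat = U V^T
U, S, Vt = np.linalg.svd(A)
A_hat = U @ Vt

assert np.allclose(A_hat.T @ A_hat, np.eye(3))  # A_hat is orthogonal
# A_hat is also the orthogonal polar factor: A = A_hat P with P = V Sigma V^T
P = Vt.T @ np.diag(S) @ Vt
assert np.allclose(A_hat @ P, A)
# no sampled orthogonal matrix does better
for _ in range(25):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    assert np.linalg.norm(A - A_hat) <= np.linalg.norm(A - Q) + 1e-12
```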

This is great; however, we would like to optimize over mathsf{SO}subset mathsf{O} instead. To do so, one usually resorts to simply checking the sign of the determinant and, if necessary, flipping it via finding the closest matrix with a change in determinant sign. We will see that by selecting appropriate coordinates we can find the solution without this checking of signs.

For today we look at mathsf{SL}(2,mathbf{R}) and largely follow (DK1999). For such a 2-dimensional matrix, the determinant requirement translates to ad-bc=1. Under the following (invertible) linear change of coordinates

 left[   begin{array}{l}   a     d     b     c     end{array}right]  =  left[   begin{array}{llll}   1 & 1 & 0 & 0    1 & -1 & 0 & 0 0 & 0 & 1 & 1    0 & 0 & 1 & -1   end{array}right] left[   begin{array}{l}   p     q     r     s     end{array}right]

mathrm{det}(A)=1 becomes p^2+s^2=q^2+r^2+1, i.e., for any pair (q,r)in mathbf{R}^2 the point (p,s) runs over a circle with radius (q^2+r^2+1)^{1/2}. Hence, we obtain the diffeomorphism mathsf{SL}(2,mathbf{R})simeq mathbf{S}^1 times mathbf{R}^2. We can, however, use a prettier diffeomorphism between mathsf{SL}(2,mathbf{R}) and the product of the circle with an open set in mathbf{R}^2. To that end, use the diffeomorphism:

     (theta,(u,v)) mapsto frac{1}{sqrt{1-u^2-v^2}} left[ begin{array}{ll}     cos(theta)+u & -sin(theta)+v      sin(theta)+v & cos(theta)-u      end{array}right] in mathsf{SL}(2,mathbf{R}),

for theta in mathbf{R}/2pi mathbf{Z} (a formal way of declaring that theta is an angle, defined modulo 2pi) and the pair (u,v) being part of mathbf{D}_o:={(u,v)in mathbf{R}^2;|; u^2+v^2<1} (the open unit disk). One can check directly that this matrix indeed has determinant 1. Since we have a 2-dimensional example, we can explicitly compute the eigenvalues of any Ain mathsf{SL}(2,mathbf{R}) (plug ad-bc=1 into the characteristic equation) and obtain:

     lambda_{1,2} = frac{mathrm{Tr}(A)pmsqrt{mathrm{Tr}(A)^2-4}}{2},quad mathrm{Tr}(A) = frac{2cos(theta)}{sqrt{1-u^2-v^2}}.
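These formulas are straightforward to verify numerically; a sketch where the sample point (theta,u,v) is arbitrary:

```python
import numpy as np

def sl2(theta, u, v):
    """The parametrization of SL(2, R) by (theta, (u, v)) in S^1 x D_o."""
    w = np.sqrt(1.0 - u * u - v * v)
    return np.array([[np.cos(theta) + u, -np.sin(theta) + v],
                     [np.sin(theta) + v,  np.cos(theta) - u]]) / w

theta, u, v = 0.7, 0.3, -0.2
A = sl2(theta, u, v)

assert abs(np.linalg.det(A) - 1.0) < 1e-12   # indeed in SL(2, R)
assert abs(np.trace(A) - 2 * np.cos(theta) / np.sqrt(1 - u**2 - v**2)) < 1e-12
l1, l2 = np.linalg.eigvals(A)
assert abs(l1 * l2 - 1.0) < 1e-10            # lambda_1 * lambda_2 = det(A) = 1
```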

At this point, the only demand on the eigenvalues is that lambda_1 cdot lambda_2 =1. Therefore, when we consider A within the context of a discrete linear dynamical system x_{k+1}=Ax_k, the origin 0 is either a saddle point, or we speak of a marginally stable system (|lambda_{1,2}|=1). We can visualize the set of all marginally stable Ain mathsf{SL}, called M, defined by all big(theta,(u,v)big)in mathbf{R}/2pimathbf{Z}times mathbf{D}_o satisfying

     left|frac{cos(theta)}{sqrt{1-u^2-v^2}}pm left(frac{u^2+v^2-sin(theta)^2}{1-u^2-v^2} right)^{1/2}  right|=1

read more

(DK1999) J.J. Duistermaat and J.A.C. Kolk : ‘‘Lie Groups’’, 1999 Springer.

Riemannian Gradient Flow |5 Nov. 2019|
tags: math.DG, math.DS

Previously we looked at dot{x}=f(x), f(x)=big(A-(x^{top}Ax)I_n big)x - and its resulting flow - as the result of mapping dot{x}=Ax to the sphere. There we saw that for Ain mathcal{S}^n_{++} this flow converges to pm v_1, with Av_1=lambda_{mathrm{max}}(A)v_1, such that for g(x)=|Ax|_2 we have lim_{tto infty}gbig(x(t,x_0) big)=|A|_2 for all x_0in mathbf{S}^{n-1} outside the measure-zero set spanned by v_2,dots,v_n. Hence, it is interesting to look at the flow from an optimization point of view: |A|_2=sup_{xin mathbf{S}^{n-1}}|Ax|_2.

A fair question would be: is our flow not simply Riemannian gradient ascent for this problem? As is common with these kinds of problems, such a hypothesis is something you just feel.

Now, using the tools from (CU1994, p.311) we can compute the gradient of g(x) (on the sphere) via mathrm{grad}_{mathbf{S}^2}(g|_{mathbf{S}^2})=mathrm{grad}_{mathbf{R}^3}(g)-mathrm{d}g(xi)xi, where xi is a vector field normal to mathbf{S}^2, e.g., xi=(x,y,z). From there we obtain

   mathrm{grad}_{mathbf{S}^2}(g|_{mathbf{S}^2})  = left[   begin{array}{lll} (1-x^2) & -xy  & -xz -xy & (1-y^2) & -yz -xz & -yz & (1-z^2) end{array}right] left[   begin{array}{l}partial_x g partial_y g partial_z g end{array}right]=G(x,y,z)nabla g.

To make our life a bit easier, use instead of g the map h:=g^2. Moreover, set s=(x,y,z)in mathbf{R}^3. Then it follows that

   begin{array}{ll}   mathrm{grad}_{mathbf{S}^2}(h|_{mathbf{S}^2}) &= left(I_3-mathrm{diag}(s)left[begin{array}{l} s^{top}  s^{top}  s^{top} end{array}right]right) 2A^{top}As  &= 2left(A^{top}A-(s^{top}A^{top}As)I_3right)s.  end{array}

Of course, to make the computation cleaner, we changed g to h, but the relation between mathrm{grad}_{mathbf{S}^2}(h|_{mathbf{S}^2}) and f is beautiful. Somehow, mapping trajectories of dot{x}=Ax, for some Ain mathcal{S}^n_{++}, to the sphere corresponds to (Riemannian) gradient ascent applied to the problem sup_{xin mathbf{S}^{2}}|Ax|_2.
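The identity between the projected Euclidean gradient and this vector field can be confirmed numerically; a sketch with a hypothetical symmetric A and an arbitrary s on the sphere:

```python
import numpy as np

A = np.array([[10.0, 2.0, 0.0], [2.0, 10.0, 2.0], [0.0, 2.0, 1.0]])
s = np.array([1.0, -2.0, 0.5])
s /= np.linalg.norm(s)                      # a point on S^2

grad_h = 2.0 * A.T @ A @ s                  # Euclidean gradient of h(s) = |As|_2^2
g1 = (np.eye(3) - np.outer(s, s)) @ grad_h  # projection onto the tangent space
g2 = 2.0 * (A.T @ A @ s - (s @ A.T @ A @ s) * s)

assert np.allclose(g1, g2)    # the two expressions agree
assert abs(s @ g2) < 1e-12    # and the gradient is tangent to the sphere
```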

Example sphere 1 

To do an example, let Ain mathcal{S}^3_{++} be given by

   A = left[   begin{array}{lll}   10 & 2 & 0    2 & 10 & 2    0 & 2 & 1   end{array}right].

We can compare dot{x}=f(x) with dot{y}=mathrm{grad}_{mathbf{S}^2}(h|_{mathbf{S}^2}). We see that the gradient flow under the quadratic function takes a clearly ‘‘shorter’’ path.

Now, we formalize the previous analysis a bit and show how fast we converge. Assume that the eigenvectors are ordered such that eigenvector q_1 corresponds to the largest eigenvalue of Ain mathcal{S}^n_{++}. Then, the solution to dot{x}=big(A-(x^{top}Ax)I_nbig)x is given by

     x(t) = frac{exp({At})x_0}{|exp({At})x_0|_2}.
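Since A is symmetric, we can evaluate exp(At) through the eigendecomposition and check convergence to q_1 directly; a sketch with a hypothetical A and x_0:

```python
import numpy as np

A = np.array([[10.0, 2.0, 0.0], [2.0, 10.0, 2.0], [0.0, 2.0, 1.0]])
lam, Q = np.linalg.eigh(A)  # ascending eigenvalues; q_1 = Q[:, -1]

x0 = np.array([1.0, 1.0, 1.0])
x0 /= np.linalg.norm(x0)

t = 5.0
xt = Q @ (np.exp(lam * t) * (Q.T @ x0))  # exp(At) x0 for symmetric A
xt /= np.linalg.norm(xt)                 # x(t) on the sphere

# the flow aligns with the dominant eigenvector q_1 (up to sign)
assert 1.0 - abs(xt @ Q[:, -1]) < 1e-3
```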

Let x_0=sum^{n}_{i=1}c_iq_i be x_0 expressed in eigenvector coordinates, with all q_iin mathbf{S}^{n-1} (normalized). Moreover, assume all eigenvalues are distinct. Then, to measure if x(t) is near q_1, we compute 1-|langle x(t),q_1 rangle|, which is 0 if and only if q_1 is parallel to x(t). To simplify the analysis a bit, we look at 1-langle x(t),q_1 rangle ^2=varepsilon, for some perturbation varepsilonin mathbf{R}_{geq 0}; this yields

     frac{c_1^2 exp({2lambda_1 t})}{sum^{n}_{i=1}c_i^2 exp({2lambda_i t})} = 1-varepsilon.

Next, take the (natural) logarithm on both sides:

     2log (|c_1|)+2lambda_1 t - log left( sum^{n}_{i=1}expleft{2lambda_it + 2log(|c_i|)right}right) = log(1-varepsilon).

These log-sum-exp terms are hard to deal with, but we can apply the so-called ‘‘log-sum-exp trick’’:

     log sum_{iin I}exp (x_i) = x^{star}+log sum_{iin I}exp(x_i-x^{star}), quad forall x^{star}in mathbf{R}^n.
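In code, this trick is what makes log-sum-exp computable for large exponents; a sketch:

```python
import numpy as np

def logsumexp(x):
    """log(sum_i exp(x_i)) via the shift identity."""
    x_star = np.max(x)  # any shift works; the max is the numerically safe choice
    return x_star + np.log(np.sum(np.exp(x - x_star)))

# agrees with the direct formula when the latter does not overflow
x_small = np.array([1.0, 2.0, 0.5])
assert abs(logsumexp(x_small) - np.log(np.sum(np.exp(x_small)))) < 1e-12

# and stays finite where the naive evaluation would overflow
x_large = np.array([1000.0, 1001.0, 999.0])
assert np.isfinite(logsumexp(x_large))
assert abs(logsumexp(x_large)
           - (1001.0 + np.log(np.exp(-1.0) + 1.0 + np.exp(-2.0)))) < 1e-12
```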

In our case, we set x^{star}:= 2lambda_1 t + 2log(|c_1|) and obtain

     -log left(sum^{n}_{i=2}exp left{2 left[(lambda_i-lambda_1)t + logleft(frac{|c_i|}{|c_1|} right) right] right}+1 right) = log(1-varepsilon).

We clearly observe that for tto infty the LHS approaches 0 from below, which means that varepsilonto 0 from above, as intended. Of course, we also observe that the method is not completely general: we already assumed distinct eigenvalues, but there is more. We also do not converge when x_0 in mathrm{span}{q_2,dots,q_n}, which is, however, a set of measure 0 on the sphere mathbf{S}^{n-1}.

More interestingly, we see that the convergence rate is largely dictated by the ‘‘spectral gap/eigengap’’ lambda_2-lambda_1. Specifically, to have a particular projection error varepsilon, such that 1gg varepsilon>0, we need

     t approx log left( frac{varepsilon}{1-varepsilon}right)frac{1}{lambda_2-lambda_1}.

Comparing this to the resulting flow from dot{s}=2left(A^{top}A-(s^{top}A^{top}As)I_3right)s, s(0)in mathbf{S}^2, we see that we have the same flow, but with Amapsto 2 A^2. This is interesting, since A and 2A^2 have the same eigenvectors, yet a different (scaled) spectrum. With respect to the convergence rate, we have to compare (lambda_2-lambda_1) and (lambda_2^2-lambda_1^2) for any lambda_1,lambda_2in mathbf{R}_{> 0} with lambda_1>lambda_2 (the additional 2 is not so interesting).

It is obvious what will happen; the crux is: is lambda larger or smaller than 1? Can we immediately extend this to a Newton-type algorithm? Well, this fails (globally) since we work in mathbf{R}^3 instead of purely with mathbf{S}^2. To be concrete, mathrm{det}(G(x,y,z))|_{mathbf{S}^2}=0; we never have 3 degrees of freedom.

Of course, these observations turned out to be far from new, see for example (AMS2008, sec. 4.6).

(AMS2008) P.A. Absil, R. Mahony and R. Sepulchre: ‘‘Optimization Algorithms on Matrix Manifolds’’, 2008 Princeton University Press.
(CU1994) Constantin Udriste: ‘‘Convex Functions and Optimization Methods on Riemannian Manifolds’’, 1994 Kluwer Academic Publishers.

Dynamical Systems on the Sphere |3 Nov. 2019|
tags: math.DS

Great arc G 

In this post we start a bit with control of systems living on compact metric spaces. This is interesting since we have to rethink what we want; xtoinfty cannot even occur. Let a great arc (circle) mathcal{G} on mathbf{S}^{n-1}:={xin mathbf{R}^n;:;|x|_2=1} be parametrized by the normal of a hyperplane mathcal{H}_a:={xin mathbf{R}^n;:;langle a , xrangle =0} through 0, i.e., mathcal{G}=mathcal{H}_acap mathbf{S}^{n-1}, where a is the axis of rotation for mathcal{G}.

To start, consider a linear dynamical system \dot{x}=Ax, x(0)=x_0, for which the solution is given by x(t)=\mathrm{exp}(At)x_0. We can map this solution to the sphere via x(t) \mapsto x(t)/\|x(t)\|_2=:y(t). The corresponding vector field for y(t) is given by

 \begin{array}{ll} \dot{y} = f(y) &= \frac{\dot{x}}{\|x\|_2}+x\,\frac{\partial}{\partial t}\left(\frac{1}{\|x\|_2} \right) \\ &= \frac{\dot{x}}{\|x\|_2}-x\,\frac{1}{\|x\|_2^3}x^{\top}\dot{x} \\ &= A\,\frac{x}{\|x\|_2} - \frac{x}{\|x\|_2}\,\frac{x^{\top}}{\|x\|_2}A\,\frac{x}{\|x\|_2} \\ &= \left(A-(y^{\top}Ay)\, I_n\right) y. \end{array}
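As a quick numerical sanity check of this derivation (a sketch; the random A, x_0 and the use of scipy.linalg.expm are just for illustration), we can compare a finite-difference derivative of y(t)=x(t)/\|x(t)\|_2 against the derived vector field:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))   # an arbitrary (not necessarily symmetric) A
x0 = rng.standard_normal(3)

def y_of_t(t):
    # x(t) = exp(At) x0, projected onto the sphere
    x = expm(A * t) @ x0
    return x / np.linalg.norm(x)

t, h = 0.7, 1e-6
y = y_of_t(t)
dy_num = (y_of_t(t + h) - y_of_t(t - h)) / (2 * h)  # central difference
dy_field = A @ y - (y @ A @ y) * y                  # (A - (y^T A y) I_n) y
assert np.allclose(dy_num, dy_field, atol=1e-5)
```

Note that the derivation above did not require A to be symmetric; symmetry only enters later, for the stability analysis.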

The beauty is the following: to find the equilibria, consider (A-(y^{\top}Ay)I_n)y = 0. Of course, y=0 is not valid since 0\notin \mathbf{S}^{n-1}. However, take any eigenvector v of A, then v/\|v\|_2 is still an eigenvector, plus it lives on \mathbf{S}^{n-1}. If we plug such a scaled v (from now on, consider all eigenvectors to live on \mathbf{S}^{n-1}) into f(y) = 0 we get (A-\lambda I_n)v=0, which is rather cool. The eigenvectors of A are (at least) equilibria of the flow of y. Note that if f(v)=0, then also f(-v)=0, so each eigenvector gives rise to two equilibria. We say at least, since if \mathrm{ker}(A) is non-trivial, then for any y\in \mathrm{ker}(A)\cap \mathbf{S}^{n-1} we get f(y)=0.
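For a symmetric A this observation is easy to verify numerically (a sketch; the random A is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                      # symmetric, so real spectrum

def f(y, A):
    # vector field on the sphere: f(y) = (A - (y^T A y) I_n) y
    return A @ y - (y @ A @ y) * y

_, V = np.linalg.eigh(A)
for v in V.T:                          # eigenvectors, returned with unit norm
    assert np.allclose(f(v, A), 0.0, atol=1e-10)   # an equilibrium
    assert np.allclose(f(-v, A), 0.0, atol=1e-10)  # and so is its antipode
```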

Assuming \lambda(A)\subset \mathbf{R}, what can be said about the stability of these equilibrium points? Here, we must look at T_v\mathbf{S}^{n-1}, where the n-dimensional system is locally an (n-1)-dimensional linear system. To do this, we start by computing the standard Jacobian of f, which is given by

     Df(y) = A - (y^{\top}Ay)\, I_n - 2\, yy^{\top}A.

As said before, we do not have an n-dimensional system. To that end, we assume A\in \mathsf{Sym}(n,\mathbf{R}) with diagonalization A=Q\Lambda Q^{\top} (otherwise perform a Gram–Schmidt procedure) and perform the change of basis Df'=Q^{\top}Df\,Q. Then, if we are interested in the qualitative stability properties of v_i (the i^{\mathrm{th}} eigenvector of A), we can simply look at the eigenvalues of the matrix resulting from removing the i^{\mathrm{th}} row and column of Df'|_{v_i}, denoted Df'|_{T_{v_i}\mathbf{S}^{n-1}}.

We do two examples. In both figures, initial conditions are indicated with a dot, while equilibrium points are indicated with a star.

[Figure: phase portrait on the sphere for the first example]

First, let

   A_1 = \left[ \begin{array}{lll} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{array}\right]

Clearly, for A_1 we have Q=I_3; let (v_1,v_2,v_3) be (e_1,e_2,e_3). Then, using the procedure from before, we find that \pm v_1 is stable (attracting), while \pm v_2 and \pm v_3 are locally unstable in one direction and neutral (a zero eigenvalue, along the great arc) in the other. Hence, this example gives rise to the two stable poles.
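This claim can be checked by carrying out the reduction numerically (a sketch of the procedure above; since Q=I_3 here, Df'=Df):

```python
import numpy as np

A = np.diag([1.0, -1.0, -1.0])  # A_1; Q = I_3, eigenvectors e_1, e_2, e_3

def Df(y, A):
    # Jacobian of f(y) = (A - (y^T A y) I_n) y, for symmetric A
    n = len(y)
    return A - (y @ A @ y) * np.eye(n) - 2.0 * np.outer(y, y) @ A

for i in range(3):
    v = np.eye(3)[:, i]
    # restrict to T_v S^2: drop the i-th row and column of Df'|_v
    Jt = np.delete(np.delete(Df(v, A), i, axis=0), i, axis=1)
    print(i + 1, np.sort(np.linalg.eigvals(Jt).real))
```

For v_1 both eigenvalues equal -2 (attracting), while for v_2 and v_3 one eigenvalue is 2 and the other 0, the zero direction being along the great arc.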


Finite Calls to Stability | 31 Oct. 2019 |
tags: math.LA, math.DS

Consider an unknown dynamical system x_{k+1}=Ax_k parametrized by A\in \mathsf{Sym}(n,\mathbf{R}). In this short note we look at a relatively simple, yet neat, method to assess the stability of A via finitely many calls to a particular oracle. Here, we will follow the notes by Roman Vershynin (RV2011).

Assume we have no knowledge of A, but we are able to obtain \langle Ay, y \rangle for any desired y\in \mathbf{R}^n. It turns out that we can use this oracle to bound \rho(A) from above.

First, recall that since A is symmetric, \|A\|_2=\sigma_{\mathrm{max}}(A)=\max_i|\lambda_i(A)|=\rho(A). Now, the common definition of \|A\|_2 is \sup_{x\in \mathbf{R}^n\setminus\{0\}}\|Ax\|_2/\|x\|_2. The constraint set \mathbf{R}^n\setminus\{0\} is not particularly convenient from a computational point of view. Instead, we can look at a compact constraint set: \|A\|_2=\sup_{x\in \mathbf{S}^{n-1}} \|Ax\|_2, where \mathbf{S}^{n-1} is the unit sphere in \mathbf{R}^n. The latter formulation has an advantage: we can discretize this compact constraint set and control the function over the individual chunks.

To do this formally, we can use \varepsilon-nets, which are convenient for quantitative analysis on compact metric spaces (X,d). A set \mathcal{N}_{\varepsilon}\subseteq X is an \varepsilon-net for X if \forall x\in X\;\exists y\in \mathcal{N}_{\varepsilon}:d(x,y)\leq \varepsilon. Hence, |\mathcal{N}_{\varepsilon}| measures the number of balls needed to cover X.

For example, take (\mathbf{S}^{n-1},\ell^n_2) as our metric space, where we want to bound |\mathcal{N}_{\varepsilon}| (the cardinality, i.e., the number of balls). To construct a minimal set of balls, consider an \varepsilon-separated \varepsilon-net, i.e., d(y_1,y_2)\geq \varepsilon for all distinct y_1,y_2\in \mathcal{N}_{\varepsilon}, so that the balls B_{\varepsilon/2}(y) are disjoint. Now the insight is simply that \mathrm{vol}(B_{\varepsilon/2})|\mathcal{N}_{\varepsilon}|\leq \mathrm{vol}(B_{1+\varepsilon/2})-\mathrm{vol}(B_{1-\varepsilon/2}). Luckily, we understand the volume of n-dimensional Euclidean balls well, such that we can establish

     |\mathcal{N}_{\varepsilon}| \leq \frac{\mathrm{vol}(B_{1+\varepsilon/2})-\mathrm{vol}(B_{1-\varepsilon/2})}{\mathrm{vol}(B_{\varepsilon/2})} = (2/\varepsilon+1)^n-(2/\varepsilon-1)^n.

This is indeed sharper than the bound (1+2/\varepsilon)^n from Lemma 5.2 in (RV2011).
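A quick numerical comparison of the annulus-based bound above with the classical covering bound (1+2/\varepsilon)^n (a sketch; the chosen values of n and \varepsilon are arbitrary):

```python
# annulus-based bound: (2/eps + 1)^n - (2/eps - 1)^n
# classical bound:     (1 + 2/eps)^n
for n in (2, 3, 8):
    for eps in (0.5, 0.25, 0.1):
        annulus = (2 / eps + 1) ** n - (2 / eps - 1) ** n
        classical = (1 + 2 / eps) ** n
        assert annulus < classical   # strictly sharper for eps > 0
        print(n, eps, annulus, classical)
```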

Next, we recite Lemma 5.4 from (RV2011). Given an \varepsilon-net for \mathbf{S}^{n-1} with \varepsilon\in[0,1/2), we have

     \|A\|_2 = \sup_{x\in \mathbf{S}^{n-1}}|\langle Ax,x\rangle| \leq \frac{1}{1-2\varepsilon}\sup_{x\in \mathcal{N}_{\varepsilon}}|\langle Ax,x\rangle|.

This can be shown using the Cauchy–Schwarz inequality. Now, it would be interesting to see how sharp this bound is. To that end, we do a 2-dimensional example (x=(x_1,x_2)). Note that here we can easily find an optimal grid using polar coordinates. Let the unknown system be parametrized by

   A = \left[ \begin{array}{ll} 0.1 & -0.75 \\ -0.75 & 0.1 \end{array}\right]

such that \rho(A)=0.85. In the figures below we compute the upper bound on \|A\|_2 for several nets \mathcal{N}_{\varepsilon}. Indeed, for finer nets, we see the upper bound approaching 0.85. However, the convergence is slow, inversely related to the exponential growth of |\mathcal{N}_{\varepsilon}|.


Overall, we see that after about 100 calls to our oracle we can be certain that A is exponentially stable, as the upper bound drops below 1. Here we just highlighted an example from (RV2011); at a later moment we will discuss its use in probability theory.
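The experiment can be sketched as follows (an illustrative reimplementation, not the code used for the figures; the uniform polar-coordinate net is one possible construction):

```python
import numpy as np

A = np.array([[0.1, -0.75],
              [-0.75, 0.1]])  # rho(A) = 0.85

def oracle(y):
    # the only quantity we may query: <Ay, y>
    return y @ A @ y

for eps in (0.4, 0.2, 0.1, 0.05):
    # uniform net on S^1: enough points so that every x in S^1 lies
    # within (chordal) distance eps of the net
    m = int(np.ceil(np.pi / np.arcsin(eps / 2.0)))
    th = 2.0 * np.pi * np.arange(m) / m
    net = np.stack([np.cos(th), np.sin(th)], axis=1)
    # Lemma 5.4: ||A||_2 <= sup over the net of |<Ax, x>| / (1 - 2 eps)
    bound = max(abs(oracle(y)) for y in net) / (1.0 - 2.0 * eps)
    print(m, bound)
```

For the coarser nets the bound is useless (well above 1), while roughly a hundred oracle calls suffice to certify \rho(A)<1, matching the observation above.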