Lyapunov Decrease Condition
To represent the condition that the Lyapunov function $V(x)$ must decrease along system trajectories, we typically introduce a new function $\dot{V}(x) = \nabla V(x) \cdot f(x)$, which represents the rate of change of $V$ along those trajectories. That is, if $x(t)$ is a trajectory defined by $\frac{dx}{dt} = f(x)$, then $\dot{V}(x(t)) = \frac{d}{dt} \left[ V(x(t)) \right]$. It is then sufficient to show that $\dot{V}(x)$ is negative away from the fixed point and zero at the fixed point, since a negative derivative means a decreasing function.
Put mathematically, it is sufficient to require $\dot{V}(x) < 0 \, \forall x \ne x_0$ and $\dot{V}(x_0) = 0$. We call such functions negative definite (around the fixed point $x_0$). The weaker condition that $\dot{V}(x) \le 0 \, \forall x \ne x_0$ and $\dot{V}(x_0) = 0$ is called negative semi-definiteness.
The condition that $\dot{V}(x_0) = 0$ is satisfied by the definition of $\dot{V}$ and the fact that $x_0$ is a fixed point, so we do not need to train for it. In cases where the dynamics have some dependence on the neural network (e.g., in policy search), we should instead train directly for $f(x_0) = 0$; together with the minimization condition, which guarantees $[\nabla V](x_0) = 0$, this ensures $\dot{V}(x_0) = 0$.
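For intuition, the following is a minimal sketch of computing $\dot{V}$ by hand, assuming ForwardDiff.jl for the gradient; the dynamics f, the candidate V, and the test point are illustrative placeholders rather than part of the NeuralLyapunov.jl API.
using ForwardDiff, LinearAlgebra

f(x) = -x                                  # example dynamics dx/dt = f(x), fixed point at the origin
V(x) = x ⋅ x                               # candidate Lyapunov function V(x) = ‖x‖²
V̇(x) = ForwardDiff.gradient(V, x) ⋅ f(x)  # rate of change of V along trajectories

V̇([1.0, 2.0])  # returns -10.0: negative away from the fixed point, as required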
NeuralLyapunov.jl provides the LyapunovDecreaseCondition struct for users to specify the form of the decrease condition to enforce through training.
NeuralLyapunov.LyapunovDecreaseCondition — Type
LyapunovDecreaseCondition(check_decrease, rate_metric, strength, rectifier)
Specifies the form of the Lyapunov decrease condition to be used.
Fields
- check_decrease::Bool: whether or not to train for negativity/nonpositivity of $V̇(x)$.
- rate_metric::Function: should increase with $V̇(x)$; used when check_decrease == true.
- strength::Function: specifies the level of strictness for negativity training; should be zero when the two inputs are equal and nonnegative otherwise; used when check_decrease == true.
- rectifier::Function: positive when the input is positive and (approximately) zero when the input is negative.
Training conditions
If check_decrease == true, training will enforce:
$\texttt{rate\_metric}(V(x), V̇(x)) ≤ -\texttt{strength}(x, x_0).$
The inequality will be approximated by the equation:
$\texttt{rectifier}(\texttt{rate\_metric}(V(x), V̇(x)) + \texttt{strength}(x, x_0)) = 0.$
Note that the approximate equation and inequality are identical when $\texttt{rectifier}(t) = \max(0, t)$.
If the dynamics truly have a fixed point at $x_0$ and $V̇(x)$ is truly the rate of decrease of $V(x)$ along the dynamics, then $V̇(x_0)$ will be $0$ and there is no need to train for $V̇(x_0) = 0$.
Examples:
Asymptotic decrease can be enforced by requiring $V̇(x) ≤ -C \lVert x - x_0 \rVert^2$, for some positive $C$, which corresponds to
rate_metric = (V, dVdt) -> dVdt
strength = (x, x0) -> C * (x - x0) ⋅ (x - x0)
This can also be accomplished with AsymptoticStability.
Exponential decrease of rate $k$ is proven by $V̇(x) ≤ - k V(x)$, which corresponds to
rate_metric = (V, dVdt) -> dVdt + k * V
strength = (x, x0) -> 0.0
This can also be accomplished with ExponentialStability.
In either case, the rectified linear unit rectifier = (t) -> max(zero(t), t) exactly represents the inequality, but differentiable approximations of this function may be employed.
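As a concrete sketch, the asymptotic-decrease example above can be assembled by hand via the positional constructor from the docstring; with these values it should coincide with AsymptoticStability() at its defaults, so constructing it directly like this is purely illustrative.
using NeuralLyapunov, LinearAlgebra

C = 1e-6
decrease_condition = LyapunovDecreaseCondition(
    true,                                # check_decrease: train for negativity of V̇
    (V, dVdt) -> dVdt,                   # rate_metric: just V̇(x)
    (x, x0) -> C * (x - x0) ⋅ (x - x0),  # strength: C‖x - x0‖²
    t -> max(zero(t), t)                 # rectifier: exact ReLU representation of the inequality
)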
Pre-defined decrease conditions
NeuralLyapunov.AsymptoticStability — Function
AsymptoticStability(; C, strength, rectifier)
Construct a LyapunovDecreaseCondition corresponding to asymptotic stability.
The decrease condition for asymptotic stability is $V̇(x) < 0$, which is here represented as $V̇(x) ≤ -\texttt{strength}(x, x_0)$, where strength is positive definite around $x_0$. By default, $\texttt{strength}(x, x_0) = C \lVert x - x_0 \rVert^2$ for the supplied $C > 0$. $C$ defaults to 1e-6.
The inequality is represented by $\texttt{rectifier}(V̇(x) + \texttt{strength}(x, x_0)) = 0$. The default value rectifier = (t) -> max(zero(t), t) exactly represents the inequality, but differentiable approximations of this function may be employed.
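For example, a user wanting a stricter decrease margin than the default could write the following; the value of C here is illustrative.
decrease_condition = AsymptoticStability(; C = 1e-3)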
NeuralLyapunov.ExponentialStability — Function
ExponentialStability(k; rectifier)
Construct a LyapunovDecreaseCondition corresponding to exponential stability of rate $k$.
The Lyapunov condition for exponential stability is $V̇(x) ≤ -k V(x)$ for some $k > 0$.
The inequality is represented by $\texttt{rectifier}(V̇(x) + k V(x)) = 0$. The default value rectifier = (t) -> max(zero(t), t) exactly represents the inequality, but differentiable approximations of this function may be employed.
NeuralLyapunov.StabilityISL — Function
StabilityISL(; rectifier)
Construct a LyapunovDecreaseCondition corresponding to stability in the sense of Lyapunov (i.s.L.).
Stability i.s.L. is proven by $V̇(x) ≤ 0$. The inequality is represented by $\texttt{rectifier}(V̇(x)) = 0$. The default value rectifier = (t) -> max(zero(t), t) exactly represents the inequality, but differentiable approximations of this function may be employed.
NeuralLyapunov.DontCheckDecrease — Function
DontCheckDecrease()
Construct a LyapunovDecreaseCondition which represents not checking for decrease of the Lyapunov function along system trajectories. This is appropriate in cases when the Lyapunov decrease condition has been structurally enforced.
Defining your own decrease condition
If a user wishes to define their own version of the decrease condition in a form other than $\texttt{rate\_metric}(V(x), \dot{V}(x)) \le -\texttt{strength}(x, x_0)$, they must define their own subtype of AbstractLyapunovDecreaseCondition.
NeuralLyapunov.AbstractLyapunovDecreaseCondition — Type
AbstractLyapunovDecreaseCondition
Represents the decrease condition in a neural Lyapunov problem.
All concrete AbstractLyapunovDecreaseCondition subtypes should define the check_decrease and get_decrease_condition functions.
When constructing the PDESystem, NeuralLyapunovPDESystem uses check_decrease to determine if it should include an equation equating the result of get_decrease_condition to zero.
NeuralLyapunov.check_decrease — Function
check_decrease(cond::AbstractLyapunovDecreaseCondition)
Return true if cond specifies training to meet the Lyapunov decrease condition, and false if cond specifies no training to meet this condition.
NeuralLyapunov.get_decrease_condition — Function
get_decrease_condition(cond::AbstractLyapunovDecreaseCondition)
Return a function of $V$, $V̇$, $x$, and $x_0$ that returns zero when the Lyapunov decrease condition is met and a value greater than zero when it is violated.
Note that the first two inputs, $V$ and $V̇$, are functions, so the decrease condition can depend on the value of these functions at multiple points.
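As an illustration, here is a hedged sketch of a custom condition enforcing $V̇(x) ≤ -V(x)^2$; the struct name and the particular condition form are invented for this example, assuming only the interface described above.
import NeuralLyapunov
import NeuralLyapunov: check_decrease, get_decrease_condition

# Hypothetical condition enforcing V̇(x) ≤ -V(x)²; not part of the package.
struct QuadraticDecreaseCondition <: NeuralLyapunov.AbstractLyapunovDecreaseCondition
    rectifier::Function
end

# This condition does specify training for decrease.
check_decrease(::QuadraticDecreaseCondition) = true

# Zero when V̇(x) + V(x)² ≤ 0, positive when the condition is violated.
# V and V̇ arrive as functions, so a condition could sample them at several points.
function get_decrease_condition(cond::QuadraticDecreaseCondition)
    return (V, dVdt, x, x0) -> cond.rectifier(dVdt(x) + V(x)^2)
end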