Damped Simple Harmonic Oscillator

Let's train a neural network to prove the exponential stability of the damped simple harmonic oscillator (SHO).

The damped SHO is represented by the system of differential equations

\[\begin{align*} \frac{dx}{dt} &= v \\ \frac{dv}{dt} &= -2 \zeta \omega_0 v - \omega_0^2 x \end{align*}\]

where $x$ is the position, $v$ is the velocity, $t$ is time, and $\zeta, \omega_0$ are parameters.

We'll consider just the box domain $x \in [-5, 5], v \in [-2, 2]$.
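For $\zeta, \omega_0 > 0$ the origin is exponentially stable. As a quick illustration, separate from the tutorial code below and assuming OrdinaryDiffEq.jl is available, simulating from a nonzero initial condition shows the state decaying toward the equilibrium:

using OrdinaryDiffEq

# Damped SHO dynamics, matching the tutorial's f below
sho!(du, u, p, t) = (du[1] = u[2]; du[2] = -2p[1] * p[2] * u[2] - p[2]^2 * u[1])

# Simulate with ζ = 0.5, ω_0 = 1.0 from (x, v) = (2, 1)
demo = solve(ODEProblem(sho!, [2.0, 1.0], (0.0, 20.0), [0.5, 1.0]), Tsit5())
demo.u[end]  # ≈ [0, 0]: the trajectory has decayed to the equilibrium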

Copy-Pastable Code

using NeuralPDE, Lux, NeuralLyapunov
using SciMLBase # for ODEFunction and SciMLBase.SymbolCache
using Optimization, OptimizationOptimisers, OptimizationOptimJL
using StableRNGs, Random

rng = StableRNG(0)
Random.seed!(200)

######################### Define dynamics and domain ##########################

"Simple Harmonic Oscillator Dynamics"
function f(state, p, t)
    ζ, ω_0 = p
    pos = state[1]
    vel = state[2]
    vcat(vel, -2ζ * ω_0 * vel - ω_0^2 * pos)
end
lb = [-5.0, -2.0];
ub = [ 5.0,  2.0];
p = [0.5, 1.0];
fixed_point = [0.0, 0.0];
dynamics = ODEFunction(f; sys = SciMLBase.SymbolCache([:x, :v], [:ζ, :ω_0]))

####################### Specify neural Lyapunov problem #######################

# Define neural network discretization
dim_state = length(lb)
dim_hidden = 10
dim_output = 3
chain = [Lux.Chain(
             Dense(dim_state, dim_hidden, tanh),
             Dense(dim_hidden, dim_hidden, tanh),
             Dense(dim_hidden, 1)
         ) for _ in 1:dim_output]
ps = Lux.initialparameters(rng, chain)

# Define training strategy
strategy = QuasiRandomTraining(1000)
discretization = PhysicsInformedNN(chain, strategy; init_params = ps)

# Define neural Lyapunov structure
structure = NonnegativeStructure(
    dim_output;
    δ = 1e-6
)
minimization_condition = DontCheckNonnegativity(check_fixed_point = true)

# Define Lyapunov decrease condition
# Damped SHO has exponential stability at a rate of k = ζ * ω_0, so we train to certify that
decrease_condition = ExponentialStability(prod(p))

# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(
    structure,
    minimization_condition,
    decrease_condition
)

############################# Construct PDESystem #############################

@named pde_system = NeuralLyapunovPDESystem(
    dynamics,
    lb,
    ub,
    spec;
    p = p
)

######################## Construct OptimizationProblem ########################

prob = discretize(pde_system, discretization)

########################## Solve OptimizationProblem ##########################

res = Optimization.solve(prob, OptimizationOptimisers.Adam(); maxiters = 500)

###################### Get numerical Lyapunov function #######################

net = discretization.phi
θ = res.u.depvar

V, V̇ = get_numerical_lyapunov_function(
    net,
    θ,
    structure,
    f,
    fixed_point;
    p = p
)

Detailed description

In this example, we set the dynamics up as an ODEFunction and use a SciMLBase.SymbolCache to tell the resulting PDESystem what to call our state and parameter variables.

using SciMLBase # for ODEFunction and SciMLBase.SymbolCache

"Simple Harmonic Oscillator Dynamics"
function f(state, p, t)
    ζ, ω_0 = p
    pos = state[1]
    vel = state[2]
    vcat(vel, -2ζ * ω_0 * vel - ω_0^2 * pos)
end
lb = [-5.0, -2.0];
ub = [ 5.0,  2.0];
p = [0.5, 1.0];
fixed_point = [0.0, 0.0];
dynamics = ODEFunction(f; sys = SciMLBase.SymbolCache([:x, :v], [:ζ, :ω_0]))

Setting up the neural network with Lux and the NeuralPDE training strategy is no different from any other physics-informed neural network problem. For more on that aspect, see the NeuralPDE documentation.

using Lux, StableRNGs

# Stable random number generator for doc stability
rng = StableRNG(0)

# Define neural network discretization
dim_state = length(lb)
dim_hidden = 10
dim_output = 3
chain = [Lux.Chain(
             Dense(dim_state, dim_hidden, tanh),
             Dense(dim_hidden, dim_hidden, tanh),
             Dense(dim_hidden, 1)
         ) for _ in 1:dim_output]
ps = Lux.initialparameters(rng, chain)
3-element Vector{@NamedTuple{layer_1::@NamedTuple{weight::Matrix{Float32}, bias::Vector{Float32}}, layer_2::@NamedTuple{weight::Matrix{Float32}, bias::Vector{Float32}}, layer_3::@NamedTuple{weight::Matrix{Float32}, bias::Vector{Float32}}}}:
 (layer_1 = (weight = [-0.3237479 -0.5437387; -0.048560407 -1.9027755; … ; -1.6571343 -1.0880196; 0.11297749 -0.21115682], bias = [0.12009778, 0.17294402, 0.29379776, 0.5655657, -0.108922966, 0.04379597, 0.12652467, 0.31780428, -0.2874046, 0.30533487]), layer_2 = (weight = [-0.72350246 -0.3401721 … -0.7510324 -0.069471754; 0.045605414 0.15956083 … 0.21771742 0.19644302; … ; 0.26822245 0.57399637 … -0.81163055 0.012115666; -0.78351724 0.8305831 … 0.5208219 -0.55054986], bias = [0.04156058, 0.02013397, 0.101262905, 0.26914015, -0.2074199, -0.13821417, 0.15174063, -0.14683884, -0.1337309, 0.19964091]), layer_3 = (weight = [-0.35277173 0.4447264 … 0.27290577 -0.103856936], bias = [0.22611074]))
 (layer_1 = (weight = [-1.0708913 -0.2169331; -0.36197582 1.5401223; … ; 0.15410937 -1.0339351; 1.2886711 -0.7981994], bias = [0.48097476, -0.14829622, 0.40842858, -0.15696919, -0.08328807, -0.5449018, -0.091243714, -0.44998, -0.5328614, -0.3034612]), layer_2 = (weight = [0.06931635 0.70472336 … -0.555685 0.06449507; -0.2095592 -0.2052483 … -0.58327854 0.41810396; … ; 0.16986024 -0.6749801 … -0.83834386 -0.3477046; -0.3656068 0.76393116 … -0.6311579 0.52442175], bias = [0.14191626, 0.2844518, 0.07772283, 0.15210773, 0.23963133, -0.0060114358, 0.2544337, -0.013675445, 0.112837784, -0.2950333]), layer_3 = (weight = [-0.49402636 0.35983545 … 0.28786662 0.4767699], bias = [0.074471064]))
 (layer_1 = (weight = [1.2439798 -0.31668776; 0.4865117 -1.0685825; … ; -1.3613875 -1.053776; 1.840911 0.17835672], bias = [-0.089505404, 0.371908, 0.59805584, 0.36353645, -0.499529, 0.6626108, -0.13184647, 0.3982164, -0.2829247, 0.25867644]), layer_2 = (weight = [-0.7837689 0.10970415 … -0.04149061 -0.3317203; -0.55738413 -0.2639527 … 0.73763 0.69223243; … ; 0.04197509 -0.22453102 … 0.47352976 0.5388029; 0.4078881 -0.3659533 … 0.5233756 -0.20981494], bias = [-0.21433939, 0.15363123, 0.02154136, 0.09674127, 0.10775468, 0.033996847, -0.2623025, 0.21038397, -0.036707506, -0.15219104]), layer_3 = (weight = [-0.5051298 -0.4015753 … 0.14219329 -0.5399407], bias = [-0.2908838]))
using NeuralPDE

# Define training strategy
strategy = QuasiRandomTraining(1000)
discretization = PhysicsInformedNN(chain, strategy; init_params = ps)

We now define our Lyapunov candidate structure along with the form of the Lyapunov conditions we'll be using.

For this example, let's use a Lyapunov candidate

\[V(x) = \lVert \phi(x) \rVert^2 + \delta \log \left( 1 + \lVert x \rVert^2 \right),\]

which structurally enforces nonnegativity, but doesn't guarantee $V([0, 0]) = 0$. We therefore don't need a term in the loss function enforcing $V(x) > 0 \, \forall x \ne 0$, but we do need something enforcing $V([0, 0]) = 0$. So, we use DontCheckNonnegativity(check_fixed_point = true).

To train for exponential stability we use ExponentialStability, but we must specify the rate of exponential decrease, which we know in this case to be $\zeta \omega_0$.
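This rate comes from the linear dynamics: the system matrix of the damped SHO has characteristic polynomial

\[\lambda^2 + 2 \zeta \omega_0 \lambda + \omega_0^2 = 0, \qquad \lambda = -\zeta \omega_0 \pm \omega_0 \sqrt{\zeta^2 - 1},\]

so in the underdamped case $\zeta < 1$ (here $\zeta = 0.5$) both eigenvalues have real part $-\zeta \omega_0$ and trajectories decay like $e^{-\zeta \omega_0 t}$. With $p = [\zeta, \omega_0] = [0.5, 1.0]$, the decay rate is $k = \zeta \omega_0 = 0.5$, which is why the code below passes prod(p) to ExponentialStability.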

using NeuralLyapunov

# Define neural Lyapunov structure
structure = NonnegativeStructure(
    dim_output;
    δ = 1e-6
)
minimization_condition = DontCheckNonnegativity(check_fixed_point = true)

# Define Lyapunov decrease condition
# Damped SHO has exponential stability at a rate of k = ζ * ω_0, so we train to certify that
decrease_condition = ExponentialStability(prod(p))

# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(
    structure,
    minimization_condition,
    decrease_condition
)

# Construct PDESystem
@named pde_system = NeuralLyapunovPDESystem(
    dynamics,
    lb,
    ub,
    spec;
    p = p
)

\[ \begin{align} max\left( 0, \frac{2 \cdot 10^{-6} v x}{1 + v^{2} + x^{2}} + \frac{2 \cdot 10^{-6} v \left( - 2 v \zeta \mathtt{\omega\_0} - \mathtt{\omega\_0}^{2} x \right)}{1 + v^{2} + x^{2}} + 0.5 \left( 1 \cdot 10^{-6} \log\left( 1 + v^{2} + x^{2} \right) + \left( \mathtt{{\varphi}3}\left( x, v \right) \right)^{2} + \left( \mathtt{{\varphi}2}\left( x, v \right) \right)^{2} + \left( \mathtt{{\varphi}1}\left( x, v \right) \right)^{2} \right) + 2 \left( v \left( \mathtt{{\varphi}3}\left( x, v \right) \frac{\mathrm{d}}{\mathrm{d}x} \mathtt{{\varphi}3}\left( x, v \right) + \mathtt{{\varphi}2}\left( x, v \right) \frac{\mathrm{d}}{\mathrm{d}x} \mathtt{{\varphi}2}\left( x, v \right) + \mathtt{{\varphi}1}\left( x, v \right) \frac{\mathrm{d}}{\mathrm{d}x} \mathtt{{\varphi}1}\left( x, v \right) \right) + \left( - 2 v \zeta \mathtt{\omega\_0} - \mathtt{\omega\_0}^{2} x \right) \left( \mathtt{{\varphi}3}\left( x, v \right) \frac{\mathrm{d}}{\mathrm{d}v} \mathtt{{\varphi}3}\left( x, v \right) + \mathtt{{\varphi}2}\left( x, v \right) \frac{\mathrm{d}}{\mathrm{d}v} \mathtt{{\varphi}2}\left( x, v \right) + \mathtt{{\varphi}1}\left( x, v \right) \frac{\mathrm{d}}{\mathrm{d}v} \mathtt{{\varphi}1}\left( x, v \right) \right) \right) \right) &= 0 \end{align} \]

Now, we solve the PDESystem using NeuralPDE the same way we would any PINN problem.

prob = discretize(pde_system, discretization)

using Optimization, OptimizationOptimisers, OptimizationOptimJL

res = Optimization.solve(prob, OptimizationOptimisers.Adam(); maxiters = 500)
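Since OptimizationOptimJL is loaded, the Adam result can optionally be refined with a quasi-Newton optimizer. A sketch of that warm start; the choice of BFGS and the iteration count are illustrative and not part of the results shown below:

# Optional refinement: warm-start BFGS from the Adam solution
# (remake is exported by SciMLBase, loaded above)
prob = remake(prob; u0 = res.u)
res = Optimization.solve(prob, OptimizationOptimJL.BFGS(); maxiters = 300)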

net = discretization.phi
θ = res.u.depvar
ComponentVector{Float32,SubArray...}(φ1 = (layer_1 = (weight = Float32[-0.37360328 -0.42321068; -0.076750726 -1.9219621; … ; -1.7319775 -1.0410409; 0.15594007 -0.11850413], bias = Float32[0.10652516, 0.14863726, 0.36818478, 0.38835967, -0.14100409, 0.23546734, 0.13478827, 0.34779376, -0.19842908, 0.45497632]), layer_2 = (weight = Float32[-0.6735979 -0.27110907 … -0.77882767 -0.05312255; 0.13309361 0.21481083 … 0.2792436 0.12886697; … ; 0.29979482 0.60477203 … -0.7822114 -0.020179499; -0.8187858 0.7986153 … 0.4857475 -0.51579326], bias = Float32[0.06932483, -0.11689423, 0.1836182, 0.20625763, -0.14732361, -0.14842062, 0.18981469, -0.07318646, -0.18152644, 0.25320855]), layer_3 = (weight = Float32[-0.38729754 0.41378024 … 0.2612934 -0.10148847], bias = Float32[0.16644314])), φ2 = (layer_1 = (weight = Float32[-0.95562136 -0.27286986; -0.345327 1.5248209; … ; 0.19478573 -0.98960257; 1.249133 -0.72722286], bias = Float32[0.5878318, -0.1299785, 0.25378445, -0.28239143, 0.023659294, -0.54364127, -0.022801187, -0.108322695, -0.47408858, -0.25204143]), layer_2 = (weight = Float32[0.09357815 0.6653794 … -0.5188749 0.05095; -0.12537546 -0.16435145 … -0.60688406 0.33788124; … ; 0.21162708 -0.6465916 … -0.79425114 -0.3921171; -0.30772257 0.8055249 … -0.57437176 0.46409115], bias = Float32[0.10124056, 0.19101188, 0.096458964, 0.14057903, 0.28322068, 0.061198417, 0.2766941, -0.006369471, 0.06984401, -0.34627202]), layer_3 = (weight = Float32[-0.43539965 0.3388176 … 0.36728206 0.3942649], bias = Float32[0.0375049])), φ3 = (layer_1 = (weight = Float32[1.3151585 -0.3988791; 0.40486366 -1.1303415; … ; -1.2699503 -1.1549481; 1.7450922 0.26064673], bias = Float32[-0.04174012, 0.3414518, 0.63959897, 0.28324485, -0.5377269, 0.6036478, -0.1661536, 0.33531702, -0.3699359, 0.17586958]), layer_2 = (weight = Float32[-0.7667084 0.07233742 … 0.006498749 -0.27741018; -0.48995402 -0.19415115 … 0.6966832 0.7638544; … ; -0.0009850627 -0.2536146 … 0.31768134 0.49123442; 0.36626095 -0.41563198 … 0.5595244 -0.24075027], bias = Float32[-0.27271667, 0.0783822, 0.07052465, 0.16007057, 0.024842968, -0.052413605, -0.25464305, 0.26658714, 0.03341519, -0.1992955]), layer_3 = (weight = Float32[-0.45096096 -0.41024134 … 0.21109638 -0.57346314], bias = Float32[-0.21359701])))

We can use the result of the optimization problem to build the Lyapunov candidate $V$ and its time derivative $\dot{V}$ as Julia functions.

V, V̇ = get_numerical_lyapunov_function(
    net,
    θ,
    structure,
    f,
    fixed_point;
    p = p
)
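Both returned functions accept either a single state vector or a matrix whose columns are states, which is how they are used in the grid evaluation below. For example (output omitted):

V([1.0, 0.0])                    # candidate value at x = 1, v = 0
V̇([1.0, 0.0])                    # should be ≤ -k * V([1.0, 0.0]) wherever training succeeded
V(hcat([1.0, 0.0], [0.0, 1.0]))  # column-wise batch evaluation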

Now let's see how we did. We'll evaluate both $V$ and $\dot{V}$ on a $101 \times 101$ grid:

Δx = (ub[1] - lb[1]) / 100
Δv = (ub[2] - lb[2]) / 100
xs = lb[1]:Δx:ub[1]
vs = lb[2]:Δv:ub[2]
states = Iterators.map(collect, Iterators.product(xs, vs))
V_samples = vec(V(hcat(states...)))
V̇_samples = vec(V̇(hcat(states...)))

# Print statistics
V_min, i_min = findmin(V_samples)
state_min = collect(states)[i_min]
V_min, state_min = if V(fixed_point) ≤ V_min
        V(fixed_point), fixed_point
    else
        V_min, state_min
    end

println("V(0.,0.) = ", V(fixed_point))
println("V ∋ [", V_min, ", ", maximum(V_samples), "]")
println("Minimal sample of V is at ", state_min)
println(
    "V̇ ∋ [",
    minimum(V̇_samples),
    ", ",
    max(V̇(fixed_point), maximum(V̇_samples)),
    "]",
)
V(0.,0.) = 0.008926608305521779
V ∋ [0.0041967192550075106, 1.098723690163108]
Minimal sample of V is at [0.0, -0.08]
V̇ ∋ [-3.4110149877389566, 1.1184006284999146]

At least at these validation samples, the conditions that $\dot{V}$ be negative semi-definite and $V$ be minimized at the origin are nearly satisfied.
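One way to make "nearly" concrete is to count how many of the $101 \times 101$ samples violate each condition; a quick check, with counts not part of the recorded output above:

# Grid samples where the decrease condition V̇ ≤ 0 fails
println("V̇ > 0 at ", count(>(0), V̇_samples), " of ", length(V̇_samples), " samples")

# Grid samples where V dips below its value at the equilibrium
println("V < V(0,0) at ", count(<(V(fixed_point)), V_samples), " of ", length(V_samples), " samples")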

using Plots

p1 = plot(xs, vs, V_samples, linetype = :contourf, title = "V", xlabel = "x", ylabel = "v");
p1 = scatter!([0], [0], label = "Equilibrium");
p2 = plot(
    xs,
    vs,
    V̇_samples,
    linetype = :contourf,
    title = "V̇",
    xlabel = "x",
    ylabel = "v",
);
p2 = scatter!([0], [0], label = "Equilibrium");
plot(p1, p2)
[Contour plots of V (left) and V̇ (right) over the (x, v) domain, with the equilibrium marked at the origin.]

Each sublevel set of $V$ completely contained in the plot above has been verified as a subset of the region of attraction.