Estimating the Region of Attraction

In this demonstration, we add awareness of the region of attraction (RoA) estimation task to our training.

We'll be examining the simple one-dimensional differential equation:

\[\frac{dx}{dt} = - x + x^3.\]

This system has a fixed point at $x = 0$ whose region of attraction is $x \in (-1, 1)$, which we will attempt to identify.
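
To see why the basin of attraction is $(-1, 1)$, factor the right-hand side and check its sign:

\[\frac{dx}{dt} = -x \left( 1 - x^2 \right),\]

so the fixed points are $x = 0$ and $x = \pm 1$. For $0 < |x| < 1$ the derivative pushes the state toward the origin, while for $|x| > 1$ it pushes the state away, so exactly the interval $(-1, 1)$ flows to $x = 0$.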

We'll train in the larger domain $x \in [-2, 2]$.

Copy-Pastable Code

using NeuralPDE, Lux, NeuralLyapunov, ComponentArrays
import Optimization, OptimizationOptimisers, OptimizationOptimJL
using Random, StableRNGs

rng = StableRNG(0)
Random.seed!(200)

######################### Define dynamics and domain ##########################

f(x, p, t) = -x .+ x .^ 3
lb = [-2.0];
ub = [ 2.0];
fixed_point = [0.0];

####################### Specify neural Lyapunov problem #######################

# Define neural network discretization
dim_state = length(lb)
dim_hidden = 5
dim_output = 2
chain = [Chain(
             Dense(dim_state, dim_hidden, tanh),
             Dense(dim_hidden, dim_hidden, tanh),
             Dense(dim_hidden, 1, use_bias = false)
         ) for _ in 1:dim_output]
ps, st = Lux.setup(rng, chain)
ps = ps |> ComponentArray |> f64
st = st |> f64

# Define training strategy
strategy = GridTraining(0.1)
discretization = PhysicsInformedNN(chain, strategy; init_params = ps, init_states = st)

# Define neural Lyapunov structure
structure = PositiveSemiDefiniteStructure(dim_output)
minimization_condition = DontCheckNonnegativity()

# Define Lyapunov decrease condition
decrease_condition = make_RoA_aware(AsymptoticStability())

# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(structure, minimization_condition, decrease_condition)

############################# Construct PDESystem #############################

@named pde_system = NeuralLyapunovPDESystem(f, lb, ub, spec)

######################## Construct OptimizationProblem ########################

prob = discretize(pde_system, discretization)

########################## Solve OptimizationProblem ##########################

res = Optimization.solve(prob, OptimizationOptimisers.Adam(); maxiters = 300)
prob = Optimization.remake(prob, u0 = res.u)
res = Optimization.solve(prob, OptimizationOptimJL.BFGS(); maxiters = 300)

###################### Get numerical Lyapunov functions ######################
net = discretization.phi
θ = res.u.depvar

V, V̇ = get_numerical_lyapunov_function(net, θ, structure, f, fixed_point)

################################## Simulate ###################################
states = lb[]:0.001:ub[]
V_samples = vec(V(states'))
V̇_samples = vec(V̇(states'))

# Calculate RoA estimate
ρ = decrease_condition.ρ
RoA_states = states[vec(V(transpose(states))) .≤ ρ]
RoA = (first(RoA_states), last(RoA_states))

Detailed description

In this example, we set up the dynamics as a Julia function with the standard out-of-place signature f(x, p, t) and don't specify symbols for the state variables, so $x$ will be given the default name state1 in the PDESystem.

f(x, p, t) = -x .+ x .^ 3
lb = [-2.0];
ub = [ 2.0];
fixed_point = [0.0];
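
As a quick illustrative check (not part of the tutorial code), the dynamics vanish at the stable fixed point and at the boundary of the true region of attraction:

f(0.0, nothing, 0.0)   # 0.0 — the stable fixed point
f(1.0, nothing, 0.0)   # 0.0 — unstable fixed point bounding the RoA
f(-1.0, nothing, 0.0)  # 0.0 — unstable fixed point bounding the RoA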

Setting up the neural network with Lux and the NeuralPDE training strategy is no different from any other physics-informed neural network problem. For more on that aspect, see the NeuralPDE documentation. Since the state is only one-dimensional, training on a grid is tractable here.
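
For a rough sense of scale (this check is illustrative, not part of the training code), a grid spacing of 0.1 on $[-2, 2]$ gives only 41 training points:

length(lb[1]:0.1:ub[1])  # 41 grid points for GridTraining(0.1)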

using Lux, StableRNGs, ComponentArrays

# Stable random number generator for doc stability
rng = StableRNG(0)

# Define neural network discretization
dim_state = length(lb)
dim_hidden = 5
dim_output = 2
chain = [Chain(
             Dense(dim_state, dim_hidden, tanh),
             Dense(dim_hidden, dim_hidden, tanh),
             Dense(dim_hidden, 1, use_bias = false)
         ) for _ in 1:dim_output]
ps, st = Lux.setup(rng, chain)
ps = ps |> ComponentArray |> f64
st = st |> f64
2-element Vector{@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}:
 (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple())
 (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple())
using NeuralPDE

# Define training strategy
strategy = GridTraining(0.1)
discretization = PhysicsInformedNN(chain, strategy; init_params = ps, init_states = st)

We now define our Lyapunov candidate structure along with the form of the Lyapunov conditions we'll be using.

For this example, let's use the default Lyapunov candidate from PositiveSemiDefiniteStructure:

\[V(x) = \left( 1 + \lVert \phi(x) \rVert^2 \right) \log \left( 1 + \lVert x \rVert^2 \right),\]

which structurally enforces positive definiteness. We therefore use DontCheckNonnegativity().
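
To see why this form is positive definite by construction, here is a small standalone sketch (the name V_form and the example network outputs are made up for illustration; they are not part of the NeuralLyapunov API):

# The factor log(1 + ‖x‖²) vanishes only at x = 0, and the factor
# (1 + ‖ϕ(x)‖²) is always at least 1, so the product is zero at the
# fixed point and positive everywhere else, whatever the network outputs.
V_form(ϕx, x) = (1 + sum(abs2, ϕx)) * log(1 + sum(abs2, x))

V_form([0.3, -1.2], [0.0])  # 0.0 at the fixed point
V_form([0.3, -1.2], [0.5])  # > 0 away from the fixed point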

We only require asymptotic stability in this example, but we use make_RoA_aware so that positive values of $\dot{V}(x)$ are only penalized where $V(x) \le \rho$; here $\rho = 1$, the default.
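
This corresponds to the standard sublevel-set estimate of the region of attraction: if the sublevel set is bounded and $\dot{V}$ is negative on it away from the fixed point, it must lie inside the true region of attraction,

\[\Omega_\rho = \{ x : V(x) \le \rho \} \subseteq \text{RoA} \quad \text{whenever } \dot{V}(x) < 0 \text{ for all } x \in \Omega_\rho \setminus \{0\}.\]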

using NeuralLyapunov

# Define neural Lyapunov structure
structure = PositiveSemiDefiniteStructure(dim_output)
minimization_condition = DontCheckNonnegativity()

# Define Lyapunov decrease condition
decrease_condition = make_RoA_aware(AsymptoticStability())

# Construct neural Lyapunov specification
spec = NeuralLyapunovSpecification(structure, minimization_condition, decrease_condition)

# Construct PDESystem
@named pde_system = NeuralLyapunovPDESystem(f, lb, ub, spec)

\[ \begin{align} max\left( 0, \frac{2 \mathtt{state1} \left( - \mathtt{state1} + \mathtt{state1}^{3} \right) \left( 1 + \left( \mathtt{{\varphi}1}\left( \mathtt{state1} \right) \right)^{2} + \left( \mathtt{{\varphi}2}\left( \mathtt{state1} \right) \right)^{2} \right)}{1 + \mathtt{state1}^{2}} + 1 \cdot 10^{-6} \mathtt{state1}^{2} + \left( - \mathtt{state1} + \mathtt{state1}^{3} \right) \left( 2 \frac{\mathrm{d} \mathtt{{\varphi}1}\left( \mathtt{state1} \right)}{\mathrm{d}state1} \mathtt{{\varphi}1}\left( \mathtt{state1} \right) + 2 \frac{\mathrm{d} \mathtt{{\varphi}2}\left( \mathtt{state1} \right)}{\mathrm{d}state1} \mathtt{{\varphi}2}\left( \mathtt{state1} \right) \right) \log\left( 1 + \mathtt{state1}^{2} \right) \right) \left( 1 - \left( 1 + \left( \mathtt{{\varphi}1}\left( \mathtt{state1} \right) \right)^{2} + \left( \mathtt{{\varphi}2}\left( \mathtt{state1} \right) \right)^{2} \right) \log\left( 1 + \mathtt{state1}^{2} \right) \geq 0 \right) &= 0 \end{align} \]

Now, we solve the PDESystem using NeuralPDE the same way we would any PINN problem.

prob = discretize(pde_system, discretization)

import Optimization, OptimizationOptimisers, OptimizationOptimJL

res = Optimization.solve(prob, OptimizationOptimisers.Adam(); maxiters = 300)
prob = Optimization.remake(prob, u0 = res.u)
res = Optimization.solve(prob, OptimizationOptimJL.BFGS(); maxiters = 300)
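
If you want to watch the loss while training, Optimization.solve also accepts a callback; a minimal sketch is below (optional, not part of the original tutorial; the two-argument (state, loss) signature follows recent Optimization.jl versions):

# Optional: print the loss at each iteration.
monitor = function (state, loss)
    println("Loss: ", loss)
    return false  # returning true would halt the optimization early
end
# e.g. res = Optimization.solve(prob, OptimizationOptimisers.Adam(); callback = monitor, maxiters = 300)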

net = discretization.phi
θ = res.u.depvar
ComponentVector{Float64,SubArray...}(φ1 = (layer_1 = (weight = [-0.4578486382961273; -0.06867478042840958; … ; 1.6441099643707275; 0.23668336868286133;;], bias = [0.34184861183166504, -0.9780089855194092, -0.04572296142578125, -0.8118267059326172, 0.055347442626953125]), layer_2 = (weight = [-0.34389057755470276 0.9279804229736328 … 0.07996013015508652 -1.0231869220733643; -1.203420877456665 0.2879287600517273 … 0.2310013771057129 0.06449579447507858; … ; -0.5013081431388855 -0.6881240010261536 … -0.5247265696525574 0.8011581897735596; -0.9508798718452454 -0.13354729115962982 … 0.5574626326560974 0.5449771285057068], bias = [0.2723175585269928, 0.33674874901771545, -0.11881285905838013, 0.1314016431570053, -0.3838435113430023]), layer_3 = (weight = [-0.2886456251144409 0.13539186120033264 … -0.14496836066246033 -0.17473424971103668])), φ2 = (layer_1 = (weight = [2.2938613891601562; -2.124943733215332; … ; 1.8151359558105469; 2.6265342235565186;;], bias = [-0.9846765995025635, -0.6382620334625244, 0.5142543315887451, -0.46349334716796875, -0.03649091720581055]), layer_2 = (weight = [-0.02004436030983925 -1.2301450967788696 … -1.070192813873291 0.8329293131828308; 0.6985112428665161 1.016635775566101 … -1.0654730796813965 -0.8717480301856995; … ; -1.1370961666107178 1.029000997543335 … -1.1567752361297607 -0.5575475692749023; -0.8275012969970703 0.5202121138572693 … -0.1531342715024948 -0.9814570546150208], bias = [-0.1508704423904419, -0.12097594141960144, -0.4097283184528351, -0.2462950050830841, -0.04345227777957916]), layer_3 = (weight = [-0.13673631846904755 0.2019369751214981 … -0.49816328287124634 -0.4100285470485687])))

We can use the result of the optimization problem to build the Lyapunov candidate as a Julia function, then sample on a finer grid than we trained on to find the estimated region of attraction.

V, V̇ = get_numerical_lyapunov_function(net, θ, structure, f, fixed_point)

# Sample
states = lb[]:0.001:ub[]
V_samples = vec(V(states'))
V̇_samples = vec(V̇(states'))

# Calculate RoA estimate
ρ = decrease_condition.ρ
RoA_states = states[vec(V(transpose(states))) .≤ ρ]
RoA = (first(RoA_states), last(RoA_states))

# Print statistics
println("V(0.,0.) = ", V(fixed_point))
println("V ∋ [", min(V(fixed_point), minimum(V_samples)), ", ", maximum(V_samples), "]")
println("V̇ ∋ [", minimum(V̇_samples), ", ", max(V̇(fixed_point), maximum(V̇_samples)), "]")
println("True region of attraction: (-1, 1)")
println("Estimated region of attraction: ", RoA)
V(0.0) = 0.0
V ∋ [0.0, 3.525150757887429]
V̇ ∋ [-0.7213744871117956, 11.087439221178085]
True region of attraction: (-1, 1)
Estimated region of attraction: (-0.788, 0.821)

The estimated region of attraction is within the true region of attraction.
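
As an extra sanity check (not part of the original tutorial; it assumes OrdinaryDiffEq is installed), we can integrate the dynamics from the endpoints of the estimated RoA and confirm that both trajectories decay to the fixed point:

using OrdinaryDiffEq

# Both starting points lie inside (-1, 1), so the trajectories should
# converge to the fixed point at x = 0.
for x0 in RoA
    sol = solve(ODEProblem(f, x0, (0.0, 20.0)), Tsit5())
    println("x(0) = ", x0, " → x(20) ≈ ", sol.u[end])
end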

using Plots

p_V = plot(states, V_samples, label = "V", xlabel = "x", linewidth=2);
p_V = hline!([ρ], label = "V = ρ", legend = :top);
p_V = vspan!(collect(RoA); label = "Estimated Region of Attraction", color = :gray, fillstyle = :/);
p_V = vspan!([-1, 1]; label = "True Region of Attraction", opacity = 0.2, color = :green);

p_V̇ = plot(states, V̇_samples, label = "dV/dt", xlabel = "x", linewidth=2);
p_V̇ = hline!([0.0], label = "dV/dt = 0", legend = :top);
p_V̇ = vspan!(collect(RoA); label = "Estimated Region of Attraction", color = :gray, fillstyle = :/);
p_V̇ = vspan!([-1, 1]; label = "True Region of Attraction", opacity = 0.2, color = :green);

plt = plot(p_V, p_V̇)
[Figure: V(x) and dV/dt(x) over x ∈ [-2, 2], with the estimated and true regions of attraction highlighted]