A Deep Dive Into DifferentialEquations

Everything about ODEs

Oscar Smith

JuliaHub

Chris Rackauckas

JuliaHub, MIT

2025-07-22

Introduction to the Workshop

Who this workshop is for

  • This workshop is for those who want to learn about the deep features of DifferentialEquations.jl for ODEs
    • It is assumed you know what ODEs are and what they are useful for
    • We will not cover SDEs, DDEs, jump processes, etc.; this one is just ODEs
  • We assume that you know a good amount of Julia
  • This is for intermediates, not for experts (but that would be a fun workshop to give if folks are interested)

For a more beginner-friendly version, see on YouTube:

“Intro to solving differential equations in Julia”

Note about coming changes

There will soon be a new major release, DifferentialEquations.jl v8!

  • The major change will be that non-ODE dependencies will be removed from DifferentialEquations.jl
    • The documentation will still cover all types of differential equations
    • For things other than ODEs, you will be required to import the solver packages
      • SDEs: StochasticDiffEq.jl
      • DDEs: DelayDiffEq.jl
    • The default solvers will be associated with those domain packages
  • Result: DifferentialEquations.jl will
    • have far fewer dependencies
    • be much faster to load
    • be focused on ODEs

With that in mind, let’s start the show!

Introduction to DifferentialEquations.jl

Let’s do a quick runthrough of the basics!

Defining and solving an ODE

Let’s solve the Lorenz equations

\[ \begin{aligned} \frac{dx}{dt} &= σ(y-x) \\ \frac{dy}{dt} &= x(ρ-z) - y \\ \frac{dz}{dt} &= xy - βz \\ \end{aligned} \]

Defining and solving an ODE

We will use the common notation. The ODE is defined as:

\[ u' = f(u,p,t) \]

where \(u\) is the state vector, \(p\) are the parameters, and \(t\) is the independent variable. In this workshop we will only be looking at initial value problems, i.e. problems for which \(u_0 = u(t_0)\) is given.

For cases where that is not true, see the boundary value problem solvers (BoundaryValueDiffEq.jl, BVProblem), which will have their own JuliaCon talk

Defining and solving an ODE

import DifferentialEquations as DE

function lorenz!(du, u, p, t)
    x, y, z = u
    σ, ρ, β = p
    du[1] = dx = σ * (y - x)
    du[2] = dy = x * (ρ - z) - y
    du[3] = dz = x * y - β * z
end

u0 = [1.0, 0.0, 0.0]
tspan = (0.0, 100.0)
p = [10.0, 28.0, 8 / 3]
prob = DE.ODEProblem(lorenz!, u0, tspan, p)
ODEProblem with uType Vector{Float64} and tType Float64. In-place: true
Non-trivial mass matrix: false
timespan: (0.0, 100.0)
u0: 3-element Vector{Float64}:
 1.0
 0.0
 0.0

Defining and solving an ODE

sol = DE.solve(prob)
retcode: Success
Interpolation: 3rd order Hermite
t: 1292-element Vector{Float64}:
   0.0
   3.5678604836301404e-5
   0.0003924646531993154
   0.003262408731175873
   0.009058076622686189
   0.01695647090176743
   0.027689960116420883
   0.041856352219618344
   0.060240411865493296
   0.08368541210909924
   ⋮
  99.43545175575305
  99.50217600300971
  99.56297541572351
  99.62622492183432
  99.69561088424294
  99.77387244562912
  99.86354266863755
  99.93826978918452
 100.0
u: 1292-element Vector{Vector{Float64}}:
 [1.0, 0.0, 0.0]
 [0.9996434557625105, 0.0009988049817849058, 1.781434788799189e-8]
 [0.9961045497425811, 0.010965399721242457, 2.1469553658389193e-6]
 [0.9693591548287857, 0.0897706331002921, 0.00014380191884671585]
 [0.9242043547708632, 0.24228915014052968, 0.0010461625485930237]
 [0.8800455783133068, 0.43873649717821195, 0.003424260078582332]
 [0.8483309823046307, 0.6915629680633586, 0.008487625469885364]
 [0.8495036699348377, 1.0145426764822272, 0.01821209108471829]
 [0.9139069585506618, 1.442559985646147, 0.03669382222358562]
 [1.0888638225734468, 2.0523265829961646, 0.07402573595703686]
 ⋮
 [1.2013409155396158, -2.429012698730855, 25.83593282347909]
 [-0.4985909866565941, -2.2431908075030083, 21.591758421186338]
 [-1.3554328352527145, -2.5773570617802326, 18.48962628032902]
 [-2.1618698772305467, -3.5957801801676297, 15.934724265473792]
 [-3.433783468673715, -5.786446127166032, 14.065327938066913]
 [-5.971873646288483, -10.261846004477597, 14.060290896024572]
 [-10.941900618598972, -17.312154206417734, 20.65905960858999]
 [-14.71738043327772, -16.96871551014668, 33.06627229408802]
 [-13.714517151605754, -8.323306384833089, 38.798231477265624]

Plotting the Solution
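The figure for this slide did not survive extraction. A minimal sketch of how the Lorenz solution is typically plotted (self-contained here; the `idxs` phase-space form matches the later slides):

```julia
import DifferentialEquations as DE
import Plots

function lorenz!(du, u, p, t)
    x, y, z = u
    σ, ρ, β = p
    du[1] = σ * (y - x)
    du[2] = x * (ρ - z) - y
    du[3] = x * y - β * z
end

prob = DE.ODEProblem(lorenz!, [1.0, 0.0, 0.0], (0.0, 100.0), [10.0, 28.0, 8 / 3])
sol = DE.solve(prob)

# Phase-space plot of the attractor (state components 1, 2, 3 against each other)
Plots.plot(sol, idxs = (1, 2, 3))
```

Calling `Plots.plot(sol)` without `idxs` instead plots each component against time.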

Tweaking the tolerances

sol = DE.solve(prob; abstol=1e-8, reltol=1e-8)
retcode: Success
Interpolation: 3rd order Hermite
t: 5149-element Vector{Float64}:
   0.0
   0.017370539506359785
   0.02697349576269261
   0.04067533495752067
   0.05298597150659302
   0.06690507927421295
   0.08100844722335863
   0.09613437701906993
   0.11195115057270745
   0.12869064643044806
   ⋮
  99.88145264366318
  99.89776638841587
  99.91349616955557
  99.92881879506439
  99.94396884419129
  99.95924140880066
  99.97505869890645
  99.99229347018232
 100.0
u: 5149-element Vector{Vector{Float64}}:
 [1.0, 0.0, 0.0]
 [0.8782427318469918, 0.4487441646756523, 0.003581924299725066]
 [0.8495175483437437, 0.6750275202519092, 0.008087863943776799]
 [0.8477021970385489, 0.9876816439219543, 0.017264636106651934]
 [0.8805584624732705, 1.2705298190082202, 0.02850040884117731]
 [0.9532932263305036, 1.6062401135067121, 0.045444146019208674]
 [1.0637408103939476, 1.9772602908812493, 0.06873055292620642]
 [1.2235187118943363, 2.425384805111449, 0.10326344169622992]
 [1.4390272109339062, 2.9678317856555, 0.15449440811602191]
 [1.7269175675656934, 3.646943354685834, 0.23329720649508148]
 ⋮
 [14.101480342525452, 14.177461955160991, 34.43692409990671]
 [13.971664076404744, 12.301637498213427, 35.94698220451645]
 [13.579719977934808, 10.2893194497548, 36.872366236627165]
 [12.966406662786008, 8.293409638528471, 37.24938518176801]
 [12.172700035932502, 6.4215209659058985, 37.14703393143843]
 [11.23311742516435, 4.7397816773258405, 36.63987812345723]
 [10.169872923030155, 3.2805669234079806, 35.78826500534756]
 [8.973611066615545, 2.043006860586506, 34.60607875270975]
 [8.442686247736885, 1.6049188787066433, 34.02301082840038]
  • abstol: controls behavior near zero
  • reltol: general tolerance

Interpolating the solution

sol(2.0)
3-element Vector{Float64}:
 -7.876082548012349
 -8.76162181505105
 24.9902609932444
sol([1.0,2.0])
t: 2-element Vector{Float64}:
 1.0
 2.0
u: 2-element Vector{Vector{Float64}}:
 [-9.40845056711063, -9.09619907282856, 28.58162762347937]
 [-7.876082548012349, -8.76162181505105, 24.9902609932444]

You can even get derivatives!

sol([1.0,2.0], Val{1}) # 1st derivative
t: 2-element Vector{Float64}:
 1.0
 2.0
u: 2-element Vector{Vector{Float64}}:
 [3.122514944809218, 14.568413809580317, 9.363465665322934]
 [-8.855392680026874, -14.943331032821312, 2.3665606859746147]

Controlling Saving

sol = DE.solve(prob; saveat = 1.0)
retcode: Success
Interpolation: 1st order linear
t: 101-element Vector{Float64}:
   0.0
   1.0
   2.0
   3.0
   4.0
   5.0
   6.0
   7.0
   8.0
   9.0
   ⋮
  92.0
  93.0
  94.0
  95.0
  96.0
  97.0
  98.0
  99.0
 100.0
u: 101-element Vector{Vector{Float64}}:
 [1.0, 0.0, 0.0]
 [-9.3956448118492, -9.093947219695087, 28.553962327586213]
 [-7.883336671160232, -8.75900465399957, 25.013756136771793]
 [-8.15133981114985, -6.944084586929072, 28.13292182412072]
 [-9.454057381860475, -10.445681428594588, 26.914655951104073]
 [-6.951827816117757, -6.9581421823366085, 25.152213205863525]
 [-9.752018259685896, -8.574534236332523, 29.973559700596777]
 [-7.76079401395273, -9.393426195854332, 23.676418666075953]
 [-7.486378556818742, -5.591207351209719, 28.232853418020085]
 [-10.07114943576555, -11.93933128134955, 26.554126851623852]
 ⋮
 [-2.054059425929499, -3.1365115043748077, 14.919649798319407]
 [8.347533862895407, 14.633644912860765, 14.70837012350214]
 [-7.760751996968182, 0.06972628539380638, 34.214501132556286]
 [0.9036681392660211, -4.652255418950352, 28.194817316941055]
 [-8.179107897349105, -12.698373284982347, 18.77809476952777]
 [-7.326449828060204, -0.47164658956708766, 33.04051682631841]
 [1.2195640115658792, 3.043727402437567, 21.399110484616617]
 [4.4639940346461495, 7.97466626477059, 11.70848880623241]
 [-13.714517151605754, -8.323306384833089, 38.798231477265624]
sol = DE.solve(prob; saveat = [1.0,5.0,100.0])
retcode: Success
Interpolation: 1st order linear
t: 3-element Vector{Float64}:
   1.0
   5.0
 100.0
u: 3-element Vector{Vector{Float64}}:
 [-9.3956448118492, -9.093947219695087, 28.553962327586213]
 [-6.951827816117757, -6.9581421823366085, 25.152213205863525]
 [-13.714517151605754, -8.323306384833089, 38.798231477265624]

Choosing Solvers

sol = DE.solve(prob, DE.Tsit5())
retcode: Success
Interpolation: specialized 4th order "free" interpolation
t: 1286-element Vector{Float64}:
   0.0
   3.5678604836301404e-5
   0.0003924646531993154
   0.003262408731175873
   0.009058076622686189
   0.01695647090176743
   0.027689960116420883
   0.041856352219618344
   0.060240411865493296
   0.08368541210909924
   ⋮
  99.45676937480482
  99.53568468520979
  99.59226237099575
  99.65612897206336
  99.72302555022908
  99.79969793686439
  99.8850058465883
  99.96886914488182
 100.0
u: 1286-element Vector{Vector{Float64}}:
 [1.0, 0.0, 0.0]
 [0.9996434557625105, 0.0009988049817849058, 1.781434788799189e-8]
 [0.9961045497425811, 0.010965399721242457, 2.1469553658389193e-6]
 [0.9693591548287857, 0.0897706331002921, 0.00014380191884671585]
 [0.9242043547708632, 0.24228915014052968, 0.0010461625485930237]
 [0.8800455783133068, 0.43873649717821195, 0.003424260078582332]
 [0.8483309823046307, 0.6915629680633586, 0.008487625469885364]
 [0.8495036699348377, 1.0145426764822272, 0.01821209108471829]
 [0.9139069585506618, 1.442559985646147, 0.03669382222358562]
 [1.0888638225734468, 2.0523265829961646, 0.07402573595703686]
 ⋮
 [-1.1657619305117723, 1.373685296124218, 24.12980077413763]
 [0.11573784756846664, 1.1078629586437472, 19.514375648138515]
 [0.5666934358539449, 1.247206717186019, 16.80370266309915]
 [1.0107556004930176, 1.7854045103425413, 14.242423271505206]
 [1.6719508839568509, 2.9463726083286934, 12.107406226477286]
 [3.0692897008687883, 5.591250595471577, 10.557579389411638]
 [6.268908864716439, 11.506784557952026, 11.415847825035476]
 [11.957358526233273, 19.717770563628097, 20.19016745078509]
 [14.260722802423686, 20.959798540170226, 26.660823824345428]
sol = DE.solve(prob, DE.Vern9())
retcode: Success
Interpolation: specialized 9th order lazy interpolation
t: 869-element Vector{Float64}:
   0.0
   3.5678604836301404e-5
   0.0002500863973219907
   0.0014687170575720212
   0.0084553875791472
   0.034583010273835035
   0.0826165272957066
   0.1436941282721654
   0.21803197890884696
   0.2924347752876473
   ⋮
  99.23419421575856
  99.35009843927007
  99.47004723235709
  99.57647489190444
  99.6797779923158
  99.77079061783341
  99.88458995682443
  99.98374796067499
 100.0
u: 869-element Vector{Vector{Float64}}:
 [1.0, 0.0, 0.0]
 [0.9996434557625105, 0.0009988049817849058, 1.7814347887992147e-8]
 [0.997511001329676, 0.006992815905419874, 8.731563582785523e-7]
 [0.9857190854969888, 0.04079760693551791, 2.9712513424494545e-5]
 [0.9283733223848135, 0.22679983020129352, 0.0009168019978566616]
 [0.8432124214640918, 0.8492478836495613, 0.012780411642447649]
 [1.078671699259547, 2.0221541799417784, 0.0718736646108536]
 [2.0446059958655516, 4.369060910574929, 0.3351465502366729]
 [4.8159406727278, 10.336013728657665, 1.9289458242152586]
 [10.858647536596456, 21.704513304804223, 10.083379735562145]
 ⋮
 [-1.4438544974877383, -2.5001664078223746, 13.573652664098514]
 [-3.506564427383771, -6.337938773434131, 11.014139863903912]
 [-9.232986902131485, -16.17391825606947, 15.369910008072283]
 [-15.885675920862978, -17.945963333937506, 35.143571613051925]
 [-10.315356181823393, -0.4221687097732117, 38.150472876653986]
 [-2.773634018853797, 3.007289653693769, 29.04188154819498]
 [1.0174413740077566, 2.7792626118000903, 21.318226515452114]
 [2.5076595450930452, 4.099690043790872, 16.899383745978227]
 [2.7778681774740885, 4.518071195311841, 16.363654849493063]

What Solvers Are There?

OrdinaryDiffEq.jl Solvers: 299

| Category                | Count |
|-------------------------|-------|
| Adams-Bashforth-Moulton | 13    |
| BDF Methods             | 18    |
| Explicit RK             | 1     |
| Exponential RK          | 17    |
| Extrapolation           | 7     |
| Feagin                  | 3     |
| FIRK                    | 4     |
| Function Map            | 1     |
| High Order RK           | 4     |
| IMEX Multistep          | 2     |
| Linear/Magnus           | 16    |
| Low Order RK            | 26    |
| Low Storage RK          | 45    |
| Nordsieck               | 4     |
| PDIRK                   | 1     |
| PRK                     | 1     |
| QPRK                    | 1     |
| RKN (Nyström)           | 17    |
| Rosenbrock              | 37    |
| SDIRK                   | 29    |
| SSPRK                   | 13    |
| Stabilized IRK          | 1     |
| Stabilized RK           | 6     |
| Symplectic RK           | 18    |
| Taylor Series           | 2     |
| Tsit5                   | 1     |
| Verner                  | 4     |
| Core/Other              | 7     |

DiffEqDevTools.jl ODE Tableaus: 105

Total: 404

What Solvers Are There?

For complete information, see:

https://docs.sciml.ai/DiffEqDocs/stable/solvers/ode_solve/

And the new OrdinaryDiffEq API pages!

What Solver Packages Are There?

There are many different packages to be aware of which all use the same API:

  • OrdinaryDiffEq.jl: the main ODE solver package. Split into subpackages:
    • OrdinaryDiffEqTsit5: Just the non-stiff 5th order adaptive Tsit5 method
    • OrdinaryDiffEqVerner: The VernX high efficiency non-stiff methods
    • OrdinaryDiffEqRosenbrock: The Rosenbrock methods, i.e. Rosenbrock23 and Rodas5P, for small stiff systems
    • OrdinaryDiffEqBDF: The BDF methods, QNDF and FBDF, for large stiff systems
    • OrdinaryDiffEqDefault: The default solver for automatic choice
  • Sundials.jl: Wrapper for SUNDIALS CVODE and IDA. Can be efficient for large stiff systems
  • LSODA.jl: Wrapper for the classic lsoda, tends to be generally good for small systems
  • ODEInterfaceDiffEq.jl: Wrappers for classic Fortran ODE solvers, including dopri5 and radau

Stiff Equations

One major class of equations to know about is stiff equations. While difficult to define rigorously, there are two simple ways to think about them:

  1. If your problem has multiple time scales (6+ orders of magnitude apart), then it’s stiff. Look for parameter values that are very different.
  2. If solvers for non-stiff equations are taking lots of steps, it’s stiff.

This is a very common numerical difficulty, and once identified, these problems require a different set of solvers
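Heuristic 2 can be seen directly by counting steps. A minimal sketch with a hypothetical one-dimensional stiff problem (the state rapidly relaxes onto the slow manifold u ≈ sin(t), so a non-stiff method is stability-limited to tiny steps):

```julia
import DifferentialEquations as DE

# Toy stiff problem: fast decay toward the slowly varying solution sin(t)
function stiff!(du, u, p, t)
    du[1] = -1000.0 * (u[1] - sin(t)) + cos(t)
end

prob = DE.ODEProblem(stiff!, [0.0], (0.0, 10.0))

sol_nonstiff = DE.solve(prob, DE.Tsit5())   # stability-limited: thousands of steps
sol_stiff = DE.solve(prob, DE.Rodas5P())    # accuracy-limited: far fewer steps
println(length(sol_nonstiff.t), " vs ", length(sol_stiff.t))
```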

Stiff Equation Example: ROBER

function rober!(du, u, p, t)
    y₁, y₂, y₃ = u
    k₁, k₂, k₃ = p
    du[1] = -k₁ * y₁ + k₃ * y₂ * y₃
    du[2] = k₁ * y₁ - k₂ * y₂^2 - k₃ * y₂ * y₃
    du[3] = k₂ * y₂^2
    nothing
end

prob = DE.ODEProblem(rober!, [1.0, 0.0, 0.0], (0.0, 1e5), [0.04, 3e7, 1e4])
sol = DE.solve(prob, DE.Rodas5P())
Plots.plot(sol, tspan = (1e-6, 1e5))

Stiff Equation Example: ROBER

Plots.plot(sol, tspan = (1e-6, 1e5), xscale = :log10, yscale=:log10)

Some special solvers you should be aware of

  • ROCK2: An explicit stabilized method that is efficient for stiff equations whose Jacobians are dominated by real (negative) eigenvalues, e.g. parabolic PDE discretizations
  • SSP methods: Methods which enforce certain properties (total variation, maximum norm, entropy) if dt is sufficiently small. This can be required for stability with some partial differential equations (hyperbolic equations)
  • low-memory RK methods: These methods require less RAM than other methods, at the cost of being less computationally efficient. Good for very large PDE discretizations
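As a sketch of the first bullet, ROCK2 on a hypothetical 1D heat-equation discretization (a method-of-lines system whose Jacobian has large negative real eigenvalues, the regime stabilized RK methods target):

```julia
import DifferentialEquations as DE

# Method-of-lines discretization of u_t = u_xx with zero boundary conditions:
# the Jacobian is symmetric with large negative real eigenvalues
function heat!(du, u, p, t)
    dx2 = p[1]
    N = length(u)
    du[1] = (u[2] - 2u[1]) / dx2
    for i in 2:(N - 1)
        du[i] = (u[i + 1] - 2u[i] + u[i - 1]) / dx2
    end
    du[N] = (u[N - 1] - 2u[N]) / dx2
end

N = 100
u0 = sin.(range(0, pi, length = N))
prob = DE.ODEProblem(heat!, u0, (0.0, 1.0), [(1 / (N + 1))^2])
sol = DE.solve(prob, DE.ROCK2())  # explicit, yet stable on this stiff problem
```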

Faster Loading Times / Decreased Dependency Usage

DifferentialEquations.jl loads a full set of solvers, but if you know exactly the solver that you want, you can simply load the subpackage with that solver!

import OrdinaryDiffEqTsit5 as ODE5

function lorenz!(du, u, p, t)
    x, y, z = u
    σ, ρ, β = p
    du[1] = dx = σ * (y - x)
    du[2] = dy = x * (ρ - z) - y
    du[3] = dz = x * y - β * z
end

u0 = [1.0, 0.0, 0.0]
tspan = (0.0, 100.0)
p = [10.0, 28.0, 8 / 3]
prob = ODE5.ODEProblem(lorenz!, u0, tspan, p)
sol = ODE5.solve(prob, ODE5.Tsit5())
Plots.plot(sol, idxs=(1,2,3))

SplitODEProblem and IMEX methods

Sometimes one part of the problem operates at a much faster time scale than the rest. In that case, you can split the problem, using a method for stiff equations on the fast part and an explicit method on the slow part. This is called implicit-explicit, or IMEX, integration. If we define:

\[ u' = f(u,p,t) + g(u,p,t) \]

then an IMEX method treats \(f\) implicitly and \(g\) explicitly.

Example Defining A SplitODEProblem

u = rand(4, 2)
f1(du, u, p, t) = du .= 2u
f2(du, u, p, t) = du .= 2u
prob = DE.SplitODEProblem(f1, f2, u, (0.0, 1.0))
sol = DE.solve(prob, DE.KenCarp4());

NOTE: SplitODEProblems can be solved by standard ODE solvers (by definition, just putting them back together) but this affords no performance advantage, it’s simply a convenience.

Special Case: Semilinear ODE Problem

There is a special case of a split ODE problem where \(f(u,p,t)\) is linear, i.e. \(f(u,p,t)=Au\). When this occurs, ODE solvers can specialize by solving that part exactly via the matrix exponential \(\exp(At)v\), and thus we can use special integrators known as Exponential Runge-Kutta methods. These can be fast for PDE discretizations, which tend to have this form.

using OrdinaryDiffEqExponentialRK, SciMLOperators
A = [2.0 -1.0; -1.0 2.0]
linnonlin_f1 = MatrixOperator(A)
linnonlin_f2 = (du, u, p, t) -> du .= 1.01 .* u
linnonlin_fun_iip = SplitFunction(linnonlin_f1, linnonlin_f2)
tspan = (0.0, 1.0)
u0 = [0.1, 0.1]
prob = SplitODEProblem(linnonlin_fun_iip, u0, tspan)
sol = solve(prob, ETDRK4(), dt = 1 / 4)
retcode: Success
Interpolation: 3rd order Hermite
t: 5-element Vector{Float64}:
 0.0
 0.25
 0.5
 0.75
 1.0
u: 5-element Vector{Vector{Float64}}:
 [0.1, 0.1]
 [0.16528262442126226, 0.16528262442126226]
 [0.2731834593558004, 0.2731834593558004]
 [0.4515247911080592, 0.45152479110805915]
 [0.7462920246560225, 0.7462920246560223]

Linear Specialized Methods

Similarly, if \(u'=f\) is almost linear, there are specializations.

  • For \(u'=Au\), the analytical solution is \(u(t) = \exp(At)u_0\), which has fast solvers in ExponentialUtilities.jl.
  • For \(u'=A(t)u\), there are Magnus methods
  • For \(u'=A(u)u\), there are Lie group methods

By using matrix exponentials, these methods can have good conservation properties.

Example Magnus Method Solve

using OrdinaryDiffEqLinear, SciMLOperators
function update_func(A, u, p, t)
    A[1, 1] = 0
    A[2, 1] = sin(t)
    A[1, 2] = -1
    A[2, 2] = 0
end
A0 = ones(2, 2)
A = MatrixOperator(A0, update_func! = update_func)
u0 = ones(2)
tspan = (0.0, 30.0)
prob = ODEProblem(A, u0, tspan)
sol = solve(prob, MagnusGL6(), dt = 1 / 4)
retcode: Success
Interpolation: 3rd order Hermite
t: 121-element Vector{Float64}:
  0.0
  0.25
  0.5
  0.75
  1.0
  1.25
  1.5
  1.75
  2.0
  2.25
  ⋮
 28.0
 28.25
 28.5
 28.75
 29.0
 29.25
 29.5
 29.75
 30.0
u: 121-element Vector{Vector{Float64}}:
 [1.0, 1.0]
 [0.7477293614981243, 1.0258828420877586]
 [0.4846168770197237, 1.0809910052153802]
 [0.2077086776816325, 1.1304780421849812]
 [-0.07743720021742452, 1.142040711171155]
 [-0.35815132720055304, 1.0921780876094724]
 [-0.6176147554090752, 0.9719724825053363]
 [-0.8389265268887828, 0.7897594565342917]
 [-1.0092640424285149, 0.5690843306580033]
 [-1.1229624436986743, 0.34234573310103056]
 ⋮
 [-7495.028632347559, 35809.99852774761]
 [-16389.324119293113, 35412.37325589443]
 [-25281.82163999651, 35980.17344234276]
 [-34549.27933515689, 38583.24788622997]
 [-44825.122533718866, 44203.92282104653]
 [-56978.455843667, 53749.35858093559]
 [-72097.60641851061, 68065.2458523157]
 [-91470.65975493801, 87887.39535208608]
 [-116539.7869763494, 113671.55135864494]

DynamicalODEProblem and Symplectic Integrators

using OrdinaryDiffEqSymplecticRK, LinearAlgebra, ForwardDiff, Plots; gr()
H(q,p) = norm(p)^2/2 - inv(norm(q))
L(q,p) = q[1]*p[2] - p[1]*q[2]

pdot(dp,p,q,params,t) = ForwardDiff.gradient!(dp, q->-H(q, p), q)
qdot(dq,p,q,params,t) = ForwardDiff.gradient!(dq, p-> H(q, p), p)

initial_position = [.4, 0]
initial_velocity = [0., 2.]
initial_cond = (initial_position, initial_velocity)
initial_first_integrals = (H(initial_cond...), L(initial_cond...))
tspan = (0,100.)
prob = DynamicalODEProblem(pdot, qdot, initial_velocity, initial_position, tspan)
sol = solve(prob, KahanLi6(), dt=1//10);

Symplectic Integrator Long Time Solution

plot_orbit(sol) = plot(sol,idxs=(3,4), lab="Orbit", title="Kepler Problem Solution")

function plot_first_integrals(sol, H, L)
    plot(initial_first_integrals[1].-map(u->H(u.x[2], u.x[1]), sol.u), lab="Energy variation", title="First Integrals")
    plot!(initial_first_integrals[2].-map(u->L(u.x[2], u.x[1]), sol.u), lab="Angular momentum variation")
end
analysis_plot(sol, H, L) = plot(plot_orbit(sol), plot_first_integrals(sol, H, L))
analysis_plot(sol, H, L)

Normal Integrator Long Time Solution

sol = solve(prob, DE.Tsit5());
analysis_plot(sol, H, L)

Mass Matrices and Differential-Algebraic Equations (DAEs)

Instead of just an ODE \(u'=f(u,p,t)\), DifferentialEquations.jl can express mass matrix ODEs:

\[ Mu' = f(u,p,t) \]

In many cases this can be a performance improvement if \(M\) is sparse (since \(M^{-1}\) would likely be dense!). However, it can be more than just a performance improvement: mass matrices can be used to impose constraints when \(M\) is singular.

This constrained ODE is called a DAE.

DAE via Mass Matrix Example

Say we want to solve:

\[ \begin{aligned} \frac{dy_1}{dt} &= -0.04 y_1 + 10^4 y_2 y_3 \\ \frac{dy_2}{dt} &= 0.04 y_1 - 10^4 y_2 y_3 - 3*10^7 y_{2}^2 \\ 1 &= y_{1} + y_{2} + y_{3} \\ \end{aligned} \]

Let’s write this in mass matrix form:

\[ \begin{align} \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} y_1'\\ y_2'\\ y_3' \end{bmatrix} = \begin{bmatrix} -0.04 y_1 + 10^4 y_2 y_3\\ 0.04 y_1 - 10^4 y_2 y_3 - 3*10^7 y_{2}^2\\ y_{1} + y_{2} + y_{3} - 1 \end{bmatrix} \end{align} \]

If you do the matrix-vector multiplication out, you see that the last row of zeros simply says that the last equation is \(0 = y_{1} + y_{2} + y_{3} - 1\). Once you see that trick, it’s immediately clear how mass matrices can express any constraint equation \(g\).

DAE via Mass Matrix Example

function rober(du, u, p, t)
    y₁, y₂, y₃ = u
    k₁, k₂, k₃ = p
    du[1] = -k₁ * y₁ + k₃ * y₂ * y₃
    du[2] = k₁ * y₁ - k₃ * y₂ * y₃ - k₂ * y₂^2
    du[3] = y₁ + y₂ + y₃ - 1
    nothing
end
M = [1.0 0 0
     0 1.0 0
     0 0 0]
mmf = DE.ODEFunction(rober, mass_matrix = M)
prob_mm = DE.ODEProblem(mmf, [1.0, 0.0, 0.0], (0.0, 1e5), (0.04, 3e7, 1e4))
sol = DE.solve(prob_mm, DE.Rodas5(), reltol = 1e-8, abstol = 1e-8)

Plots.plot(sol, xscale = :log10, tspan = (1e-6, 1e5), layout = (3, 1))

DAEs More Information

For a deeper dive into methods for defining DAEs, see the blog post:

https://www.stochasticlifestyle.com/machine-learning-with-hard-constraints-neural-differential-algebraic-equations-daes-as-a-general-formalism/

Writing fast Julia code

Make sure your code is type stable.

  • @code_warntype f(du, u, p, t)
  • Profile to find slow spots

Avoid unnecessary array allocations

  • Use @views when appropriate
  • Preallocate when possible
  • Default to the in-place form of ODEs

@views

v = rand(10000)

@time v[2:end]
@time @views v[2:end]
  0.000044 seconds (5 allocations: 78.242 KiB)
  0.000011 seconds (3 allocations: 96 bytes)

Preallocate when possible

using LinearAlgebra

A, b, v = rand(10000,10000), rand(10000), rand(10000)
@time b*v'
@time mul!(A, b, v')
  0.561439 seconds (4 allocations: 762.941 MiB, 27.63% gc time)
  0.062975 seconds (1 allocation: 16 bytes)

Rober out of place

using OrdinaryDiffEq
function rober_oop(u, p, t)
    y₁, y₂, y₃ = u
    k₁, k₂, k₃ = p
    return [-k₁ * y₁ + k₃ * y₂ * y₃
             k₁ * y₁ - k₂ * y₂^2 - k₃ * y₂ * y₃
             k₂ * y₂^2]
end
function rober!(du, u, p, t)
    y₁, y₂, y₃ = u
    k₁, k₂, k₃ = p
    du[1] = -k₁ * y₁ + k₃ * y₂ * y₃
    du[2] = k₁ * y₁ - k₂ * y₂^2 - k₃ * y₂ * y₃
    du[3] = k₂ * y₂^2
    nothing
end
prob_oop = ODEProblem(rober_oop, [1.0, 0.0, 0.0], (0.0, 1e5), [0.04, 3e7, 1e4])
prob! = ODEProblem(rober!, [1.0, 0.0, 0.0], (0.0, 1e5), [0.04, 3e7, 1e4])
@time solve(prob_oop)
@time solve(prob!)
  0.000698 seconds (12.63 k allocations: 471.234 KiB)
  0.000252 seconds (1.74 k allocations: 124.875 KiB)

Avoid Splatting into arrays

u1, u2 = rand(1000), rand(1000)
@time [u1..., u2...]
@time vcat(u1, u2)
  0.000196 seconds (2.00 k allocations: 62.695 KiB)
  0.000030 seconds (3 allocations: 15.695 KiB)

Cost of Operations

| Operation             | Cost Estimate (ns) |
|-----------------------|--------------------|
| +, -, *               | 0.5                |
| polynomial (degree 6) | 2                  |
| exp, log, trig        | 4                  |
| div, sqrt             | 5 (varies)         |
| ^ (int exponent)      | 5                  |
| ^ (float exponent)    | 15                 |
| Special functions     | 20-100             |
| DRAM access           | 100                |

Use @fastmath where appropriate

  • Allows reassociation of floating-point operations
  • Substitutes some functions with faster, less accurate forms
  • Assumes all numbers are finite (no NaN/Inf handling)
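A minimal sketch of the reassociation point (function names here are illustrative): allowing the compiler to reorder the reduction below enables SIMD vectorization, at the cost of results differing in the last few ulps.

```julia
function sumsq(v)
    s = 0.0
    for x in v
        s += x * x   # strict IEEE evaluation order: no reassociation, no SIMD
    end
    s
end

function sumsq_fast(v)
    s = 0.0
    @fastmath for x in v
        s += x * x   # reassociation allowed: compiler can vectorize the reduction
    end
    s
end

v = rand(10^6)
@time sumsq(v)
@time sumsq_fast(v)   # typically faster; answer agrees to ~machine precision
```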

Interaction time

function rober!(du, u, p, t)
    y₁, y₂, y₃ = u
    k₁, k₂, k₃ = p
    return [-k₁ * y₁ + k₃ * y₂ * y₃
             k₁ * y₁ - k₂ * y₂^2.0 - k₃ * y₂ * y₃
             k₂ * y₂^2.0]
end
rober! (generic function with 1 method)

Faster ODEs with Automatic Differentiation

First a disambiguation…

Two ways that automatic differentiation is used

  1. For improving the solving process (particularly for implicit solvers)
  2. For calculating the derivative of the solution (w.r.t. parameters and initial conditions)

Quick Note: automatic differentiation is not symbolic or numerical differentiation

Automatic differentiation is a mix: it is a compiler-based method which transforms your code into code that computes the solution and its derivative simultaneously.

Demonstration of symbolic forms of code

using Symbolics
@variables x
function f(x)
    out = x
    for i in 1:5
        out *= sin(out)
    end
    out
end
sin(f(x))
sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x))*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x))))

Demonstration of symbolic forms of code

Symbolics.derivative(sin(f(x)),x)
(sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x))*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x))) + x*sin(x*sin(x)*sin(x*sin(x)))*cos(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x))*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x))) + x*(sin(x) + x*cos(x))*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x)))*cos(x*sin(x)) + x*(sin(x)*sin(x*sin(x)) + x*cos(x)*sin(x*sin(x)) + x*(sin(x) + x*cos(x))*sin(x)*cos(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x))*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x)))*cos(x*sin(x)*sin(x*sin(x))) + x*(sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)) + x*sin(x*sin(x)*sin(x*sin(x)))*cos(x)*sin(x*sin(x)) + x*(sin(x) + x*cos(x))*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*cos(x*sin(x)) + x*(sin(x)*sin(x*sin(x)) + x*cos(x)*sin(x*sin(x)) + x*(sin(x) + x*cos(x))*sin(x)*cos(x*sin(x)))*sin(x)*sin(x*sin(x))*cos(x*sin(x)*sin(x*sin(x))))*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x))*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x)))*cos(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x))) + x*(sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x)) + x*sin(x*sin(x)*sin(x*sin(x)))*cos(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x)) + x*(sin(x) + x*cos(x))*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*cos(x*sin(x)) + x*(sin(x)*sin(x*sin(x)) + x*cos(x)*sin(x*sin(x)) + x*(sin(x) + 
x*cos(x))*sin(x)*cos(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x))*cos(x*sin(x)*sin(x*sin(x))) + x*(sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)) + x*sin(x*sin(x)*sin(x*sin(x)))*cos(x)*sin(x*sin(x)) + x*(sin(x) + x*cos(x))*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*cos(x*sin(x)) + x*(sin(x)*sin(x*sin(x)) + x*cos(x)*sin(x*sin(x)) + x*(sin(x) + x*cos(x))*sin(x)*cos(x*sin(x)))*sin(x)*sin(x*sin(x))*cos(x*sin(x)*sin(x*sin(x))))*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x))*cos(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x))))*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x))*cos(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x))))*cos(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x))*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x*sin(x)*sin(x*sin(x)))*sin(x)*sin(x*sin(x)))*sin(x*sin(x))))

Automatic Differentiation done by hand

function f2(x)
    out = x
    for i in 1:5
        # sin(out) => chain rule sin' = cos
        tmp = (sin(out[1]), out[2] * cos(out[1])) 
        # out = out * tmp => product rule
        out = (out[1] * tmp[1], out[1] * tmp[2] + out[2] * tmp[1])
    end
    out
end
function outer(x)
    # sin(x) => chain rule sin' = cos
    out1, out2 = f2(x)
    sin(out1), out2 * cos(out1)
end
dsinfx(x) = outer((x,1))[2]

f2((1,1))
(0.01753717849708632, 0.36676042682811677)

Validation

f2((1,1))
(0.01753717849708632, 0.36676042682811677)
(substitute(sin(f(x)),x=>1), substitute(Symbolics.derivative(sin(f(x)),x),x=>1))
(0.01753627957668249, 0.36670402920671613)

A few things to understand about automatic differentiation

  1. It recompiles your code to do something slightly different, which means it needs to be Julia code in the Julia compiler (calling C or Python code will make this fail)
  2. Automatic differentiation does not necessarily work with the same number types, and so you need to be careful about the buffers that you create as they may need to be resized for the AD context
  3. It tends to be a lot more numerically stable than finite differencing, and thus it is not just a performance improvement but also an important improvement to accuracy (this is especially important to some solvers such as Rodas5P)

These facts will become important in a second…

How is autodiff used in the solving process?

To understand how and where automatic differentiation is used, let’s look at the implicit Euler discretization. We approximate \(u(t_n)\) numerically as \(u_{n}\) using the stepping process:

\[ u_{n+1} = u_n + hf(u_{n+1},p,t_{n+1}) \]

Notice that \(u_{n+1}\) is on both sides of the equation, so we define:

\[ g(u_{n+1}) = u_{n+1} - u_n - hf(u_{n+1},p,t_{n+1}) \]

To find the solution for a step of implicit Euler, we need to find where \(g(x) = 0\).

How is autodiff used in the solving process?

In order to find the \(x\) s.t. \(g(x) = 0\), we use a Newton method:

\[ x_{n+1} = x_n - g'^{-1}(x_n) g(x_n) \]

where \(g'\) is the Jacobian of \(g\), i.e. the matrix of derivatives for every output w.r.t. every input. This is where the derivative is used in the solver!
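As a sketch (a hypothetical helper, not DiffEq internals), here is one implicit Euler step solved by Newton's method, with the Jacobian of \(g\) supplied by ForwardDiff:

```julia
import ForwardDiff
using LinearAlgebra: norm

# One implicit Euler step: solve g(x) = x - un - h*f(x, p, t + h) = 0 for u_{n+1}
function implicit_euler_step(f, un, p, t, h)
    g(x) = x - un - h * f(x, p, t + h)   # residual whose root is the next step
    x = copy(un)                          # initial Newton guess: previous value
    for _ in 1:10
        J = ForwardDiff.jacobian(g, x)    # g' via automatic differentiation
        x -= J \ g(x)                     # Newton update
        norm(g(x)) < 1e-12 && break
    end
    x
end

# Linear test problem u' = -u: the exact implicit Euler step is u / (1 + h)
f(u, p, t) = -u
u1 = implicit_euler_step(f, [1.0], nothing, 0.0, 0.1)  # ≈ [1 / 1.1]
```

Since \(g\) is affine here, Newton converges in a single iteration; nonlinear problems take a few.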

Using Automatic Differentiation

using ForwardDiff

function rober!(du, u, p, t)
    y₁, y₂, y₃ = u
    k₁, k₂, k₃ = p
    du[1] = -k₁ * y₁ + k₃ * y₂ * y₃
    du[2] = k₁ * y₁ - k₂ * y₂^2 - k₃ * y₂ * y₃
    du[3] = k₂ * y₂^2
    nothing
end

u0 = [1.0, 0.0, 0.0]
du = copy(u0)
p = [0.04, 3e7, 1e4]
_t = 0.0
0.0

Using Automatic Differentiation

Uhh?

ForwardDiff.jacobian((x)->(rober!(du,x,p,t); du), u0)
ERROR: UndefVarError: `t` not defined
Stacktrace:
 [1] (::var"#11#12")(x::Vector{ForwardDiff.Dual{ForwardDiff.Tag{var"#11#12", Float64}, Float64, 3}})
   @ Main ~/.julia/external/2025-JuliaCon-DifferentialEquations-Workshop/Autodiff.qmd:1
 [2] vector_mode_dual_eval!
   @ ~/.julia/packages/ForwardDiff/UBbGT/src/apiutils.jl:24 [inlined]
 [3] vector_mode_jacobian(f::var"#11#12", x::Vector{…}, cfg::ForwardDiff.JacobianConfig{…})
   @ ForwardDiff ~/.julia/packages/ForwardDiff/UBbGT/src/jacobian.jl:129
 [4] jacobian
   @ ~/.julia/packages/ForwardDiff/UBbGT/src/jacobian.jl:22 [inlined]
 [5] jacobian(f::var"#11#12", x::Vector{Float64}, cfg::ForwardDiff.JacobianConfig{ForwardDiff.Tag{…}, Float64, 3, Vector{…}})
   @ ForwardDiff ~/.julia/packages/ForwardDiff/UBbGT/src/jacobian.jl:19
 [6] jacobian(f::var"#11#12", x::Vector{Float64})
   @ ForwardDiff ~/.julia/packages/ForwardDiff/UBbGT/src/jacobian.jl:19
 [7] top-level scope
   @ ~/.julia/external/2025-JuliaCon-DifferentialEquations-Workshop/Autodiff.qmd:1
Some type information was truncated. Use `show(err)` to see complete types.

Using Automatic Differentiation

You have to be careful about variables and types! The closure above used t instead of the _t we defined, and even with that fixed, du is only 3 Float64 values, so it cannot hold the wider dual numbers used in AD!

ForwardDiff.jacobian(u0) do x
    dx = copy(x)
    rober!(dx,x,p,_t)
    dx
end
3×3 Matrix{Float64}:
 -0.04  0.0  0.0
  0.04  0.0  0.0
  0.0   0.0  0.0
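The underlying mechanics: ForwardDiff replaces Float64 with its Dual number type, and a plain Float64 buffer cannot store those. A small illustration:

```julia
import ForwardDiff

# A Dual carries the value plus partial-derivative components.
d = ForwardDiff.Dual(1.0, 1.0)
buf = zeros(3)
# buf[1] = d            # MethodError: a Dual cannot convert to Float64
buf2 = similar(buf, typeof(d))  # allocate a buffer with the Dual eltype
buf2[1] = d                     # now it fits
@show eltype(buf2)
```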

Using Automatic Differentiation in DiffEq

Easy.

Using Automatic Differentiation in DiffEq

It’s actually harder to turn it off.

import ADTypes
sol = DE.solve(prob, DE.Rodas5P(autodiff = ADTypes.AutoFiniteDiff()))
retcode: Success
Interpolation: specialized 4th order "free" stiffness-aware interpolation
t: 61-element Vector{Float64}:
      0.0
      0.0005898490999384442
      0.0008686786817598709
      0.0018428788583447504
      0.002597063393923121
      0.004200336689409958
      0.005977753550395024
      0.009651297872028763
      0.018156788612854836
      0.04904370004915961
      ⋮
  37699.80891449364
  44127.0986055743
  51240.440079522916
  59083.32441144187
  67699.46602707569
  77132.7516300622
  87427.12086226104
  98626.44266235306
 100000.0
u: 61-element Vector{Vector{Float64}}:
 [1.0, 0.0, 0.0]
 [0.9999764064263935, 2.1001873344098315e-5, 2.5917002625272623e-6]
 [0.9999652539053747, 2.7187341602840628e-5, 7.558753022701347e-6]
 [0.9999262950722889, 3.5293423788349034e-5, 3.841150392292021e-5]
 [0.9998961447046213, 3.6268699148795714e-5, 6.758659623016065e-5]
 [0.9998320805967901, 3.64864103399262e-5, 0.0001314329928702305]
 [0.9997611066328654, 3.6479995108168136e-5, 0.00020241337202657502]
 [0.9996145800623335, 3.645304903213494e-5, 0.0003489668886346249]
 [0.999276154652567, 3.6390568458218745e-5, 0.0006874547789749958]
 [0.9980568697361171, 3.616619064406446e-5, 0.0019069640732392314]
 ⋮
 [0.040889060897089324, 1.7043829308194423e-7, 0.9591107686646148]
 [0.03598519468203274, 1.4924466251975596e-7, 0.9640146560733026]
 [0.031802836595384756, 1.3133655943518178e-7, 0.9681970320680521]
 [0.028215576689018702, 1.1609770327977974e-7, 0.9717843072132746]
 [0.025123317223218807, 1.030504351603955e-7, 0.9748765797263433]
 [0.02244573403923267, 9.18186258725622e-8, 0.9775541741421392]
 [0.02011768044358689, 8.210224309350059e-8, 0.9798822374541671]
 [0.01808584464339182, 7.36593229155116e-8, 0.9819140816972826]
 [0.017865049092222637, 7.274390007183009e-8, 0.9821348781638749]

For more information on choices and methods, see the ADTypes.jl documentation and DifferentiationInterface.jl. OrdinaryDiffEq.jl uses DifferentiationInterface.jl internally.

If it’s automatic, why care?

import LinearAlgebra as LA

A = rand(3,3) ./ 100
function rober!(du, u, p, t)
    y₁, y₂, y₃ = u
    k₁, k₂, k₃, A, cache = p
    LA.mul!(cache, A, u)
    du[1] = -k₁ * y₁ + k₃ * y₂ * y₃ - cache[1]
    du[2] = k₁ * y₁ - k₂ * y₂^2 - k₃ * y₂ * y₃  - cache[2]
    du[3] = k₂ * y₂^2  - cache[3]
    nothing
end
cache = zeros(3)
prob = DE.ODEProblem(rober!, [1.0, 0.0, 0.0], (0.0, 1e5), (0.04, 3e7, 1e4, A, cache))
ODEProblem with uType Vector{Float64} and tType Float64. In-place: true
Non-trivial mass matrix: false
timespan: (0.0, 100000.0)
u0: 3-element Vector{Float64}:
 1.0
 0.0
 0.0
sol = DE.solve(prob, DE.Rodas5P())
ERROR: First call to automatic differentiation for the Jacobian
failed. This means that the user `f` function is not compatible
with automatic differentiation. Methods to fix this include:

1. Turn off automatic differentiation (e.g. Rosenbrock23() becomes
   Rosenbrock23(autodiff = AutoFiniteDiff())). More details can be found at
   https://docs.sciml.ai/DiffEqDocs/stable/features/performance_overloads/
2. Improving the compatibility of `f` with ForwardDiff.jl automatic
   differentiation (using tools like PreallocationTools.jl). More details
   can be found at https://docs.sciml.ai/DiffEqDocs/stable/basics/faq/#Autodifferentiation-and-Dual-Numbers
3. Defining analytical Jacobians. More details can be
   found at https://docs.sciml.ai/DiffEqDocs/stable/types/ode_types/#SciMLBase.ODEFunction

Note: turning off automatic differentiation tends to have a very minimal
performance impact (for this use case, because it's forward mode for a
square Jacobian. This is different from optimization gradient scenarios).
However, one should be careful as some methods are more sensitive to
accurate gradients than others. Specifically, Rodas methods like `Rodas4`
and `Rodas5P` require accurate Jacobians in order to have good convergence,
while many other methods like BDF (`QNDF`, `FBDF`), SDIRK (`KenCarp4`),
and Rosenbrock-W (`Rosenbrock23`) do not. Thus if using an algorithm which
is sensitive to autodiff and solving at a low tolerance, please change the
algorithm as well.

MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{DiffEqBase.OrdinaryDiffEqTag, Float64}, Float64, 1})

Closest candidates are:
  (::Type{T})(::Real, ::RoundingMode) where T<:AbstractFloat
   @ Base rounding.jl:207
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:792
  Float64(::Int8)
   @ Base float.jl:159
  ...

Stacktrace:
  [1] jacobian!(J::Matrix{…}, f::Function, x::Vector{…}, fx::Vector{…}, integrator::OrdinaryDiffEqCore.ODEIntegrator{…}, jac_config::Tuple{…})
    @ OrdinaryDiffEqDifferentiation ~/.julia/packages/OrdinaryDiffEqDifferentiation/I5Bk2/src/derivative_wrappers.jl:223
  [2] calc_J!
    @ ~/.julia/packages/OrdinaryDiffEqDifferentiation/I5Bk2/src/derivative_utils.jl:222 [inlined]
  [3] calc_W!
    @ ~/.julia/packages/OrdinaryDiffEqDifferentiation/I5Bk2/src/derivative_utils.jl:627 [inlined]
  [4] calc_W!
    @ ~/.julia/packages/OrdinaryDiffEqDifferentiation/I5Bk2/src/derivative_utils.jl:565 [inlined]
  [5] calc_rosenbrock_differentiation!
    @ ~/.julia/packages/OrdinaryDiffEqDifferentiation/I5Bk2/src/derivative_utils.jl:702 [inlined]
  [6] perform_step!(integrator::OrdinaryDiffEqCore.ODEIntegrator{…}, cache::OrdinaryDiffEqRosenbrock.RosenbrockCache{…}, repeat_step::Bool)
    @ OrdinaryDiffEqRosenbrock ~/.julia/packages/OrdinaryDiffEqRosenbrock/1cjFj/src/rosenbrock_perform_ste

Easy answer to autodiff issues: just move to finite diff

Small performance and robustness loss, but if it works it works

fsol = DE.solve(prob, DE.Rodas5P(autodiff=ADTypes.AutoFiniteDiff()))
Warning: At t=267.96164310191983, dt was forced below floating point epsilon 5.684341886080802e-14, and step error estimate = 8.33474942347665. Aborting. There is either an error in your model specification or the true solution is unstable (or the true solution can not be represented in the precision of Float64).
@ SciMLBase ~/.julia/packages/SciMLBase/u2Ue2/src/integrator_interface.jl:623
retcode: Unstable
Interpolation: specialized 4th order "free" stiffness-aware interpolation
t: 107-element Vector{Float64}:
   0.0
   0.0006372277780748766
   0.0009403242252171569
   0.0020037176209662493
   0.0028347801790132906
   0.004559120677393173
   0.006529428504936921
   0.010589376800054505
   0.020293465639620416
   0.0550437011837185
   ⋮
 267.9616431019171
 267.96164310191796
 267.9616431019186
 267.961643101919
 267.96164310191926
 267.9616431019195
 267.96164310191966
 267.9616431019198
 267.96164310191983
u: 107-element Vector{Vector{Float64}}:
 [1.0, 0.0, 0.0]
 [0.9999737384176841, 1.9582232039434257e-5, -5.886593627530886e-7]
 [0.9999612474251077, 2.5355665590851728e-5, 2.6718517752458944e-6]
 [0.9999174290503308, 3.284742580810992e-5, 2.6869533284712616e-5]
 [0.9998831925792162, 3.3723725343143474e-5, 5.075060602402636e-5]
 [0.999812182273397, 3.390997831893462e-5, 0.00010190656915650495]
 [0.9997310851650754, 3.39043573272288e-5, 0.0001605352376848951]
 [0.9995641215509289, 3.38810426104989e-5, 0.00028121144975220967]
 [0.9991658186113452, 3.382527327024741e-5, 0.0005688704412179863]
 [0.9977483499853546, 3.362778095003522e-5, 0.0015900326698304091]
 ⋮
 [-3.84152549059517, -11454.484949209964, 11458.33996931608]
 [-5.395018441941923, -16111.844586404557, 16117.253099467725]
 [-7.580070213469466, -22662.617354935217, 22670.210919761754]
 [-10.653434141063006, -31876.549669990924, 31887.21659873883]
 [-14.9762473079658, -44836.33038648378, 44851.320128413194]
 [-21.056467315130995, -63064.816222397436, 63085.88618438378]
 [-29.60855483181446, -88703.96002672527, 88733.58207618771]
 [-41.637424277764374, -124766.49489315216, 124808.14581213206]
 [-58.556533651005665, -175489.96742994446, 175548.53745824995]

More involved answer: PreallocationTools.jl

The issue is that cache holds only 3 Float64 values, so it needs to change type in the context of autodiff. PreallocationTools.jl is a package that makes this process easier.

import PreallocationTools as PT
cache = PT.DiffCache(zeros(3))

function rober!(du, u, p, t)
    y₁, y₂, y₃ = u
    k₁, k₂, k₃, A, _cache = p
    cache = PT.get_tmp(_cache, du)
    LA.mul!(cache, A, u)
    du[1] = -k₁ * y₁ + k₃ * y₂ * y₃ - cache[1]
    du[2] = k₁ * y₁ - k₂ * y₂^2 - k₃ * y₂ * y₃  - cache[2]
    du[3] = k₂ * y₂^2  - cache[3]
    nothing
end

prob = DE.ODEProblem(rober!, [1.0, 0.0, 0.0], (0.0, 1e5), (0.04, 3e7, 1e4, A, cache))
sol = DE.solve(prob, DE.Rodas5P())
Warning: At t=268.9454579166564, dt was forced below floating point epsilon 5.684341886080802e-14, and step error estimate = 8.334719363745716. Aborting. There is either an error in your model specification or the true solution is unstable (or the true solution can not be represented in the precision of Float64).
@ SciMLBase ~/.julia/packages/SciMLBase/u2Ue2/src/integrator_interface.jl:623
retcode: Unstable
Interpolation: specialized 4th order "free" stiffness-aware interpolation
t: 104-element Vector{Float64}:
   0.0
   0.0006371574181692174
   0.000940222792276116
   0.0020763173145450604
   0.0029172544775780693
   0.004670663063468606
   0.006738659629809091
   0.011030402383094476
   0.022008158286777615
   0.06428754986866658
   ⋮
 268.9454579166542
 268.9454579166549
 268.94545791665536
 268.9454579166557
 268.9454579166559
 268.9454579166561
 268.9454579166562
 268.9454579166563
 268.9454579166564
u: 104-element Vector{Vector{Float64}}:
 [1.0, 0.0, 0.0]
 [0.9999737413173063, 1.9580522867004124e-5, -5.890473060718868e-7]
 [0.999961251605183, 2.535405396430276e-5, 2.6704402496189148e-6]
 [0.9999144379101352, 3.300521806982049e-5, 2.8874812546896773e-5]
 [0.9998797953979428, 3.375650459437418e-5, 5.3174301684686573e-5]
 [0.9998075900201546, 3.391060044675872e-5, 0.00010522591871816639]
 [0.9997224759407343, 3.390318735636613e-5, 0.00016675903736208803]
 [0.9995459960796241, 3.3878501732566826e-5, 0.0002943086872484344]
 [0.9990955522880406, 3.381544789796154e-5, 0.0006195850323504527]
 [0.9973735937343967, 3.357581920329294e-5, 0.001859344398662139]
 ⋮
 [-4.884616129049183, -14577.063258874603, 14581.960939896348]
 [-6.861559992261805, -20503.928210994196, 20510.80283587916]
 [-9.64221249891735, -28840.311407037836, 28849.966684429495]
 [-13.553313833773103, -40565.77981182579, 40579.34619055238]
 [-19.054440852810583, -57058.144686220694, 57077.212191966355]
 [-26.79200808659575, -80255.35657719205, 80282.1616501716]
 [-37.675223555418484, -112883.22082626369, 112920.90911471209]
 [-52.98292256254828, -158775.68524605222, 158828.68123350802]
 [-74.51384116236463, -223325.3599275932, 223399.88683364954]
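What DiffCache is doing under the hood: get_tmp returns a plain Float64 buffer for Float64 input, and a Dual-compatible buffer for Dual input. A small illustration, assuming the packages above:

```julia
import PreallocationTools as PT
import ForwardDiff

# One DiffCache serves both the regular and the autodiff code paths:
c = PT.DiffCache(zeros(3))
a = PT.get_tmp(c, zeros(3))                          # Float64 buffer
b = PT.get_tmp(c, ForwardDiff.Dual.(zeros(3), 1.0))  # Dual-eltype buffer
@show eltype(a), eltype(b)
```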

Automatic Differentiation use case 2: Differentiating Solvers

That covers how AD is used inside the solvers. The other use case is differentiating the solver itself, i.e. when you need the derivative:

\[ \frac{\partial u(t)}{\partial p} \]

i.e. you want to know how the solution changes if you change parameters. This is used in applications like parameter estimation, optimal control, and beyond.

Easy Answer: DifferentialEquations.jl is compatible with AD

AD differentiates Julia code, DiffEq.jl is Julia code.

function rober!(du, u, p, t)
    y₁, y₂, y₃ = u
    k₁, k₂, k₃ = p
    du[1] = -k₁ * y₁ + k₃ * y₂ * y₃
    du[2] = k₁ * y₁ - k₂ * y₂^2 - k₃ * y₂ * y₃
    du[3] = k₂ * y₂^2
    nothing
end

function solve_with_p(p)
    prob = DE.ODEProblem(rober!, [1.0, 0.0, 0.0], (0.0, 1e5), p)
    sol = DE.solve(prob, DE.Rodas5P(), saveat=50.0)
    Array(sol)
end

solve_with_p([0.04, 3e7, 1e4])
3×2001 Matrix{Float64}:
 1.0  0.692876    0.617232    …  0.0178817   0.0178738  0.0178658
 0.0  8.34418e-6  6.15356e-6     7.28129e-8  7.278e-8   7.27471e-8
 0.0  0.307116    0.382762       0.982118    0.982126   0.982134

Easy Answer: DifferentialEquations.jl is compatible with AD

QED.

ForwardDiff.jacobian(solve_with_p, [0.04, 3e7, 1e4])
6003×3 Matrix{Float64}:
  0.0          0.0           0.0
  0.0          0.0           0.0
  0.0          0.0           0.0
 -4.47088     -2.47976e-9    1.48791e-5
  3.94713e-5  -1.052e-13    -2.03235e-10
  4.47084      2.47986e-9   -1.48789e-5
 -5.11648     -3.03525e-9    1.82119e-5
  2.32869e-5  -8.24575e-14  -1.20614e-10
  5.11646      3.03533e-9   -1.82118e-5
 -5.44175     -3.3173e-9     1.99041e-5
  ⋮                         
 -0.793681    -5.28602e-10   3.17161e-6
 -1.47e-6     -2.19166e-15   5.86868e-12
  0.793682     5.28604e-10  -3.17161e-6
 -0.79336     -5.28389e-10   3.17032e-6
 -1.46944e-6  -2.19074e-15   5.86645e-12
  0.793361     5.28391e-10  -3.17033e-6
 -0.793039    -5.28175e-10   3.16904e-6
 -1.46888e-6  -2.18982e-15   5.86423e-12
  0.793041     5.28177e-10  -3.16905e-6

Complications

ForwardDiff is forward-mode automatic differentiation. This is only efficient when the number of inputs is comparable to, or smaller than, the number of outputs, or if the system is “sufficiently small”. A good rule of thumb is:

number of parameters + number of equations < 100 => forward-mode
otherwise => reverse-mode

To use reverse-mode, we need to switch what we’re doing.

Reverse-Mode AD of Solver

Note you need to import SciMLSensitivity for the adjoint system, even if its functions are not directly used. If it’s not imported then you will get an error instructing you to import it!

import Zygote, SciMLSensitivity
Zygote.jacobian(solve_with_p, [0.04, 3e7, 1e4])

SciMLSensitivity.jl has tons of options; we’d recommend that even intermediate users just use the default method like this.

Partial Differential Equations

Overview of (some) PDE tools

Check out github.com/JuliaPDE/SurveyofPDEPackages

Finite Differences

  • DiffEqOperators.jl
  • MethodOfLines.jl
  • ParallelStencil.jl
  • ImplicitGlobalGrid.jl

Finite Elements

  • Ferrite.jl
  • Gridap.jl
  • FEniCS.jl
  • Trixi.jl

Finite Volumes

Brusselator

\[ \begin{align} \frac{\partial U}{\partial t} &= 1 + U^2V - 4.4U + \alpha\nabla^2U + f(x,y,t)\\ \frac{\partial V}{\partial t} &= 3.4U -U^2V + \alpha\nabla^2V \\ U(x, y, 0) &= 22\cdot (y(1-y))^{3/2} \\ V(x, y, 0) &= 27\cdot (x(1-x))^{3/2} \\ U(x+1,y,t) &= U(x,y,t) \\ V(x,y+1,t) &= V(x,y,t) \end{align} \]

function f(x,y,t)
    if t < 1.1 || (x-0.3)^2+(y-0.6)^2 > 0.1^2
        return 0.0
    else
        return 5.0
    end
end

Brusselator in Julia

using OrdinaryDiffEq, LinearAlgebra, SparseArrays
const N = 32
const xyd_brusselator = range(0, stop = 1, length = N)
brusselator_f(x, y, t) = (((x - 0.3)^2 + (y - 0.6)^2) <= 0.1^2) * (t >= 1.1) * 5.0
function brusselator_loop(du, u, p, t)
    A, B, alpha, dx = p
    alpha /= dx^2
    @inbounds for i in 1:N, j in 1:N
        x, y = xyd_brusselator[i], xyd_brusselator[j]
        ip1, im1, jp1, jm1 = clamp.((i + 1, i - 1, j + 1, j - 1), 1, N)
        du[i, j, 1] = alpha * (u[im1, j,   1] + u[ip1, j,   1]
                             + u[i,   jp1, 1] + u[i,   jm1, 1]
                             -4u[i,   j,   1]) +
                      B + u[i, j, 1]^2 * u[i, j, 2] - (A + 1) * u[i, j, 1] +
                      brusselator_f(x, y, t)
        du[i, j, 2] = alpha * (u[im1, j,   2] + u[ip1, j,   2]
                             + u[i,   jp1, 2] + u[i,   jm1, 2]
                             -4u[i,   j,   2]) +
                      A * u[i, j, 1] - u[i, j, 1]^2 * u[i, j, 2]
    end
end

Brusselator in Julia (cont)

p = (3.4, 1.0, 10.0, step(xyd_brusselator))
function init_brusselator(xyd)
    u = zeros(N, N, 2)
    for i in 1:N, j in 1:N
        x, y = xyd[i], xyd[j]
        u[i, j, 1] = 22 * (y * (1 - y))^(3 / 2)
        u[i, j, 2] = 27 * (x * (1 - x))^(3 / 2)
    end
    u
end
u0 = init_brusselator(xyd_brusselator)
brusselator = ODEProblem(brusselator_loop, u0, (0.0, 11.5), p)

Out of the box performance

@time solve(brusselator);
@time solve(brusselator, DefaultODEAlgorithm());
 27.681594 seconds (1.76 M allocations: 1.111 GiB, 4.16% gc time)
  5.158722 seconds (332.72 k allocations: 321.939 MiB, 4.56% gc time)

Sparse Jacobian

using SparseConnectivityTracer, ADTypes
detector = TracerSparsityDetector()
du0 = copy(u0)
jac_sparsity = ADTypes.jacobian_sparsity(
    (du, u) -> brusselator_loop(du, u, p, 0.0), 
    du0, 
    u0, 
    detector)
brusselator_f_sparse = ODEFunction(brusselator_loop;
    jac_prototype = float.(jac_sparsity))
brusselator_sparse = ODEProblem(brusselator_f_sparse, u0, (0.0, 11.5), p)
@time solve(brusselator_sparse, DefaultODEAlgorithm())
  5.178522 seconds (332.72 k allocations: 322.510 MiB, 4.45% gc time)

Specialized Linear Solvers

using LinearSolve
@time solve(brusselator_sparse, DefaultODEAlgorithm(linsolve=KLUFactorization()))
@time solve(brusselator_sparse, DefaultODEAlgorithm(linsolve=KrylovJL()))
  0.743476 seconds (17.28 k allocations: 53.201 MiB, 11.85% gc time)
  5.163615 seconds (331.50 k allocations: 322.543 MiB, 5.01% gc time)

Preconditioned Krylov

using IncompleteLU
function incompletelu(W, du, u, p, t, newW, Plprev, Prprev, solverdata)
    if newW === nothing || newW
        Pl = ilu(convert(AbstractMatrix, W), τ = 50.0)
    else
        Pl = Plprev
    end
    Pl, nothing
end
@time solve(brusselator_sparse,
    KenCarp47(linsolve = KrylovJL(), precs = incompletelu,
        concrete_jac = true));
  0.267761 seconds (18.22 k allocations: 42.294 MiB)

Faster Solves with GPUs

Two ways to use GPUs

  • GPU within problem (PDEs)
  • GPU ensembles

Within-GPU parallelization

  • Many states (>1000)
  • Expensive f
  • Fairly simple f (array-based parallelism)

Within GPU example

using DiffEqGPU, OrdinaryDiffEq, CUDA
function linear_ode(du, u, p, t)
    mul!(du, p, u)
end

A = CUDA.randn(Float32, 1000, 1000) / 100
u0 = CUDA.rand(Float32, 1000)
prob = ODEProblem(linear_ode, u0, (0f0, 10f0), A)
solve(prob, Tsit5())
retcode: Success
Interpolation: specialized 4th order "free" interpolation
t: 13-element Vector{Float32}:
  0.0
  0.069257945
  0.21196604
  0.4806702
  0.95397186
  1.6615195
  2.5212424
  3.5669923
  4.78281
  6.116977
  7.608575
  9.243874
 10.0
u: 13-element Vector{CuArray{Float32, 1, CUDA.DeviceMemory}}:
 Float32[0.70759046, 0.6569301, 0.53007174, 0.2943177, 0.19073877, 0.99055743, 0.6346979, 0.9606276, 0.2170439, 0.24056292  …  0.19096243, 0.40667546, 0.49368784, 0.96600854, 0.57604057, 0.81384856, 0.9883602, 0.55040383, 0.75501007, 0.19332583]
 Float32[0.71609575, 0.6560301, 0.53797233, 0.29051888, 0.20398696, 1.0086732, 0.6329539, 0.971091, 0.21881239, 0.24230978  …  0.18737036, 0.41595456, 0.48788795, 0.98339266, 0.56776, 0.80013794, 0.9898723, 0.5603459, 0.7437507, 0.19560462]
 Float32[0.7345232, 0.65330184, 0.55475944, 0.2818495, 0.2332636, 1.044774, 0.62984425, 0.9938152, 0.22255318, 0.24710067  …  0.1800508, 0.43571007, 0.47614735, 1.017215, 0.5503546, 0.77064526, 0.99289507, 0.5805916, 0.7211425, 0.19973527]
 Float32[0.7724313, 0.644886, 0.58825314, 0.26252377, 0.29573837, 1.1082435, 0.62559414, 1.040882, 0.22999547, 0.2606462  …  0.1662333, 0.4752473, 0.4549152, 1.0735497, 0.5164017, 0.7105209, 0.9982251, 0.61769587, 0.6808275, 0.20533095]
 Float32[0.848914, 0.6189902, 0.65351427, 0.21949457, 0.42974415, 1.2056544, 0.62238276, 1.1374166, 0.24471496, 0.3000287  …  0.13970427, 0.5523631, 0.42090032, 1.1491443, 0.45317397, 0.58971375, 1.0064608, 0.6790214, 0.6175167, 0.20764129]
 Float32[0.9845507, 0.55042607, 0.76770514, 0.13670094, 0.6888046, 1.3169422, 0.6254234, 1.313507, 0.27237388, 0.40128705  …  0.085966334, 0.6851677, 0.3804502, 1.2046099, 0.3511185, 0.37251627, 1.0168744, 0.7580676, 0.5433924, 0.19130231]
 Float32[1.1803, 0.4082817, 0.93941355, 0.015639352, 1.099764, 1.3967352, 0.63946944, 1.5744629, 0.32069528, 0.6082204  …  -0.024703208, 0.87265056, 0.3536432, 1.1750363, 0.2141265, 0.04720376, 1.0313551, 0.8273268, 0.4917156, 0.1376316]
 Float32[1.4606214, 0.12282035, 1.2144653, -0.13688916, 1.737457, 1.4107859, 0.6754389, 1.9456614, 0.4149509, 1.0211052  …  -0.27584475, 1.1308275, 0.36626697, 0.98282593, 0.021204611, -0.4433165, 1.0687805, 0.86193264, 0.4965899, 0.026111988]
 Float32[1.8486929, -0.4128435, 1.6669319, -0.26405394, 2.644757, 1.3057048, 0.7664433, 2.4089212, 0.60083854, 1.7989454  …  -0.8236824, 1.4487648, 0.4696058, 0.50726557, -0.26145858, -1.1502013, 1.184079, 0.8245555, 0.6167969, -0.14172813]
 Float32[2.382389, -1.3230718, 2.4091356, -0.2299938, 3.7800963, 1.0170074, 0.9907916, 2.8685818, 0.9425257, 3.147626  …  -1.900385, 1.7699511, 0.74643695, -0.4152998, -0.6893009, -2.1010816, 1.4879358, 0.69471943, 0.9302862, -0.29584488]
 Float32[3.2170486, -2.824339, 3.6970053, 0.23175998, 5.082649, 0.4233998, 1.5357498, 3.1622386, 1.5560452, 5.463467  …  -3.9547343, 2.0076652, 1.3723382, -2.1495428, -1.3883233, -3.3812416, 2.2013066, 0.5004313, 1.593911, -0.24391681]
 Float32[4.69562, -5.131699, 5.9411216, 1.6258798, 6.308878, -0.6534633, 2.7510364, 2.9684286, 2.5839798, 9.240084  …  -7.6574683, 2.0093393, 2.6814373, -5.3559394, -2.5117002, -5.011718, 3.6586483, 0.42601842, 2.919288, 0.4706146]
 Float32[5.7148967, -6.444619, 7.3825154, 2.7321675, 6.7252145, -1.3288784, 3.6278863, 2.6191099, 3.2049384, 11.490733  …  -10.034882, 1.9019846, 3.6148245, -7.508121, -3.17078, -5.812187, 4.6313725, 0.5482823, 3.8603675, 1.1908096]

DiffEqGPU.jl

  • EnsembleGPUArray
  • EnsembleGPUKernel

EnsembleGPUArray

  • Easy
  • Higher overhead
using DiffEqGPU, OrdinaryDiffEq, CUDA
function lorenz(du, u, p, t)
    du[1] = p[1] * (u[2] - u[1])
    du[2] = u[1] * (p[2] - u[3]) - u[2]
    du[3] = u[1] * u[2] - p[3] * u[3]
end

u0 = Float32[1.0; 0.0; 0.0]
tspan = (0.0f0, 100.0f0)
p = [10.0f0, 28.0f0, 8 / 3.0f0]
prob = ODEProblem(lorenz, u0, tspan, p)
prob_func = (prob, i, repeat) -> remake(prob, p = rand(Float32, 3) .* p)
monteprob = EnsembleProblem(prob, prob_func = prob_func, safetycopy = false)
@time solve(monteprob, Tsit5(), EnsembleGPUArray(CUDA.CUDABackend()),
            trajectories = 10_000, saveat = 1.0f0);
  0.769707 seconds (2.92 M allocations: 871.065 MiB, 43.39% gc time)

EnsembleGPUKernel

  • Runs entire solver on the GPU
  • Faster
  • Slightly trickier to set up
  • Fewer supported solvers

EnsembleGPUKernel Continued

using StaticArrays

function lorenz2(u, p, t)
    du1 = p[1] * (u[2] - u[1])
    du2 = u[1] * (p[2] - u[3]) - u[2]
    du3 = u[1] * u[2] - p[3] * u[3]
    return SVector{3}(du1, du2, du3)
end

u0 = @SVector [1.0f0; 0.0f0; 0.0f0]
tspan = (0.0f0, 10.0f0)
p = @SVector [10.0f0, 28.0f0, 8 / 3.0f0]
prob = ODEProblem{false}(lorenz2, u0, tspan, p)
prob_func = (prob, i, repeat) -> remake(prob, p = (@SVector rand(Float32, 3)) .* p)
monteprob = EnsembleProblem(prob, prob_func = prob_func, safetycopy = false)
@time solve(monteprob, GPUTsit5(), EnsembleGPUKernel(CUDA.CUDABackend()),
    trajectories = 10_000, saveat = 1.0f0)
  0.015458 seconds (238.29 k allocations: 14.627 MiB)

GPU Performance (Lorenz)

GPU Performance

Downsides of GPUs

  • Function calls very slow
  • Branches can reduce throughput
  • Less solid (especially on non-Nvidia)

GPU Performance profile

  • Lots of parallelism necessary
  • Float64 is the enemy. Use Float32 wherever possible
  • Minimize data transfer
  • Be careful with scalar indexing
  • Potentially much faster than CPUs
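As an illustration of the scalar-indexing point, a minimal sketch (requires a CUDA-capable GPU; CUDA.jl assumed from the earlier slides):

```julia
using CUDA

# Scalar indexing moves one element at a time between host and device
# (and errors under CUDA.allowscalar(false)); broadcasting instead
# compiles to a single fused GPU kernel over the whole array.
x = CUDA.rand(Float32, 1000)
# x[1] += 1f0   # slow scalar indexing — avoid
x .= x .+ 1f0   # one fused kernel
```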

Handling Discontinuous Behavior with Callbacks

Discontinuities the right way.

First of all, what is a callback? By demonstration

Let’s create an exponential decay problem. This is a model of a drug concentration in your body after you took a pill.

import DifferentialEquations as DE
function f(du, u, p, t)
    du[1] = -p*u[1]
end
u0 = [10.0]
p = 1.0
prob = DE.ODEProblem(f, u0, (0.0, 10.0), p)
sol = DE.solve(prob, DE.Tsit5())
import Plots;
Plots.plot(sol)
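Since \(u' = -pu\) has the closed form \(u(t) = u_0 e^{-pt}\), we can sanity-check the numerical solution (a quick check, not part of the original workshop code):

```julia
import DifferentialEquations as DE

# u' = -p*u with u(0) = 10 has the analytic solution u(t) = 10*exp(-p*t)
f!(du, u, p, t) = (du[1] = -p * u[1])
prob = DE.ODEProblem(f!, [10.0], (0.0, 10.0), 1.0)
sol = DE.solve(prob, DE.Tsit5())
err = abs(sol(1.0)[1] - 10 * exp(-1.0))  # within default tolerances
@show err
```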

Setup an intervention

Now let’s add to the simulation that you get an injection of this drug at time \(t=4\).

condition(u, t, integrator) = t == 4
affect!(integrator) = integrator.u[1] += 10
cb = DE.DiscreteCallback(condition, affect!)
DiscreteCallback{typeof(condition), typeof(affect!), typeof(SciMLBase.INITIALIZE_DEFAULT), typeof(SciMLBase.FINALIZE_DEFAULT), Nothing}(condition, affect!, SciMLBase.INITIALIZE_DEFAULT, SciMLBase.FINALIZE_DEFAULT, Bool[1, 1], nothing)

Nothing happened?

sol = DE.solve(prob, DE.Tsit5(), callback = cb)
Plots.plot(sol)

Setup an intervention: needs tstops

sol = DE.solve(prob, DE.Tsit5(), callback = cb, tstops = [4.0])
Plots.plot(sol)

Multiple doses

dosetimes = [4.0, 8.0]
condition(u, t, integrator) = t ∈ dosetimes
affect!(integrator) = integrator.u[1] += 10
cb = DE.DiscreteCallback(condition, affect!)
sol = DE.solve(prob, DE.Tsit5(), callback = cb, tstops = dosetimes)
Plots.plot(sol)

PresetTimeCallback

PresetTimeCallback automates the setting of tstops:

dosetimes = [4.0, 8.0]
affect!(integrator) = integrator.u[1] += 10
cb = DE.PresetTimeCallback(dosetimes, affect!)
sol = DE.solve(prob, DE.Tsit5(), callback = cb)
Plots.plot(sol)

Implementing PresetTimeCallback: Initialization Phase

Understanding how PresetTimeCallback is implemented elucidates some information about callbacks:

function cb_init(c, u, t, integrator)
    for tstop in dosetimes
        DE.add_tstop!(integrator, tstop)
    end
end
condition(u, t, integrator) = t ∈ dosetimes
cb = DE.DiscreteCallback(condition, affect!, initialize = cb_init)
sol = DE.solve(prob, DE.Tsit5(), callback = cb)
Plots.plot(sol)

Parameter Interventions

We can also change other aspects of the system. Let’s flip the decay parameter to a negative value during dosing, so the concentration grows while the dose is being absorbed, modeling that the dose does not come all at once:

dosetimes = [4.0, 5.0, 8.0, 9.0]
function cb_init(c, u, t, integrator)
    for tstop in dosetimes
        DE.add_tstop!(integrator, tstop)
    end
end
condition(u, t, integrator) = t ∈ dosetimes
function affect!(integrator)
    if integrator.t in [4.0,8.0]
        integrator.p = -2.0
    else
        integrator.p = 1.0
    end
end
cb = DE.DiscreteCallback(condition, affect!, initialize = cb_init)
sol = DE.solve(prob, DE.Tsit5(), callback = cb)
Plots.plot(sol)

What is the integrator all about?

See the documentation of the integrator interface to find all of the functions you can do on the integrator.

Using DiscreteCallbacks for Logging

condition(u, t, integrator) = true
affect!(integrator) = @show integrator.t
cb = DE.DiscreteCallback(condition, affect!)
sol = DE.solve(prob, DE.Tsit5(), callback = cb)
Plots.plot(sol)
integrator.t = 0.10000199992000479
integrator.t = 0.3420229374970224
integrator.t = 0.6552769636831827
integrator.t = 1.0310596738110869
integrator.t = 1.4706108682063965
integrator.t = 1.9654373039348392
integrator.t = 2.510852308683936
integrator.t = 3.0991541528874187
integrator.t = 3.7244761996967304
integrator.t = 4.380598000286985
integrator.t = 5.062515128831891
integrator.t = 5.765765004620499
integrator.t = 6.486892978408875
integrator.t = 7.223465788085796
integrator.t = 7.974457017037918
integrator.t = 8.741032724791962
integrator.t = 9.527875326067512
integrator.t = 10.0

Implicitly Defined Event Times: ContinuousCallback

DiscreteCallback works if we know the time points where we want to intervene, but what if those time points are defined implicitly? Let’s look at the bouncing ball. The model is simple: \(x'' = -g\), i.e. the ball is accelerating downwards. We turn the second order ODE into two first order ODEs: \(x' = v\), \(v' = -g\).

function f(du, u, p, t)
    du[1] = u[2]
    du[2] = -p
end
u0 = [50.0, 0.0]
tspan = (0.0, 4.0)
p = 9.8
prob = DE.ODEProblem(f, u0, tspan, p)
sol = DE.solve(prob, DE.Tsit5())
import Plots;
Plots.plot(sol)

Implicitly Defined Event Times: ContinuousCallback

To stop the ball from going into the floor, we will tell it that when u[1] = 0, we should flip the velocity. ContinuousCallbacks work by having the condition be a rootfinding function, i.e. an event fires when condition(u(t), t) == 0.

function condition(u, t, integrator) # Event when condition(u,t,integrator) == 0
    u[1]
end
function affect!(integrator)
    integrator.u[2] = -integrator.u[2]
end
cb = DE.ContinuousCallback(condition, affect!)
sol = DE.solve(prob, DE.Tsit5(), callback = cb)
Plots.plot(sol)

Implicitly Defined Event Times: ContinuousCallback

tspan = (0.0, 15.0)
prob = DE.ODEProblem(f, u0, tspan, p)
sol = DE.solve(prob, DE.Tsit5(), callback = cb)
Plots.plot(sol)

Add Some Friction

Instead of just flipping the velocity, let’s add some friction:

function condition(u, t, integrator) # Event when condition(u,t,integrator) == 0
    u[1]
end
function affect!(integrator)
    integrator.u[2] = -integrator.u[2]/2
end
cb = DE.ContinuousCallback(condition, affect!)
sol = DE.solve(prob, DE.Tsit5(), callback = cb)
Plots.plot(sol)

Handling the Accumulation Point

The issue is that the rootfinding cannot be 100% accurate, so we need to help it. As the bounces accumulate, the time steps shrink; once dt gets small enough, we’ll simply make the ball stick to the floor. To do this, we’ll have a parameter p[1], initially 1.0, that multiplies the gravitational acceleration; once the ball sticks, we set it — along with the position and velocity — to exactly 0.0. This looks like:

function dynamics!(du, u, p, t)
    du[1] = u[2]
    du[2] = p[1] * -9.8   # p[1] is 1.0 normally, 0.0 once stuck
end
function floor_aff!(integ)
    integ.u[2] *= -0.5                  # bounce, losing half the speed
    if integ.dt > 1e-12
        DE.set_proposed_dt!(integ, (integ.t - integ.tprev) / 100)
    else                                # dt has collapsed: stick to the floor
        integ.u[1] = 0
        integ.u[2] = 0
        integ.p[1] = 0
    end
    integ.p[2] += 1                     # count the bounces
    integ.p[3] = integ.t                # record the last bounce time
end
floor_event = DE.ContinuousCallback(condition, floor_aff!)
u0 = [1.0, 0.0]
p = [1.0, 0.0, 0.0]
prob = DE.ODEProblem{true}(dynamics!, u0, (0.0, 2.0), p)
sol = DE.solve(prob, DE.Tsit5(), callback = floor_event)
Plots.plot(sol, idxs=1)

Terminating Integrations

We can also dynamically choose where to terminate integrations using DE.terminate!. Let’s now terminate the integration when it’s stuck to the floor:

function floor_aff!(integ)
    integ.u[2] *= -0.5
    if integ.dt > 1e-12
        DE.set_proposed_dt!(integ, (integ.t - integ.tprev) / 100)
    else
        integ.u[1] = 0
        integ.u[2] = 0
        integ.p[1] = 0
        DE.terminate!(integ)
    end
    integ.p[2] += 1
    integ.p[3] = integ.t
end
floor_event = DE.ContinuousCallback(condition, floor_aff!)
u0 = [1.0, 0.0]
p = [1.0, 0.0, 0.0]
prob = DE.ODEProblem{true}(dynamics!, u0, (0.0, 2.0), p)
sol = DE.solve(prob, DE.Tsit5(), callback = floor_event)
Plots.plot(sol, idxs=1)
stuck_time = sol.t[end]
1.3552618543570611

Multiple Walls: VectorContinuousCallback

Sometimes you may want to track multiple simultaneous continuous conditions. For example, let’s model the ball in a room, where there are two vertical walls along with the floor. We will want to bounce the ball if it hits any of the walls or the floor. We do this via VectorContinuousCallback. It’s almost the same as ContinuousCallback except that it allows a vector of conditions, and an event triggers when any one of the conditions hits zero.

Let’s start by setting up the model. Now we have \(x'' = -g\) and \(y'' = 0\) leading to the system of equations:

function f(du, u, p, t)
    du[1] = u[2]
    du[2] = -p
    du[3] = u[4]
    du[4] = 0.0
end
f (generic function with 3 methods)

where u[1] denotes the y-coordinate, u[2] the velocity in the y-direction, u[3] the x-coordinate, and u[4] the velocity in the x-direction. We will make a VectorContinuousCallback of length 2: one condition for the floor collision, one for the two vertical walls.

Multiple Walls: VectorContinuousCallback

Now let’s use the function (u[3] - 10.0)u[3] to denote hitting vertical walls, since it’s zero if either u[3] == 0.0 or u[3] == 10.0.

function condition(out, u, t, integrator)
    out[1] = u[1]
    out[2] = (u[3] - 10.0)u[3]
end

function affect!(integrator, idx)
    if idx == 1
        integrator.u[2] = -0.9integrator.u[2]
    elseif idx == 2
        integrator.u[4] = -0.9integrator.u[4]
    end
end
import DifferentialEquations as DE
cb = DE.VectorContinuousCallback(condition, affect!, 2)
VectorContinuousCallback{typeof(condition), typeof(affect!), typeof(affect!), typeof(SciMLBase.INITIALIZE_DEFAULT), typeof(SciMLBase.FINALIZE_DEFAULT), Float64, Int64, Rational{Int64}, Nothing, Nothing, Int64}(condition, affect!, affect!, 2, SciMLBase.INITIALIZE_DEFAULT, SciMLBase.FINALIZE_DEFAULT, nothing, SciMLBase.LeftRootFind, 10, Bool[1, 1], 1, 2.220446049250313e-15, 0, 1//100, nothing)

Multiple Walls: VectorContinuousCallback

u0 = [50.0, 0.0, 0.0, 2.0]
tspan = (0.0, 15.0)
p = 9.8
prob = DE.ODEProblem(f, u0, tspan, p)
sol = DE.solve(prob, DE.Tsit5(), callback = cb, dt = 1e-3, adaptive = false)
Plots.plot(sol, idxs = (3, 1))