### Collocation for finding periodic orbits of ODEs

Every now and again I’m asked how to compute the periodic orbits of ODEs using a boundary value solver. Each time, I go looking for old code that does this and, each time, I can’t find it and end up rewriting the collocation code from scratch.

This time I thought I’d put my code here so that I have a better chance of finding it again in the future!

The basic idea is to use a Fourier differentiation matrix to approximate the derivatives along the orbit and use a nonlinear solver to ensure that those derivatives match the vector field. If you want to know more about these types of spectral methods, take a look at the excellent (and short!) introduction by Trefethen in “Spectral Methods in MATLAB”, SIAM 2000. If you want more detail then the magnum opus by Boyd “Chebyshev and Fourier Spectral Methods”, Dover 2001 (freely available on his personal website) is also very good.
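As a quick sanity check of that idea (sketched here in Python/NumPy purely for illustration, and independent of the Julia code below), the standard first-order Fourier differentiation matrix for an even number of points differentiates a smooth periodic function to machine precision:

```python
import numpy as np

def fourier_diff(N):
    """First-order Fourier differentiation matrix on x_j = 2*pi*j/N (even N)."""
    h = 2 * np.pi / N
    D = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:
                # Textbook entry: (1/2) * (-1)^(i-j) * cot((i-j)h/2)
                D[i, j] = 0.5 * (-1) ** (i - j) / np.tan((i - j) * h / 2)
    return D

N = 16
x = 2 * np.pi * np.arange(N) / N
D = fourier_diff(N)
# D @ sin(x) should reproduce cos(x) up to rounding (spectral accuracy)
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
```

The Julia code below builds the same matrix column-by-column, exploiting its circulant structure rather than filling every entry directly.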

Nowadays, my preference is for coding in Julia – it’s very clean and flexible. Here is the code (which could be better!).

# Released under the MIT expat license by David A.W. Barton (david.barton@bristol.ac.uk) 2020
using StaticArrays
using NLsolve
using OrdinaryDiffEq

"""
duffing(u, p, t)

The vector field of the forced Duffing equation.
"""
duffing(u, p, t) = SVector(u[2], p.Γ*sin(p.ω*t) - 2p.ξ*u[2] - p.ωₙ^2*u[1] - p.β*u[1]^3)

"""
fourier_diff([T=Float64,] N; order=1)

Create a Fourier differentiation matrix of the specified order with numerical type T on the
domain x = LinRange{T}(0, 2π, N+1)[1:end-1].
"""
function fourier_diff(T::Type{<:Number}, N::Integer; order=1)
D = zeros(T, N, N)
n1 = (N - 1) ÷ 2
n2 = N ÷ 2
x = LinRange{T}(0, π, N+1)
if order == 1
for i in 2:N
sgn = (one(T)/2 - iseven(i))
D[i, 1] = iseven(N) ? sgn*cot(x[i]) : sgn*csc(x[i])
end
elseif order == 2
D[1, 1] = iseven(N) ? -N^2*one(T)/12 - one(T)/6 : -N^2*one(T)/12 + one(T)/12
for i in 2:N
sgn = -(one(T)/2 - iseven(i))
D[i, 1] = iseven(N) ? sgn*csc(x[i]).^2 : sgn*cot(x[i])*csc(x[i])
end
else
error("Not implemented")
end
for j in 2:N
D[1, j] = D[N, j-1]
D[2:N, j] .= D[1:N-1, j-1]
end
return D
end
fourier_diff(N::Integer; kwargs...) = fourier_diff(Float64, N; kwargs...)

"""
collocation_setup(u)

Return a data structure used internally by the collocation! function. u should be a
matrix with states down the columns and time across the rows (used for size/type information
only).
"""
function collocation_setup(u::AbstractMatrix)
return (ndim=size(u, 1), nmesh=size(u, 2), Dt=-fourier_diff(eltype(u), size(u, 2))*2π)
end

"""
collocation!(res, f, u, p, T, coll)

Calculate the residual of the collocation equations using a Fourier discretisation. Assumes
that a phase condition is not required (i.e., the equations are non-autonomous or the period
is known).

# Arguments
- res: residual (mutated)
- f: vector field function (expected to take the arguments (u, p, t))
- u: state variables along the orbit (vector)
- p: parameter vector passed to the vector field function
- T: period of oscillation
- coll: the output of collocation_setup

# Returns
- res: residual
"""
function collocation!(res, f, u, p, T, coll)
# Matrix of derivatives along the orbit
D = reshape(u, (coll.ndim, coll.nmesh))*coll.Dt
ii = 1:coll.ndim
for i in 1:coll.nmesh
# Subtract the desired derivative from the actual derivative
res[ii] .= D[ii] .- T.*f(u[ii], p, T*(i-1)/coll.nmesh)
ii = ii .+ coll.ndim
end
return res
end

function example(; nmesh=20)
    p = (Γ=0.1, ω=1.0, ξ=0.05, ωₙ=1.0, β=0.1)
    # Do initial value simulation to get a reasonable starting point
    prob = ODEProblem(duffing, SVector(0.0, 0.0), (0.0, 100*2π/p.ω), p)
    odesol = solve(prob, Tsit5())
    # Refine using collocation
    t = range(0, 2π/p.ω, length=nmesh+1)[1:end-1]
    uvec = reinterpret(Float64, odesol(99*2π/p.ω .+ t).u)
    umat = reshape(uvec, (:, nmesh))
    coll = collocation_setup(umat)
    nlsol1 = nlsolve((res, u) -> collocation!(res, duffing, u, p, 2π/p.ω, coll), uvec)
    # Adjust the parameters slightly (actually quite a bit!) and correct
    p = (Γ=0.1, ω=1.1, ξ=0.05, ωₙ=1.0, β=0.1)
    nlsol2 = nlsolve((res, u) -> collocation!(res, duffing, u, p, 2π/p.ω, coll), uvec)
    return (nlsol1, nlsol2)
end

function plot_example()
    # Needs using Plots or similar
    nmesh = 20
    # The two solutions don't actually have the same period but normalise both to [0, 2π)
    t = range(0, 2π, length=nmesh+1)[1:end-1]
    (sol1, sol2) = example()
    plot(t, sol1.zero[1:2:end])
    plot!(t, sol2.zero[1:2:end])
end
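For completeness, here is a rough Python/SciPy transliteration of the same scheme (my own sketch, not a tested mirror of the Julia code; it skips the initial-value warm start and solves from a zero initial guess, which is good enough for these weakly nonlinear parameter values):

```python
import numpy as np
from scipy.optimize import fsolve

def fourier_diff(N):
    # First-order Fourier differentiation matrix on [0, 2*pi) (even N)
    h = 2 * np.pi / N
    D = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:
                D[i, j] = 0.5 * (-1) ** (i - j) / np.tan((i - j) * h / 2)
    return D

# Forced Duffing vector field (same parameter values as the Julia example)
Gamma, omega, xi, omega_n, beta = 0.1, 1.0, 0.05, 1.0, 0.1

def duffing(u, t):
    return np.array([u[1],
                     Gamma * np.sin(omega * t) - 2 * xi * u[1]
                     - omega_n ** 2 * u[0] - beta * u[0] ** 3])

nmesh = 32
T = 2 * np.pi / omega                 # period of the forcing
D = fourier_diff(nmesh)
t = T * np.arange(nmesh) / nmesh      # equispaced mesh over one period

def residual(uvec):
    U = uvec.reshape(nmesh, 2)        # one mesh point per row
    dU = 2 * np.pi * (D @ U)          # derivative w.r.t. the scaled phase in [0, 2*pi)
    F = np.array([duffing(U[i], t[i]) for i in range(nmesh)])
    return (dU - T * F).ravel()       # derivatives must match the vector field

sol = fsolve(residual, np.zeros(2 * nmesh))
```

The residual has the same structure as collocation! above: 2π times the mesh derivative minus T times the vector field, zero exactly when the trajectory is a periodic orbit of the discretised system.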


If you insist on using Matlab, the translation of the Julia code is below. Note that this uses the fourdif function by Reddy and Weideman to generate the Fourier differentiation matrix. (Also note that this can be put in a single file called fourier_collocation.m.)

function [nlsol1, nlsol2] = fourier_collocation()
% FOURIER_COLLOCATION Implement Fourier collocation for an arbitrary ODE.
% Assumes that a phase condition is not required (i.e., the equations are
% non-autonomous or the period is known).

% Released under the MIT expat license by David A.W. Barton (david.barton@bristol.ac.uk) 2020

nmesh = 20;
p = struct('Gamma', 0.1, 'omega', 1.0, 'xi', 0.05, 'omegan', 1.0, 'beta', 0.1);
% Do initial value simulation to get a reasonable starting point
sol = ode45(@(t, u)duffing(t, u, p), [0, 100*2*pi/p.omega], [0, 0]);
% Refine using collocation
t = linspace(0, 2*pi/p.omega, nmesh+1);
t = t(1:end-1);
umat = deval(sol, 99*2*pi/p.omega + t);
uvec = umat(:);
coll = collocation_setup(umat);
nlsol1 = fsolve(@(u)collocation(@duffing, u, p, 2*pi/p.omega, coll), uvec);
% Adjust the parameters slightly (actually quite a bit!) and correct
p = struct('Gamma', 0.1, 'omega', 1.1, 'xi', 0.05, 'omegan', 1.0, 'beta', 0.1);
nlsol2 = fsolve(@(u)collocation(@duffing, u, p, 2*pi/p.omega, coll), uvec);

plot(t, nlsol1(1:2:end), 'b', t, nlsol2(1:2:end), 'r');

end

function du = duffing(t, u, p)
    du = [u(2); p.Gamma*sin(p.omega*t) - 2*p.xi*u(2) - p.omegan^2*u(1) - p.beta*u(1)^3];
end

function coll = collocation_setup(u)
    [~, D] = fourdif(size(u, 2), 1);
    coll = struct('ndim', size(u, 1), 'nmesh', size(u, 2), 'Dt', -D*2*pi);
end

function res = collocation(f, u, p, T, coll)
    % Matrix of derivatives along the orbit
    res = zeros(size(u));
    D = reshape(u, [coll.ndim, coll.nmesh])*coll.Dt;
    ii = 1:coll.ndim;
    for i = 1:coll.nmesh
        % Subtract the desired derivative from the actual derivative
        res(ii) = D(ii) - T*f(T*(i-1)/coll.nmesh, u(ii), p)';
        ii = ii + coll.ndim;
    end
end

### Lecturer (Assistant Prof) position available in Engineering Mathematics (University of Bristol)

Due to growth in student numbers in my department, we’ve got a new opening for a lecturer (broadly equivalent to an assistant prof in other countries) in Engineering Mathematics at the University of Bristol. See the job advert for details; it’s a four-year, fixed-term contract with a salary in the range £38,017 to £42,792. While there is no guarantee that the position will be made permanent, it may be possible for the right candidate.

The department divides into roughly three areas: general mathematical modelling (ODEs, PDEs, SDEs, and other approaches), artificial intelligence/data science, and robotics. The position isn’t focused on any one area. We’re a very collegiate bunch and look forward to working with people who are similarly collaborative.

If you have any questions, do feel free to get in touch. Otherwise apply online!

### Research Associate in Dynamics and Uncertainty

We are looking to recruit a passionate researcher to join a multi-institution project as a Research Associate, tenable for 1 year with the potential of an extension subject to satisfactory progress and funding availability.

The position is funded by an EPSRC-funded Programme Grant on Digital Twins for Improved Dynamic Design run in collaboration with the universities of Cambridge, Liverpool, Sheffield, Southampton, and Swansea. The overall aim of the programme grant is to create a robustly-validated virtual prediction tool called a “digital twin” for designing complex structures.

The Bristol component of this project is focussed on the interaction of numerical models and physical experiments in so-called hybrid tests where two substructures (one physical and one numerical) are coupled together in real-time. This novel approach to testing provides a great deal of flexibility in the design process since it enables poorly modelled (or externally supplied) parts of the structure to be tested physically while retaining the freedom to rapidly change and test different numerical models. Of particular interest within this testing framework is the investigation and exploitation of nonlinear behaviours and how uncertainties manifest and propagate through the system.

This position is ideal for a researcher with an interest in uncertainty quantification, nonlinear behaviour, and control. You should have good computational skills (MATLAB and/or Julia are commonly used) and experience of working with experimental data. You will be part of a vibrant Dynamics and Control research group, working alongside researchers who deal with a wide variety of application areas. In addition, there will be regular meetings of the full project team from each of the universities with an emphasis on cross-fertilisation of ideas and collaborative working.

Apply via the University of Bristol online portal.

### PhD position in dynamical systems/nonlinear dynamics and Julia

A bit of a long shot, but if there is anyone who is looking to do a PhD in dynamical systems/nonlinear dynamics, would like to develop Julia-based dynamics software, and is based in the UK, I’d love to hear from you. I’m particularly interested in stochastic dynamics and links to machine learning.

I’m part of a small group of researchers in dynamics at the University of Bristol, UK, based in the Department of Engineering Mathematics, and there is the opportunity for funded PhD studentships (ca. £15k a year plus tuition fees) starting in January. These are competitively awarded (i.e., I’m not guaranteed any for this project) and unfortunately restricted to people who have been resident in the UK for a minimum of 3 years and have leave to stay (see https://epsrc.ukri.org/skills/students/help/eligibility/ – note that the 10% rule mentioned has already been allocated this year, so you do need to be UK-based).

The deadline for application is the end of September 2019. (It’s not an entirely strict deadline but there are some internal processes I will need to complete before the hard deadline.)

### Post-doc position available

I have a post-doctoral research associate position available to work on control-based continuation (nonlinear dynamics in experiments). The position will run until May 2020 with a possible extension to August 2020 (subject to EPSRC approval).

For details see the University of Bristol jobs website. The deadline for applications is 24 February 2019 with a provisional interview date of 7 March.

The text of the advert is below –

We seek a highly motivated Research Associate who is interested in working as part of a team at the interface between Engineering and Applied Mathematics to investigate new methods for exploring the nonlinear behaviour of engineered systems. The post will run until 31 May 2020, funded by an EPSRC grant with the possibility of an extension subject to funds and EPSRC permission.

Modern test methods for investigating the dynamics of engineered structures are inadequate for dealing with the presence of significant nonlinearity since they have largely been developed under the assumption of linear behaviour. In contrast, control-based continuation (CBC), a versatile non-parametric identification method, has been developed with nonlinearity in mind from the beginning. It has been demonstrated on simple experiments but now advances in underlying methodology are required to apply CBC to real-world experiments which have higher levels of measurement noise and many degrees of freedom. The versatility of CBC is such that, with these advances, it will also become relevant for researchers studying nonlinear systems in both engineering and other fields, such as in the biological sciences.

We are seeking a Research Associate to drive this research forward alongside other researchers (both PhD students and other post-doctoral staff) who are working on closely related problems. Support will be readily available from the investigators David Barton, Simon Neild and Djamel Rezgui. More widely, you will be part of the Dynamics and Control research group and the Applied Nonlinear Mathematics research group both of which carry out cutting-edge research in a wide range of application areas.

CBC presently draws on a wide range of underlying areas including, but not limited to, dynamical systems and bifurcation theory, control theory, system identification, and machine learning. Applicants are expected to have experience in at least one of these areas in addition to a first degree and preferably a PhD in Applied Mathematics/Physics/Engineering (or a closely related discipline).

Possible initial avenues of research include

• Improving the robustness of CBC in the presence of noise using surrogate models. Gaussian processes have previously been investigated and may be useful.
• Investigating the scaling up of CBC to many degree-of-freedom systems. Ideas from numerical continuation of PDE systems could yield insights.
• Implementation of CBC on existing aerospace experiments for dynamic testing and wind tunnel testing.

### Working with broadcasting in Julia

Broadcasting in Julia is a way of writing vectorised code (think Matlab) that is performant and explicit. The benefits of performant code are obvious (faster!), but making the vectorisation explicit is also a significant win.

When I first saw Matlab and how you could call the sin function with a vector input, I was (slightly) blown away by the usefulness of this. It didn’t take too long for me to realise the limitations though; vectorising a complicated function can require quite a bit of code gymnastics, which doesn’t usually help readability, particularly for students who are relatively new to programming.

This is where Julia’s dot broadcasting (vectorisation) comes in. If you want a function to work on a vector of inputs (applying the same function to each element of the vector) you simply put a dot on the function call. For example, the sine of a vector of values becomes sin.([1.1, 0.3, 2.3]); note the extra dot between the sin and the first bracket.

For a really good introduction to this, see the blog post More Dots: Syntactic Loop Fusion in Julia.

In Julia v0.7/1.0, there were some changes under the hood to how broadcasting works. (See Extensible broadcast fusion for more details and how it can be customised by different types.) It now creates a series of Broadcasted objects that get fused together before finally being materialised to give the final answer. For example, consider

r = sqrt(sum(x.^2 .+ y.^2))

Internally this gets rewritten (“lowered”) to

r = sqrt(sum(materialize(broadcasted(+, broadcasted(^, x, 2), broadcasted(^, y, 2)))))

(This isn’t quite accurate on the details since the squaring is implemented slightly differently.) Notice the hierarchy of broadcasted calls enclosed within a call to materialize. This is where the magic of broadcast fusion happens (and enables Julia to construct performant code). The broadcasted calls create a nested set of Broadcasted objects that contain the (lazily evaluated) vectorised expression and the materialize call creates the final vector from this.

Most of the time this automatic magic is exactly what we want. But sometimes it’s not.

Consider the case above where the sum is being computed; a vector will be allocated in memory for the calculation x.^2 .+ y.^2 and if x and y are large then a large amount of memory will be allocated unnecessarily for this intermediate value. Since the sum function doesn’t need all the values at the same time, couldn’t we just lazily compute x.^2 .+ y.^2 as individual numbers and feed them to the sum one-by-one? For example, we could do something like

acc = 0.0
for i in eachindex(x, y)
    acc += x[i]^2 + y[i]^2
end
r = sqrt(acc)

In this case writing out the explicit for loop is something we’re trying to avoid (otherwise why bother with broadcasting?). Can we somehow extract the lazy representation from the broadcasting without materializing the intermediate result?

The answer is yes, but unfortunately it’s not part of the base Julia (yet). The code below gives us a lazy macro that enables us to get access to that lazy representation that broadcasting creates and use it explicitly in our surrounding code.

@inline _lazy(x) = x[1]  # unwrap the tuple
@inline Broadcast.broadcasted(::typeof(_lazy), x) = (x,)  # wrap the Broadcasted object in a tuple to avoid materializing
macro lazy(x)
    return esc(:(_lazy(_lazy.($x))))
end

Now we can compare the lazy and the eager (materialized) versions.

julia> using BenchmarkTools

julia> x = rand(1_000_000) ; y = rand(1_000_000) ;

julia> @btime sqrt(sum(x.^2 .+ y.^2))  # normal eager evaluation
2.837 ms (16 allocations: 7.63 MiB)
816.7514405417339

julia> @btime sqrt(sum(@lazy x.^2 .+ y.^2))  # lazy broadcasted evaluation
1.075 ms (12 allocations: 208 bytes)
816.7514405417412

Notice the memory consumption: 7.63 MiB for the normal version versus 208 bytes for the lazily evaluated version. Similarly the lazy version is significantly faster (though that depends quite a lot on the size of the vectors used). There is a slightly different answer in the two cases since the Julia sum function uses slightly different algorithms for vectors versus iterators (so I’m not quite comparing like-for-like).

Why is the lazy version not the default? Well, here is the caveat: as soon as you do lazy evaluation, the performance becomes much more problem dependent – it can get faster (as in this case) but, equally, it can get slower. BenchmarkTools.jl is your friend!
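As an aside, the eager-versus-streaming trade-off isn’t Julia-specific. In Python the same contrast shows up between a materialised NumPy temporary and a generator expression (a loose analogy only, since Julia’s broadcast fusion is different machinery):

```python
import math
import numpy as np

x = np.random.rand(100_000)
y = np.random.rand(100_000)

# Eager: NumPy allocates a full temporary array for x**2 + y**2 before summing
eager = math.sqrt(np.sum(x ** 2 + y ** 2))

# Streaming: the generator expression feeds sum() one element at a time,
# so no intermediate array is ever allocated
lazy = math.sqrt(sum(a * a + b * b for a, b in zip(x, y)))
```

As in Julia, the two results can differ in the last few digits because the summation orders differ, and which version is faster depends on the problem (NumPy’s vectorised loop is usually much quicker per element than a Python-level generator).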

### Barycentric.jl

Over the past couple of years or so I’ve been getting into the Julia programming language; it’s been great to watch the language mature over time. Many people proclaim the virtues of its speed (it’s very fast for a dynamic language) but really I like its elegance – it’s a very well designed language that makes full use of multiple dispatch. (Multiple dispatch is something that I doubt most coders know much about but once you are used to it, it’s indispensable!)

My first foray into the world of Julia package development is Barycentric.jl, a small package to do polynomial interpolation using a barycentric representation. This approach is espoused in Berrut and Trefethen, SIAM Review 2004 as a way to do polynomial interpolation with O(n) operations, rather than O(n²) operations as is more typical for interpolation with Lagrange polynomials.
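To make the O(n) claim concrete, the barycentric formula itself is only a few lines. The NumPy sketch below (my own illustration, unrelated to the Barycentric.jl internals) evaluates it at second-kind Chebyshev points, where the weights are known in closed form, and checks that it reproduces a low-degree polynomial exactly:

```python
import numpy as np

def bary_interp(xeval, fvals, x, w):
    """Evaluate the barycentric interpolation formula at the points xeval."""
    num = np.zeros_like(xeval)
    den = np.zeros_like(xeval)
    for xj, wj, fj in zip(x, w, fvals):
        c = wj / (xeval - xj)   # assumes xeval never hits a node exactly
        num += c * fj
        den += c
    return num / den

# Second-kind Chebyshev points and their closed-form barycentric weights
n = 10
x = np.cos(np.pi * np.arange(n + 1) / n)
w = (-1.0) ** np.arange(n + 1)
w[0] *= 0.5
w[-1] *= 0.5

# The formula reproduces any polynomial of degree <= n up to rounding
f = lambda t: t ** 3 - 2 * t
xe = np.linspace(-0.9, 0.9, 8)
err = np.max(np.abs(bary_interp(xe, f(x), x, w) - f(xe)))
```

Each evaluation point costs O(n): a single pass accumulating the numerator and denominator sums, with no Lagrange basis polynomials ever formed.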

While this package isn’t really a general purpose interpolation code (see Interpolations.jl for that), it is good for building numerical algorithms such as collocation.

One example of this is a simple(ish) simulation of a dynamic cantilever beam. The Euler-Bernoulli equation is the most straightforward, non-trivial model we can use –

$$\frac{EI}{\rho AL^4}\frac{\partial^4u}{\partial x^4} + \frac{\partial^2 u}{\partial t^2} + \xi\frac{\partial u}{\partial t} = 0$$

where $E$ is Young’s modulus, $I$ is the second moment of area, $\rho A$ is the mass per unit length, $L$ is the length, and $\xi$ is the (external) damping coefficient.

Since it is a fourth-order partial differential equation in space we need four boundary conditions. For a cantilever beam we have (primes denote derivatives with respect to $x$)

$u(0, t) = 0$ (zero displacement at wall)

$u'(0,t) = 0$ (zero slope at wall)

$u''(1,t) = 0$ (zero bending moment at free end)

$u'''(1,t) = 0$ (zero shear at free end)

To solve the Euler-Bernoulli equation we discretise the model in space using Chebyshev polynomials (for an introduction to Chebyshev approximations to differential equations see the excellent, and relatively short, book Spectral Methods in Matlab by Nick Trefethen). This is where Barycentric.jl comes in.

In a nutshell, we’re going to use an $N$ degree polynomial to approximate the solution in the $x$ direction by constraining the polynomial to satisfy the four boundary conditions at $x=0$ and $x=1$ and then evaluating the fourth derivative for the interior of the Euler-Bernoulli equation.

I’m going to arbitrarily choose to evaluate the Euler-Bernoulli equation at the Chebyshev nodes of the $N-2$ degree Chebyshev polynomial, excluding the end points, so $N-3$ points in total. Hence these points plus the four boundary conditions gives $N+1$ equations to match the $N+1$ unknowns of the $N$ degree Chebyshev polynomial.

The code to do this is as follows. The end result is a fourth-order derivative matrix defined on the collocation points.

using Barycentric

N = 10  # degree of the polynomial
n = N - 2
# Construct the polynomial
P = Chebyshev2{N}()
# Generate the differentiation matrix y' ≈ Dy
D = differentiation_matrix(P)
# Collocation points (nodes of the N-2 degree second-kind Chebyshev polynomial)
x_coll = [-cospi(j/n) for j = 1:n-1]
# Interpolation matrix from nodes(P) to x_coll
In = interpolation_matrix(P, x_coll)

# Construct the mapping from the values at the collocation points to the
# values at the nodes of the Chebyshev polynomial, simultaneously
# incorporating the boundary conditions
In⁻¹ = inv([In;                 # interpolation to collocation points
            [1 zeros(1, N)];    # u(0, t) = 0
            D[1:1, :];          # u'(0, t) = 0
            (D^2)[end:end, :];  # u''(1, t) = 0
            (D^3)[end:end, :]   # u'''(1, t) = 0
           ])[:, 1:end-4]  # remove the boundary condition inputs since they are zero

# Construct the differentiation matrix that incorporates the boundary conditions
D₄ = In*(D^4)*In⁻¹

The basic premise is to construct a fourth-order differentiation matrix on the $N$-degree Chebyshev polynomial whilst incorporating the boundary conditions. This is done by mapping from the collocation points onto the nodes of the Chebyshev polynomial, incorporating the boundary conditions, then applying the differentiation matrix before mapping back to the collocation points.
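If you want to convince yourself that this boundary-condition bordering trick works, here is a self-contained NumPy sketch of the same construction (my own transliteration, not the Barycentric.jl API; I use an increasing node ordering with the wall at x = -1 and the free end at x = +1, and check the resulting matrix on a quartic that satisfies all four boundary conditions, for which u'''' is identically 1):

```python
import numpy as np

def cheb_nodes_weights(N):
    # Second-kind Chebyshev points in increasing order, with barycentric weights
    x = -np.cos(np.pi * np.arange(N + 1) / N)
    w = (-1.0) ** np.arange(N + 1)
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def diff_matrix(x, w):
    # Differentiation matrix from barycentric weights (Berrut & Trefethen 2004)
    m = len(x)
    D = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if i != j:
                D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
        D[i, i] = -np.sum(D[i, :])   # diagonal from negative row sums
    return D

def interp_matrix(x, w, xc):
    # Barycentric interpolation matrix from the nodes x to the points xc
    C = w / (xc[:, None] - x[None, :])   # assumes xc never hits a node exactly
    return C / C.sum(axis=1, keepdims=True)

N = 11                                    # polynomial degree (odd avoids node collisions)
x, w = cheb_nodes_weights(N)
D = diff_matrix(x, w)
n = N - 2
xc = -np.cos(np.pi * np.arange(1, n) / n)  # interior collocation points
In_ = interp_matrix(x, w, xc)

# Border the interpolation matrix with the four boundary-condition rows and invert
e_wall = np.zeros(N + 1)
e_wall[0] = 1.0                           # u(-1) = 0 (wall)
M = np.vstack([In_, e_wall, D[0], (D @ D)[-1], (D @ D @ D)[-1]])
Inv = np.linalg.inv(M)[:, :len(xc)]       # boundary-condition inputs are zero, so drop them
D4 = In_ @ np.linalg.matrix_power(D, 4) @ Inv

# Quartic with u(-1) = u'(-1) = 0 and u''(1) = u'''(1) = 0, so u'''' = 1 everywhere
u = (xc - 1) ** 4 / 24 + 4 * xc / 3 + 2 / 3
err = np.max(np.abs(D4 @ u - 1.0))
```

Both the interpolation and differentiation matrices come straight from the barycentric weights, which is exactly the kind of building block Barycentric.jl is intended to provide.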

To integrate the equations of motion, the second-order (in time) differential equation is rewritten as a system of first-order ODEs and thrown into DifferentialEquations.jl.

function beammodel!(dudt, u, p, t)
    n = size(p.D₄, 2)  # number of collocation points
    dudt[1:n] .= u[n+1:2n]  # u̇₁ = u₂
    dudt[n+1:2n] .= -p.EI/p.ρA*(p.D₄*u[1:n]) .- p.ξ*u[n+1:2n]  # u̇₂ = -EI/ρA*u₁'''' - ξ*u₂
end

Before integrating, we need some initial conditions. To avoid putting energy into the higher modes of the beam, I use the mode shape of the first beam mode for the initial conditions.

# A parameter vector for integration; a steel beam (1m × 10mm × 1mm)
p = (D₄ = D₄, EI = 1666.6, ρA = 8.0, ξ = 0.2)

# Jacobian matrix of the differential equation
using LinearAlgebra
A = [zeros(size(p.D₄)) I; -p.EI/p.ρA*p.D₄ -p.ξ*I]
ev = eigen(A)
idx = argmin(abs.(ev.values))  # lowest mode
u0 = real.(ev.vectors[:, idx])  # ignore rotations

# Integrate!
using OrdinaryDiffEq
prob = ODEProblem(beammodel!, u0, (0, 10.0), p)
sol = solve(prob, Rodas5(), dtmax=0.05)  # use a stiff solver

And to plot

using Makie
sc = Scene()
wf = wireframe!(sc, x_coll, sol.t, sol[1:N-3, :])
scale!(wf, 1.0, 1.0, 10.0)
l = lines!(sc, [x_coll[end]], sol.t, sol[N-3, :], color=:red, linewidth=3.0)

The result is at the top of this post!

While this is a largely academic example (we could solve this problem analytically) there are lots of extensions that can be made with this approach.

### New PhD scholarship opportunity in robotics/machine learning

There is the opportunity for fully-funded PhD scholarships starting September 2019 as part of the next University of Bristol funding competition. The deadline for applications is January 2019 (the precise date is to be announced).

Funding can be awarded to students of any nationality, though the chances of funding are likely higher for UK nationals (and others eligible for EPSRC doctoral funding) and Chinese nationals (via the CSC funding programme) since more funding is available through those routes.

I am particularly interested in recruiting students for a PhD opportunity in tactile robotics and machine learning (though do also get in touch if you are considering nonlinear dynamics and control more generally).

A short project description is below.

Present approaches to tactile sensing and control require large amounts of data to train machine learning algorithms, or other statistical methods, to transform low-level sensory data into high-level information such as contact position, angle and force. Once a suitable model is learnt from data, it is then used within a control policy to complete the desired robotic manipulation task. While this approach is effective, it is far from efficient. This project will investigate the use of online learning combined with a high-level objective function to minimise the amount of prior training required. A local interaction model can be learnt from online sensor readings and the known movements between them and, as such, a robot manipulator can learn how to interact with its surroundings as it is carrying out useful tasks. This project has the opportunity to make use of extensive experimental facilities in conjunction with the Bristol Robotics Laboratory.

A more detailed version is also available.

### Funding available for PhD positions – Oct 2018 start

There is funding available (competitively awarded across my department) for PhD students to start September/October 2018. There are a variety of funding sources including: EPSRC, China Scholarship Council (CSC), and University Scholarships. These all provide funding for fees and living costs.

I am particularly interested in topics around computational dynamics with links to machine learning and/or uncertainty quantification, largely from an engineering point of view but other areas might be considered.

I’m also interested in experiment-based dynamics and the real-time link with computational dynamics.

If you are considering a PhD in any of these areas, get in touch with me at david.barton@bristol.ac.uk.

(The image above is borrowed from Mike Henderson’s Multifaro page – a nice example of computational dynamics in action!)