Working with broadcasting in Julia

Broadcasting in Julia is a way of writing vectorised code (think Matlab) that is performant and explicit. The benefits of performant code are obvious (faster!) but explicit vectorisation is also a significant benefit.

When I first saw Matlab and how you could call the sin function with a vector input, I was (slightly) blown away by how useful this is. It didn’t take too long for me to realise the limitations though; vectorising a complicated function can require quite a bit of code gymnastics, which doesn’t usually help readability, particularly for students who are relatively new to programming.

This is where Julia’s dot broadcasting (vectorisation) comes in. If you want a function to work on a vector of inputs (applying the same function to each element of the vector) you simply put a dot on the function call. For example, the sine of a vector of values becomes sin.([1.1, 0.3, 2.3]); note the extra dot between the sin and the first bracket.
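The same dot syntax works on any scalar function, including your own; a minimal sketch (the function f here is just an illustrative example of mine):

```julia
# Broadcasting works on user-defined functions, not just Base ones
f(x) = 3x^2 + 2x + 1         # an ordinary scalar function
v = f.([1.0, 2.0, 3.0])      # apply f elementwise: [6.0, 17.0, 34.0]
```

No special "vectorised" version of f is needed; the dot does all the work.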

For a really good introduction to this, see the blog post More Dots: Syntactic Loop Fusion in Julia.

In Julia v0.7/1.0, there were some changes under the hood to how broadcasting works. (See Extensible broadcast fusion for more details and how it can be customised by different types.) It now creates a series of Broadcasted objects that get fused together before finally being materialised to give the final answer. For example, consider

r = sqrt(sum(x.^2 .+ y.^2))

Internally this gets rewritten (“lowered”) to

r = sqrt(sum(materialize(broadcasted(+, broadcasted(^, x, 2), broadcasted(^, y, 2)))))

(This isn’t quite accurate on the details since the squaring is implemented slightly differently.) Notice the hierarchy of broadcasted calls enclosed within a call to materialize. This is where the magic of broadcast fusion happens (and enables Julia to construct performant code). The broadcasted calls create a nested set of Broadcasted objects that contain the (lazily evaluated) vectorised expression and the materialize call creates the final vector from this.
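You can poke at this machinery directly at the REPL; a small sketch using the (internal) Base.Broadcast functions mentioned above:

```julia
using Base.Broadcast: broadcasted, materialize

x = [1.0, 2.0]; y = [3.0, 4.0]
# Build the lazy representation by hand; nothing is computed yet
bc = broadcasted(+, broadcasted(^, x, 2), broadcasted(^, y, 2))
# bc is a Broadcast.Broadcasted object, not an Array
v = materialize(bc)   # now the fused loop runs: [10.0, 20.0]
```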

Most of the time this automatic magic is exactly what we want. But sometimes it’s not.

Consider the case above where the sum is being computed; a vector will be allocated in memory for the intermediate result x.^2 .+ y.^2, and if x and y are large then a large amount of memory will be allocated unnecessarily for this intermediate value. Since the sum function doesn’t need all the values at the same time, couldn’t we just lazily compute x.^2 .+ y.^2 as individual numbers and feed them to the sum one by one? For example, we could do something like

acc = 0.0
for i in eachindex(x, y)
    acc += x[i]^2 + y[i]^2
end
r = sqrt(acc)

In this case writing out the explicit for loop is something we’re trying to avoid (otherwise why bother with broadcasting?). Can we somehow extract the lazy representation from the broadcasting without materializing the intermediate result?
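(As an aside, a generator expression, which is already part of base Julia, gives one allocation-free route, though it abandons the broadcast syntax entirely:

```julia
# A generator is lazy: sum pulls one element at a time, so no
# temporary array is allocated for the intermediate values
x = rand(1000); y = rand(1000)
r = sqrt(sum(x[i]^2 + y[i]^2 for i in eachindex(x, y)))
```

Ideally, though, we’d like to keep the dot syntax and reuse the Broadcasted machinery itself.)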

The answer is yes, but unfortunately it’s not part of base Julia (yet). The code below defines a lazy macro that gives us access to the lazy representation that broadcasting creates so that we can use it explicitly in our surrounding code.

@inline _lazy(x) = x[1]  # unwrap the tuple
@inline Broadcast.broadcasted(::typeof(_lazy), x) = (x,)  # wrap the Broadcasted object in a tuple to avoid materializing
macro lazy(x)
    return esc(:(_lazy(_lazy.($x))))
end
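To make the mechanics concrete, here is the macro in action: the dotted call _lazy.(expr) fuses with the broadcast expression, the overloaded broadcasted method intercepts the fusion and smuggles the un-materialized Broadcasted object out inside a tuple (which materialize passes through untouched), and the outer _lazy unwraps the tuple. The definitions are repeated so the snippet stands alone:

```julia
@inline _lazy(x) = x[1]
@inline Broadcast.broadcasted(::typeof(_lazy), x) = (x,)
macro lazy(x)
    return esc(:(_lazy(_lazy.($x))))
end

x = [1.0, 2.0]; y = [3.0, 4.0]
bc = @lazy x.^2 .+ y.^2
bc isa Base.Broadcast.Broadcasted   # true: nothing has been materialized
sum(bc) ≈ sum(x.^2 .+ y.^2)         # true: sum iterates the lazy object
```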

Now we can compare the lazy version and the eager (materialized) versions.

julia> using BenchmarkTools

julia> x = rand(1_000_000) ; y = rand(1_000_000) ;

julia> @btime sqrt(sum(x.^2 .+ y.^2))  # normal eager evaluation
  2.837 ms (16 allocations: 7.63 MiB)

julia> @btime sqrt(sum(@lazy x.^2 .+ y.^2))  # lazy broadcasted evaluation
  1.075 ms (12 allocations: 208 bytes)

Notice the memory consumption: 7.63 MiB for the normal version versus 208 bytes for the lazily evaluated version. Similarly the lazy version is significantly faster (though that depends quite a lot on the size of the vectors used). There is a slightly different answer in the two cases since the Julia sum function uses slightly different algorithms for vectors versus iterators (so I’m not quite comparing like-for-like).

Why is the lazy version not the default? Well here is the caveat: as soon as you do lazy evaluation the performance becomes much more problem dependent – it can get faster (as in this case) but, equally, it can get slower. BenchmarkTools.jl is your friend!


Over the past couple of years or so I’ve been getting into the Julia programming language; it’s been great to watch the language mature over time. Many people proclaim the virtues of its speed (it’s very fast for a dynamic language) but really I like its elegance – it’s a very well designed language that makes full use of multiple dispatch. (Multiple dispatch is something that I doubt most coders know much about but once you are used to it, it’s indispensable!)

My first foray into the world of Julia package development is Barycentric.jl, a small package to do polynomial interpolation using a Barycentric representation. This approach is espoused in Berrut and Trefethen, SIAM Review 2004 as a way to do polynomial interpolation with O(n) operations, rather than the O(n²) operations more typical for interpolation with Lagrange polynomials.

While this package isn’t really a general purpose interpolation code (see Interpolations.jl for that), it is good for building numerical algorithms such as collocation.

One example of this is a simple(ish) simulation of a dynamic cantilever beam. The Euler-Bernoulli equation is the most straightforward non-trivial model we can use:

$$ \frac{EI}{\rho AL^4}\frac{\partial^4u}{\partial x^4} + \frac{\partial^2 u}{\partial t^2} + \xi\frac{\partial u}{\partial t} = 0 $$

where E is Young’s modulus, I is the second moment of area, \rho A is the mass per unit length, L is the length, and \xi is the (external) damping coefficient.

Since it is a fourth-order partial differential equation in space we need four boundary conditions. For a cantilever beam we have (primes denote derivatives with respect to x)

u(0, t) = 0 (zero displacement at wall)

u'(0,t) = 0 (zero slope at wall)

u''(1,t) = 0 (zero bending moment at free end)

u'''(1,t) = 0 (zero shear at free end)

To solve the Euler-Bernoulli equation we discretise the model in space using Chebyshev polynomials (for an introduction to Chebyshev approximations to differential equations see the excellent, and relatively short, book Spectral Methods in Matlab by Nick Trefethen). This is where Barycentric.jl comes in.

In a nutshell, we’re going to use an N degree polynomial to approximate the solution in the x direction by constraining the polynomial to satisfy the four boundary conditions at x=0 and x=1 and then evaluating the fourth derivative for the interior of the Euler-Bernoulli equation.

I’m going to arbitrarily choose to evaluate the Euler-Bernoulli equation at the nodes of the N-2 degree Chebyshev polynomial, excluding the end points, so N-3 points in total. These points plus the four boundary conditions give N+1 equations to match the N+1 unknowns of the N degree Chebyshev polynomial.

The code to do this is as follows. The end result is a fourth-order derivative matrix defined on the collocation points.

using Barycentric

N = 10  # degree of the polynomial
n = N-2  
# Construct the polynomial
P = Chebyshev2{N}()
# Generate the differentiation matrix y' ≈ Dy
D = differentiation_matrix(P)
# Collocation points (nodes of the N-2 degree second-kind Chebyshev polynomial)
x_coll = [-cospi(j/n) for j = 1:n-1] 
# Interpolation matrix from nodes(P) to x_coll 
In = interpolation_matrix(P, x_coll)  

# Construct the mapping from the values at the collocation points to the
# values at the nodes of the Chebyshev polynomial, simultaneously
# incorporating the boundary conditions
In⁻¹ = inv([In;                           # interpolation to collocation points
		    [1 zeros(1, N)];              # u(0, t) = 0
            D[1:1, :];                    # u'(0, t) = 0
            (D^2)[end:end, :]             # u''(1, t) = 0
            (D^3)[end:end, :]             # u'''(1, t) = 0
           ])[:, 1:end-4]  # remove the boundary condition inputs since they are zero

# Construct the differentiation matrix that incorporates the boundary conditions
D₄ = In*(D^4)*In⁻¹

The basic premise is to construct a fourth-order differentiation matrix on the N-degree Chebyshev polynomial whilst incorporating the boundary conditions. This is done by mapping from the collocation points onto the nodes of the Chebyshev polynomial, incorporating the boundary conditions, then applying the differentiation matrix before mapping back to the collocation points.

To integrate the equations of motion, the second-order (in time) differential equation is rewritten as a system of first-order ODEs and thrown into DifferentialEquations.jl.
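Concretely, introducing $u_1 = u$ and $u_2 = \partial u/\partial t$ turns the Euler-Bernoulli equation into the first-order system

$$ \begin{gathered}
\frac{\partial u_1}{\partial t} = u_2 \\
\frac{\partial u_2}{\partial t} = -\frac{EI}{\rho A L^4}\frac{\partial^4 u_1}{\partial x^4} - \xi u_2
\end{gathered} $$

which is exactly what the beammodel! function implements, with the spatial derivative replaced by the matrix D₄ (and L = 1 m for the beam used here).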

function beammodel!(dudt, u, p, t)
    n = size(p.D₄, 2)  # number of collocation points
    dudt[1:n] .= u[n+1:2n]  # u̇₁ = u₂
    dudt[n+1:2n] .= -p.EI/p.ρA*(p.D₄*u[1:n]) .- p.ξ*u[n+1:2n]  # u̇₂ = -EI/ρA*u₁'''' - ξ*u₂
end

Before integrating, we need some initial conditions. To avoid putting energy into the higher modes of the beam, I use the mode shape of the first beam mode for the initial conditions.

# A parameter vector for integration; a steel beam (1m × 10mm × 1mm)
p = (D₄ = D₄, EI = 1666.6, ρA = 8.0, ξ = 0.2)

# Jacobian matrix of the differential equation
using LinearAlgebra
A = [zeros(size(p.D₄)) I; -p.EI/p.ρA*p.D₄ -p.ξ*I] 
ev = eigen(A)
idx = argmin(abs.(ev.values))  # lowest mode
u0 = real.(ev.vectors[:, idx])  # ignore rotations 

# Integrate!
using OrdinaryDiffEq
prob = ODEProblem(beammodel!, u0, (0, 10.0), p)
sol = solve(prob, Rodas5(), dtmax=0.05)  # use a stiff solver

And to plot

using Makie
sc = Scene()
wf = wireframe!(sc, x_coll, sol.t, sol[1:N-3, :])
scale!(wf, 1.0, 1.0, 10.0)
l = lines!(sc, [x_coll[end]], sol.t, sol[N-3, :], color=:red, linewidth=3.0)

The result is at the top of this post!

While this is a largely academic example (we could solve this problem analytically) there are lots of extensions that can be made with this approach.

New PhD scholarship opportunity in robotics/machine learning

There are opportunities for fully-funded PhD scholarships starting in September 2019 as part of the next University of Bristol funding competition. The deadline for applications is January 2019 (the precise date is to be announced).

Funding can be awarded to students of any nationality, though the chances of funding are likely higher for UK nationals (and others eligible for EPSRC doctoral funding) and Chinese nationals (via the CSC funding programme) since more funding is available through those routes.

I am particularly interested in recruiting students for a PhD opportunity in tactile robotics and machine learning (though do also get in touch if you are considering nonlinear dynamics and control more generally).

A short project description is below.

Present approaches to tactile sensing and control require large amounts of data to train machine learning algorithms, or other statistical methods, to transform low-level sensory data into high-level information such as contact position, angle and force. Once a suitable model is learnt from data, it is then used within a control policy to complete the desired robotic manipulation task. While this approach is effective, it is far from efficient. This project will investigate the use of online learning combined with a high-level objective function to minimise the amount of prior training required. A local interaction model can be learnt from online sensor readings and the known movements between them and, as such, a robot manipulator can learn how to interact with its surroundings as it is carrying out useful tasks. This project has the opportunity to make use of extensive experimental facilities in conjunction with the Bristol Robotics Laboratory.

A more detailed version is also available.

Funding available for PhD positions – Oct 2018 start

There is funding available (competitively awarded across my department) for PhD students to start September/October 2018. There are a variety of funding sources including: EPSRC, China Scholarship Council (CSC), and University Scholarships. These all provide funding for fees and living costs.

I am particularly interested in topics around computational dynamics with links to machine learning and/or uncertainty quantification, largely from an engineering point of view but other areas might be considered.

I’m also interested in experiment-based dynamics and the real-time link with computational dynamics.

If you are considering a PhD in any of these areas, get in touch with me at

(The image above is borrowed from Mike Henderson’s Multifaro page – a nice example of computational dynamics in action!)

Postdoc position now available

3 year postdoc position available for 1 June 2017 start!

We seek a highly motivated Research Associate who is interested in working as part of a team at the interface between Engineering and Applied Mathematics to investigate new methods for exploring the nonlinear behaviour of engineered systems and to develop numerical continuation techniques for physical experiments.

Modern test methods for investigating the dynamics of engineered structures are inadequate for dealing with the presence of significant nonlinearity since they have largely been developed under the assumption of linear behaviour. In contrast, control-based continuation (CBC), a versatile non-parametric identification method, has been developed with nonlinearity in mind from the beginning. It has been demonstrated on simple experiments but now advances in underlying methodology are required to apply CBC to real-world experiments which have higher levels of measurement noise and many degrees of freedom. The versatility of CBC is such that, with these advances, it will also become relevant for researchers studying nonlinear systems in both engineering and other fields, such as in the biological sciences.

We are seeking a Research Associate to drive this research forward alongside researchers working on closely related problems from the Departments of Engineering Mathematics, Mechanical Engineering and Aerospace Engineering. Support will be readily available from the investigators David Barton, Simon Neild and Djamel Rezgui. More widely, you will be part of the Dynamics and Control research group and the Applied Nonlinear Mathematics research group both of which carry out cutting-edge research in a wide range of application areas.

CBC presently draws on a wide range of underlying areas including, but not limited to, dynamical systems and bifurcation theory, system identification, control theory and machine learning. Applicants are expected to have experience in at least one of these areas in addition to a first degree and preferably a PhD in Applied Mathematics/Physics/Engineering (or a closely related discipline).

Possible initial avenues of research include

  • Improving the robustness of CBC in the presence of noise using surrogate models. Gaussian processes have previously been investigated and may be useful.
  • Investigating the scaling up of CBC to many degree-of-freedom systems. Ideas from numerical continuation of PDE systems could yield insights.
  • Implementation of CBC on existing aerospace experiments for dynamic testing and wind tunnel testing.

The post is available from 1 June 2017 with funding available for up to three years.

To apply visit

Please direct any questions to David Barton,

Studentships available for immediate start

We have some EPSRC funded studentships available for immediate start (competitively awarded) in my department/school. The studentships pay an annual stipend of ~£14k plus tuition fees, though you must meet EPSRC eligibility requirements (there are no awards available for overseas students).

The deadline is very tight though: you need to have an application completed by 10 October.

Applications coming in after this time will be considered for a September 2017 start, but please do still get in touch.

If you are interested in nonlinear dynamics (from a numerical/computational/experimental viewpoint), please get in touch.  I’m particularly interested in stochastic dynamics and/or links with machine learning.

Email me at with your current CV and a short description of your (academic) interests. General enquiries along the lines of “I don’t really know what I want to do” are also welcome.

Nonlinear vibrations, localisation and energy transfer

I recently had the privilege of being invited to the 6th conference on Nonlinear Vibrations, Localization and Energy Transfer in Liege, 4-8 July 2016. The conference was superbly organised by Gaetan Kerschen and Jean-Philippe Noël in remarkable surroundings and with very good food (and beer — a trip to the Val-Dieu abbey to taste good Belgian beer was a highlight of the social part of the conference!).

The conference was a good mix of cutting-edge research combined with tutorial lectures for PhD students (and above!) on a range of topics. The lectures on nonlinear system identification were particularly useful for me — it’s an area that I’m keen to learn more about and import into my own work.

I had the pleasure of presenting on “Control-based Continuation – A new experimental approach for testing nonlinear dynamic systems” (with co-authors Ludovic Renson and Simon Neild); hopefully I was able to impart some of the enthusiasm we have for working on nonlinear experiments and understanding their nonlinear dynamics and bifurcations via numerical continuation!

Other talks of particular interest were by Simon Peter (Stuttgart) and Peter Bruns (Hannover) on using phase-locked loops to extract backbone curves (nonlinear normal modes) from experiments. Their approaches are highly complementary to my own. Cyril Touzé also gave a fascinating talk on turbulence in metallic plates, along with convincing experiments that show energy transfers from low-frequency modes to higher-frequency modes of vibration. I was also pleased to see that software developments are slowly propagating through into applied communities; Bruno Cochelin presented an interesting take on using automatic differentiation for high-order numerical continuation methods, and I’m curious as to how far this can be applied.

Overall a thoroughly productive week!

PhD opportunity in Industrial Mathematics

I am seeking to recruit a student who is keen to work on industrial applications of nonlinear dynamics. They are expected to have a good degree (first or upper second) in a related subject, for example, applied mathematics, physics or engineering. The project is centred around investigating the nonlinear dynamics of physical experiments; in this case, the dynamics and bifurcations of rotating machinery. The project will combine ideas from nonlinear dynamics, control theory and numerical analysis to extract the required information from the physical experiment in real-time. Experience in one (or more) of these areas is ideal but not necessary as training will be given.

For the start of this project, a specific engineering experiment has been constructed (a rotor rig), but the methodology being developed is applicable to a wide range of areas, from engineering to biological systems. Any controllable experiment that exhibits nonlinear behaviour could potentially benefit.

This project is funded by an EPSRC iCASE award meaning that the studentship pays the standard EPSRC rate of £14,296 plus an industrial top-up of around £5,000 (to be confirmed). The studentship is open to any UK/EU student who has been normally resident in the UK for 3 years (see the EPSRC eligibility requirements at ).

For more information please send a short email to Dr David Barton outlining your academic background (i.e., current/most recent degree).

PhD studentship opportunities

Breaking news: PhD studentships (scholarships) available for UK and EU* students in the Department of Engineering Mathematics at the University of Bristol for immediate start.

Due to a number of students withdrawing for personal reasons, there are now several PhD studentships available that must be taken up by September 2016. I am looking for motivated and able students who are interested in doing research at the intersection of Applied Mathematics and Engineering/Science (my interests are quite broad!). In particular, I have three different topics that I’m actively pursuing at the moment all of which feature numerical computation/numerical analysis:

  • Nonlinear dynamics of stochastic differential equations — I’m interested in investigating how the tools and concepts of nonlinear dynamics and bifurcation theory can be applied to stochastic differential equations arising from various application areas (e.g., neuroscience or climate science).
  • Control-based continuation — numerical continuation is a very effective tool for investigating the nonlinear behaviour and bifurcations of mathematical models, and control-based continuation is a means for applying this tool to physical experiments (engineered systems or, hopefully, biological systems) without the need for a mathematical model. Research in this area requires a very interesting mix of numerical analysis, control theory, system identification and the theory of stochastic processes (I don’t expect students to have a background in all of these subjects!).
  • Equation-free methods and agent-based modelling — equation-free methods are a means for obtaining a macroscopic model from microscopic simulations. They have been used for many physical processes previously and I am interested in how they can be extended to agent-based models, such as models of Zebrafish locomotion to investigate the dynamics of shoaling (Zebrafish are just one example).

I have also worked extensively with delay differential equations and, though I don’t have any active work in this area at the moment, I’m happy to solicit project suggestions in this area. All of these projects are very open ended and I’m happy to work with students to tailor the projects to their own interests.

There is no deadline for these studentships, though obviously it’s better to apply sooner rather than later. If you are interested, get in touch for more information.

* EU students are eligible provided they have been resident in the UK for at least 3 years. See the EPSRC website for more details.

Data-driven stochastic modelling of Zebrafish locomotion

Ever wondered how you can write down equations that govern the behaviour of a moving creature? Let’s start simple and try something like a fish. (OK, a fish isn’t that simple…) It turns out that Zebrafish are quite a convenient animal to study and one of our collaborators (Maurizio Porfiri from New York University, USA) has great facilities for the fish to happily swim around in while being tracked by video cameras.

With the data from Maurizio’s lab, we have constructed a mathematical model of Zebrafish locomotion based on stochastic differential equations (SDEs), that is, equations that govern the rates of change of speed and of turning speed (the differential bit) and are driven by random noise (the stochastic bit). The randomness is there to provide a mathematical description of all the seemingly erratic behaviours that the Zebrafish exhibit which we don’t really understand — we can match the statistics of the randomness to the statistics of the erratic behaviour but we can’t say anything about the higher level thought (if that is an appropriate description for a fish!) that is generating it.

The SDEs are relatively simple (if you are used to such things) and take the form
$$ \begin{gathered}
\mathrm{d}U = -\theta_u(U - \mu_u - U^*)\mathrm{d}t + \sigma_u\mathrm{d}W_1(t) \\
\mathrm{d}\Omega = -\theta_\omega(\Omega - \Omega^*)\mathrm{d}t + \sigma_\omega\mathrm{d}W_2(t)
\end{gathered} $$
where $U$ is the speed and $\Omega$ is the turning speed. This is an Itô stochastic differential equation (though perhaps it should be Stratonovich…). It is in effect a mean-reverting, or Ornstein–Uhlenbeck, process. In short, the speed relaxes towards the value $\mu_u + U^*$ at a rate $\theta_u$, where $\mu_u$ is the natural speed of the Zebrafish and $U^*$ is a modification to the natural speed due to external factors (e.g., getting close to a wall or other Zebrafish). The same description holds for the turning speed equation (except that the natural turning speed is zero and so not included). The terms $U^*$, $\Omega^*$ and $\sigma_u$ are not actually constants but functions of the fish position relative to other fish and any obstacles.
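To get a feel for the model, here is a minimal Euler-Maruyama simulation of the speed equation; the parameter values are purely illustrative (the fitted values are in the paper) and external influences are switched off ($U^* = 0$):

```julia
# Euler-Maruyama discretisation of dU = -θu*(U - μu - U*)dt + σu*dW₁(t)
# NB: the parameter values here are illustrative only, not the fitted ones
θu, μu, σu = 4.0, 0.1, 0.05
Ustar = 0.0                 # no walls or other fish nearby
dt, nsteps = 0.01, 1000
U = zeros(nsteps + 1)
U[1] = μu                   # start at the natural speed
for k in 1:nsteps
    U[k+1] = U[k] - θu*(U[k] - μu - Ustar)*dt + σu*sqrt(dt)*randn()
end
# U now holds one sample path of the mean-reverting speed process
```

Swapping the drift for $-\theta_\omega(\Omega - \Omega^*)$ gives the turning-speed equation in exactly the same way.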

Take a read of the article and see what you think! (The article is open access to all.)

If you like this, a follow on post will provide a nice graphical interface for playing with these equations (written in Python).

Reference: Data-driven stochastic modelling of zebrafish locomotion Adam Zienkiewicz, David A.W. Barton, Maurizio Porfiri and Mario di Bernardo, Journal of Mathematical Biology 71(5) 2015 pp. 1081–1105. DOI: 10.1007/s00285-014-0843-2 (open access).