William Dorrell



The Efficient Computing Hypothesis

I've worked in a few different areas of neuroscience, machine learning, and physics. But the area of work I'm most passionate about (addicted to, obsessed with, consumed by) is normative theories of brain function. The Efficient Computing Hypothesis was the (grand & wanky) title of my PhD thesis, which was all about building mathematical theories that predict the behaviour of neurons as they perform different algorithms.

That's a lot of words. I gave a talk for non-specialists on it at my PhD defence (attack!), which you can watch here (my talk is the second half; it's only 30 minutes, or 15 on double speed! The first half is my friend Tom giving a really cute talk about his scientific trajectory), and the introduction of my thesis gives a brief overview of the ideas.

In brief, the idea is the following. Your brain is composed of bits that seem to do very prescribed computations: little algorithms running in parallel and talking to one another. We then look at the neurons in those bits and get all confused: 'why is that neuron behaving like that?' My thesis tries to answer these 'why' questions. The theories are based on the following assumption: if we really knew (1) what computation/algorithm a brain area/circuit is performing, and (2) how brains tend to implement things, we should be able to predict neural activity from the supposed algorithm. I build maths that makes exactly this link: you frame an algorithm, and the theory makes implementation assumptions and performs optimisations to predict how the neurons should carry it out.
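To make that concrete, here is a minimal toy sketch of the recipe (my own illustration, not code from the thesis; the particular loss, constraints, and parameters are assumptions made up for the example): pick a computation (a linear readout should recover a circular variable), add two implementation assumptions (firing rates are nonnegative, and energy is costly), and optimise. The optimised activity is the theory's prediction for the tuning curves.

```python
# Toy version of the recipe (illustrative assumptions, not the thesis code):
# (1) the computation: a linear readout must recover a circular variable;
# (2) implementation assumptions: nonnegative firing rates, an energy cost.
import numpy as np

rng = np.random.default_rng(0)

M, N = 100, 8                                      # stimulus grid points, neurons
theta = np.linspace(0, 2 * np.pi, M, endpoint=False)
target = np.stack([np.cos(theta), np.sin(theta)])  # what the readout must produce

R = rng.random((N, M))                             # firing rates: neuron x stimulus
W = 0.1 * rng.standard_normal((2, N))              # linear readout weights

lam, lr = 1e-2, 2e-2                               # energy penalty, step size
for _ in range(10_000):
    err = W @ R - target                           # readout error at every stimulus
    # gradient step on ||W R - target||^2 + lam ||R||^2, projected so rates stay >= 0
    R = np.maximum(R - lr * (W.T @ err + lam * R), 0.0)
    W -= lr * (err @ R.T)                          # readout learns alongside the code

# Rows of R are the predicted tuning curves; in this toy they tend to settle
# into rectified, bump-like curves tiling the circle.
```

The real theories swap in richer computations and constraints, and where possible solve the optimisation analytically rather than numerically, but the logic is the same.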

This sounds mad, but it turns out it can work, predicting neural activity, sometimes with exquisite precision. People have played with theories like this for 75 years, with some success. But the existing theories are either beautiful but insufficiently rich (the Efficient Coding Hypothesis) or flexible but insufficiently simple (Connectionism: modelling neural activity with neural networks, which is progress, but often just as confusing as the brain itself), leaving us confused. My thesis developed a version, the Efficient Computing Hypothesis, that pairs the understandability/tractability/beauty of the Efficient Coding Hypothesis with the computing abilities of Connectionism. We then used it to understand a few representations from around cortex.

It remains a source of continual wonder to me that I can scratch arcane maths symbols on a page and have a reasonable hope of predicting what the brain will do. In my next round of research I aim to extend, develop, elaborate and generally better-ify these theories, making them as useful as possible for people studying how brains and artificial neural networks implement computations.

Major projects I really worked on in my PhD:

Along with some projects and collaborations I did before my PhD or on the side of the main stuff, none of which I'm actively working on: