Readers of a certain age will have fond memories of the BBC Micro. Purchased in bulk by UK schools in the 1980s from Acorn, it gave many their first taste of computing and set plenty of programmers on their career path. Many credit this machine with seeding the UK’s enviable software industry.
Professor of Computer Engineering, University of Manchester
Education
- 1978 BA in Mathematics, University of Cambridge
- 1980 PhD in Aerodynamics, University of Cambridge
- 1981 Completed Rolls-Royce fellowship at Emmanuel College, Cambridge
Career
- 1981-90 Head of advanced R&D at Acorn Computers; principal architect of BBC Micro and its successors
- 1983-85 Designed hardware organisation and logic of ARM1 microprocessor
- 1985-87 Designed logic for main processor of Acorn Archimedes
- 1990 ICL Professor of Computer Engineering, University of Manchester
- 1999 Fellowship of the Royal Academy of Engineering
- 2001-04 Head of Computer Science, University of Manchester
- 2002 Fellowship of the Royal Society
- 2005 Began SpiNNaker project
- 2007 IET Faraday Medal
- 2008 Awarded CBE for services to computer science
- 2010 Named Millennium Technology Prize laureate
Q&A: Grey matter
How did you become interested in neuroscience?
It started off with an interest in trying to understand how the brain uses associative memories. Electronic associative memory is very brittle: give it exactly the right input and you get the right output; give it a slightly wrong input and you get nothing. It’s clear that, in biology, associations are much looser, and a lot of human creativity seems to come from slightly confused associations, or from spotting associations that nobody else has seen.
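To make that brittleness concrete, here is a toy sketch of our own (not the interviewee’s), assuming binary string keys with Hamming distance standing in for a ‘slightly wrong’ input: an exact-match store returns nothing for a near-miss key, while a looser, distance-based recall still finds the right association.

```python
def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length keys differ."""
    return sum(x != y for x, y in zip(a, b))

# A conventional electronic associative memory behaves like a dict:
# the key must match exactly, or nothing comes back.
memory = {"10110": "cat", "01101": "dog", "11100": "bird"}

print(memory.get("10110"))  # exact key   -> 'cat'
print(memory.get("10111"))  # one bit off -> None (brittle)

# A looser, brain-like association returns the value stored under
# the closest key, so a near-miss still recalls something sensible.
def fuzzy_recall(key: str) -> str:
    nearest = min(memory, key=lambda k: hamming(k, key))
    return memory[nearest]

print(fuzzy_recall("10111"))  # one bit off -> still 'cat'
```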
It’s still a leap from chips to neurons.
Well, in thinking through the problem of how to build chip memories that had the sort of fuzzy capabilities humans display, I found that every line of thinking led back to reinventing neural networks in some form or another. Any way I thought about it, I ended up with something like what the rest of the world would call an artificial neuron.
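What ‘something like an artificial neuron’ amounts to can be shown in a few lines. This is a generic illustration rather than anything specific to the project: the classic weighted-sum-and-threshold unit, with arbitrary example weights.

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

print(neuron([1, 0, 1], [0.5, 0.9, 0.4], threshold=0.8))  # -> 1 (fires)
print(neuron([0, 1, 0], [0.5, 0.9, 0.4], threshold=1.0))  # -> 0 (stays silent)
```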
But SpiNNaker goes beyond neurons.
I looked for gaps in knowledge, and the biggest seemed to lie between the neuron and the whole brain. We can isolate neurons and look at them on the bench, and we can also look at activity on the macro scale with fMRI, where you can see areas of the brain (millions of neurons) light up. But the more I thought about it, the more I became convinced the real action is between those two levels: which neurons of those millions are active at any one time. You can’t see that with fMRI, because the resolution’s too low, and with probes in brain tissue you can’t be sure which neuron they’re looking at. But in computing you can build a model to test a hypothesis.
How do you start?
With the chip. About five years ago we came up with an architecture based on the ARM processor. We’ve now got the first test chip, have built small systems around it, and have designed the full chip. We hope to get that fabricated by the autumn, and then we’ll build up to systems of interesting scale.
What’s an ‘interesting scale’?
We’re aiming for a machine with a million ARM processors in it. We’re doing the modelling in software, which gives us some flexibility. We can run about a thousand neuron models per processor in real time, so with the whole machine we can model a billion neurons. That’s about one per cent of a human brain. We’ll have 18 processors per chip, so 55,000 chips, and each will have 128MB of memory, so that’s 7TB.
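The figures quoted hang together; here is a quick back-of-envelope check (ours, using the numbers given above and decimal megabytes).

```python
processors = 1_000_000           # target machine size
neurons_per_processor = 1_000    # real-time software models per processor
neurons = processors * neurons_per_processor
print(f"{neurons:.0e} neurons")  # 1e+09 -- roughly 1% of a human brain

processors_per_chip = 18
chips = processors / processors_per_chip
print(f"{chips:,.0f} chips")     # ~55,556, quoted as 55,000

memory_per_chip_mb = 128
total_tb = chips * memory_per_chip_mb / 1_000_000  # MB -> TB (decimal)
print(f"{total_tb:.1f} TB")      # ~7.1 TB, quoted as 7TB
```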
It’s still only a fraction of the human brain.
It should be big enough to work on areas such as low-level vision. You can recognise your mother, for example, whether she’s right in front of you or 20ft away, yet it’s very hard to program a computer to do that. If we can understand how to do it without complex 3D trigonometric calculations, it’ll go a long way towards understanding sight.
Will this also help make better computers?
As transistors shrink, they become less controllable and less reliable, and we don’t know how to build complex electronics using unreliable components. Over a lifetime, the brain loses one per cent of its neurons. We can’t build chips that’ll work if one per cent of the components fail. But the brain does: it actually exploits the diversity of different types of cell. If we can find out something about that, that’s progress on many levels.