The brain is the most complex biological structure known and, when one considers its size in relation to the plethora of functions it performs, one quickly realises that the performance of even the most complex microprocessor architectures developed by man pales in comparison.
But, while studying the brain might be quite fascinating, the subject might not initially appear all that relevant to those of us involved in the more traditional field of electronic engineering. After all, the masses of individual neurons and synapses within the brain are markedly removed from the architectures of the programmable silicon-based processors that are widely deployed in industrial control systems.
Nothing, however, could be further from the truth. You see, the study of how these individual neurons and synapses co-operate in their millions to enable our bodies to perform any number of functions could actually teach us how to build more sophisticated systems from more traditional microprocessors and field-programmable gate arrays.
To develop such complex systems, however, will require a much more multidisciplinary approach to systems design than the one embraced by manufacturers at present.
In the future, teams of hardware and software engineers working to develop advanced systems will need to be complemented by natural scientists (who have a firm understanding of the processes the brain uses to perform such functions as vision, hearing and speech), as well as by mathematicians (who are able to model those processes precisely so that their algorithms can then be implemented on more traditional silicon-based systems).
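To make that idea concrete, here is a minimal sketch of one of the simplest mathematical neuron models of this kind, the leaky integrate-and-fire neuron, in which a membrane voltage charges towards a threshold and emits a "spike" when it crosses it. The parameter values below are chosen purely for illustration, not drawn from any particular system:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_threshold=-0.050, r_m=1e7):
    """Simulate a leaky integrate-and-fire neuron.

    input_current: injected current samples (amps), one per time step.
    All parameter values are illustrative only.
    Returns the membrane voltage trace and the indices of spike times.
    """
    v = v_rest
    voltages, spikes = [], []
    for i, current in enumerate(input_current):
        # The membrane voltage decays towards rest and is driven by the input.
        v += (-(v - v_rest) + r_m * current) * (dt / tau)
        if v >= v_threshold:
            spikes.append(i)   # The neuron fires...
            v = v_reset        # ...and its membrane potential resets.
        voltages.append(v)
    return np.array(voltages), spikes

# Drive the model neuron with a constant 2 nA current for 100 ms.
trace, spike_times = simulate_lif(np.full(100, 2e-9))
print(f"{len(spike_times)} spikes in 100 ms")
```

Models like this one are simple enough to run in their thousands on conventional processors or FPGAs, which is precisely why the mathematician's translation step matters.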
If this all sounds pretty far-fetched, it isn't. Some companies have already embraced the idea and have built sophisticated image-processing systems that model the way the brain interprets images to extract important information from them: information that would be impossible to detect by conventional means.
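One classic example of this kind of biologically inspired processing (a sketch of the general technique, not of any particular company's product) is the difference-of-Gaussians filter, which mimics the centre-surround receptive fields of cells in the retina: it suppresses uniform regions and responds strongly to edges and small features.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def centre_surround(image, sigma_centre=1.0, sigma_surround=3.0):
    """Difference-of-Gaussians filter: a crude model of the
    centre-surround receptive fields found in the retina.
    Sigma values here are illustrative only."""
    centre = gaussian_filter(image.astype(float), sigma_centre)
    surround = gaussian_filter(image.astype(float), sigma_surround)
    return centre - surround

# A flat scene with one bright feature: the filter responds only at the feature.
scene = np.zeros((64, 64))
scene[30:34, 30:34] = 1.0
response = centre_surround(scene)
print(f"peak response {response.max():.3f} at the feature, "
      f"{abs(response[0, 0]):.3f} in the uniform background")
```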
In the future, however, it's going to get more complex. Design engineers won't rest on their laurels after creating systems based simply on a few mathematical models capable of processing data from only a single source, such as an image.
That's right. Just as our brain interprets its surroundings using a variety of sensory data (smells, sounds and touch, as well as vision), so future systems will be capable of interpreting data from all these sources simultaneously, a move that will undoubtedly lead to the development of even more sophisticated products for industrial applications.
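As an illustration of the principle underlying such multi-sensory systems (a minimal sketch; real sensor-fusion systems are far more elaborate), independent readings of the same quantity from dissimilar sensors can be combined by weighting each sensor according to its reliability. The sensor names and figures below are hypothetical:

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Fuse independent sensor estimates of the same quantity by
    inverse-variance weighting: more reliable sensors (lower variance)
    receive proportionally more weight."""
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights * np.asarray(estimates, dtype=float)) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)
    return fused, fused_variance

# Hypothetical readings of one object's range from three dissimilar sensors:
# a camera, a microphone array and a touch probe, each with its own noise level.
fused, var = fuse_estimates(estimates=[2.10, 1.95, 2.02],
                            variances=[0.04, 0.09, 0.01])
print(f"fused estimate: {fused:.3f} m (variance {var:.4f})")
```

The fused result has a lower variance than any single sensor's reading, which is exactly why combining dissimilar senses pays off, both in the brain and in silicon.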
Eventually, of course, one can envisage systems that are able to configure themselves dynamically to pull whatever relevant data they need to perform specific functions directly from their surroundings, without the need for any human involvement. That scenario, however, might be a little further off, but not by much.
Best wishes,
Dave Wilson
Editor, Electronicstalk
Dave's comments form part of the weekly Electronicstalk newsletter, which also includes a round-up of the latest electronic products and services for engineers.