Adaptive systems that learn about their environment much as a toddler explores its surroundings could form the heart of flexible robots, road-traffic monitoring and surveillance systems.
A European project called COgnitive Systems using Perception-Action Learning (COSPAL), led by Sweden's Linköping University, is developing learning systems to make automated applications more flexible and adaptable.
According to Prof Gösta Granlund of the university’s Electrical Engineering department, the research could ultimately lead to genuinely intelligent systems which use learning rather than programming.
‘The type of system level we want to explore is fairly low level, meaning when a new system “wakes up,” like a young child it doesn’t have any concept of “what is space,” or how the external world appears and behaves,’ said Granlund.
‘One of the fundamental features we are investigating is exploratory learning — having our systems learn the way a toddler does,’ he added.
In typical systems, perception precedes action — a robot programmed to catch a ball would need to recognise the ball and calculate its parabola, for example. This project uses the notion that action precedes perception. In other words, the system acts first, observes the effect and learns about the behaviour of the world from it.
The system learns through a core program that prompts it to move, then observes and processes the results.
‘In our system, the robot moves its gripper, or different parts of its limbs, then through feedback systems it finds out how parts of itself work, and, in turn, how space works,’ said Granlund. ‘What’s important is that it’s not the vision or perception part that is important, it’s the action — the motor system — a concept called embodiment.
‘In the same way a congenitally blind person can develop a good sense of space by walking around and touching things, it’s a combination of actuation and some type of sensory response. But vision is not critical, although it’s very convenient, because it has a large capacity.’
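The loop Granlund describes can be pictured as a very simple piece of code. The sketch below is purely illustrative and not the COSPAL software: a hypothetical learner issues motor commands before it has any model of the world, observes the sensory consequences and builds its model of "how space works" from that feedback. All class and method names here are invented for the example.

```python
import random

class ToyGripperWorld:
    """Stand-in environment: each motor command shifts the gripper and
    reports where it ended up, which is all the sensory feedback we model."""
    def __init__(self):
        self.position = 0

    def respond(self, command):
        self.position += command          # effect of the motor command
        return self.position              # sensory observation of the result

class ExploratoryLearner:
    """Action-first learning: act, observe the effect, store the experience."""
    def __init__(self, commands):
        self.commands = commands          # motor commands available to try
        self.experience = []              # (command, before, after) triples

    def explore(self, world, steps=50):
        for _ in range(steps):
            before = world.position
            command = random.choice(self.commands)   # act first...
            after = world.respond(command)           # ...then perceive the effect
            self.experience.append((command, before, after))

    def predicted_effect(self, command):
        # Average observed displacement for a command: the beginnings of a
        # learned model of how the system's own body moves through space.
        deltas = [a - b for c, b, a in self.experience if c == command]
        return sum(deltas) / len(deltas) if deltas else None

world = ToyGripperWorld()
learner = ExploratoryLearner(commands=[-1, 0, 1])
learner.explore(world)
print(learner.predicted_effect(1))   # approaches 1.0 once learned from experience
```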
In humans, image processing takes place in the right side of the brain, which is also where perception/action associations are formed. In a computer, the equivalent mechanism would be called a reactive control system, acting in real time.
‘Traditionally, developers would apply an AI engine to a controller; say the motor system of a robot. This gives a tight connection between the percept analysis part and the actuation of the motor. We put these two together and add symbolic logic,’ said Granlund.
In addition to continuous mapping between sensors and actuators, COSPAL also uses symbolic processing to deal with space and time in real time, a concept dubbed generalisation. The symbolic memory stores information and manipulates it in a similar way to AI. The architecture behind COSPAL makes both of these work together.
The benefit of the technology over traditional AI is that it operates faster: it sets up the circuits for a direct connection so it can manipulate an object, such as picking up a glass, and can also plan future manipulations of it.
‘This gives the advantage of combining direct-coupled speed and the type of symbolic processing which is required for us to get really intelligent and adaptive systems,’ said Granlund. ‘Until now, there has been AI, which made much of symbolic processing, and neural processing. But in a truly useful system, you have to have something that works like a combination of both.’
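In rough outline, the architecture the article describes combines two layers: a fast, direct-coupled mapping from sensors to actuators and a slower symbolic memory that stores facts and plans ahead. The sketch below is a hedged illustration under those assumptions, with hypothetical names, not the COSPAL implementation.

```python
class ReactiveController:
    """Continuous sensor-to-actuator mapping: runs every control cycle."""
    def command(self, sensor_reading):
        # e.g. steer the gripper toward the sensed object position
        return -0.5 * sensor_reading      # simple proportional response

class SymbolicPlanner:
    """Symbolic memory and planning: operates on discrete facts, not signals."""
    def __init__(self):
        self.facts = set()

    def observe(self, fact):
        self.facts.add(fact)              # e.g. "glass is on the table"

    def plan(self, goal):
        # Return abstract steps; each is later carried out by the reactive
        # layer in real time.
        return ["reach", "grasp", "lift"] if goal == "pick up glass" else []

class HybridAgent:
    """Combines direct-coupled speed with symbolic processing."""
    def __init__(self):
        self.reactive = ReactiveController()
        self.symbolic = SymbolicPlanner()

    def act(self, sensor_reading, goal):
        steps = self.symbolic.plan(goal)               # slow, deliberative
        motor = self.reactive.command(sensor_reading)  # fast, direct-coupled
        return steps, motor

agent = HybridAgent()
agent.symbolic.observe("glass is on the table")
print(agent.act(sensor_reading=0.2, goal="pick up glass"))
```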
The team’s next project — dubbed DIPLECS — is underway, headed by Granlund’s collaborator Michael Felsberg. It is looking at using the technology for road traffic safety systems.
‘There you have a fairly complex scenario, but you also have the market demand for large volumes which are needed to support development and production,’ said Granlund.
Another key area could be military and civilian surveillance, where the system could look for, and react to, anomalies in a highly complex environment. In addition to security applications, it could also help to ensure safety in dangerous industrial environments.
It could also be used to produce generic robotic control systems.
‘Today you build one system for one application, then if you need another application you have to redo nearly everything, which is expensive,’ said Granlund. ‘The idea is to get systems which are easier to train for different purposes, and are therefore more economical.’
By the end of the three-year COSPAL project, the researchers hope to have a system architecture that could be applied across these diverse potential applications.
Other partners in COSPAL include Surrey University, Germany’s Kiel University and Prague University in the Czech Republic.