We haven't quite reached the stage where robots constantly fly above our heads monitoring traffic or drive around cleaning our streets. But the public is increasingly aware of the development of autonomous vehicles, due, perhaps unfortunately, to the growing use of military drones and their alleged killing of hundreds of civilians in places like Pakistan.
Within a few years, unmanned aerial vehicles (UAVs) will likely be allowed to fly in regulated airspace in the US and Europe, opening up huge possibilities for civilian applications, from crop management to climate data gathering. And much of the technology that will power these devices, from propulsion systems to sensors to control software, already exists. The issue at the moment is convincing regulators that the technology is safe and secure, and building a legal framework within which autonomous vehicles can operate.
The next challenge, however, will be enabling groups of different devices, whether they're ground-, air- or sea-based, to work together in coordinated operations. For example, a search and rescue mission could use UAVs to look for people lost at sea and, once they've been found, direct unmanned surface vehicles (USVs) to go and pick them up.
Work is already being done to enable similar autonomous vehicles to operate in swarms, communicating with each other to determine the best way to move individually to achieve their group goals. But it can be much more difficult to get different types of vehicle to work together, especially if they were created by different groups and have different control systems. Add humans into the mix and things can become even trickier.
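To give a flavour of what that swarm-level coordination involves, here is a minimal, purely illustrative sketch: each vehicle repeatedly nudges its heading towards the average heading of the neighbours it can communicate with, a simple consensus step. The Vehicle class, the 50m communication range and the update rule are assumptions made for the example, not any particular project's algorithm.

```python
# Illustrative only: vehicles repeatedly average their headings with nearby
# neighbours until the swarm converges on a shared direction. This is a
# generic consensus step, not any specific project's control algorithm.
import math
import random

COMM_RANGE = 50.0  # assumed communication radius, metres

class Vehicle:
    def __init__(self, x, y, heading):
        self.x, self.y = x, y
        self.heading = heading  # radians

    def neighbours(self, swarm):
        """Other vehicles close enough to exchange messages with."""
        return [v for v in swarm
                if v is not self
                and math.hypot(v.x - self.x, v.y - self.y) <= COMM_RANGE]

    def consensus_step(self, swarm):
        """Nudge this vehicle's heading towards the local average."""
        near = self.neighbours(swarm)
        if not near:
            return
        avg = math.atan2(sum(math.sin(v.heading) for v in near),
                         sum(math.cos(v.heading) for v in near))
        diff = math.atan2(math.sin(avg - self.heading),
                          math.cos(avg - self.heading))
        self.heading += 0.5 * diff

swarm = [Vehicle(random.uniform(0, 100), random.uniform(0, 100),
                 random.uniform(-math.pi, math.pi)) for _ in range(10)]
for _ in range(20):  # a few rounds of local communication
    for v in swarm:
        v.consensus_step(swarm)
```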
An EU-funded project known as ICARUS is hoping to demonstrate how such a multi-vehicle system might operate. In 2015 they plan to run two scenarios to show off their efforts: one will simulate an earthquake response with a human team supported by UAVs that provide information from the air and unmanned ground vehicles (UGVs) that will enter collapsed buildings to search for survivors; the second will recreate a sea accident similar to the shipwreck of the Costa Concordia cruise liner that occurred earlier this year, using UAVs and USVs to locate and rescue survivors.
Teams from 24 universities and countries, led by the Royal Military Academy (RMA) of Belgium, will design and build all the elements of the operation, including a lightweight, low-power sensor for detecting signs of human life, all the autonomous vehicles, a self-organising wireless communications network, and all the software to control and operate the robots, including the user interface.
Completing such a project will be an impressive achievement. But the teams do have an advantage in that they will be working together from the start and have members dedicated to integrating all the robots into an overarching collaboration system. If a government or disaster response agency wanted to start incorporating autonomous vehicles into their operations, they would be more likely to struggle with the issue of integrating several different proprietary technologies.
One possible solution could be for more autonomous vehicle builders to use open-source software that can be copied or adapted by anyone, such as the Robot Operating System (ROS). However, while perfect for university research, this could create security issues and leave the robots more vulnerable to hacking by outside parties. Perhaps these risks would be acceptable for some commercial operations – if the companies were willing to give up certain intellectual property rights to their work – but they would likely be more problematic for government agencies and completely unworkable for the military.
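As a rough idea of what building on ROS looks like in practice, the sketch below is a bare-bones rospy node that publishes a status message once a second, following the standard publisher pattern from the ROS tutorials. The topic name, node name and message content are invented for illustration; a real vehicle would expose its own interfaces.

```python
#!/usr/bin/env python
# Minimal ROS publisher node (rospy). Topic and node names are illustrative.
import rospy
from std_msgs.msg import String

def status_reporter():
    pub = rospy.Publisher('vehicle_status', String, queue_size=10)
    rospy.init_node('status_reporter', anonymous=True)
    rate = rospy.Rate(1)  # one message per second
    while not rospy.is_shutdown():
        pub.publish(String(data='status: nominal'))
        rate.sleep()

if __name__ == '__main__':
    try:
        status_reporter()
    except rospy.ROSInterruptException:
        pass
```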
Dr Peter Sapaty from the National Academy of Sciences of Ukraine – who spoke alongside ICARUS project coordinator Prof Geert de Cubber at this week’s Military Robotics conference in London – is hoping to introduce an alternative way of thinking about integrating different robotic systems.
Instead of the more traditional approach of building a static system with central control and assigning it tasks to complete, he argues that teams of robotic vehicles should organise their own structure according to mission scenarios that can be injected into the network at any point, with problems solved in waves of communication and feedback.
Each important point in the network would have a universal control module that communicates with those around it in a special distributed scenario language, creating a higher layer of control that Sapaty calls overoperability (as opposed to the more traditional interoperability).
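Sapaty's scenario language itself can't be reproduced here, but the general idea of injecting a scenario at any node and letting it spread in waves of communication and feedback can be loosely illustrated. The sketch below is an assumption-heavy toy: a hypothetical network of control modules floods a task outwards from whichever node receives it, with each module returning a simple acknowledgement. It is not Sapaty's distributed scenario language, nor the ICARUS architecture.

```python
# Loose illustration only: a task 'injected' at any node spreads outwards to
# neighbouring control modules in waves, and feedback flows back. NOT
# Sapaty's scenario language -- just a toy breadth-first propagation.
from collections import deque

# Hypothetical network of control modules: node -> neighbours it can reach
NETWORK = {
    'uav1': ['uav2', 'ugv1'],
    'uav2': ['uav1', 'usv1'],
    'ugv1': ['uav1'],
    'usv1': ['uav2'],
}

def inject_scenario(start_node, task):
    """Flood the task from the injection point and collect acknowledgements."""
    visited = {start_node}
    wave = deque([start_node])
    feedback = []
    while wave:
        node = wave.popleft()
        feedback.append((node, 'accepted: ' + task))  # local handling stub
        for neighbour in NETWORK[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                wave.append(neighbour)
    return feedback

print(inject_scenario('ugv1', 'search sector B for survivors'))
```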
It's a fascinating concept (and too complex to cover in real detail here) with similarities to much of the work going on in swarm robotics. But with Sapaty apparently set on commercialising his idea rather than opening it up to the developer community, it's not clear whether it would allow robots built with different proprietary technologies to interact, even if they no longer needed to be designed as part of a central control system.
Perhaps there will come a time when it is possible to buy a component that can be attached to any robotic vehicle, letting it talk instantly to any other vehicle fitted with a similar device so that they can develop their own plans. If that could be achieved, the possibilities for coordinated operations using many different autonomous devices would be endless – and quite scary.