UK computer scientists have developed AMLAS, a process to help make machine learning (ML) for autonomous technologies safe.
Developed by the Assuring Autonomy International Programme (AAIP) at the University of York, AMLAS (Assurance of Machine Learning for use in Autonomous Systems) is designed to help engineers build a safety case that establishes confidence in the ML before it is handed over to end users. AAIP worked with industry experts to develop the process, which systematically integrates safety assurance into the development of ML components.
In a statement, Dr Richard Hawkins, senior research fellow and one of the authors of AMLAS, said: "The current approach to assuring safety in autonomous technologies is haphazard, with very little guidance or set standards in place. Sectors everywhere struggle to develop new guidelines fast enough to ensure that robotics and autonomous systems are safe for people to use.
"If the rush to market is the most important consideration when developing a new product, it will only be a matter of time before an unsafe piece of technology causes a serious accident."
The AMLAS methodology has been used in several applications, including transport and healthcare. In one of its healthcare projects, AAIP is working with NHS Digital, the British Standards Institution, and Human Factors Everywhere to use AMLAS to help create resources that help manufacturers meet the regulatory requirements for their ML healthcare tools.
Dr Ibrahim Habli, reader at the University of York and another of the authors, said: "Although there are many standards related to digital health technology, there is no published standard addressing specific safety assurance considerations, and there is little published literature supporting the adequate assurance of AI-enabled healthcare products.
"AMLAS bridges a gap between existing healthcare regulations, which predate AI and ML, and the proliferation of these new technologies in the domain."
"AMLAS can help any business or individual with a new autonomous product to systematically integrate safety assurance into the development of the ML components," added Dr Hawkins. "We show how you can provide a persuasive argument about your ML model to feed into your system safety case. Our research helps us understand the risks and limits to which autonomous technologies can be shown to perform safely."
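To illustrate the kind of evidence such an argument rests on, here is a minimal, hypothetical sketch of verifying an ML model against an explicit safety requirement and recording the result for a safety case. The function name, the toy verification data, and the 20% error-rate threshold are all illustrative assumptions for this article, not part of the AMLAS guidance itself.

```python
# Hypothetical sketch: checking a candidate ML model against a stated
# safety requirement and producing an evidence record for a safety case.
# The requirement, data, and threshold below are illustrative only.

def verify_against_requirement(predictions, labels, max_error_rate):
    """Compare model outputs with ground truth on a held-out verification
    set and return a structured evidence record."""
    errors = sum(p != y for p, y in zip(predictions, labels))
    error_rate = errors / len(labels)
    return {
        "requirement": f"error rate <= {max_error_rate}",
        "observed_error_rate": error_rate,
        "satisfied": error_rate <= max_error_rate,
    }

# Toy verification set of ten cases containing one misclassification.
evidence = verify_against_requirement(
    predictions=[1, 0, 1, 1, 0, 0, 1, 0, 1, 1],
    labels=[1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    max_error_rate=0.2,
)
print(evidence["satisfied"])  # one error in ten: rate 0.1, requirement met
```

In practice an assurance case would combine many such records with arguments about data coverage, operating context, and deployment monitoring; the point of the sketch is only that each safety claim is tied to explicit, checkable evidence.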