A team led by Oxford University, in collaboration with Carnegie Mellon University, claims to have developed a method of hiding sensitive information inside ordinary content that leaves no statistical trace of its presence. The researchers believe their method could soon be used in digital human communications, including social media and private messaging. The ability to send ‘perfectly secure’ information could be advantageous to vulnerable groups, including dissidents, investigative journalists, and humanitarian aid workers.
According to Oxford University, the algorithm applies to steganography, the practice of hiding sensitive information inside innocuous content. Steganography differs from cryptography in that the sensitive information is concealed in such a way that the very fact that something has been hidden is obscured.
Although steganography has been studied for over 25 years, existing approaches generally offer only imperfect security: because previous steganography algorithms subtly change the distribution of the innocuous content, individuals using these methods risk being detected.
To overcome this, the research team used recent breakthroughs in so-called minimum entropy coupling, which makes it possible to join two distributions of data into a single joint distribution in which their mutual information is maximised while the individual (marginal) distributions are preserved.
As a result, with the new algorithm, there is no statistical difference between the distribution of the innocuous content and the distribution of content that encodes sensitive information.
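To give a flavour of what such a coupling looks like, the sketch below builds one for two small toy distributions using a simple greedy heuristic. The function name greedy_coupling, the toy numbers and the heuristic itself are illustrative assumptions rather than the team's published algorithm; the point is only that the resulting joint table has high mutual information while its row and column sums reproduce the two original distributions exactly.

```python
import numpy as np

def entropy(x):
    """Shannon entropy in bits of a probability vector (zeros ignored)."""
    x = x[x > 0]
    return float(-(x * np.log2(x)).sum())

def greedy_coupling(p, q):
    """Greedily pair the largest remaining masses of p and q.

    Returns a joint table whose row sums equal p and column sums equal q,
    with low joint entropy (hence high mutual information). This is a
    simple illustrative heuristic, not the paper's exact procedure.
    """
    p_rem, q_rem = np.array(p, dtype=float), np.array(q, dtype=float)
    joint = np.zeros((len(p_rem), len(q_rem)))
    while p_rem.sum() > 1e-12 and q_rem.sum() > 1e-12:
        i, j = int(p_rem.argmax()), int(q_rem.argmax())
        m = min(p_rem[i], q_rem[j])
        joint[i, j] += m
        p_rem[i] -= m
        q_rem[j] -= m
    return joint

# Toy example: p could stand for (encrypted) message symbols, q for the
# innocuous content distribution, e.g. a model's next-token probabilities.
p = np.array([0.5, 0.25, 0.25])
q = np.array([0.6, 0.3, 0.1])
joint = greedy_coupling(p, q)

print(joint)
print("row sums :", joint.sum(axis=1))   # reproduces p exactly
print("col sums :", joint.sum(axis=0))   # reproduces q exactly
mutual_info = entropy(p) + entropy(q) - entropy(joint.ravel())
print("mutual information (bits):", round(mutual_info, 3))
```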
The algorithm was tested using several types of models that produce auto-generated content, such as GPT-2, an open-source language model, and WAVE-RNN, a text-to-speech converter. Besides being perfectly secure, the new algorithm showed up to 40 per cent higher encoding efficiency than previous steganography methods across a variety of applications, enabling more information to be concealed within a given amount of data. This may make steganography an attractive method even if perfect security is not required, due to the benefits for data compression and storage.
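The following toy continuation suggests how a coupling of this kind might be used to hide data, under the same illustrative assumptions; it is not the published method, and it omits the decoding step the real algorithm provides. Encrypted message symbols, which look uniformly random, form one marginal and a stand-in 'content model' forms the other. Sampling the covertext conditionally on the hidden message symbol means that, averaged over messages, the transmitted content follows the content model's distribution exactly, which is the sense in which an observer sees no statistical difference.

```python
import numpy as np

def greedy_coupling(p, q):
    """Greedy low-entropy coupling of p and q (as in the sketch above)."""
    p_rem, q_rem = np.array(p, dtype=float), np.array(q, dtype=float)
    joint = np.zeros((len(p_rem), len(q_rem)))
    while p_rem.sum() > 1e-12 and q_rem.sum() > 1e-12:
        i, j = int(p_rem.argmax()), int(q_rem.argmax())
        m = min(p_rem[i], q_rem[j])
        joint[i, j] += m
        p_rem[i] -= m
        q_rem[j] -= m
    return joint

rng = np.random.default_rng(0)

# Hypothetical set-up: four equally likely (encrypted) message symbols and a
# toy "content model" over four covertext symbols (stand-ins for e.g. tokens).
message_dist = np.full(4, 0.25)
content_dist = np.array([0.4, 0.3, 0.2, 0.1])
joint = greedy_coupling(message_dist, content_dist)

def embed(message_symbol):
    """Sample a covertext symbol given the hidden message symbol."""
    conditional = joint[message_symbol] / message_dist[message_symbol]
    return rng.choice(len(content_dist), p=conditional)

# Simulate many transmissions: messages drawn uniformly, covertext sampled
# via the coupling. The observed covertext frequencies match content_dist,
# so stego traffic is indistinguishable from ordinary model output.
messages = rng.integers(0, 4, size=100_000)
covertexts = np.array([embed(m) for m in messages])
observed = np.bincount(covertexts, minlength=4) / len(covertexts)
print("content distribution :", content_dist)
print("observed frequencies :", np.round(observed, 3))
```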
In a statement, co-lead author Dr Christian Schroeder de Witt, from Oxford’s Department of Engineering Science, said: ‘Our method can be applied to any software that automatically generates content, for instance probabilistic video filters, or meme generators. This could be very valuable, for instance, for journalists and aid workers in countries where the act of encryption is illegal. However, users still need to exercise precaution as any encryption technique may be vulnerable to side-channel attacks such as detecting a steganography app on the user’s phone.’
The research team has filed a patent for the algorithm, but intends to issue it under a free licence to third parties for non-commercial responsible use. This includes academic and humanitarian use, and trusted third-party security audits. The researchers have published the work as a preprint paper on arXiv and have open-sourced an inefficient implementation of their method on GitHub. They will also present the new algorithm at the premier AI conference, the 2023 International Conference on Learning Representations, in May.
Co-lead author Samuel Sokota (Machine Learning Department, Carnegie Mellon University) said: ‘The main contribution of the work is showing a deep connection between a problem called minimum entropy coupling and perfectly secure steganography. By leveraging this connection, we introduce a new family of steganography algorithms that have perfect security guarantees.’
The study involved Prof. Zico Kolter at Carnegie Mellon University, USA, and Dr Martin Strohmeier from armasuisse Science+Technology, Switzerland. The work was partially funded by an EPSRC IAA Doctoral Impact fund hosted by Professor Philip Torr, Torr Vision Group, at Oxford University.