Machine learning techniques can detect the whales’ presence by listening to the sounds they make underwater. By locating whales before they approach potentially harmful situations, such as coming into close proximity to large vessels or entering a mitigation zone, the technology aims to protect the animals and avoid costly shutdowns of offshore operations.
Developed in partnership with the Scottish Association for Marine Science (SAMS) and Gardline Geosurvey Limited, the system could improve the right whale’s chances of survival and population recovery, according to lead researcher Dr Ben Milner. Only around 350 North Atlantic right whales remain, of which only around 100 are females of breeding age.
“One of the main reasons why such automated systems have not been deployed has been the lack of confidence in their accuracy as they have been susceptible to the effects of noise,” said Dr Milner. “With this method, the effect of noise has been demonstrated to now be much less and so we hope more likely to be taken as a serious option for right whale detection.”
While conventional methods of locating right whales have relied on observers aboard ships, which can be expensive and challenging in low-visibility conditions, Dr Milner explained that the UEA team’s technique uses supervised learning, specifically a deep convolutional neural network (CNN).
“To train the CNN, we begin by taking around 2,000 examples of right whale upcall and gunshot sounds and a further 1,000 examples of just background noise, all collected through passive acoustic monitoring (PAM) devices,” Dr Milner said.
Upcall tones and gunshot sounds are common vocalisations emitted by right whales, and both can be detected by the technology.
“We convert these to a spectrogram representation which is a two dimensional representation with time along one axis and frequency along the other. The colour or greyscale shows the energy of the signal at that time-frequency point. This essentially converts all of the time-domain samples into a 2D image – which CNNs are known to process very effectively.”
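In outline, that conversion step might look something like the following Python sketch, which uses SciPy to compute a log-magnitude spectrogram; the 2 kHz sample rate and the window settings are illustrative assumptions rather than values reported by the UEA team.

```python
# Hypothetical sketch: convert a mono PAM recording into a log-magnitude
# spectrogram image suitable for a CNN. The 2 kHz sample rate and the
# window/overlap sizes are illustrative assumptions, not values from the study.
import numpy as np
from scipy.signal import spectrogram

def to_spectrogram(samples, sample_rate=2000, nperseg=256, noverlap=192):
    """Return (freqs, times, image) where image is a 2D log-energy array."""
    freqs, times, sxx = spectrogram(samples, fs=sample_rate,
                                    nperseg=nperseg, noverlap=noverlap)
    # Log compression keeps quiet calls visible alongside loud background noise.
    image = 10.0 * np.log10(sxx + 1e-10)
    # Normalise to [0, 1] so every clip lands on a comparable greyscale.
    image = (image - image.min()) / (image.max() - image.min() + 1e-10)
    return freqs, times, image

# Example: a two-second synthetic clip at 2 kHz.
clip = np.random.randn(4000).astype(np.float32)
_, _, img = to_spectrogram(clip)
print(img.shape)  # (frequency bins, time frames)
```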
The CNN is trained to be able to distinguish between the three classes (upcall, gunshot and no whale) from the set of spectrograms, Dr Milner said.
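A minimal, hypothetical version of such a three-class classifier could be written in PyTorch along these lines; the layer sizes and the 128x128 spectrogram input are assumptions made for illustration, not the architecture described in the paper.

```python
# Illustrative three-class CNN (upcall / gunshot / no whale) over spectrogram
# images. Layer sizes and the 128x128 input are assumptions for this sketch.
import torch
import torch.nn as nn

class WhaleCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, 128, 128) spectrogram images
        return self.classifier(self.features(x))

model = WhaleCNN()
batch = torch.randn(4, 1, 128, 128)   # four dummy spectrogram "images"
logits = model(batch)                 # (4, 3) scores for the three classes
print(logits.argmax(dim=1))           # predicted class per clip
```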
Recordings can often be contaminated by unwanted sounds, such as noise from shipping, drilling, piling and seismic activity, as well as calls from other animals, resulting in false detections.
“To address these problems we also developed a method of noise reduction that is applied to the spectrogram representations,” said Dr Milner. “This is based on methods applied to image filtering and uses another CNN but this time not trained for classification, instead it learns the relationship between clean spectrograms and noisy spectrograms. Knowing this, the CNN is able to transform a noisy spectrogram into a clean spectrogram which can then be classified.”
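The denoising idea can be sketched in the same framework: a second CNN that regresses a clean spectrogram from a noisy one, trained with a pixel-wise loss on paired examples. Again, the architecture and training step below are illustrative assumptions rather than the researchers’ exact model.

```python
# Sketch of the denoising step: a CNN trained to map noisy spectrograms to
# clean ones (pixel-wise regression, not classification). The architecture
# and loss are illustrative assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # No pooling, so the output spectrogram keeps the input's size and
        # every time-frequency bin is predicted directly.
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, noisy):
        return self.net(noisy)

denoiser = DenoiseCNN()
optimiser = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative training step on dummy (noisy, clean) spectrogram pairs.
noisy = torch.randn(8, 1, 128, 128)
clean = torch.randn(8, 1, 128, 128)
optimiser.zero_grad()
loss = loss_fn(denoiser(noisy), clean)
loss.backward()
optimiser.step()
# At detection time, the classifier would see denoiser(noisy_spectrogram)
# rather than the raw noisy input.
```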
Dr Milner envisions that the technology could be implemented on buoys, autonomous surface vehicles (ASVs) or gliders to achieve high levels of real-time detection. The researchers’ findings are published in The Journal of the Acoustical Society of America.