In a paper published in Advanced Materials Technologies, the group explained how it developed convolutional neural networks (CNNs), a popular type of algorithm primarily used to process images and videos, to predict whether a part will be of acceptable quality by looking at as little as 10 milliseconds of video.
"This is a revolutionary way to look at the data that you can label video by video, or better yet, frame by frame," said principal investigator and LLNL researcher Brian Giera. "The advantage is that you can collect video while you're printing something and ultimately make conclusions as you're printing it.”
Giera said that the approach has considerable advantages over post-build sensor analysis, which is expensive. With parts that take days to weeks to print, CNNs could prove valuable for understanding the print process, learning the quality of the part sooner and correcting or adjusting the build in real time if necessary.
The team developed the neural networks using around 2,000 video clips of melted laser tracks under varying conditions, such as speed or power. They scanned the part surfaces with a tool that generated 3D height maps, using that information to train the algorithms to analyse sections of video frames (each area called a convolution).
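As a rough illustration of the kind of model involved (not the team's actual code), the sketch below shows a small PyTorch CNN that takes a short stack of melt-pool video frames and outputs track-quality estimates. The input shape, layer sizes and prediction targets are assumptions made for the example only.

```python
# Minimal sketch (not the authors' code): a small CNN mapping a short clip of
# melt-pool video frames to track-quality targets. Shapes and layer sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class TrackQualityCNN(nn.Module):
    def __init__(self, frames: int = 4):
        super().__init__()
        # Stack the clip's frames as input channels; the convolutions scan
        # local patches of the frames, the "convolution" regions mentioned above.
        self.features = nn.Sequential(
            nn.Conv2d(frames, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Two heads: regress track width statistics, classify continuity.
        self.width_head = nn.Linear(32, 2)       # mean width, width std dev
        self.continuity_head = nn.Linear(32, 1)  # logit: track broken or not

    def forward(self, clip: torch.Tensor):
        x = self.features(clip).flatten(1)
        return self.width_head(x), self.continuity_head(x)

# Example: one clip of 4 greyscale 64x64 frames (roughly 10 ms of video).
model = TrackQualityCNN(frames=4)
clip = torch.randn(1, 4, 64, 64)
width_stats, continuity_logit = model(clip)
```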
LLNL researcher Bodi Yuan, the paper's lead author, developed the algorithms that could automatically label the height maps of each build, and used the same model to predict the width of the build track, whether the track was broken, and the standard deviation of the width. Using the algorithms, the researchers were able to take video of in-progress builds and determine whether the part exhibited acceptable quality. They reported that the neural networks detected whether a part would be continuous with 93 per cent accuracy, and made other strong predictions of part width.
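As a hedged sketch of how an accuracy figure like the reported 93 per cent could be computed, assuming per-clip continuity labels derived from the height-map scans, the helper below compares thresholded model outputs with those labels. The function name and example values are hypothetical.

```python
# Sketch of computing continuity accuracy from model logits and labels
# derived from height-map scans (hypothetical data, not the paper's results).
import torch

def continuity_accuracy(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Fraction of clips where the predicted continuous/broken label matches."""
    preds = (torch.sigmoid(logits) > 0.5).float()
    return (preds == labels).float().mean().item()

# e.g. logits from a model like the one above; labels 1 = continuous, 0 = broken
logits = torch.tensor([2.3, -1.1, 0.7, -0.4])
labels = torch.tensor([1.0, 0.0, 1.0, 1.0])
print(f"continuity accuracy: {continuity_accuracy(logits, labels):.0%}")
```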
The neural networks described in the paper could theoretically be used in other 3D printing systems, Giera said. Other researchers should be able to follow the same formula: creating parts under different conditions, collecting video of the builds and scanning the finished parts to generate height maps, yielding a labelled video set that could be used with standard machine-learning techniques.
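That labelling formula can be sketched as a simple dataset that pairs each recorded clip with targets derived from a post-build height-map scan. The threshold and label extraction below are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch of pairing video clips with height-map-derived labels.
import torch
from torch.utils.data import Dataset

class LabelledClipDataset(Dataset):
    def __init__(self, clips, height_maps):
        # clips: list of (frames, H, W) tensors; height_maps: matching 2D scans
        self.clips = clips
        self.labels = [self._labels_from_height_map(h) for h in height_maps]

    @staticmethod
    def _labels_from_height_map(height_map: torch.Tensor) -> torch.Tensor:
        # Hypothetical labelling: per-row extent above a height threshold gives
        # track width; derive mean width, width std dev and a continuity flag.
        widths = (height_map > 0.1).sum(dim=1).float()
        continuous = float((widths > 0).all())
        return torch.tensor([widths.mean().item(), widths.std().item(), continuous])

    def __len__(self):
        return len(self.clips)

    def __getitem__(self, idx):
        return self.clips[idx], self.labels[idx]
```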
Giera said work still needs to be done to detect voids within parts, which can't be predicted with height-map scans but could be measured using ex situ X-ray radiography.
Researchers also will be looking to create algorithms to incorporate multiple sensing modalities besides image and video. "Right now, any type of detection is considered a huge win. If we can fix it on the fly, that is the greater end goal," Giera said. "Given the volumes of data we're collecting that machine learning algorithms are designed to handle, machine learning is going to play a central role in creating parts right the first time."