AM-SegNet for additive manufacturing in situ X-ray image segmentation and feature quantification

Synchrotron X-ray imaging has been utilised to detect the dynamic behaviour of molten pools during the metal additive manufacturing (AM) process, where a substantial amount of imaging data is generated. Here, we develop an efficient and robust deep learning model, AM-SegNet, for segmenting and quantifying high-resolution X-ray images, and prepare a large-scale database consisting of over 10,000 pixel-labelled images for model training and testing. AM-SegNet incorporates a lightweight convolution block and a customised attention mechanism, capable of performing semantic segmentation with high accuracy (∼96%) and processing speed (<4 ms per frame). The segmentation results can be used for quantification and multi-modal correlation analysis of critical features (e.g. keyholes and pores).


Introduction
Laser additive manufacturing (AM), such as laser powder bed fusion (LPBF) [1][2] and directed energy deposition (DED) [3][4], has attracted a great deal of interest from both academia and industry, offering extraordinary advantages over traditional manufacturing methods. However, some features, e.g. lack of fusion [5,6] and residual porosity [7][8][9], restrict its application in the manufacturing of safety-sensitive components. With the development of synchrotron facilities, it has become possible to exploit in situ high-speed X-ray imaging to gain insights into the complex physical phenomena during the AM process [10][11][12], such as powder melting and solidification, keyhole fluctuation, and defect formation. The dynamic behaviour of the melt pool and critical features has been studied and revealed using synchrotron imaging results. For example, power-velocity process maps [13][14][15] have been defined to directly relate the product quality, e.g. porosity, to the process parameters in LPBF experiments.
Generally, in situ synchrotron experiments are performed at ultra-high temporal and spatial resolutions [16,17], thus generating a large volume of X-ray imaging data and making manual data processing time-consuming and impractical. It therefore becomes essential to develop an efficient and reliable approach to image segmentation and analysis. For example, an automatic detection algorithm [18] involving quotient and intensity-difference calculations and numerical shaping was proposed to detect the melt pool boundaries during the LPBF process of aluminium alloys. Apart from the melt pool boundary, it is of great importance to identify and classify component defects in an efficient and reliable manner. Recently, machine learning techniques, such as support vector machines [19,20], Bayesian classifiers [21] and K-means clustering [8,22,23], have been applied for the detection and classification of manufacturing defects in metal AM processes. However, due to the stochastic nature of melt pool dynamics, it is challenging for traditional machine learning approaches to provide accurate and reliable detection results.
With the rapid advances in computational resources, deep learning methods, especially convolutional neural networks (CNN), are starting to play an important role in the monitoring and quantification of surface defects and other critical features during metal AM processes. For example, CNN models have been used for porosity detection [24], anomaly monitoring [25], and surface quality improvement [26]. However, these studies focus on optical or acoustic signals rather than in situ X-ray imaging results, failing to reveal the dynamic behaviour beneath the component surface. On the other hand, pixel-wise segmentation models, such as U-Net [27] and its variants [28,29], have been used to perform semantic segmentation on synchrotron X-ray images [30,31]. For example, an automatic deep learning segmentation model using U-Net was proposed for the segmentation and annotation of melt pools [32]. However, U-Net and its variants exhibit complicated model architectures and high latency, restricting their potential applications in real-time detection and monitoring of AM processes. Moreover, the diversity of the X-ray image datasets used in existing studies is limited, as these datasets only cover a single synchrotron facility (Advanced Photon Source [30][31][32]) and a narrow range of materials, e.g. Ti-6Al-4V [30] and aluminium alloys [30][31][32]. As a result, the trained segmentation models are not generalisable across a range of manufacturing processes, process parameters, materials, and synchrotron facilities (e.g. beam energy, insertion devices, etc.). The development of a generalisable machine-learning (ML) segmentation model for AM X-ray images has not yet been explored, and none of the existing models provides direct quantification results.
In this study, we develop a novel generalised lightweight neural network, AM-SegNet, to perform semantic segmentation and feature quantification on time-series X-ray images collected from various AM beamtime experiments. For comprehensive model training and testing, we have established a large-scale benchmark database consisting of more than 10,000 pixel-labelled X-ray images. Experimental results indicate that AM-SegNet outperforms other state-of-the-art segmentation models in terms of accuracy, speed and robustness. The well-trained AM-SegNet has been adopted to expedite the quantification of critical features and conduct correlation analysis in the LPBF experiments. The accuracy and efficiency of AM-SegNet are further validated across different types of AM experiments, and for another advanced manufacturing technique, high-pressure die casting (HPDC) [33], bringing real-time automatic segmentation and quantification of X-ray images captured in high-speed synchrotron experiments a step closer.

Architecture of AM-SegNet
To expedite the segmentation and quantification of X-ray images collected from high-speed synchrotron experiments, we propose a novel lightweight network, AM-SegNet (see Figure 1(a)), with the purpose of improving computation efficiency and segmentation speed without compromising model performance. AM-SegNet adopts an encoder-decoder architecture in which a customised lightweight convolution block (see Figure 1(b)) and attention mechanism (see Figure 1(c)) are utilised.
The lightweight convolution block begins with a squeeze convolution layer (1 × 1 kernels) that limits the number of input channels, denoted as n_1, to be processed by the following expand module. The expand module includes: (1) separable convolutions, (2) residual convolution with 1 × 1 kernels and (3) expand convolution with 3 × 3 kernels. Specifically, separable convolution decomposes a regular convolution operation into two separate steps: depth-wise convolution and point-wise convolution. Depth-wise convolution applies a single filter to each input channel, producing a feature map for each input channel separately. All the resulting feature maps are concatenated into a single output tensor and processed by the following point-wise convolution with 1 × 1 filters. Three sets of outputs from the expand layer are concatenated in the concatenation layer, increasing the channel number from n_1 to 4 × n_1. The capability and efficiency of such squeeze-expand operations have been successfully validated in the tasks of image classification and defect detection [34]. In the last encoder step, standard convolutional layers are retained in order to ensure the model's robustness and generalisation and to mitigate over-fitting problems.
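The efficiency gain of the separable convolution described above can be illustrated with a simple parameter count. This is a sketch under assumed layer sizes (64 input and output channels, 3 × 3 kernels, bias terms ignored), not the paper's exact configuration:

```python
# Parameter-count comparison: standard vs depth-wise separable convolution.
# Channel counts and kernel size are illustrative assumptions; biases ignored.

def standard_conv_params(c_in, c_out, k=3):
    # Every output channel convolves all input channels with a k x k kernel.
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k=3):
    # Depth-wise step: one k x k filter per input channel;
    # point-wise step: a 1 x 1 convolution mixing channels.
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

c_in, c_out = 64, 64
print(standard_conv_params(c_in, c_out))   # 36864
print(separable_conv_params(c_in, c_out))  # 4672
```

For these assumed sizes, the separable form needs roughly an eighth of the parameters, which is the main source of the block's lightweight character.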
For better model sensitivity and higher segmentation accuracy, the attention mechanism [35,36] has been introduced to deep neural networks. It has been found that attention gates can help to disambiguate irrelevant and noisy responses and update the model parameters based on spatial regions that are more relevant to the given task. Inspired by this, a customised attention gate is proposed in this study (see Figure 1(c)). The purpose is to highlight the salient features in the last encoding stage without consuming excessive computation resources. The output x′ after the attention gate is obtained by scaling the input x with the attention coefficient α:

x′ = α ⊗ x, α = σ_2(φ(σ_1(w(x))))

where w and φ are linear transformations implemented as 1 × 1 convolutions, and σ_1 and σ_2 refer to ReLU (Rectified Linear Unit) and sigmoid activations, respectively. Here, the ReLU function outputs the input for positive values and zero for negative values, while the sigmoid function transforms input values into a smooth S-shaped curve, mapping them to a range from 0 to 1.
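The attention-gate computation can be sketched in NumPy, treating the 1 × 1 convolutions w and φ as per-pixel linear maps. Feature-map shapes and weight values are illustrative assumptions, not the trained model's parameters:

```python
import numpy as np

# Minimal sketch of the attention gate: alpha = sigma_2(phi(sigma_1(w(x)))),
# then x' = alpha * x. A 1x1 convolution is a linear map applied per pixel.

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, w, phi):
    # x: feature map (H, W, C); w: (C, C); phi: (C, 1).
    alpha = sigmoid(relu(x @ w) @ phi)   # (H, W, 1), values in (0, 1)
    return alpha * x                     # x' = alpha * x, broadcast over C

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 4))
w = rng.normal(size=(4, 4))
phi = rng.normal(size=(4, 1))
x_prime = attention_gate(x, w, phi)
print(x_prime.shape)  # (8, 8, 4)
```

Because α lies in (0, 1), the gate can only attenuate features, suppressing spatial regions that are irrelevant to the segmentation task.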
In this study, AM-SegNet is proposed to perform semantic segmentation on synchrotron X-ray images. Semantic segmentation provides detailed understanding and analysis by assigning a specific label to each pixel within the image. Once a well-trained AM-SegNet is ready, it becomes feasible to perform feature quantification and correlation with high confidence, minimising the time-consuming and subjective problems related to manual analysis.

Benchmark dataset
In this study, we build a large-scale benchmark database for model training and testing. The database encompasses a broad range of synchrotron experiments, incorporating various synchrotron beamlines, powder materials and process parameters. Details of the synchrotron beamlines and X-ray imaging settings are available in Section 2.3. As a result, the database can be utilised by other researchers to benchmark their models' performance against others and to develop novel algorithms or techniques for image segmentation in this domain.
Figure 2(a) presents the pipeline of semantic pixel-labelling of X-ray images, in which flat field correction, background subtraction, image cropping and pixel labelling are executed step by step. Here, background subtraction is applied only if it is difficult to segment the regions of interest from the raw X-ray images. In the pixel-labelling stage, each pixel in the image is given its own corresponding pixel label, i.e. keyhole, pore, substrate, background or powder. Figure 2(b) presents some examples of manually pixel-labelled X-ray images, which are used as ground truth in the following model training and testing steps. Additionally, more in situ synchrotron imaging data from recent studies [8,11,13] are incorporated into the benchmark database in order to improve its universality and generalisation. In the end, a variety of metal materials, process parameters and synchrotron beamlines are covered by the benchmark database, as listed in Table 1. We have also used random cropping, a data augmentation technique, to minimise over-fitting and class-imbalance issues during model training. For data augmentation, a random 10% of the X-ray images are selected and cropped. The newly generated images are then added to the benchmark database to further improve data diversity and generalisation.
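The random-cropping step can be sketched as follows; the image size (128 × 128), crop size (96 × 96) and database size are illustrative assumptions, with only the 10% sampling fraction taken from the text. Note that an image and its pixel-label map must be cropped with the same window so the labels stay aligned:

```python
import numpy as np

# Hedged sketch of random-cropping augmentation: a random 10% of images are
# cropped and the crops appended to the database alongside their label maps.

def random_crop(image, label, crop_h, crop_w, rng):
    h, w = image.shape[:2]
    top = rng.integers(0, h - crop_h + 1)
    left = rng.integers(0, w - crop_w + 1)
    # Identical window for the image and its pixel-label map.
    return (image[top:top + crop_h, left:left + crop_w],
            label[top:top + crop_h, left:left + crop_w])

rng = np.random.default_rng(42)
images = [rng.normal(size=(128, 128)) for _ in range(50)]
labels = [rng.integers(0, 5, size=(128, 128)) for _ in range(50)]

# Select ~10% of the images at random and add the crops to the database.
idx = rng.choice(len(images), size=len(images) // 10, replace=False)
for i in idx:
    img_c, lab_c = random_crop(images[i], labels[i], 96, 96, rng)
    images.append(img_c)
    labels.append(lab_c)
print(len(images))  # 55
```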
Figures 2(c,d) present the distributions of X-ray images and pixel labels in the benchmark database related to the LPBF and welding processes. It can be seen that the percentages of the two critical features, keyhole and pore, are significantly lower than those of the other three labels. Therefore, class weighting is applied for balancing in the training of AM-SegNet and the other CNN models. During the preparation of the benchmark database, some simplifications are adopted, which can introduce minor errors into the ground-truth pixel labels. For example, the spatter ejected from the melt pool during laser scanning is treated as background, and the gaps between large powder particles are ignored. These errors only occur in a few cases and do not affect the overall accuracy and reliability of the pixel-labelling results in the benchmark database. Furthermore, X-ray imaging results collected during DED and HPDC experiments were subsequently added to the database and utilised in Section 3.3 to demonstrate the extended application of AM-SegNet to different advanced manufacturing processes.
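One common way to derive such class weights is inverse-frequency weighting, sketched below. The pixel counts are made-up placeholders (the paper's actual distribution is in Figure 2(d)), and the normalisation choice is an assumption:

```python
import numpy as np

# Illustrative inverse-frequency class weighting for the five pixel labels.
# Pixel counts below are invented placeholders, not the database's values.

labels = ["keyhole", "pore", "substrate", "background", "powder"]
pixel_counts = np.array([2e6, 5e5, 4e7, 6e7, 3e7])

freq = pixel_counts / pixel_counts.sum()
weights = 1.0 / freq
weights = weights / weights.mean()  # normalise so the mean weight is 1

for name, wgt in zip(labels, weights):
    print(f"{name}: {wgt:.3f}")
```

Rare classes such as keyhole and pore receive the largest weights, so their pixels contribute more to the loss and the model is discouraged from ignoring them.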

Synchrotron beamlines and imaging settings
In this study, the benchmark database consists of over 10,000 X-ray images collected from various AM beamtime experiments, involving three synchrotron beamlines: (1) European Synchrotron Radiation Facility (ESRF): In situ operando synchrotron imaging of LPBF and welding experiments (see Table 1) was performed at the ESRF, and the in situ X-ray imaging setup is illustrated in Figure 3. The dynamic behaviour of the molten pool and relevant critical features was imaged at high spatial (4.31 µm/pixel) and temporal (frame rate of 40 kHz) resolutions. Synchrotron experiments were carried out using a custom-designed replicator [8,10], which provides an environmental chamber to accommodate laser scanning and synchrotron X-ray imaging at the same time. (2) Diamond Light Source (DLS): The fast synchrotron imaging of DED experiments was conducted on the DLS I12 beamline. A replicator of the DED process was integrated with the beamline for in situ synchrotron X-ray experiments. Radiographic images were obtained with a pixel size of 6.67 μm at a frame rate of 1 kHz. (3) Advanced Photon Source (APS): This study utilises published X-ray imaging data [11,13] from other synchrotron experiments to validate the segmentation performance of AM-SegNet. The relevant synchrotron experiments were performed at the APS 32-ID-B beamline in the Argonne National Laboratory. Operando X-ray imaging data were collected at a frame rate of 50 kHz and a spatial resolution of ∼2.0 μm/pixel.

Model training and testing
In this study, AM-SegNet and other widely used CNN models, i.e. U-Net and its variants (Res-U-Net and Squeeze-U-Net), are trained and evaluated. In the variants of U-Net, the standard convolution layer is substituted with an equivalent convolution block, e.g. the residual block [37,38] in Res-U-Net. Additionally, the source codes for AM-SegNet and the other CNN models are available in Codes and Videos. In general, a large learning rate enables the model to learn faster but brings with it a risk of sub-optimal results [39]. When the learning rate becomes smaller, the convergence speed drops in the initial stage, and it takes longer to reach the stable stage. To achieve a smooth learning process, we adopt a learning rate scheduling strategy, called annealing learning [40,41], to automatically anneal the learning rate during the training process. In the early stage of network training, a higher learning rate, e.g. 1 × 10⁻³, is used to allow the model to explore a larger portion of the parameter space and achieve a higher convergence speed. As the training progresses and the model gets closer to its optimal solution, a lower learning rate is adopted for further fine-tuning of the model parameters. In this section, all the segmentation models are trained for 100 epochs with an initial learning rate of 1 × 10⁻³ and a batch size of 16 using the Adam solver [42,43]. When compiling the models, the Dice loss and Categorical Focal loss are combined to measure the model loss [44], and the F1-score and Jaccard index, also known as Intersection over Union (IoU), are selected as model metrics [45,46]. Here, the loss function is designed to help address the issue of class imbalance, as it can lead the model to achieve better discrimination between foreground and background classes. Additionally, the IoU score serves as a key metric for evaluating the quality of segmentation results by accounting for localisation accuracy, handling class imbalance and enabling fair comparisons. The relevant results of model training and testing are presented in Figure 4.
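The two metrics named above can be sketched on integer label maps as follows. This is a simplified stand-in for the library implementations cited in the text, averaging per-class scores and skipping classes absent from both maps:

```python
import numpy as np

# Sketch of the Jaccard index (IoU) and Dice score, computed per class on
# integer label maps and then averaged (mean IoU / mean Dice).

def iou_and_dice(y_true, y_pred, n_classes):
    ious, dices = [], []
    for c in range(n_classes):
        t, p = (y_true == c), (y_pred == c)
        inter = np.logical_and(t, p).sum()
        union = np.logical_or(t, p).sum()
        if union == 0:
            continue  # class absent from both maps
        ious.append(inter / union)
        dices.append(2 * inter / (t.sum() + p.sum()))
    return float(np.mean(ious)), float(np.mean(dices))

y_true = np.array([[0, 0, 1], [1, 2, 2]])
y_pred = np.array([[0, 1, 1], [1, 2, 2]])
miou, mdice = iou_and_dice(y_true, y_pred, 3)
print(round(miou, 3), round(mdice, 3))  # 0.722 0.822
```

Dice is always at least as large as IoU for the same prediction, which is why the two scores track each other but are not interchangeable.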
The training and testing of different segmentation models are repeated 20 times and the average values are calculated for further comparison and analysis.
Figure 4 presents the training and testing results of different CNN models for semantic segmentation of X-ray images. Figure 4(a) focuses on computation efficiency, in which AM-SegNet achieves the shortest training and segmentation time compared with U-Net and its variants. As a result, AM-SegNet is the first to finish the whole training process when the maximum training epoch is set to be the same (see Figure 4(b)). Here, the segmentation time refers to the time taken by a CNN model to generate segmentation results after an input image is fed into the network. The training time, on the other hand, is related to the iteration process in which the neural network computes the network error and adjusts its weights and biases accordingly to minimise the loss function. Specifically, the minimal training and segmentation time associated with AM-SegNet indicates that the lightweight convolution block proposed in this study brings remarkable computation efficiency and propagation speed. Compared with the standard U-Net, both training and segmentation time are reduced by around 50%. When it comes to the IoU scores of individual pixel labels, the model testing results (see Figure 4(d)) indicate that AM-SegNet is able to produce reliable segmentation results for the two critical features, i.e. keyhole and pore, while the corresponding IoU scores of the standard U-Net drop considerably. An ablative analysis was performed to clarify the impact of the customised attention block on AM-SegNet: a variant, AM-SegNet*, was developed by removing the attention block from the AM-SegNet architecture. The comparison results (see Table 2) indicate that the attention block improves the model's accuracy and robustness. The segmentation times and trainable parameters of the two models are very close, as listed in Table 2, which means the attention block does not consume excessive computation resources.
Table 2 lists the numbers of trainable parameters of the different segmentation models. AM-SegNet* has the smallest number (1.63 × 10⁷), which can be attributed to its lightweight design. Besides, adding the attention block to AM-SegNet has a negligible influence on the number of trainable parameters whilst improving the F1 and IoU scores. Comparing AM-SegNet with another lightweight model, Squeeze-U-Net, both models have a similar number of trainable parameters; however, AM-SegNet provides higher segmentation accuracy while reducing the segmentation time by more than 40%.
Additionally, the Grad-CAM (gradient-weighted class activation mapping) [47,48] technique is adopted for interpreting and visualising the decision mechanism of AM-SegNet after model tuning. Grad-CAM computes the gradient of the output class score with respect to the feature maps of a specific convolutional layer in a CNN model. The resulting gradients are then used to generate a weighted activation map, highlighting the regions of the image that contribute most to the network's decision. Here, we present Grad-CAM results for the pixel labels of keyhole and pore to examine the network responses of AM-SegNet after model tuning (see Figure 5(a)). The blue regime corresponds to a condition of low influence, whereas the red regime indicates high impact. For example, the red zones in Figure 5(a) have the greatest impact on classifying the relevant image pixels as keyhole or pore, which agrees well with the ground truth and thereby indicates excellent segmentation performance.
Overall, the AM-SegNet proposed in this study can perform semantic segmentation on X-ray images with excellent accuracy and processing speed. A well-trained AM-SegNet has been utilised to perform segmentation analysis on time-series X-ray images in the LPBF experiments (see Figure 5(b) and Supplementary Video S1 in Codes and Videos). Moreover, the segmentation results are used for automatic quantification and correlation analysis in the next section.

Feature quantification and correlation
After a well-trained AM-SegNet is obtained, the process of carrying out feature quantification and correlation analysis using synchrotron imaging results is considerably streamlined. For example, AM-SegNet can be employed to automatically compute the geometric properties of two critical features, i.e. the keyhole and pores, within the molten pool region in the LPBF experiments. The keyhole refers to a deep, high-aspect-ratio vapour depression, which plays an important role in the melt pool region [13,49]. The fluctuation and collapse of the keyhole are closely related to pore evolution, e.g. formation, growth and migration, potentially impacting the fatigue life of metal components. The trained model is therefore employed to carry out quantification and correlation analysis of these two critical features within the melt pool region.
In this study, the geometric properties of keyholes and pores, including keyhole area (A_k), keyhole depth (d_k) and pore area (A_p), are calculated pixel by pixel using the semantic segmentation results from AM-SegNet, as shown in Figure 6(a). Additionally, the distribution of quantification errors associated with A_k is given in Figure 6(b). Here, each metal material has its own corresponding set of 100 X-ray images for quantification tests, and the calculation results are compared with the ground truth to compute the quantification errors. The quantification errors related to the other geometric properties, i.e. d_k and A_p, are presented in Figures 6(c,d), respectively. The experimental results indicate that employing AM-SegNet to replace manual operations for feature quantification is feasible, as it presents quantification results in an efficient and accurate manner.
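The pixel-by-pixel geometry calculation can be sketched as follows. The label integers, the toy segmentation map and the depth convention (distance from an assumed surface row to the deepest keyhole pixel) are illustrative assumptions; only the 4.31 µm/pixel resolution is quoted from Section 2.3:

```python
import numpy as np

# Sketch of keyhole area A_k, keyhole depth d_k and pore area A_p from a
# semantic segmentation map. Label values and depth convention are assumed.

KEYHOLE, PORE = 1, 2
PIXEL_SIZE_UM = 4.31  # ESRF spatial resolution from Section 2.3

def quantify(seg, surface_row=0):
    keyhole = (seg == KEYHOLE)
    a_k = keyhole.sum() * PIXEL_SIZE_UM ** 2          # area in um^2
    rows = np.nonzero(keyhole.any(axis=1))[0]
    d_k = ((rows.max() - surface_row + 1) * PIXEL_SIZE_UM
           if rows.size else 0.0)                      # depth in um
    a_p = (seg == PORE).sum() * PIXEL_SIZE_UM ** 2
    return a_k, d_k, a_p

seg = np.zeros((6, 6), dtype=int)
seg[0:4, 2] = KEYHOLE   # a 4-pixel-deep keyhole column
seg[5, 5] = PORE        # a single-pixel pore
a_k, d_k, a_p = quantify(seg)
print(a_k, d_k, a_p)
```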
It has been reported that pore formation is closely related to keyhole fluctuations in the LPBF process [7,8]. This finding is consistent with the comparison results in Figures 7(a,b), which present the histograms of keyhole areas in two LPBF experiments with and without pore formation, respectively. Comparing these two histograms provides an intuitive way to reveal the distribution of keyhole area across different intervals. The histogram with outliers and a high degree of variance (see Figure 7(b)) corresponds to the LPBF process with pore formation. Furthermore, leveraging the quantification results from AM-SegNet enables us to correlate keyhole fluctuation with pore formation from a statistical perspective. Figures 7(c,d) depict the mapping relationships between keyhole deviations and pore formation. The size of the pink bubbles corresponds to the pore size (equivalent diameter ∅_e) segmented from the X-ray images. Here, the fluctuations of keyholes (A_k and d_k) in time-series X-ray imaging are measured using δ_max and δ_avg, which correspond to the maximum and average deviations of the keyholes under different experimental conditions. For example, the deviation δ_i of the keyhole area (A_k) in the i-th image is given by:

δ_i = |A_k,i − (1/n) Σ_{j=1}^{n} A_k,j|

where i and n are the sequence number of the current image and the total number of X-ray images, respectively. Additionally, the equivalent diameter ∅_e of a segmented pore is calculated by:

∅_e = √(4 A_p / π)

where A_p is the pore area segmented from the X-ray image.
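The deviation statistics and equivalent diameter defined above can be sketched directly; the keyhole-area series below is a made-up example:

```python
import numpy as np

# Sketch of the keyhole-deviation statistics (delta_max, delta_avg) and the
# pore equivalent diameter. The area series is an invented example.

def deviations(a_k):
    # delta_i = |A_k,i - mean(A_k)|; report the maximum and average deviation.
    a_k = np.asarray(a_k, dtype=float)
    delta = np.abs(a_k - a_k.mean())
    return delta.max(), delta.mean()

def equivalent_diameter(a_p):
    # Diameter of a circle with the same area as the segmented pore.
    return float(np.sqrt(4.0 * a_p / np.pi))

d_max, d_avg = deviations([100, 110, 90, 105, 95])
print(round(d_max, 1), round(d_avg, 1))      # 10.0 6.0
print(round(equivalent_diameter(np.pi), 1))  # 2.0
```

A series with large δ_max and δ_avg corresponds to a strongly fluctuating keyhole, which is the regime associated with pore formation in Figures 7(c,d).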
Here, X-ray imaging results from 81 sets of LPBF experiments are used for the correlative mapping analysis, with pores being detected in 60 of the experimental sets. The average and maximum deviations of keyhole area are analysed in Figure 7(c), while Figure 7(d) takes both keyhole area and depth into consideration (average deviations only). The data points in both scenarios, i.e. with and without pore formation, exhibit a strong clustering effect, represented by ellipses of varying colours and divided by blue dashed lines. Similar to the P-V (laser power vs. scan speed) maps reported in the literature [13,15], the correlation maps (Figures 7(c,d)) can be used as a data-driven approach to avoid process parameters with a high likelihood of pore generation and hence improve the process consistency in LPBF.

Extended applications of AM-SegNet
In this section, the application of AM-SegNet is extended to other synchrotron facilities (e.g. APS and DLS) and other advanced manufacturing processes (e.g. DED and HPDC). More details of the synchrotron beamlines and imaging settings can be found in Section 2.3. Figures 8(a,b) present the differences in IoU scores between two model training strategies, training from scratch and transfer learning, using the same X-ray imaging data collected from LPBF synchrotron experiments performed at APS. In transfer learning, a pre-trained AM-SegNet is further tuned on the new dataset. Here, synchrotron experiment data collected from recent studies [11,13] are used for model training and testing. In this section, the model training platforms, i.e. hardware and software, are kept the same as listed in Section 3.1, and similar training strategies are adopted. Additionally, the total training epochs in Figures 8(b,c) are set to 100 and 200, respectively. It is noted that the segmentation performance of AM-SegNet remains excellent in terms of mean IoU scores (∼95%) when confronted with a new dataset, regardless of the training strategy. Furthermore, transfer learning enables the pre-trained model to achieve excellent performance with reduced training time and computation resources. This is because the pre-trained segmentation model has been comprehensively tuned on a large dataset and has learned general features of X-ray imaging data that are useful in similar tasks.
Additionally, the performance of AM-SegNet is further validated using X-ray imaging data collected from other advanced manufacturing processes. For example, Figure 8(c) presents the transition of mean IoU scores when AM-SegNet is trained on DED and HPDC X-ray imaging data. Upon completion of model training, the mean IoU scores of AM-SegNet exceed 95% for both experiments. Likewise, the well-trained segmentation model can be used to perform feature quantification, i.e. calculation of pore area, on time-series DED X-ray images with high confidence (see Figures 8(d,e)). Additionally, the trained model was tested on X-ray imaging results from an HPDC experiment with reasonable success (see Supplementary Video S2 in Codes and Videos).

Conclusions
In summary, this paper proposes a novel lightweight neural network, AM-SegNet, for image segmentation and feature quantification of X-ray imaging data collected from a variety of synchrotron experiments. A large-scale benchmark database consisting of pixel-labelled X-ray images has been established for network training and testing. The performance of AM-SegNet was compared with other state-of-the-art networks and further validated in other advanced manufacturing processes (DED and HPDC). The utilisation of AM-SegNet to facilitate feature quantification and correlation analysis was also explored. The main conclusions are given below: (1) AM-SegNet has the highest segmentation accuracy (∼96%) and the fastest processing speed (<4 ms per frame), outperforming other state-of-the-art segmentation models. (2) The trained AM-SegNet enables automatic feature quantification and correlation analysis, minimising the time-consuming and subjective problems related to manual analysis. (3) The application of AM-SegNet for the segmentation and analysis of X-ray images can be feasibly extended to other advanced manufacturing processes with high confidence.
The proposed method will enable researchers and engineers in the manufacturing and imaging domains to expedite the processing of X-ray imaging data and gain new insights into complex experimental phenomena from a data-driven perspective. The benchmark database established in this study covers a wide range of high-speed synchrotron experiments, involving different beamlines, powder materials and process parameters.
Therefore, it can be adopted by researchers to benchmark the performance of their models against others, and to develop new algorithms or techniques for image segmentation and quantification in this field. It is expected that real-time segmentation and quantification of X-ray images in high-speed synchrotron experiments will be achieved through deep learning in the near future.

Acknowledgements
The authors acknowledge the beamtime awarded at the ESRF. The authors are grateful to Diamond Light Source for the beamtime (MG22053-1, MG30735-1, and MG31855) and the help of all the staff on the I12 and I13 beamlines. We acknowledge Renishaw plc. for their in-kind contribution and technical support on the development of the QUAD-ISOPR and an EPSRC-iCASE studentship (grant number: EP/W522193/1). We want to thank Dr Bita Ghaffari, Ford Motor Company USA, for providing die-cast samples. We also extend our thanks to Dr Ravi Shahani from Constellium for providing CP1 materials for the LPBF experiments, and Ford USA for providing materials and funding for the HPDC experiments.

Figure 1. Schematic workflow of AM-SegNet designed for automatic segmentation and quantification of high-resolution X-ray images: (a) the architecture of AM-SegNet using a lightweight convolution block and attention mechanism: H_n × W_n correspond to the input sizes in different layers, where H_n = H_0/2^n and W_n = W_0/2^n, and H_0 and W_0 refer to the size of the raw X-ray images; (b) structure of the lightweight convolution block based on separable convolution, residual convolution, and squeeze-expand operations; and (c) structure of the attention mechanism adopted in the standard convolution layers.

Figure 2. X-ray imaging benchmark database for model training and testing: (a) pipeline of image processing, including flat field correction, background subtraction, image cropping, and pixel labelling; (b) examples of manually pixel-wise labelled X-ray images collected from LPBF and welding experiments; (c) distributions of X-ray images related to different substrate and powder materials; and (d) percentages of individual pixel labels in the benchmark database.

Figure 3. In situ X-ray imaging setup for capturing time-series radiographs during LPBF synchrotron experiments performed at the ESRF. X-ray imaging was performed at high spatial (4.31 µm/pixel) and temporal (frame rate of 40 kHz) resolutions.

Figure 4. Training and testing of five different CNN semantic segmentation models for X-ray images: (a) comparison of training and segmentation time for the different models; (b) transition of mean IoU scores over training time; (c) comparison of mean IoU values within a specified training duration; and (d) comparison between AM-SegNet and the standard U-Net in terms of IoU values of individual pixel labels after model training. All IoU scores related to AM-SegNet are higher, especially those of keyhole and pore.

Figure 5. Examination and application of the trained AM-SegNet: (a) Grad-CAM results of keyhole and pore associated with the trained AM-SegNet; the blue regime corresponds to a condition of low influence, whereas the red regime indicates high impact; and (b) comparison between ground-truth and AM-SegNet segmentation results of time-series X-ray images in the LPBF experiments.

Figure 6. Quantification of critical features in LPBF X-ray images using the trained AM-SegNet: (a) calculation of keyhole and pore geometry; (b) quantification errors of keyhole area (A_k) for different materials; (c) quantification errors of keyhole depth (d_k) for different materials; and (d) quantification errors of pore area (A_p) for different materials.

Figure 7. Correlative analysis of critical features in LPBF X-ray images using AM-SegNet: (a) histogram of keyhole areas in an LPBF experiment without pores; (b) histogram of keyhole areas in an LPBF experiment with segmented pores; (c) correlation mapping between pore formation and deviations of keyhole area; and (d) correlation mapping between pore formation and deviations of both keyhole area and depth.

Figure 8. Extended application of AM-SegNet to other advanced manufacturing processes: (a) comparison of IoU values of individual labels after transfer learning and training from scratch using the same X-ray imaging data collected from APS synchrotron experiments; (b) transition of mean IoU scores during the model training of (a); (c) transition of mean IoU scores in the model training on DED and HPDC X-ray imaging data; (d) illustration of pore area calculation using the segmentation results from AM-SegNet; and (e) error distribution of pore area quantification on DED X-ray imaging data collected from DLS synchrotron experiments.

Table 1. Synchrotron facilities, metal materials, and process parameters during LPBF and welding synchrotron experiments covered by the benchmark database.

Table 2. Comparison of different segmentation models. AM-SegNet* is obtained by removing the attention block from the original AM-SegNet.