Best practices for machine learning strategies aimed at process …

Background and knowledge gap

Additive manufacturing (AM), also known as 3D printing, is a technology that provides distinct advantages over traditional manufacturing approaches. In this process, three-dimensional (3D) products are built incrementally by gradually adding thin layers of material based on computer-aided design (CAD) models. This unique printing procedure supports the production of complex shapes, without the need for costly tooling typically used in conventional manufacturing methods.

Among the various AM techniques, powder bed fusion (PBF) processes, including laser powder-bed fusion (L-PBF) and electron beam powder-bed fusion (E-PBF), have gained significant attention due to their versatility and ability to fabricate complex parts from a wide range of materials. In PBF, a laser or electron beam is used to selectively melt specific areas of a thin layer of powder, building up a 3D part layer by layer under computer control.

Despite the advantages of AM, the process has remained a niche technology due to the challenges involved in building complex geometries without defects. Multiple factors such as material properties and process parameters influence the part quality. Given the multidimensional parameter space and interactions between parameters, it is onerous for human operators to efficiently identify the hidden relationships underpinning the AM process. While high-fidelity physics-based models can help identify some of the cause-and-effect relations via ‘what-if’ simulations, their deterministic nature does not allow the uncertainties that occur during the AM process to be incorporated easily, and they typically require significant computational resources.

Under these circumstances, machine learning (ML) techniques provide a pathway forward. ML has been used in many industries to uncover relationships—often unexpected—between inputs and outputs, to solve challenging real-world problems. In the context of AM, ML can be employed in various applications, such as:

  1. Design phase: Improving topology and material composite design.
  2. Process parameter development (PPD): Obtaining an ideal processing window to ensure the built part is within specifications.
  3. In-situ monitoring: Analyzing sensor data in real-time to predict ensuing events or identify potential defects.
  4. Quality assessment: Detecting various anomalies/defects in the final part.

This article focuses on the role of ML in PPD applications for PBF processes, where the goal is to determine the optimal set of processing parameters to achieve the desired part quality. Specifically, we review ML-based forward and inverse models that have been proposed to unlock the process–structure–property–performance relationships in both directions.

ML frameworks for the PPD problem: forward vs inverse models

The relationships in the process–structure–property–performance (PSPP) continuum may be developed in the forward direction where process parameters are the starting point. Equally, the linkages may be mapped in the reverse order where the process parameters are the outputs.

In Forward Models, the ML algorithm receives different process parameters as input and predicts/estimates various aspects of PBF, i.e., process signature, microstructure, property, build quality, and performance as output. The forward models are ideal for discovering the variables that exert the most influence; for example, identifying that the laser (or electron-beam) parameters are a dominant factor in dictating the likelihood of porosity in an AM build.

However, forward models cannot provide time-saving direct solutions to critical process issues, because pinpointing the causes requires a series of ‘what-if’ forward simulations. In such situations, Inverse Models must be used. As their name indicates, this class of models is the opposite of the forward models: they work backward to predict/output optimum process parameter values given the desired process signature, microstructure, property, build quality, or performance (which become the inputs).
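
To make the distinction concrete, the following minimal sketch (Python with scikit-learn; the parameter names, synthetic data, and porosity target are hypothetical placeholders, not taken from any study reviewed here) trains a forward model that maps process parameters to porosity and reports which parameters appear most influential. An inverse model would simply swap the roles of inputs and outputs.

```python
# Minimal forward-model sketch (hypothetical data and column names).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical process parameters: laser power (W), scan speed (mm/s),
# hatch spacing (mm), layer thickness (mm).
X = np.column_stack([
    rng.uniform(150, 400, n),     # power
    rng.uniform(400, 1600, n),    # scan speed
    rng.uniform(0.08, 0.16, n),   # hatch spacing
    rng.uniform(0.02, 0.06, n),   # layer thickness
])

# Synthetic porosity target, loosely tied to energy density for illustration only.
energy_density = X[:, 0] / (X[:, 1] * X[:, 2] * X[:, 3])
porosity = np.clip(5.0 - 0.05 * energy_density + rng.normal(0, 0.2, n), 0, None)

X_train, X_test, y_train, y_test = train_test_split(X, porosity, random_state=0)

forward_model = RandomForestRegressor(n_estimators=200, random_state=0)
forward_model.fit(X_train, y_train)

print("Test R^2:", forward_model.score(X_test, y_test))
for name, importance in zip(["power", "scan_speed", "hatch", "layer_thickness"],
                            forward_model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```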

There is a limited number of articles in the open domain that focus on optimizing process parameters for PBF using experimental data. While most employed the forward strategy, a few presented inverse models. In this article, we first point out the shortcomings of existing forward and inverse models in the PBF domain through a critical review of the literature. Then, we propose improvements to address the drawbacks for developing a reliable and well-generalized ML forward and/or inverse model.

Process parameter development (PPD) in PBF: the role and requirements of ML

Unlike in traditional manufacturing, where only a handful of process parameters need to be tuned, a large set of process parameters must be adjusted in PBF before the start of a build. While providing flexibility and an increased degree of control, such a multi-dimensional parameter space makes it challenging for human machine operators to arrive at the optimum values efficiently—leading to expensive trial and error.

Additionally, the mapping from process parameters to part properties, for example, can be non-unique. That is to say, several different permutations and combinations of process parameters could potentially give the same final result on the part. Furthermore, human analysis may not detect all the complex interactions that can occur between the various parameters.

Under these circumstances, the solution is to apply AI techniques (e.g., ML). Table 3 summarizes different ways in which AI can assist in the area of PPD. Two different ML approaches (i.e., forward and inverse models) are available, as shown in Figure 2.

ML F1 is a forward model that outputs the predicted process signature given a certain combination of processing parameters and powder composition as inputs. Similarly, the other forward models (ML F2, ML F3, ML F4, and ML F5) accept the same two inputs, i.e., process parameters and material composition, and predict microstructure, property, build quality, and performance, respectively.

The inverse models (ML I1, ML I2, ML I3, ML I4, and ML I5) identify the specific processing parameters for a given powder composition. In these models, the inputs are the desired values of process signature, microstructure, property, build quality, or performance.

To increase their range of applicability, the ML models must be generalized; for example, they should be capable of handling unseen part shapes and dimensions. Additionally, a well-generalized ML model should support unique geometrical features of AM parts such as thick bulk regions, thin walls, overhangs, sharp corners, and thin gaps.

Furthermore, on parts with a combination of such features, the ML model should be able to identify the specific combination of process parameters for each geometric feature. For instance, if the objective is to obtain a higher-quality surface finish on a given section of a part, the model should deliberately focus on a few influential parameters, including laser/electron-beam power, scan speed, overhang angle, scan strategy, and layer height, to achieve the desired result.

Thus, generalizability should be one of the requirements that must be satisfied in developing best practices for ML models. Additionally, the ML models must be accurate enough to predict the values of desired targets with minimal errors between the predictions and the actual values. Moreover, such a model must be reliable in the face of noisy data (e.g., noisy in-situ monitoring data in AM).

Data-related issues: quality, quantity, and diversity

The quality of an ML model is underpinned by the quality of data used for its training. Higher quality data may be ensured through validation, e.g., by checking its type, range, and format and/or by incorporating a separate autonomous validation module in intricate frameworks.
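
As an illustration, the short sketch below (Python with pandas; the column names, expected types, and admissible ranges are hypothetical assumptions, not values from the literature) performs the kind of type, range, and format checks mentioned above before the data reach the training pipeline.

```python
# Simple data-validation sketch (hypothetical column names and ranges).
import pandas as pd

EXPECTED_DTYPES = {"power_w": "float64", "scan_speed_mm_s": "float64",
                   "material_id": "object", "porosity_pct": "float64"}
VALID_RANGES = {"power_w": (50, 500), "scan_speed_mm_s": (100, 3000),
                "porosity_pct": (0, 100)}

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable validation errors (empty if clean)."""
    errors = []
    # Type/format checks.
    for col, dtype in EXPECTED_DTYPES.items():
        if col not in df.columns:
            errors.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            errors.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    # Range checks.
    for col, (lo, hi) in VALID_RANGES.items():
        if col in df.columns:
            bad = df[(df[col] < lo) | (df[col] > hi)]
            if not bad.empty:
                errors.append(f"{col}: {len(bad)} rows outside [{lo}, {hi}]")
    # Missing values.
    if df.isna().any().any():
        errors.append("dataset contains missing values")
    return errors

df = pd.DataFrame({"power_w": [200.0, 9999.0], "scan_speed_mm_s": [800.0, 900.0],
                   "material_id": ["IN718", "IN718"], "porosity_pct": [0.5, 1.2]})
print(validate(df))  # flags the out-of-range power value
```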

Improving data quality can also allow a smaller quantity of data to be used for training (since outliers that could distort the model are avoided), facilitating the development of data-centric ML frameworks. In the AM literature, model-centric algorithms have often been used with insufficient amounts of data, resulting in poorly generalized ML models that may not maintain comparable performance on unseen data.

The lack of big data in AM may be traced to the fact that the labor-intensive design phase and time-consuming trial-and-error stage make it challenging to generate vast amounts of information. This is particularly problematic when using artificial neural networks (ANNs), which require more data than tree-based models.

To address the issue of insufficient data, we recommend the following strategies:

  1. Dividing a build geometry into regions and using different sets of parameters: This will allow data to be generated for several different combinations of parameters on a single build.
  2. Utilizing simulations to generate additional datasets: Simulated datasets can be combined with experimental data to create “grey box” data, which helps train a more robust model.
  3. Employing data augmentation techniques: These methods manipulate the data systematically to modify them slightly while preserving the original information, creating extra data with additional perspectives for training, validation, and testing (see the sketch after this list).
  4. Leveraging transfer learning: This technique accelerates the training process by transferring knowledge from a comparable system to improve the performance of an ML model in the target scenario.
  5. Adopting data-centric models: These models focus on the characteristics of the data being used, rather than on the specific problem that the model is being applied to. This helps create a model which characterizes the patterns between samples as comprehensively as possible with fewer data points.
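
As a minimal illustration of the data augmentation strategy (item 3 above), the sketch below (Python/NumPy; the feature arrays, labels, and noise level are hypothetical) augments a small tabular dataset by adding low-amplitude Gaussian jitter to the numeric inputs while reusing the labels. Whether such perturbations preserve the physical meaning of the data must be judged case by case.

```python
# Jitter-based augmentation sketch for tabular AM data (hypothetical values).
import numpy as np

rng = np.random.default_rng(42)

def augment_jitter(X, y, copies=3, noise_frac=0.01):
    """Create `copies` noisy replicas of X; labels are reused unchanged.

    noise_frac scales the Gaussian noise relative to each feature's standard
    deviation, so the perturbation stays small compared with the original spread.
    """
    stds = X.std(axis=0, keepdims=True)
    X_aug, y_aug = [X], [y]
    for _ in range(copies):
        X_aug.append(X + rng.normal(0.0, noise_frac, X.shape) * stds)
        y_aug.append(y)
    return np.vstack(X_aug), np.concatenate(y_aug)

# Hypothetical data: 20 samples of (power, scan speed) with a density label.
X = np.column_stack([rng.uniform(150, 400, 20), rng.uniform(400, 1600, 20)])
y = rng.uniform(98.0, 99.9, 20)

X_big, y_big = augment_jitter(X, y)
print(X.shape, "->", X_big.shape)   # (20, 2) -> (80, 2)
```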

In addition to the issue of insufficient data, the datasets used in the AM literature were often imbalanced and non-reproducible. To address these problems, techniques such as oversampling (e.g., SMOTE) and undersampling (e.g., Tomek Link Removal) can be employed to balance the class distribution. Furthermore, repeating the builds to obtain data from several instances can improve the reliability of the dataset.
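
A minimal sketch of this balancing step is shown below, assuming the third-party imbalanced-learn package is installed and using a synthetic dataset as a stand-in for real AM defect labels.

```python
# Class-balancing sketch using imbalanced-learn (assumes `pip install imbalanced-learn`).
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import TomekLinks

# Synthetic stand-in for an imbalanced defect dataset (e.g., 95% "good", 5% "porous").
X, y = make_classification(n_samples=1000, n_features=6, weights=[0.95, 0.05],
                           random_state=0)
print("original:", Counter(y))

# Oversample the minority class...
X_os, y_os = SMOTE(random_state=0).fit_resample(X, y)
print("after SMOTE:", Counter(y_os))

# ...or remove borderline majority samples with Tomek links.
X_us, y_us = TomekLinks().fit_resample(X, y)
print("after Tomek link removal:", Counter(y_us))
```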

Inadequate diversity and range in datasets

Another limitation observed in the existing ML models for PPD was the lack of diversity in the training datasets. The published works primarily relied on primitive shapes (e.g., cubes, cylinders) with a limited number of process parameters that were varied only across narrow ranges.

Since commercial AM parts frequently exceed the maximum dimension of 30 mm used in these studies, the datasets may not sufficiently address industry requirements. The different geometries constructed in the published articles to develop ML models for PPD problems are shown in Figure 6.

Furthermore, only a small number of parameter combinations was employed during the printing of the experimental objects in the reported studies. For instance, in many cases, only power and/or scan speed were varied, possibly because these generally had the strongest influence on process signatures such as the melt pool size and/or temperature.

To address these drawbacks, we recommend the following strategies:

  1. Designing and building relatively complex geometries: Construct parts comprising various geometric features, such as bulk regions, thin walls, hollow features, overhangs, and sharp corners with different angles, and use a wide range of process parameters.
  2. Defining a geometry descriptor: Include complementary information about the shape and location of geometric features to provide the ML model with extra knowledge, allowing it to learn more effectively.
  3. Employing a large range of process parameters: Use uniformly distributed values between the lower and upper bounds of each parameter, preferably including different permutations and combinations (a sketch follows below).
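
One simple way to realize item 3, sketched below with NumPy and itertools (the parameter names, bounds, and number of levels are hypothetical), is to sample each parameter uniformly between its bounds and take the full factorial set of combinations as the build plan.

```python
# Uniform parameter-grid sketch (hypothetical parameter names and bounds).
import itertools
import numpy as np

# Lower/upper bounds and the number of uniformly spaced levels per parameter.
bounds = {
    "power_w": (150, 400),
    "scan_speed_mm_s": (400, 1600),
    "hatch_mm": (0.08, 0.16),
}
levels = 5

grids = {name: np.linspace(lo, hi, levels) for name, (lo, hi) in bounds.items()}

# Full factorial combination of all parameter levels (5**3 = 125 settings).
combinations = list(itertools.product(*grids.values()))
print(len(combinations), "parameter combinations")
print("first setting:", dict(zip(grids.keys(), combinations[0])))
```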

Choices of ML methods: mismatch and neglect of history

The complexity of a model is subjective and can be characterized by different factors, such as the quantity of features incorporated in a predictive model, the nature of the model itself (e.g., linear or non-linear, parametric or non-parametric), and algorithmic learning complexity.

We investigated the complexity of the models found in the published AM literature and identified two key issues:

  1. Mismatch between data and ML methods: Several studies failed to obey the “1:10 rule,” which specifies that the minimum size of the training data should be around 10 times the number of trainable parameters in the ANN (a quick check is sketched after this list). Additionally, some models performed differently between training and validation/test datasets, indicating a mismatch between the method chosen and the data.

  2. Neglect of history: An overwhelming majority of approaches did not consider the information which described interconnections or time-varying dependencies between layers. The AM process is layer-by-layer in nature, and defects can extend from one layer to the next, potentially resulting in the part being rejected. To address such flaws, the layer-wise information must be analyzed by ML methods and remembered in order to better manage the build.
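
To make the “1:10 rule” in item 1 actionable, the sketch below (PyTorch; the network architecture and dataset size are arbitrary placeholders) counts the trainable parameters of a small ANN and reports the minimum training-set size implied by the rule.

```python
# Rough "1:10 rule" check: training samples >= ~10x trainable parameters.
import torch.nn as nn

# Placeholder ANN: 6 process-parameter inputs -> 1 predicted property.
model = nn.Sequential(
    nn.Linear(6, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1),
)

n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
n_samples_available = 300          # hypothetical dataset size

print(f"trainable parameters: {n_params}")
print(f"suggested minimum samples (1:10 rule): {10 * n_params}")
if n_samples_available < 10 * n_params:
    print("dataset likely too small for this architecture; "
          "simplify the model or gather/augment more data")
```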

To address these issues, we recommend the following:

  1. Choosing the ML method based on the trade-off between bias and variance: High bias/low variance methods such as Naive Bayes and logistic regression can achieve better results with fewer data instances, while low bias/high variance techniques such as ANNs/deep neural networks (DNNs) and random forests (RF) may outperform them when larger datasets are available.
  2. Ensuring a good representation of the underlying data-generating process: This can be accomplished by accurately dividing the data into training, validation, and testing sets, and by augmenting physical insights to the data using Physics-Informed Machine Learning (PIML).
  3. Employing ML algorithms capable of remembering layer-wise information: Techniques such as Seasonal Autoregressive Integrated Moving Average (SARIMA), Spiking Neural Networks (SNNs), Recurrent Neural Networks (RNNs), and Transformers can effectively capture temporal dependencies in the AM process (see the sketch after this list).
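
As an illustration of item 3, the sketch below (PyTorch; the per-layer feature vector, sequence length, and quality target are hypothetical) uses an LSTM to carry information forward across build layers, so that the prediction for the current layer can depend on what happened in the layers beneath it.

```python
# Layer-wise sequence model sketch: an LSTM over per-build-layer features (hypothetical data).
import torch
import torch.nn as nn

class LayerwisePredictor(nn.Module):
    """Consumes a sequence of per-layer features and predicts a per-layer quality score."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, n_layers, n_features)
        out, _ = self.lstm(x)              # hidden state carries layer history forward
        return self.head(out).squeeze(-1)  # (batch, n_layers) quality prediction

# One build with 120 layers and 4 hypothetical features per layer
# (e.g., mean melt-pool size, melt-pool temperature, power, scan speed).
features = torch.randn(1, 120, 4)
model = LayerwisePredictor()
per_layer_quality = model(features)
print(per_layer_quality.shape)  # torch.Size([1, 120])
```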

Inadequate analysis of the model’s performance

In addition to the technical shortcomings discussed so far, we observed limitations in the reporting conventions followed in the published studies. Specifically, many researchers failed to pay due attention to the accuracy and generalizability of the ML models, and also neglected other important aspects such as robustness, interpretability, explainability, and reproducibility.

To conduct an adequately rigorous analysis, different aspects of a model’s performance should be evaluated:

  1. Accuracy and generalizability: Examine the ML models’ performance by calculating appropriate statistical measures (e.g., R², Pearson correlation coefficient) and by generating additional testing data with unseen, complex geometries (see the sketch after this list).
  2. Robustness: Assess the model’s ability to provide accurate predictions in the presence of outliers, noisy data, and noisy labels.
  3. Interpretability and explainability: Employ feature importance methods, such as sensitivity analysis, to determine the role of each process parameter in fabricating an acceptable product.
  4. Reproducibility: Verify the reliability and trustworthiness of the model by ensuring that the results can be repeated every time the ML model is re-run with the exact settings on the identical dataset.
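
The sketch below (Python with scikit-learn and SciPy; the predictions and targets are synthetic placeholders) shows how the accuracy metrics in item 1 and a crude robustness probe in the spirit of item 2 might be computed in practice.

```python
# Accuracy and robustness evaluation sketch (synthetic predictions/targets).
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
y_true = rng.uniform(97.0, 100.0, 200)            # e.g., relative density (%)
y_pred = y_true + rng.normal(0.0, 0.15, 200)      # stand-in model predictions

# Accuracy / generalizability metrics (item 1).
print("R^2:", r2_score(y_true, y_pred))
print("Pearson r:", pearsonr(y_true, y_pred)[0])

# Crude robustness probe (item 2): re-score against noise-corrupted targets
# to see how quickly the apparent accuracy degrades.
for noise in (0.1, 0.3, 0.5):
    noisy = y_true + rng.normal(0.0, noise, 200)
    print(f"label noise sigma={noise}: R^2 = {r2_score(noisy, y_pred):.3f}")
```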

Shortcomings in the reporting protocols

In addition to the technical limitations, we also observed shortcomings in the reporting conventions followed in the published studies. Specifically, many researchers failed to provide sufficient details on the datasets, ML methods, and model performance analysis.

To ensure reproducibility and enhance transparency, we recommend that the AM community should provide detailed information on the following:

  1. Description of the physical experiment: Document the printing process, machine settings, material specifications, and environmental conditions.
  2. Description of the corresponding dataset: Provide comprehensive details about the data types, quantity, number of fabricated samples, geometries, and parameters used for their construction, including information on support structures.
  3. Description of the data science project/analysis: Document the entire ML methodology, from data preprocessing to model training, validation, and testing, including the architecture of the trained ML models, training datasets, structural hyperparameters, and performance metrics used.

Comprehensive reporting in these areas is critical for developing a reproducible approach that can be validated by other researchers to assess the model’s performance. Standards organizations, such as the American National Standards Institute (ANSI), should consider developing reporting standards to promote the fair and ethical deployment of AI and machine learning technologies in AM.

Further guidelines on selecting the appropriate ML method

The suitability of an ML method is primarily determined by the characteristics of the input and output data. The remaining factors should then be weighed appropriately to select the most suitable method for creating a robust, efficient, and well-generalized ML PPD framework.

Forward models: Techniques such as RF, ANN, and support vector machines/regression (SVM/SVR) with kernel functions are among the best-performing regressors that can be applied to AM data. DNNs may be better choices for creating the ML F2, F3, and F4 models due to their capability to achieve more accurate predictions by processing big data offline.

Inverse models: Two types of methods can be utilized: (1) a hybrid framework of optimization and prediction methods (e.g., an ANN combined with evolutionary algorithms such as Particle Swarm Optimization (PSO)), and (2) pure multi-target ML regression techniques such as tree-based ensemble methods (e.g., multi-target RF). These methods can simultaneously predict multiple output dimensions with high accuracy.
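
As a sketch of option (2), the following example (Python with scikit-learn; the parameter names, property targets, and synthetic data are hypothetical) trains a multi-target random forest that maps desired part properties back to candidate process parameters. RandomForestRegressor handles multi-output targets natively, so no extra wrapper is needed.

```python
# Inverse-model sketch: multi-target random forest (hypothetical synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 800

# "True" process parameters used to print the (synthetic) samples.
power = rng.uniform(150, 400, n)          # W
speed = rng.uniform(400, 1600, n)         # mm/s

# Synthetic measured properties, loosely dependent on the parameters.
density = 99.9 - 0.002 * speed + 0.004 * power + rng.normal(0, 0.05, n)   # %
roughness = 5.0 + 0.004 * speed - 0.008 * power + rng.normal(0, 0.2, n)   # um

# Inverse direction: properties in, parameters out.
X = np.column_stack([density, roughness])
Y = np.column_stack([power, speed])
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

inverse_model = RandomForestRegressor(n_estimators=300, random_state=0)
inverse_model.fit(X_tr, Y_tr)
print("held-out R^2 (averaged over targets):", inverse_model.score(X_te, Y_te))

# Query: which parameters might deliver ~99.5% density and ~4 um roughness?
print("suggested [power, scan speed]:", inverse_model.predict([[99.5, 4.0]])[0])
```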

When selecting the appropriate ML method, consider the following six key criteria:

  1. Data quantity and number of features
  2. Data type (e.g., numerical, image and signal)
  3. Linearity of data
  4. Training and prediction time
  5. Method interpretability
  6. Memory requirements

The above guidelines can assist in choosing the most suitable algorithm for developing reliable and efficient ML-based PPD frameworks in PBF systems.

Conclusions

Additive manufacturing (AM) provides distinct advantages over traditional manufacturing approaches, but the process has remained a niche technology due to the challenges involved in building complex geometries without defects. Machine learning (ML) techniques offer a solution by efficiently identifying the optimal parameters through analyzing and recognizing patterns in data described by a multi-dimensional parameter space.

This article focused on the role of ML in process parameter development (PPD) applications for powder bed fusion (PBF) processes, where the goal is to determine the optimal set of processing parameters to achieve the desired part quality. We reviewed ML-based forward and inverse models that have been proposed to unlock the process–structure–property–performance relationships in both directions.

Our review highlighted several shortcomings in the existing ML models, including issues related to data quality, quantity, and diversity; mismatches between the data and the chosen ML methods; neglect of layer-wise history; inadequate analysis of model performance; and incomplete reporting protocols. We also recommended strategies to address each of these drawbacks, together with guidelines for selecting the appropriate ML method, as a step toward reliable and well-generalized ML-based PPD frameworks for PBF.
