decoder layers generally improved model performance, with six layers achieving the best SSIM of 0.9206, an MAE of 0.0009 mm², and an accuracy of 92.73%. While adding more layers slightly improved the MAE and accuracy, it had a negligible effect on SSIM. Thus, a six-layer configuration was deemed the most suitable for achieving high image congruence.

Table 1. Performance comparison across a varying number of decoder layers

Layers    SSIM      MAE (mm²)    Accuracy (%)
2         0.4163    0.0057       21.87
4         0.7815    0.0016       77.52
6         0.9206    0.0009       92.73
8         0.9169    0.0008       93.04
Notes: MAE: Mean absolute error; SSIM: Structural similarity index measure.

Table 2 presents the results of varying the number of attention heads while keeping the six decoder layers constant. The configuration with four attention heads was the most effective, delivering the highest SSIM and an optimal MAE while maintaining good accuracy. Increasing the number of attention heads slightly decreased performance, likely due to the complexity of the LPBF tool path. While each head captures distinct local patterns, there can be interdependencies between embeddings processed by different heads. For instance, the initial embeddings from the first head may have strong relationships with the last embeddings of the sixth head when they are spatially close but temporally distant. This observation indicates a need for future research on AM-specific multi-head attention mechanisms, which will be discussed further in Section 7.

Table 2. Variations in the number of attention heads with six decoder layers

Heads     SSIM      MAE (mm²)    Accuracy (%)
2         0.7584    0.0041       44.6
4         0.9206    0.0009       92.73
8         0.9047    0.0011       90.13
16        0.8518    0.0013       83.32
Notes: MAE: Mean absolute error; SSIM: Structural similarity index measure.

Based on these findings, we selected the configuration with six decoder layers and four attention heads, as it offered the best overall performance across all metrics. With this configuration set, we compared the AMTransformer's performance against other models, including a transformer with a basic autoencoder and a ConvLSTM model.
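For reference, the following minimal sketch shows how a decoder with the selected hyperparameters could be instantiated in PyTorch. It is not the authors' implementation; the embedding dimension, sequence length, and the use of torch.nn.TransformerDecoder are assumptions made for illustration only.

import torch
import torch.nn as nn

# Assumed size of the AM state embeddings (illustrative value, not from the paper).
d_model = 256

# Selected configuration: four attention heads per layer (Table 2),
# six stacked decoder layers (Table 1).
layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=4, batch_first=True)
decoder = nn.TransformerDecoder(layer, num_layers=6)

# Dummy call: predict the next-step embedding from ten embedded past AM states.
memory = torch.randn(1, 10, d_model)     # embedded AM state history (placeholder data)
query = torch.zeros(1, 1, d_model)       # placeholder query for the next time step
next_embedding = decoder(query, memory)  # shape: (1, 1, d_model)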
The AMTransformer achieved an SSIM of 0.9206, which is higher than the SSIMs of the transformer with a basic autoencoder (0.8699) and the ConvLSTM model (0.9069). This result indicates that the AMTransformer can predict the MPM images more accurately than the other models. For melt pool size prediction, the AMTransformer achieved an MAE of 0.0009 mm², whereas the transformer with a basic autoencoder and the ConvLSTM had MAEs of 0.0017 mm² and 0.0014 mm², respectively. In addition, the proposed method for predicting melt pool size achieved an accuracy of 92.73%. Table 3 displays the comparison of model performance. According to the results, the AMTransformer demonstrated the best overall performance. Figure 11 presents a comparison between the predicted and target images, illustrating the correspondence between the model's outputs and the ground truth.

Table 3. Comparison of performance across models

Model                   SSIM      MAE (mm²)    Accuracy (%)
AMTransformer           0.9206    0.0009       92.73
Basic AE+Transformer    0.8699    0.0017       70.55
ConvLSTM                0.9069    0.0014       89.50
Notes: AE: Autoencoder; ConvLSTM: Convolutional long short-term memory; MAE: Mean absolute error; SSIM: Structural similarity index measure.
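As a point of reference, the sketch below indicates how the reported metrics could be computed for one predicted/target MPM image pair; it is not the authors' evaluation code. The thresholded-area proxy for melt pool size, the pixel-to-mm² scale, and the accuracy definition (one minus the relative size error) are assumptions, and in practice the MAE and accuracy would be averaged over the test set.

import numpy as np
from skimage.metrics import structural_similarity

def melt_pool_area(img, threshold=0.5, mm2_per_pixel=1e-4):
    # Assumed proxy: melt pool size as the area of pixels above a fixed threshold.
    return float((img > threshold).sum()) * mm2_per_pixel

def evaluate_pair(pred, target):
    # SSIM between the predicted and ground-truth MPM images (values in [0, 1]).
    ssim = structural_similarity(pred, target, data_range=1.0)
    # Absolute error of the melt pool size in mm² (averaged over samples gives the MAE).
    err = abs(melt_pool_area(pred) - melt_pool_area(target))
    # Assumed accuracy definition: one minus the relative size error, in percent.
    acc = 100.0 * (1.0 - err / max(melt_pool_area(target), 1e-8))
    return ssim, err, acc

# Example with random placeholder images (128 x 128, values in [0, 1]).
rng = np.random.default_rng(0)
pred = rng.random((128, 128))
target = rng.random((128, 128))
print(evaluate_pair(pred, target))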
6. Discussion

In the framework of the AMTransformer, the proposed AM dynamics formulation links the Koopman and transformer approaches to AM data. These approaches complement each other by leveraging their respective strengths for AM. The AM state embedder improves the learning of significant features of physical properties and their dynamical dependencies for each AM state within the embeddings, which are latent vector representations of these physical property features and dependencies at each time step. In addition, in our study, the adaptation of the Koopman operator with AM state embeddings in latent space representations focuses on transforming non-linear dynamical AM systems into a linear framework. This transformation enables improved analysis and prediction of key dynamical dependencies in AM using a linear method. The Koopman operator can linearize aspects of the dynamical dependencies that are amenable to linear analysis, providing a robust foundation for understanding underlying AM dynamics. In the case study, by comparing the AMTransformer with the transformer using a basic autoencoder, we explored how the AM state embedder with the Koopman operator improves the understanding of AM dynamics.
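To make this combination concrete, it can be summarized in generic notation (the symbols below are illustrative and are not necessarily the paper's own): an embedder maps the AM state x_t to a latent embedding z_t, the Koopman operator K advances that embedding linearly, and a decoder maps the result back to a predicted next state,

z_t = \phi(x_t), \qquad z_{t+1} \approx K z_t, \qquad \hat{x}_{t+1} = \psi(z_{t+1}).

Because K acts linearly on the embeddings, the otherwise non-linear AM dynamics can be analyzed and propagated with linear methods in the latent space.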
Meanwhile, the adaptation of the transformer, which employs a multi-head attention mechanism, is adept at

