Following the proposed method, the case study systematically matched the state and rate property features at each time step to generate concatenations of consecutive AM states. Figure 9 details this process. At time step i, an MPM image mainly focuses on the melt pool's top surface area (e.g., size and morphology), which can be expressed as a state property of the melt pool (x_{i,mp}). Melt pool location (x_{i,mp_loc}) and laser power (x_{i,laser_p}) also represent state properties in the laser AM process. These state properties of the LPBF process are expressed as x^L_i, and the observation data (z^L_{x_i}) represents these properties. Simultaneously, rate property observation data (z^L_{c_i}) represents velocity (c_{i,laser_v}) and energy density (c_{i,laser_d}) obtained from process control data. The observed data on state and rate properties (Z_i) is then provided as input to the AMTransformer in sequential order.
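
As a concrete illustration, the per-time-step observations described above could be grouped as in the following minimal sketch. This is not the authors' implementation; the class name, field names, and the Python/NumPy representation are assumptions, and only the mapping to the x and c notation comes from the text.

```python
# Hypothetical grouping of the observed state and rate properties at time step i.
# Field names and types are illustrative assumptions, not the paper's data schema.
from dataclasses import dataclass
import numpy as np


@dataclass
class ObservedAMState:
    """Observed data Z_i for one LPBF time step i."""
    mpm_image: np.ndarray    # x_{i,mp}: melt pool monitoring image (top-surface view)
    mp_location: tuple       # x_{i,mp_loc}: melt pool position on the build layer
    laser_power: float       # x_{i,laser_p}: laser power (state property)
    laser_velocity: float    # c_{i,laser_v}: scan velocity (rate property)
    energy_density: float    # c_{i,laser_d}: energy density (rate property)


def concatenate_consecutive(observations: list, window: int = 2) -> list:
    """Form concatenations of consecutive AM states, kept in time-step order,
    as the sequential input Z_1, Z_2, ... for the AMTransformer.
    The window length is an assumption for illustration only."""
    return [observations[i:i + window]
            for i in range(len(observations) - window + 1)]
```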
5.3. Melt pool prediction of the AMTransformer
To handle the input data, the AM state embedder incorporated CNNs into its architecture. In this case study, the AM state embedder comprised four 2D convolutional layers in an encoder and four 2D transposed convolutional layers in a decoder. The AM state embedder is designed to combine multiple observed inputs, each representing an AM state (Z_i), to produce a single embedding vector (ε_i). This embedding vector serves as a latent representation of the AM state, encapsulating the state transition characteristics of the LPBF process. The Koopman operator within the AM state embedder then captures the local dynamic dependencies, as depicted in Figure 9B. In this case study, the latent vector embedding had 128 dimensions. We used rectified linear units as the activation function for the AM state embedder, leveraging their simplicity and effectiveness in mitigating the vanishing

Figure 8. An illustration of melt pool monitoring image pre-processing, with zoomed-in views of the original and denoised images to demonstrate the improvement in image quality
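
A rough sketch of the AM state embedder described in Section 5.3 (four 2D convolutional layers in the encoder, four 2D transposed convolutional layers in the decoder, a 128-dimensional latent embedding, ReLU activations, and a Koopman operator acting on the latent) is given below. Channel widths, kernel sizes, strides, and the PyTorch framing are assumptions; only the layer counts, latent size, and activation are taken from the text.

```python
# Hedged sketch of the AM state embedder; hyperparameters other than the layer
# counts, the 128-D latent, and the ReLU activation are illustrative assumptions.
import torch
import torch.nn as nn


class AMStateEmbedder(nn.Module):
    def __init__(self, in_channels: int = 1, latent_dim: int = 128):
        super().__init__()
        # Encoder: four 2D convolutional layers mapping the observed inputs
        # (e.g., the MPM image with additional state/rate channels) to ε_i.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, latent_dim)
        )
        # Decoder: four 2D transposed convolutional layers reconstructing an
        # image from the latent (output resolution depends on the assumed strides).
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (latent_dim, 1, 1)),
            nn.ConvTranspose2d(latent_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, in_channels, 4, stride=2, padding=1),
        )
        # Koopman operator: a bias-free linear map that advances the latent
        # embedding one step, capturing local, approximately linear dynamics.
        self.koopman = nn.Linear(latent_dim, latent_dim, bias=False)

    def forward(self, z_i: torch.Tensor):
        eps_i = self.encoder(z_i)        # embedding ε_i of the observed AM state
        eps_next = self.koopman(eps_i)   # one-step linear advance, ε_{i+1} ≈ K ε_i
        recon = self.decoder(eps_i)      # reconstruction used to train the autoencoder
        return eps_i, eps_next, recon
```

Training such an embedder would typically combine a reconstruction loss on the decoder output with a prediction loss tying K·ε_i to ε_{i+1}, which is one common way to fit a Koopman-style linear operator; the paper's exact loss formulation is not reproduced here.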

Figure 9. Diagram illustrating the implementation and operation of the AMTransformer within the case study. (A) An example of the laser powder bed fusion (LPBF) layer from the case study, showing data and target dynamical dependencies. Each dot represents a melt pool along the laser scanning path, with the shaded area indicating the observed melt pool region used to learn the dependencies captured in functions f and g. (B) The additive manufacturing (AM) state embedder operation and LPBF data flow: the circle represents observed data (Z_i), including observations of state and rate properties. The rounded square denotes the AM embedding, encapsulating dynamic dependencies within AM states. The Koopman operator captures linear local state transitions. (C) The transformer operation and its LPBF data flow: the transformer processes all embeddings to reveal the adjacent melt pool region influencing the current melt pool (red dot) from a spatiotemporal perspective. Multi-head attention and multiple decoder layers consider the relationships among embeddings of the LPBF process, enabling the learning of non-linear and non-local dynamics. The decoder's output, which is a contextualized embedding, is passed through a linear layer followed by a softmax function, converting it into a probability. The highest-probability event is selected as the prediction of the future LPBF states.
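
The prediction path summarized in panel (C), attention over all AM embeddings followed by a linear layer and softmax over candidate future states, could be sketched as below. The attention stack is shown here with self-attention layers for brevity rather than the paper's decoder layers, and the number of layers, attention heads, and output classes are placeholders rather than values reported in the paper.

```python
# Hedged sketch of the transformer stage in Figure 9C: multi-head attention over
# the sequence of AM embeddings, then a linear layer + softmax that converts the
# contextualized embedding of the current melt pool into a probability over
# candidate future LPBF states.  Hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class AMTransformerHead(nn.Module):
    def __init__(self, d_model: int = 128, n_heads: int = 8,
                 n_layers: int = 4, n_future_states: int = 16):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.attention_stack = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.to_logits = nn.Linear(d_model, n_future_states)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, number of melt pools in the window, d_model),
        # produced by the AM state embedder for each observed time step.
        context = self.attention_stack(embeddings)  # contextualized embeddings
        logits = self.to_logits(context[:, -1])     # current melt pool's embedding
        return torch.softmax(logits, dim=-1)        # probability over future states


# The highest-probability event is taken as the predicted future LPBF state:
# predicted_state = AMTransformerHead()(embeddings).argmax(dim=-1)
```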

