



Table 3. RMSE results of different feature extraction methods with four downstream prediction models

Feature extraction method      RMSE of the downstream prediction model
                               Linear     ANN        SVM        XGBoost
1D-CNN                         0.2889     0.41946    0.76503    1.40253
2D-CNN                         0.33526    0.54549    0.8378     1.11583
GRU                            0.45183    0.42774    0.82675    1.30814
ANN                            0.88021    0.85771    1.10393    1.3852
PLS                            0.56382    0.54084    1.17852    1.55079
PCA                            0.51162    0.36833    0.83287    1.55079

Note: The values in boldface represent the lowest RMSE among the four downstream models under the same conditions.
Abbreviations: 1D-CNN: One-dimensional convolutional neural network; 2D-CNN: Two-dimensional convolutional neural network; ANN: Artificial neural network; GRU: Gated recurrent unit; Linear: Linear regression; PCA: Principal component analysis; PLS: Partial least squares; RMSE: Root mean squared error; SVM: Support vector machines.
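To make the comparison in Table 3 concrete, the sketch below shows one way a 1D-CNN encoder of the kind that performed best could be set up. The channel counts, kernel sizes, and embedding dimension are illustrative assumptions, not the paper's exact architecture, which is not given on this page.

```python
import torch
import torch.nn as nn

class Encoder1DCNN(nn.Module):
    """Maps a multiaxial stress-strain history (channels x time steps)
    to a fixed-length feature vector for downstream life prediction."""

    def __init__(self, in_channels: int = 4, embed_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.head = nn.Linear(64, embed_dim)  # embedding used downstream

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, time_steps)
        h = self.conv(x).squeeze(-1)  # (batch, 64)
        return self.head(h)           # (batch, embed_dim)

# Example: embed a batch of 8 histories with 4 channels and 200 time steps
features = Encoder1DCNN()(torch.randn(8, 4, 200))
print(features.shape)  # torch.Size([8, 64])
```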
Table 4. RMSE results of ablation experiments on 1D-CNN with contrastive learning

Ablation setting                                             RMSE of the downstream prediction model
                                                             Linear     ANN        SVM        XGBoost
Without data augmentation                                    0.45243    -          -          -
Without contrastive learning                                 1.63522    0.69097    1.12755    1.28143
Without data augmentation & without contrastive learning    0.49226    0.78342    0.50171    1.09144
With data augmentation & with contrastive learning          0.2889     0.41946    0.76503    1.40253

Note: The values in boldface represent the lowest RMSE among the four downstream models under the same conditions.
Abbreviations: 1D-CNN: One-dimensional convolutional neural network; ANN: Artificial neural network; Linear: Linear regression; RMSE: Root mean squared error; SVM: Support vector machines; XGBoost: eXtreme gradient boosting.
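The "with contrastive learning" rows in Table 4 presuppose a contrastive pretraining objective over augmented views of each hysteresis history. The exact loss is not specified on this page; the sketch below assumes a standard NT-Xent (SimCLR-style) formulation purely for illustration.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """NT-Xent loss for two augmented views z1, z2 of shape (batch, dim)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, dim), unit norm
    sim = z @ z.t() / tau                               # scaled cosine similarity
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # The positive for row i is its other view, at index (i + B) mod 2B
    targets = torch.arange(sim.shape[0], device=z.device).roll(sim.shape[0] // 2)
    return F.cross_entropy(sim, targets)

# Example: embeddings of 16 augmented pairs produced by the encoder
loss = nt_xent(torch.randn(16, 64), torch.randn(16, 64))
```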
Table 3 summarizes the RMSE values of different feature extraction methods across various downstream prediction models. The features extracted by the contrastive learning framework with 1D-CNN as the encoder achieved the best performance on the downstream linear regression model. To further validate the contributions of data augmentation and contrastive learning, ablation experiments were conducted. Table 4 presents the results of removing data augmentation, removing contrastive learning, and removing both, illustrating their impact on model performance.
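For concreteness, the comparison behind Tables 3 and 4 can be organized as a small harness that freezes the extracted features and scores several downstream regressors by RMSE. The function below is a hypothetical sketch using scikit-learn defaults rather than the paper's tuned models; XGBoost is indicated as an optional addition.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

def compare_downstream(X_train, y_train, X_test, y_test):
    """Fit several regressors on frozen features and report test RMSE."""
    models = {
        "Linear": LinearRegression(),
        "ANN": MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000),
        "SVM": SVR(),
        # "XGBoost": xgboost.XGBRegressor(),  # optional, if xgboost is installed
    }
    rmse = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        rmse[name] = float(np.sqrt(mean_squared_error(y_test, pred)))
    return rmse

# Example with synthetic features standing in for the learned embeddings
rng = np.random.default_rng(0)
X, y = rng.normal(size=(120, 64)), rng.normal(size=120)
print(compare_downstream(X[:100], y[:100], X[100:], y[100:]))
```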
5. Conclusion

In this study, a multiaxial fatigue hysteresis feature extraction method based on contrastive learning is proposed. This method effectively extracts deep feature representations from complex multiaxial fatigue stress-strain responses, which are then utilized in downstream fatigue life prediction tasks to enhance prediction accuracy. The specific conclusions are as follows:
(i) Compared to network architectures such as ANN, GRU, and 2D-CNN, the 1D-CNN network achieves the best contrastive learning performance and is more stable during training. The deep feature representations extracted through contrastive learning, when visualized by t-SNE, show a chaotic distribution in the reduced-dimensional space, with no obvious clustering or separation of categories. However, the extracted features enable the downstream fatigue life prediction model to more easily learn from samples with different loading paths, achieving excellent performance even with a simple linear regression model.
(ii) Compared to other unsupervised learning algorithms, the features extracted using contrastive learning show better similarity and consistency. Through comparative experiments, contrastive learning is found to be more effective in extracting features related to fatigue life prediction from stress-strain hysteresis data, thereby helping downstream models better uncover the underlying patterns in the data. Compared to traditional unsupervised learning algorithms, contrastive learning demonstrates stronger robustness and effectiveness when handling multiaxial fatigue data.
(iii) The feature representations learned through contrastive learning exhibit superior predictive performance in downstream tasks. In multiple machine learning models, contrastive learning consistently achieves better prediction results. Compared to scenarios without contrastive learning, the maximum reduction in RMSE in models such as SVM, ANN, and others can reach 86.26%. In addition, the prediction stability is improved, as evidenced by a reduction in the standard deviation of repeated experiments.
(iv) Contrastive learning has the potential to be further extended for applications in multiaxial fatigue life prediction and similar domains. Leveraging the benefits of contrastive learning, it can help achieve few-shot or even zero-shot learning for downstream tasks. This approach can also contribute to addressing challenges in fields such as electronic packaging and multiscale structural integrity, where data scarcity and the need for robust predictive models are key concerns.
(v) To further enhance the effectiveness and applicability of the proposed framework, several key directions warrant exploration. One important area is integrating contrastive learning with traditional physics-based models to bridge data-driven insights with mechanical
