terminology throughout this manuscript by referring to this dataset accordingly (refer to Section 3.1 for more details on the dataset). Using Dataset_A, Yap et al.47 proposed an end-to-end approach for US lesion detection and recognition by utilizing a pre-trained segmentation network based on fully convolutional networks (FCN)48 and achieved a Dice similarity coefficient (DSC) score of 55%. Abraham and Khan49 proposed a generalized focal loss based on the Tversky index for the attention UNet and achieved a DSC of 80%; with the UNet model and the focal Tversky loss, they achieved a DSC of 66%. Zhuang et al.50 proposed a Residual-Dilated-Attention-Gate-UNet (RDAU-Net) model, obtaining a DSC of 85%, and reported a DSC of 82% for the UNet model. Costa et al.51 proposed FCN-based segmentation models and reported a DSC of 82%. Liang et al.52 developed a multi-stage elastic augmentation technique and achieved a DSC of 84% using a Mask-RCNN-based segmentation network.53
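For reference, since every study above reports the DSC and the loss of Abraham and Khan49 builds on the Tversky index, the standard definitions are restated below. Here, P and G denote the predicted and ground-truth lesion masks; TP, FN, and FP count true-positive, false-negative, and false-positive pixels; and α, β, and γ are study-specific hyperparameters. One common formulation of the focal Tversky loss is shown (some works write the focusing exponent as γ rather than 1/γ):

\[
\mathrm{DSC}(P, G) = \frac{2\,\lvert P \cap G \rvert}{\lvert P \rvert + \lvert G \rvert},
\qquad
\mathrm{TI} = \frac{\mathrm{TP}}{\mathrm{TP} + \alpha\,\mathrm{FN} + \beta\,\mathrm{FP}},
\qquad
\mathcal{L}_{\mathrm{focal\;Tversky}} = \left(1 - \mathrm{TI}\right)^{1/\gamma}
\]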
Amiri et al.54 developed a two-stage segmentation UNet to first detect the tumor region and then segment the detected region, reporting a DSC of 86%. Lee et al.55 proposed an attention module and obtained a DSC of 76%. Shareef et al.56 proposed the Small Tumor-Aware Network (STAN), which involves CNN layers with various kernel sizes to extract multi-scale information from US images, and achieved a DSC of 78%. In their next study,57 they improved this work by proposing the enhanced STAN (ESTAN) network and achieved a DSC of 82%. Singh et al.58 proposed a contextual information-aware network based on conditional generative adversarial networks59 that integrates atrous convolution,60 channel attention,61 and channel weighting,62 obtaining a DSC of 86%. A methodology built on the combination of deep learning (i.e., a UNet network) and a traditional learning-based algorithm (i.e., a level-set framework), proposed by Hussain et al.,63 was reported to yield DSCs of 98% and 72% for benign and malignant tumors, respectively. Qu et al.64 introduced an attention-supervised full-resolution residual network inspired by full-resolution residual networks65 and achieved a DSC of 84%. Ning et al.66 achieved a DSC of 85% with their proposed coarse-to-fine fusion network alongside a weighted-balanced loss function. In one of our previous works,67 we explored different pre-training strategies for training a UNet when only 20 images were used for training and obtained a maximum DSC of 57%.
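To make the multi-scale idea behind STAN56 and the multi-scale UNet variants concrete, the sketch below shows one plausible way to extract features with parallel kernels of different sizes. It is an illustrative PyTorch snippet, not the authors' implementation; the module name, kernel set (1, 3, 5), and channel sizes are our own choices.

import torch
import torch.nn as nn

class MultiScaleConvBlock(nn.Module):
    """Illustrative multi-kernel block (hypothetical, not STAN's exact design):
    parallel convolutions with different kernel sizes capture lesion features
    at several spatial scales, then are concatenated along the channel axis."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # One branch per kernel size; padding keeps spatial dimensions equal
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch // 3, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        )
        self.fuse = nn.Conv2d(3 * (out_ch // 3), out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(self.fuse(multi_scale))

# Example: a grayscale US image tensor (batch=1, 1 channel, 128x128)
block = MultiScaleConvBlock(in_ch=1, out_ch=48)
features = block(torch.randn(1, 1, 128, 128))  # -> shape (1, 48, 128, 128)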
Gao et al.68 investigated class imbalance in segmentation by proposing a multi-scale fused network with additive channel-spatial attention and achieved a DSC of 85%. Su et al.69 proposed a multi-scale UNet that involves layers with different receptive fields, leading to a DSC of 82%. Xu et al.70 introduced a multi-scale self-attention network that integrates local features and global contextual information, leading to a DSC of 83%. Huang et al.71 proposed different approaches for transfer learning: in one of their experiments, they first pre-trained various networks on Achilles tendon US images, then fine-tuned them on breast US images (i.e., Dataset_A), and reported a best DSC of 83%. A unified focal loss was introduced by Yeung et al.,72 achieving a DSC of 82%. An adaptive receptive field network proposed by Xu et al.73 reported a DSC of 88%. Lou et al.74 achieved a DSC of 90% by introducing inverted residual pyramid block and context-aware fusion block modules to the UNet architecture. By introducing CTG-Net, which integrates the lesion segmentation and tumor classification tasks in breast US image analysis, Yang et al.75 achieved improved performance compared to existing multi-task learning approaches.
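The transfer-learning recipe described by Huang et al.71 (pre-train on a related US task, then fine-tune on breast US) follows the common pattern sketched below. This is a hypothetical PyTorch illustration, not the authors' code: the stand-in network, the choice of layers to freeze, and the learning rate are our assumptions.

import torch
import torch.nn as nn

# Minimal stand-in segmentation network so the sketch is self-contained;
# a real study would use a full UNet or similar.
class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(16, 1, 1)  # per-pixel lesion logit

    def forward(self, x):
        return self.decoder(self.encoder(x))

source_model = TinySegNet()
# ...suppose source_model was trained on the source task (e.g., tendon US)...

# Transfer: initialize the target model from the source-task weights,
target_model = TinySegNet()
target_model.load_state_dict(source_model.state_dict())

# optionally freeze the (more generic) encoder,
for p in target_model.encoder.parameters():
    p.requires_grad = False

# and fine-tune the remaining parameters on breast US images (Dataset_A).
optimizer = torch.optim.Adam(
    (p for p in target_model.parameters() if p.requires_grad), lr=1e-4
)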
Table 2 summarizes previous works that utilized Dataset_A.

Table 2. Summary of previous works and their reported DSC scores on Dataset_A

Article               Method                      DSC (%)
Yap et al.47          Pre-trained model           55
Abraham and Khan49    Tversky focal loss          80
Zhuang et al.50       RDAU-Net                    85
Costa et al.51        FCN-based model             82
Liang et al.52        Multi-stage Aug             84
Amiri et al.54        Two-stage UNet              86
Lee et al.55          Attention module            76
Shareef et al.56      STAN model                  78
Shareef et al.57      ESTAN model                 82
Singh et al.58        cGAN-based model            86
Hussain et al.63      DL+LS framework             98 (benign), 72 (malignant)
Qu et al.64           ASFRRN model                84
Ning et al.66         Coarse-to-fine fusion       85
Behboodi et al.67     Pre-trained model           57
Gao et al.68          MS fused model              85
Su et al.69           MS UNet                     82
Xu et al.70           MS self-attention model     83
Huang et al.71        Transfer learning           83
Yeung et al.72        Unified focal loss          82
Xu et al.73           Adaptive RF model           88
Lou et al.74          IRPB+CFB modules            90
Yang et al.75         CTG-Net                     79
Lee et al.44          TTFT KD-based               89

Abbreviations: Aug: Augmentation; DL+LS: Deep learning+level-set; MS: Multi-scale; RF: Receptive field; RDAU-Net: Residual-Dilated-Attention-Gate-UNet; cGAN: Conditional generative adversarial networks; ASFRRN: Attention-supervised full-resolution residual network; IRPB: Inverted residual pyramid block; CFB: Context-aware fusion block.