scoring evaluates client updates with a polynomial ReLU function:

\[ TS_i = \operatorname{ReLU}\!\left( \frac{\langle g_0,\, g_i \rangle}{\lVert g_0 \rVert \,\lVert g_i \rVert} \right) \tag{XIII} \]

where TS_i is the trust score for client i, g_0 is the reference gradient, and g_i is the encoded gradient.22 Here, the dot product ⟨g_0, g_i⟩ gives the scalar projection, measuring how directionally aligned the two gradient vectors are, while the Euclidean norms in the denominator, which measure the magnitudes of the vectors g_0 and g_i, normalize this measure so that it lies in [−1, 1]. The updates with TS_i < 0.1 can be discarded.
For LCC-based aggregation, global gradients are computed as:

\[ g_{\mathrm{global}} = \frac{\sum_{i=1}^{N} TS_i \, g_i}{\sum_{i=1}^{N} TS_i} + \eta, \qquad \eta \sim \mathcal{N}\!\left( 0,\; 0.1\,\lVert g_{\mathrm{global}} \rVert_2 \right) \tag{XIV} \]

where η is the re-randomization noise drawn from a zero-mean Gaussian distribution with a standard deviation equal to 10% of the global gradient's ℓ₂ norm. Such noise interferes with deterministic patterns across training iterations, making gradient memorization or reverse engineering less likely without compromising update fidelity.14,27
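A corresponding sketch of the trust-weighted aggregation with re-randomization noise (Eq. XIV) follows; again, the NumPy layout and the function name are assumptions made for illustration, and the noise scale simply applies the 10% ℓ₂-norm rule described above.

import numpy as np

def aggregate_with_noise(trusted_updates, rng=None):
    """Trust-weighted global gradient with Gaussian re-randomization noise (Eq. XIV).

    trusted_updates: list of (trust_score, gradient) pairs that survived the TS_i >= 0.1 filter.
    """
    rng = rng or np.random.default_rng()
    scores = np.array([ts for ts, _ in trusted_updates])
    grads = np.stack([g for _, g in trusted_updates])
    # Weighted average of client gradients, normalized by the sum of trust scores
    g_global = (scores[:, None] * grads).sum(axis=0) / scores.sum()
    # Zero-mean Gaussian noise whose standard deviation is 10% of the global gradient's L2 norm
    sigma = 0.1 * np.linalg.norm(g_global)
    return g_global + rng.normal(0.0, sigma, size=g_global.shape)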
2.6. Integration of FedML–HE with ByITFL for enhanced privacy

The combination of FedML–HE's selective HE23 and ByITFL's Byzantine-resilient architecture28 offers an optimally balanced solution to FL's dual challenges of privacy and security, providing a secure and reliable network architecture. By applying FedML–HE's parameter-level encryption to the sensitive gradients detected through ByITFL's trust scoring,28 the hybrid solution is able to effectively protect important model updates against inversion attacks23 while maintaining Byzantine resilience. Empirical evaluations have demonstrated that selective encryption can cut communication overhead by up to 10 times for large models such as ResNet-5023 without a loss in malicious client detection accuracy. Earlier work on Byzantine-resilient secure aggregation architectures points to the possibility of combining cryptographic privacy with adversarial robustness. In addition, FedML–HE's performance-optimized encryption pipeline is highly compatible with ByITFL's computationally lightweight architecture. The hybrid solution helps prevent privacy leakage risks in trust-based aggregation while maintaining the system's ability to filter poisoned updates, a combination verified in cross-institutional medical FL trials.23
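To make the parameter-level idea concrete, the sketch below shows selective encryption of a client's named gradient tensors. It is illustrative only: the sensitivity mask, the payload format, and the he_encrypt callable (a stand-in for whatever homomorphic-encryption backend is deployed) are assumptions, not the published FedML–HE or ByITFL interfaces.

def selective_encrypt(named_grads, sensitivity_mask, he_encrypt):
    """Parameter-level selective encryption: only tensors flagged as sensitive are encrypted.

    named_grads      : dict mapping parameter name -> gradient tensor
    sensitivity_mask : dict mapping parameter name -> bool (True = privacy-sensitive)
    he_encrypt       : callable wrapping the chosen HE backend (placeholder here)
    """
    payload = {}
    for name, grad in named_grads.items():
        if sensitivity_mask.get(name, False):
            payload[name] = ("enc", he_encrypt(grad))  # ciphertext for sensitive parameters
        else:
            payload[name] = ("plain", grad)            # plaintext keeps communication overhead low
    return payload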
2.7. Deep-learning protocol

In the previous section, the FL architecture was outlined. However, the specific deep-learning models that might be integrated to address future pandemics were not discussed. As future pandemics are uncertain, COVID-19 and lung cancer were taken as benchmarks to illustrate potential adaptations of deep-learning protocols.

In this context, the study by Gogineni et al.29 was chosen as a reference. The study investigated the potential of deep-learning models for automated COVID-19 detection using chest X-ray images, presenting a promising alternative to the current gold standard, the reverse-transcription polymerase chain reaction (RT-PCR) test. This choice has two distinct merits. First, COVID-19 represents the most recent pandemic for benchmarking. Second, the study uses images as the input data type. This choice is crucial because image data is representative of various real-world medical datasets and can therefore serve as a reliable proxy (since the exact nature of future pandemics is unknown). Moreover, images are among the most complex and prevalent data types in the medical arena; therefore, demonstrating with proxy images offers one of the most effective benchmarking strategies to mimic real-world complexities. Although videos represent a more complex data type, they are far less frequent in the medical context and, in a crude sense, can be considered a sequential stacking of images with time-series data (audio, if any) added in an additional channel.

In the study by Gogineni et al.,29 several CNN architectures were implemented, including ResNet34,30 SeResNext50,31,32 DenseNet121,33 and EfficientNet.34 These models were chosen for their distinct advantages in image classification tasks. ResNet34 utilizes skip connections, allowing for efficient training of deep networks, while SeResNext50 incorporates squeeze-and-excitation blocks, which recalibrate channel-wise feature responses for improved representational capacity. Meanwhile, DenseNet121, with its dense connections between layers, facilitates feature reuse and enhances information flow. Finally, EfficientNet models are designed using neural architecture search, optimizing the balance between accuracy and computational efficiency.34 Transfer learning, using the ImageNet dataset,35 was employed to improve model performance on the relatively limited medical image dataset.36 A learning rate scheduler and a one-cycle training policy were also implemented for better convergence and generalization.
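A minimal PyTorch sketch of such a transfer-learning setup is shown below, pairing an ImageNet-pretrained ResNet34 with a three-class head and a one-cycle learning-rate schedule. The hyperparameters, the random stand-in dataset, and the AdamW optimizer are placeholder assumptions for illustration, not the settings reported by Gogineni et al.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# ImageNet-pretrained ResNet34 with its classification head replaced for three classes
# (COVID-19, normal, pneumonia); the weights argument requires torchvision >= 0.13.
model = models.resnet34(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 3)

# Stand-in dataset: random tensors shaped like 224x224 RGB chest X-rays.
# In practice this would be the (assumed) chest X-ray DataLoader.
train_loader = DataLoader(
    TensorDataset(torch.randn(32, 3, 224, 224), torch.randint(0, 3, (32,))),
    batch_size=8,
)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One-cycle learning-rate policy, stepped once per batch; max_lr and the epoch
# count are placeholder values, not those reported in the reference study.
epochs = 2
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-3, epochs=epochs, steps_per_epoch=len(train_loader)
)

for _ in range(epochs):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        scheduler.step()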
The models' performance demonstrated encouraging results.29 ResNet34 and DenseNet121 achieved the highest overall accuracy of 94.09% in classifying images as COVID-19, normal, or pneumonia. This accuracy is considerably higher than the typical 70 – 80% sensitivity