updated model parameters G^r, generating new model parameters for round r + 1, represented as:

$$G_{ij}^{r+1} \leftarrow \mathrm{train}\big(G_{ij}^{r},\, b_{ij}^{r+1}\big) \tag{VI}$$

The hierarchical communication process is repeated using HE for each subsequent round. This process continues until either the final round R is reached or the model converges. The workflow of our proposed framework is diagrammatically represented in Figure 3, and the algorithm is as follows:
Algorithm 1: Hierarchical federated learning with homomorphic encryption

Require: Initial global model W_0, rounds R, round count r = 0, local servers n, clients C_ij
Ensure: Final global model W*
1: Central server S broadcasts W_0 to local servers L_i, where 1 ≤ i ≤ n;
2: for r = 1 to R do
3:   Each L_i distributes W_0 to clients C_ij;
4:   Key authority shares encryption keys (pk_ij^r, sk_ij^r);
5:   for each client C_ij do
6:     Each client decrypts [[G^r]] as G^r, where 2 ≤ r ≤ R;
7:     Train local model on G^r and b_ij^r ⊆ D_ij to get G_ij^r;
8:     Encrypt G_ij^r and send [[G_ij^r]] to L_i;
9:   Each L_i aggregates the collected [[G_ij^r]] as [[G_i^r]] and sends it to S;
10:  S performs global aggregation to obtain [[G^r]] and updates W_{r+1};
11:  S sends W_{r+1} to local servers for client distribution;
12: S broadcasts the final model W* through local servers.
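To make a round of Algorithm 1 concrete, the following minimal Python sketch simulates steps 8-10 with additively homomorphic encryption. It is an illustration under simplifying assumptions rather than the paper's implementation: a single Paillier key pair from the open-source python-paillier (`phe`) package stands in for the per-round keys (pk_ij^r, sk_ij^r) issued by the key authority, and model updates are short gradient vectors of floats.

```python
# Minimal single-round sketch of Algorithm 1 using additively homomorphic
# Paillier encryption (python-paillier, `pip install phe`). One shared key
# pair stands in for the per-round keys issued by the key authority.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

def encrypt_update(update):
    """Client-side: encrypt each gradient component, i.e., form [[G_ij^r]]."""
    return [public_key.encrypt(g) for g in update]

def aggregate_encrypted(encrypted_updates):
    """Server-side: sum ciphertexts component-wise without ever decrypting."""
    total = encrypted_updates[0]
    for enc in encrypted_updates[1:]:
        total = [a + b for a, b in zip(total, enc)]
    return total

# Two local servers (L_1, L_2), each with two clients' plaintext updates.
local_clusters = [
    [[0.10, -0.20, 0.05], [0.08, -0.25, 0.02]],   # clients of L_1
    [[0.12, -0.18, 0.07], [0.09, -0.22, 0.04]],   # clients of L_2
]

# Steps 8-9: clients encrypt; each L_i aggregates its clients into [[G_i^r]].
cluster_sums = [
    aggregate_encrypted([encrypt_update(g) for g in cluster])
    for cluster in local_clusters
]

# Step 10: central server S aggregates the [[G_i^r]] into the global [[G^r]].
global_cipher = aggregate_encrypted(cluster_sums)

# Step 6 (next round): key-holding clients decrypt and average the aggregate.
n_clients = sum(len(c) for c in local_clusters)
global_update = [private_key.decrypt(c) / n_clients for c in global_cipher]
print(global_update)  # component-wise mean of all four client updates
```

Because Paillier ciphertexts add homomorphically, neither L_i nor S observes any individual client update in the clear; only the decrypted aggregate is ever exposed.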
2.2. Secure and reliable network architecture

We discuss the possible security challenges in the FL network by addressing both malicious and unintended disruptions caused by clients or communication vulnerabilities, for example, Byzantine attacks, gradient inversion, and packet sniffing. Solutions such as ByITFL,^14 trust bootstrapping via root datasets (FLTrust),^20 and client similarity analysis (e.g., FoolsGold)^21 ensure secure updates through trust scores and information-theoretic privacy guarantees. In addition, a cryptographic framework that blends ByITFL^14,22 and FedML-HE^23 to address the dual challenges of adversarial client behavior and privacy leakage in medical FL deployments is provided.
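As an illustration of the trust-scoring idea behind FLTrust, the sketch below weighs each client update by the ReLU-clipped cosine similarity between that update and a reference update the server computes on its small root dataset, after normalizing client magnitudes to the server update. This is a paraphrase of the published FLTrust rule, not code from the cited work.

```python
import numpy as np

def fltrust_aggregate(client_updates, server_update):
    """FLTrust-style robust aggregation (sketch).

    client_updates: list of 1-D np.ndarray gradients from clients
    server_update:  1-D np.ndarray gradient computed on the root dataset
    """
    s_norm = np.linalg.norm(server_update)
    weighted, total_trust = np.zeros_like(server_update), 0.0
    for g in client_updates:
        cos = g @ server_update / (np.linalg.norm(g) * s_norm + 1e-12)
        trust = max(cos, 0.0)  # ReLU: negative similarity gets zero weight
        rescaled = g * (s_norm / (np.linalg.norm(g) + 1e-12))  # match norms
        weighted += trust * rescaled
        total_trust += trust
    return weighted / (total_trust + 1e-12)

# A poisoned update pointing the "wrong way" receives near-zero weight.
honest = [np.array([0.10, -0.20, 0.05]), np.array([0.09, -0.22, 0.04])]
poisoned = [np.array([-5.0, 10.0, -2.5])]
root = np.array([0.10, -0.21, 0.05])
print(fltrust_aggregate(honest + poisoned, root))
```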
2.3. Threat model and security requirements

In FL, systems are susceptible to a range of attacks against security and privacy that compromise model performance and data privacy. The most critical of these issues occurs when malicious clients under an attacker's control submit poisoned updates, such as label-flipping or trimmed-mean attacks, to induce global model divergence. Experiments have shown that only 10% of such malicious clients may reduce model accuracy by up to 33.1%, which highlights the need for robustness against such attacks.^24
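For concreteness, a label-flipping poisoner needs no access to the training pipeline beyond its own data: it remaps labels before local training, so the resulting update is protocol-compliant yet pulls the global model off course. A hypothetical minimal example:

```python
import numpy as np

def flip_labels(y, num_classes):
    """Classic label-flipping poison: class c is relabeled as (C - 1 - c).

    The malicious client then trains normally on (X, flipped_y); the
    poisoned update is indistinguishable from an honest one at the
    protocol level, and only its direction in parameter space betrays it.
    """
    return (num_classes - 1) - y

y = np.array([0, 1, 2, 3, 4])
print(flip_labels(y, num_classes=5))  # -> [4 3 2 1 0]
```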
Another urgent concern is privacy leakage. Even if raw data is retained locally, leakage of gradient distributions might compromise sensitive information. Attacks on medical imaging data, for example, have resulted in up to 74.24% reconstruction success rates for all data samples in the Medical Segmentation Decathlon liver dataset^25 from model updates (for privacy parameter, ε = 20). Therefore, robust privacy-preserving mechanisms are essential.

A deployable FL system in healthcare should also be robust, i.e., capable of operating in unreliable network environments, especially in wireless configurations with packet loss rates as high as approximately 10% affecting as many as approximately 90% of users.^14,26 These interruptions cause stale or missing updates, leading to unwanted divergence during model training.
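One common mitigation, shown in the hypothetical sketch below (a generic pattern, not a mechanism from the cited works), is to aggregate only the updates that actually arrive in a round and discard anything older than a staleness bound, so a lossy wireless link shrinks the averaging set instead of corrupting it.

```python
def aggregate_received(updates, current_round, max_staleness=1):
    """Average only fresh updates; drop missing or stale ones (sketch).

    updates: dict client_id -> (round_produced, list-of-float gradient),
             containing only what survived the lossy network this round.
    """
    fresh = [g for (r, g) in updates.values()
             if current_round - r <= max_staleness]
    if not fresh:
        return None  # nothing usable arrived; keep the previous global model
    dim = len(fresh[0])
    return [sum(g[i] for g in fresh) / len(fresh) for i in range(dim)]

# Round 7: client "c3" dropped out entirely; "c2" is one round stale.
arrived = {"c1": (7, [0.10, -0.20]), "c2": (6, [0.08, -0.25])}
print(aggregate_received(arrived, current_round=7))  # mean of both
```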
To address the above challenges of privacy and security, a recent article by Xia et al. introduces ByITFL, an FL-based privacy-preserving secure aggregation scheme.^14 Here, Byzantine refers to clients that are malicious or behave arbitrarily, typically with the intention of poisoning training. ByITFL is designed to maintain the integrity and confidentiality of model updates during aggregation, without relying on computational assumptions. The scheme is information-theoretically private, i.e., even a semi-honest or curious server learns nothing beyond the aggregated gradients. Privacy is quantified using Shannon entropy (H):^22 for a finite field F, the entropy of a client's local update X, given the server's observations, must satisfy:

$$H(X \mid \mathrm{View}_{\mathrm{server}}) \geq \log|F| \tag{VII}$$

ensuring that individual model inputs are infeasible to reconstruct.
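Equation (VII) is attained, for example, by one-time-pad-style masking over a finite field: if a client adds a uniformly random field element to its quantized update, the masked value the server sees is itself uniform, so its entropy is exactly log|F| regardless of the secret. A small numeric sanity check over an assumed toy prime field:

```python
import math
import secrets
from collections import Counter

P = 101             # toy prime field F_p; log2|F| = log2(101) ~= 6.66 bits
secret_update = 42  # a client's quantized local update, as a field element

# Masked value = update + uniform random mask (mod p). The result is uniform
# over F_p no matter what the secret is, so observing it reveals nothing.
samples = [(secret_update + secrets.randbelow(P)) % P for _ in range(200_000)]

counts = Counter(samples)
empirical_H = -sum((c / len(samples)) * math.log2(c / len(samples))
                   for c in counts.values())
print(f"empirical H ~= {empirical_H:.3f} bits, log2|F| = {math.log2(P):.3f}")
```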
Moreover, to be immune to Byzantine attacks, ByITFL incorporates robustness requirements from distributed consensus theory. Specifically, the system can tolerate up to t malicious clients only if the total number of clients N satisfies the inequality:^14,27

$$N \geq 3t + 1 \tag{VIII}$$

This barrier ensures that there are sufficient correct clients to regulate the aggregation process so that the system can reject erroneous updates and achieve stable convergence.
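In deployment terms, inequality (VIII) caps the adversary budget at t = ⌊(N − 1)/3⌋; a hypothetical admission check makes the bound concrete:

```python
def max_byzantine_clients(n_clients: int) -> int:
    """Largest t with n_clients >= 3t + 1, i.e., floor((N - 1) / 3)."""
    return (n_clients - 1) // 3

def tolerates(n_clients: int, t_malicious: int) -> bool:
    """True iff the cohort satisfies inequality (VIII): N >= 3t + 1."""
    return n_clients >= 3 * t_malicious + 1

print(max_byzantine_clients(10))  # -> 3: ten clients absorb up to 3 attackers
print(tolerates(10, 4))           # -> False: 10 < 3*4 + 1
```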
2.4. ByITFL cryptographic framework

The ByITFL framework integrates three cryptographic primitives to achieve Byzantine resilience and

