Abstract
Federated learning faces significant data privacy threats, including inference attacks, model inversion attacks, and poisoning attacks. Existing defenses struggle to balance privacy, security, and accuracy, resulting in suboptimal performance; many also lengthen training and communication time, increasing costs and reducing overall system efficiency. This paper proposes a “gradient whispering” covert communication scheme to address these issues. By subtly adjusting gradients during federated learning, the optimization path can be altered while model efficacy is preserved. “Gradient whispering” introduces two embedding schemes, gradient direction-based embedding and gradient magnitude-based embedding, designed to incorporate hidden information during the iterative updates of AI models. The two schemes can be applied independently or in combination, increasing the flexibility of the embedding process; used together, they further expand the embedding capacity, maximizing the effectiveness of information embedding. Experiments on the MNIST and CIFAR-10 datasets demonstrate that model accuracy remains stable after embedding, with fluctuations under 0.3%. Two-sample Kolmogorov–Smirnov tests and Kullback–Leibler divergence analysis show no statistically significant difference between the pre- and post-embedding gradient distributions, and peak signal-to-noise ratio values of 40 to 50 dB indicate strong similarity between the embedded and original gradients, concealing the hidden information while guaranteeing model stability.
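The abstract does not spell out the embedding mechanics, but a minimal sketch can make the two schemes concrete. The sketch below assumes the direction scheme encodes bits in the signs of selected low-magnitude gradient components and the magnitude scheme encodes bits in the parity of a quantized magnitude; the function names (`embed_direction`, `embed_magnitude`, `extract`), the component-selection rule, and the step size are illustrative assumptions, not the paper's method.

```python
import numpy as np

def embed_direction(grad: np.ndarray, bits: list[int], idx: np.ndarray) -> np.ndarray:
    """Direction-based embedding (illustrative): encode each bit in the sign
    of a preselected component: bit 1 -> positive, bit 0 -> negative.
    Low-magnitude components are chosen so the perturbation barely moves
    the optimization path."""
    g = grad.copy()
    for bit, i in zip(bits, idx):
        g[i] = abs(g[i]) if bit == 1 else -abs(g[i])
    return g

def embed_magnitude(grad: np.ndarray, bits: list[int], idx: np.ndarray,
                    step: float = 1e-4) -> np.ndarray:
    """Magnitude-based embedding (illustrative): quantize a component's
    magnitude to an even multiple of `step` for bit 0 and an odd multiple
    for bit 1, leaving its sign (direction) untouched."""
    g = grad.copy()
    for bit, i in zip(bits, idx):
        q = round(abs(g[i]) / step)
        if q % 2 != bit:
            q += 1                      # shift to the nearest matching parity
        g[i] = np.sign(g[i]) * q * step
    return g

def extract(grad: np.ndarray, idx: np.ndarray, step: float = 1e-4,
            scheme: str = "direction") -> list[int]:
    """Recover the embedded bits under the matching scheme."""
    if scheme == "direction":
        return [1 if grad[i] > 0 else 0 for i in idx]
    return [int(round(abs(grad[i]) / step)) % 2 for i in idx]

rng = np.random.default_rng(0)
grad = rng.normal(scale=1e-2, size=1000)    # stand-in for a layer gradient
idx = np.argsort(np.abs(grad))[:16]         # 16 smallest-magnitude slots
bits = rng.integers(0, 2, size=16).tolist()

assert extract(embed_direction(grad, bits, idx), idx, scheme="direction") == bits
assert extract(embed_magnitude(grad, bits, idx), idx, scheme="magnitude") == bits
```

Because the two channels modulate independent attributes of the same component (sign versus quantized magnitude), applying both to one component can carry two bits, which is consistent with the abstract's claim that combining the schemes expands embedding capacity.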
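The stealth evaluation named in the abstract (two-sample Kolmogorov–Smirnov test, Kullback–Leibler divergence, PSNR) can likewise be sketched in a few lines. Here a small random perturbation stands in for the embedding change; the `psnr` and `kl_divergence` helpers, the histogram binning, and the PSNR reference peak (the gradient's maximum absolute value) are assumed conventions, not definitions taken from the paper.

```python
import numpy as np
from scipy.stats import entropy, ks_2samp

def psnr(original: np.ndarray, embedded: np.ndarray) -> float:
    # PSNR in dB; the signal peak is taken as the original gradient's
    # maximum absolute value (an assumed convention for gradients).
    mse = np.mean((original - embedded) ** 2)
    peak = np.max(np.abs(original))
    return 10.0 * np.log10(peak ** 2 / mse)

def kl_divergence(p_samples: np.ndarray, q_samples: np.ndarray,
                  bins: int = 100) -> float:
    # KL divergence between histogram estimates over a shared range;
    # a small epsilon keeps the log well defined on empty bins.
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    eps = 1e-12
    return float(entropy(p + eps, q + eps))  # entropy(p, q) = KL(p || q)

rng = np.random.default_rng(1)
original = rng.normal(scale=1e-2, size=10_000)     # stand-in gradient
embedded = original.copy()
embedded[:16] += rng.normal(scale=1e-5, size=16)   # tiny embedding-like change

stat, p_value = ks_2samp(original, embedded)
print(f"KS statistic={stat:.4f}, p-value={p_value:.3f}")  # high p: same distribution
print(f"KL divergence={kl_divergence(original, embedded):.2e}")
print(f"PSNR={psnr(original, embedded):.1f} dB")
```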
Original language | English |
---|---|
Article number | 104118 |
Number of pages | 9 |
Journal | Journal of Information Security and Applications |
Volume | 93 |
DOIs | |
Publication status | Published - 12 Jun 2025 |
Bibliographical note
Publisher Copyright: © 2025 Elsevier Ltd
Keywords
- Information hiding
- Model security
- Covert channel
- Decentralized federated learning
- Capacity