VeriFL: Communication-Efficient and Fast Verifiable Aggregation for Federated Learning

Xiaojie Guo, Zheli Liu, Jin Li, Jiqiang Gao, Boyu Hou, Changyu Dong, Thar Baker

Research output: Contribution to journal › Article › peer-review


Federated learning (FL) enables a large number of clients to collaboratively train a global model by sharing their gradients in each synchronized epoch of local training. However, the centralized server that aggregates these gradients can be compromised and forge the result in order to violate privacy or launch other attacks, which makes it necessary to verify the integrity of aggregation. In this work, we explore how to design communication-efficient and fast verifiable aggregation in FL. We propose VeriFL, a verifiable aggregation protocol with O(N) (dimension-independent) communication and O(N + d) computation for verification in each epoch, where N is the number of clients and d is the dimension of gradient vectors. Since d can be large in some real-world FL applications (e.g., 100K), our dimension-independent communication is especially desirable for clients with limited bandwidth and high-dimensional gradients. In addition, the proposed protocol can be used in FL settings where secure aggregation is needed or where a subset of clients drops out of protocol execution. Experimental results indicate that our protocol is efficient in these settings.
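One standard way to make verification communication dimension-independent, as the abstract describes, is a linearly homomorphic hash: each client broadcasts a constant-size digest of its gradient, and anyone can check the server's claimed aggregate against the product of those digests. The sketch below is illustrative only, not VeriFL's actual construction; the modulus, generators, and integer-encoded gradients are assumptions chosen for simplicity rather than cryptographic strength.

```python
import secrets

# Illustrative parameters (assumed, not from the paper):
# a multiplicative group mod a Mersenne prime.
P = 2**127 - 1  # illustrative modulus; not a vetted crypto parameter
D = 5           # gradient dimension (d)

# Public generators, one per gradient coordinate (shared setup).
gens = [secrets.randbelow(P - 2) + 2 for _ in range(D)]

def lh_hash(grad):
    """Linearly homomorphic hash: H(g) = prod gens[i]^g[i] mod P,
    so H(g + g') = H(g) * H(g') mod P."""
    acc = 1
    for h, gi in zip(gens, grad):
        acc = acc * pow(h, gi, P) % P
    return acc

# Each client hashes its (integer-encoded) gradient locally.
grads = [[1, 2, 3, 4, 5], [5, 4, 3, 2, 1], [2, 2, 2, 2, 2]]
client_hashes = [lh_hash(g) for g in grads]  # N constant-size digests

# Server aggregates gradients coordinate-wise.
agg = [sum(col) for col in zip(*grads)]

# Verification exchanges only the N digests (O(N) communication,
# independent of d), never the d-dimensional vectors themselves.
expected = 1
for h in client_hashes:
    expected = expected * h % P
assert lh_hash(agg) == expected  # a forged aggregate would fail here
```

Hashing costs O(d) per client and comparing digests costs O(N), matching the O(N + d) verification computation stated above; a real protocol would additionally bind digests with commitments to handle dropouts and malicious clients.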
Original language: English
Pages (from-to): 1736-1751
Number of pages: 15
Journal: IEEE Transactions on Information Forensics and Security
Publication status: Published - 7 Dec 2020


