Federated learning enables data owners to jointly train a neural network without sharing their personal data, making it possible to exploit the sensitive data generated by various Industrial Internet of Things (IIoT) devices. In traditional federated learning, however, each user sends its model parameters directly to the server, which increases the risk of privacy leakage. Several privacy-preserving solutions have been proposed to address this problem, but most of them either reduce model accuracy or incur substantial computation and communication overhead. In addition, federated learning remains exposed to model tampering, which can impair model accuracy. In this paper, we propose PPTFL, a Privacy-Preserving and Traceable Federated Learning framework with efficient performance. Specifically, we first propose Hierarchical Aggregation Federated Learning (HAFL), which protects privacy with low overhead and is well suited to IIoT scenarios. We then combine federated learning with blockchain and IPFS to make the parameters traceable and tamper-proof. Extensive experiments demonstrate the practical performance of PPTFL.
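To illustrate the general idea of hierarchical aggregation, the sketch below shows a two-level weighted average: clients are grouped (e.g. under edge aggregators), each group aggregates its members' parameters, and the server then aggregates the group-level results. This is a minimal, hypothetical illustration of hierarchical aggregation in general; the actual HAFL protocol, including its privacy mechanisms and grouping strategy, is specified in the paper itself.

```python
# Hypothetical two-level (hierarchical) aggregation sketch.
# Each client contributes (parameter_vector, n_samples); groups
# aggregate locally, then the server aggregates group results.

def weighted_average(params_list, weights):
    """Weighted average of equal-length parameter vectors."""
    total = sum(weights)
    dim = len(params_list[0])
    return [
        sum(w * p[i] for p, w in zip(params_list, weights)) / total
        for i in range(dim)
    ]

def hierarchical_aggregate(groups):
    """groups: list of groups; each group is a list of (params, n_samples).

    First aggregate within each group, then aggregate the group-level
    results at the server, weighting each group by its total sample count.
    """
    group_params, group_sizes = [], []
    for group in groups:
        params = [p for p, _ in group]
        sizes = [n for _, n in group]
        group_params.append(weighted_average(params, sizes))
        group_sizes.append(sum(sizes))
    # Server-side aggregation over group-level results.
    return weighted_average(group_params, group_sizes)
```

With sample-count weighting at both levels, this two-level scheme yields the same global model as flat federated averaging; the hierarchy changes who sees individual client updates, not the aggregate.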
Funding Information:
This work was supported by the National Key Research & Development Program of China (No. 2020YFC2006204) and the National Natural Science Foundation of China (No. 62172042).
Keywords:
- Federated learning
- Hierarchical aggregation
- Privacy protection