To bring more intelligence to edge systems, Federated Learning (FL) has been proposed as a privacy-preserving mechanism for training a globally shared model on the massive amounts of user-generated data held on devices. FL enables multiple clients to collaboratively train a machine learning model while keeping the raw training data local. When the dataset is horizontally partitioned, existing FL algorithms can aggregate CNN models received from decentralized clients. However, they cannot be applied to scenarios where the dataset is vertically partitioned. This manuscript addresses image classification in a vertical FL setting in which each participant holds only an incomplete piece of every image sample. To this end, the paper proposes AdptVFedConv to tackle this issue, enabling CNN training without revealing raw data. Unlike conventional FL algorithms, which share model parameters in every communication round, AdptVFedConv shares hidden feature representations instead. Each client fine-tunes a local feature extractor and transmits the extracted feature representations to the backend server. On the server side, a classifier model is trained with the concatenated feature representations as input and the ground-truth labels as output. Furthermore, we put forward a model transfer method and a replication padding trick to improve final performance. Extensive experiments demonstrate that the accuracy of AdptVFedConv is close to that of a centralized model.
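The data flow described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' AdptVFedConv implementation: the "feature extractor" is a stand-in (simple average pooling) for each client's fine-tuned CNN, and all function names, shapes, and values are hypothetical.

```python
def local_feature_extractor(image_piece):
    """Stand-in for a client's fine-tuned CNN feature extractor.
    Here it just average-pools the piece's pixel values; in the
    paper's setting this would be a local CNN backbone."""
    flat = [p for row in image_piece for p in row]
    return [sum(flat) / len(flat)]

def server_concatenate(client_features):
    """Server side: concatenate the feature vectors received from
    all clients into a single classifier input."""
    combined = []
    for feats in client_features:
        combined.extend(feats)
    return combined

# Two clients each hold a vertical piece of the same image sample;
# only extracted features (not raw pixels) leave the clients.
piece_a = [[0.1, 0.2], [0.3, 0.4]]   # client A's image piece
piece_b = [[0.5, 0.6], [0.7, 0.8]]   # client B's image piece

features = [local_feature_extractor(p) for p in (piece_a, piece_b)]
classifier_input = server_concatenate(features)
```

The server would then train its classifier on `classifier_input` paired with the sample's ground-truth label; raw image pieces never leave the clients.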
Funding Information:
This work was supported by the National Key Research and Development Program of China under Grant 2020YFB1712101 and the National Natural Science Foundation of China (Nos. U1936218 and 62072037).
© 2022, The Author(s), under exclusive licence to Springer-Verlag GmbH Austria, part of Springer Nature.
- Convolutional neural network
- Federated learning
- Machine learning
- Transfer learning