ISBN: 978-981-18-1791-5 DOI: 10.18178/wcse.2021.06.006
Adaptive Privacy-preserving and Shuffling Aggregation in Federated Learning
Abstract— Deep learning models are often trained on data sets that contain sensitive information, such as personal shopping transactions, personal contacts, and medical records. A growing body of work therefore attempts to train neural networks subject to privacy constraints, specified by differential privacy or its divergence-based relaxations. However, these privacy definitions handle certain important primitives (composition and sub-sampling) poorly, which makes the privacy analysis of neural-network training loose or complex. Federated learning is a popular privacy-protection approach that collects local gradient information instead of raw data, and one way to obtain a rigorous privacy guarantee is to apply differential privacy to federated learning; previous work, however, did not give a practical solution. This paper proposes a new adaptive privacy-preserving and shuffling aggregation mechanism for federated learning. It perturbs each reported value further from its original while introducing lower variance, and model updates are sent through a split-and-shuffle model, avoiding the curse of dimensionality. A series of empirical evaluations on three commonly used data sets, MNIST, Fashion-MNIST, and CIFAR-10, shows that our solution not only achieves excellent deep-learning performance but also provides strong privacy protection.
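The paper itself presents no code; the following is only an illustrative sketch of the general split-and-shuffle idea the abstract refers to, not the authors' actual mechanism. All function names and parameters here are hypothetical. In this style of protocol, each client reports a single randomly chosen coordinate of its clipped gradient under local differential privacy (the "split" step, which keeps per-client noise independent of the dimension and so sidesteps the curse of dimensionality), and a shuffler permutes the reports before the server aggregates them.

```python
import random
import numpy as np

rng = np.random.default_rng(0)

def ldp_report(grad, epsilon, clip=1.0):
    """Client side (hypothetical sketch): sample one coordinate,
    clip it, and add Laplace noise calibrated to sensitivity 2*clip,
    giving an epsilon-LDP report of that single coordinate."""
    d = len(grad)
    j = int(rng.integers(d))                       # split: one coordinate
    v = float(np.clip(grad[j], -clip, clip))       # bound sensitivity
    noisy = v + rng.laplace(scale=2.0 * clip / epsilon)
    return j, noisy

def shuffle_and_aggregate(reports, d):
    """Shuffler + server side: a random permutation hides which client
    sent which report; the server then averages reports per coordinate."""
    random.shuffle(reports)                        # shuffle model step
    sums = np.zeros(d)
    counts = np.zeros(d)
    for j, v in reports:
        sums[j] += v
        counts[j] += 1
    # per-coordinate mean of the noisy reports (0 if never sampled)
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)

# Usage: 1000 simulated clients, 8-dimensional gradients
grads = rng.normal(size=(1000, 8))
reports = [ldp_report(g, epsilon=1.0) for g in grads]
mean_est = shuffle_and_aggregate(reports, d=8)
```

Because each client releases only one noisy scalar, the total injected noise per client does not grow with the model dimension; the shuffler's permutation additionally amplifies the local privacy guarantee before the reports reach the server.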
Index Terms— federated learning, privacy preserving
He Huixian, Cao Zhenfu
East China Normal University, CHINA
Cite: He Huixian, Cao Zhenfu, "Adaptive Privacy-preserving and Shuffling Aggregation in Federated Learning," 2021 The 11th International Workshop on Computer Science and Engineering (WCSE 2021), pp. 37-41, Shanghai, China, June 19-21, 2021.