Combining homomorphic encryption and differential privacy in federated learning
Abstract
Recent work has investigated the relevance and practicality of techniques such as Differential Privacy (DP) and Homomorphic Encryption (HE) for strengthening training data privacy in Federated Learning protocols. As these two techniques address different sources of confidentiality threats (other clients/end users for the former, the aggregation server for the latter), they need to be combined consistently to bridge the gap towards more realistic deployment scenarios. In this paper, we achieve that goal by means of a novel stochastic quantization operator which allows us to establish DP guarantees even when the noise is both quantized and bounded, as required by the use of HE. The paper concludes with experiments on the FEMNIST dataset showing that the precision required to reach a state-of-the-art privacy/utility trade-off (which directly determines the HE parameters and, hence, the cost of HE operations) results in a computation time overhead attributable to HE of between 0.2% and 1.1% (depending on the key setup, either single-key or threshold) over the full training of a 500k-parameter model.
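To make the core idea concrete, below is a minimal, illustrative sketch of a stochastic quantization step of the kind the abstract alludes to: client updates are clipped (HE plaintexts live in a finite space, so values must be bounded), scaled to a fixed-point grid, and rounded up or down at random so the result is unbiased in expectation. The function name `stochastic_quantize` and the parameters `scale` and `clip` are hypothetical choices for this sketch, not the operator defined in the paper.

```python
import numpy as np

def stochastic_quantize(x: np.ndarray, scale: float, clip: float,
                        rng: np.random.Generator) -> np.ndarray:
    """Unbiased stochastic rounding of clipped values onto an integer grid.

    Illustrative sketch only: the paper's actual operator and its DP
    analysis are more involved. Here, values are clipped to [-clip, clip],
    scaled by `scale`, then rounded up with probability equal to the
    fractional part, so that E[output] equals the scaled, clipped input.
    """
    y = np.clip(x, -clip, clip) * scale
    low = np.floor(y)
    # Bernoulli rounding: round up with probability (y - low).
    q = low + (rng.random(y.shape) < (y - low))
    return q.astype(np.int64)

# Example: quantize a model update before homomorphic encryption.
rng = np.random.default_rng(0)
update = np.array([0.137, -0.042, 0.9])
print(stochastic_quantize(update, scale=1024.0, clip=1.0, rng=rng))
```

The unbiasedness of the rounding is what lets the aggregated (encrypted) sum of quantized updates track the true sum, while the bounded integer range keeps the values compatible with HE plaintext moduli.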