Literature Database Entry

tao2024communication


Youming Tao, Cheng-Long Wang, Miao Pan, Dongxiao Yu, Xiuzhen Cheng, and Di Wang, "Communication Efficient and Provable Federated Unlearning," Proceedings of the VLDB Endowment, vol. 17, no. 5, pp. 1119–1131, January 2024.


Abstract

We study federated unlearning, a novel problem of eliminating the impact of specific clients or data points on the global model learned via federated learning (FL). This problem is driven by the right to be forgotten and the privacy challenges in FL. We introduce a new framework for exact federated unlearning that meets two essential criteria: communication efficiency and exact unlearning provability. To our knowledge, this is the first work to tackle both aspects coherently. We begin by giving a rigorous definition of exact federated unlearning, which guarantees that the unlearned model is statistically indistinguishable from one trained without the deleted data. We then pinpoint the key property that enables fast exact federated unlearning: total variation (TV) stability, which measures the sensitivity of the model parameters to slight changes in the dataset. Leveraging this insight, we develop a TV-stable FL algorithm called FATS, which modifies the classical FedAvg algorithm for TV stability and employs local SGD with periodic averaging to reduce the number of communication rounds. We also design efficient unlearning algorithms for FATS under two settings: client-level and sample-level unlearning. We provide theoretical guarantees for our learning and unlearning algorithms, proving that they achieve exact federated unlearning with reasonable convergence rates for both the original and unlearned models. We empirically validate our framework on six benchmark datasets and show its superiority over state-of-the-art methods in terms of accuracy, communication cost, computation cost, and unlearning efficacy.
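The abstract mentions that FATS builds on local SGD with periodic averaging, the communication pattern underlying FedAvg. The following is a minimal illustrative sketch of that pattern only, on a toy scalar least-squares problem; the loss, client data, and hyperparameters are assumptions for illustration and are not the paper's actual FATS algorithm or experimental setup.

```python
import random

def local_sgd(w, data, lr, steps):
    """Run `steps` SGD steps on the toy loss 0.5 * (w - x)^2."""
    for _ in range(steps):
        x = random.choice(data)
        grad = w - x          # gradient of 0.5 * (w - x)^2 w.r.t. w
        w -= lr * grad
    return w

def fedavg(client_data, rounds=50, local_steps=5, lr=0.1, seed=0):
    """Local SGD with periodic averaging: each round, every client runs
    several local steps from the current global model, then the server
    averages the client models (one communication round per averaging)."""
    random.seed(seed)
    w_global = 0.0
    for _ in range(rounds):
        client_models = [local_sgd(w_global, data, lr, local_steps)
                         for data in client_data]
        w_global = sum(client_models) / len(client_models)  # periodic averaging
    return w_global

clients = [[1.0, 1.2], [2.8, 3.0]]   # two hypothetical clients, toy scalar data
w = fedavg(clients)
print(w)  # settles near the overall data mean (~2.0)
```

Because clients communicate only once per round rather than per gradient step, the number of communication rounds is cut by the local-step count; FATS additionally constrains this procedure so the trained model is TV-stable, which is what makes exact unlearning cheap.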

Quick access

Original Version DOI (at publisher's web site)
BibTeX

Contact

Youming Tao
Cheng-Long Wang
Miao Pan
Dongxiao Yu
Xiuzhen Cheng
Di Wang

BibTeX reference

@article{tao2024communication,
    author    = {Tao, Youming and Wang, Cheng-Long and Pan, Miao and Yu, Dongxiao and Cheng, Xiuzhen and Wang, Di},
    title     = {{Communication Efficient and Provable Federated Unlearning}},
    journal   = {Proceedings of the VLDB Endowment},
    volume    = {17},
    number    = {5},
    pages     = {1119--1131},
    month     = jan,
    year      = {2024},
    issn      = {2150-8097},
    doi       = {10.14778/3641204.3641220},
    publisher = {VLDB Endowment},
}

Copyright notice

Links to final or draft versions of papers are presented here to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted or distributed for commercial purposes without the explicit permission of the copyright holder.

The following applies to all papers listed above that have IEEE copyrights: Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

The following applies to all papers listed above that are in submission to IEEE conference/workshop proceedings or journals: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

The following applies to all papers listed above that have ACM copyrights: ACM COPYRIGHT NOTICE. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept., ACM, Inc., fax +1 (212) 869-0481, or permissions@acm.org.

The following applies to all SpringerLink papers listed above that have Springer Science+Business Media copyrights: The original publication is available at www.springerlink.com.

This page was automatically generated using BibDB and bib2web.