FedPrompt: Communication-Efficient and Privacy-Preserving Prompt Tuning in Federated Learning

Abstract

Federated learning (FL) enables global model training on decentralized data in a privacy-preserving way. However, for tasks that build on pre-trained language models (PLMs) with massive parameters, the communication cost is considerable. Prompt tuning, which tunes soft prompts without modifying the PLM, has achieved excellent performance as a new learning paradigm. In this paper, we combine these methods and explore the effect of prompt tuning under FL. We propose “FedPrompt”, which studies prompt tuning under FL with split aggregation, and show that split aggregation greatly reduces the communication cost, transmitting only 0.01% of the PLMs’ parameters, with little decrease in accuracy on both IID and Non-IID data distributions. We further conduct backdoor attacks by data poisoning on FedPrompt. Experiments show that such attacks achieve a quite low attack success rate and cannot inject a backdoor effectively, demonstrating the robustness of FedPrompt.
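Below is a minimal sketch of the split-aggregation idea described above, assuming a FedAvg-style server that averages only the trainable soft-prompt embeddings while each client keeps its frozen PLM locally. The class and function names (`SoftPrompt`, `aggregate_prompts`) are hypothetical illustrations, not the paper's actual code.

```python
# Hypothetical sketch: federated averaging applied only to soft-prompt
# parameters; the frozen PLM never leaves the clients, so the upload is
# a tiny fraction (~0.01%) of the full model size.
import torch
import torch.nn as nn


class SoftPrompt(nn.Module):
    """Trainable prompt embeddings prepended to the frozen PLM's input embeddings."""

    def __init__(self, prompt_len: int = 20, embed_dim: int = 768):
        super().__init__()
        self.embeddings = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim)
        batch = input_embeds.size(0)
        prompt = self.embeddings.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)


def aggregate_prompts(client_prompts, client_weights):
    """FedAvg over soft prompts only: weighted average of clients' prompt embeddings."""
    total = float(sum(client_weights))
    weighted = torch.stack(
        [p.embeddings.detach() * (w / total)
         for p, w in zip(client_prompts, client_weights)]
    ).sum(dim=0)
    prompt_len, embed_dim = client_prompts[0].embeddings.shape
    global_prompt = SoftPrompt(prompt_len, embed_dim)
    with torch.no_grad():
        global_prompt.embeddings.copy_(weighted)
    return global_prompt
```

In this sketch, each round the server broadcasts only `global_prompt` to the clients, who tune it locally against their frozen PLM and return the updated embeddings for the next aggregation.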

Publication
In 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
赵皓东
PhD Student
杜巍
PhD Student
李方圻
PhD Student
刘功申
Professor