Federated learning is a distributed machine learning framework that allows multiple parties to collaboratively train a shared model without sharing their original data with a central server. This approach reduces the risk of data leakage and effectively addresses the challenge of isolated data silos. However, it requires multiple rounds of interaction to transmit models or gradients between clients and the server, which incurs substantial communication costs and can still expose private data through the transmitted gradients. Consequently, some schemes compress the transmitted models or gradients through model pruning or knowledge distillation. However, they often overlook the privacy implications of the compressed gradients or models, potentially resulting in privacy breaches. Additionally, they are susceptible to client disconnections, which lead to incomplete or delayed model updates. To address these challenges, this paper proposes Octopus, a robust and privacy-preserving scheme for compressed gradients in federated learning. Octopus employs Sketch to compress gradients and embeds masks into the compressed gradients, thereby safeguarding the gradients while reducing communication overhead. Moreover, we propose an anti-disconnection strategy that supports model updates even when some clients are disconnected. Finally, we carry out comprehensive security and convergence analyses, along with extensive performance evaluations, demonstrating Octopus's robustness, stability, and efficiency over existing schemes.
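
To make the idea of masked, sketch-compressed gradients concrete, the following is a minimal, hypothetical Python sketch. It assumes a Count Sketch compressor and pairwise additive masks that cancel when the server aggregates; the function names, sketch dimensions, and masking scheme are illustrative assumptions and not the paper's actual construction.

```python
import numpy as np

def count_sketch(grad, rows, cols, seed=0):
    """Compress a 1-D gradient vector into a rows x cols Count Sketch table."""
    rng = np.random.default_rng(seed)
    d = grad.shape[0]
    buckets = rng.integers(0, cols, size=(rows, d))   # bucket hash per row
    signs = rng.choice([-1.0, 1.0], size=(rows, d))   # sign hash per row
    table = np.zeros((rows, cols))
    for r in range(rows):
        np.add.at(table[r], buckets[r], signs[r] * grad)
    return table

def masked_sketch(grad, rows, cols, pair_seed, sign):
    """Sketch a gradient, then add a pairwise mask; paired clients use opposite signs."""
    mask = np.random.default_rng(pair_seed).normal(size=(rows, cols))
    return count_sketch(grad, rows, cols) + sign * mask

# Two clients share pair_seed; their masks cancel at the server, and the sketch
# is linear, so the aggregate equals the sketch of the summed gradients.
d, rows, cols = 10_000, 5, 256
g1, g2 = np.random.randn(d), np.random.randn(d)
agg = (masked_sketch(g1, rows, cols, pair_seed=7, sign=+1)
       + masked_sketch(g2, rows, cols, pair_seed=7, sign=-1))
assert np.allclose(agg, count_sketch(g1 + g2, rows, cols))
```

The example only illustrates why compression and masking compose cleanly: the sketch is linear, so masked sketches can be aggregated before any unmasking, and the server never observes an individual client's compressed gradient in the clear.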