Abstract
A method for approximating ROC and PR curves in federated learning estimates quantiles of the prediction score distribution under distributed differential privacy, achieving high approximation accuracy with minimal communication and strong privacy guarantees.
Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves are fundamental tools for evaluating machine learning classifiers, offering detailed insights into the trade-off between true positive rate and false positive rate (ROC) or between precision and recall (PR). However, in Federated Learning (FL) scenarios, where data is distributed across multiple clients, computing these curves is challenging due to privacy and communication constraints. Specifically, the server cannot access the raw prediction scores and class labels that are used to compute the ROC and PR curves in a centralized setting. In this paper, we propose a novel method for approximating ROC and PR curves in a federated setting by estimating quantiles of the prediction score distribution under distributed differential privacy. We provide theoretical bounds on the Area Error (AE) between the true and estimated curves, demonstrating the trade-offs between approximation accuracy, privacy, and communication cost. Empirical results on real-world datasets demonstrate that our method achieves high approximation accuracy with minimal communication and strong privacy guarantees, making it practical for privacy-preserving model evaluation in federated systems.
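To make the high-level idea concrete, the sketch below shows one plausible way such an approximation could work: thresholds are taken from quantiles of the score distribution, each client reports noisy per-threshold counts, and the server assembles approximate ROC points from the aggregated counts. This is an illustrative simplification, not the paper's protocol: it uses plain per-client Laplace noise as a stand-in for the distributed DP mechanism and the true pooled quantiles instead of privately estimated ones, and all function and variable names are hypothetical.

```python
# Illustrative sketch only: approximate an ROC curve from noisy per-client counts
# at quantile-based thresholds. The noise model (per-client Laplace) and the
# threshold selection (true quantiles) are simplifications; the paper instead
# estimates quantiles privately and uses a distributed DP mechanism.
import numpy as np


def noisy_threshold_counts(scores, labels, thresholds, eps, rng):
    """One client's noisy counts at each threshold.

    Returns (tp, fp, p, n): true/false positives per threshold plus the local
    positive/negative totals, each perturbed with Laplace(1/eps) noise as a
    simple stand-in for the distributed DP mechanism (each count has
    sensitivity 1 with respect to one example).
    """
    pos = labels == 1
    tp = np.array([(scores[pos] >= t).sum() for t in thresholds], dtype=float)
    fp = np.array([(scores[~pos] >= t).sum() for t in thresholds], dtype=float)
    tp += rng.laplace(scale=1.0 / eps, size=len(thresholds))
    fp += rng.laplace(scale=1.0 / eps, size=len(thresholds))
    p = pos.sum() + rng.laplace(scale=1.0 / eps)
    n = (~pos).sum() + rng.laplace(scale=1.0 / eps)
    return tp, fp, p, n


rng = np.random.default_rng(0)
num_clients, eps = 20, 1.0

# Simulate clients: each holds local prediction scores and binary labels.
clients = []
for _ in range(num_clients):
    y = rng.integers(0, 2, size=500)
    s = np.clip(0.4 * y + rng.normal(0.3, 0.2, size=500), 0.0, 1.0)
    clients.append((s, y))

# Step 1: choose thresholds as quantiles of the score distribution.
# (Here the pooled true quantiles; the paper estimates these under DP.)
all_scores = np.concatenate([s for s, _ in clients])
thresholds = np.quantile(all_scores, np.linspace(0.0, 1.0, 33))

# Step 2: the server aggregates noisy counts from all clients.
TP = FP = P = N = 0.0
for s, y in clients:
    tp, fp, p, n = noisy_threshold_counts(s, y, thresholds, eps, rng)
    TP, FP, P, N = TP + tp, FP + fp, P + p, N + n

# Step 3: assemble approximate ROC points from the aggregated counts.
fpr = np.clip(FP / max(N, 1.0), 0.0, 1.0)
tpr = np.clip(TP / max(P, 1.0), 0.0, 1.0)
for f, t in zip(fpr, tpr):
    print(f"FPR={f:.3f}  TPR={t:.3f}")
```

A PR curve can be approximated along the same lines: precision at each threshold is TP / (TP + FP) from the same aggregated noisy counts, and recall equals the TPR above.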
Community
TL;DR: A privacy-preserving method to approximate ROC and PR curves in federated learning with provable error guarantees.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Private Federated Multiclass Post-hoc Calibration (2025)
- Piquant$\varepsilon$: Private Quantile Estimation in the Two-Server Model (2025)
- Federated Survival Analysis with Node-Level Differential Privacy: Private Kaplan-Meier Curves (2025)
- DP-FedLoRA: Privacy-Enhanced Federated Fine-Tuning for On-Device Large Language Models (2025)
- DP-HYPE: Distributed Differentially Private Hyperparameter Search (2025)
- Federated Learning of Quantile Inference under Local Differential Privacy (2025)
- On Evaluating the Poisoning Robustness of Federated Learning under Local Differential Privacy (2025)