# CANO Adversarial Privacy Evaluations

This dataset contains 68,885 experimental evaluations of six noise-injection privacy strategies (CANO, Gaussian, FGSM, PGD, Laplace, Carlini-Wagner) against three adaptive attacker models (Random Forest, Gradient Boosting, MLP) across 12 datasets, including the real FP-Stalker browser-fingerprint corpus (776 users, 13,674 fingerprints, 34 attributes; Vastel et al., IEEE S&P 2018).
Aggregate statistics in the paper are computed over 54,281 in-scope
configurations after excluding the 2-user cybersec_intrusion dataset
(binary task, not a k-class fingerprinting benchmark).
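The in-scope filter can be reproduced with a few lines of pandas. The column names below are assumed from the CSV description later in this card, and the toy frame stands in for the real `cano_evaluations.csv`:

```python
import pandas as pd

# Toy stand-in for cano_evaluations.csv (column names assumed from the
# file description; the real file has one row per configuration).
df = pd.DataFrame({
    "dataset": ["synth_small", "synth_small", "cybersec_intrusion", "fp_stalker"],
    "strategy": ["cano", "gaussian", "cano", "cano"],
    "accuracy_reduction": [0.63, 0.51, 0.90, 0.28],
})

# Drop the 2-user binary task before computing aggregate statistics,
# mirroring the paper's 54,281-row in-scope subset.
in_scope = df[df["dataset"] != "cybersec_intrusion"]

# Mean accuracy reduction per strategy over in-scope configurations.
summary = in_scope.groupby("strategy")["accuracy_reduction"].mean()
print(summary)
```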
## Files

| File | Description |
|---|---|
| `cano_evaluations.csv` | Per-config raw results: strategy, epsilon, attacker, rep, dataset, accuracy_reduction, transfer_reduction, noise L2/SNR/sparsity, KL divergence, sensitivity, etc. |
| `cano_paper_v2.{txt,pdf}` | Paper draft with refreshed tables and the FP-Stalker findings. |
| `evaluation_results.json` | Aggregate strategy comparison, per-dataset best, significance tests, RL training summary. |
| `feature_importance.json` | Permutation importance from the Phase 2 Random Forest (used as CANO's allocation prior). |
| `rl_optimization.json` | DQN policy training trajectory (30 rounds, 50 users, Gini convergence). |
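The significance tests in `evaluation_results.json` report a t statistic and an effect size per baseline. As an illustration only (the paper may use a different test), here is how such numbers are commonly produced with Welch's t-test and Cohen's d on toy data:

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for an unequal-variance two-sample comparison."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

def cohens_d(a, b):
    """Cohen's d effect size with pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1)
                      + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

rng = np.random.default_rng(0)
cano = rng.normal(0.20, 0.05, 1000)    # toy per-config accuracy reductions
gauss = rng.normal(0.34, 0.05, 1000)

print(f"t = {welch_t(cano, gauss):.2f}, d = {cohens_d(cano, gauss):.2f}")
# A negative d means the first sample reduces accuracy less than the second.
```

A p-value would then come from the t distribution, e.g. `scipy.stats.ttest_ind(a, b, equal_var=False)`.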
## Key findings

- **Transfer-attack profile:** CANO achieves a 2.35× transfer-to-adaptive ratio versus 1.04× for Gaussian. Feature-importance allocation produces more model-agnostic perturbations, which matters in the realistic deployment setting where the defender can't tailor noise to the attacker's exact model.
- **Real-world generalization:** On the FP-Stalker corpus, the CANO/Gaussian gap narrows substantially (0.276 vs 0.340 mean accuracy reduction) compared to small synthetic datasets (e.g., synth_50u_20s: 0.003 vs 0.514). Importance-weighted allocation generalizes better when feature importance reflects real attribute redundancy rather than synthetic noise.
- **Game-theoretic equilibrium:** RL-trained noise allocation converges to near-uniform (Gini = 0.009): uniform noise is the equilibrium against adaptive adversaries, challenging static feature-weighted defense assumptions.
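The near-uniform convergence claim can be sanity-checked with a standard Gini computation over a per-attribute noise allocation vector; a minimal numpy sketch (the allocation layout inside `rl_optimization.json` is assumed, not documented here):

```python
import numpy as np

def gini(x) -> float:
    """Gini coefficient of a non-negative allocation (0 = perfectly uniform)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Standard closed form over the sorted cumulative sums.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

uniform = np.ones(34)                  # equal noise over 34 attributes
skewed = np.array([1.0] + [0.0] * 33)  # all noise on a single attribute
print(gini(uniform))   # → 0.0
print(gini(skewed))    # → 33/34, the maximum for a single-holder vector
```

A value near 0.009, as reported for the trained DQN policy, indicates an allocation that is almost indistinguishable from uniform.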
## Citation

See `cano_paper_v2.txt` / `cano_paper_v2.pdf` for the methodology, full results tables, and references. The companion code lives at [github.com/tedrubin80/Adversarial-Privacy](https://github.com/tedrubin80/Adversarial-Privacy).