trojblue
22 followers · 15 following
yada_cc
AI & ML interests
None yet
Recent Activity

liked a model 6 days ago: aoi-ot/VibeVoice-Large

reacted to anakin87's post with 🧠 8 days ago:
LLMs can leak their post-training data (RL included) 🧠

New interesting paper on this topic from Google DeepMind: https://huggingface.co/papers/2510.18554

It's known that Language Models memorize data that can be extracted via prompting. In this paper, the authors investigate this aspect:
- using open models, where prompting can be fully customized by the user, including special tokens.
- focusing on open-source models like Olmo, where the full training data is available.

🤔 How do they extract data?
During post-training (like SFT), new tokens such as <|user|> are introduced. The authors hypothesize that prompting the model with these tokens can make it output its alignment data (remember Magpie?). For example, for SFT, their extraction prompt is <|endoftext|><|user|>.

Evaluating memorization
The authors compare each sampled example with the original data using vector search with embedding similarity. They find that many outputs are semantically very similar to the original data, even if the exact words differ. Traditional string-matching algorithms underestimate memorization by 10x.

What about RL?
Surprisingly, the same technique works to extract data from Reinforcement Learning (PPO/GRPO) phases. This is counter-intuitive because the RL objective is not designed to increase sequence likelihoods (unlike SFT). Practical limitation: in this case, extraction relies on using the initial part of the training prompt, which is not generally public.

Are the extracted data effective for post-training?
Both in SFT and RL, the extracted data can be used to fine-tune models to performance similar to the originals. The authors suggest that model distillation, where a stronger model is used to drive the training of a weaker one, may be a form of indirect training on the original dataset.
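The two-step recipe the post describes (elicit samples by prompting with post-training special tokens, then score them against known training data by embedding similarity) can be illustrated with a minimal sketch. This is not the paper's code: the model ID, embedding model, sampling settings, reference data, and similarity threshold below are all illustrative assumptions.

```python
# Minimal sketch of the extraction + memorization-scoring idea from the post.
# All concrete names and values are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sentence_transformers import SentenceTransformer, util

MODEL_ID = "allenai/OLMo-2-1124-7B-Instruct"  # assumed open model (the post mentions Olmo)
EXTRACTION_PROMPT = "<|endoftext|><|user|>"   # SFT extraction prompt cited in the post

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Step 1: sample continuations of the special-token prefix; the hypothesis is
# that the model completes it with text resembling its alignment data.
inputs = tokenizer(EXTRACTION_PROMPT, return_tensors="pt").to(model.device)
generated = model.generate(
    **inputs, do_sample=True, temperature=1.0,
    max_new_tokens=256, num_return_sequences=8,
)
prompt_len = inputs["input_ids"].shape[1]
samples = [
    tokenizer.decode(seq[prompt_len:], skip_special_tokens=True)
    for seq in generated
]

# Step 2: compare samples to reference training examples with embedding cosine
# similarity, which catches paraphrased memorization that exact string matching
# would miss (the 10x underestimate mentioned in the post).
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
# In practice, load the model's open SFT dataset here (e.g., via `datasets`).
reference_examples = ["example instruction-response pair from the SFT data"]
sims = util.cos_sim(
    embedder.encode(samples, convert_to_tensor=True),
    embedder.encode(reference_examples, convert_to_tensor=True),
)
for sample, row in zip(samples, sims):
    score = row.max().item()
    if score > 0.9:  # illustrative threshold for "semantically very similar"
        print(f"possible memorization (cos={score:.2f}): {sample[:80]!r}")
```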
liked a model 12 days ago: vafipas663/Qwen-Edit-2509-Upscale-LoRA
Organizations
trojblue's datasets (15)
trojblue/million-song-subset
Viewer • Updated Apr 18 • 10k • 25 • 1

trojblue/danbooru2025-metadata
Viewer • Updated Apr 16 • 9.11M • 1.12k • 19

trojblue/sakugabooru2025
Updated Mar 31 • 241 • 4

trojblue/AVA-aesthetics-10pct-min50-10bins
Viewer • Updated Feb 23 • 25.5k • 77

trojblue/AVA-Huggingface
Viewer • Updated Feb 23 • 256k • 113 • 1

trojblue/AVA-subset-with-metrics
Viewer • Updated Feb 19 • 20.4k • 20

trojblue/niji-v5-768webp
Updated Feb 16 • 19 • 6

trojblue/test-HunyuanVideo-pixelart-videos
Viewer • Updated Dec 29, 2024 • 173 • 132 • 5

trojblue/test-HunyuanVideo-pixelart-images
Viewer • Updated Dec 25, 2024 • 140 • 98 • 4

trojblue/test-HunyuanVideo-anime-images
Viewer • Updated Dec 23, 2024 • 40 • 129 • 3

trojblue/test-anime-rating-v3-partial
Viewer • Updated Oct 11, 2024 • 55.5k • 32

trojblue/rvc-kanade-dataset
Viewer • Updated Oct 11, 2024 • 2.57k • 35 • 2

trojblue/random-captions
Viewer • Updated May 25, 2023 • 210 • 195

trojblue/sd1-bad-anatomy-images
Viewer • Updated Mar 13, 2023 • 183 • 63 • 5

trojblue/dreambooth-nai1-class-conditions
Viewer • Updated Mar 4, 2023 • 9.17k • 40 • 2