Clem 🤗 (clem) · PRO
2731 followers · 1920 following
http://huggingface.co
clementdelangue
clmnt
clementdelangue
clem.hf.co
AI & ML interests
multi-modal, time-series, biology and chemistry
Recent Activity
upvoted an article about 3 hours ago: Introducing the Palmyra-mini family: Powerful, lightweight, and ready to reason!
liked a model about 4 hours ago: pytorch/Phi-4-mini-instruct-AWQ-INT4
reacted to Kseniase's post with 🚀 about 4 hours ago:
10 awesome advanced LoRA approaches

Low-Rank Adaptation (LoRA) is the go-to method for efficient model fine-tuning: it adds small low-rank matrices instead of retraining the full model. The field isn't standing still – new LoRA variants push the limits of efficiency, generalization, and personalization. So we're sharing 10 of the latest LoRA approaches you should know about (two illustrative sketches, of plain LoRA and of the routing variant in item 1, follow this post):

1. Mixture-of-LoRA-experts → https://huggingface.co/papers/2509.13878
Adds multiple low-rank adapters (LoRAs) to a model's layers, with a routing mechanism that activates the most suitable ones for each input. This lets the model adapt better to new, unseen conditions.

2. Amortized Bayesian Meta-Learning for LoRA (ABMLL) → https://huggingface.co/papers/2508.14285
Balances global and task-specific parameters within a Bayesian framework to improve uncertainty calibration and generalization to new tasks without high memory or compute costs.

3. AutoLoRA → https://huggingface.co/papers/2508.02107
Automatically retrieves and dynamically aggregates public LoRAs for stronger text-to-image (T2I) generation.

4. aLoRA (Activated LoRA) → https://huggingface.co/papers/2504.12397
Only applies LoRA after invocation, letting the model reuse the base model's KV cache instead of recomputing the full turn's KV cache. Efficient in multi-turn conversations.

5. LiLoRA (LoRA in LoRA) → https://huggingface.co/papers/2508.06202
Shares the LoRA matrix A across tasks and additionally low-rank-decomposes matrix B to cut parameters in continual vision-text MLLMs.

6. Sensitivity-LoRA → https://huggingface.co/papers/2509.09119
Dynamically assigns ranks to weight matrices based on their sensitivity, measured with second-order derivatives.

Read further below ↓

Also, subscribe to the Turing Post: https://www.turingpost.com/subscribe
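The core trick shared by all of these variants: freeze the pretrained weight W and learn a low-rank update B·A on the side. Below is a minimal PyTorch sketch of that idea; the class name LoRALinear and the r/alpha defaults are illustrative choices, not taken from any of the papers above.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, with A (r x in) and B (out x r)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the adapter is trained
        # Standard LoRA init: A small random, B zero, so the update starts
        # at 0 and the wrapped layer initially matches the base model.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)

# Usage: swap a layer in, then train; only A and B receive gradients.
layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(4, 768))
```

With r much smaller than the layer width, the adapter adds only r * (in + out) trainable parameters per layer, which is why LoRA fine-tuning is so much cheaper than full retraining.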
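And a sketch of the routing idea in item 1: several LoRA experts attached to one layer, with a learned router that weights each expert per input. The name MoLoRALinear and the dense softmax mixture are assumptions for illustration; the paper's exact routing scheme may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoLoRALinear(nn.Module):
    """Frozen linear layer plus n_experts LoRA adapters mixed by a per-input router."""
    def __init__(self, base: nn.Linear, n_experts: int = 4, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        # One (A, B) pair per expert; B starts at zero as in plain LoRA.
        self.A = nn.Parameter(torch.randn(n_experts, r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_experts, base.out_features, r))
        self.router = nn.Linear(base.in_features, n_experts)
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = F.softmax(self.router(x), dim=-1)          # (batch, n_experts)
        # Per-expert low-rank updates, shape (batch, n_experts, out_features).
        delta = torch.einsum("bi,eri,eor->beo", x, self.A, self.B)
        return self.base(x) + self.scale * (gates.unsqueeze(-1) * delta).sum(dim=1)
```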
clem's models (10, sorted by recently updated)
clem/test_new_model_upload • Updated Mar 26
clem/ai-french-prime-minister • Updated Jul 2, 2024 • 19
clem/gemini • Updated Dec 6, 2023 • 90
clem/test • Updated Jul 27, 2023
clem/maxdekdt • Text-to-Image • Updated Jan 5, 2023 • 5 • 3
clem/friedeberg • Text-to-Image • Updated Dec 24, 2022 • 6
clem/autonlp-test3-2101787 • Text Classification • Updated Jun 29, 2021 • 10
clem/autonlp-test3-2101782 • Text Classification • Updated Jun 29, 2021 • 11
clem/autonlp-test3-2101779 • Text Classification • Updated Jun 29, 2021 • 9
clem/bert_portuguese • Updated Dec 18, 2020 • 1