13 New types of LoRA

LoRA (Low-Rank Adaptation) is a popular lightweight method for fine-tuning AI models. Instead of updating the full model, it adds small trainable components (low-rank matrices) while keeping the original weights frozen. Only these adapters are trained.
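For reference, here's a minimal PyTorch sketch of that basic recipe (the rank, scaling, and names are illustrative, not from any specific paper):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer + trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # original weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 10, 768))   # only A and B receive gradients
```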

Recently, many interesting new LoRA variations have come out, so it’s a great time to take a look at these 13 clever approaches:

1. T-LoRA → T-LoRA: Single Image Diffusion Model Customization Without Overfitting (2507.05964)
A timestep-dependent LoRA method for adapting diffusion models with a single image. It dynamically adjusts updates and uses orthogonal initialization to reduce overlap, achieving a better fidelity–alignment balance than standard LoRA
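A rough sketch of the timestep-dependent idea (the linear rank schedule below is my assumption, not the paper's exact rule):

```python
import torch
import torch.nn as nn

class TimestepLoRA(nn.Module):
    """LoRA whose effective rank depends on the diffusion timestep (sketch).
    Orthogonal init is from the description; the rank schedule is an assumption."""
    def __init__(self, in_f, out_f, rank=8, max_t=1000):
        super().__init__()
        self.A = nn.Parameter(torch.empty(rank, in_f))
        self.B = nn.Parameter(torch.zeros(out_f, rank))
        nn.init.orthogonal_(self.A)          # orthogonal init to reduce overlap between components
        self.rank, self.max_t = rank, max_t

    def forward(self, x, t):
        # t: scalar diffusion timestep in [0, max_t]
        # use fewer rank components at noisier (larger) timesteps -- assumed schedule
        active = max(1, int(self.rank * (1 - t / self.max_t)))
        mask = torch.zeros(self.rank, device=x.device)
        mask[:active] = 1.0
        return x @ (self.A * mask[:, None]).T @ self.B.T
```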

2. SingLoRA → SingLoRA: Low Rank Adaptation Using a Single Matrix (2507.05566)
Simplifies LoRA by using only one small matrix instead of the usual two, multiplying it by its own transpose (like A × Aᵀ). It uses half the parameters of LoRA and avoids scale mismatch between different matrices
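The single-matrix trick is easy to sketch; this toy version assumes a square weight matrix:

```python
import torch
import torch.nn as nn

class SingLoRALinear(nn.Module):
    """Single-matrix LoRA: the update is A @ A.T (sketch, square weights assumed)."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        assert base.in_features == base.out_features, "sketch assumes a square weight"
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)

    def forward(self, x):
        delta = self.A @ self.A.T            # symmetric low-rank update, half the parameters
        return self.base(x) + x @ delta.T    # delta is symmetric; .T kept for clarity
```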

3. LiON-LoRA → LiON-LoRA: Rethinking LoRA Fusion to Unify Controllable Spatial and Temporal Generation for Video Diffusion (2507.05678)
Improves control and precision in video diffusion models when training data is limited. It builds on LoRA, adding 3 key principles: linear scalability, orthogonality, and norm consistency. A controllable token and modified self-attention enable smooth adjustment of motion
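Very loosely, the scaled-fusion idea might look like this (the per-adapter scales and normalization are illustrative, and the controllable token is not shown):

```python
import torch
import torch.nn as nn

class FusedLoRA(nn.Module):
    """Fuse several LoRA updates with per-adapter scales (very loose sketch of the
    linear-scalability / norm-consistency idea; not the paper's exact formulation)."""
    def __init__(self, in_f, out_f, n_adapters=2, rank=4):
        super().__init__()
        self.As = nn.ParameterList([nn.Parameter(torch.randn(rank, in_f) * 0.01) for _ in range(n_adapters)])
        self.Bs = nn.ParameterList([nn.Parameter(torch.zeros(out_f, rank)) for _ in range(n_adapters)])

    def forward(self, x, scales):
        # `scales` controls each adapter's strength (e.g. camera motion vs. object motion)
        out = 0.0
        for A, B, s in zip(self.As, self.Bs, scales):
            delta = B @ A
            delta = delta / (delta.norm() + 1e-8)   # norm consistency: comparable update magnitudes
            out = out + s * (x @ delta.T)
        return out
```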

4. LoRA-Mixer → LoRA-Mixer: Coordinate Modular LoRA Experts Through Serial Attention Routing (2507.00029)
Combines LoRA and mixture-of-experts (MoE) to adapt LLMs for multiple tasks. It dynamically routes task-specific LoRA experts into linear projections of attention modules, supporting both joint training and frozen expert reuse
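A simplified sketch of routing LoRA experts inside an attention projection (the routing signal and soft mixing are my simplifications):

```python
import torch
import torch.nn as nn

class LoRAMixerProjection(nn.Module):
    """Frozen projection + several LoRA experts mixed by a learned router (sketch)."""
    def __init__(self, base: nn.Linear, n_experts=4, rank=4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(n_experts, rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_experts, base.out_features, rank))
        self.router = nn.Linear(base.in_features, n_experts)

    def forward(self, x):                                       # x: (batch, seq, in_features)
        gates = torch.softmax(self.router(x), dim=-1)           # (b, s, n_experts)
        expert_out = torch.einsum('bsi,eri->bser', x, self.A)   # project into each expert's rank space
        expert_out = torch.einsum('bser,eor->bseo', expert_out, self.B)
        mixed = torch.einsum('bse,bseo->bso', gates, expert_out)
        return self.base(x) + mixed
```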

5. QR-LoRA → QR-LoRA: Efficient and Disentangled Fine-tuning via QR Decomposition for Customized Generation (2507.04599)
Separates content and style when combining multiple LoRA adapters. It implements QR decomposition to structure parameter updates, where the orthogonal Q matrix reduces interference between features, and the R matrix captures specific transformations
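Roughly, the QR-structured update could look like this (how much of R is trained, and how Q is shared across adapters, are assumptions here):

```python
import torch
import torch.nn as nn

class QRLoRALinear(nn.Module):
    """Sketch of a QR-structured update: keep the orthogonal Q from the base weight
    fixed and learn an update to R. Details here are assumptions."""
    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        Q, R = torch.linalg.qr(base.weight.data)           # base.weight: (out, in)
        self.register_buffer('Q', Q)                       # frozen orthogonal basis
        self.delta_R = nn.Parameter(torch.zeros_like(R))   # trainable update in R-space

    def forward(self, x):
        delta_W = self.Q @ self.delta_R                    # update expressed against the fixed Q basis
        return self.base(x) + x @ delta_W.T
```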

Read further below 👇

If you like it, also subscribe to the Turing Post: https://www.turingpost.com/subscribe
6. FreeLoRA → https://huggingface.co/papers/2507.01792
Enables training-free image generation with multiple subjects by fine-tuning each LoRA module on one subject. During inference, subject-aware activation applies modules only to their target tokens, ensuring clean, interference-free fusion.
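The subject-aware activation could be sketched like this (the masking interface is my assumption):

```python
import torch

def subject_aware_lora(x, base, loras, token_masks):
    """Apply each subject's LoRA update only at that subject's token positions (sketch).
    x: (batch, seq, d); loras: list of (A, B) pairs; token_masks: list of (batch, seq) masks."""
    out = base(x)
    for (A, B), mask in zip(loras, token_masks):
        delta = x @ A.T @ B.T                      # this subject's low-rank update
        out = out + delta * mask.unsqueeze(-1)     # ...applied only where its tokens appear
    return out
```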

7. LoRA-Augmented Generation (LAG) → https://huggingface.co/papers/2507.05346
Uses large collections of task-specific LoRA adapters without needing extra training or data. It selects and applies the most relevant adapters at each layer and token, excelling in knowledge-intensive tasks.
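One plausible reading in code (the dot-product relevance score and per-token top-1 pick are stand-ins, not the paper's actual retrieval mechanism):

```python
import torch

def lag_layer(x, base, adapter_A, adapter_B, adapter_keys):
    """Per-token selection from a library of LoRA adapters (sketch).
    x: (batch, seq, d); adapter_A: (N, r, d); adapter_B: (N, d_out, r); adapter_keys: (N, d)."""
    scores = torch.einsum('bsd,nd->bsn', x, adapter_keys)   # relevance of each adapter per token (assumed)
    best = scores.argmax(dim=-1)                            # (batch, seq) chosen adapter index
    A = adapter_A[best]                                     # (batch, seq, r, d)
    B = adapter_B[best]                                     # (batch, seq, d_out, r)
    delta = torch.einsum('bsd,bsrd->bsr', x, A)
    delta = torch.einsum('bsr,bsor->bso', delta, B)
    return base(x) + delta
```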

8. ARD-LoRA (Adaptive Rank Dynamic LoRA) → https://huggingface.co/papers/2506.18267
Adjusts the rank of LoRA adapters dynamically across transformer layers and heads by learning per-head scaling factors through a meta-objective. It balances performance and efficiency, using fewer parameters and reducing memory use.
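A sketch of per-head rank gating (the L1 penalty here is a simple stand-in for the paper's meta-objective):

```python
import torch
import torch.nn as nn

class AdaptiveRankLoRA(nn.Module):
    """LoRA with a learnable gate per rank component per head (sketch). Gates pushed
    toward zero by a sparsity penalty effectively shrink the rank."""
    def __init__(self, d_model=768, n_heads=12, rank=8):
        super().__init__()
        head_dim = d_model // n_heads
        self.A = nn.Parameter(torch.randn(n_heads, rank, head_dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_heads, head_dim, rank))
        self.gates = nn.Parameter(torch.ones(n_heads, rank))    # per-head, per-component scales

    def forward(self, x):                      # x: (batch, seq, n_heads, head_dim)
        z = torch.einsum('bshd,hrd->bshr', x, self.A) * self.gates
        return torch.einsum('bshr,hdr->bshd', z, self.B)

    def sparsity_loss(self):
        return self.gates.abs().mean()         # encourages unused components to switch off
```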

9. WaRA → https://huggingface.co/papers/2506.24092
Designed for vision tasks, it uses wavelet transforms to decompose weight updates into multiple resolutions, capturing both coarse and detailed patterns.
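Just to illustrate the multi-resolution view it builds on, using PyWavelets (the trainable parameterization itself is not shown):

```python
import numpy as np
import pywt

# Conceptual illustration only: decompose a (dummy) weight update into a coarse
# approximation plus finer detail bands -- the multi-resolution structure the
# description refers to.
delta_W = np.random.randn(256, 256).astype(np.float32)      # stand-in for a weight update

coarse, details = pywt.dwt2(delta_W, 'haar')                 # one-level 2D wavelet transform
cH, cV, cD = details                                         # horizontal / vertical / diagonal detail
print(coarse.shape, cH.shape)                                # each band is 128 x 128

reconstructed = pywt.idwt2((coarse, details), 'haar')        # inverse transform recovers delta_W
print(np.allclose(reconstructed, delta_W, atol=1e-5))
```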

10. BayesLoRA → https://huggingface.co/papers/2506.22809
Adds uncertainty estimation to LoRA adapters using MC-Dropout, helping models gauge confidence in unfamiliar situations. It detects variance outside the fine-tuned distribution, supporting more cautious and adaptive model behavior.
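An MC-Dropout-style sketch (the dropout placement and sampling loop are generic, not taken from the paper):

```python
import torch
import torch.nn as nn

class MCDropoutLoRA(nn.Module):
    """LoRA path with dropout that stays active at inference (MC-Dropout sketch)."""
    def __init__(self, base: nn.Linear, rank=8, p=0.1):
        super().__init__()
        self.base = base
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.drop = nn.Dropout(p)

    def forward(self, x):
        h = self.drop(x @ self.A.T)             # stochastic adapter path
        return self.base(x) + h @ self.B.T

@torch.no_grad()
def predict_with_uncertainty(layer, x, n_samples=20):
    layer.drop.train()                          # keep dropout on for Monte Carlo sampling
    samples = torch.stack([layer(x) for _ in range(n_samples)])
    return samples.mean(0), samples.var(0)      # mean prediction and per-output variance
```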

11. Dual LoRA Learning (DLoRAL) → https://huggingface.co/papers/2506.15591
Trains two LoRA branches: C-LoRA captures temporal coherence from degraded input, while D-LoRA improves visual detail. It's designed for video super-resolution, enhancing both spatial detail and temporal consistency.
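The two-branch structure might be sketched like this (the paper's alternating training scheme is omitted):

```python
import torch
import torch.nn as nn

class DualLoRALinear(nn.Module):
    """Frozen base layer with two LoRA branches: one trained for temporal consistency,
    one for spatial detail (sketch)."""
    def __init__(self, base: nn.Linear, rank=8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        make = lambda: nn.ParameterDict({
            'A': nn.Parameter(torch.randn(rank, base.in_features) * 0.01),
            'B': nn.Parameter(torch.zeros(base.out_features, rank)),
        })
        self.c_lora = make()     # consistency branch
        self.d_lora = make()     # detail branch

    def forward(self, x):
        delta = sum(x @ br['A'].T @ br['B'].T for br in (self.c_lora, self.d_lora))
        return self.base(x) + delta
```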

12. Safe Pruning LoRA (SPLoRA) → https://huggingface.co/papers/2506.18931
Improves the safety of LoRA-tuned LMs by selectively removing LoRA layers that reduce alignment, using a new E-DIEM metric to detect safety-related shifts without relying on data labels.

13. PLoP (Precise LoRA Placement) → https://huggingface.co/papers/2506.20629
A lightweight method that automatically selects optimal LoRA adapter placement during fine-tuning based on the model and task.
