Don't Waste It: Guiding Generative Recommenders with Structured Human Priors via Multi-head Decoding
Abstract
A framework integrates human priors into end-to-end generative recommenders, enhancing accuracy and beyond-accuracy objectives by leveraging lightweight adapter heads and hierarchical composition strategies.
Optimizing recommender systems for objectives beyond accuracy, such as diversity, novelty, and personalization, is crucial for long-term user satisfaction. To this end, industrial practitioners have accumulated vast amounts of structured domain knowledge, which we term human priors (e.g., item taxonomies, temporal patterns). This knowledge is typically applied through post-hoc adjustments during ranking or post-ranking. However, this approach remains decoupled from the core model learning, which is particularly undesirable as the industry shifts to end-to-end generative recommendation foundation models. On the other hand, many methods targeting these beyond-accuracy objectives often require architecture-specific modifications and discard these valuable human priors by learning user intent in a fully unsupervised manner. Instead of discarding the human priors accumulated over years of practice, we introduce a backbone-agnostic framework that seamlessly integrates these human priors directly into the end-to-end training of generative recommenders. With lightweight, prior-conditioned adapter heads inspired by efficient LLM decoding strategies, our approach guides the model to disentangle user intent along human-understandable axes (e.g., interaction types, long- vs. short-term interests). We also introduce a hierarchical composition strategy for modeling complex interactions across different prior types. Extensive experiments on three large-scale datasets demonstrate that our method significantly enhances both accuracy and beyond-accuracy objectives. We also show that human priors allow the backbone model to more effectively leverage longer context lengths and larger model sizes.
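To make the adapter-head idea concrete, below is a minimal sketch (not the authors' released code) of prior-conditioned adapter heads attached to a generative backbone's hidden state. The class name `PriorConditionedHeads`, the bottleneck adapter shape, and the softmax gate that mixes heads are illustrative assumptions; the paper's hierarchical composition across prior types is not shown here.

```python
# Hedged sketch: lightweight, prior-conditioned adapter heads on top of a
# sequential recommendation backbone. All names and the gating scheme are
# assumptions for illustration, not the paper's exact implementation.
import torch
import torch.nn as nn

class PriorConditionedHeads(nn.Module):
    """One lightweight adapter head per human-prior bucket (e.g., interaction
    type, long- vs. short-term interest). Each head maps the backbone's hidden
    state to item logits; a learned gate mixes the heads into a single
    next-item distribution."""
    def __init__(self, hidden_dim: int, num_items: int, num_priors: int, adapter_dim: int = 64):
        super().__init__()
        # Bottleneck adapters: hidden -> adapter_dim -> hidden (one per prior)
        self.adapters = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden_dim, adapter_dim),
                nn.GELU(),
                nn.Linear(adapter_dim, hidden_dim),
            )
            for _ in range(num_priors)
        ])
        # Shared item projection (would typically be tied to item embeddings)
        self.item_proj = nn.Linear(hidden_dim, num_items, bias=False)
        # Gate over priors, conditioned on the current hidden state
        self.gate = nn.Linear(hidden_dim, num_priors)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, hidden_dim) backbone representation of the user sequence
        per_head = torch.stack(
            [self.item_proj(h + adapter(h)) for adapter in self.adapters], dim=1
        )  # (batch, num_priors, num_items)
        weights = torch.softmax(self.gate(h), dim=-1)  # (batch, num_priors)
        return (weights.unsqueeze(-1) * per_head).sum(dim=1)  # (batch, num_items)


# Usage: attach to any sequential backbone's final hidden state.
heads = PriorConditionedHeads(hidden_dim=256, num_items=50_000, num_priors=4)
h = torch.randn(8, 256)   # stand-in for backbone output
logits = heads(h)         # (8, 50000) next-item logits
```

Because each head is a small bottleneck on top of a shared backbone and item projection, the added parameter cost is modest, which is consistent with the paper's framing of the heads as lightweight and backbone-agnostic.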
Community
While scaling laws show immense promise, we believe that, before the arrival of AGI, it is unwise to ignore the structured human priors accumulated over years of practice. For generative recommenders, the timely challenge is how to integrate this knowledge into foundation models without resorting to brittle, post-hoc fixes. Inspired by efficient LLM decoding, this paper introduces a framework that uses lightweight, prior-conditioned adapter heads to inject human expertise directly into end-to-end training. This guides the model to disentangle user intent along interpretable axes, resulting in recommendations that are more accurate, diverse, and inherently controllable.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- GPR: Towards a Generative Pre-trained One-Model Paradigm for Large-Scale Advertising Recommendation (2025)
- SynerGen: Contextualized Generative Recommender for Unified Search and Recommendation (2025)
- GRank: Towards Target-Aware and Streamlined Industrial Retrieval with a Generate-Rank Framework (2025)
- SeqUDA-Rec: Sequential User Behavior Enhanced Recommendation via Global Unsupervised Data Augmentation for Personalized Content Marketing (2025)
- HyMiRec: A Hybrid Multi-interest Learning Framework for LLM-based Sequential Recommendation (2025)
- A Survey on Generative Recommendation: Data, Model, and Tasks (2025)
- Next Interest Flow: A Generative Pre-training Paradigm for Recommender Systems by Modeling All-domain Movelines (2025)