arxiv:2511.10492

Don't Waste It: Guiding Generative Recommenders with Structured Human Priors via Multi-head Decoding

Published on Nov 13
· Submitted by Yunkai Zhang on Nov 17
Abstract

A framework integrates human priors into end-to-end generative recommenders, enhancing accuracy and beyond-accuracy objectives by leveraging lightweight adapter heads and hierarchical composition strategies.

AI-generated summary

Optimizing recommender systems for objectives beyond accuracy, such as diversity, novelty, and personalization, is crucial for long-term user satisfaction. To this end, industrial practitioners have accumulated vast amounts of structured domain knowledge, which we term human priors (e.g., item taxonomies, temporal patterns). This knowledge is typically applied through post-hoc adjustments during ranking or post-ranking. However, this approach remains decoupled from the core model learning, which is particularly undesirable as the industry shifts to end-to-end generative recommendation foundation models. On the other hand, many methods targeting these beyond-accuracy objectives often require architecture-specific modifications and discard these valuable human priors by learning user intent in a fully unsupervised manner. Instead of discarding the human priors accumulated over years of practice, we introduce a backbone-agnostic framework that seamlessly integrates these human priors directly into the end-to-end training of generative recommenders. With lightweight, prior-conditioned adapter heads inspired by efficient LLM decoding strategies, our approach guides the model to disentangle user intent along human-understandable axes (e.g., interaction types, long- vs. short-term interests). We also introduce a hierarchical composition strategy for modeling complex interactions across different prior types. Extensive experiments on three large-scale datasets demonstrate that our method significantly enhances both accuracy and beyond-accuracy objectives. We also show that human priors allow the backbone model to more effectively leverage longer context lengths and larger model sizes.
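To make the core idea concrete, here is a minimal numpy sketch of prior-conditioned multi-head decoding. Everything below is illustrative, not the paper's implementation: the prior axes (`interaction_type`, `interest_horizon`), the single-linear-layer adapter heads, and the weighted-sum composition over prior values are all assumptions standing in for the paper's adapter heads and hierarchical composition strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

D, V = 16, 100  # backbone hidden dim and item vocabulary size (illustrative)

# Hypothetical human-prior axes; the paper's actual taxonomy comes from
# structured domain knowledge (item taxonomies, temporal patterns, etc.).
PRIOR_AXES = {
    "interaction_type": ["click", "purchase"],
    "interest_horizon": ["short_term", "long_term"],
}

# One lightweight adapter head per prior value. A single linear projection
# from the backbone hidden state to item logits is used here as a stand-in
# for the paper's adapter-head design.
heads = {
    axis: {v: rng.normal(scale=0.02, size=(D, V)) for v in values}
    for axis, values in PRIOR_AXES.items()
}

def decode(h, prior_weights):
    """Compose per-head logits into one score vector.

    `prior_weights` assigns a weight to each prior value on each axis;
    this additive weighting is a simplified placeholder for the paper's
    hierarchical composition across prior types.
    """
    logits = np.zeros(V)
    for axis, dist in prior_weights.items():
        for value, w in dist.items():
            logits += w * (h @ heads[axis][value])
    return logits

h = rng.normal(size=D)  # backbone's final hidden state for a user sequence
weights = {
    "interaction_type": {"click": 0.3, "purchase": 0.7},
    "interest_horizon": {"short_term": 0.5, "long_term": 0.5},
}
scores = decode(h, weights)
top5 = np.argsort(-scores)[:5]
print(top5)
```

Because each head is tied to a human-understandable prior value, shifting the weights (e.g. emphasizing `long_term` over `short_term`) steers the recommendations along that axis, which is one way to read the paper's claim that the resulting model is controllable.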

Community

Paper author · Paper submitter

While scaling laws show immense promise, we believe that, before the arrival of AGI, it is unwise to ignore the structured human priors accumulated over years of practice. For generative recommenders, the timely challenge is integrating this knowledge into foundation models without resorting to brittle, post-hoc fixes. Inspired by efficient LLM decoding, this paper introduces a framework that uses lightweight, prior-conditioned adapter heads to inject human expertise directly into end-to-end training. This guides the model to disentangle user intent along interpretable axes, yielding recommendations that are more accurate, diverse, and inherently controllable.

