Papers
arxiv:2507.13334

A Survey of Context Engineering for Large Language Models

Published on Jul 17
· Submitted by Chevalier on Jul 18
#1 Paper of the day

Abstract

Context Engineering systematically optimizes information payloads for Large Language Models, addressing gaps in generating sophisticated, long-form outputs.

AI-generated summary

The performance of Large Language Models (LLMs) is fundamentally determined by the contextual information provided during inference. This survey introduces Context Engineering, a formal discipline that transcends simple prompt design to encompass the systematic optimization of information payloads for LLMs. We present a comprehensive taxonomy decomposing Context Engineering into its foundational components and the sophisticated implementations that integrate them into intelligent systems. We first examine the foundational components: context retrieval and generation, context processing, and context management. We then explore how these components are architecturally integrated to create sophisticated system implementations: retrieval-augmented generation (RAG), memory systems and tool-integrated reasoning, and multi-agent systems. Through this systematic analysis of over 1300 research papers, our survey not only establishes a technical roadmap for the field but also reveals a critical research gap: a fundamental asymmetry exists between model capabilities. While current models, augmented by advanced context engineering, demonstrate remarkable proficiency in understanding complex contexts, they exhibit pronounced limitations in generating equally sophisticated, long-form outputs. Addressing this gap is a defining priority for future research. Ultimately, this survey provides a unified framework for both researchers and engineers advancing context-aware AI.
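To make the retrieval-augmented generation (RAG) component concrete, here is a minimal sketch of the retrieve-then-assemble-context loop the abstract describes. All function names (`retrieve`, `build_context`) and the toy corpus are illustrative assumptions, not code from the paper; a real system would use learned embeddings and an LLM call rather than bag-of-words cosine similarity and a printed prompt.

```python
import math
import re
from collections import Counter

def bow(text):
    # Illustrative stand-in for an embedding model: bag-of-words counts.
    return Counter(re.findall(r"[a-z0-9-]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = bow(query)
    ranked = sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

def build_context(query, docs, k=2):
    # Assemble the retrieved passages into the information payload
    # ("context") that would be sent to an LLM.
    passages = retrieve(query, docs, k)
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Context engineering optimizes the information given to an LLM.",
    "Transformers use self-attention over the input sequence.",
    "Retrieval-augmented generation grounds answers in retrieved passages.",
]
prompt = build_context("What is retrieval-augmented generation?", docs, k=1)
print(prompt)
```

The sketch isolates the design choice the survey's taxonomy highlights: retrieval (context generation) and prompt assembly (context management) are separate, composable stages, so either can be swapped out independently.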

Community

Paper author Paper submitter

Ongoing work; 165 pages, 1401 citations.

👌


wow, very cool

thanks for the extensive taxonomy

Paper author Paper submitter

I wrote the wrong email address in my paper. The correct email address should be: [email protected]

What an amazing job! Thank you for this piece of what will be historical canon.

Wow, 166 pages, thanks for the tl;dr summary :)
