arxiv:2505.03275

RAG-MCP: Mitigating Prompt Bloat in LLM Tool Selection via Retrieval-Augmented Generation

Published on May 6

Abstract

RAG-MCP uses semantic retrieval to efficiently integrate and select external tools for LLMs, reducing prompt size and improving tool accuracy.

AI-generated summary

Large language models (LLMs) struggle to effectively utilize a growing number of external tools, such as those defined by the Model Context Protocol (MCP), due to prompt bloat and selection complexity. We introduce RAG-MCP, a Retrieval-Augmented Generation framework that overcomes this challenge by offloading tool discovery. RAG-MCP uses semantic retrieval to identify the most relevant MCP(s) for a given query from an external index before engaging the LLM. Only the selected tool descriptions are passed to the model, drastically reducing prompt size and simplifying decision-making. Experiments, including an MCP stress test, demonstrate that RAG-MCP significantly cuts prompt tokens (e.g., by over 50%) and more than triples tool selection accuracy (43.13% vs. 13.62% baseline) on benchmark tasks. RAG-MCP enables scalable and accurate tool integration for LLMs.

Community

Hi, I read your paper 'RAG-MCP: Scalable Tool Use for Large Language Models via Retrieval-Augmented Generation' (arXiv:2505.03275). I was wondering if the code for this work has been released or if you plan to open source it. Thank you!

·

I'm also interested in the code used for this paper.

