Abstract
Edited tool descriptions significantly increase usage by LLMs in competitive scenarios, highlighting the need for a more robust method for tool selection.
Large language models (LLMs) can now access a wide range of external tools, thanks to the Model Context Protocol (MCP). This greatly expands their abilities as various agents. However, LLMs rely entirely on the text descriptions of tools to decide which ones to use--a process that is surprisingly fragile. In this work, we expose a vulnerability in prevalent tool/function-calling protocols by investigating a series of edits to tool descriptions, some of which can drastically increase a tool's usage from LLMs when competing with alternatives. Through controlled experiments, we show that tools with properly edited descriptions receive over 10 times more usage from GPT-4.1 and Qwen2.5-7B than tools with original descriptions. We further evaluate how various edits to tool descriptions perform when competing directly with one another and how these trends generalize or differ across a broader set of 10 different models. These phenomenons, while giving developers a powerful way to promote their tools, underscore the need for a more reliable foundation for agentic LLMs to select and utilize tools and resources.
Community
LLMs choose tools based on how they’re described—not how they work. 🤔
We show that simple edits to tool descriptions can shift usage to 10× between functionally identical tools.
This reveals a key flaw in today’s tool/function-calling protocols.
We also systematically compare how different edits to tool descriptions perform when competing with each others⚔️—and how these effects generalize across 10 different LLMs🤖.
Turns out: most trends do generalize🌐 (though some edits work better on certain models).
These findings give developers ✨new levers✨ to promote their tools—but also highlight the urgent need for stronger, more grounded protocols for agentic LLMs to incorporate tools as well as other resources.
Thoughts? 🤔
Models citing this paper 0
No model linking this paper
Datasets citing this paper 0
No dataset linking this paper
Spaces citing this paper 0
No Space linking this paper
Collections including this paper 0
No Collection including this paper