Papers
arxiv:2505.18135

Gaming Tool Preferences in Agentic LLMs

Published on May 23
Authors:
,
,
,
,
,

Abstract

Edited tool descriptions significantly increase usage by LLMs in competitive scenarios, highlighting the need for a more robust method for tool selection.

AI-generated summary

Large language models (LLMs) can now access a wide range of external tools, thanks to the Model Context Protocol (MCP). This greatly expands their abilities as various agents. However, LLMs rely entirely on the text descriptions of tools to decide which ones to use--a process that is surprisingly fragile. In this work, we expose a vulnerability in prevalent tool/function-calling protocols by investigating a series of edits to tool descriptions, some of which can drastically increase a tool's usage from LLMs when competing with alternatives. Through controlled experiments, we show that tools with properly edited descriptions receive over 10 times more usage from GPT-4.1 and Qwen2.5-7B than tools with original descriptions. We further evaluate how various edits to tool descriptions perform when competing directly with one another and how these trends generalize or differ across a broader set of 10 different models. These phenomenons, while giving developers a powerful way to promote their tools, underscore the need for a more reliable foundation for agentic LLMs to select and utilize tools and resources.

Community

Paper author

LLMs choose tools based on how they’re described—not how they work. 🤔

We show that simple edits to tool descriptions can shift usage to 10× between functionally identical tools.

This reveals a key flaw in today’s tool/function-calling protocols.

combined.png

We also systematically compare how different edits to tool descriptions perform when competing with each others⚔️—and how these effects generalize across 10 different LLMs🤖.

Turns out: most trends do generalize🌐 (though some edits work better on certain models).

figure_overall.png

These findings give developers ✨new levers✨ to promote their tools—but also highlight the urgent need for stronger, more grounded protocols for agentic LLMs to incorporate tools as well as other resources.

Thoughts? 🤔

Sign up or log in to comment

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2505.18135 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2505.18135 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2505.18135 in a Space README.md to link it from this page.

Collections including this paper 0

No Collection including this paper

Add this paper to a collection to link it from this page.