Lucy: Edgerunning Agentic Web Search on Mobile with a 1.7B Model

Authors: Alan Dao, Bach Vu Dinh, Alex Nguyen, Norapat Buppodom

Overview

Lucy is a compact but capable 1.7B model focused on agentic web search and lightweight browsing. Built on Qwen3-1.7B, Lucy delivers the deep-research capabilities of much larger models while remaining efficient enough to run on mobile devices, even in CPU-only configurations.

We achieved this through machine-generated task vectors that optimize the model's thinking process, smooth reward functions spanning multiple categories, and pure reinforcement learning with no supervised fine-tuning.

What Lucy Excels At

  • 🔍 Strong Agentic Search: Powered by MCP-enabled tools (e.g., Serper with Google Search)
  • 🌐 Basic Browsing Capabilities: Through Crawl4AI (MCP server to be released), Serper, and similar tools
  • 📱 Mobile-Optimized: Lightweight enough to run on CPU or mobile devices with decent speed
  • 🎯 Focused Reasoning: Machine-generated task vectors optimize thinking processes for search tasks

Evaluation

Following the same MCP benchmark methodology used for Jan-Nano and Jan-Nano-128k, Lucy delivers impressive results for a 1.7B model, achieving higher accuracy than DeepSeek-V3 on SimpleQA.

(Figure: SimpleQA benchmark results.)

🖥️ How to Run Locally

Lucy can be deployed in various ways, including vLLM, llama.cpp, and local applications such as Jan and LM Studio, as well as other compatible inference engines. The model integrates with search APIs and web-browsing tools through the Model Context Protocol (MCP).

Deployment

Deploy using vLLM:

vllm serve Menlo/Lucy-128k \
    --host 0.0.0.0 \
    --port 1234 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes 
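
Once the server is up it exposes an OpenAI-compatible API, so agentic search can be exercised with a plain HTTP request. A minimal sketch; the web_search tool name and schema here are illustrative placeholders, not something shipped with Lucy:

curl http://localhost:1234/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "Menlo/Lucy-128k",
        "messages": [{"role": "user", "content": "Find the most recent llama.cpp release."}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "web_search",
                "description": "Search the web and return result snippets",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"]
                }
            }
        }]
    }'

With --enable-auto-tool-choice and the hermes parser, any tool invocation the model emits is returned in the standard tool_calls field of the response.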

Or use llama-server from llama.cpp. A minimal sketch, assuming a locally downloaded GGUF file (the filename and context length are illustrative; adjust to your setup):

llama-server -m Lucy-128k-Q4_K_M.gguf \
    --host 0.0.0.0 \
    --port 1234 \
    --jinja \
    -c 16384

The --jinja flag enables the model's chat template so tool calls are formatted correctly.
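
The GGUF files themselves live in the Menlo/Lucy-gguf repository. A sketch of fetching one quant with huggingface-cli (the include pattern is an assumption; check the repository's file list for exact filenames):

huggingface-cli download Menlo/Lucy-gguf --include "*Q4_K_M*" --local-dir .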

Recommended Sampling Parameters

  • Temperature: 0.7
  • Top-p: 0.9
  • Top-k: 20
  • Min-p: 0.0
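
These can also be set per request against the OpenAI-compatible endpoint started above. A sketch, noting that top_k and min_p are vLLM extensions beyond the strict OpenAI schema:

curl http://localhost:1234/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "Menlo/Lucy-128k",
        "messages": [{"role": "user", "content": "What changed in the latest Qwen3 release?"}],
        "temperature": 0.7,
        "top_p": 0.9,
        "top_k": 20,
        "min_p": 0.0
    }'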

🀝 Community & Support

📄 Citation

Paper (coming soon): Lucy: Edgerunning Agentic Web Search on Mobile with Machine-Generated Task Vectors.

Format: GGUF
Model size: 1.72B params
Architecture: qwen3

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

Model tree for Menlo/Lucy-gguf

Qwen/Qwen3-1.7B (base) → finetuned as Menlo/Lucy → quantized as this model (one of 15 quantizations)
