nielsr HF Staff committed
Commit 6543860 · verified · 1 Parent(s): 82d3f1f

Improve model card: add metadata, links, and detailed description


This PR significantly improves the model card for `Online-Searcher-Qwen-7B` by:

- Adding `pipeline_tag: text-generation` to the metadata, allowing the model to be discovered more easily (e.g., at https://huggingface.co/models?pipeline_tag=text-generation).
- Adding `license: mit` to the metadata, clearly stating the model's license as found in the GitHub repository.
- Adding a link to the original paper [SimpleDeepSearcher: Deep Information Seeking via Web-Powered Reasoning Trajectory Synthesis](https://huggingface.co/papers/2505.16834).
- Adding a link to the associated [GitHub repository](https://github.com/RUCAIBox/SimpleDeepSearcher).
- Populating the "Model description", "Intended uses & limitations", and "Training and evaluation data" sections with information extracted from the paper abstract and GitHub README.
- Adding a dedicated "License" section within the model card content.
- Removing the auto-generated comment.

These changes provide a more comprehensive and informative model card for users.
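For readers applying the same metadata fix to their own model cards, the two added keys go into the YAML front matter at the top of `README.md`. A minimal sketch of the changed section is shown below (only the keys touched by this PR plus their immediate context; the existing `model-index` block is unchanged):

```yaml
---
library_name: transformers
pipeline_tag: text-generation  # makes the model discoverable under the text-generation task filter
license: mit                   # matches the LICENSE file in the GitHub repository
tags:
- generated_from_trainer
---
```

The Hub reads this front matter to populate the model page's task filter and license badge.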

Files changed (1)
  1. README.md +16 -7
README.md CHANGED

@@ -1,5 +1,7 @@
 ---
 library_name: transformers
+pipeline_tag: text-generation
+license: mit
 tags:
 - generated_from_trainer
 model-index:
@@ -7,24 +9,27 @@ model-index:
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # Online-Searcher-Qwen-7B
 
-This model was trained from scratch on an unknown dataset.
+This model is a Qwen2-7B based deep search agent, presented in the paper [SimpleDeepSearcher: Deep Information Seeking via Web-Powered Reasoning Trajectory Synthesis](https://huggingface.co/papers/2505.16834).
+
+The code for the paper and model training is available on the [Github repository](https://github.com/RUCAIBox/SimpleDeepSearcher).
 
 ## Model description
 
-More information needed
+SimpleDeepSearcher is a lightweight yet effective framework for enhancing large language models (LLMs) in complex deep search scenarios requiring multi-step reasoning and iterative information retrieval. Unlike traditional Retrieval-Augmented Generation (RAG) or Reinforcement Learning (RL)-based methods, SimpleDeepSearcher strategically synthesizes high-quality reasoning trajectories in real-world web environments. This approach enables efficient Supervised Fine-Tuning (SFT) using only a small amount of curated data, resulting in strong performance with significantly reduced computational cost and development complexity.
+
+Key contributions of the framework include:
+- A real web-based data synthesis framework that simulates realistic user search behaviors, generating multi-turn reasoning and search trajectories.
+- A multi-criteria data curation strategy that jointly optimizes both input question selection and output response filtering through orthogonal filtering dimensions.
 
 ## Intended uses & limitations
 
-More information needed
+This model is intended for deep information seeking tasks and complex search scenarios that demand multi-step reasoning and iterative information retrieval from the web. It aims to provide practical insights for efficient deep search systems by leveraging its ability to synthesize high-quality reasoning trajectories.
 
 ## Training and evaluation data
 
-More information needed
+The model was fine-tuned using Supervised Fine-Tuning (SFT) on a small, high-quality dataset consisting of only 871 curated samples. This data was strategically synthesized by simulating realistic user interactions in live web search environments, coupled with a multi-criteria curation strategy that optimizes diversity and quality.
 
 ## Training procedure
 
@@ -45,3 +50,7 @@ The following hyperparameters were used during training:
 - Pytorch 2.5.1+cu124
 - Datasets 2.19.0
 - Tokenizers 0.20.3
+
+## License
+
+This project is released under the [MIT License](https://github.com/RUCAIBox/SimpleDeepSearcher/blob/main/LICENSE).