rulixiang committed
Commit 8a778a1 · Parent: 10ac23c

Update README.md
Files changed (3)
  1. .gitattributes +1 -0
  2. Ling-lite-1.5-2506-benchmarks.png +3 -0
  3. README.md +96 -3
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Ling-lite-1.5-2506-benchmarks.png filter=lfs diff=lfs merge=lfs -text
Ling-lite-1.5-2506-benchmarks.png ADDED

Git LFS Details

  • SHA256: 5f7e666684eebf0e91496f47908226277f327d224f55b14770f383a7ba4dd4e1
  • Pointer size: 131 bytes
  • Size of remote file: 101 kB
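
The entry above means the PNG itself lives on the LFS server, while the repository stores only a small pointer file. A minimal sketch of what that pointer file looks like — the `oid` is the SHA256 from the listing, but the exact byte size (`101378`) is a hypothetical stand-in, since the listing only reports it rounded to 101 kB:

```python
# Sketch of the Git LFS pointer file that stands in for the PNG in the repo.
# The oid is the SHA256 from the listing above; the size (101378) is a
# hypothetical exact value for the ~101 kB remote file.
oid = "5f7e666684eebf0e91496f47908226277f327d224f55b14770f383a7ba4dd4e1"
size_bytes = 101378  # hypothetical, ~101 kB

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    f"oid sha256:{oid}\n"
    f"size {size_bytes}\n"
)

# A 64-hex-digit oid plus a 6-digit size yields exactly the 131-byte
# pointer size reported in the listing.
print(len(pointer.encode("utf-8")))  # 131
```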
README.md CHANGED
@@ -1,3 +1,96 @@
- ---
- license: mit
- ---
---
license: mit
pipeline_tag: text-generation
library_name: transformers
---

# Ling-lite-1.5-2506

[Paper](https://hf.co/papers/2503.05139)

<p align="center"><img src="https://huggingface.co/inclusionAI/Ling-lite/resolve/main/ant-bailing.png" width="100"/></p>

<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a></p>

## Model Overview
We are excited to introduce **Ling-lite-1.5-2506**, the updated version of our highly capable Ling-lite-1.5 model.

Ling-lite-1.5-2506 has 16.8 billion total parameters with 2.75 billion activated parameters, and builds on its predecessor with significant advances across the board, including the following key improvements:

* **Reasoning and Knowledge:** Significant gains in general intelligence, logical reasoning, and complex problem-solving. For instance, on GPQA Diamond, Ling-lite-1.5-2506 achieves 53.79%, a substantial lead over Ling-lite-1.5's 36.55%.
* **Coding Capabilities:** A marked improvement in coding and debugging. For instance, on LiveCodeBench, a demanding programming benchmark, Ling-lite-1.5-2506 scores 42.04%, up from Ling-lite-1.5's 22.7%.

<p align="center">
<img width="80%" src="Ling-lite-1.5-2506-benchmarks.png">
</p>

## Model Downloads

The table below lists the available models with their parameter counts and context lengths so you can choose the right one for your use case. If you are located in mainland China, we also provide the model on ModelScope.cn to speed up the download process.

| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| Ling-lite-1.5-2506 | 16.8B | 2.75B | 128K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-lite-1.5-2506) |
| Ling-lite-1.5 | 16.8B | 2.75B | 128K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-lite-1.5) |

## Quickstart
### 🤗 Hugging Face Transformers

Here is a code snippet showing how to use the chat model with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ling-lite-1.5-2506"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
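
The list comprehension near the end strips the prompt tokens from each returned sequence, since `generate` returns the prompt followed by the continuation. A minimal self-contained illustration of that trimming pattern with toy token-id lists (the ids are made up for illustration):

```python
# generate() returns each prompt followed by its continuation; this
# pattern keeps only the newly generated tokens. Toy ids, not real ones.
input_ids_batch = [[101, 7592, 102]]                          # prompt tokens
generated_batch = [[101, 7592, 102, 2023, 2003, 1037, 3231]]  # prompt + new tokens

trimmed = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(input_ids_batch, generated_batch)
]

print(trimmed[0])  # [2023, 2003, 1037, 3231]
```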

## Deployment

Please refer to the [GitHub repository](https://github.com/inclusionAI/Ling/blob/master/README.md) for deployment instructions.

## License
This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ling-lite/blob/main/LICENCE).

## Citation

If you find our work helpful, feel free to cite it:

```
@article{ling,
  title   = {Every FLOP Counts: Scaling a 300B Mixture-of-Experts LING LLM without Premium GPUs},
  author  = {Ling Team},
  journal = {arXiv preprint arXiv:2503.05139},
  year    = {2025}
}
```