UltraRonin committed (verified)
Commit 7a6fe74 · 1 Parent(s): 8c6275e

Update README.md

Files changed (1)
  1. README.md +9 -2
README.md CHANGED
@@ -32,6 +32,7 @@ configs:
 - [🌐 Overview](#overview)
 - [📚 Preparation](#preparation)
 - [⏳ Data Selection](#data_selection)
+- [📈 Training](#training)
 - [📝 Citation](#citation)
 
 
@@ -40,7 +41,6 @@ configs:
 ## 🌐 Overview
 
 Long-context modeling has drawn increasing attention in the area of Large Language Models (LLMs). Continual training with long-context data has become the de facto method to equip LLMs with the ability to process long inputs. However, measuring the quality of long-context training data remains an open challenge. To address this issue, we propose a **L**ong-context data selection framework with **A**ttention-based **D**ependency **M**easurement (**LADM**), which can efficiently identify high-quality long-context data from a large-scale, multi-domain pre-training corpus. LADM leverages the retrieval capabilities of the attention mechanism to capture contextual dependencies, ensuring a comprehensive quality measurement of long-context data. Experimental results show that our LADM framework significantly boosts the performance of LLMs on multiple long-context tasks with only 1B tokens for continual training.
-![](./assets/framework.png)
 
 
 <a name="preparation"></a>
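
The exact dependency measurement is defined in the paper; purely to illustrate the general idea in the overview above (score each candidate document by how much attention its tokens place on distant context, then keep the top-scoring samples), a rough sketch might look like the following. The proxy model, local-window threshold, and scoring rule here are placeholder assumptions, not LADM's actual formulation.

```python
# Illustrative sketch only: rank documents by how much attention their tokens
# place on distant context. This is a simplification, NOT the dependency
# measurement defined in the LADM paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"          # hypothetical proxy model; the paper's setup may differ
LOCAL_WINDOW = 64       # attention within this distance counts as "local"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, attn_implementation="eager")
model.eval()

@torch.no_grad()
def long_range_attention_score(text: str) -> float:
    """Average attention mass each token places on context more than
    LOCAL_WINDOW positions away, averaged over heads of the last layer."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    out = model(**inputs, output_attentions=True)
    attn = out.attentions[-1][0]            # (heads, seq_len, seq_len)
    seq_len = attn.shape[-1]
    q = torch.arange(seq_len).unsqueeze(1)  # query positions
    k = torch.arange(seq_len).unsqueeze(0)  # key positions
    distant = (q - k) > LOCAL_WINDOW        # causal keys beyond the local window
    return attn[:, distant].sum().item() / (attn.shape[0] * seq_len)

docs = ["first candidate document ...", "second candidate document ..."]  # placeholder corpus
ranked = sorted(docs, key=long_range_attention_score, reverse=True)
selected = ranked[: max(1, len(ranked) // 10)]  # e.g. keep the top 10%
```

In practice the score would be computed with a long-context model over full-length samples; the short-context proxy above only keeps the sketch runnable.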
@@ -77,6 +77,13 @@ For full usage:
 bash launch.sh
 ```
 
+<a name="training"></a>
+
+## 📈 Training
+
+Our training mainly follows the [Huggingface Trainer](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) code base. Please refer to that repo for more details.
+
+
 <a name="citation"></a>
 
 ## 📝 Citation
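
Since the new Training section added above only points to the Huggingface Trainer language-modeling examples, here is a minimal continual-training sketch built on the Trainer API for orientation; the base model, data path, sequence length, and hyperparameters below are placeholders rather than the paper's settings.

```python
# Minimal continual-pretraining sketch following the Huggingface Trainer workflow
# referenced above. Model name, file path, sequence length, and hyperparameters
# are placeholders, not the settings used in the paper.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

MODEL = "gpt2"       # placeholder base model
BLOCK_SIZE = 1024    # placeholder context length

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Selected long-context documents: one JSON object with a "text" field per line.
dataset = load_dataset("json", data_files={"train": "selected_data.jsonl"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=BLOCK_SIZE)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="ckpts",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=1,
    logging_steps=10,
    save_steps=500,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```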
@@ -89,4 +96,4 @@ If you find this repo useful for your research, please consider citing the paper
   journal={arXiv preprint arXiv:2503.02502},
   year={2025}
 }
-```
+```