Update README.md

README.md CHANGED

@@ -7,22 +7,10 @@ base_model:
 datasets:
 - ChatTSRepo/ChatTS-Training-Dataset
 language:
-- zho
-- eng
-- fra
-- spa
-- por
-- deu
-- ita
-- rus
-- jpn
-- kor
-- vie
-- tha
-- ara
+- eng
 ---
 
-# [VLDB' 25] ChatTS-14B Model
+# [VLDB' 25] ChatTS-14B-0801 Model
 
 <div style="display:flex;justify-content: center">
 <a href="https://github.com/NetmanAIOps/ChatTS"><img alt="github" src="https://img.shields.io/badge/Code-GitHub-blue"></a>

@@ -35,7 +23,7 @@ language:
 This repo provides code, datasets and model for `ChatTS`: [ChatTS: Aligning Time Series with LLMs via Synthetic Data for Enhanced Understanding and Reasoning](https://arxiv.org/pdf/2412.03104).
 
 ## Web Demo
-The Web Demo of ChatTS-14B is available at HuggingFace Spaces: [](https://huggingface.co/spaces/xiezhe22/ChatTS)
+The Web Demo of ChatTS-14B-0801 is available at HuggingFace Spaces: [](https://huggingface.co/spaces/xiezhe22/ChatTS)
 
 ## Key Features
 ChatTS is a Multimodal LLM built natively for time series as a core modality:

@@ -85,6 +73,9 @@ outputs = model.generate(**inputs, max_new_tokens=300)
 print(tokenizer.decode(outputs[0][len(inputs['input_ids'][0]):], skip_special_tokens=True))
 ```
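The decode line shown in the hunk above slices `outputs[0]` from index `len(inputs['input_ids'][0])`. A toy sketch of why (the token ids here are made up for illustration):

```python
# The README's decode line is:
#   tokenizer.decode(outputs[0][len(inputs['input_ids'][0]):], ...)
# generate() returns the prompt ids followed by the newly sampled ids,
# so slicing off the first len(prompt) entries keeps only the reply.
prompt_ids = [101, 7592, 2088]              # hypothetical prompt token ids
generated = prompt_ids + [2023, 2003, 102]  # stand-in for outputs[0]
reply_ids = generated[len(prompt_ids):]     # mirrors the README's slice
print(reply_ids)  # → [2023, 2003, 102]
```

Without the slice, `tokenizer.decode` would echo the whole prompt back before the model's answer.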
 
+## Reproduction of Paper Results
+Please download the [legacy ChatTS-14B model](https://huggingface.co/bytedance-research/ChatTS-14B/tree/fea24f221dd13ad310b68cc5470f575647b838c6) to reproduce the results in the paper.
+
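Checking out that pinned legacy revision can also be scripted. This is a sketch of an assumed workflow, not part of the commit: the repo id and commit hash come from the link above, `snapshot_download` is `huggingface_hub`'s call for fetching a fixed revision, and the `local_dir` default is a hypothetical path.

```python
# Sketch: fetch the legacy ChatTS-14B snapshot at the pinned commit.
# LEGACY_REVISION is the commit hash from the README link above.
# Calling fetch_legacy_chatts() downloads the full model weights,
# so it is defined but not invoked here.
LEGACY_REVISION = "fea24f221dd13ad310b68cc5470f575647b838c6"

def fetch_legacy_chatts(local_dir: str = "./ChatTS-14B-legacy") -> str:
    # Imported lazily so the sketch has no hard dependency until used.
    from huggingface_hub import snapshot_download
    return snapshot_download(
        repo_id="bytedance-research/ChatTS-14B",
        revision=LEGACY_REVISION,
        local_dir=local_dir,
    )
```

Pinning the `revision` guards against the repo's `main` branch moving to a newer checkpoint, which is exactly the situation this commit documents.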
 ## Reference
 - Qwen2.5-14B-Instruct (https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
 - transformers (https://github.com/huggingface/transformers.git)