Upload 3 files

- .gitattributes +1 -0
- README.md +90 -3
- assets/.DS_Store +0 -0
- assets/overview_v20250413.png +3 -0
.gitattributes
CHANGED

@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+assets/overview_v20250413.png filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED

@@ -1,3 +1,90 @@
----
-license: apache-2.0
----
---
license: apache-2.0
language:
- en
metrics:
- accuracy
tags:
- code
arxiv: 2407.10424
---

# CodeV: Empowering LLMs for HDL Generation through Multi-Level Summarization

<img src="assets/overview_v20250413.png" style="zoom:50%;" />

CodeV is a series of open-source, instruction-tuned Large Language Models (LLMs) designed to generate high-quality HDL code, addressing the challenges that existing models face in this domain. **(This repo is under development.)**

## Models and Datasets

| Size | Base Model | CodeV |
| ---- | ---------- | ----- |
| 6.7B | [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | [yang-z/CodeV-DS-6.7B](https://huggingface.co/yang-z/CodeV-DS-6.7B) |
| 7B | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [yang-z/CodeV-CL-7B](https://huggingface.co/yang-z/CodeV-CL-7B) |
| 7B | [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat) | [yang-z/CodeV-QW-7B](https://huggingface.co/yang-z/CodeV-QW-7B) |
| 7B | [Qwen/Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B) | [yang-z/CodeV-QC-7B](https://huggingface.co/yang-z/CodeV-QC-7B) |
| 6.7B | [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) | [yang-z/CodeV-All-DSC](https://huggingface.co/yang-z/CodeV-All-DSC) |
| 7B | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [yang-z/CodeV-All-CL](https://huggingface.co/yang-z/CodeV-All-CL) |
| 7B | [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat) | [yang-z/CodeV-All-CQ](https://huggingface.co/yang-z/CodeV-All-CQ) |
| 7B | [Qwen/Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B) | [yang-z/CodeV-All-QC](https://huggingface.co/yang-z/CodeV-All-QC) |
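
All of the CodeV checkpoints above are hosted on the Hugging Face Hub, so any row can be loaded with the standard `transformers` API. A minimal sketch, assuming the checkpoints keep the causal-LM layout of their base models (`yang-z/CodeV-DS-6.7B` is just one example row):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any CodeV checkpoint from the table works here; this is one example row.
model_id = "yang-z/CodeV-DS-6.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
```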

## Test

To test the Verilog generation capability of these models, install the [VerilogEval](https://github.com/NVlabs/verilog-eval) and [RTLLM](https://github.com/hkust-zhiyao/rtllm) environments.
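
As a rough sketch of the generation side of that workflow: VerilogEval follows a HumanEval-style harness, where each sample is one JSON line pairing a `task_id` with a generated `completion`, which the harness then compiles and simulates to compute pass@k. The file names and the `prompt` field below are assumptions for illustration; the exact schema is documented in the VerilogEval repo.

```python
import json

import torch
from transformers import pipeline

generator = pipeline(
    model="yang-z/CodeV-QW-7B",  # any CodeV checkpoint from the table above
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Assumed input: one JSON object per line with "task_id" and "prompt" fields
# (HumanEval-style); the output is the samples file the harness scores.
with open("descriptions.jsonl") as fin, open("samples.jsonl", "w") as fout:
    for line in fin:
        problem = json.loads(line)
        out = generator(problem["prompt"], max_new_tokens=1024, do_sample=False)
        # Strip the echoed prompt so only the completion is scored.
        completion = out[0]["generated_text"][len(problem["prompt"]):]
        fout.write(json.dumps({"task_id": problem["task_id"],
                               "completion": completion}) + "\n")
```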

## Quick Start

```python
import torch
from transformers import pipeline

# Describe the Verilog module you want; see the example after this block.
prompt = "FILL IN THE QUESTION"

generator = pipeline(
    model="CODEV",  # replace with a checkpoint from the table above
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Greedy decoding: do_sample=False replaces temperature=0.0, which current
# transformers versions reject as an invalid sampling temperature.
result = generator(prompt, max_length=2048, num_return_sequences=1, do_sample=False)
response = result[0]["generated_text"]
print("Response:", response)
```
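
As a concrete (hypothetical) illustration, the placeholder prompt could be filled in with a natural-language task description:

```python
# Hypothetical example prompt; any natural-language module description works.
prompt = (
    "Please write a Verilog module named counter: an 8-bit up-counter "
    "with clock input clk and an active-high synchronous reset rst."
)
```

Note that by default the pipeline returns the prompt followed by the generated Verilog in `generated_text`, so the printed response includes the question itself.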

## Paper

**arXiv:** <https://arxiv.org/abs/2407.10424>

Please cite the paper if you use the CodeV models.

```bibtex
@misc{zhao2025codevempoweringllmshdl,
      title={CodeV: Empowering LLMs with HDL Generation through Multi-Level Summarization},
      author={Yang Zhao and Di Huang and Chongxiao Li and Pengwei Jin and Muxin Song and Yinan Xu and Ziyuan Nan and Mingju Gao and Tianyun Ma and Lei Qi and Yansong Pan and Zhenxing Zhang and Rui Zhang and Xishan Zhang and Zidong Du and Qi Guo and Xing Hu},
      year={2025},
      eprint={2407.10424},
      archivePrefix={arXiv},
      primaryClass={cs.PL},
      url={https://arxiv.org/abs/2407.10424},
}
```

## Acknowledgements

* [Magicoder](https://github.com/ise-uiuc/magicoder): Training code, original datasets, and data decontamination
* [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder): Base model for CodeV-DeepSeek
* [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/): Base model for CodeV-CodeLlama
* [CodeQwen](https://github.com/QwenLM/CodeQwen1.5): Base model for CodeV-CodeQwen
assets/.DS_Store
ADDED

Binary file (6.15 kB)
assets/overview_v20250413.png
ADDED

Git LFS Details