{prefix}[SUF]{suffix}[MID]
````

It is recommended to use our template during inference.
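Concretely, assembling a FIM prompt from a code prefix and suffix is a plain concatenation following the template above (a minimal sketch; the helper name is ours, not part of the model's API):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    # Matches the template above: prefix, the [SUF] marker, suffix, then the
    # [MID] marker where the model begins generating the missing middle.
    return f"{prefix}[SUF]{suffix}[MID]"

print(build_fim_prompt("module counter(input clk,", " endmodule"))
# → module counter(input clk,[SUF] endmodule[MID]
```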

## Run CodeV-All Models with Twinny

The instructions below use `codev-all-qc` as an example. For other models, make the corresponding adjustments.
### Install Ollama

Refer to the [official documentation](https://github.com/ollama/ollama/tree/main/docs).
### Import a Model in Ollama

#### Create a Modelfile

Create a file named `Modelfile` and fill it with the following content:
```
FROM path/to/codev-all-qc

TEMPLATE """{{ .Prompt }}"""

PARAMETER stop "```"
```

Replace `path/to/codev-all-qc` with the actual path to your model. You can also customize parameters (e.g., temperature). See the [Modelfile Reference](https://github.com/ollama/ollama/blob/main/docs/modelfile.md) for details.
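For example, a Modelfile that also pins the sampling temperature might look like this (the `0.2` value is illustrative, not a recommendation from the authors):

```
FROM path/to/codev-all-qc

TEMPLATE """{{ .Prompt }}"""

PARAMETER stop "```"
PARAMETER temperature 0.2
```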
#### Import CodeV-ALL

Start the Ollama service:

```
ollama serve
```

Create the model:

```
ollama create codev-all-qc -f path/to/Modelfile
```

Replace `path/to/Modelfile` with the actual path to your Modelfile, then wait for the model creation process to complete.
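Once the model is created, you can also query it directly over Ollama's REST API. The sketch below only builds the JSON body for the `/api/generate` endpoint (Ollama listens on port 11434 by default); the model name must match the one given to `ollama create`, and the prompt follows the FIM template shown earlier:

```python
import json

def generate_body(prefix: str, suffix: str, model: str = "codev-all-qc") -> str:
    # Assemble the FIM prompt and wrap it in the request body expected by
    # Ollama's /api/generate endpoint (non-streaming).
    prompt = f"{prefix}[SUF]{suffix}[MID]"
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

print(generate_body("module counter(", ");"))
```

Send the printed body with, e.g., `curl http://localhost:11434/api/generate -d '<body>'`.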
### Twinny Setup

#### Install Twinny

Open VS Code and install Twinny from the Extensions Marketplace.

<img src="./assets/image-20250912155617922.png" alt="image-20250912155617922" style="zoom: 35%;" />
#### Twinny Configuration

Open the FIM Configuration page.

<img src="./assets/7449b0e6ac2ff722339b7c74f37a8b0e.png" alt="7449b0e6ac2ff722339b7c74f37a8b0e" style="zoom:33%;" />

Enter the settings as shown below. The model name must match the one used during `ollama create`. Set the hostname according to your setup: if Ollama is running on a different node, use that node's IP address; for local use, use `0.0.0.0`. Click Save.

<img src="./assets/image-20250912160402939.png" alt="image-20250912160402939" style="zoom: 35%;" />

Go to Template Configuration and open the template editor.

<img src="./assets/image-20250912160957699.png" alt="image-20250912160957699" style="zoom: 35%;" />
Open `fim.hbs`, replace its content with the following, and save:

```
<|fim_prefix|>```verilog\n<verilog>{{{prefix}}}<|fim_suffix|>{{{suffix}}}<|fim_middle|>
```

<img src="./assets/image-20250912160901631.png" alt="image-20250912160901631" style="zoom: 33%;" />
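To see what the completion request looks like after Twinny expands this template, here is a minimal sketch with plain string replacement standing in for Handlebars rendering (triple braces `{{{…}}}` mean the values are inserted unescaped):

```python
# The fim.hbs content as a literal Python string; the backslash-n below
# reproduces the literal "\n" characters in the template file.
TEMPLATE = ("<|fim_prefix|>```verilog\\n<verilog>{{{prefix}}}"
            "<|fim_suffix|>{{{suffix}}}<|fim_middle|>")

def render(prefix: str, suffix: str) -> str:
    # Twinny fills the prefix/suffix slots with the code around the cursor.
    return TEMPLATE.replace("{{{prefix}}}", prefix).replace("{{{suffix}}}", suffix)

print(render("assign y = ", ";"))
```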
Finally, ensure the Fim option is checked in the template settings. Note: you may need to re-enable it each time VS Code restarts.

<img src="./assets/bd1fc20b0075656ba4e5321523832e19.png" alt="bd1fc20b0075656ba4e5321523832e19" style="zoom:35%;" />
#### Try FIM

You can now try FIM while writing code in VS Code. Note: the first time you trigger a completion, Ollama loads the model, which may cause a significant delay.

<img src="./assets/image-20250225124004805.png" alt="image-20250225124004805" style="zoom: 67%;" />
## Paper

**Arxiv:** <https://arxiv.org/abs/2407.10424>