# Forge-T5-Base-s1
This model is initialized from the pre-trained `google-t5/t5-base` checkpoint and fine-tuned on the `AL-GR/AL-GR-v1` dataset using the FORGE framework for 4 training epochs.
## Evaluation Results on AL-GR/AL-GR-v1
| Model | HR@20 | HR@100 | HR@500 | HR@1000 |
|---|---|---|---|---|
| Forge-Qwen 2.5-0.5B-Base-s1 | 0.0506 | 0.1277 | 0.2602 | 0.3068 |
| Forge-T5-Base-s1 | 0.0284 | 0.0689 | 0.1372 | 0.1557 |
Note: HR@K denotes Hit Rate at K — the proportion of test queries for which the correct answer appears in the top-K retrieved/generated results.
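For reference, one standard way to write this metric (a conventional formulation consistent with the description above, not taken from the FORGE repository) is:

$$
\mathrm{HR@K} = \frac{1}{|Q|} \sum_{q \in Q} \mathbb{1}\left[\text{the ground-truth item for } q \text{ appears in the top-}K \text{ results}\right]
$$

where \\(Q\\) is the set of test queries.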
## Usage
### 1. Download the Model
You can download this model locally using the `huggingface_hub` library:

```python
import os

os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"  # Optional: use mirror for faster download in some regions
os.environ["KMP_DUPLICATE_LIB_OK"] = "True"

from huggingface_hub import snapshot_download

snapshot_download(
    repo_id='AL-GR/Forge-T5-Base-s1',
    local_dir='{YOUR_LOCAL_DIR}',  # Replace with your desired local path
    local_dir_use_symlinks=False,
)
```
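As a quick sanity check after downloading, you can try loading the checkpoint with the standard `transformers` T5 classes. This is a minimal sketch and assumes the checkpoint keeps the stock T5 layout of its `google-t5/t5-base` base; actual generation and evaluation should go through the FORGE framework.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

local_dir = '{YOUR_LOCAL_DIR}'  # Same path used in snapshot_download above

# Load the tokenizer and model weights from the downloaded directory
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = T5ForConditionalGeneration.from_pretrained(local_dir)

print(f"Loaded {model.config.model_type} with {model.num_parameters():,} parameters")
```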
### 2. Update Configuration
After downloading, update the configuration file used by the FORGE framework. Specifically, replace the `load_checkpoint_from` field in the JSON config file:

File: `algr/config/generate_t5base_3layer_tiny.json`

Update to:

```json
"load_checkpoint_from": "{YOUR_LOCAL_DIR}"
```

Replace `{YOUR_LOCAL_DIR}` with the actual local path where you downloaded the model.
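If you prefer to apply the change programmatically, a small helper like the following works. This is a sketch; it assumes the config file is plain JSON and rewrites only the `load_checkpoint_from` key, leaving the rest of the FORGE config untouched.

```python
import json
from pathlib import Path

config_path = Path("algr/config/generate_t5base_3layer_tiny.json")
local_dir = "{YOUR_LOCAL_DIR}"  # Same path used in snapshot_download above

# Read the FORGE config, point it at the downloaded checkpoint, and write it back
config = json.loads(config_path.read_text())
config["load_checkpoint_from"] = local_dir
config_path.write_text(json.dumps(config, indent=2))
```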
For more details about the training setup, dataset, or evaluation protocol, please refer to the FORGE framework repository.