Chinese LLM MCQ Model - KAGGLE #2

This is the model for KAGGLE #2 of the NYCU Deep Learning course, fine-tuned from Qwen2.5-7B-Instruct with chain-of-thought reasoning capability added.

Model Information

  • Base model: Qwen/Qwen2.5-7B-Instruct
  • Fine-tuning method: LoRA (r=8, alpha=16); a configuration sketch follows this list
  • Task: Chinese multiple-choice question answering (with reasoning traces)
  • Training data: GPT-4-generated reasoning data
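
For reference, a minimal sketch of what the LoRA configuration may have looked like during training. Only r=8 and alpha=16 are stated above; the dropout value, target modules, and task type are assumptions based on common settings for Qwen-style models, not confirmed details.

from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                # rank, stated in the model card
    lora_alpha=16,      # scaling alpha, stated in the model card
    lora_dropout=0.05,  # assumed; not stated in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)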

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    device_map="auto",
    trust_remote_code=True
)

# Load the LoRA adapter
model = PeftModel.from_pretrained(base_model, "RayTsai/Kaggle_2")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("RayTsai/Kaggle_2")
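
A minimal generation example, assuming the adapter was trained on Qwen's chat template and that the model responds with a reasoning trace followed by an answer; the prompt wording below is illustrative, not the exact training format.

# Format a Chinese multiple-choice question with the chat template
question = "下列哪個選項是質數?\nA. 4\nB. 6\nC. 7\nD. 9"
messages = [{"role": "user", "content": question + "\n請先寫出推理過程,再給出最終答案。"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate and decode only the newly produced tokens
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))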

Author

  • Ray Tsai (110651053)
  • NYCU Deep Learning Course, 2025