NSMC ๊ฐ์ • ๋ถ„์„ LoRA ๋ชจ๋ธ

NSMC ๋ฐ์ดํ„ฐ์…‹์œผ๋กœ ํŒŒ์ธํŠœ๋‹๋œ ํ•œ๊ตญ์–ด ๊ฐ์ • ๋ถ„์„ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค.

๋ชจ๋ธ ์„ค๋ช…

  • ** ๋ฒ ์ด์Šค ๋ชจ๋ธ **: klue/bert_base
  • ** ํŒŒ์ธ ํŠœ๋‹ ๋ฐฉ๋ฒ•**: LoRA
  • ์–ธ์–ด: ํ•œ๊ตญ์–ด
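For reference, a minimal sketch of how a LoRA adapter is typically attached to klue/bert-base with the peft library. The r, lora_alpha, lora_dropout, and target_modules values below are illustrative assumptions, not the configuration actually used to train this adapter.

from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Base encoder with a 2-class classification head (negative / positive)
base_model = AutoModelForSequenceClassification.from_pretrained(
    "klue/bert-base",
    num_labels=2,
)

# Hypothetical LoRA settings -- the adapter in this repo may use different values
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # BERT attention projection layers
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights and the classification head are trainable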

Performance

  • **Final performance**: 85%

Training Information

Dataset

  • **Name**: NSMC
  • **Training data**: 10,000 examples (see the loading sketch below)
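As a rough illustration of where such a subset could come from, the snippet below loads NSMC via the datasets library and takes 10,000 training examples. The Hub dataset id e9t/nsmc, the random seed, and the field names shown in the comments are assumptions based on the public NSMC release, not details confirmed by this model card.

from datasets import load_dataset

# NSMC (Naver Sentiment Movie Corpus); the "e9t/nsmc" Hub id is an assumption,
# and loader-script versions of the dataset may require trust_remote_code=True.
nsmc = load_dataset("e9t/nsmc")

# Take a 10,000-example subset of the training split, as listed above
train_subset = nsmc["train"].shuffle(seed=42).select(range(10_000))

print(train_subset)     # expected fields: 'id', 'document', 'label'
print(train_subset[0])  # a single review with a 0 (negative) / 1 (positive) label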


๋ชจ๋ธ ์ •๋ณด

How to Use

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

# Hub repo id of the uploaded LoRA adapter
model_name_upload = "yooranyeong/nsmc-sentiment-lora"

# Load the base model (for classification)
print("Loading base model")
base_model_reload = AutoModelForSequenceClassification.from_pretrained(
    "klue/bert-base",
    num_labels=2
)

# Load the uploaded LoRA adapter and its tokenizer
print(f"Loading LoRA adapter: {model_name_upload}")
model_reload = PeftModel.from_pretrained(base_model_reload, model_name_upload)
tokenizer_reload = AutoTokenizer.from_pretrained(model_name_upload)

# Move to GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_reload = model_reload.to(device)
model_reload.eval()

print("Model loaded!")
print("You can now use this model anywhere with this code!")
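With the model and tokenizer loaded as above, a single review can be classified roughly as follows. The example sentence and the 0 = negative / 1 = positive mapping are assumptions based on the usual NSMC label convention, not details stated in this card.

# Classify a single review (continues from the loading code above)
text = "์ด ์˜ํ™” ์ •๋ง ์žฌ๋ฏธ์žˆ์–ด์š”"  # example sentence (assumption)

inputs = tokenizer_reload(text, return_tensors="pt", truncation=True, max_length=128)
inputs = {k: v.to(device) for k, v in inputs.items()}

with torch.no_grad():
    logits = model_reload(**inputs).logits

pred = logits.argmax(dim=-1).item()
# Assumed NSMC convention: 0 = negative, 1 = positive
print("positive" if pred == 1 else "negative")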

Note: ์ด ๋ชจ๋ธ์€ ๊ต์œก ๋ชฉ์ ์œผ๋กœ ๋งŒ๋“ค์–ด์กŒ์Šต๋‹ˆ๋‹ค.
