Model Page: Gemma

  • This model is a fine-tuned version of the google/gemma-2b-it model.

How to Use It

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("carrotter/ko-gemma-2b-it-sft")
model = AutoModelForCausalLM.from_pretrained("carrotter/ko-gemma-2b-it-sft")

# Build a chat in the Gemma turn format
chat = [
    {"role": "user", "content": "피보나치 수열 파이썬 코드로 알려줘"},  # "Show me Python code for the Fibonacci sequence"
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

# The template already includes <bos>, so skip adding special tokens again
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
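For reference, the prompt string that apply_chat_template produces follows the Gemma turn format visible in the example output below. A minimal sketch of that format (build_gemma_prompt is a hypothetical helper added here for illustration; in practice, always use the tokenizer's own template):

```python
# Sketch of the Gemma chat format, as inferred from the example output:
# <bos><start_of_turn>user\n...<end_of_turn>\n<start_of_turn>model\n
def build_gemma_prompt(chat):
    parts = ["<bos>"]
    for turn in chat:
        parts.append(f"<start_of_turn>{turn['role']}\n{turn['content']}<end_of_turn>\n")
    # add_generation_prompt=True appends the opening of the model turn
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = build_gemma_prompt(
    [{"role": "user", "content": "피보나치 수열 파이썬 코드로 알려줘"}]
)
print(prompt)
```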

Example Output

<bos><start_of_turn>user
피보나치 수열 파이썬 코드로 알려줘<end_of_turn>
<start_of_turn>model
다음은 피보나치 수열을 파이썬으로 구현하는 방법의 예입니다:

def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n-1) + fibonacci(n-2)

이 함수는 n이 피보나치 수열의 몇 번째 항인지에 따라 반환합니다. n이 1이거나 2인 경우

Applications

This fine-tuned model is particularly suited for Korean-language applications such as chatbots and question-answering systems. Its fine-tuning helps it produce more accurate and contextually appropriate responses in these domains.

Limitations and Considerations

While the fine-tuning process has optimized the model for specific tasks, it is important to acknowledge its potential limitations. The model's performance can still vary with the complexity of the task and the specifics of the input data. Users are encouraged to evaluate the model thoroughly in their own context to ensure it meets their requirements.

Model Details

  • Format: Safetensors
  • Model size: 3B params
  • Tensor type: F16