Model Description

mistral_7b_yo_instruct is a text generation model for Yorùbá.

Intended uses & limitations

How to use


import requests

API_URL = "https://i8nykns7vw253vx3.us-east-1.aws.endpoints.huggingface.cloud"
headers = {
    "Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "Content-Type": "application/json"
}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

# Prompt content: "Pẹlẹ o. Bawo ni o se wa?" ("Hello. How are you?")
output = query({
    "inputs": "Pẹlẹ o. Bawo ni o se wa?",
})

# Model response: "O dabo. O jẹ ọjọ ti o dara." ("I am safe. It was a good day.")
print(output)
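The raw `query` helper above returns whatever JSON the endpoint sends back, including error bodies. Text-generation endpoints typically respond with a list like `[{"generated_text": "..."}]`, though that shape is not confirmed by this card. The sketch below, with hypothetical helpers `generate` and `first_generated_text`, surfaces HTTP errors and unwraps the generated text under that assumption:

```python
import requests

def first_generated_text(data):
    """Pull the generated text out of a text-generation response body.

    Endpoints typically return [{"generated_text": "..."}]; anything else
    (e.g. an error dict) is returned unchanged for the caller to inspect.
    """
    if isinstance(data, list) and data and "generated_text" in data[0]:
        return data[0]["generated_text"]
    return data

def generate(api_url, token, prompt, timeout=30):
    """POST a prompt to an Inference Endpoint and return the generated text."""
    response = requests.post(
        api_url,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        json={"inputs": prompt},
        timeout=timeout,
    )
    response.raise_for_status()  # raise on 4xx/5xx instead of returning an error body
    return first_generated_text(response.json())
```

Calling `generate(API_URL, token, "Pẹlẹ o. Bawo ni o se wa?")` would then return the model's reply as a plain string, or raise on an authentication or server error.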

Eval results

Coming soon

Limitations and bias

This model is limited by its training dataset of entity-annotated news articles from a specific span of time, and may not generalize well to use cases in other domains or time periods.

Training data

This model is fine-tuned on 60k+ instruction-following demonstrations built from an aggregation of datasets (AfriQA, XLSum, MENYO-20k) and translations of Alpaca-GPT4.
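Instruction-following demonstrations of this kind are commonly stored as Alpaca-style records with instruction, input, and output fields. The field names and prompt template below are assumptions for illustration; the card does not document the exact format used for this model.

```python
def to_prompt(record):
    """Render an Alpaca-style demonstration record into a single training prompt.

    The instruction/input/output field names and the section headers are
    assumptions for illustration, not this model's documented format.
    """
    if record.get("input"):
        return (f"### Instruction:\n{record['instruction']}\n\n"
                f"### Input:\n{record['input']}\n\n"
                f"### Response:\n{record['output']}")
    # Records without extra context omit the Input section entirely
    return (f"### Instruction:\n{record['instruction']}\n\n"
            f"### Response:\n{record['output']}")

# Example record reusing the prompt/response pair shown above
demo = {
    "instruction": "Pẹlẹ o. Bawo ni o se wa?",
    "input": "",
    "output": "O dabo. O jẹ ọjọ ti o dara.",
}
prompt = to_prompt(demo)
```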

Use and safety

We emphasize that mistral_7b_yo_instruct is intended for research purposes only and is not ready for general deployment, as we have not designed adequate safety measures.

Model size: 7.24B params
Tensor type: F16
Format: Safetensors
