Frankenmodels
Collection · They're not supposed to be that size! Neat, right? · 8 items
How to use chargoddard/llama2-22b with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="chargoddard/llama2-22b")
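The pipeline can then be called on a prompt directly; the prompt and sampling settings below are illustrative:

# Generate a short continuation (settings are illustrative)
out = pipe("Once upon a time,", max_new_tokens=64, do_sample=True, temperature=0.5)
print(out[0]["generated_text"])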
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("chargoddard/llama2-22b")
model = AutoModelForCausalLM.from_pretrained("chargoddard/llama2-22b")
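With the model and tokenizer loaded directly, generation follows the usual generate() pattern (prompt and settings again illustrative):

inputs = tokenizer("Once upon a time,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))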
How to use chargoddard/llama2-22b with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "chargoddard/llama2-22b"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "chargoddard/llama2-22b",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
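Because vLLM serves an OpenAI-compatible API, the same request can be made from Python with the openai client; the api_key value here is a placeholder, since vLLM accepts any key unless one is configured:

# Query the vLLM server via its OpenAI-compatible API
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="chargoddard/llama2-22b",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)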
How to use chargoddard/llama2-22b with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "chargoddard/llama2-22b" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "chargoddard/llama2-22b",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Alternatively, run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "chargoddard/llama2-22b" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "chargoddard/llama2-22b",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
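However the SGLang server is launched, it exposes the same OpenAI-compatible endpoint, so a plain HTTP call from Python mirrors the curl request above:

# POST the same payload as the curl example and print the completion
import requests

resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "chargoddard/llama2-22b",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(resp.json()["choices"][0]["text"])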
How to use chargoddard/llama2-22b with Docker Model Runner:
docker model run hf.co/chargoddard/llama2-22b
This is Llama 2 13b with some additional attention heads from original-flavor Llama 33b frankensteined on.
Fine-tuned on ~10M tokens from RedPajama to settle in the transplants a little.
Not intended for use as-is: this model is meant to serve as a base for further tuning, hopefully with a greater capacity for learning than 13b.
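The card doesn't spell out how the transplant was done, but the general shape of the operation, widening each layer's attention by concatenating donor head weights onto the base model's projections, can be sketched roughly as below. The function, the head counts, and the crude truncation of the donor's hidden dimension down to the base model's are all illustrative assumptions, not the actual recipe:

# Rough sketch: graft extra attention heads from a donor onto a base layer
import torch

HEAD_DIM = 128  # Llama 2 13b and Llama 33b both use 128-dim attention heads

def graft_heads(base, donor, n_extra_heads):
    """Append n_extra_heads donor heads to one attention block (illustrative).

    base/donor map 'q_proj'/'k_proj'/'v_proj' to [n_heads*HEAD_DIM, hidden]
    weights and 'o_proj' to a [hidden, n_heads*HEAD_DIM] weight.
    """
    hidden = base["q_proj"].shape[1]
    rows = n_extra_heads * HEAD_DIM
    out = {}
    for name in ("q_proj", "k_proj", "v_proj"):
        # Take whole donor heads; crudely truncate their input dim to the
        # base model's hidden size (the real merge surely aligns this better)
        extra = donor[name][:rows, :hidden]
        out[name] = torch.cat([base[name], extra], dim=0)
    # o_proj mixes head outputs back to hidden size, so widen its columns
    extra_o = donor["o_proj"][:hidden, :rows]
    out["o_proj"] = torch.cat([base["o_proj"], extra_o], dim=1)
    return out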
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 46.85 |
| ARC (25-shot) | 58.53 |
| HellaSwag (10-shot) | 82.55 |
| MMLU (5-shot) | 54.68 |
| TruthfulQA (0-shot) | 39.84 |
| Winogrande (5-shot) | 76.32 |
| GSM8K (5-shot) | 9.93 |
| DROP (3-shot) | 6.08 |