Instructions to use duyntnet/MythoMist-7b-imatrix-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use duyntnet/MythoMist-7b-imatrix-GGUF with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="duyntnet/MythoMist-7b-imatrix-GGUF")

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("duyntnet/MythoMist-7b-imatrix-GGUF", dtype="auto")
- llama-cpp-python
How to use duyntnet/MythoMist-7b-imatrix-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="duyntnet/MythoMist-7b-imatrix-GGUF",
    filename="MythoMist-7b-IQ1_M.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
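llama-cpp-python also exposes a chat-style interface via create_chat_completion. A minimal sketch reusing the llm object loaded above; the message content and max_tokens value are only examples:

# Chat-style call reusing the `llm` object loaded above (sketch).
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a short opening line for a fantasy story."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])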
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use duyntnet/MythoMist-7b-imatrix-GGUF with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf duyntnet/MythoMist-7b-imatrix-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf duyntnet/MythoMist-7b-imatrix-GGUF:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf duyntnet/MythoMist-7b-imatrix-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf duyntnet/MythoMist-7b-imatrix-GGUF:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf duyntnet/MythoMist-7b-imatrix-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf duyntnet/MythoMist-7b-imatrix-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf duyntnet/MythoMist-7b-imatrix-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf duyntnet/MythoMist-7b-imatrix-GGUF:Q4_K_M
Use Docker
docker model run hf.co/duyntnet/MythoMist-7b-imatrix-GGUF:Q4_K_M
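Once llama-server is running, it exposes an OpenAI-compatible HTTP API (port 8080 by default unless changed with --port). A minimal Python sketch for calling it; the prompt text and sampling values are placeholders:

# Query a local llama-server instance via its OpenAI-compatible
# completions endpoint (default port 8080); prompt is a placeholder.
import json
import urllib.request

payload = {
    "prompt": "Once upon a time,",
    "max_tokens": 256,
    "temperature": 0.7,
}
request = urllib.request.Request(
    "http://localhost:8080/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    result = json.loads(response.read())
print(result["choices"][0]["text"])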
- LM Studio
- Jan
- vLLM
How to use duyntnet/MythoMist-7b-imatrix-GGUF with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "duyntnet/MythoMist-7b-imatrix-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "duyntnet/MythoMist-7b-imatrix-GGUF",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
Use Docker
docker model run hf.co/duyntnet/MythoMist-7b-imatrix-GGUF:Q4_K_M
- SGLang
How to use duyntnet/MythoMist-7b-imatrix-GGUF with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "duyntnet/MythoMist-7b-imatrix-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "duyntnet/MythoMist-7b-imatrix-GGUF",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "duyntnet/MythoMist-7b-imatrix-GGUF" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "duyntnet/MythoMist-7b-imatrix-GGUF",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
- Ollama
How to use duyntnet/MythoMist-7b-imatrix-GGUF with Ollama:
ollama run hf.co/duyntnet/MythoMist-7b-imatrix-GGUF:Q4_K_M
- Unsloth Studio
How to use duyntnet/MythoMist-7b-imatrix-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for duyntnet/MythoMist-7b-imatrix-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for duyntnet/MythoMist-7b-imatrix-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for duyntnet/MythoMist-7b-imatrix-GGUF to start chatting
- Docker Model Runner
How to use duyntnet/MythoMist-7b-imatrix-GGUF with Docker Model Runner:
docker model run hf.co/duyntnet/MythoMist-7b-imatrix-GGUF:Q4_K_M
- Lemonade
How to use duyntnet/MythoMist-7b-imatrix-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull duyntnet/MythoMist-7b-imatrix-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.MythoMist-7b-imatrix-GGUF-Q4_K_M
List all available models
lemonade list
Quantizations of https://huggingface.co/Gryphe/MythoMist-7b
Open source inference clients/UIs
Closed source inference clients/UIs
- LM Studio
- More will be added...
From original readme
MythoMist 7b is, as always, a highly experimental Mistral-based merge based on my latest algorithm, which actively benchmarks the model as it's being built in pursuit of a goal set by the user.
Addendum (2023-11-23): A more thorough investigation revealed a flaw in my original algorithm that has since been resolved. I've considered deleting this model as it did not follow its original objective completely, but since there are plenty of folks enjoying it, I'll be keeping it around. Keep a close eye on my MergeMonster repo for further developments and releases of merges produced by the Merge Monster.
The primary purpose of MythoMist was to reduce usage of the words "anticipation", "ministrations", and other variations we've come to associate negatively with ChatGPT roleplaying data. The algorithm cannot outright ban these words, but instead strives to minimize their usage.
The script has now been made available on my GitHub. Warning: plenty of VRAM is needed.
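The merge script itself lives in the MergeMonster repo and is not reproduced here, but the core idea described above can be sketched roughly: benchmark each candidate merge and penalize it for producing the unwanted words. This is an illustrative approximation only; the function names, penalty weight, and word list are assumptions, not the actual MergeMonster code.

# Illustrative sketch only -- not the actual MergeMonster algorithm.
# Score a candidate merge by its benchmark loss plus a penalty for how
# often sampled completions use unwanted words; a greedy layer-by-layer
# merge would keep a change only if this score improves.
UNWANTED_WORDS = ["anticipation", "ministrations"]  # example list

def word_usage_penalty(completions: list[str]) -> int:
    """Count unwanted-word occurrences across sampled completions."""
    text = " ".join(completions).lower()
    return sum(text.count(word) for word in UNWANTED_WORDS)

def candidate_score(benchmark_loss: float, completions: list[str],
                    penalty_weight: float = 0.5) -> float:
    """Lower is better: benchmark loss plus a weighted word-usage penalty."""
    return benchmark_loss + penalty_weight * word_usage_penalty(completions)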
Final merge composition
After processing 12 models, my algorithm ended up with the following (approximate) final composition:
| Model | Contribution |
|---|---|
| Neural-chat-7b-v3-1 | 26% |
| Synatra-7B-v0.3-RP | 22% |
| Airoboros-m-7b-3.1.2 | 10% |
| Toppy-M-7B | 10% |
| Zephyr-7b-beta | 7% |
| Nous-Capybara-7B-V1.9 | 5% |
| OpenHermes-2.5-Mistral-7B | 5% |
| Dolphin-2.2.1-mistral-7b | 4% |
| Noromaid-7b-v0.1.1 | 4% |
| SynthIA-7B-v1.3 | 3% |
| Mistral-7B-v0.1 | 2% |
| Openchat_3.5 | 2% |
There is no real logic in how these models were divided throughout the merge - small bits and pieces were taken from each and then mixed in with other models on a layer-by-layer basis, using a pattern similar to my MythoMax recipe, in which underlying tensors are mixed in a criss-cross manner.
This new process only decides on the model's layers, not the singular lm_head and embed_tokens layers, which influence much of the model's output. I ran a separate script for that, picking the singular tensors that resulted in the longest responses, which settled on Toppy-M-7B.
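As a rough illustration of the layer-wise mixing described above (not the author's actual recipe; the donor rotation, blend weight, and tensor naming scheme are assumptions):

# Rough illustration of layer-wise "criss-cross" tensor mixing -- not the
# author's actual recipe; donor rotation and blend weight are assumptions.
import torch

def mix_layer(target_sd: dict[str, torch.Tensor],
              donor_sd: dict[str, torch.Tensor],
              layer_idx: int, alpha: float) -> None:
    """Blend every tensor belonging to one transformer layer from a donor
    state dict into the target state dict with weight `alpha`."""
    prefix = f"model.layers.{layer_idx}."
    for name, tensor in target_sd.items():
        if name.startswith(prefix) and name in donor_sd:
            target_sd[name] = (1 - alpha) * tensor + alpha * donor_sd[name]

# Alternating donors across layers gives the criss-cross pattern, e.g.:
# for i in range(num_layers):
#     mix_layer(merged_sd, donor_sds[i % len(donor_sds)], i, alpha=0.5)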
Prompt Format
Due to the wide variation in prompt formats used in this merge I (for now) recommend using Alpaca as the prompt template for compatibility reasons:
### Instruction:
Your instruction or question here.
### Response:
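With the llama-cpp-python setup shown earlier on this page, the Alpaca template can be applied like this; the instruction text, sampling settings, and stop string are only placeholders:

# Build an Alpaca-style prompt and run it with the llama-cpp-python `llm`
# object loaded earlier on this page; the instruction text is a placeholder.
alpaca_prompt = (
    "### Instruction:\n"
    "Write a two-sentence fantasy scene set in a rainy harbor town.\n\n"
    "### Response:\n"
)

output = llm(
    alpaca_prompt,
    max_tokens=256,
    temperature=0.8,
    stop=["### Instruction:"],  # stop before the model begins a new turn
)
print(output["choices"][0]["text"])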