---
license: other
tags:
- merge
- mergekit
- lazymergekit
- mlx
base_model:
- NousResearch/Meta-Llama-3-8B-Instruct
- mlabonne/OrpoLlama-3-8B
- cognitivecomputations/dolphin-2.9-llama3-8b
- Danielbrdz/Barcenas-Llama3-8b-ORPO
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- vicgalle/Configurable-Llama-3-8B-v0.3
- MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3
---
# mlx-community/ChimeraLlama-3-8B-v3-unquantized
This model was converted to MLX format from [`mlabonne/ChimeraLlama-3-8B-v3`](https://huggingface.co/mlabonne/ChimeraLlama-3-8B-v3) using mlx-lm version **0.12.1**.

Refer to the [original model card](https://huggingface.co/mlabonne/ChimeraLlama-3-8B-v3) for more details on the model.
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub.
model, tokenizer = load("mlx-community/ChimeraLlama-3-8B-v3-unquantized")

response = generate(model, tokenizer, prompt="hello", verbose=True)
```
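Since this is an instruction-tuned Llama 3 merge, the raw-prompt call above can be improved by wrapping the prompt with the tokenizer's chat template before generation. A minimal sketch of that pattern, assuming the tokenizer bundled with this repo ships a Llama 3 chat template (the `chat_template` guard falls back to the plain prompt if it does not):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/ChimeraLlama-3-8B-v3-unquantized")

prompt = "hello"

# Wrap the prompt in the model's chat format when one is available,
# so generation starts from a proper assistant turn.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```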