- kaitchup/Qwen2.5-72B-Instruct-autoround-2bit-32g-4096-gptq (9B • Updated • 4)
- kaitchup/Qwen2.5-72B-Instruct-autoround-2bit-128g-4096-gptq (7B • Updated • 1)
- kaitchup/Qwen2.5-72B-Instruct-autoround-2bit-128g-2048-gptq (7B • Updated • 1)
- kaitchup/Qwen2.5-72B-Instruct-autoround-2bit-64g-4096-gptq (8B • Updated • 1)
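The suffixes on these repos appear to encode the quantization recipe: the bit width, the group size ("32g" = 32 weights per scale/zero-point group), and what is most likely the calibration sequence length, with "gptq" marking the serialization format. A small illustrative parser for this naming convention (the field meanings are inferred from the names themselves, not from any official documentation):

```python
import re

def parse_quant_suffix(repo_id: str) -> dict:
    """Parse suffixes like '...-2bit-32g-4096-gptq' into their parts.

    Illustrative only: the interpretation of the fields (group size,
    calibration sequence length) is inferred from the repo names.
    Returns an empty dict when the name does not match this pattern.
    """
    m = re.search(r"(\d+)bit-(\d+)g-(\d+)-gptq$", repo_id)
    if not m:
        return {}
    return {
        "bits": int(m.group(1)),        # quantization bit width
        "group_size": int(m.group(2)),  # weights per scale/zero-point group
        "seq_len": int(m.group(3)),     # likely calibration sequence length
    }

info = parse_quant_suffix(
    "kaitchup/Qwen2.5-72B-Instruct-autoround-2bit-32g-4096-gptq")
```

Between two otherwise identical repos, a smaller group size (32g vs. 128g) generally means finer-grained scaling and slightly better accuracy at the cost of a larger checkpoint.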

- kaitchup/Phi-4-mini-instruct-AutoRoundGPTQ-8bit (Text Generation • 1B • Updated • 4)
- kaitchup/Phi-4-mini-instruct-AutoRoundGPTQ-3bit (Text Generation • 0.9B • Updated • 3)
- kaitchup/Phi-4-mini-instruct-AutoRoundGPTQ-4bit (Text Generation • 1B • Updated • 3)
- kaitchup/Phi-4-mini-instruct-AutoRoundGPTQ-2bit (Text Generation • 0.9B • Updated • 3)

- kaitchup/Falcon3-10B-Instruct-AutoRound-GPTQ-4bit (Text Generation • 2B • Updated • 3)
- kaitchup/Falcon3-10B-Base-AutoRound-GPTQ-4bit (Text Generation • 2B • Updated • 3)
- kaitchup/Falcon3-7B-Base-AutoRound-GPTQ-4bit (Text Generation • 2B • Updated • 4)
- kaitchup/Falcon3-7B-Instruct-AutoRound-GPTQ-4bit (Text Generation • 2B • Updated • 3)

- kaitchup/Qwen2.5-1.5B-AutoRound-GPTQ-asym-4bit (Text Generation • 0.4B • Updated • 4)
- kaitchup/Qwen2.5-7B-AutoRound-GPTQ-asym-4bit (Text Generation • 2B • Updated • 4)
- kaitchup/Qwen2.5-1.5B-Instruct-AutoRound-GPTQ-asym-4bit (Text Generation • 0.4B • Updated • 4)
- kaitchup/Qwen2.5-7B-Instruct-AutoRound-GPTQ-asym-4bit (Text Generation • 2B • Updated • 4)
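
The "asym" tag on these checkpoints refers to asymmetric quantization: each group stores a zero-point alongside its scale, letting the quantization grid shift to fit weight ranges that are not centered at zero. A minimal numeric sketch of asymmetric round-to-nearest over a single group (a simplification: GPTQ and AutoRound additionally adjust the remaining weights to compensate for rounding error):

```python
def quantize_asym(weights, bits):
    """Asymmetric round-to-nearest quantization of one group.

    Returns (q, scale, zero) with q values in [0, 2**bits - 1].
    Assumes max(weights) > min(weights), so scale is nonzero.
    """
    qmax = 2 ** bits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / qmax
    zero = round(-lo / scale)  # integer offset that maps lo near 0
    q = [max(0, min(qmax, round(x / scale) + zero)) for x in weights]
    return q, scale, zero

def dequantize_asym(q, scale, zero):
    """Recover approximate weights: w ≈ scale * (q - zero)."""
    return [scale * (qi - zero) for qi in q]

# One 4-bit group with a range not centered at zero.
w = [-0.4, -0.1, 0.2, 0.8]
q, s, z = quantize_asym(w, bits=4)
w_hat = dequantize_asym(q, s, z)
```

A symmetric scheme ("sym") drops the zero-point and clips around zero, which is cheaper but wastes levels when a group's weights are skewed to one side.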

- Machine translation adapters for Llama 2 7B.
- Quantized and fine-tuned versions of the Yi models.
- A collection of 7B models made with mergekit.

- kaitchup/Phi-3-mini-4k-instruct-gptq-4bit (Text Generation • 0.7B • Updated • 209k • 2)
- kaitchup/Phi-3-medium-128k-instruct-awq-4bit (Text Generation • 2B • Updated)
- kaitchup/Phi-3-mini-4k-instruct-bnb-4bit (Text Generation • 2B • Updated • 5)
- kaitchup/Phi-3-medium-4k-instruct-awq-4bit (Text Generation • 2B • Updated)

- kaitchup/Meta-Llama-3.1-8B-Instruct-autoround-gptq-4bit-sym (Text Generation • 2B • Updated • 7 • 1)
- kaitchup/Meta-Llama-3.1-8B-Instruct-awq-4bit (Text Generation • 2B • Updated • 3 • 1)
- kaitchup/Meta-Llama-3.1-8B-awq-4bit (Text Generation • 2B • Updated • 5)
- kaitchup/Meta-Llama-3.1-8B-Instruct-gptq-4bit (Text Generation • 2B • Updated • 3)

- kaitchup/Mistral-NeMo-Minitron-8B-Base-Minivoc-32k-v0.1a (Text Generation • 8B • Updated)
- kaitchup/Llama-3.1-8B-Minivoc-32k-v0.1a (Text Generation • 7B • Updated)
- kaitchup/Qwen2-1.5B-Minivoc-32k-v0.1a (Text Generation • 1B • Updated • 3 • 2)
- kaitchup/Qwen2.5-1.5B-Minivoc-32k-v0.1a-AutoRound-GPTQ-asym-4bit (Text Generation • 0.2B • Updated • 4)

- kaitchup/OLMo-2-1124-7B-Instruct-AutoRound-GPTQ-4bit (Text Generation • 2B • Updated • 3 • 1)
- kaitchup/Llama-3.1-Tulu-3-70B-AutoRound-GPTQ-4bit (Text Generation • 11B • Updated • 3)
- kaitchup/Llama-3.1-Tulu-3-8B-AutoRound-GPTQ-4bit (Text Generation • 2B • Updated • 3)
- kaitchup/OLMo-2-1124-13B-Instruct-AutoRound-GPTQ-4bit (Text Generation • 3B • Updated • 16)

- Some language pairs of OPUS formatted so that source and target sentences appear as single sequences, intended to facilitate fine-tuning of causal LLMs.
- Llama 2 7B and 13B, Llama 3 8B, and Mistral 7B quantized with GPTQ in 2-bit, 3-bit, 4-bit, and 8-bit.
- Contaminated Mistral 7B and TinyLlama adapters, and the datasets used for contamination.
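
Bit width and group size jointly determine the packed checkpoint size: every group of weights carries its own scale (and, in asymmetric schemes, a zero-point), so smaller groups improve fidelity but add metadata. A back-of-the-envelope estimate, assuming 16-bit scales and zero-points and ignoring embeddings, norms, and container overhead:

```python
def quantized_weight_gb(n_params, bits, group_size,
                        scale_bits=16, zero_bits=16):
    """Rough packed-weight size in GB for group-wise quantization:
    n_params * bits for the packed weights, plus one scale and one
    zero-point per group of group_size weights.

    Back-of-the-envelope only: real checkpoints also store
    embeddings, norms, and format overhead.
    """
    weight_bits = n_params * bits
    meta_bits = (n_params / group_size) * (scale_bits + zero_bits)
    return (weight_bits + meta_bits) / 8 / 1e9

# e.g. a 72B model at 2-bit with group size 32 vs. 128
gb_32g = quantized_weight_gb(72e9, bits=2, group_size=32)
gb_128g = quantized_weight_gb(72e9, bits=2, group_size=128)
```

By this estimate, 2-bit with group size 32 packs a 72B model into roughly 27 GB of weights, versus about 20 GB at group size 128; the gap is entirely per-group metadata.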

- kaitchup/Mistral-NeMo-Minitron-8B-Base-AutoRound-GPTQ-sym-4bit (Text Generation • 2B • Updated • 3)
- kaitchup/Mistral-NeMo-Minitron-8B-Base-AutoRound-GPTQ-asym-4bit (Text Generation • 2B • Updated • 3)
- kaitchup/Mistral-Nemo-Base-2407-AutoRound-GPTQ-asym-4bit (Text Generation • 3B • Updated • 5)
- kaitchup/Llama-3.1-Minitron-4B-Width-Base-AutoRound-GPTQ-asym-4bit (Text Generation • 1B • Updated • 4)