# Llama 3.2 3B DART LLM - GGUF Quantized Models
This repository contains GGUF quantized versions of the Llama 3.2 3B DART LLM model, fine-tuned for robot task planning in construction environments.
## Model Details
- Base Model: meta-llama/Llama-3.2-3B
- Fine-tuned Version: Based on QLoRA fine-tuned model for robotics task planning
- Format: GGUF (GPT-Generated Unified Format)
- Use Case: Optimized for inference with llama.cpp and compatible frameworks
## Available Files

- `llama_3.2_3b-lora-qlora-dart-llm_q4_k_m.gguf` (q4_k_m quantization)
## Usage with llama.cpp

```bash
# Clone llama.cpp repository
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Build llama.cpp
make

# Download the quantized model (q4_k_m)
wget https://huggingface.co/YongdongWang/llama-3.2-3b-lora-qlora-dart-llm-gguf/resolve/main/llama_3.2_3b-lora-qlora-dart-llm_q4_k_m.gguf

# Run inference
./main -m llama_3.2_3b-lora-qlora-dart-llm_q4_k_m.gguf -p "### Instruction:\nDeploy Excavator 1 to Soil Area 1 for excavation\n\n### Response:\n" -n 512
```
## Usage with Python (llama-cpp-python)

```python
from llama_cpp import Llama

# Load the model
llm = Llama(model_path="llama_3.2_3b-lora-qlora-dart-llm_q4_k_m.gguf", n_ctx=2048)

# Generate a response
prompt = "### Instruction:\nDeploy Excavator 1 to Soil Area 1 for excavation\n\n### Response:\n"
output = llm(prompt, max_tokens=512, stop=["</s>"], echo=False)
print(output["choices"][0]["text"])
```
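The `### Instruction:` / `### Response:` wrapper used in the prompt above can be factored into a small helper so application code only passes the task text. `build_prompt` is a hypothetical convenience function for illustration, not part of the llama-cpp-python API:

```python
def build_prompt(instruction: str) -> str:
    """Wrap a task instruction in the Alpaca-style template shown in the examples."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

# The resulting string is passed to llm(...) exactly as in the example above.
prompt = build_prompt("Deploy Excavator 1 to Soil Area 1 for excavation")
```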
## Quantization Details
Different quantization levels offer trade-offs between model size, inference speed, and quality:
- f16: Full 16-bit precision (largest, highest quality)
- q8_0: 8-bit quantization (good balance of size and quality)
- q5_k_m: 5-bit quantization with mixed precision (recommended)
- q4_k_m: 4-bit quantization (good for most use cases)
- q3_k_m: 3-bit quantization (smaller, some quality loss)
- q2_k: 2-bit quantization (smallest, significant quality loss)
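As a rough sanity check, expected file sizes can be estimated as parameter count times average bits per weight. The parameter count (~3.2B) and the bits-per-weight figures below are approximations for illustration, not measured values for this model (k-quants mix precisions across tensors):

```python
# Rough GGUF size estimate: parameters * average bits per weight / 8 bytes.
PARAMS = 3.2e9  # approximate parameter count of Llama 3.2 3B (assumption)

BITS_PER_WEIGHT = {  # approximate effective bits per weight per quantization level
    "f16": 16.0,
    "q8_0": 8.5,
    "q5_k_m": 5.7,
    "q4_k_m": 4.8,
    "q3_k_m": 3.9,
    "q2_k": 2.6,
}

def estimated_size_gb(quant: str) -> float:
    """Estimated file size in gigabytes for a given quantization level."""
    return PARAMS * BITS_PER_WEIGHT[quant] / 8 / 1e9

for quant, bits in BITS_PER_WEIGHT.items():
    print(f"{quant:8s} ~{estimated_size_gb(quant):.1f} GB")
```

The absolute numbers are rough, but the ordering matches the trade-off list above: each step down in bits roughly halves to two-thirds the file size at the cost of quality.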
## Performance
The model generates structured JSON task sequences for construction robotics:
```json
{
  "tasks": [
    {
      "instruction_function": {
        "dependencies": [],
        "name": "target_area_for_specific_robots",
        "object_keywords": ["soil_area_1"],
        "robot_ids": ["robot_excavator_01"],
        "robot_type": null
      },
      "task": "target_area_for_specific_robots_1"
    }
  ]
}
```
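Downstream code can parse and validate such output before dispatching tasks to robots. This is a minimal sketch that assumes the model returns exactly the JSON shown above; in practice the raw completion may need trimming of surrounding text first:

```python
import json

# Example completion, copied from the task-sequence format shown above.
raw_output = """
{
  "tasks": [
    {
      "instruction_function": {
        "dependencies": [],
        "name": "target_area_for_specific_robots",
        "object_keywords": ["soil_area_1"],
        "robot_ids": ["robot_excavator_01"],
        "robot_type": null
      },
      "task": "target_area_for_specific_robots_1"
    }
  ]
}
"""

plan = json.loads(raw_output)
for task in plan["tasks"]:
    fn = task["instruction_function"]
    # Each task names a function, its target keywords, and the robots involved.
    print(task["task"], fn["name"], fn["robot_ids"])
```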
## Original Model

This GGUF model is converted from: YongdongWang/llama-3.2-3b-lora-qlora-dart-llm

## License

This model inherits the license from the base model (meta-llama/Llama-3.2-3B).

## Citation
```bibtex
@misc{llama_3.2_3b_lora_qlora_dart_llm_gguf,
  title={Llama 3.2 3B DART LLM - GGUF Quantized Models},
  author={YongdongWang},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/YongdongWang/llama-3.2-3b-lora-qlora-dart-llm-gguf}
}
```