This repository contains GGUF-format conversions of an IBM Granite base model at various quantization levels.
Please reference the base model's full model card here: https://huggingface.co/ibm-granite/granite-3.2-2b-instruct
Granite-3.2-2B-Instruct-GGUF
Model Summary: Granite-3.2-2B-Instruct is a 2-billion-parameter, long-context AI model fine-tuned for thinking capabilities. Built on top of Granite-3.1-2B-Instruct, it has been trained using a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks. The model allows controllability of its thinking capability, ensuring it is applied only when required.
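The thinking toggle is exposed through the chat template. Below is a minimal sketch of switching it on, following the usage pattern documented in the base model card; it assumes the transformers checkpoint of the base model rather than the GGUF files in this repository, and the prompt is only illustrative.

```python
# Minimal sketch: toggling the model's thinking mode via the chat template.
# Assumes the transformers checkpoint (ibm-granite/granite-3.2-2b-instruct),
# not the GGUF files in this repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ibm-granite/granite-3.2-2b-instruct"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path).to(device)

conv = [{"role": "user", "content": "How many prime numbers are there below 30?"}]

# thinking=True switches the chat template into its reasoning mode;
# omit it (or pass thinking=False) for direct answers.
inputs = tokenizer.apply_chat_template(
    conv,
    return_tensors="pt",
    thinking=True,
    return_dict=True,
    add_generation_prompt=True,
).to(device)

output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```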
- Developers: Granite Team, IBM
- Website: Granite Docs
- Release Date: February 26th, 2025
- License: Apache 2.0
 
Supported Languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12.
Intended Use: This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications.
Capabilities
- Thinking
- Summarization
- Text classification
- Text extraction
- Question-answering
- Retrieval Augmented Generation (RAG)
- Code-related tasks
- Function-calling tasks
- Multilingual dialog use cases
- Long-context tasks including long document/meeting summarization, long document QA, etc.
 
Available quantizations
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- 16-bit
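As a quick way to try one of these files, the sketch below loads a quantized checkpoint with llama-cpp-python and runs a chat completion. The filename, context size, and GPU offload settings are assumptions; substitute the .gguf file you actually downloaded from this repository.

```python
# Minimal sketch: running a quantized GGUF file with llama-cpp-python.
# The filename below is an assumption; use the actual .gguf file you
# downloaded from this repository (e.g. a 4-bit or 8-bit variant).
from llama_cpp import Llama

llm = Llama(
    model_path="granite-3.2-2b-instruct-Q4_K_M.gguf",  # assumed local filename
    n_ctx=8192,        # context window; adjust for long-context workloads
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize the key ideas behind retrieval augmented generation."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

Lower-bit variants reduce memory use and file size at some cost to output quality, so the 4-bit and 5-bit files are a common middle ground.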
Model tree for ibm-research/granite-3.2-2b-instruct-GGUF
- Base model: ibm-granite/granite-3.1-2b-base