
Logics-Thinking

πŸ”₯ News

  • 2025.09.30 🌟 We are honored to open source Logics-Thinking-32B, excelling in complex logical and abstract reasoning.

  • 2025.09.15 🌟 We are honored to launch Logics-Thinking-8B, a next-generation multimodal large model developed with great dedication by the Logics Team at Alibaba Group. Logics-Thinking is specifically engineered for advanced reasoning tasks, demonstrating outstanding performance in the domain of complex logical and abstract reasoning. This launch marks a key step in our mission to continuously push the frontiers of artificial intelligence, and we are excited for the future it will enable.

[Figure: Logics-Thinking performance on English (EN) and Chinese (CH) benchmarks]

[Figure: Logics-Thinking training pipeline overview]

The Logics-Thinking training pipeline comprises three key steps: (1) Long CoT Data Synthesis, which involves Prompt Engineering and Selective Sampling; (2) Model Merging; and (3) Advanced Training, which includes Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL).
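The model-merging step is not detailed in this card; as an illustrative sketch only, the snippet below shows one common approach, linear interpolation of parameters between two fine-tuned checkpoints. The checkpoint paths and the merge weight ALPHA are hypothetical placeholders, not the recipe actually used for Logics-Thinking.

import torch
from transformers import AutoModelForCausalLM

# Hypothetical checkpoint paths and merge weight; the actual checkpoints and
# merging recipe used for Logics-Thinking are not documented here.
CKPT_A = "path/to/checkpoint_a"
CKPT_B = "path/to/checkpoint_b"
ALPHA = 0.5  # interpolation weight for checkpoint A

model_a = AutoModelForCausalLM.from_pretrained(CKPT_A, torch_dtype=torch.bfloat16)
model_b = AutoModelForCausalLM.from_pretrained(CKPT_B, torch_dtype=torch.bfloat16)

state_a = model_a.state_dict()
state_b = model_b.state_dict()

# Linear merge: theta_merged = ALPHA * theta_a + (1 - ALPHA) * theta_b
merged = {name: ALPHA * state_a[name] + (1.0 - ALPHA) * state_b[name]
          for name in state_a}

model_a.load_state_dict(merged)
model_a.save_pretrained("path/to/merged_checkpoint")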


πŸ“ Quick Start

Install

pip install -r requirements.txt
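The pinned dependency list ships with the repository's requirements.txt; as a rough, assumed minimum (not the project's pinned file), the inference example below relies on:

torch
transformers
accelerate  # needed for device_map="auto"
pillow      # image decoding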

Inference

import torch
from transformers import AutoModelForCausalLM, AutoProcessor

# Inputs can be given as a base64 data URI, an HTTP(S) URL, or a local file path.
image_base64 = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mP8/wcAAwAB/epv2AAAAABJRU5ErkJggg=="
image_url = "http://path/to/your/image.jpg"
image_path = "file:///path/to/your/image.jpg"
video_path = "file:///path/to/video1.mp4"
text = "Please describe this image or video."

MODEL_PATH = "Logics-MLLM/Logics-Thinking-8B"

# Load the model with automatic dtype selection and device placement.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

processor = AutoProcessor.from_pretrained(
    MODEL_PATH,
    trust_remote_code=True,
)

# The custom processor is expected to resolve the base64 / URL / file-path
# forms above; if it requires a decoded image, open it with PIL first.
# Move the input tensors onto the model's device before generating.
inputs = processor(
    text=text,
    images=image_path,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(generated_text)
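The decoded string above typically includes the prompt tokens as well. An optional follow-up, assuming the processor output contains input_ids, decodes only the newly generated tokens:

# Optional: drop the prompt tokens and decode only the new ones
new_tokens = generated_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])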

πŸ“ˆ Experimental Results

[Table: Performance comparison on multimodal mathematical and reasoning benchmarks]
[Table: Performance comparison of models on the multi-subject Chinese benchmark]

Logics-Thinking-8B performs strongly on benchmarks that demand sophisticated logical and mathematical skills, such as WeMath, MathVerse, and LogicVista, demonstrating its advanced capacity for logical reasoning and for solving complex quantitative problems.

[Figure: Example responses generated by Logics-Thinking for Chinese and English questions]

Acknowledgement

Logics is developed based on the codebases of the following projects: SigLIP, ConvNeXT, Qwen3, Qwen2.5-VL, and VLMEvalKit. We sincerely thank these projects for their outstanding work.
