appy-mod-beta1

appy-mod-beta1 is a vision-language encoder model fine-tuned from `siglip2-base-patch16-224` for binary image classification. The model is trained for game content moderation, classifying visual content as either safe (good) or unsafe (bad). It uses the `SiglipForImageClassification` architecture.
SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features https://arxiv.org/pdf/2502.14786
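For a quick smoke test, the classifier can be driven through the `transformers` image-classification pipeline (a minimal sketch; the example image path is illustrative):

```python
from transformers import pipeline

# The pipeline wraps the processor, model, and softmax in one call
classifier = pipeline("image-classification", model="KarteeMonkey/appy-mod-beta1")

# Illustrative path; returns a list like [{'label': 'good', 'score': ...}, ...]
print(classifier("screenshot.png"))
```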
Classification Report:

```
              precision    recall  f1-score   support

         bad     0.9763    0.9140    0.9441      1755
        good     0.9279    0.9803    0.9534      1983

    accuracy                         0.9492      3738
   macro avg     0.9521    0.9471    0.9487      3738
weighted avg     0.9506    0.9492    0.9490      3738
```
Label Space: 2 Classes
Class 0: bad (Unsafe content)
Class 1: good (Safe content)
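If you prefer not to hard-code this mapping, the labels can usually be read from the checkpoint's config (a minimal sketch; assumes the repo's `config.json` carries the custom `id2label` names rather than the generic `LABEL_0`/`LABEL_1` defaults):

```python
from transformers import SiglipForImageClassification

model = SiglipForImageClassification.from_pretrained("KarteeMonkey/appy-mod-beta1")

# Expected: {0: 'bad', 1: 'good'} if the config carries custom label names
print(model.config.id2label)
```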
Install Dependencies
```bash
pip install transformers torch pillow gradio hf_xet
```
Inference Code
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "KarteeMonkey/appy-mod-beta1"  # Update this if using a different path
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Label mapping
id2label = {
    "0": "bad",
    "1": "good"
}

def classify_content(image):
    """Classify an image as safe (good) or unsafe (bad)."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    prediction = {
        id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
    }

    return prediction

# Gradio Interface
iface = gr.Interface(
    fn=classify_content,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(num_top_classes=2, label="Game Content Moderation"),
    title="Game Content Moderation with SigLIP2",
    description="Upload an image to classify it as safe (good) or unsafe (bad)."
)

if __name__ == "__main__":
    iface.launch()
```
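For batch or server-side use, the same model can be called directly without the Gradio UI (a minimal sketch; the file names below are illustrative, not shipped with the model):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "KarteeMonkey/appy-mod-beta1"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
model.eval()

# Illustrative file names; replace with your own images
paths = ["screenshot_001.png", "screenshot_002.png"]
images = [Image.open(p).convert("RGB") for p in paths]

# The processor batches PIL images into a single pixel tensor
inputs = processor(images=images, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

for path, (bad_p, good_p) in zip(paths, probs.tolist()):
    # Class 0 = bad (unsafe), class 1 = good (safe), per the label space above
    verdict = "good" if good_p >= bad_p else "bad"
    print(f"{path}: {verdict} (bad={bad_p:.3f}, good={good_p:.3f})")
```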
Intended Use
appy-mod-beta1 is designed for:
- Game Content Moderation – Automated moderation of user-generated or in-game visual content.
- Parental Control Tools – Supports identifying unsafe or inappropriate content in children’s games.
- Online Game Platforms – Enables scalable and automatic screening of images uploaded by users.
- Community Safety – Helps maintain safe and compliant visual environments in multiplayer games and forums.
- AI Moderation Research – A sample project for applying vision-language models to safety-critical applications.
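In a production moderation pipeline you would typically not act on the raw argmax alone. Below is a hedged sketch of threshold-based flagging; the `flag_if_unsafe` helper and its threshold value are illustrative choices, not part of the released model. Lowering the threshold trades false positives for higher recall on unsafe content, which matters here because the report above shows recall on the bad class (~0.914) lagging its precision.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "KarteeMonkey/appy-mod-beta1"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def flag_if_unsafe(image: Image.Image, threshold: float = 0.5) -> bool:
    """Return True when the 'bad' probability crosses the review threshold."""
    inputs = processor(images=image.convert("RGB"), return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze()
    return probs[0].item() >= threshold  # class 0 = bad

# Illustrative usage; how strict to be relative to 0.5 is a policy choice
image = Image.open("user_upload.png")  # illustrative path
if flag_if_unsafe(image, threshold=0.3):
    print("Image flagged for human review")
```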
Base model: google/siglip2-base-patch16-224