🧭 DETECT: Determining Ease and Textual Clarity of German Text Simplifications

This repository contains the trained checkpoint for DETECT, an automatic quality evaluation metric for German Automatic Text Simplification (ATS), introduced in

“DETECT: Determining Ease and Textual Clarity of German Text Simplifications”.

DETECT provides fine-grained scores for simplicity, meaning preservation, and fluency, along with a composite total score. Further details about the metric can be found in the GitHub repository and in the accompanying paper.

🔎 Note

  • This repository hosts a checkpoint file only.
  • You must load it through the DETECT codebase (see usage below).
  • It is not directly compatible with AutoModel.from_pretrained(); see the download sketch below.
  • The model supports reference-based text simplification evaluation only — it does not provide reference-free evaluation.
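
Because the checkpoint cannot be loaded with AutoModel.from_pretrained(), you may want to fetch the raw file explicitly. A minimal sketch, assuming the standard huggingface_hub client; the resulting local path can then be passed to DETECT in place of the repository path used in the usage example below:

from huggingface_hub import hf_hub_download

# Download the checkpoint file from this repository into the local Hugging Face cache
# (sketch; hf_hub_download is part of the huggingface_hub package)
ckpt_path = hf_hub_download(
    repo_id="ZurichNLP/DETECT",
    filename="best-LENS_multi_wechsel_reducedhs-epoch=04.ckpt",
)
print(ckpt_path)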

⚙️ Usage

Clone and install the DETECT codebase:

git clone https://github.com/ZurichNLP/DETECT.git
cd DETECT/detect
pip install -e .

Then, in Python:


# Import the DETECT class from the installed codebase
# (the exact import path is assumed here; check the DETECT repository README)
from detect import DETECT

# Initialize the model with the checkpoint hosted in this repository
detect = DETECT("ZurichNLP/DETECT/best-LENS_multi_wechsel_reducedhs-epoch=04.ckpt", rescale=True)

# Complex source sentences
complex = [
    "Sie sind kulturell den Küstenbewohnern von Papua-Neuguinea verwandt."
]

# System simplifications to be evaluated
simple = [
    "Sie sind kulturell den Menschen in Papua-Neuguinea ähnlich."
]

# One list of reference simplifications per source sentence
references = [[
    "Sie sind kulturell den Küstenbewohnern von Papua-Neuguinea ähnlich.",
    "Sie ähneln den Menschen aus Papua-Neuguinea, die an der Küste leben."
]]

scores = detect.score(complex, simple, references, batch_size=8, devices=[0])
print(scores)
# [{'simplicity': 78.6, 'meaning_preservation': 80.1, 'fluency': 77.3, 'total': 78.3}]
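
detect.score returns one dictionary per complex/simple pair. To report corpus-level results, the per-pair scores can simply be averaged; a minimal sketch, assuming the output format shown above:

# Average each score dimension over all scored pairs
# (assumes the list-of-dicts output shown above)
corpus_scores = {
    key: sum(pair[key] for pair in scores) / len(scores)
    for key in ("simplicity", "meaning_preservation", "fluency", "total")
}
print(corpus_scores)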

📖 Citation

If you use DETECT, please cite the paper “DETECT: Determining Ease and Textual Clarity of German Text Simplifications”.
