---
language:
- en
tags:
- coreference-resolution
- xcore
- long-document
- cross-document
- maverick
license: cc-by-nc-sa-4.0
datasets:
- ECB
metrics:
- CoNLL
task_categories:
- coreference-resolution
model-index:
- name: sapienzanlp/xcore-ecb
  results:
  - task:
      type: coreference-resolution
      name: coreference-resolution
    dataset:
      name: ECB
      type: coreference
    metrics:
    - name: Avg. F1
      type: CoNLL
      value: 42.4
---
# xCoRe ECB+

Official weights for xCoRe, pretrained on LitBank and trained on ECB+, based on DeBERTa-large. This model achieves an average CoNLL-F1 of 42.4 on ECB+.

Other models available on the SapienzaNLP Hugging Face hub:
| Model | Dataset | Avg. CoNLL-F1 | Setting |
|---|---|---|---|
| `sapienzanlp/xcore-litbank` | LitBank | 78.2 | Long-Document (Book Splits) |
| `sapienzanlp/xcore-ecb` | ECB+ | 42.4 | Cross-Document (News) |
| `sapienzanlp/xcore-scico` | SciCo | 31.0 | Cross-Document (Scientific) |
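The checkpoints above can be fetched directly from the Hub. A minimal sketch, assuming only that the `huggingface_hub` package is installed (the inference API of the xCoRe codebase itself is not shown here):

```python
from huggingface_hub import snapshot_download


def fetch_checkpoint(repo_id: str = "sapienzanlp/xcore-ecb") -> str:
    """Download (or reuse from the local cache) the full model repository
    and return the local directory containing its files."""
    return snapshot_download(repo_id=repo_id)


if __name__ == "__main__":
    # Prints the local path where the weights were stored.
    print(fetch_checkpoint())
```

Swap `repo_id` for any of the model names in the table above to fetch a different checkpoint.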
## Results on ECB+

See the paper "xCoRe: Cross-context Coreference Resolution" for the full results on ECB+.
## Citation

This work was published at the EMNLP 2025 main conference. If you use any part of it, please cite our paper as follows:
```bibtex
@inproceedings{martinelli-etal-2025-xcore,
    title = "x{C}o{R}e: Cross-context Coreference Resolution",
    author = "Martinelli, Giuliano and
      Gatti, Bruno and
      Navigli, Roberto",
    editor = "Christodoulopoulos, Christos and
      Chakraborty, Tanmoy and
      Rose, Carolyn and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.1737/",
    pages = "34252--34266"
}
```