XLM-RoBERTa for English Metaphor Interpretation

This model is a fine-tuned version of XLM-RoBERTa-large on the MultiNLI dataset for metaphor interpretation through NLI in English, at the premise-hypothesis level. The model is presented in our paper Meta4XNLI: A Crosslingual Parallel Corpus for Metaphor Detection and Interpretation.

Model Sources

  • Paper: https://arxiv.org/abs/2404.07053

Training & Testing Data

The MultiNLI dataset (Williams et al., 2018).

Training Hyperparameters

  • Batch size: 8
  • Weight Decay: 0.1
  • Learning Rate: 0.00005
  • Epochs: 4
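
A minimal training sketch, assuming these hyperparameters are plugged into a Hugging Face Trainer run on MultiNLI; the multi_nli dataset identifier, the output directory name, and the overall script structure are assumptions, since the card does not include the actual training script.

from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

BASE = "xlm-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(BASE)
# Three MultiNLI classes: entailment, neutral, contradiction.
model = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=3)

mnli = load_dataset("multi_nli")

def encode(batch):
    # Encode premise-hypothesis pairs as sequence-pair inputs.
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

mnli = mnli.map(encode, batched=True)

args = TrainingArguments(
    output_dir="xlmr-metaphor-nli-en",  # assumed name
    per_device_train_batch_size=8,      # batch size from this card
    weight_decay=0.1,                   # weight decay from this card
    learning_rate=5e-5,                 # learning rate from this card
    num_train_epochs=4,                 # epochs from this card
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=mnli["train"],
    eval_dataset=mnli["validation_matched"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()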

Results

  • Accuracy: 83.36%

Label Dictionary

Assuming the standard MultiNLI label order for the three-way classification head:

{
  "LABEL_0": "entailment",
  "LABEL_1": "neutral",
  "LABEL_2": "contradiction"
}
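
A minimal inference sketch with Transformers; the premise-hypothesis pair is an illustrative example, and the id2label mapping is read from the model's own configuration.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "HiTZ/xlm-roberta-large-metaphor-interpretationNLI-en"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

# Illustrative pair: the premise contains a metaphor ("hit a wall"),
# the hypothesis paraphrases it literally.
premise = "The negotiations hit a wall after the first week."
hypothesis = "The negotiations stopped making progress."

# Encode the premise-hypothesis pair as a single sequence-pair input.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its label name.
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])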

Citation

If you use this model, please cite our work:


@InProceedings{N18-1101,
  author = {Williams, Adina and Nangia, Nikita and Bowman, Samuel},
  title = {A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference},
  booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)},
  year = {2018},
  publisher = {Association for Computational Linguistics},
  pages = {1112--1122},
  address = {New Orleans, Louisiana},
  url = {http://aclweb.org/anthology/N18-1101}
}


@misc{sanchezbayona2024meta4xnlicrosslingualparallelcorpus,
  title = {Meta4XNLI: A Crosslingual Parallel Corpus for Metaphor Detection and Interpretation},
  author = {Sanchez-Bayona, Elisa and Agerri, Rodrigo},
  year = {2024},
  eprint = {2404.07053},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL},
  url = {https://arxiv.org/abs/2404.07053}
}

Model Card Contact

{elisa.sanchez, rodrigo.agerri}@ehu.eus
