GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge
Abstract
BERT-based models fine-tuned on SemCor3.0 achieve state-of-the-art performance in WSD by incorporating context-gloss pairs.
Word Sense Disambiguation (WSD) aims to find the exact sense of an ambiguous word in a particular context. Traditional supervised methods rarely take into consideration lexical resources such as WordNet, which are widely utilized in knowledge-based methods. Recent studies have shown the effectiveness of incorporating the gloss (sense definition) into neural networks for WSD. However, compared with traditional word-expert supervised methods, they have not achieved much improvement. In this paper, we focus on how to better leverage gloss knowledge in a supervised neural WSD system. We construct context-gloss pairs and propose three BERT-based models for WSD. We fine-tune the pre-trained BERT model on the SemCor3.0 training corpus, and experimental results on several English all-words WSD benchmark datasets show that our approach outperforms state-of-the-art systems.
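The core idea of pairing a context sentence with each candidate sense's gloss can be sketched as below. This is an illustrative reconstruction, not the authors' code: the `build_context_gloss_pairs` helper, the hand-written two-sense inventory, and the gloss formatting are assumptions; in the paper the glosses come from WordNet and each pair is fed to BERT as a sentence-pair (binary) classification input.

```python
# Hypothetical sketch of GlossBERT-style context-gloss pair construction.
# The tiny sense inventory below is a hand-written stand-in for WordNet:
# each entry maps a sense key to (gloss, is_correct_for_this_context).

def build_context_gloss_pairs(context, target, sense_inventory):
    """Pair the context sentence with the gloss of every candidate sense
    of `target`; the gold sense is labeled 1, all other senses 0."""
    pairs = []
    for sense_key, (gloss, is_correct) in sense_inventory.items():
        pairs.append({
            "sentence_a": context,                 # BERT segment A: the context
            "sentence_b": f"{target}: {gloss}",    # BERT segment B: gloss with target prefix
            "label": 1 if is_correct else 0,       # binary sense-matching label
            "sense_key": sense_key,
        })
    return pairs

# Two illustrative senses of "bank" (keys mimic WordNet's sense-key format).
senses = {
    "bank%1:17:01::": ("sloping land beside a body of water", True),
    "bank%1:14:00::": ("a financial institution that accepts deposits", False),
}
pairs = build_context_gloss_pairs(
    "He sat on the bank of the river.", "bank", senses
)
```

At inference time, a model trained on such pairs scores every candidate gloss against the context and predicts the sense whose pair receives the highest positive score.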
Models citing this paper: 1
Spaces citing this paper: 1