How to use Intel/xlm-roberta-base-mrpc with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Intel/xlm-roberta-base-mrpc")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Intel/xlm-roberta-base-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("Intel/xlm-roberta-base-mrpc")

This model is a fine-tuned version of xlm-roberta-base on the GLUE MRPC dataset. It achieves the following results on the evaluation set:
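Since MRPC is a sentence-pair (paraphrase) task, the model should be given two sentences at once. The following is a minimal inference sketch, assuming the tokenizer and model loaded above; the example sentences are illustrative, and the label names are read from the model's config rather than assumed:

# Minimal sketch: score a sentence pair for paraphrase equivalence.
# Uses the `tokenizer` and `model` objects loaded above.
import torch

sentence1 = "The company posted strong quarterly earnings."
sentence2 = "Quarterly earnings at the company were strong."

# Encode the pair together so the model sees both segments.
inputs = tokenizer(sentence1, sentence2, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
predicted_id = int(probs.argmax())
# Label names (e.g. equivalent vs. not equivalent) come from the model config.
print(model.config.id2label[predicted_id], float(probs[predicted_id]))

The pipeline loaded above can score the same pair by passing the sentences as a dict, e.g. pipe({"text": sentence1, "text_pair": sentence2}).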
Model description: More information needed
Intended uses & limitations: More information needed
Training and evaluation data: More information needed
The following hyperparameters were used during training: