---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: crossner
pretty_name: CrossNER-AI
dataset_info:
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-field
          '2': I-field
          '3': B-task
          '4': I-task
          '5': B-product
          '6': I-product
          '7': B-algorithm
          '8': I-algorithm
          '9': B-researcher
          '10': I-researcher
          '11': B-metrics
          '12': I-metrics
          '13': B-programlang
          '14': I-programlang
          '15': B-conference
          '16': I-conference
          '17': B-university
          '18': I-university
          '19': B-country
          '20': I-country
          '21': B-person
          '22': I-person
          '23': B-organisation
          '24': I-organisation
          '25': B-location
          '26': I-location
          '27': B-misc
          '28': I-misc
  splits:
  - name: train
    num_bytes: 10000
    num_examples: 100
  - name: validation
    num_bytes: 35000
    num_examples: 350
  - name: test
    num_bytes: 43100
    num_examples: 431
---
# CrossNER AI Dataset

CrossNER is a named-entity-recognition benchmark for cross-domain evaluation. This dataset contains the labeled data from its AI domain.

## Features

- `tokens`: the list of words in the sentence
- `ner_tags`: the list of NER labels (as integers), one per token
## Label Mapping

The dataset uses the following 29 labels:

| Index | Label |
|---|---|
| 0 | O |
| 1 | B-field |
| 2 | I-field |
| 3 | B-task |
| 4 | I-task |
| 5 | B-product |
| 6 | I-product |
| 7 | B-algorithm |
| 8 | I-algorithm |
| 9 | B-researcher |
| 10 | I-researcher |
| 11 | B-metrics |
| 12 | I-metrics |
| 13 | B-programlang |
| 14 | I-programlang |
| 15 | B-conference |
| 16 | I-conference |
| 17 | B-university |
| 18 | I-university |
| 19 | B-country |
| 20 | I-country |
| 21 | B-person |
| 22 | I-person |
| 23 | B-organisation |
| 24 | I-organisation |
| 25 | B-location |
| 26 | I-location |
| 27 | B-misc |
| 28 | I-misc |
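The labels follow the standard BIO scheme: `B-` marks the first token of an entity, `I-` continues it, and `O` marks tokens outside any entity. A minimal sketch (the sentence below is hypothetical, not drawn from the dataset) of grouping a BIO-tagged sequence into entity spans:

```python
def bio_to_spans(tokens, labels):
    """Group BIO-tagged tokens into (entity_text, entity_type) spans."""
    spans, current_toks, current_type = [], [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            # A B- tag always opens a new span, closing any open one.
            if current_toks:
                spans.append((" ".join(current_toks), current_type))
            current_toks, current_type = [tok], lab[2:]
        elif lab.startswith("I-") and current_type == lab[2:]:
            # An I- tag extends the span only if the type matches.
            current_toks.append(tok)
        else:
            # "O", or an I- tag that does not continue the open span.
            if current_toks:
                spans.append((" ".join(current_toks), current_type))
            current_toks, current_type = [], None
    if current_toks:
        spans.append((" ".join(current_toks), current_type))
    return spans

# Hypothetical example sentence with hand-assigned labels.
print(bio_to_spans(
    ["BERT", "was", "trained", "on", "Wikipedia"],
    ["B-product", "O", "O", "O", "B-misc"],
))
# → [('BERT', 'product'), ('Wikipedia', 'misc')]
```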
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("eesuhn/crossner-ai")
```
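Since `ner_tags` are stored as integers, you will usually want to map them back to label strings. A self-contained sketch of that mapping, using the label list from the table above (the example sentence is hypothetical, not taken from the dataset; when the dataset is loaded, `dataset["train"].features["ner_tags"].feature.int2str` provides the same mapping):

```python
# The 29 labels in index order, as listed in the Label Mapping table.
LABELS = [
    "O",
    "B-field", "I-field", "B-task", "I-task",
    "B-product", "I-product", "B-algorithm", "I-algorithm",
    "B-researcher", "I-researcher", "B-metrics", "I-metrics",
    "B-programlang", "I-programlang", "B-conference", "I-conference",
    "B-university", "I-university", "B-country", "I-country",
    "B-person", "I-person", "B-organisation", "I-organisation",
    "B-location", "I-location", "B-misc", "I-misc",
]

def decode_tags(tag_ids):
    """Map integer ner_tags back to their string labels."""
    return [LABELS[i] for i in tag_ids]

# Hypothetical example record, not drawn from the dataset itself.
tokens = ["Yann", "LeCun", "works", "on", "deep", "learning"]
tags = [9, 10, 0, 0, 1, 2]
print(list(zip(tokens, decode_tags(tags))))
# → [('Yann', 'B-researcher'), ('LeCun', 'I-researcher'), ('works', 'O'),
#    ('on', 'O'), ('deep', 'B-field'), ('learning', 'I-field')]
```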