---
language:
- en
- ja
license: cc-by-4.0
task_categories:
- translation
dataset_info:
  features:
  - name: translation
    struct:
    - name: en
      dtype: string
    - name: ja
      dtype: string
  splits:
  - name: train
    num_bytes: 249255464
    num_examples: 2801388
  download_size: 175157050
  dataset_size: 249255464
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dataset Card for JESC
### Dataset Summary

This corpus is extracted from JESC (the Japanese-English Subtitle Corpus) and provides Japanese-English sentence pairs.
For more information, see the website below:
**[(https://nlp.stanford.edu/projects/jesc/index_ja.html)](https://nlp.stanford.edu/projects/jesc/index_ja.html)**

JESC is the product of a collaboration between Stanford University, Google Brain, and Rakuten Institute of Technology. It was created by crawling the internet for movie and TV subtitles and aligning their captions. It is one of the largest freely available EN-JA corpora, and it covers the poorly represented domain of colloquial language.

You can download the scripts, tools, and crawlers used to create this dataset on **[Github](https://github.com/rpryzant/JESC)**.
**[You can read the paper here](https://arxiv.org/abs/1710.10639)**.

### How to use

```python
from datasets import load_dataset
dataset = load_dataset("nntsuzu/JESC")
```
If loading the full dataset takes too long, use streaming:

```python
from datasets import load_dataset
dataset = load_dataset("nntsuzu/JESC", streaming=True)
```
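
With `streaming=True`, `load_dataset` returns an iterable that yields examples one at a time rather than an indexable dataset. A minimal sketch of inspecting the first few pairs, assuming the `translation` feature structure declared in the YAML header above:

```python
from datasets import load_dataset

# Stream the train split; examples are fetched lazily rather than
# downloading the full parquet files up front.
dataset = load_dataset("nntsuzu/JESC", split="train", streaming=True)

# Peek at the first three sentence pairs.
for i, example in enumerate(dataset):
    pair = example["translation"]
    print(pair["en"], "->", pair["ja"])
    if i == 2:
        break
```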

### Data Instances
For example:

```json
{
  "translation": {
    "en": "you are back, aren't you, harold?",
    "ja": "あなたは戻ったのね、ハロルド?"
  }
}
```
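
After a regular (non-streaming) load, each row carries the pair under the `translation` key declared in the YAML header; a short sketch of reading one row:

```python
from datasets import load_dataset

dataset = load_dataset("nntsuzu/JESC", split="train")

# Each row is a dict with a single "translation" struct holding both sides.
pair = dataset[0]["translation"]
print(pair["en"])  # English caption
print(pair["ja"])  # Japanese caption
```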
### Contents
1. A large corpus consisting of 2.8 million sentence pairs.
2. Translations of casual language, colloquialisms, expository writing, and narrative discourse. These domains are hard to find in JA-EN MT.
3. Pre-processed data, including tokenized train/dev/test splits (part of the original JESC release).
4. Code for building your own crawled datasets and tools for manipulating MT data (see the GitHub repository linked above).

### Data Splits
Only a `train` split is provided.
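
Because only `train` ships with this mirror, you can derive your own dev/test sets with the `datasets` library's `train_test_split`; a minimal sketch (the 10%/50% proportions and the seed are arbitrary choices, not part of the dataset):

```python
from datasets import load_dataset

dataset = load_dataset("nntsuzu/JESC", split="train")

# Hold out 10% of train, then split that holdout evenly into dev and test.
holdout = dataset.train_test_split(test_size=0.1, seed=42)
dev_test = holdout["test"].train_test_split(test_size=0.5, seed=42)

train, dev, test = holdout["train"], dev_test["train"], dev_test["test"]
print(len(train), len(dev), len(test))
```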

### Licensing Information
These data are released under the Creative Commons Attribution 4.0 (CC BY 4.0) license.

### Citation Information

```bibtex
@ARTICLE{pryzant_jesc_2018,
   author = {{Pryzant}, R. and {Chung}, Y. and {Jurafsky}, D. and {Britz}, D.},
    title = "{JESC: Japanese-English Subtitle Corpus}",
  journal = {Language Resources and Evaluation Conference (LREC)},
 keywords = {Computer Science - Computation and Language},
     year = 2018
}
```