set actual license and link to GH repo
@lysandre
README.md CHANGED

````diff
@@ -1,5 +1,6 @@
 ---
 language: zh
+license: apache-2.0
 ---
 
 # Bert-base-chinese
@@ -19,15 +20,17 @@ language: zh
 
 This model has been pre-trained for Chinese, training and random input masking has been applied independently to word pieces (as in the original BERT paper).
 
-- **Developed by:**
+- **Developed by:** Google
 - **Model Type:** Fill-Mask
 - **Language(s):** Chinese
-- **License:**
+- **License:** Apache 2.0
 - **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model.
 
 ### Model Sources
+- **GitHub repo**: https://github.com/google-research/bert/blob/master/multilingual.md
 - **Paper:** [BERT](https://arxiv.org/abs/1810.04805)
 
+
 ## Uses
 
 #### Direct Use
@@ -67,9 +70,4 @@ tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
 
 model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")
 
-```
-
-
-
-
-
+```
````
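For context on the code the diff leaves untouched: the README's getting-started snippet loads the tokenizer and masked-LM head for `bert-base-chinese`. A minimal sketch of the Fill-Mask usage the card advertises could look like the following; the `pipeline` call and the example sentence are illustrative additions, not part of this commit.

```python
# Minimal sketch of Fill-Mask inference with bert-base-chinese.
# Assumes the `transformers` library is installed; the example
# sentence is illustrative and not taken from the README above.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-chinese")

# "Paris is the capital of [MASK] country." -- the model should rank 法 highly.
for prediction in fill_mask("巴黎是[MASK]国的首都。"):
    print(prediction["token_str"], round(prediction["score"], 3))
```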