---

# <font color="IndianRed"> Kraft (Korean Romanization From Transformer) </font>
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CyyBvXZYNjaidOZUNGSCtVbmBganRUCn?usp=sharing)
The Kraft (Korean Romanization From Transformer) model transliterates a Korean personal name written in Hangul into the Roman alphabet, following the [McCune–Reischauer system](https://en.wikipedia.org/wiki/McCune%E2%80%93Reischauer). Kraft is built on the Transformer, a neural network architecture introduced in the 2017 paper "Attention Is All You Need" by Google researchers and designed for sequence-to-sequence tasks such as machine translation, language modeling, and summarization.
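Treating romanization as a sequence-to-sequence task means the Hangul name must first be turned into a character sequence. As a minimal illustrative sketch only (this is not necessarily Kraft's actual preprocessing), precomposed Hangul syllables can be decomposed into their constituent jamo letters using the standard Unicode arithmetic:

```python
# Decompose precomposed Hangul syllables (U+AC00-U+D7A3) into jamo letters,
# the kind of character-level sequence a seq2seq romanizer could consume.
# NOTE: an illustrative sketch, not Kraft's documented preprocessing.

LEADS = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")          # 19 initial consonants
VOWELS = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")     # 21 medial vowels
TAILS = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 finals + empty

def decompose(name: str) -> list[str]:
    """Split each Hangul syllable into its lead/vowel/tail jamo."""
    jamo = []
    for ch in name:
        code = ord(ch) - 0xAC00
        if not 0 <= code <= 11171:         # pass non-Hangul characters through
            jamo.append(ch)
            continue
        lead, rest = divmod(code, 588)     # 588 = 21 vowels * 28 tail slots
        vowel, tail = divmod(rest, 28)
        jamo.extend([LEADS[lead], VOWELS[vowel]])
        if TAILS[tail]:                    # skip the empty final
            jamo.append(TAILS[tail])
    return jamo

print(decompose("김수현"))  # ['ㄱ', 'ㅣ', 'ㅁ', 'ㅅ', 'ㅜ', 'ㅎ', 'ㅕ', 'ㄴ']
```

The model's decoder would then emit the Roman-alphabet target sequence (e.g. "Kim Suhyŏn" under McCune–Reischauer) one character at a time.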