Byte-Level BPE Tokenizer: jpn_Jpan (32K)

A Byte-Level BPE tokenizer trained on jpn_Jpan data from Fineweb-2-HQ.

Training Details

| Parameter | Value |
|---|---|
| Algorithm | Byte-Level BPE |
| Language | jpn_Jpan |
| Target Vocab Size | 32,000 |
| Final Vocab Size | 32,917 |
| Pre-tokenizer | custom:jpn_Jpan |
| Number handling | ltr_3digit (see the sketch below) |
| Contraction handling | True |
| Normalizer | NFC |
| Special Tokens | `<s>`, `</s>`, `<pad>`, `<unk>` |
| Training Shards | 2 |
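
The `ltr_3digit` setting chunks digit runs left-to-right into groups of at most three digits, which matches the sample encoding below (`12345` becomes `123`, `45`). A minimal sketch of the idea, assuming this interpretation; the function name is illustrative, and split points are marked with spaces here, whereas a real pre-tokenizer would record offsets instead of rewriting the text:

```python
import re

def split_digits_ltr(text: str, group: int = 3) -> str:
    """Chunk each digit run left-to-right into groups of at most `group` digits."""
    def chunk(m: re.Match) -> str:
        d = m.group(0)
        return " ".join(d[i:i + group] for i in range(0, len(d), group))
    return re.sub(r"\d+", chunk, text)

print(split_digits_ltr("12345"))    # -> "123 45"
print(split_digits_ltr("1234567"))  # -> "123 456 7"
```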

Usage

```python
from transformers import AutoTokenizer

# Download the tokenizer from the Hub and load it.
tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_ltr_jpn_Jpan_32000_v2")

# Encode a string into token IDs.
tokens = tokenizer.encode("Hello, world!")
```
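
Because the tokenizer is byte-level, decoding is lossless: the IDs round-trip back to the exact input string. A quick check, reusing the `tokenizer` from above with the こんにちは example from the sample encoding below:

```python
# Round-trip: byte-level BPE reconstructs the input exactly.
ids = tokenizer.encode("こんにちは", add_special_tokens=False)
print(tokenizer.convert_ids_to_tokens(ids))  # byte-level token strings
print(tokenizer.decode(ids))                 # -> こんにちは
```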

Files

- `tokenizer.json` — Full HuggingFace tokenizer
- `vocab.json` — Vocabulary mapping
- `merges.txt` — BPE merge rules
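
Since `tokenizer.json` is self-contained, it can also be loaded without transformers through the standalone tokenizers library; a sketch assuming the file has already been downloaded locally (e.g. via huggingface_hub):

```python
from tokenizers import Tokenizer

# Load the serialized tokenizer directly from the local file.
tok = Tokenizer.from_file("tokenizer.json")

enc = tok.encode("こんにちは")
print(enc.tokens)  # byte-level token strings
print(enc.ids)     # corresponding token IDs
```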

Sample Encoding

| Text | Tokens | Token IDs |
|---|---|---|
| Hello, world! 12345 This is a test. こんにちは | `H`, `ell`, `o`, `,`, `Ġworld`, `!`, `Ġ`, `123`, `45`, `ĠTh`, `is`, `Ġis`, `Ġa`, `Ġt`, `est`, `.`, `Ġãģĵ`, `ãĤĵãģ«`, `ãģ¡ãģ¯` | 42, 4980, 81, 14, 31907, 3, 223, 20720, 4690, 10378, 1257, 4909, 2412, 1086, 3761, 16, 4547, 5934, 6901 |
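
The tokens above are displayed through the GPT-2-style byte-to-unicode table that byte-level BPE uses: `Ġ` denotes a leading space, and non-ASCII characters appear as their individual UTF-8 bytes, so こ (bytes `E3 81 93`) renders as `ãģĵ`. A sketch of that mapping; the assumption that this tokenizer follows the standard GPT-2 convention is consistent with the sample above:

```python
def bytes_to_unicode() -> dict[int, str]:
    """Standard GPT-2 byte-to-printable-character table for byte-level BPE."""
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:         # unprintable bytes get remapped...
            bs.append(b)
            cs.append(256 + n)  # ...to characters above U+00FF
            n += 1
    return dict(zip(bs, map(chr, cs)))

byte2char = bytes_to_unicode()
show = lambda s: "".join(byte2char[b] for b in s.encode("utf-8"))
print(show(" world"))  # -> Ġworld
print(show("こ"))      # -> ãģĵ
```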