ZhiyuanChen committed
Commit d1c1f66 · verified · 1 Parent(s): d90d7b8

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,339 @@
+ ---
+ datasets:
+ - multimolecule/rnacentral
+ language: rna
+ library_name: multimolecule
+ license: agpl-3.0
+ mask_token: <mask>
+ pipeline_tag: fill-mask
+ tags:
+ - Biology
+ - RNA
+ - ncRNA
+ widget:
+ - example_title: microRNA 21
+   output:
+   - label: <null>
+     score: 0.988225
+   - label: <cls>
+     score: 0.011775
+   - label: <mask>
+     score: 0.0
+   - label: C
+     score: 0.0
+   - label: N
+     score: 0.0
+   text: UAGCUUAUCAGACUGAUGUUGA
+ - example_title: microRNA 146a
+   output:
+   - label: <null>
+     score: 0.600279
+   - label: <cls>
+     score: 0.399507
+   - label: C
+     score: 0.000211
+   - label: <mask>
+     score: 3.0e-06
+   - label: N
+     score: 0.0
+   text: UGAGAACUGAAUUCCAUGGGUU
+ - example_title: microRNA 155
+   output:
+   - label: <null>
+     score: 0.969194
+   - label: <mask>
+     score: 0.030797
+   - label: <cls>
+     score: 9.0e-06
+   - label: C
+     score: 0.0
+   - label: N
+     score: 0.0
+   text: UUAAUGCUAAUCGUGAUAGGGGUU
+ - example_title: metastasis associated lung adenocarcinoma transcript 1
+   output:
+   - label: <null>
+     score: 0.998803
+   - label: <cls>
+     score: 0.001166
+   - label: <mask>
+     score: 3.1e-05
+   - label: C
+     score: 0.0
+   - label: <unk>
+     score: 0.0
+   text: AGGCAUUGAGGCAGCCAGCGCAGGGGCUUCUGCUGAGGGGGCAGGCGGAGCUUGAGGAAA
+ - example_title: Pvt1 oncogene
+   output:
+   - label: <null>
+     score: 0.999972
+   - label: <cls>
+     score: 2.8e-05
+   - label: <mask>
+     score: 0.0
+   - label: C
+     score: 0.0
+   - label: N
+     score: 0.0
+   text: CCCGCGCUCCUCCGGGCAGAGCGCGUGUGGCGGCCGAGCACAUGGGCCCGCGGGCCGGGC
+ - example_title: telomerase RNA component
+   output:
+   - label: <null>
+     score: 0.999882
+   - label: <cls>
+     score: 0.000118
+   - label: <mask>
+     score: 0.0
+   - label: C
+     score: 0.0
+   - label: N
+     score: 0.0
+   text: GGGUUGCGGAGGGUGGGCCUGGGAGGGGUGGUGGCCAUUUUUUGUCUAACCCUAACUGAG
+ - example_title: vault RNA 2-1
+   output:
+   - label: <null>
+     score: 1.0
+   - label: <cls>
+     score: 0.0
+   - label: <mask>
+     score: 0.0
+   - label: C
+     score: 0.0
+   - label: <unk>
+     score: 0.0
+   text: CGGGUCGGAGUUAGCUCAAGCGGUUACCUCCUCAUGCCGGACUUUCUAUCUGUCCAUCUCUGUGCUGGGGUUCGAGACCCGCGGGUGCUUACUGACCCUUUUAUGCAA
+ - example_title: brain cytoplasmic RNA 1
+   output:
+   - label: <cls>
+     score: 0.943406
+   - label: <null>
+     score: 0.055805
+   - label: <mask>
+     score: 0.000789
+   - label: C
+     score: 0.0
+   - label: N
+     score: 0.0
+   text: GGCCGGGCGCGGUGGCUCACGCCUGUAAUCCCAGCUCUCAGGGAGGCUAAGAGGCGGGAGGAUAGCUUGAGCCCAGGAGUUCGAGACCUGCCUGGGCAAUAUAGCGAGACCCCGUUCUCCAGAAAAAGGAAAAAAAAAAACAAAAGACAAAAAAAAAAUAAGCGUAACUUCCCUCAAAGCAACAACCCCCCCCCCCCUUU
+ - example_title: HIV-1 TAR-WT
+   output:
+   - label: <mask>
+     score: 0.996234
+   - label: <cls>
+     score: 0.002032
+   - label: <null>
+     score: 0.001734
+   - label: C
+     score: 0.0
+   - label: N
+     score: 0.0
+   text: GGUCUCUCUGGUUAGACCAGAUCUGAGCCUGGGAGCUCUCUGGCUAACUAGGGAACC
+ ---
+
+ # Uni-RNA
+
+ Pre-trained model on RNA using a masked language modeling (MLM) objective.
+
+ > [!WARNING]
+ > This model is currently in a **Development Preview** state. It is not yet ready for production use and may contain incomplete or experimental features. The MultiMolecule team is actively working on improving this model, and we welcome feedback from the community.
+
+ ## Disclaimer
+
+ This is the OFFICIAL implementation of the [Uni-RNA: Universal Pre-Trained Models Revolutionize RNA Research](https://doi.org/10.1101/2023.07.11.548588) by Xi Wang, Ruichu Gu, Zhiyuan Chen, et al.
+
+ > [!TIP]
+ > The MultiMolecule team has confirmed that the provided model and checkpoints are producing the same intermediate representations as the original implementation.
+
+ ## Model Details
+
+ Uni-RNA is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of non-coding RNA sequences in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those sequences. Please refer to the [Training Details](#training-details) section for more information on the training process.
+
+ ### Model Specification
+
+ | Num Layers | Hidden Size | Num Heads | Intermediate Size | Num Parameters (M) | FLOPs (G) | MACs (G) | Max Num Tokens |
+ | ---------- | ----------- | --------- | ----------------- | ------------------ | --------- | -------- | -------------- |
+ | 16 | 1024 | 16 | 3072 | 868 | 44.05 | 22.01 | 1024 |
+
+ ### Links
+
+ - **Code**: [multimolecule.unirna](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/unirna)
+ - **Data**: [multimolecule/rnacentral](https://huggingface.co/datasets/multimolecule/rnacentral)
+ - **Paper**: [Uni-RNA: Universal Pre-Trained Models Revolutionize RNA Research](https://doi.org/10.1101/2023.07.11.548588)
+ - **Developed by**: Xi Wang, Ruichu Gu, Zhiyuan Chen, Yongge Li, Xiaohong Ji, Guolin Ke, Han Wen
+ - **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased) - [ESM](https://huggingface.co/facebook/esm2_t48_15B_UR50D)
+
+ ## Usage
+
+ The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:
+
+ ```bash
+ pip install multimolecule
+ ```
+
+ ### Direct Use
+
+ #### Masked Language Modeling
+
+ You can use this model directly with a pipeline for masked language modeling:
+
+ ```python
+ import multimolecule  # you must import multimolecule to register models
+ from transformers import pipeline
+
+ predictor = pipeline("fill-mask", model="multimolecule/unirna")
+ output = predictor("gguc<mask>cucugguuagaccagaucugagccu")
+ ```
+
+ ### Downstream Use
+
+ #### Extract Features
+
+ Here is how to use this model to get the features of a given sequence in PyTorch:
+
+ ```python
+ from multimolecule import RnaTokenizer, UniRnaModel
+
+
+ tokenizer = RnaTokenizer.from_pretrained("multimolecule/unirna")
+ model = UniRnaModel.from_pretrained("multimolecule/unirna")
+
+ text = "UAGCUUAUCAGACUGAUGUUG"
+ input = tokenizer(text, return_tensors="pt")
+
+ output = model(**input)
+ ```
+
+ #### Sequence Classification / Regression
+
+ > [!NOTE]
+ > This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.
+
+ Here is how to use this model as a backbone to fine-tune for a sequence-level task in PyTorch:
+
+ ```python
+ import torch
+ from multimolecule import RnaTokenizer, UniRnaForSequencePrediction
+
+
+ tokenizer = RnaTokenizer.from_pretrained("multimolecule/unirna")
+ model = UniRnaForSequencePrediction.from_pretrained("multimolecule/unirna")
+
+ text = "UAGCUUAUCAGACUGAUGUUG"
+ input = tokenizer(text, return_tensors="pt")
+ label = torch.tensor([1])
+
+ output = model(**input, labels=label)
+ ```
+
+ #### Token Classification / Regression
+
+ > [!NOTE]
+ > This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for token classification or regression.
+
+ Here is how to use this model as a backbone to fine-tune for a nucleotide-level task in PyTorch:
+
+ ```python
+ import torch
+ from multimolecule import RnaTokenizer, UniRnaForTokenPrediction
+
+
+ tokenizer = RnaTokenizer.from_pretrained("multimolecule/unirna")
+ model = UniRnaForTokenPrediction.from_pretrained("multimolecule/unirna")
+
+ text = "UAGCUUAUCAGACUGAUGUUG"
+ input = tokenizer(text, return_tensors="pt")
+ label = torch.randint(2, (len(text), ))
+
+ output = model(**input, labels=label)
+ ```
+
+ #### Contact Classification / Regression
+
+ > [!NOTE]
+ > This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.
+
+ Here is how to use this model as a backbone to fine-tune for a contact-level task in PyTorch:
+
+ ```python
+ import torch
+ from multimolecule import RnaTokenizer, UniRnaForContactPrediction
+
+
+ tokenizer = RnaTokenizer.from_pretrained("multimolecule/unirna")
+ model = UniRnaForContactPrediction.from_pretrained("multimolecule/unirna")
+
+ text = "UAGCUUAUCAGACUGAUGUUG"
+ input = tokenizer(text, return_tensors="pt")
+ label = torch.randint(2, (len(text), len(text)))
+
+ output = model(**input, labels=label)
+ ```
+
+ ## Training Details
+
+ Uni-RNA used Masked Language Modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input, then runs the entire masked sequence through the model and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.
+
+ ### Training Data
+
+ The Uni-RNA model was pre-trained on [RNAcentral](https://multimolecule.danling.org/datasets/rnacentral).
+ RNAcentral is a free, public resource that offers integrated access to a comprehensive and up-to-date set of non-coding RNA sequences provided by a collaborating group of [Expert Databases](https://rnacentral.org/expert-databases) representing a broad range of organisms and RNA types.
+
+ Uni-RNA applied [CD-HIT (CD-HIT-EST)](https://sites.google.com/view/cd-hit) with a cut-off at 100% sequence identity to remove redundancy from RNAcentral. The final dataset contains 23.7 million non-redundant RNA sequences.
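+
+ The exact command is not reported; a typical CD-HIT-EST invocation at a 100% identity cut-off looks like the following (file names are placeholders):
+
+ ```bash
+ # -c 1.0: 100% sequence identity cut-off; -n 10: word size recommended for high cut-offs
+ # -M 0: no memory limit; -T 0: use all available CPU threads
+ cd-hit-est -i rnacentral.fasta -o rnacentral_nr.fasta -c 1.0 -n 10 -M 0 -T 0
+ ```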
+
+ Uni-RNA preprocessed all tokens by replacing "U"s with "T"s.
+
+ Note that during model conversions, "T" is replaced with "U". [`RnaTokenizer`][multimolecule.RnaTokenizer] will convert "T"s to "U"s for you; you may disable this behaviour by passing `replace_T_with_U=False`.
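+
+ For example (a minimal sketch; `replace_T_with_U` is the same flag set in this repository's `tokenizer_config.json`):
+
+ ```python
+ from multimolecule import RnaTokenizer
+
+ # default behaviour: "T"s in the input are converted to "U"s
+ tokenizer = RnaTokenizer.from_pretrained("multimolecule/unirna")
+
+ # keep "T"s as-is instead
+ tokenizer = RnaTokenizer.from_pretrained("multimolecule/unirna", replace_T_with_U=False)
+ ```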
+
+ ### Training Procedure
+
+ #### Preprocessing
+
+ Uni-RNA used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT (a sketch follows the list):
+
+ - 15% of the tokens are masked.
+ - In 80% of the cases, the masked tokens are replaced by `<mask>`.
+ - In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
+ - In the 10% remaining cases, the masked tokens are left as is.
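+
+ A minimal PyTorch sketch of this procedure (illustrative only: token ids follow this repository's `vocab.txt`, and the random-token branch is simplified in that it may occasionally redraw the original token):
+
+ ```python
+ import torch
+
+
+ def mask_for_mlm(input_ids: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
+     special = input_ids < 6  # ids 0-5 are special tokens in vocab.txt
+     labels = input_ids.clone()
+
+     # select 15% of the non-special tokens
+     selected = (torch.rand(input_ids.shape) < 0.15) & ~special
+     labels[~selected] = -100  # unselected positions are ignored by the loss
+
+     masked = input_ids.clone()
+     rand = torch.rand(input_ids.shape)
+     # 80% of the selected tokens -> <mask> (id 4)
+     masked[selected & (rand < 0.8)] = 4
+     # 10% -> a random vocabulary token; the remaining 10% are left as is
+     random_branch = selected & (rand >= 0.8) & (rand < 0.9)
+     masked[random_branch] = torch.randint(6, 26, input_ids.shape)[random_branch]
+     return masked, labels
+ ```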
+
+ #### Pre-training
+
+ The model was trained on 8 NVIDIA A100 GPUs with 80 GiB of memory each.
+
+ - Learning rate: 1e-4
+ - Learning rate scheduler: Inverse square root (sketched below)
+ - Learning rate warm-up: 10,000 steps
+ - Weight decay: 0.01
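+
+ The inverse square root schedule with linear warm-up can be sketched as follows (a common formulation; the exact schedule used by Uni-RNA may differ in details):
+
+ ```python
+ def inverse_sqrt_lr(step: int, base_lr: float = 1e-4, warmup: int = 10_000) -> float:
+     """Linear warm-up to base_lr, then decay proportional to 1/sqrt(step)."""
+     if step < warmup:
+         return base_lr * step / warmup
+     return base_lr * (warmup / step) ** 0.5
+ ```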
+
+ ## Citation
+
+ **BibTeX**:
+
+ ```bibtex
+ @article{Wang2023.07.11.548588,
+   author = {Wang, Xi and Gu, Ruichu and Chen, Zhiyuan and Li, Yongge and Ji, Xiaohong and Ke, Guolin and Wen, Han},
+   title = {UNI-RNA: UNIVERSAL PRE-TRAINED MODELS REVOLUTIONIZE RNA RESEARCH},
+   elocation-id = {2023.07.11.548588},
+   year = {2023},
+   doi = {10.1101/2023.07.11.548588},
+   publisher = {Cold Spring Harbor Laboratory},
+   abstract = {RNA molecules play a crucial role as intermediaries in diverse biological processes. Attaining a profound understanding of their function can substantially enhance our comprehension of life{\textquoteright}s activities and facilitate drug development for numerous diseases. The advent of high-throughput sequencing technologies makes vast amounts of RNA sequence data accessible, which contains invaluable information and knowledge. However, deriving insights for further application from such an immense volume of data poses a significant challenge. Fortunately, recent advancements in pre-trained models have surfaced as a revolutionary solution for addressing such challenges owing to their exceptional ability to automatically mine and extract hidden knowledge from massive datasets. Inspired by the past successes, we developed a novel context-aware deep learning model named Uni-RNA that performs pre-training on the largest dataset of RNA sequences at the unprecedented scale to date. During this process, our model autonomously unraveled the obscured evolutionary and structural information embedded within the RNA sequences. As a result, through fine-tuning, our model achieved the state-of-the-art (SOTA) performances in a spectrum of downstream tasks, including both structural and functional predictions. Overall, Uni-RNA established a new research paradigm empowered by the large pre-trained model in the field of RNA, enabling the community to unlock the power of AI at a whole new level to significantly expedite the pace of research and foster groundbreaking discoveries. Competing Interest Statement: Patents have been filed based on the methods described in this manuscript. All authors are employees of DP Technology, Beijing.},
+   URL = {https://www.biorxiv.org/content/early/2023/07/12/2023.07.11.548588},
+   eprint = {https://www.biorxiv.org/content/early/2023/07/12/2023.07.11.548588.full.pdf},
+   journal = {bioRxiv}
+ }
+ ```
+
+ ## Contact
+
+ Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.
+
+ Please contact the authors of the [Uni-RNA paper](https://doi.org/10.1101/2023.07.11.548588) for questions or comments on the paper/model.
+
+ ## License
+
+ This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).
+
+ ```spdx
+ SPDX-License-Identifier: AGPL-3.0-or-later
+ ```
config.json ADDED
@@ -0,0 +1,38 @@
+ {
+   "architectures": [
+     "UniRnaForSecondaryStructurePrediction"
+   ],
+   "attention_dropout": 0.1,
+   "bos_token_id": 1,
+   "codon": false,
+   "dtype": "float32",
+   "emb_layer_norm_before": true,
+   "eos_token_id": 2,
+   "head": null,
+   "hidden_act": "gelu",
+   "hidden_dropout": 0.1,
+   "hidden_size": 1024,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "lm_head": null,
+   "mask_token_id": 4,
+   "max_position_embeddings": 1026,
+   "model_type": "unirna",
+   "null_token_id": 5,
+   "num_attention_heads": 16,
+   "num_hidden_layers": 16,
+   "pad_token_id": 0,
+   "position_embedding_type": "rotary",
+   "token_dropout": false,
+   "transformers_version": "4.57.1",
+   "unk_token_id": 3,
+   "use_cache": true,
+   "vocab_size": 26
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:314e292a9c30f75e4bf063443715fa858ccd6dd225172d76bd3d62ffa38c6a1e
+ size 680438700
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3f5221d411a036a6f2178292465a51c1133fab287e9635e1832563eca252e1b0
+ size 680495311
special_tokens_map.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "additional_special_tokens": [
+     "<null>"
+   ],
+   "bos_token": "<cls>",
+   "cls_token": "<cls>",
+   "eos_token": "<eos>",
+   "mask_token": "<mask>",
+   "pad_token": "<pad>",
+   "sep_token": "<eos>",
+   "unk_token": "<unk>"
+ }
tokenizer_config.json ADDED
@@ -0,0 +1,69 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<cls>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "<eos>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "4": {
+       "content": "<mask>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "5": {
+       "content": "<null>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<null>"
+   ],
+   "bos_token": "<cls>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<cls>",
+   "codon": false,
+   "eos_token": "<eos>",
+   "extra_special_tokens": {},
+   "mask_token": "<mask>",
+   "model_max_length": 1024,
+   "nmers": 1,
+   "pad_token": "<pad>",
+   "replace_T_with_U": true,
+   "sep_token": "<eos>",
+   "tokenizer_class": "RnaTokenizer",
+   "unk_token": "<unk>"
+ }
vocab.txt ADDED
@@ -0,0 +1,26 @@
+ <pad>
+ <cls>
+ <eos>
+ <unk>
+ <mask>
+ <null>
+ A
+ C
+ G
+ U
+ N
+ R
+ Y
+ S
+ W
+ K
+ M
+ B
+ D
+ H
+ V
+ .
+ X
+ *
+ -
+ I