**AMRBART** is a pretrained semantic parser that converts a sentence into an abstract meaning graph. You may find our paper [here](https://arxiv.org/pdf/2203.07836.pdf) (arXiv). The original implementation is available [here](https://github.com/goodbai-nlp/AMRBART/tree/acl2022).

[PWC: AMR-to-Text Generation on LDC2017T10](https://paperswithcode.com/sota/amr-to-text-generation-on-ldc2017t10?p=graph-pre-training-for-amr-parsing-and-1)

[PWC: AMR-to-Text Generation on LDC2020T02](https://paperswithcode.com/sota/amr-to-text-generation-on-ldc2020t02?p=graph-pre-training-for-amr-parsing-and-1)

[PWC: AMR Parsing on LDC2017T10](https://paperswithcode.com/sota/amr-parsing-on-ldc2017t10?p=graph-pre-training-for-amr-parsing-and-1)

[PWC: AMR Parsing on LDC2020T02](https://paperswithcode.com/sota/amr-parsing-on-ldc2020t02?p=graph-pre-training-for-amr-parsing-and-1)

**News**🎈

- (2022/12/10) Fixed `max_length` bugs in AMR parsing and updated the results.
- (2022/10/16) Released the AMRBART-v2 model, which is simpler, faster, and stronger.

# Requirements

+ python 3.8
+ pytorch 1.8
+ transformers 4.21.3
+ datasets 2.4.0
+ Tesla V100 or A100

We recommend using conda to manage virtual environments:
```
conda env update --name <env> --file requirements.yml
```
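
Optionally, you can verify that the pinned versions are active inside the environment (a quick sanity check we suggest here, not a required step):
```
conda activate <env>
# should print versions matching the list above, e.g. 1.8.x and 4.21.3
python -c "import torch, transformers; print(torch.__version__, transformers.__version__)"
```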

# Data Processing

<!-- Since the AMR corpora require an LDC license, we upload some examples for format reference. If you have the license, feel free to contact us to get the preprocessed data. -->
You may download the AMR corpora at [LDC](https://www.ldc.upenn.edu).
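
For reference, the LDC releases store each AMR as a PENMAN-notation graph preceded by metadata lines. Below is a small constructed example (ours, not taken from the corpora):
```
# ::snt The boy wants to go.
(w / want-01
    :ARG0 (b / boy)
    :ARG1 (g / go-02
        :ARG0 b))
```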

Please follow [this repository](https://github.com/goodbai-nlp/AMR-Process) to preprocess AMR graphs:
```
bash run-process-acl2022.sh
```

# Usage

Our model is available at [Hugging Face](https://huggingface.co/xfbai). Here is how to initialize an AMR parsing model in PyTorch:

```
from transformers import BartForConditionalGeneration
from model_interface.tokenization_bart import AMRBartTokenizer  # We use our own tokenizer to process AMRs

model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large-finetuned-AMR3.0-AMRParsing-v2")
tokenizer = AMRBartTokenizer.from_pretrained("xfbai/AMRBART-large-finetuned-AMR3.0-AMRParsing-v2")
```
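
Once loaded, the model can be driven with the standard Hugging Face generation API. The snippet below is a minimal sketch under that assumption: our inference scripts add AMR-specific pre- and post-processing that is omitted here, and the generation parameters are illustrative.
```
# Minimal parsing sketch. Assumes AMRBartTokenizer exposes the standard
# tokenizer call/decode interface; the repo's inference scripts handle
# the AMR-specific post-processing that we skip here.
sentence = "The boy wants to go."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=512, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```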