# gbpatentdata/lt-patent-inventor-linking

This is a [LinkTransformer](https://linktransformer.github.io/) model. At its core, it is a [sentence-transformers](https://www.SBERT.net) model; it simply wraps around the `SentenceTransformer` class.

Take a look at the documentation of [sentence-transformers](https://www.sbert.net/index.html) if you want to use this model for more than what we support in our applications.

This model has been fine-tuned from the base model `sentence-transformers/all-mpnet-base-v2`. It is trained for the language: `en`.
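
Because it is ultimately a sentence-transformers model, you can also use it directly for embedding or similarity tasks outside of LinkTransformer. A minimal sketch, assuming the standard `sentence-transformers` API (the example records and the `' | '` field separator are invented for illustration):

```python
from sentence_transformers import SentenceTransformer, util

# Load the checkpoint as a plain sentence-transformers model
model = SentenceTransformer('gbpatentdata/lt-patent-inventor-linking')

# Invented inventor records; real inputs would combine the fields you match on
records = [
    'john smith | engineer | 1890 | leeds | smith & co | improvements in looms',
    'j. smith | civil engineer | 1891 | leeds | smith and co. | improvements in looms',
]

embeddings = model.encode(records)                 # array of shape (2, 768)
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the two records
```
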
## Usage (LinkTransformer)

After installing the [LinkTransformer](https://linktransformer.github.io/) package, you can use the model like this:

```python
import linktransformer as lt
import pandas as pd

df_lm_matched = lt.cluster_rows(
    df,
    model='gbpatentdata/lt-patent-inventor-linking',
    on=['name', 'occupation', 'year', 'address', 'firm', 'patent_title'],
    cluster_type='SLINK',
    cluster_params={'threshold': 0.1, 'min cluster size': 1, 'metric': 'cosine'}
)
```
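
The `df` above is any pandas dataframe containing the columns named in `on`. A toy input, with values invented for the sketch:

```python
# Hypothetical records for two mentions that plausibly refer to one inventor
df = pd.DataFrame({
    'name': ['john smith', 'j. smith'],
    'occupation': ['engineer', 'civil engineer'],
    'year': [1890, 1891],
    'address': ['leeds', 'leeds'],
    'firm': ['smith & co', 'smith and co.'],
    'patent_title': ['improvements in looms', 'improvements in looms'],
})
```

The returned dataframe should carry the original rows plus a cluster assignment; rows sharing a cluster id are the mentions the model links to the same inventor. Under SLINK with cosine distance, a smaller `threshold` merges only closer records.
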
## Evaluation
We evaluate using the standard [LinkTransformer](https://github.com/dell-research-harvard/linktransformer) information retrieval metrics. Our test set evaluations are available [here](https://huggingface.co/gbpatentdata/lt-patent-inventor-linking/blob/main/Information-Retrieval_evaluation_test_results.csv).
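
For a quick look at those numbers, the CSV can be read straight from the Hub. A sketch, assuming the usual `resolve/main` raw-file URL pattern for the file linked above:

```python
import pandas as pd

# Assumed raw-file URL, derived from the blob link by swapping in `resolve`
url = ('https://huggingface.co/gbpatentdata/lt-patent-inventor-linking/'
       'resolve/main/Information-Retrieval_evaluation_test_results.csv')

eval_df = pd.read_csv(url)
print(eval_df.tail())  # accuracy@k, recall@k, MRR, MAP, etc. per evaluation step
```
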
## Training

Parameters of the fit()-Method:

```
{
    ...
    "weight_decay": 0.01
}
```

```
LinkTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
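
Since the architecture is just an MPNet encoder followed by mean pooling, the embeddings can be reproduced by hand. A sketch, assuming the checkpoint loads with the standard `transformers` Auto classes (the input string is invented):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('gbpatentdata/lt-patent-inventor-linking')
encoder = AutoModel.from_pretrained('gbpatentdata/lt-patent-inventor-linking')

batch = tokenizer(['john smith | engineer | leeds'], padding=True,
                  truncation=True, max_length=384, return_tensors='pt')
with torch.no_grad():
    out = encoder(**batch)

# Mean pooling over non-padding tokens (pooling_mode_mean_tokens=True above)
mask = batch['attention_mask'].unsqueeze(-1).float()
embedding = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)  # expected: torch.Size([1, 768])
```
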
## Citation
If you use our model or custom training/evaluation data in your research, please cite our accompanying paper as follows:

```
@article{bct2025,
  title   = {300 Years of British Patents},
  author  = {Enrico Berkes and Matthew Lee Chen and Matteo Tranchero},
  journal = {arXiv preprint arXiv:2401.12345},
  year    = {2025},
  url     = {https://arxiv.org/abs/2401.12345}
}
```

Please also cite the original LinkTransformer authors:

```
@misc{arora2023linktransformer,
  title         = {LinkTransformer: A Unified Package for Record Linkage with Transformer Language Models},
  author        = {Abhishek Arora and Melissa Dell},
  year          = {2023},
  eprint        = {2309.00789},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL}
}
```