This model contains the encoder weights extracted from https://huggingface.co/dyyyyyyyy/GNER-T5-xxl, quantized to fp16 / fp8 / Q8. It can be used as a drop-in replacement for the T5-xxl model when it serves as the text_encoder (alongside CLIP).
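As a rough sketch of how an fp16 encoder-only checkpoint can be obtained and used for text encoding with transformers' T5EncoderModel (the output path below is illustrative, and the fp8 / Q8 GGUF variants require a separate conversion step not shown here):

import torch
from transformers import AutoTokenizer, T5EncoderModel

# Load only the encoder from the full GNER-T5-xxl checkpoint in fp16;
# the decoder weights in the checkpoint are simply ignored.
tokenizer = AutoTokenizer.from_pretrained("dyyyyyyyy/GNER-T5-xxl")
encoder = T5EncoderModel.from_pretrained("dyyyyyyyy/GNER-T5-xxl", torch_dtype=torch.float16).cuda().eval()

# Use the encoder as a text_encoder: the last hidden state is the text embedding sequence.
inputs = tokenizer("a photo of a cat", return_tensors="pt").to("cuda")
with torch.no_grad():
    text_embeddings = encoder(**inputs).last_hidden_state
print(text_embeddings.shape)  # (1, sequence_length, 4096) for the xxl encoder

# The encoder-only fp16 weights can then be saved on their own (illustrative path).
encoder.save_pretrained("./GNER-T5-xxl-encoder-fp16")
tokenizer.save_pretrained("./GNER-T5-xxl-encoder-fp16")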

The following content is quoted from the official model card.

Rethinking Negative Instances for Generative Named Entity Recognition

Model Card for GNER-T5-xxl

We introduce GNER, a Generative Named Entity Recognition framework, which demonstrates enhanced zero-shot capabilities across unseen entity domains. Experiments on two representative generative models, i.e., LLaMA and Flan-T5, show that the integration of negative instances into the training process yields substantial performance enhancements. The resulting models, GNER-LLaMA and GNER-T5, outperform state-of-the-art (SoTA) approaches by a large margin, achieving improvements of 8 and 11 points in $F_1$ score, respectively. Code and models are publicly available.

PreTrained Models

We release five GNER models based on LLaMA (7B) and Flan-T5 (base, large, xl and xxl).

| Model | # Params | Zero-shot Average $F_1$ | Supervised Average $F_1$ | 🤗 HuggingFace Download Link |
|---|---|---|---|---|
| GNER-LLaMA | 7B | 66.1 | 86.09 | link |
| GNER-T5-base | 248M | 59.5 | 83.21 | link |
| GNER-T5-large | 783M | 63.5 | 85.45 | link |
| GNER-T5-xl | 3B | 66.1 | 85.94 | link |
| GNER-T5-xxl | 11B | 69.1 | 86.15 | link |

Demo usage

Install the required dependencies:

pip install torch datasets deepspeed accelerate transformers protobuf

Please check out Example Jupyter Notebooks for guidance on utilizing GNER models.

A simple inference example using GNER-T5 is as follows:

>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("dyyyyyyyy/GNER-T5-xxl")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("dyyyyyyyy/GNER-T5-xxl", torch_dtype=torch.bfloat16).cuda()
>>> model = model.eval()
>>> instruction_template = "Please analyze the sentence provided, identifying the type of entity for each word on a token-by-token basis.\nOutput format is: word_1(label_1), word_2(label_2), ...\nWe'll use the BIO-format to label the entities, where:\n1. B- (Begin) indicates the start of a named entity.\n2. I- (Inside) is used for words within a named entity but are not the first word.\n3. O (Outside) denotes words that are not part of a named entity.\n"
>>> sentence = "did george clooney make a musical in the 1980s"
>>> entity_labels = ["genre", "rating", "review", "plot", "song", "average ratings", "director", "character", "trailer", "year", "actor", "title"]
>>> instruction = f"{instruction_template}\nUse the specific entity tags: {', '.join(entity_labels)} and O.\nSentence: {sentence}"
>>> inputs = tokenizer(instruction, return_tensors="pt").to("cuda")
>>> outputs = model.generate(**inputs, max_new_tokens=640)
>>> response = tokenizer.decode(outputs[0], skip_special_tokens=True)
>>> print(response)
"did(O) george(B-actor) clooney(I-actor) make(O) a(O) musical(B-genre) in(O) the(O) 1980s(B-year)"

Citation

@misc{ding2024rethinking,
      title={Rethinking Negative Instances for Generative Named Entity Recognition}, 
      author={Yuyang Ding and Juntao Li and Pinzheng Wang and Zecheng Tang and Bowen Yan and Min Zhang},
      year={2024},
      eprint={2402.16602},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}