PISCO
PISCO models are context compression models for RAG. They are intended as plug-in replacements for RAG systems, with about x5 faster inference.
PISCO is a context compression model for efficient inference in Retrieval-Augmented Generation (RAG), particularly optimized for question answering.
PISCO wraps a backbone LLM with two adapters: a compression adapter that encodes each document into a small set of memory-token embeddings, and a decoding adapter that generates answers from those compressed representations.
Once the collection of documents to retrieve from has been compressed, inference becomes about x5 faster, with only a small loss in accuracy (0-3%) across a wide set of QA benchmarks.
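As a rough sketch of where the speedup comes from: each document is compressed into a small set of memory tokens, so the decoder sees far fewer document tokens. The arithmetic below assumes 10 retrieved documents of about 128 tokens each and a x16 compression rate (128 tokens into 8 memory tokens); these numbers are assumptions for illustration, not a benchmark.

# Illustrative arithmetic only; document count and compression rate are assumptions
n_docs, tokens_per_doc, rate = 10, 128, 16
plain = n_docs * tokens_per_doc      # 1280 document tokens in the uncompressed prompt
compressed = plain // rate           # 80 memory tokens after compression
print(f"document tokens in prompt: {plain} -> {compressed}")
# Shorter prefill and a smaller KV cache over the documents are what drive
# the ~x5 end-to-end speedup quoted above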
Developed by: Naver Labs Europe
License: CC BY-NC 4.0.
Pisco-solar
from transformers import AutoModel

# PISCO models ship custom modeling code, so trust_remote_code is needed
pisco = AutoModel.from_pretrained('naver/pisco-solar', trust_remote_code=True).to('cuda')
# Example documents and question:
documents = [
[
"Weldenia is a monotypic genus of flowering plant in the family Commelinaceae, first describ ed in 1829. It has one single species: Weldenia candida, which grows originally in Mexico and Guatemala.",
"Hagsatera is a genus of flowering plants from the orchid family, Orchidaceae. There are two known species, native to Mexico and Guatemala",
"Alsobia is a genus of flowering plants in the family Gesneriaceae, native to Mexico, Guatemala and Costa Rica. The two species are succulent, stoloniferous herbs and were previously included in the genus \"Episcia\". Recent molecular studies have supported the separation of \"Alsobia\" from \"Episcia\""
]
]
questions = ["Which genus of plant grows originally in Mexico and Guatemala, Phylica or Weldenia?"]
# End-to-end usage
out = pisco.generate_from_text(questions=questions, documents=documents, max_new_tokens=64)
print('Generated answer', out)
# Document compression:
embeddings = pisco.compress_documents(documents=documents[0])
# Generation from compressed documents:
out = pisco.generate_from_compressed_documents_and_questions(questions=questions, compressed_documents=embeddings)
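Because compression does not depend on the question, compressed documents can be computed once and reused across many queries. A minimal sketch of that pattern, assuming the API above; the torch.save/torch.load cache is an illustrative choice, not part of the PISCO API:

import torch

# Compress the collection once and cache it to disk (illustrative cache)
embeddings = pisco.compress_documents(documents=documents[0])
torch.save(embeddings, 'collection_embeddings.pt')

# Later, possibly in another process: reload and answer several questions
embeddings = torch.load('collection_embeddings.pt')
for question in questions:
    out = pisco.generate_from_compressed_documents_and_questions(
        questions=[question], compressed_documents=embeddings
    )
    print(question, '->', out)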
The recommended usage is to provide documents cropped to about 128 tokens, as is common practice in RAG; a cropping sketch follows.
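One way to do that cropping with a Hugging Face tokenizer, as a sketch; using the SOLAR backbone tokenizer here is an assumption, and any tokenizer with truncation support works the same way:

from transformers import AutoTokenizer

# Crop each passage to ~128 tokens before compression (tokenizer choice is an assumption)
tokenizer = AutoTokenizer.from_pretrained('upstage/SOLAR-10.7B-v1.0')
cropped = [
    tokenizer.decode(
        tokenizer(doc, truncation=True, max_length=128, add_special_tokens=False)['input_ids']
    )
    for doc in documents[0]
]
embeddings = pisco.compress_documents(documents=cropped)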
This work is licensed under CC BY-NC 4.0.
TODO
Model trained at Naver Labs Europe
Team:
Base model: upstage/SOLAR-10.7B-v1.0