MIT License

Copyright (c) 2024 Hen-Hsen Huang, Brian J Chan, Chao-Ting Chen and Jui-Hung Cheng

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# Cache-Augmented Generation (CAG)
Retrieval-Augmented Generation (RAG) has emerged as a powerful approach for enhancing language models by integrating external knowledge sources. However, RAG also introduces several challenges, including:

- **Retrieval Latency** – Delays caused by real-time retrieval steps.
- **Retrieval Errors** – Inaccuracies in selecting relevant documents.
- **System Complexity** – Increased architectural and maintenance overhead.
To address these limitations, we propose **Cache-Augmented Generation (CAG)**—an alternative paradigm that bypasses real-time retrieval. CAG leverages the extended context windows of modern large language models (LLMs) by preloading all relevant resources into the model’s context and caching its runtime parameters. During inference, the preloaded KV-cache enables the model to generate responses directly, eliminating the need for retrieval.
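Conceptually, this takes only a few lines with Hugging Face `transformers`. The sketch below is a minimal illustration of the idea, not the repo's actual `kvcache.py` implementation; the file `knowledge.txt` and the prompt wording are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Preload the whole knowledge source once and cache its KV states.
# "knowledge.txt" stands in for your preloaded documents.
knowledge = open("knowledge.txt").read()
prefix = tokenizer(
    f"Answer questions using only these documents:\n{knowledge}\n",
    return_tensors="pt",
).to(model.device)
with torch.no_grad():
    kv_cache = model(**prefix, use_cache=True).past_key_values

# At query time, append the question and reuse the cache:
# no retrieval step, and the knowledge tokens are never re-encoded.
question = tokenizer(
    "Question: ...\nAnswer:", return_tensors="pt", add_special_tokens=False
).to(model.device)
input_ids = torch.cat([prefix.input_ids, question.input_ids], dim=-1)
output = model.generate(
    input_ids,
    attention_mask=torch.ones_like(input_ids),
    past_key_values=kv_cache,
    max_new_tokens=128,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Note that `generate` appends the answer tokens to the cache, so when serving multiple questions the cache must be truncated back to the knowledge-only length between queries.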
**Advantages of CAG**

- **Reduced Latency** – Eliminates real-time retrieval, enabling faster inference.
- **Improved Reliability** – Minimizes retrieval errors while maintaining context relevance.
- **Simplified Design** – Provides a streamlined, retrieval-free alternative to RAG, achieving comparable or superior results with lower complexity.

**Limitations of CAG**

- **Limited Knowledge Size** – CAG requires the entire knowledge source to fit within the context window, making it less suitable for tasks involving extremely large datasets.
- **Context Length Constraints** – The performance of LLMs may degrade with very long contexts ([reference](https://arxiv.org/pdf/2404.02060v2)).

Our [paper](https://arxiv.org/abs/2412.15605) investigates the relationship between model performance and context length, providing insights into scenarios where CAG excels.
The limitations of CAG are rapidly being addressed by advancements in LLMs with longer context windows and improved capabilities for extracting relevant information from extended inputs. As these models continue to evolve, **CAG** is expected to handle increasingly complex applications, making it a practical and scalable alternative to traditional RAG.
---
## Installation
```bash
pip install -r ./requirements.txt
```
## Preparation
> [!IMPORTANT]
> Download the required `squad` and `hotpotqa` datasets with the curl script:
> ```bash
> sh ./downloads.sh
> ```

> [!IMPORTANT]
> Create a `.env` file from `.env.template` and fill in the required keys:
> ```bash
> cp ./.env.template ./.env
> ```
## Usage
- `rag.py` runs the RAG experiments.
- `kvcache.py` runs the CAG experiments.
## Parameter Usage -- kvcache.py
- `--kvcache`: "file"
- `--dataset`: "hotpotqa-train" or "squad-train"
- `--similarity`: "bertscore"
- `--modelname`: "meta-llama/Llama-3.1-8B-Instruct"
- `--maxKnowledge`: int, number of documents to select from the dataset (see Note)
- `--maxParagraph`: 100
- `--maxQuestion`: int, maximum number of questions (see Note)
- `--randomSeed`: int, random seed
- `--output`: str, output file path
- `--usePrompt`: pass this flag to run without the CAG knowledge-cache acceleration
### Example -- kvcache.py
```bash
python ./kvcache.py --kvcache file --dataset "squad-train" --similarity bertscore \
    --maxKnowledge 5 --maxParagraph 100 --maxQuestion 1000 \
    --modelname "meta-llama/Llama-3.1-8B-Instruct" --randomSeed 0 \
    --output "./result_kvcache.txt"
```
## Parameter Usage -- rag.py
- `--index`: "openai" or "bm25"
- `--dataset`: "hotpotqa-train" or "squad-train"
- `--similarity`: "bertscore"
- `--maxKnowledge`: int, number of documents to select from the dataset (see Note)
- `--maxParagraph`: 100
- `--maxQuestion`: int, maximum number of questions (see Note)
- `--topk`: int, number of top-k passages to retrieve by similarity
- `--modelname`: "meta-llama/Llama-3.1-8B-Instruct"
- `--randomSeed`: int, random seed
- `--output`: str, output file path
### Example -- rag.py
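An invocation analogous to the `kvcache.py` example above (the parameter values shown are illustrative):

```bash
python ./rag.py --index bm25 --dataset "squad-train" --similarity bertscore \
    --maxKnowledge 5 --maxParagraph 100 --maxQuestion 1000 --topk 3 \
    --modelname "meta-llama/Llama-3.1-8B-Instruct" --randomSeed 0 \
    --output "./result_rag.txt"
```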