samuel-z-chen committed
Commit 789c796
2 Parent(s): 90583bd 27408d4

Merge remote-tracking branch 'origin/main'
LICENSE CHANGED
@@ -1,6 +1,6 @@
  MIT License

- Copyright (c) 2025 Gustavo Ye
+ Copyright (c) 2025 LarFii

  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files (the "Software"), to deal
README.md CHANGED
@@ -12,7 +12,7 @@
  </p>
  <p>
  <img src='https://img.shields.io/github/stars/hkuds/lightrag?color=green&style=social' />
- <img src="https://img.shields.io/badge/python->=3.10-blue">
+ <img src="https://img.shields.io/badge/python-3.10-blue">
  <a href="https://pypi.org/project/lightrag-hku/"><img src="https://img.shields.io/pypi/v/lightrag-hku.svg"></a>
  <a href="https://pepy.tech/project/lightrag-hku"><img src="https://static.pepy.tech/badge/lightrag-hku/month"></a>
  </p>
@@ -637,7 +637,7 @@ if __name__ == "__main__":
  | **llm\_model\_kwargs** | `dict` | Additional parameters for LLM generation | |
  | **vector\_db\_storage\_cls\_kwargs** | `dict` | Additional parameters for vector database (currently not used) | |
  | **enable\_llm\_cache** | `bool` | If `TRUE`, stores LLM results in cache; repeated prompts return cached responses | `TRUE` |
- | **enable\_llm\_cache\_for\_entity\_extract** | `bool` | If `TRUE`, stores LLM results in cache for entity extraction; Good for beginners to debug your application | `FALSE` |
+ | **enable\_llm\_cache\_for\_entity\_extract** | `bool` | If `TRUE`, stores LLM results in cache for entity extraction; Good for beginners to debug your application | `TRUE` |
  | **addon\_params** | `dict` | Additional parameters, e.g., `{"example_number": 1, "language": "Simplified Chinese", "entity_types": ["organization", "person", "geo", "event"], "insert_batch_size": 10}`: sets example limit, output language, and batch size for document processing | `example_number: all examples, language: English, insert_batch_size: 10` |
  | **convert\_response\_to\_json\_func** | `callable` | Not used | `convert_response_to_json` |
  | **embedding\_cache\_config** | `dict` | Configuration for question-answer caching. Contains three parameters:<br>- `enabled`: Boolean value to enable/disable cache lookup functionality. When enabled, the system will check cached responses before generating new answers.<br>- `similarity_threshold`: Float value (0-1), similarity threshold. When a new question's similarity with a cached question exceeds this threshold, the cached answer will be returned directly without calling the LLM.<br>- `use_llm_check`: Boolean value to enable/disable LLM similarity verification. When enabled, LLM will be used as a secondary check to verify the similarity between questions before returning cached answers. | Default: `{"enabled": False, "similarity_threshold": 0.95, "use_llm_check": False}` |
@@ -892,69 +892,6 @@ def extract_queries(file_path):
  ```
  </details>

- ## Code Structure
-
- ```python
- .
- ├── .github/
- │ ├── workflows/
- │ │ └── linting.yaml
- ├── examples/
- │ ├── batch_eval.py
- │ ├── generate_query.py
- │ ├── graph_visual_with_html.py
- │ ├── graph_visual_with_neo4j.py
- │ ├── insert_custom_kg.py
- │ ├── lightrag_api_openai_compatible_demo.py
- │ ├── lightrag_api_oracle_demo..py
- │ ├── lightrag_azure_openai_demo.py
- │ ├── lightrag_bedrock_demo.py
- │ ├── lightrag_hf_demo.py
- │ ├── lightrag_lmdeploy_demo.py
- │ ├── lightrag_nvidia_demo.py
- │ ├── lightrag_ollama_demo.py
- │ ├── lightrag_openai_compatible_demo.py
- │ ├── lightrag_openai_demo.py
- │ ├── lightrag_oracle_demo.py
- │ ├── lightrag_siliconcloud_demo.py
- │ └── vram_management_demo.py
- ├── lightrag/
- │ ├── api/
- │ │ ├── lollms_lightrag_server.py
- │ │ ├── ollama_lightrag_server.py
- │ │ ├── openai_lightrag_server.py
- │ │ ├── azure_openai_lightrag_server.py
- │ │ └── requirements.txt
- │ ├── kg/
- │ │ ├── __init__.py
- │ │ ├── oracle_impl.py
- │ │ └── neo4j_impl.py
- │ ├── __init__.py
- │ ├── base.py
- │ ├── lightrag.py
- │ ├── llm.py
- │ ├── operate.py
- │ ├── prompt.py
- │ ├── storage.py
- │ └── utils.py
- ├── reproduce/
- │ ├── Step_0.py
- │ ├── Step_1_openai_compatible.py
- │ ├── Step_1.py
- │ ├── Step_2.py
- │ ├── Step_3_openai_compatible.py
- │ └── Step_3.py
- ├── .gitignore
- ├── .pre-commit-config.yaml
- ├── get_all_edges_nx.py
- ├── LICENSE
- ├── README.md
- ├── requirements.txt
- ├── setup.py
- ├── test_neo4j.py
- └── test.py
- ```
-
  ## Install with API Support

  LightRAG provides optional API support through FastAPI servers that add RAG capabilities to existing LLM services. You can install LightRAG with API support in two ways:
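
The `embedding_cache_config` row in the hunk above describes similarity-gated answer caching: a cached answer is reused only when the new question's embedding similarity to a cached question clears `similarity_threshold` (default 0.95). As a rough illustration of that lookup logic (not LightRAG's actual implementation; the helper names and the toy 2-dim embeddings are invented):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def cached_answer(query_emb, cache, similarity_threshold=0.95):
    """Return the cached answer whose question embedding is most similar
    to the new query, but only if it clears the threshold; else None
    (meaning: fall through and call the LLM)."""
    best, best_sim = None, -1.0
    for entry_emb, answer in cache:
        sim = cosine_similarity(query_emb, entry_emb)
        if sim > best_sim:
            best, best_sim = answer, sim
    return best if best_sim > similarity_threshold else None
```

With `use_llm_check` enabled, LightRAG additionally asks the LLM to confirm the two questions really match before returning the cached answer; the sketch above covers only the threshold gate.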
contributor-readme.MD → contributor-README.md RENAMED
File without changes
examples/test_split_by_character.ipynb ADDED
@@ -0,0 +1,1296 @@
+ {
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "4b5690db12e34685",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2025-01-09T03:40:58.307102Z",
+ "start_time": "2025-01-09T03:40:51.935233Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import logging\n",
+ "import numpy as np\n",
+ "from lightrag import LightRAG, QueryParam\n",
+ "from lightrag.llm import openai_complete_if_cache, openai_embedding\n",
+ "from lightrag.utils import EmbeddingFunc\n",
+ "import nest_asyncio"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "dd17956ec322b361",
+ "metadata": {},
+ "source": "#### split by character"
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "8c8ee7c061bf9159",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2025-01-09T03:41:13.961167Z",
+ "start_time": "2025-01-09T03:41:13.958357Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "nest_asyncio.apply()\n",
+ "WORKING_DIR = \"../../llm_rag/paper_db/R000088_test1\"\n",
+ "logging.basicConfig(format=\"%(levelname)s:%(message)s\", level=logging.INFO)\n",
+ "if not os.path.exists(WORKING_DIR):\n",
+ " os.mkdir(WORKING_DIR)\n",
+ "API = os.environ.get(\"DOUBAO_API_KEY\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "a5009d16e0851dca",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2025-01-09T03:41:16.862036Z",
+ "start_time": "2025-01-09T03:41:16.859306Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "async def llm_model_func(\n",
+ " prompt, system_prompt=None, history_messages=[], keyword_extraction=False, **kwargs\n",
+ ") -> str:\n",
+ " return await openai_complete_if_cache(\n",
+ " \"ep-20241218114828-2tlww\",\n",
+ " prompt,\n",
+ " system_prompt=system_prompt,\n",
+ " history_messages=history_messages,\n",
+ " api_key=API,\n",
+ " base_url=\"https://ark.cn-beijing.volces.com/api/v3\",\n",
+ " **kwargs,\n",
+ " )\n",
+ "\n",
+ "\n",
+ "async def embedding_func(texts: list[str]) -> np.ndarray:\n",
+ " return await openai_embedding(\n",
+ " texts,\n",
+ " model=\"ep-20241231173413-pgjmk\",\n",
+ " api_key=API,\n",
+ " base_url=\"https://ark.cn-beijing.volces.com/api/v3\",\n",
+ " )"
+ ]
+ },
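
The `openai_complete_if_cache` wrapper used in the cell above pairs each completion call with LightRAG's LLM cache (the `enable_llm_cache` option from the README). A toy sketch of the prompt-keyed caching idea, with an invented stub standing in for a real model call:

```python
import hashlib

# In-memory stand-in for LightRAG's llm_response_cache KV store.
_cache: dict[str, str] = {}


def complete_if_cache(model_call, prompt: str) -> str:
    """Return a cached completion for an identical prompt; otherwise call
    the model once and cache the result under a hash of the prompt."""
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = model_call(prompt)
    return _cache[key]
```

The real wrapper is async and also keys on the system prompt and history; this sketch only shows why a repeated prompt costs zero extra LLM calls.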
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "397fcad24ce4d0ed",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2025-01-09T03:41:24.950307Z",
+ "start_time": "2025-01-09T03:41:24.940353Z"
+ }
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "INFO:lightrag:Logger initialized for working directory: ../../llm_rag/paper_db/R000088_test1\n",
+ "INFO:lightrag:Load KV llm_response_cache with 0 data\n",
+ "INFO:lightrag:Load KV full_docs with 0 data\n",
+ "INFO:lightrag:Load KV text_chunks with 0 data\n",
+ "INFO:nano-vectordb:Init {'embedding_dim': 4096, 'metric': 'cosine', 'storage_file': '../../llm_rag/paper_db/R000088_test1/vdb_entities.json'} 0 data\n",
+ "INFO:nano-vectordb:Init {'embedding_dim': 4096, 'metric': 'cosine', 'storage_file': '../../llm_rag/paper_db/R000088_test1/vdb_relationships.json'} 0 data\n",
+ "INFO:nano-vectordb:Init {'embedding_dim': 4096, 'metric': 'cosine', 'storage_file': '../../llm_rag/paper_db/R000088_test1/vdb_chunks.json'} 0 data\n",
+ "INFO:lightrag:Loaded document status storage with 0 records\n"
+ ]
+ }
+ ],
+ "source": [
+ "rag = LightRAG(\n",
+ " working_dir=WORKING_DIR,\n",
+ " llm_model_func=llm_model_func,\n",
+ " embedding_func=EmbeddingFunc(\n",
+ " embedding_dim=4096, max_token_size=8192, func=embedding_func\n",
+ " ),\n",
+ " chunk_token_size=512,\n",
+ ")"
+ ]
+ },
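
`chunk_token_size=512` in the cell above bounds the size of each chunk fed to embedding and entity extraction. A minimal sketch of fixed-size chunking over an already-tokenized list (LightRAG's real splitter works on tiktoken token ids and supports overlap; the names here are illustrative):

```python
def chunk_by_token_size(tokens: list, chunk_token_size: int = 512,
                        overlap_token_size: int = 0) -> list:
    """Split a token list into fixed-size windows.

    With overlap_token_size > 0, consecutive windows share that many
    tokens, so entities straddling a boundary appear in both chunks.
    """
    step = chunk_token_size - overlap_token_size
    return [tokens[i:i + chunk_token_size] for i in range(0, len(tokens), step)]
```

For a 1000-token document and the 512-token setting above, this yields two chunks (512 and 488 tokens); adding a 64-token overlap yields three.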
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "1dc3603677f7484d",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2025-01-09T03:41:37.947456Z",
+ "start_time": "2025-01-09T03:41:37.941901Z"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "with open(\n",
+ " \"../../llm_rag/example/R000088/auto/R000088_full_txt.md\", \"r\", encoding=\"utf-8\"\n",
+ ") as f:\n",
+ " content = f.read()\n",
+ "\n",
+ "\n",
+ "async def embedding_func(texts: list[str]) -> np.ndarray:\n",
+ " return await openai_embedding(\n",
+ " texts,\n",
+ " model=\"ep-20241231173413-pgjmk\",\n",
+ " api_key=API,\n",
+ " base_url=\"https://ark.cn-beijing.volces.com/api/v3\",\n",
+ " )\n",
+ "\n",
+ "\n",
+ "async def get_embedding_dim():\n",
+ " test_text = [\"This is a test sentence.\"]\n",
+ " embedding = await embedding_func(test_text)\n",
+ " embedding_dim = embedding.shape[1]\n",
+ " return embedding_dim"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "6844202606acfbe5",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2025-01-09T03:41:39.608541Z",
+ "start_time": "2025-01-09T03:41:39.165057Z"
+ }
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n"
+ ]
+ }
+ ],
+ "source": [
+ "embedding_dimension = await get_embedding_dim()"
+ ]
+ },
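
The `get_embedding_dim` probe in the cells above generalizes to any backend: embed a throwaway sentence once and read the vector width, instead of hard-coding the dimension. A self-contained sketch with a dummy embedding function standing in for `openai_embedding` (the 4-dim vectors are purely illustrative):

```python
import asyncio


async def embedding_func(texts: list[str]) -> list[list[float]]:
    # Stand-in for a real embedding API call: one fixed-width vector
    # per input text (hypothetical 4-dim embeddings).
    return [[0.0, 0.1, 0.2, 0.3] for _ in texts]


async def get_embedding_dim() -> int:
    # Embed a throwaway sentence and read off the vector width.
    embedding = await embedding_func(["This is a test sentence."])
    return len(embedding[0])


dim = asyncio.run(get_embedding_dim())
```

The probed value is what the notebook would pass as `embedding_dim` when constructing `EmbeddingFunc`, keeping the vector stores in sync with the model actually deployed.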
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "d6273839d9681403",
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2025-01-09T03:44:34.295345Z",
+ "start_time": "2025-01-09T03:41:48.324171Z"
+ }
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "INFO:lightrag:Processing 1 new unique documents\n",
+ [cell output continues: embedding-batch progress bars, INFO:httpx request logs, and "Extracting entities from chunks" progress lines, reaching 27/35 chunks at the point this diff view is truncated]
642
+ ]
643
+ },
644
+ {
645
+ "name": "stdout",
646
+ "output_type": "stream",
647
+ "text": [
648
+ "⠇ Processed 28 chunks, 333 entities(duplicated), 290 relations(duplicated)\r"
649
+ ]
650
+ },
651
+ {
652
+ "name": "stderr",
653
+ "output_type": "stream",
654
+ "text": [
655
+ "\n",
656
+ "Extracting entities from chunks: 80%|████████ | 28/35 [01:52<00:28, 4.08s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
657
+ ]
658
+ },
659
+ {
660
+ "name": "stdout",
661
+ "output_type": "stream",
662
+ "text": [
663
+ "⠏ Processed 29 chunks, 348 entities(duplicated), 307 relations(duplicated)\r"
664
+ ]
665
+ },
666
+ {
667
+ "name": "stderr",
668
+ "output_type": "stream",
669
+ "text": [
670
+ "\n",
671
+ "Extracting entities from chunks: 83%|████████▎ | 29/35 [01:59<00:29, 4.88s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n",
672
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
673
+ ]
674
+ },
675
+ {
676
+ "name": "stdout",
677
+ "output_type": "stream",
678
+ "text": [
679
+ "⠋ Processed 30 chunks, 362 entities(duplicated), 329 relations(duplicated)\r"
680
+ ]
681
+ },
682
+ {
683
+ "name": "stderr",
684
+ "output_type": "stream",
685
+ "text": [
686
+ "\n",
687
+ "Extracting entities from chunks: 86%|████████▌ | 30/35 [02:02<00:21, 4.29s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
688
+ ]
689
+ },
690
+ {
691
+ "name": "stdout",
692
+ "output_type": "stream",
693
+ "text": [
694
+ "⠙ Processed 31 chunks, 373 entities(duplicated), 337 relations(duplicated)\r"
695
+ ]
696
+ },
697
+ {
698
+ "name": "stderr",
699
+ "output_type": "stream",
700
+ "text": [
701
+ "\n",
702
+ "Extracting entities from chunks: 89%|████████▊ | 31/35 [02:03<00:13, 3.28s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
703
+ ]
704
+ },
705
+ {
706
+ "name": "stdout",
707
+ "output_type": "stream",
708
+ "text": [
709
+ "⠹ Processed 32 chunks, 390 entities(duplicated), 369 relations(duplicated)\r"
710
+ ]
711
+ },
712
+ {
713
+ "name": "stderr",
714
+ "output_type": "stream",
715
+ "text": [
716
+ "\n",
717
+ "Extracting entities from chunks: 91%|█████████▏| 32/35 [02:03<00:07, 2.55s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
718
+ ]
719
+ },
720
+ {
721
+ "name": "stdout",
722
+ "output_type": "stream",
723
+ "text": [
724
+ "⠸ Processed 33 chunks, 405 entities(duplicated), 378 relations(duplicated)\r"
725
+ ]
726
+ },
727
+ {
728
+ "name": "stderr",
729
+ "output_type": "stream",
730
+ "text": [
731
+ "\n",
732
+ "Extracting entities from chunks: 94%|█████████▍| 33/35 [02:07<00:05, 2.84s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
733
+ ]
734
+ },
735
+ {
736
+ "name": "stdout",
737
+ "output_type": "stream",
738
+ "text": [
739
+ "⠼ Processed 34 chunks, 435 entities(duplicated), 395 relations(duplicated)\r"
740
+ ]
741
+ },
742
+ {
743
+ "name": "stderr",
744
+ "output_type": "stream",
745
+ "text": [
746
+ "\n",
747
+ "Extracting entities from chunks: 97%|█████████▋| 34/35 [02:10<00:02, 2.94s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
748
+ ]
749
+ },
750
+ {
751
+ "name": "stdout",
752
+ "output_type": "stream",
753
+ "text": [
754
+ "⠴ Processed 35 chunks, 456 entities(duplicated), 440 relations(duplicated)\r"
755
+ ]
756
+ },
757
+ {
758
+ "name": "stderr",
759
+ "output_type": "stream",
760
+ "text": [
761
+ "\n",
762
+ "Extracting entities from chunks: 100%|██████████| 35/35 [02:23<00:00, 4.10s/chunk]\u001b[A\n",
763
+ "INFO:lightrag:Inserting entities into storage...\n",
764
+ "\n",
765
+ "Inserting entities: 100%|██████████| 324/324 [00:00<00:00, 17456.96entity/s]\n",
766
+ "INFO:lightrag:Inserting relationships into storage...\n",
767
+ "\n",
768
+ "Inserting relationships: 100%|██████████| 427/427 [00:00<00:00, 29956.31relationship/s]\n",
769
+ "INFO:lightrag:Inserting 324 vectors to entities\n",
770
+ "\n",
771
+ "Generating embeddings: 0%| | 0/11 [00:00<?, ?batch/s]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
772
+ "\n",
773
+ "Generating embeddings: 9%|▉ | 1/11 [00:00<00:06, 1.48batch/s]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
774
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
775
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
776
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
777
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
778
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
779
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
780
+ "\n",
781
+ "Generating embeddings: 18%|█▊ | 2/11 [00:02<00:11, 1.25s/batch]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
782
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
783
+ "\n",
784
+ "Generating embeddings: 27%|██▋ | 3/11 [00:02<00:06, 1.17batch/s]\u001b[A\n",
785
+ "Generating embeddings: 36%|███▋ | 4/11 [00:03<00:04, 1.50batch/s]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
786
+ "\n",
787
+ "Generating embeddings: 45%|████▌ | 5/11 [00:03<00:03, 1.78batch/s]\u001b[A\n",
788
+ "Generating embeddings: 55%|█████▍ | 6/11 [00:03<00:02, 2.01batch/s]\u001b[A\n",
789
+ "Generating embeddings: 64%|██████▎ | 7/11 [00:04<00:01, 2.19batch/s]\u001b[A\n",
790
+ "Generating embeddings: 73%|███████▎ | 8/11 [00:04<00:01, 2.31batch/s]\u001b[A\n",
791
+ "Generating embeddings: 82%|████████▏ | 9/11 [00:04<00:00, 2.41batch/s]\u001b[A\n",
792
+ "Generating embeddings: 91%|█████████ | 10/11 [00:05<00:00, 2.48batch/s]\u001b[A\n",
793
+ "Generating embeddings: 100%|██████████| 11/11 [00:05<00:00, 1.91batch/s]\u001b[A\n",
794
+ "INFO:lightrag:Inserting 427 vectors to relationships\n",
795
+ "\n",
796
+ "Generating embeddings: 0%| | 0/14 [00:00<?, ?batch/s]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
797
+ "\n",
798
+ "Generating embeddings: 7%|▋ | 1/14 [00:01<00:14, 1.11s/batch]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
799
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
800
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
801
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
802
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
803
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
804
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
805
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
806
+ "\n",
807
+ "Generating embeddings: 14%|█▍ | 2/14 [00:02<00:14, 1.18s/batch]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
808
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
809
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
810
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
811
+ "\n",
812
+ "Generating embeddings: 21%|██▏ | 3/14 [00:02<00:08, 1.23batch/s]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
813
+ "\n",
814
+ "Generating embeddings: 29%|██▊ | 4/14 [00:03<00:06, 1.56batch/s]\u001b[A\n",
815
+ "Generating embeddings: 36%|███▌ | 5/14 [00:03<00:04, 1.85batch/s]\u001b[A\n",
816
+ "Generating embeddings: 43%|████▎ | 6/14 [00:03<00:03, 2.05batch/s]\u001b[A\n",
817
+ "Generating embeddings: 50%|█████ | 7/14 [00:04<00:03, 2.23batch/s]\u001b[A\n",
818
+ "Generating embeddings: 57%|█████▋ | 8/14 [00:04<00:02, 2.37batch/s]\u001b[A\n",
819
+ "Generating embeddings: 64%|██████▍ | 9/14 [00:04<00:02, 2.46batch/s]\u001b[A\n",
820
+ "Generating embeddings: 71%|███████▏ | 10/14 [00:05<00:01, 2.54batch/s]\u001b[A\n",
821
+ "Generating embeddings: 79%|███████▊ | 11/14 [00:05<00:01, 2.59batch/s]\u001b[A\n",
822
+ "Generating embeddings: 86%|████████▌ | 12/14 [00:06<00:00, 2.64batch/s]\u001b[A\n",
823
+ "Generating embeddings: 93%|█████████▎| 13/14 [00:06<00:00, 2.65batch/s]\u001b[A\n",
824
+ "Generating embeddings: 100%|██████████| 14/14 [00:06<00:00, 2.05batch/s]\u001b[A\n",
825
+ "INFO:lightrag:Writing graph with 333 nodes, 427 edges\n",
826
+ "Processing batch 1: 100%|██████████| 1/1 [02:45<00:00, 165.90s/it]\n"
827
+ ]
828
+ }
829
+ ],
830
+ "source": [
831
+ "# rag.insert(content)\n",
832
+ "rag.insert(content, split_by_character=\"\\n#\")"
833
+ ]
834
+ },
835
+ {
836
+ "cell_type": "code",
837
+ "execution_count": 9,
838
+ "id": "c4f9ae517151a01d",
839
+ "metadata": {
840
+ "ExecuteTime": {
841
+ "end_time": "2025-01-09T03:45:11.668987Z",
842
+ "start_time": "2025-01-09T03:45:11.664744Z"
843
+ }
844
+ },
845
+ "outputs": [],
846
+ "source": [
847
+ "prompt1 = \"\"\"你是一名经验丰富的论文分析科学家,你的任务是对一篇英文学术研究论文进行关键信息提取并深入分析。\n",
848
+ "请按照以下步骤进行分析:\n",
849
+ "1. 该文献主要研究的问题是什么?\n",
850
+ "2. 该文献采用什么方法进行分析?\n",
851
+ "3. 该文献的主要结论是什么?\n",
852
+ "首先在<分析>标签中,针对每个问题详细分析你的思考过程。然后在<回答>标签中给出所有问题的最终答案。\"\"\""
853
+ ]
854
+ },
855
+ {
856
+ "cell_type": "code",
857
+ "execution_count": 10,
858
+ "id": "7a6491385b050095",
859
+ "metadata": {
860
+ "ExecuteTime": {
861
+ "end_time": "2025-01-09T03:45:40.829111Z",
862
+ "start_time": "2025-01-09T03:45:13.530298Z"
863
+ }
864
+ },
865
+ "outputs": [
866
+ {
867
+ "name": "stderr",
868
+ "output_type": "stream",
869
+ "text": [
870
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
871
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n",
872
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
873
+ "INFO:lightrag:Local query uses 5 entites, 12 relations, 3 text units\n",
874
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
875
+ "INFO:lightrag:Global query uses 8 entites, 5 relations, 4 text units\n",
876
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
877
+ ]
878
+ },
879
+ {
880
+ "name": "stdout",
881
+ "output_type": "stream",
882
+ "text": [
883
+ "<分析>\n",
884
+ "1. **该文献主要研究的问题是什么?**\n",
885
+ " - 思考过程:通过浏览论文内容,查找作者明确阐述研究目的的部分。文中多处提及“Our study was performed to explore whether folic acid treatment was associated with cancer outcomes and all-cause mortality after extended follow-up”,表明作者旨在探究叶酸治疗与癌症结局及全因死亡率之间的关系,尤其是在经过长期随访后。\n",
886
+ "2. **该文献采用什么方法进行分析?**\n",
887
+ " - 思考过程:寻找描述研究方法和数据分析过程的段落。文中提到“Survival curves were constructed using the Kaplan-Meier method and differences in survival between groups were analyzed using the log-rank test. Estimates of hazard ratios (HRs) with 95% CIs were obtained by using Cox proportional hazards regression models stratified by trial”,可以看出作者使用了Kaplan-Meier法构建生存曲线、log-rank检验分析组间生存差异以及Cox比例风险回归模型估计风险比等方法。\n",
888
+ "3. **该文献的主要结论是什么?**\n",
889
+ " - 思考过程:定位到论文中总结结论的部分,如“Conclusion Treatment with folic acid plus vitamin $\\mathsf{B}_{12}$ was associated with increased cancer outcomes and all-cause mortality in patients with ischemic heart disease in Norway, where there is no folic acid fortification of foods”,可知作者得出叶酸加维生素$\\mathsf{B}_{12}$治疗与癌症结局和全因死亡率增加有关的结论。\n",
890
+ "<回答>\n",
891
+ "1. 该文献主要研究的问题是:叶酸治疗与癌症结局及全因死亡率之间的关系,尤其是在经过长期随访后,叶酸治疗是否与癌症结局和全因死亡率相关。\n",
892
+ "2. 该文献采用的分析方法包括:使用Kaplan-Meier法构建生存曲线、log-rank检验分析组间生存差异、Cox比例风险回归模型估计风险比等。\n",
893
+ "3. 该文献的主要结论是:在挪威没有叶酸强化食品的情况下,叶酸加维生素$\\mathsf{B}_{12}$治疗与缺血性心脏病患者的癌症结局和全因死亡率增加有关。\n",
894
+ "\n",
895
+ "**参考文献**\n",
896
+ "- [VD] In2Norwegianhomocysteine-lowering trialsamongpatientswithischemicheart disease, there was a statistically nonsignificantincreaseincancerincidenceinthe groupsassignedtofolicacidtreatment.15,16 Our study was performed to explore whetherfolicacidtreatmentwasassociatedwithcanceroutcomesandall-cause mortality after extended follow-up.\n",
897
+ "- [VD] Survivalcurveswereconstructedusing theKaplan-Meiermethodanddifferences insurvivalbetweengroupswereanalyzed usingthelog-ranktest.Estimatesofhazard ratios (HRs) with $95\\%$ CIs were obtainedbyusingCoxproportionalhazards regressionmodelsstratifiedbytrial.\n",
898
+ "- [VD] Conclusion Treatment with folic acid plus vitamin $\\mathsf{B}_{12}$ was associated with increased cancer outcomes and all-cause mortality in patients with ischemic heart disease in Norway, where there is no folic acid fortification of foods.\n"
899
+ ]
900
+ }
901
+ ],
902
+ "source": [
903
+ "resp = rag.query(prompt1, param=QueryParam(mode=\"mix\", top_k=5))\n",
904
+ "print(resp)"
905
+ ]
906
+ },
907
+ {
908
+ "cell_type": "markdown",
909
+ "id": "4e5bfad24cb721a8",
910
+ "metadata": {},
911
+ "source": "#### split by character only"
912
+ },
913
+ {
914
+ "cell_type": "code",
915
+ "execution_count": 11,
916
+ "id": "44e2992dc95f8ce0",
917
+ "metadata": {
918
+ "ExecuteTime": {
919
+ "end_time": "2025-01-09T03:47:40.988796Z",
920
+ "start_time": "2025-01-09T03:47:40.982648Z"
921
+ }
922
+ },
923
+ "outputs": [],
924
+ "source": [
925
+ "WORKING_DIR = \"../../llm_rag/paper_db/R000088_test2\"\n",
926
+ "if not os.path.exists(WORKING_DIR):\n",
927
+ " os.mkdir(WORKING_DIR)"
928
+ ]
929
+ },
930
+ {
931
+ "cell_type": "code",
932
+ "execution_count": 12,
933
+ "id": "62c63385d2d973d5",
934
+ "metadata": {
935
+ "ExecuteTime": {
936
+ "end_time": "2025-01-09T03:51:39.951329Z",
937
+ "start_time": "2025-01-09T03:49:15.218976Z"
938
+ }
939
+ },
940
+ "outputs": [
941
+ {
942
+ "name": "stderr",
943
+ "output_type": "stream",
944
+ "text": [
945
+ "INFO:lightrag:Logger initialized for working directory: ../../llm_rag/paper_db/R000088_test2\n",
946
+ "INFO:lightrag:Load KV llm_response_cache with 0 data\n",
947
+ "INFO:lightrag:Load KV full_docs with 0 data\n",
948
+ "INFO:lightrag:Load KV text_chunks with 0 data\n",
949
+ "INFO:nano-vectordb:Init {'embedding_dim': 4096, 'metric': 'cosine', 'storage_file': '../../llm_rag/paper_db/R000088_test2/vdb_entities.json'} 0 data\n",
950
+ "INFO:nano-vectordb:Init {'embedding_dim': 4096, 'metric': 'cosine', 'storage_file': '../../llm_rag/paper_db/R000088_test2/vdb_relationships.json'} 0 data\n",
951
+ "INFO:nano-vectordb:Init {'embedding_dim': 4096, 'metric': 'cosine', 'storage_file': '../../llm_rag/paper_db/R000088_test2/vdb_chunks.json'} 0 data\n",
952
+ "INFO:lightrag:Loaded document status storage with 0 records\n",
953
+ "INFO:lightrag:Processing 1 new unique documents\n",
954
+ "Processing batch 1: 0%| | 0/1 [00:00<?, ?it/s]INFO:lightrag:Inserting 12 vectors to chunks\n",
955
+ "\n",
956
+ "Generating embeddings: 0%| | 0/1 [00:00<?, ?batch/s]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
957
+ "\n",
958
+ "Generating embeddings: 100%|██████████| 1/1 [00:02<00:00, 2.95s/batch]\u001b[A\n",
959
+ "\n",
960
+ "Extracting entities from chunks: 0%| | 0/12 [00:00<?, ?chunk/s]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n",
961
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
962
+ ]
963
+ },
964
+ {
965
+ "name": "stdout",
966
+ "output_type": "stream",
967
+ "text": [
968
+ "⠙ Processed 1 chunks, 0 entities(duplicated), 0 relations(duplicated)\r"
969
+ ]
970
+ },
971
+ {
972
+ "name": "stderr",
973
+ "output_type": "stream",
974
+ "text": [
975
+ "\n",
976
+ "Extracting entities from chunks: 8%|▊ | 1/12 [00:03<00:43, 3.93s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n",
977
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n",
978
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n",
979
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n",
980
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n",
981
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n",
982
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
983
+ ]
984
+ },
985
+ {
986
+ "name": "stdout",
987
+ "output_type": "stream",
988
+ "text": [
989
+ "⠹ Processed 2 chunks, 8 entities(duplicated), 8 relations(duplicated)\r"
990
+ ]
991
+ },
992
+ {
993
+ "name": "stderr",
994
+ "output_type": "stream",
995
+ "text": [
996
+ "\n",
997
+ "Extracting entities from chunks: 17%|█▋ | 2/12 [00:29<02:44, 16.46s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
998
+ ]
999
+ },
1000
+ {
1001
+ "name": "stdout",
1002
+ "output_type": "stream",
1003
+ "text": [
1004
+ "⠸ Processed 3 chunks, 17 entities(duplicated), 15 relations(duplicated)\r"
1005
+ ]
1006
+ },
1007
+ {
1008
+ "name": "stderr",
1009
+ "output_type": "stream",
1010
+ "text": [
1011
+ "\n",
1012
+ "Extracting entities from chunks: 25%|██▌ | 3/12 [00:30<01:25, 9.45s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n",
1013
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
1014
+ ]
1015
+ },
1016
+ {
1017
+ "name": "stdout",
1018
+ "output_type": "stream",
1019
+ "text": [
1020
+ "⠼ Processed 4 chunks, 27 entities(duplicated), 22 relations(duplicated)\r"
1021
+ ]
1022
+ },
1023
+ {
1024
+ "name": "stderr",
1025
+ "output_type": "stream",
1026
+ "text": [
1027
+ "\n",
1028
+ "Extracting entities from chunks: 33%|███▎ | 4/12 [00:39<01:16, 9.52s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
1029
+ ]
1030
+ },
1031
+ {
1032
+ "name": "stdout",
1033
+ "output_type": "stream",
1034
+ "text": [
1035
+ "⠴ Processed 5 chunks, 36 entities(duplicated), 33 relations(duplicated)\r"
1036
+ ]
1037
+ },
1038
+ {
1039
+ "name": "stderr",
1040
+ "output_type": "stream",
1041
+ "text": [
1042
+ "\n",
1043
+ "Extracting entities from chunks: 42%|████▏ | 5/12 [00:40<00:43, 6.24s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n",
1044
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n",
1045
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
1046
+ ]
1047
+ },
1048
+ {
1049
+ "name": "stdout",
1050
+ "output_type": "stream",
1051
+ "text": [
1052
+ "⠦ Processed 6 chunks, 49 entities(duplicated), 42 relations(duplicated)\r"
1053
+ ]
1054
+ },
1055
+ {
1056
+ "name": "stderr",
1057
+ "output_type": "stream",
1058
+ "text": [
1059
+ "\n",
1060
+ "Extracting entities from chunks: 50%|█████ | 6/12 [00:49<00:43, 7.33s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
1061
+ ]
1062
+ },
1063
+ {
1064
+ "name": "stdout",
1065
+ "output_type": "stream",
1066
+ "text": [
1067
+ "⠧ Processed 7 chunks, 62 entities(duplicated), 65 relations(duplicated)\r"
1068
+ ]
1069
+ },
1070
+ {
1071
+ "name": "stderr",
1072
+ "output_type": "stream",
1073
+ "text": [
1074
+ "\n",
1075
+ "Extracting entities from chunks: 58%|█████▊ | 7/12 [01:05<00:50, 10.05s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n",
1076
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
1077
+ ]
1078
+ },
1079
+ {
1080
+ "name": "stdout",
1081
+ "output_type": "stream",
1082
+ "text": [
1083
+ "⠇ Processed 8 chunks, 81 entities(duplicated), 90 relations(duplicated)\r"
1084
+ ]
1085
+ },
1086
+ {
1087
+ "name": "stderr",
1088
+ "output_type": "stream",
1089
+ "text": [
1090
+ "\n",
1091
+ "Extracting entities from chunks: 67%|██████▋ | 8/12 [01:23<00:50, 12.69s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
1092
+ ]
1093
+ },
1094
+ {
1095
+ "name": "stdout",
1096
+ "output_type": "stream",
1097
+ "text": [
1098
+ "⠏ Processed 9 chunks, 99 entities(duplicated), 117 relations(duplicated)\r"
1099
+ ]
1100
+ },
1101
+ {
1102
+ "name": "stderr",
1103
+ "output_type": "stream",
1104
+ "text": [
1105
+ "\n",
1106
+ "Extracting entities from chunks: 75%|███████▌ | 9/12 [01:32<00:34, 11.54s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n",
1107
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
1108
+ ]
1109
+ },
1110
+ {
1111
+ "name": "stdout",
1112
+ "output_type": "stream",
1113
+ "text": [
1114
+ "⠋ Processed 10 chunks, 123 entities(duplicated), 140 relations(duplicated)\r"
1115
+ ]
1116
+ },
1117
+ {
1118
+ "name": "stderr",
1119
+ "output_type": "stream",
1120
+ "text": [
1121
+ "\n",
1122
+ "Extracting entities from chunks: 83%|████████▎ | 10/12 [01:48<00:25, 12.79s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
1123
+ ]
1124
+ },
1125
+ {
1126
+ "name": "stdout",
1127
+ "output_type": "stream",
1128
+ "text": [
1129
+ "⠙ Processed 11 chunks, 158 entities(duplicated), 174 relations(duplicated)\r"
1130
+ ]
1131
+ },
1132
+ {
1133
+ "name": "stderr",
1134
+ "output_type": "stream",
1135
+ "text": [
1136
+ "\n",
1137
+ "Extracting entities from chunks: 92%|█████████▏| 11/12 [02:03<00:13, 13.50s/chunk]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
1138
+ ]
1139
+ },
1140
+ {
1141
+ "name": "stdout",
1142
+ "output_type": "stream",
1143
+ "text": [
1144
+ "⠹ Processed 12 chunks, 194 entities(duplicated), 221 relations(duplicated)\r"
1145
+ ]
1146
+ },
1147
+ {
1148
+ "name": "stderr",
1149
+ "output_type": "stream",
1150
+ "text": [
1151
+ "\n",
1152
+ "Extracting entities from chunks: 100%|██████████| 12/12 [02:13<00:00, 11.15s/chunk]\u001b[A\n",
1153
+ "INFO:lightrag:Inserting entities into storage...\n",
1154
+ "\n",
1155
+ "Inserting entities: 100%|██████████| 170/170 [00:00<00:00, 11610.25entity/s]\n",
1156
+ "INFO:lightrag:Inserting relationships into storage...\n",
1157
+ "\n",
1158
+ "Inserting relationships: 100%|██████████| 218/218 [00:00<00:00, 15913.51relationship/s]\n",
1159
+ "INFO:lightrag:Inserting 170 vectors to entities\n",
1160
+ "\n",
1161
+ "Generating embeddings: 0%| | 0/6 [00:00<?, ?batch/s]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1162
+ "\n",
1163
+ "Generating embeddings: 17%|█▋ | 1/6 [00:01<00:05, 1.10s/batch]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1164
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1165
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1166
+ "\n",
1167
+ "Generating embeddings: 33%|███▎ | 2/6 [00:02<00:04, 1.07s/batch]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1168
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1169
+ "\n",
1170
+ "Generating embeddings: 50%|█████ | 3/6 [00:02<00:02, 1.33batch/s]\u001b[A\n",
1171
+ "Generating embeddings: 67%|██████▋ | 4/6 [00:02<00:01, 1.67batch/s]\u001b[A\n",
1172
+ "Generating embeddings: 83%|████████▎ | 5/6 [00:03<00:00, 1.95batch/s]\u001b[A\n",
1173
+ "Generating embeddings: 100%|██████████| 6/6 [00:03<00:00, 1.66batch/s]\u001b[A\n",
1174
+ "INFO:lightrag:Inserting 218 vectors to relationships\n",
1175
+ "\n",
1176
+ "Generating embeddings: 0%| | 0/7 [00:00<?, ?batch/s]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1177
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1178
+ "\n",
1179
+ "Generating embeddings: 14%|█▍ | 1/7 [00:01<00:10, 1.74s/batch]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1180
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1181
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1182
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1183
+ "\n",
1184
+ "Generating embeddings: 29%|██▊ | 2/7 [00:02<00:05, 1.04s/batch]\u001b[A\n",
1185
+ "Generating embeddings: 43%|████▎ | 3/7 [00:02<00:02, 1.35batch/s]\u001b[A\n",
1186
+ "Generating embeddings: 57%|█████▋ | 4/7 [00:03<00:01, 1.69batch/s]\u001b[AINFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1187
+ "\n",
1188
+ "Generating embeddings: 71%|███████▏ | 5/7 [00:03<00:01, 1.96batch/s]\u001b[A\n",
1189
+ "Generating embeddings: 86%|████████▌ | 6/7 [00:03<00:00, 2.17batch/s]\u001b[A\n",
1190
+ "Generating embeddings: 100%|██████████| 7/7 [00:04<00:00, 1.68batch/s]\u001b[A\n",
1191
+ "INFO:lightrag:Writing graph with 174 nodes, 218 edges\n",
1192
+ "Processing batch 1: 100%|██████████| 1/1 [02:24<00:00, 144.69s/it]\n"
1193
+ ]
1194
+ }
1195
+ ],
1196
+ "source": [
1197
+ "rag = LightRAG(\n",
1198
+ " working_dir=WORKING_DIR,\n",
1199
+ " llm_model_func=llm_model_func,\n",
1200
+ " embedding_func=EmbeddingFunc(\n",
1201
+ " embedding_dim=4096, max_token_size=8192, func=embedding_func\n",
1202
+ " ),\n",
1203
+ " chunk_token_size=512,\n",
1204
+ ")\n",
1205
+ "\n",
1206
+ "# rag.insert(content)\n",
1207
+ "rag.insert(content, split_by_character=\"\\n#\", split_by_character_only=True)"
1208
+ ]
1209
+ },
1210
+ {
1211
+ "cell_type": "code",
1212
+ "execution_count": 13,
1213
+ "id": "3c7aa9836d8d43c7",
1214
+ "metadata": {
1215
+ "ExecuteTime": {
1216
+ "end_time": "2025-01-09T03:52:37.000418Z",
1217
+ "start_time": "2025-01-09T03:52:09.933584Z"
1218
+ }
1219
+ },
1220
+ "outputs": [
1221
+ {
1222
+ "name": "stderr",
1223
+ "output_type": "stream",
1224
+ "text": [
1225
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1226
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n",
1227
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1228
+ "INFO:lightrag:Local query uses 5 entites, 3 relations, 2 text units\n",
1229
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/embeddings \"HTTP/1.1 200 OK\"\n",
1230
+ "INFO:lightrag:Global query uses 9 entites, 5 relations, 4 text units\n",
1231
+ "INFO:httpx:HTTP Request: POST https://ark.cn-beijing.volces.com/api/v3/chat/completions \"HTTP/1.1 200 OK\"\n"
1232
+ ]
1233
+ },
1234
+ {
1235
+ "name": "stdout",
1236
+ "output_type": "stream",
1237
+ "text": [
1238
+ "<Analysis>\n",
+ "- **What is the main research question of this paper?**\n",
+ "  - **Reasoning**: Scan the title, abstract, and introduction for statements of purpose. The title is \"Cancer Incidence and Mortality After Treatment With Folic Acid and Vitamin B12\", and the \"Objective\" in the abstract states the aim explicitly: \"To evaluate effects of treatment with B vitamins on cancer outcomes and all-cause mortality in 2 randomized controlled trials\". The main research question is therefore to evaluate the effects of B-vitamin treatment on cancer outcomes and all-cause mortality in two randomized controlled trials.\n",
+ "- **What methods does the paper use for its analysis?**\n",
+ "  - **Reasoning**: The \"METHODS\" section describes the study design in detail. The paper is a combined analysis of data from two randomized, double-blind, placebo-controlled clinical trials (the Norwegian Vitamin [NORVIT] trial and the Western Norway B Vitamin Intervention Trial [WENBIT]) with observational post-trial follow-up. Participants were assigned to intervention groups (different doses of folic acid, vitamin B12, vitamin B6, or placebo); clinical information and blood samples were collected; circulating B vitamins, homocysteine, and cotinine were measured; genotyping was performed; and several statistical methods were applied, including calculation of expected cancer incidence rates, construction of survival curves, and Cox proportional hazards regression models.\n",
+ "- **What are the paper's main conclusions?**\n",
+ "  - **Reasoning**: The \"Results\" and \"Conclusion\" sections give the main findings. During treatment, serum folate concentrations rose markedly in participants receiving folic acid plus vitamin B12, and during follow-up this group showed increased cancer incidence, cancer mortality, and all-cause mortality, driven mainly by a rise in lung cancer incidence, whereas vitamin B6 treatment showed no significant effect. The conclusion states explicitly: \"Treatment with folic acid plus vitamin $\\mathsf{B}_{12}$ was associated with increased cancer outcomes and all-cause mortality in patients with ischemic heart disease in Norway, where there is no folic acid fortification of foods\".\n",
+ "</Analysis>\n",
+ "\n",
+ "<Answer>\n",
+ "- **Main research question**: To evaluate the effects of B-vitamin treatment on cancer outcomes and all-cause mortality in two randomized controlled trials.\n",
+ "- **Methods**: A combined analysis of data from two randomized, double-blind, placebo-controlled clinical trials (the Norwegian Vitamin [NORVIT] trial and the Western Norway B Vitamin Intervention Trial [WENBIT]) with observational post-trial follow-up, involving group interventions, measurement of multiple biomarkers, and a range of statistical analyses.\n",
+ "- **Main conclusions**: In Norway (where foods are not fortified with folic acid), treatment with folic acid plus vitamin B12 was associated with increased cancer outcomes and all-cause mortality in patients with ischemic heart disease, whereas vitamin B6 treatment showed no significant effect.\n",
+ "\n",
+ "**References**\n",
+ "- [VD] Cancer Incidence and Mortality After Treatment With Folic Acid and Vitamin B12\n",
+ "- [VD] METHODS Study Design, Participants, and Study Intervention\n",
+ "- [VD] RESULTS\n",
+ "- [VD] Conclusion\n",
+ "- [VD] Objective To evaluate effects of treatment with B vitamins on cancer outcomes and all-cause mortality in 2 randomized controlled trials.\n"
1258
+ ]
+ }
+ ],
+ "source": [
+ "resp = rag.query(prompt1, param=QueryParam(mode=\"mix\", top_k=5))\n",
+ "print(resp)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7ba6fa79a2550d10",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 2
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython2",
+ "version": "2.7.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+ }
lightrag/__init__.py CHANGED
@@ -1,5 +1,5 @@
 from .lightrag import LightRAG as LightRAG, QueryParam as QueryParam
 
-__version__ = "1.0.9"
+__version__ = "1.1.0"
 __author__ = "Zirui Guo"
 __url__ = "https://github.com/HKUDS/LightRAG"
lightrag/lightrag.py CHANGED
@@ -45,6 +45,7 @@ from .storage import (
 
 from .prompt import GRAPH_FIELD_SEP
 
+
 # future KG integrations
 
 # from .kg.ArangoDB_impl import (
@@ -167,7 +168,7 @@ class LightRAG:
 
     # LLM
     llm_model_func: callable = gpt_4o_mini_complete  # hf_model_complete#
-    llm_model_name: str = "meta-llama/Llama-3.2-1B-Instruct"  #'meta-llama/Llama-3.2-1B'#'google/gemma-2-2b-it'
+    llm_model_name: str = "meta-llama/Llama-3.2-1B-Instruct"  # 'meta-llama/Llama-3.2-1B'#'google/gemma-2-2b-it'
     llm_model_max_token_size: int = 32768
     llm_model_max_async: int = 16
     llm_model_kwargs: dict = field(default_factory=dict)
@@ -177,7 +178,7 @@
 
     enable_llm_cache: bool = True
     # Sometimes there are some reason the LLM failed at Extracting Entities, and we want to continue without LLM cost, we can use this flag
-    enable_llm_cache_for_entity_extract: bool = False
+    enable_llm_cache_for_entity_extract: bool = True
 
     # extension
     addon_params: dict = field(default_factory=dict)
@@ -186,6 +187,10 @@
     # Add new field for document status storage type
    doc_status_storage: str = field(default="JsonDocStatusStorage")
 
+    # Custom Chunking Function
+    chunking_func: callable = chunking_by_token_size
+    chunking_func_kwargs: dict = field(default_factory=dict)
+
     def __post_init__(self):
         log_file = os.path.join("lightrag.log")
         set_logger(log_file)
@@ -313,15 +318,25 @@
             "JsonDocStatusStorage": JsonDocStatusStorage,
         }
 
-    def insert(self, string_or_strings):
+    def insert(
+        self, string_or_strings, split_by_character=None, split_by_character_only=False
+    ):
         loop = always_get_an_event_loop()
-        return loop.run_until_complete(self.ainsert(string_or_strings))
+        return loop.run_until_complete(
+            self.ainsert(string_or_strings, split_by_character, split_by_character_only)
+        )
 
-    async def ainsert(self, string_or_strings):
+    async def ainsert(
+        self, string_or_strings, split_by_character=None, split_by_character_only=False
+    ):
         """Insert documents with checkpoint support
 
         Args:
             string_or_strings: Single document string or list of document strings
+            split_by_character: if split_by_character is not None, split the string by character, if chunk longer than
+                chunk_size, split the sub chunk by token size.
+            split_by_character_only: if split_by_character_only is True, split the string by character only, when
+                split_by_character is None, this parameter is ignored.
         """
         if isinstance(string_or_strings, str):
             string_or_strings = [string_or_strings]
@@ -358,7 +373,7 @@
             batch_docs = dict(list(new_docs.items())[i : i + batch_size])
 
             for doc_id, doc in tqdm_async(
-                batch_docs.items(), desc=f"Processing batch {i//batch_size + 1}"
+                batch_docs.items(), desc=f"Processing batch {i // batch_size + 1}"
             ):
                 try:
                     # Update status to processing
@@ -377,11 +392,14 @@
                         **dp,
                         "full_doc_id": doc_id,
                     }
-                    for dp in chunking_by_token_size(
+                    for dp in self.chunking_func(
                         doc["content"],
+                        split_by_character=split_by_character,
+                        split_by_character_only=split_by_character_only,
                         overlap_token_size=self.chunk_overlap_token_size,
                         max_token_size=self.chunk_token_size,
                         tiktoken_model=self.tiktoken_model_name,
+                        **self.chunking_func_kwargs,
                     )
                 }
 
@@ -453,6 +471,73 @@
             # Ensure all indexes are updated after each document
             await self._insert_done()
 
+    def insert_custom_chunks(self, full_text: str, text_chunks: list[str]):
+        loop = always_get_an_event_loop()
+        return loop.run_until_complete(
+            self.ainsert_custom_chunks(full_text, text_chunks)
+        )
+
+    async def ainsert_custom_chunks(self, full_text: str, text_chunks: list[str]):
+        update_storage = False
+        try:
+            doc_key = compute_mdhash_id(full_text.strip(), prefix="doc-")
+            new_docs = {doc_key: {"content": full_text.strip()}}
+
+            _add_doc_keys = await self.full_docs.filter_keys([doc_key])
+            new_docs = {k: v for k, v in new_docs.items() if k in _add_doc_keys}
+            if not len(new_docs):
+                logger.warning("This document is already in the storage.")
+                return
+
+            update_storage = True
+            logger.info(f"[New Docs] inserting {len(new_docs)} docs")
+
+            inserting_chunks = {}
+            for chunk_text in text_chunks:
+                chunk_text_stripped = chunk_text.strip()
+                chunk_key = compute_mdhash_id(chunk_text_stripped, prefix="chunk-")
+
+                inserting_chunks[chunk_key] = {
+                    "content": chunk_text_stripped,
+                    "full_doc_id": doc_key,
+                }
+
+            _add_chunk_keys = await self.text_chunks.filter_keys(
+                list(inserting_chunks.keys())
+            )
+            inserting_chunks = {
+                k: v for k, v in inserting_chunks.items() if k in _add_chunk_keys
+            }
+            if not len(inserting_chunks):
+                logger.warning("All chunks are already in the storage.")
+                return
+
+            logger.info(f"[New Chunks] inserting {len(inserting_chunks)} chunks")
+
+            await self.chunks_vdb.upsert(inserting_chunks)
+
+            logger.info("[Entity Extraction]...")
+            maybe_new_kg = await extract_entities(
+                inserting_chunks,
+                knowledge_graph_inst=self.chunk_entity_relation_graph,
+                entity_vdb=self.entities_vdb,
+                relationships_vdb=self.relationships_vdb,
+                global_config=asdict(self),
+            )
+
+            if maybe_new_kg is None:
+                logger.warning("No new entities and relationships found")
+                return
+            else:
+                self.chunk_entity_relation_graph = maybe_new_kg
+
+            await self.full_docs.upsert(new_docs)
+            await self.text_chunks.upsert(inserting_chunks)
+
+        finally:
+            if update_storage:
+                await self._insert_done()
+
     async def _insert_done(self):
         tasks = []
         for storage_inst in [
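The reworked `insert`/`ainsert` pair threads the two new splitting options through a sync wrapper that drives the coroutine on an event loop. A minimal standalone sketch of that pattern, with a toy character-count chunker standing in for the token-based `chunking_func` (the `TinyRAG` class and `chunk_by_delimiter` helper are illustrative, not part of LightRAG):

```python
import asyncio


def chunk_by_delimiter(content, split_by_character=None,
                       split_by_character_only=False, max_len=20):
    # Toy stand-in for LightRAG's chunking_func: split on the delimiter
    # first, then re-window oversized pieces (unless ..._only is True).
    if split_by_character:
        pieces = content.split(split_by_character)
        if split_by_character_only:
            return pieces
        out = []
        for p in pieces:
            if len(p) > max_len:
                out.extend(p[i : i + max_len] for i in range(0, len(p), max_len))
            else:
                out.append(p)
        return out
    return [content[i : i + max_len] for i in range(0, len(content), max_len)]


class TinyRAG:
    # Mirrors the insert/ainsert shape: the sync method runs the coroutine
    # to completion and forwards both new split parameters unchanged.
    def insert(self, text, split_by_character=None, split_by_character_only=False):
        return asyncio.run(
            self.ainsert(text, split_by_character, split_by_character_only)
        )

    async def ainsert(self, text, split_by_character=None,
                      split_by_character_only=False):
        return chunk_by_delimiter(
            text,
            split_by_character=split_by_character,
            split_by_character_only=split_by_character_only,
        )
```

With the real class the call shape is the same, e.g. `rag.insert(doc, split_by_character="\n")`, and `split_by_character_only=True` disables the fallback re-windowing of oversized pieces.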
lightrag/operate.py CHANGED
@@ -4,7 +4,6 @@ import re
 from tqdm.asyncio import tqdm as tqdm_async
 from typing import Union
 from collections import Counter, defaultdict
-import warnings
 from .utils import (
     logger,
     clean_str,
@@ -34,23 +33,61 @@ import time
 
 
 def chunking_by_token_size(
-    content: str, overlap_token_size=128, max_token_size=1024, tiktoken_model="gpt-4o"
+    content: str,
+    split_by_character=None,
+    split_by_character_only=False,
+    overlap_token_size=128,
+    max_token_size=1024,
+    tiktoken_model="gpt-4o",
+    **kwargs,
 ):
     tokens = encode_string_by_tiktoken(content, model_name=tiktoken_model)
     results = []
-    for index, start in enumerate(
-        range(0, len(tokens), max_token_size - overlap_token_size)
-    ):
-        chunk_content = decode_tokens_by_tiktoken(
-            tokens[start : start + max_token_size], model_name=tiktoken_model
-        )
-        results.append(
-            {
-                "tokens": min(max_token_size, len(tokens) - start),
-                "content": chunk_content.strip(),
-                "chunk_order_index": index,
-            }
-        )
+    if split_by_character:
+        raw_chunks = content.split(split_by_character)
+        new_chunks = []
+        if split_by_character_only:
+            for chunk in raw_chunks:
+                _tokens = encode_string_by_tiktoken(chunk, model_name=tiktoken_model)
+                new_chunks.append((len(_tokens), chunk))
+        else:
+            for chunk in raw_chunks:
+                _tokens = encode_string_by_tiktoken(chunk, model_name=tiktoken_model)
+                if len(_tokens) > max_token_size:
+                    for start in range(
+                        0, len(_tokens), max_token_size - overlap_token_size
+                    ):
+                        chunk_content = decode_tokens_by_tiktoken(
+                            _tokens[start : start + max_token_size],
+                            model_name=tiktoken_model,
+                        )
+                        new_chunks.append(
+                            (min(max_token_size, len(_tokens) - start), chunk_content)
+                        )
+                else:
+                    new_chunks.append((len(_tokens), chunk))
+        for index, (_len, chunk) in enumerate(new_chunks):
+            results.append(
+                {
+                    "tokens": _len,
+                    "content": chunk.strip(),
+                    "chunk_order_index": index,
+                }
+            )
+    else:
+        for index, start in enumerate(
+            range(0, len(tokens), max_token_size - overlap_token_size)
+        ):
+            chunk_content = decode_tokens_by_tiktoken(
+                tokens[start : start + max_token_size], model_name=tiktoken_model
+            )
+            results.append(
+                {
+                    "tokens": min(max_token_size, len(tokens) - start),
+                    "content": chunk_content.strip(),
+                    "chunk_order_index": index,
+                }
+            )
     return results
@@ -574,15 +611,22 @@
         logger.warning("low_level_keywords and high_level_keywords is empty")
         return PROMPTS["fail_response"]
     if ll_keywords == [] and query_param.mode in ["local", "hybrid"]:
-        logger.warning("low_level_keywords is empty")
-        return PROMPTS["fail_response"]
-    else:
-        ll_keywords = ", ".join(ll_keywords)
+        logger.warning(
+            "low_level_keywords is empty, switching from %s mode to global mode",
+            query_param.mode,
+        )
+        query_param.mode = "global"
     if hl_keywords == [] and query_param.mode in ["global", "hybrid"]:
-        logger.warning("high_level_keywords is empty")
-        return PROMPTS["fail_response"]
-    else:
-        hl_keywords = ", ".join(hl_keywords)
+        logger.warning(
+            "high_level_keywords is empty, switching from %s mode to local mode",
+            query_param.mode,
+        )
+        query_param.mode = "local"
+
+    ll_keywords = ", ".join(ll_keywords) if ll_keywords else ""
+    hl_keywords = ", ".join(hl_keywords) if hl_keywords else ""
+
+    logger.info("Using %s mode for query processing", query_param.mode)
 
     # Build context
     keywords = [ll_keywords, hl_keywords]
@@ -648,77 +692,51 @@ async def _build_query_context(
     # ll_entities_context, ll_relations_context, ll_text_units_context = "", "", ""
     # hl_entities_context, hl_relations_context, hl_text_units_context = "", "", ""
 
-    ll_kewwords, hl_keywrds = query[0], query[1]
-    if query_param.mode in ["local", "hybrid"]:
-        if ll_kewwords == "":
-            ll_entities_context, ll_relations_context, ll_text_units_context = (
-                "",
-                "",
-                "",
-            )
-            warnings.warn(
-                "Low Level context is None. Return empty Low entity/relationship/source"
-            )
-            query_param.mode = "global"
-        else:
-            (
-                ll_entities_context,
-                ll_relations_context,
-                ll_text_units_context,
-            ) = await _get_node_data(
-                ll_kewwords,
-                knowledge_graph_inst,
-                entities_vdb,
-                text_chunks_db,
-                query_param,
-            )
-    if query_param.mode in ["global", "hybrid"]:
-        if hl_keywrds == "":
-            hl_entities_context, hl_relations_context, hl_text_units_context = (
-                "",
-                "",
-                "",
-            )
-            warnings.warn(
-                "High Level context is None. Return empty High entity/relationship/source"
-            )
-            query_param.mode = "local"
-        else:
-            (
-                hl_entities_context,
-                hl_relations_context,
-                hl_text_units_context,
-            ) = await _get_edge_data(
-                hl_keywrds,
-                knowledge_graph_inst,
-                relationships_vdb,
-                text_chunks_db,
-                query_param,
-            )
-            if (
-                hl_entities_context == ""
-                and hl_relations_context == ""
-                and hl_text_units_context == ""
-            ):
-                logger.warn("No high level context found. Switching to local mode.")
-                query_param.mode = "local"
-    if query_param.mode == "hybrid":
-        entities_context, relations_context, text_units_context = combine_contexts(
-            [hl_entities_context, ll_entities_context],
-            [hl_relations_context, ll_relations_context],
-            [hl_text_units_context, ll_text_units_context],
-        )
-    elif query_param.mode == "local":
-        entities_context, relations_context, text_units_context = (
-            ll_entities_context,
-            ll_relations_context,
-            ll_text_units_context,
-        )
-    elif query_param.mode == "global":
-        entities_context, relations_context, text_units_context = (
-            hl_entities_context,
-            hl_relations_context,
-            hl_text_units_context,
-        )
+    ll_keywords, hl_keywords = query[0], query[1]
+
+    if query_param.mode == "local":
+        entities_context, relations_context, text_units_context = await _get_node_data(
+            ll_keywords,
+            knowledge_graph_inst,
+            entities_vdb,
+            text_chunks_db,
+            query_param,
+        )
+    elif query_param.mode == "global":
+        entities_context, relations_context, text_units_context = await _get_edge_data(
+            hl_keywords,
+            knowledge_graph_inst,
+            relationships_vdb,
+            text_chunks_db,
+            query_param,
+        )
+    else:  # hybrid mode
+        (
+            ll_entities_context,
+            ll_relations_context,
+            ll_text_units_context,
+        ) = await _get_node_data(
+            ll_keywords,
+            knowledge_graph_inst,
+            entities_vdb,
+            text_chunks_db,
+            query_param,
+        )
+        (
+            hl_entities_context,
+            hl_relations_context,
+            hl_text_units_context,
+        ) = await _get_edge_data(
+            hl_keywords,
+            knowledge_graph_inst,
+            relationships_vdb,
+            text_chunks_db,
+            query_param,
+        )
+        entities_context, relations_context, text_units_context = combine_contexts(
+            [hl_entities_context, ll_entities_context],
+            [hl_relations_context, ll_relations_context],
+            [hl_text_units_context, ll_text_units_context],
+        )
     return f"""
 -----Entities-----
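The rewritten `chunking_by_token_size` keeps the original windowing branch for the no-delimiter case: window starts advance by `max_token_size - overlap_token_size`, so consecutive chunks share exactly `overlap_token_size` tokens. A self-contained sketch of that arithmetic over a pre-tokenized list (a plain list of strings stands in for the tiktoken ids that the real function encodes and decodes):

```python
def chunk_tokens(tokens, max_token_size=1024, overlap_token_size=128):
    # The stride between window starts is max - overlap, so each chunk
    # repeats the last `overlap_token_size` tokens of the previous one
    # (except the final, possibly shorter, tail chunk).
    results = []
    for index, start in enumerate(
        range(0, len(tokens), max_token_size - overlap_token_size)
    ):
        window = tokens[start : start + max_token_size]
        results.append(
            {
                "tokens": len(window),  # == min(max_token_size, len(tokens) - start)
                "content": " ".join(window),
                "chunk_order_index": index,
            }
        )
    return results
```

With 10 tokens, `max_token_size=4`, and `overlap_token_size=1`, the stride is 3, giving windows starting at 0, 3, 6, 9 and chunk sizes 4, 4, 4, 1, with each boundary token shared between neighbors.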
requirements.txt CHANGED
@@ -1,38 +1,38 @@
 accelerate
-aioboto3~=13.3.0
-aiofiles~=24.1.0
-aiohttp~=3.11.11
-asyncpg~=0.30.0
+aioboto3
+aiofiles
+aiohttp
+asyncpg
 
 # database packages
 graspologic
 gremlinpython
 hnswlib
 nano-vectordb
-neo4j~=5.27.0
-networkx~=3.2.1
+neo4j
+networkx
 
-numpy~=2.2.0
-ollama~=0.4.4
-openai~=1.58.1
+numpy
+ollama
+openai
 oracledb
-psycopg-pool~=3.2.4
-psycopg[binary,pool]~=3.2.3
-pydantic~=2.10.4
+psycopg-pool
+psycopg[binary,pool]
+pydantic
 pymilvus
 pymongo
 pymysql
-python-dotenv~=1.0.1
-pyvis~=0.3.2
-setuptools~=70.0.0
+python-dotenv
+pyvis
+setuptools
 # lmdeploy[all]
-sqlalchemy~=2.0.36
-tenacity~=9.0.0
+sqlalchemy
+tenacity
 
 
 # LLM packages
-tiktoken~=0.8.0
-torch~=2.5.1+cu121
-tqdm~=4.67.1
-transformers~=4.47.1
+tiktoken
+torch
+tqdm
+transformers
 xxhash