yangdx committed
Commit 9b94a71 · 1 Parent(s): 9db1d76

Update README.md

Files changed (2):
  1. README-zh.md +4 -10
  2. README.md +4 -10
README-zh.md CHANGED
@@ -636,21 +636,15 @@ rag.insert(["Text 1", "Text 2",...])
 
 # Batch insert with custom batch size configuration
 rag = LightRAG(
+    ...
     working_dir=WORKING_DIR,
-    addon_params={
-        "insert_batch_size": 4  # Process 4 documents per batch
-    }
+    max_parallel_insert = 4
 )
 
 rag.insert(["Text 1", "Text 2", "Text 3", ...])  # Documents will be processed in batches of 4
 ```
 
-The `insert_batch_size` parameter in `addon_params` controls how many documents are processed in each batch during insertion. This is useful for:
-
-- Managing memory usage with large document collections
-- Optimizing processing speed
-- Providing better progress tracking
-- The default value is 10 if not specified
+The `max_parallel_insert` parameter controls how many documents are processed in parallel in the document indexing pipeline. If unspecified, the default value is **2**. Setting this parameter **below 10** is recommended, because the performance bottleneck typically lies in LLM processing.
 
 </details>
 
@@ -1115,7 +1109,7 @@ rag.clear_cache(modes=["local"])
 | **vector_db_storage_cls_kwargs** | `dict` | Additional parameters for the vector database, such as setting thresholds for node and relation retrieval | cosine_better_than_threshold: 0.2 (default value changed by env var COSINE_THRESHOLD) |
 | **enable_llm_cache** | `bool` | If `TRUE`, stores LLM results in cache; repeated prompts return cached responses | `TRUE` |
 | **enable_llm_cache_for_entity_extract** | `bool` | If `TRUE`, stores LLM results for entity extraction in cache; good for beginners debugging an application | `TRUE` |
-| **addon_params** | `dict` | Additional parameters, e.g., `{"example_number": 1, "language": "Simplified Chinese", "entity_types": ["organization", "person", "geo", "event"], "insert_batch_size": 10}`: sets the example limit, output language, and batch size for document processing | `example_number: all examples, language: English, insert_batch_size: 10` |
+| **addon_params** | `dict` | Additional parameters, e.g., `{"example_number": 1, "language": "Simplified Chinese", "entity_types": ["organization", "person", "geo", "event"]}`: sets the example limit and the output language for entity/relation extraction | `example_number: all examples, language: English` |
 | **convert_response_to_json_func** | `callable` | Not used | `convert_response_to_json` |
 | **embedding_cache_config** | `dict` | Configuration for question-answer caching. Contains three parameters: `enabled`: boolean, enables/disables cache lookup. When enabled, the system checks for a cached response before generating a new answer. `similarity_threshold`: float (0-1), the similarity threshold. When a new question's similarity to a cached question exceeds this threshold, the cached answer is returned directly without calling the LLM. `use_llm_check`: boolean, enables/disables LLM similarity verification. When enabled, the LLM is used as a secondary check to verify question similarity before a cached answer is returned. | Default: `{"enabled": False, "similarity_threshold": 0.95, "use_llm_check": False}` |
 
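Taken together, the README-zh.md changes swap the old `addon_params["insert_batch_size"]` batching knob for the new top-level `max_parallel_insert` argument. A minimal sketch of the new usage follows (not part of the commit); the LLM and embedding bindings are elided exactly as the `...` in the snippet above, and `./rag_storage` is an illustrative path:

```python
from lightrag import LightRAG

# Minimal sketch, assuming LLM/embedding bindings are configured elsewhere
# (they are elided here, mirroring the "..." in the README snippet above).
rag = LightRAG(
    working_dir="./rag_storage",  # illustrative path for persisted index files
    max_parallel_insert=4,        # index up to 4 documents concurrently (default: 2)
)

# max_parallel_insert caps how many documents move through the indexing
# pipeline (chunking, entity/relation extraction via LLM, embedding) at once;
# the README recommends keeping it below 10, since the LLM is the bottleneck.
rag.insert(["Text 1", "Text 2", "Text 3"])
```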
README.md CHANGED
@@ -629,21 +629,15 @@ rag.insert(["TEXT1", "TEXT2",...])
 
 # Batch Insert with custom batch size configuration
 rag = LightRAG(
+    ...
     working_dir=WORKING_DIR,
-    addon_params={
-        "insert_batch_size": 4  # Process 4 documents per batch
-    }
+    max_parallel_insert = 4
 )
 
 rag.insert(["TEXT1", "TEXT2", "TEXT3", ...])  # Documents will be processed in batches of 4
 ```
 
-The `insert_batch_size` parameter in `addon_params` controls how many documents are processed in each batch during insertion. This is useful for:
-
-- Managing memory usage with large document collections
-- Optimizing processing speed
-- Providing better progress tracking
-- Default value is 10 if not specified
+The `max_parallel_insert` parameter determines the number of documents processed concurrently in the document indexing pipeline. If unspecified, the default value is **2**. We recommend keeping this setting **below 10**, as the performance bottleneck typically lies with LLM processing.
 
 </details>
 
@@ -1181,7 +1175,7 @@ Valid modes are:
 | **vector_db_storage_cls_kwargs** | `dict` | Additional parameters for vector database, like setting the threshold for nodes and relations retrieval | cosine_better_than_threshold: 0.2 (default value changed by env var COSINE_THRESHOLD) |
 | **enable_llm_cache** | `bool` | If `TRUE`, stores LLM results in cache; repeated prompts return cached responses | `TRUE` |
 | **enable_llm_cache_for_entity_extract** | `bool` | If `TRUE`, stores LLM results in cache for entity extraction; good for beginners to debug your application | `TRUE` |
-| **addon_params** | `dict` | Additional parameters, e.g., `{"example_number": 1, "language": "Simplified Chinese", "entity_types": ["organization", "person", "geo", "event"], "insert_batch_size": 10}`: sets example limit, output language, and batch size for document processing | `example_number: all examples, language: English, insert_batch_size: 10` |
+| **addon_params** | `dict` | Additional parameters, e.g., `{"example_number": 1, "language": "Simplified Chinese", "entity_types": ["organization", "person", "geo", "event"]}`: sets example limit and entity/relation extraction output language | `example_number: all examples, language: English` |
 | **convert_response_to_json_func** | `callable` | Not used | `convert_response_to_json` |
 | **embedding_cache_config** | `dict` | Configuration for question-answer caching. Contains three parameters: `enabled`: Boolean value to enable/disable cache lookup functionality. When enabled, the system will check cached responses before generating new answers. `similarity_threshold`: Float value (0-1), similarity threshold. When a new question's similarity with a cached question exceeds this threshold, the cached answer will be returned directly without calling the LLM. `use_llm_check`: Boolean value to enable/disable LLM similarity verification. When enabled, LLM will be used as a secondary check to verify the similarity between questions before returning cached answers. | Default: `{"enabled": False, "similarity_threshold": 0.95, "use_llm_check": False}` |
 
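The same substitution lands in README.md. For the updated parameter table, here is a hedged sketch of how the slimmed-down `addon_params` and the `embedding_cache_config` options would be passed together; the values are copied from the table's own examples, and model bindings are again omitted:

```python
from lightrag import LightRAG

# Sketch only: option values come straight from the parameter table above;
# enabling the cache here is illustrative (the documented default is disabled).
rag = LightRAG(
    working_dir="./rag_storage",
    # addon_params no longer carries insert_batch_size (removed by this commit);
    # it now only shapes prompting: few-shot example count, extraction output
    # language, and the entity types to extract.
    addon_params={
        "example_number": 1,
        "language": "Simplified Chinese",
        "entity_types": ["organization", "person", "geo", "event"],
    },
    # Question-answer cache: return a cached answer when a new question is at
    # least 95% similar to a cached one; use_llm_check adds an LLM double-check.
    embedding_cache_config={
        "enabled": True,
        "similarity_threshold": 0.95,
        "use_llm_check": False,
    },
)
```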