neno-is-ooo committed on
Commit c2fc2a3 · 1 Parent(s): c0fc927

docs: Add clear initialization requirements and troubleshooting section

- Add prominent warning about required initialization steps
- Document common errors (AttributeError: __aenter__ and KeyError: 'history_messages')
- Add troubleshooting section with specific solutions
- Add inline comments in code example highlighting initialization requirements

This addresses user confusion when LightRAG fails with cryptic errors due to
missing initialization calls. The documentation now clearly states that both
await rag.initialize_storages() and await initialize_pipeline_status() must
be called after creating a LightRAG instance.
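The two-step requirement described above can be sketched with a toy stand-in (`MiniRAG` and `Storage` here are hypothetical illustration classes, not the real LightRAG API): attributes stay `None` until the explicit initialization calls run, which is why skipping them surfaces as `AttributeError: __aenter__` (reported as `TypeError` on Python 3.11+) or `KeyError: 'history_messages'`.

```python
import asyncio


class Storage:
    """Stand-in for a storage backend (illustrative, not the real LightRAG API)."""

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        return False


class MiniRAG:
    """Toy model of LightRAG's two required initialization steps."""

    def __init__(self):
        self.storage = None   # unusable until initialize_storages()
        self.pipeline = None  # unusable until initialize_pipeline_status()

    async def initialize_storages(self):
        self.storage = Storage()

    async def initialize_pipeline_status(self):
        self.pipeline = {"history_messages": []}

    async def query(self, text):
        # With self.storage still None, 'async with' fails:
        # AttributeError: __aenter__ (TypeError on Python 3.11+).
        async with self.storage:
            return self.pipeline["history_messages"]


async def main():
    rag = MiniRAG()
    try:
        await rag.query("hello")  # forgot both init calls
        first_error = None
    except (AttributeError, TypeError) as exc:
        first_error = type(exc).__name__
    # Correct order: create the instance, then both init calls, then use it.
    await rag.initialize_storages()
    await rag.initialize_pipeline_status()
    history = await rag.query("hello")
    return first_error, history


first_error, history = asyncio.run(main())
print(first_error, history)
```

The stand-in only mirrors the failure mode; in real code the fix is exactly the pattern the commit documents: `rag = LightRAG(...)`, then `await rag.initialize_storages()`, then `await initialize_pipeline_status()`.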

Files changed (1): README.md (+36 −2)
README.md CHANGED

@@ -149,6 +149,12 @@ For a streaming response implementation example, please see `examples/lightrag_o
 
 > If you would like to integrate LightRAG into your project, we recommend utilizing the REST API provided by the LightRAG Server. LightRAG Core is typically intended for embedded applications or for researchers who wish to conduct studies and evaluations.
 
+### ⚠️ Important: Initialization Requirements
+
+**LightRAG requires explicit initialization before use.** You must call both `await rag.initialize_storages()` and `await initialize_pipeline_status()` after creating a LightRAG instance, otherwise you will encounter errors like:
+- `AttributeError: __aenter__` - if storages are not initialized
+- `KeyError: 'history_messages'` - if pipeline status is not initialized
+
 ### A Simple Program
 
 Use the below Python snippet to initialize LightRAG, insert text to it, and perform queries:
@@ -173,8 +179,9 @@ async def initialize_rag():
         embedding_func=openai_embed,
         llm_model_func=gpt_4o_mini_complete,
     )
-    await rag.initialize_storages()
-    await initialize_pipeline_status()
+    # IMPORTANT: Both initialization calls are required!
+    await rag.initialize_storages()    # Initialize storage backends
+    await initialize_pipeline_status() # Initialize processing pipeline
     return rag
 
 async def main():
@@ -1501,6 +1508,33 @@ Thank you to all our contributors!
 <img src="https://contrib.rocks/image?repo=HKUDS/LightRAG" />
 </a>
 
+## Troubleshooting
+
+### Common Initialization Errors
+
+If you encounter these errors when using LightRAG:
+
+1. **`AttributeError: __aenter__`**
+   - **Cause**: Storage backends not initialized
+   - **Solution**: Call `await rag.initialize_storages()` after creating the LightRAG instance
+
+2. **`KeyError: 'history_messages'`**
+   - **Cause**: Pipeline status not initialized
+   - **Solution**: Call `await initialize_pipeline_status()` after initializing storages
+
+3. **Both errors in sequence**
+   - **Cause**: Neither initialization method was called
+   - **Solution**: Always follow this pattern:
+   ```python
+   rag = LightRAG(...)
+   await rag.initialize_storages()
+   await initialize_pipeline_status()
+   ```
+
+### Model Switching Issues
+
+When switching between different embedding models, you must clear the data directory to avoid errors. The only file you may want to preserve is `kv_store_llm_response_cache.json` if you wish to retain the LLM cache.
+
 ## 🌟Citation
 
 ```python