donbr committed
Commit 2e056db · 1 Parent(s): 65da904

Update README.md to move Neo4j Storage content


Move `Using Neo4J for Storage` content outside of the Ollama details group for improved visibility of this option.

Files changed (1)
  1. README.md +27 -28
README.md CHANGED
@@ -203,34 +203,6 @@ rag = LightRAG(
 )
 ```
 
- ### Using Neo4J for Storage
-
- * For production level scenarios you will most likely want to leverage an enterprise solution
- * for KG storage. Running Neo4J in Docker is recommended for seamless local testing.
- * See: https://hub.docker.com/_/neo4j
-
-
- ```python
- export NEO4J_URI="neo4j://localhost:7687"
- export NEO4J_USERNAME="neo4j"
- export NEO4J_PASSWORD="password"
-
- When you launch the project be sure to override the default KG: NetworkS
- by specifying kg="Neo4JStorage".
-
- # Note: Default settings use NetworkX
- #Initialize LightRAG with Neo4J implementation.
- WORKING_DIR = "./local_neo4jWorkDir"
-
- rag = LightRAG(
-     working_dir=WORKING_DIR,
-     llm_model_func=gpt_4o_mini_complete,  # Use gpt_4o_mini_complete LLM model
-     kg="Neo4JStorage",  #<-----------override KG default
-     log_level="DEBUG"  #<-----------override log_level default
- )
- ```
- see test_neo4j.py for a working example.
-
 ### Increasing context size
 In order for LightRAG to work context should be at least 32k tokens. By default Ollama models have context size of 8k. You can achieve this using one of two ways:
 
@@ -328,6 +300,33 @@ with open("./newText.txt") as f:
     rag.insert(f.read())
 ```
 
+ ### Using Neo4J for Storage
+
+ * For production level scenarios you will most likely want to leverage an enterprise solution
+ * for KG storage. Running Neo4J in Docker is recommended for seamless local testing.
+ * See: https://hub.docker.com/_/neo4j
+
+ ```python
+ export NEO4J_URI="neo4j://localhost:7687"
+ export NEO4J_USERNAME="neo4j"
+ export NEO4J_PASSWORD="password"
+
+ # When you launch the project be sure to override the default KG: NetworkX
+ # by specifying kg="Neo4JStorage".
+
+ # Note: Default settings use NetworkX
+ # Initialize LightRAG with Neo4J implementation.
+ WORKING_DIR = "./local_neo4jWorkDir"
+
+ rag = LightRAG(
+     working_dir=WORKING_DIR,
+     llm_model_func=gpt_4o_mini_complete,  # Use gpt_4o_mini_complete LLM model
+     kg="Neo4JStorage",  #<-----------override KG default
+     log_level="DEBUG"  #<-----------override log_level default
+ )
+ ```
+ see test_neo4j.py for a working example.
+
 ### Insert Custom KG
 
 ```python
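
For quick reference outside the diff, the relocated snippet can be exercised roughly as follows. This is a minimal sketch, not part of the commit: the import paths (`from lightrag import LightRAG`, `from lightrag.llm import gpt_4o_mini_complete`) and the in-process environment-variable setup are assumptions based on the surrounding README, and a Neo4j instance is expected to be reachable at the configured URI (e.g. the official Docker image).

```python
# Minimal sketch of the moved "Using Neo4J for Storage" snippet as a runnable script.
# Assumptions (not shown in this diff): the LightRAG and gpt_4o_mini_complete import
# paths, and a Neo4j server listening on neo4j://localhost:7687.
import os

from lightrag import LightRAG
from lightrag.llm import gpt_4o_mini_complete

# The README exports these in the shell; setting them in-process should also work,
# provided it happens before the Neo4JStorage backend is initialized.
os.environ["NEO4J_URI"] = "neo4j://localhost:7687"
os.environ["NEO4J_USERNAME"] = "neo4j"
os.environ["NEO4J_PASSWORD"] = "password"

WORKING_DIR = "./local_neo4jWorkDir"
os.makedirs(WORKING_DIR, exist_ok=True)

rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=gpt_4o_mini_complete,  # gpt-4o-mini wrapper used elsewhere in the README
    kg="Neo4JStorage",                    # override the NetworkX default, per the snippet above
    log_level="DEBUG",
)

# Insert a document, exactly as shown in the rest of the README.
with open("./newText.txt") as f:
    rag.insert(f.read())
```

Whether the credentials are read at construction time depends on the storage implementation, so exporting them in the shell, as the README does, remains the safer default.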