zrguo committed · Commit d5e7614 · Parent: ba4763a

Update README.md

README.md CHANGED
@@ -22,6 +22,7 @@ This repository hosts the code of LightRAG. The structure of this code is based
 </div>
 
 ## 🎉 News
+- [x] [2024.11.04]🎯📢You can [use Neo4J for Storage](https://github.com/HKUDS/LightRAG/edit/main/README.md#using-neo4j-for-storage) now.
 - [x] [2024.10.29]🎯📢LightRAG now supports multiple file types, including PDF, DOC, PPT, and CSV via `textract`.
 - [x] [2024.10.20]🎯📢We’ve added a new feature to LightRAG: Graph Visualization.
 - [x] [2024.10.18]🎯📢We’ve added a link to a [LightRAG Introduction Video](https://youtu.be/oageL-1I0GE). Thanks to the author!
@@ -161,39 +162,6 @@ rag = LightRAG(
 ```
 </details>
 
-
-<details>
-<summary> Using Neo4J for Storage </summary>
-
-* For production level scenarios you will most likely want to leverage an enterprise solution
-* for KG storage. Running Neo4J in Docker is recommended for seamless local testing.
-* See: https://hub.docker.com/_/neo4j
-
-
-```python
-export NEO4J_URI="neo4j://localhost:7687"
-export NEO4J_USERNAME="neo4j"
-export NEO4J_PASSWORD="password"
-
-When you launch the project be sure to override the default KG: NetworkS
-by specifying kg="Neo4JStorage".
-
-# Note: Default settings use NetworkX
-#Initialize LightRAG with Neo4J implementation.
-WORKING_DIR = "./local_neo4jWorkDir"
-
-rag = LightRAG(
-    working_dir=WORKING_DIR,
-    llm_model_func=gpt_4o_mini_complete,  # Use gpt_4o_mini_complete LLM model
-    kg="Neo4JStorage", #<-----------override KG default
-    log_level="DEBUG" #<-----------override log_level default
-)
-```
-see test_neo4j.py for a working example.
-</details>
-
-
-
 <details>
 <summary> Using Ollama Models </summary>
 
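The section moved by this commit configures Neo4J entirely through environment variables. As a minimal sketch of how those variables can be collected before constructing `LightRAG` (the `neo4j_settings` helper is hypothetical, not part of LightRAG; the variable names and default values are taken from the `export` lines above):

```python
import os

# Hypothetical helper (not part of LightRAG): gather the Neo4J
# connection settings from the environment, falling back to the
# defaults shown in the README's export lines.
def neo4j_settings() -> dict:
    return {
        "uri": os.environ.get("NEO4J_URI", "neo4j://localhost:7687"),
        "username": os.environ.get("NEO4J_USERNAME", "neo4j"),
        "password": os.environ.get("NEO4J_PASSWORD", "password"),
    }

settings = neo4j_settings()
# These values would then back the Neo4JStorage KG implementation,
# selected via LightRAG(..., kg="Neo4JStorage").
```

Keeping credentials in the environment rather than in code is the usual reason the README exports them before launch.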
@@ -222,6 +190,34 @@ rag = LightRAG(
 )
 ```
 
+### Using Neo4J for Storage
+
+For production-level scenarios you will most likely want to leverage an enterprise solution for KG storage. Running Neo4J in Docker is recommended for seamless local testing.
+See: https://hub.docker.com/_/neo4j
+
+```bash
+export NEO4J_URI="neo4j://localhost:7687"
+export NEO4J_USERNAME="neo4j"
+export NEO4J_PASSWORD="password"
+```
+
+When you launch the project, be sure to override the default KG (NetworkX) by specifying `kg="Neo4JStorage"`:
+
+```python
+# Note: default settings use NetworkX.
+# Initialize LightRAG with the Neo4J implementation.
+WORKING_DIR = "./local_neo4jWorkDir"
+
+rag = LightRAG(
+    working_dir=WORKING_DIR,
+    llm_model_func=gpt_4o_mini_complete,  # Use the gpt_4o_mini_complete LLM model
+    kg="Neo4JStorage",  # <-- override the KG default
+    log_level="DEBUG",  # <-- override the log_level default
+)
+```
+
+See test_neo4j.py for a working example.
+
 ### Increasing context size
 For LightRAG to work, the context should be at least 32k tokens; by default, Ollama models have a context size of only 8k. You can increase it in one of two ways:
 
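The two ways are not spelled out in this excerpt; with Ollama they are typically either baking a larger `num_ctx` into a model variant via a Modelfile, or passing `num_ctx` in the per-request `options`. A sketch of the Modelfile route, assuming the Ollama CLI is installed (the base model name is only illustrative):

```
# Modelfile — derive a 32k-context variant of an existing model
FROM your-base-model        # illustrative; any model already pulled into Ollama
PARAMETER num_ctx 32768
```

Build the variant with `ollama create your-base-model-32k -f Modelfile` and point LightRAG's `llm_model_name` at it; alternatively, `num_ctx` can be supplied per request through the Ollama API's `options` field.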