Pankaj Kaushal
committed on
Commit · de814c2
Parent(s): 531302d
Update README.md: Refactor LlamaIndex section and example code
- Simplified LlamaIndex documentation in README
- Removed wrapper directory references
- Updated example code to reflect new directory structure
- Cleaned up custom knowledge graph example
- Adjusted file paths and import statements
README.md
CHANGED
@@ -313,30 +313,26 @@ In order to run this experiment on low RAM GPU you should select small model and
 
 </details>
 <details>
-<summary> <b>
+<summary> <b>LlamaIndex</b> </summary>
 
-LightRAG supports integration with
+LightRAG supports integration with LlamaIndex.
 
-
-
-1. **LlamaIndex** (`wrapper/llama_index_impl.py`):
+1. **LlamaIndex** (`llm/llama_index_impl.py`):
    - Integrates with OpenAI and other providers through LlamaIndex
-
-   - Provides consistent interfaces for embeddings and completions
-   - See [LlamaIndex Wrapper Documentation](lightrag/wrapper/Readme.md) for detailed setup and examples
+   - See [LlamaIndex Documentation](lightrag/llm/Readme.md) for detailed setup and examples
 
 ### Example Usage
 
 ```python
 # Using LlamaIndex with direct OpenAI access
 from lightrag import LightRAG
-from lightrag.
+from lightrag.llm.llama_index_impl import llama_index_complete_if_cache, llama_index_embed
 from llama_index.embeddings.openai import OpenAIEmbedding
 from llama_index.llms.openai import OpenAI
 
 rag = LightRAG(
     working_dir="your/path",
-    llm_model_func=
+    llm_model_func=llama_index_complete_if_cache, # LlamaIndex-compatible completion function
     embedding_func=EmbeddingFunc( # LlamaIndex-compatible embedding function
         embedding_dim=1536,
         max_token_size=8192,
@@ -346,9 +342,9 @@ rag = LightRAG(
 ```
 
 #### For detailed documentation and examples, see:
-- [LlamaIndex
-- [Direct OpenAI Example](examples/
-- [LiteLLM Proxy Example](examples/
+- [LlamaIndex Documentation](lightrag/llm/Readme.md)
+- [Direct OpenAI Example](examples/lightrag_llamaindex_direct_demo.py)
+- [LiteLLM Proxy Example](examples/lightrag_llamaindex_litellm_demo.py)
 
 </details>
 <details>
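The changed example passes `llama_index_complete_if_cache` and an `EmbeddingFunc`-wrapped embedder straight into `LightRAG`. Running it needs the lightrag and llama-index packages plus an OpenAI key, so here is a dependency-free sketch of the configuration shape only; the `EmbeddingFunc` dataclass and `fake_embed` below are hypothetical stand-ins, not lightrag's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EmbeddingFunc:
    # Hypothetical stand-in for lightrag's EmbeddingFunc: bundles the vector
    # dimension and token limit with the callable that produces embeddings.
    embedding_dim: int
    max_token_size: int
    func: Callable[[List[str]], List[List[float]]]

    def __call__(self, texts: List[str]) -> List[List[float]]:
        return self.func(texts)

def fake_embed(texts: List[str]) -> List[List[float]]:
    # Dummy embedder standing in for llama_index_embed + OpenAIEmbedding.
    return [[0.0] * 1536 for _ in texts]

# Same shape as the embedding_func argument in the README example above.
embedding_func = EmbeddingFunc(embedding_dim=1536, max_token_size=8192, func=fake_embed)
vectors = embedding_func(["ProductX overview", "PersonA bio"])
```

This mirrors only the wiring shown in the diff: the completion and embedding callables are plain functions handed to the `LightRAG` constructor.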
@@ -499,22 +495,14 @@ custom_kg = {
         {
             "content": "ProductX, developed by CompanyA, has revolutionized the market with its cutting-edge features.",
             "source_id": "Source1",
-            "chunk_order_index": 0,
-        },
-        {
-            "content": "One outstanding feature of ProductX is its advanced AI capabilities.",
-            "source_id": "Source1",
-            "chunk_order_index": 1,
         },
         {
             "content": "PersonA is a prominent researcher at UniversityB, focusing on artificial intelligence and machine learning.",
             "source_id": "Source2",
-            "chunk_order_index": 0,
         },
         {
             "content": "None",
             "source_id": "UNKNOWN",
-            "chunk_order_index": 0,
         },
     ],
 }
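The last hunk drops every `chunk_order_index` field and the second ProductX chunk ("One outstanding feature…") from the custom knowledge-graph example. Reconstructed as plain Python from the hunk, the resulting chunk list looks like this:

```python
# Chunks after the cleanup: each chunk keeps only "content" and "source_id".
custom_kg = {
    "chunks": [
        {
            "content": "ProductX, developed by CompanyA, has revolutionized the market with its cutting-edge features.",
            "source_id": "Source1",
        },
        {
            "content": "PersonA is a prominent researcher at UniversityB, focusing on artificial intelligence and machine learning.",
            "source_id": "Source2",
        },
        {
            "content": "None",
            "source_id": "UNKNOWN",
        },
    ],
}

# Every chunk now carries exactly these two keys.
assert all(set(chunk) == {"content", "source_id"} for chunk in custom_kg["chunks"])
```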