yangdx committed on
Commit e6f3b63 · 1 Parent(s): 3e1df32

Update sample code and README

Files changed (3):
  1. README-zh.md +29 -21
  2. README.md +30 -22
  3. lightrag/llm/openai.py +1 -1
README-zh.md CHANGED
@@ -35,21 +35,6 @@
  
  ## Installation
  
- ### Install LightRAG Core
- 
- * Install from source (recommended)
- 
- ```bash
- cd LightRAG
- pip install -e .
- ```
- 
- * Install from PyPI
- 
- ```bash
- pip install lightrag-hku
- ```
- 
  ### Install the LightRAG Server
  
  The LightRAG Server is designed to provide Web UI and API support. The Web UI facilitates document indexing, knowledge graph exploration, and a simple RAG query interface. The LightRAG Server also provides an Ollama-compatible interface, aiming to emulate LightRAG as an Ollama chat model. This allows AI chatbots, such as Open WebUI, to access LightRAG easily.
@@ -68,17 +53,40 @@ pip install "lightrag-hku[api]"
  pip install -e ".[api]"
  ```
  
+ ### Install LightRAG Core
+ 
+ * Install from source (recommended)
+ 
+ ```bash
+ cd LightRAG
+ pip install -e .
+ ```
+ 
+ * Install from PyPI
+ 
+ ```bash
+ pip install lightrag-hku
+ ```
+ 
+ ## Quick Start
+ 
+ ### Using the LightRAG Server
+ 
  **For more information about the LightRAG Server, see [LightRAG Server](./lightrag/api/README.md).**
  
- ## Quick Start (LightRAG Core only)
+ ## Using LightRAG Core
  
- * [Video demo](https://www.youtube.com/watch?v=g21royNJ4fw) showing how to run LightRAG locally.
- * All the code can be found in the `examples` directory.
- * If you are using OpenAI models, set your OpenAI API key in the environment: `export OPENAI_API_KEY="sk-..."`.
- * Download the demo text, "A Christmas Carol" by Dickens:
+ Sample code for LightRAG core features can be found in the `examples` directory. You can also follow the [video](https://www.youtube.com/watch?v=g21royNJ4fw) to set up your environment. If you already have an OpenAI API key, you can run the demo with the following commands:
  
  ```bash
+ ### run the demo code from within the project folder
+ cd LightRAG
+ ### provide your OpenAI API key
+ export OPENAI_API_KEY="sk-...your_openai_key..."
+ ### download the demo document, "A Christmas Carol" by Charles Dickens
  curl https://raw.githubusercontent.com/gusye1234/nano-graphrag/main/tests/mock_data.txt > ./book.txt
+ ### run the demo code
+ python examples/lightrag_openai_demo.py
  ```
  
  ## Query
@@ -815,7 +823,7 @@ rag = LightRAG(
  create INDEX CONCURRENTLY entity_idx_node_id ON dickens."Entity" (ag_catalog.agtype_access_operator(properties, '"node_id"'::agtype));
  CREATE INDEX CONCURRENTLY entity_node_id_gin_idx ON dickens."Entity" using gin(properties);
  ALTER TABLE dickens."DIRECTED" CLUSTER ON directed_sid_idx;
- 
+ 
  -- drop if necessary
  drop INDEX entity_p_idx;
  drop INDEX vertex_p_idx;
README.md CHANGED
@@ -71,21 +71,6 @@
  
  ## Installation
  
- ### Install LightRAG Core
- 
- * Install from source (recommended)
- 
- ```bash
- cd LightRAG
- pip install -e .
- ```
- 
- * Install from PyPI
- 
- ```bash
- pip install lightrag-hku
- ```
- 
  ### Install LightRAG Server
  
  The LightRAG Server is designed to provide Web UI and API support. The Web UI facilitates document indexing, knowledge graph exploration, and a simple RAG query interface. LightRAG Server also provides an Ollama-compatible interface, aiming to emulate LightRAG as an Ollama chat model. This allows AI chatbots, such as Open WebUI, to access LightRAG easily.
@@ -104,17 +89,40 @@ pip install "lightrag-hku[api]"
  pip install -e ".[api]"
  ```
  
+ ### Install LightRAG Core
+ 
+ * Install from source (recommended)
+ 
+ ```bash
+ cd LightRAG
+ pip install -e .
+ ```
+ 
+ * Install from PyPI
+ 
+ ```bash
+ pip install lightrag-hku
+ ```
+ 
+ ## Quick Start
+ 
+ ### Quick Start for LightRAG Server
+ 
- **For more information about LightRAG Server, please refer to [LightRAG Server](./lightrag/api/README.md).**
+ For more information about LightRAG Server, please refer to [LightRAG Server](./lightrag/api/README.md).
  
- ## Quick Start for LightRAG core only
+ ### Quick Start for LightRAG core
  
- * [Video demo](https://www.youtube.com/watch?v=g21royNJ4fw) of running LightRAG locally.
- * All the code can be found in the `examples` directory.
- * Set your OpenAI API key in the environment if using OpenAI models: `export OPENAI_API_KEY="sk-..."`
- * Download the demo text "A Christmas Carol" by Charles Dickens:
+ To get started with LightRAG core, refer to the sample code in the `examples` folder. A [video demo](https://www.youtube.com/watch?v=g21royNJ4fw) is also provided to guide you through the local setup. If you already have an OpenAI API key, you can run the demo right away:
  
  ```bash
+ ### run the demo code from within the project folder
+ cd LightRAG
+ ### provide your OpenAI API key
+ export OPENAI_API_KEY="sk-...your_openai_key..."
+ ### download the demo document, "A Christmas Carol" by Charles Dickens
  curl https://raw.githubusercontent.com/gusye1234/nano-graphrag/main/tests/mock_data.txt > ./book.txt
+ ### run the demo code
+ python examples/lightrag_openai_demo.py
  ```
  
  ## Query
@@ -836,7 +844,7 @@ For production level scenarios you will most likely want to leverage an enterpri
  create INDEX CONCURRENTLY entity_idx_node_id ON dickens."Entity" (ag_catalog.agtype_access_operator(properties, '"node_id"'::agtype));
  CREATE INDEX CONCURRENTLY entity_node_id_gin_idx ON dickens."Entity" using gin(properties);
  ALTER TABLE dickens."DIRECTED" CLUSTER ON directed_sid_idx;
- 
+ 
  -- drop if necessary
  drop INDEX entity_p_idx;
  drop INDEX vertex_p_idx;
lightrag/llm/openai.py CHANGED
@@ -89,7 +89,7 @@ def create_openai_async_client(
      if base_url is not None:
          merged_configs["base_url"] = base_url
      else:
-         merged_configs["base_url"] = os.environ["OPENAI_API_BASE"]
+         merged_configs["base_url"] = os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1")
  
      return AsyncOpenAI(**merged_configs)
  
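The `openai.py` change swaps a direct `os.environ[...]` lookup for `os.environ.get()` with a default, so client creation no longer raises `KeyError` when `OPENAI_API_BASE` is unset. A minimal sketch of the resulting resolution order (the `resolve_base_url` helper below is illustrative, not part of LightRAG):

```python
import os

def resolve_base_url(base_url=None, default="https://api.openai.com/v1"):
    """Mirror the patched logic: an explicit argument wins, then the
    OPENAI_API_BASE environment variable, then a hard-coded default."""
    if base_url is not None:
        return base_url
    # os.environ["OPENAI_API_BASE"] would raise KeyError when the
    # variable is unset; .get() falls back to the default instead.
    return os.environ.get("OPENAI_API_BASE", default)

os.environ.pop("OPENAI_API_BASE", None)
print(resolve_base_url())                      # default URL, no KeyError
os.environ["OPENAI_API_BASE"] = "http://localhost:8000/v1"
print(resolve_base_url())                      # env var wins when set
print(resolve_base_url("http://proxy:9000"))   # explicit arg wins over both
```

This is the usual precedence for OpenAI-compatible proxies: callers who pass `base_url` are unaffected, and deployments that only set the environment variable keep working.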