yangdx committed
Commit 6237af5 · 1 Parent(s): 39b6e79

Refine LLM settings in env sample file

Files changed (2):
  1. .env.example +5 -1
  2. lightrag/api/README.md +1 -2
.env.example CHANGED
@@ -45,17 +45,21 @@
 # MAX_EMBED_TOKENS=8192
 
 ### LLM Configuration (Use valid host. For local services installed with docker, you can use host.docker.internal)
+LLM_BINDING=ollama
 LLM_MODEL=mistral-nemo:latest
 LLM_BINDING_API_KEY=your_api_key
 ### Ollama example
-LLM_BINDING=ollama
 LLM_BINDING_HOST=http://localhost:11434
 ### OpenAI alike example
 # LLM_BINDING=openai
+# LLM_MODEL=gpt-4o
 # LLM_BINDING_HOST=https://api.openai.com/v1
+# LLM_BINDING_API_KEY=your_api_key
 ### lollms example
 # LLM_BINDING=lollms
+# LLM_MODEL=mistral-nemo:latest
 # LLM_BINDING_HOST=http://localhost:9600
+# LLM_BINDING_API_KEY=your_api_key
 
 ### Embedding Configuration (Use valid host. For local services installed with docker, you can use host.docker.internal)
 EMBEDDING_MODEL=bge-m3:latest
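The sample file groups one `LLM_BINDING` selector with per-binding host, model, and API-key variables. As a minimal sketch of how a server might consume these settings, assuming the variable names from the sample file but with illustrative defaults and dispatch logic (this is not LightRAG's actual implementation):

```python
import os

# Illustrative per-binding default hosts, taken from the sample file's examples.
DEFAULT_HOSTS = {
    "ollama": "http://localhost:11434",
    "openai": "https://api.openai.com/v1",
    "lollms": "http://localhost:9600",
}

def load_llm_config(env=None):
    """Read the LLM_* variables; fall back to the sample-file defaults."""
    env = os.environ if env is None else env
    binding = env.get("LLM_BINDING", "ollama")
    return {
        "binding": binding,
        "model": env.get("LLM_MODEL", "mistral-nemo:latest"),
        "host": env.get("LLM_BINDING_HOST", DEFAULT_HOSTS.get(binding, "")),
        "api_key": env.get("LLM_BINDING_API_KEY"),
    }
```

With this shape, uncommenting one example block in `.env.example` switches the backend while the remaining variables pick up matching defaults.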
lightrag/api/README.md CHANGED
@@ -45,7 +45,7 @@ EMBEDDING_BINDING_HOST=http://localhost:11434
 LLM_BINDING_HOST=http://localhost:9600
 EMBEDDING_BINDING_HOST=http://localhost:9600
 
-# for openai, openai compatible or azure openai backend
+# for openai, openai compatible or azure openai backend
 LLM_BINDING_HOST=https://api.openai.com/v1
 EMBEDDING_BINDING_HOST=http://localhost:9600
 ```
@@ -502,4 +502,3 @@ A query prefix in the query string can determines which LightRAG query mode is u
 For example, chat message "/mix 唐僧有几个徒弟" will trigger a mix mode query for LighRAG. A chat message without query prefix will trigger a hybrid mode query by default。
 
 "/bypass" is not a LightRAG query mode, it will tell API Server to pass the query directly to the underlying LLM with chat history. So user can use LLM to answer question base on the chat history. If you are using Open WebUI as front end, you can just switch the model to a normal LLM instead of using /bypass prefix.
-
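The README passage above describes prefix-based routing: a leading "/mode" token selects the LightRAG query mode, "/bypass" sends the message straight to the underlying LLM, and a message with no prefix defaults to hybrid mode. A minimal sketch of that routing rule, where the function name, mode list, and return shape are assumptions for illustration rather than LightRAG's actual API:

```python
# Query modes assumed for illustration; "bypass" is handled separately
# because it skips LightRAG and goes straight to the LLM.
QUERY_MODES = {"local", "global", "hybrid", "naive", "mix"}

def route_query(message: str):
    """Return (mode, remaining_text) for a chat message."""
    if message.startswith("/"):
        prefix, _, rest = message[1:].partition(" ")
        if prefix == "bypass":
            return ("bypass", rest)   # pass directly to the underlying LLM
        if prefix in QUERY_MODES:
            return (prefix, rest)     # explicit LightRAG query mode
    return ("hybrid", message)        # default when no prefix is given
```

Under this sketch, "/mix 唐僧有几个徒弟" routes to mix mode with the question as the query text, matching the README's example.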