yangdx committed
Commit f4d51fa · 1 Parent(s): eddd4f5

Remove the comments at the end of the environment variable lines in .env file

Files changed (2):
  1. env.example +7 -3
  2. lightrag/api/README.md +0 -1
env.example CHANGED
@@ -55,10 +55,14 @@ SUMMARY_LANGUAGE=English
 # MAX_EMBED_TOKENS=8192
 
 ### LLM Configuration
-TIMEOUT=150 # Time out in seconds for LLM, None for infinite timeout
+### Time out in seconds for LLM, None for infinite timeout
+TIMEOUT=150
+### Some models like o1-mini require temperature to be set to 1
 TEMPERATURE=0.5
-MAX_ASYNC=4 # Max concurrency requests of LLM
-MAX_TOKENS=32768 # Max tokens send to LLM (less than context size of the model)
+### Max concurrency requests of LLM
+MAX_ASYNC=4
+### Max tokens send to LLM (less than context size of the model)
+MAX_TOKENS=32768
 
 ### Ollama example (For local services installed with docker, you can use host.docker.internal as host)
 LLM_BINDING=ollama
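Context for the change: a .env loader that treats everything after the first `=` as the value will include a trailing `# comment` in the value itself, so a later `int(TIMEOUT)` fails. The sketch below is a hypothetical naive loader (not LightRAG's actual parsing code) illustrating why the comments moved onto their own lines:

```python
def load_env_naive(path: str) -> dict[str, str]:
    """Minimal .env loader (hypothetical): skips full-line comments,
    but treats everything after the first '=' as the value."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # full-line comments are safe
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

# Old style:  TIMEOUT=150 # Time out in seconds for LLM, ...
#   -> env["TIMEOUT"] == "150 # Time out in seconds for LLM, ..."
#   -> int(env["TIMEOUT"]) raises ValueError
#
# New style:  ### Time out in seconds for LLM, None for infinite timeout
#             TIMEOUT=150
#   -> env["TIMEOUT"] == "150", which parses cleanly
```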
lightrag/api/README.md CHANGED
@@ -422,7 +422,6 @@ EMBEDDING_BINDING_HOST=http://localhost:11434
 ```
 
 
-
 ## API Endpoints
 
 All servers (LoLLMs, Ollama, OpenAI and Azure OpenAI) provide the same REST API endpoints for RAG functionality. When API Server is running, visit: