ParisNeo committed on
Commit 0af7e9b
· 1 Parent(s): a0d3b33

Added Docker container setup

Files changed (4)
  1. .env.example +37 -0
  2. Dockerfile +38 -0
  3. docker-compose.yml +21 -0
  4. docs/DockerDeployment.md +174 -0
.env.example ADDED
@@ -0,0 +1,37 @@
# Server Configuration
HOST=0.0.0.0
PORT=9621

# Directory Configuration
WORKING_DIR=/app/data/rag_storage
INPUT_DIR=/app/data/inputs

# LLM Configuration
LLM_BINDING=ollama
LLM_BINDING_HOST=http://localhost:11434
LLM_MODEL=mistral-nemo:latest

# Embedding Configuration
EMBEDDING_BINDING=ollama
EMBEDDING_BINDING_HOST=http://localhost:11434
EMBEDDING_MODEL=bge-m3:latest

# RAG Configuration
MAX_ASYNC=4
MAX_TOKENS=32768
EMBEDDING_DIM=1024
MAX_EMBED_TOKENS=8192

# Security (leave empty to disable API key authentication)
LIGHTRAG_API_KEY=your-secure-api-key-here

# Logging
LOG_LEVEL=INFO

# Optional SSL Configuration
#SSL=true
#SSL_CERTFILE=/path/to/cert.pem
#SSL_KEYFILE=/path/to/key.pem

# Optional Timeout
#TIMEOUT=30
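As a sketch of how a server might consume these variables at startup, the snippet below parses a few of them with the defaults documented above. The `ServerConfig` class is hypothetical, for illustration only — it is not LightRAG's actual configuration code:

```python
import os
from dataclasses import dataclass


@dataclass
class ServerConfig:
    """Hypothetical illustration of parsing variables from .env.example."""
    host: str
    port: int
    max_async: int
    max_tokens: int

    @classmethod
    def from_env(cls) -> "ServerConfig":
        # Fall back to the defaults documented in .env.example
        return cls(
            host=os.environ.get("HOST", "0.0.0.0"),
            port=int(os.environ.get("PORT", "9621")),
            max_async=int(os.environ.get("MAX_ASYNC", "4")),
            max_tokens=int(os.environ.get("MAX_TOKENS", "32768")),
        )


if __name__ == "__main__":
    cfg = ServerConfig.from_env()
    print(cfg.host, cfg.port)
```

Numeric values arrive from the environment as strings, so each one is converted explicitly; a malformed value fails fast with a `ValueError` rather than propagating silently.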
Dockerfile ADDED
@@ -0,0 +1,38 @@
# Build stage
FROM python:3.11-slim AS builder

WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

# Copy only the requirements files first to leverage the Docker layer cache
COPY requirements.txt .
COPY lightrag/api/requirements.txt ./lightrag/api/

# Install dependencies into /root/.local
RUN pip install --user --no-cache-dir -r requirements.txt
RUN pip install --user --no-cache-dir -r lightrag/api/requirements.txt

# Final stage
FROM python:3.11-slim

WORKDIR /app

# Copy the installed packages from the builder stage
COPY --from=builder /root/.local /root/.local
COPY . .

# Make sure scripts installed in .local are on PATH
ENV PATH=/root/.local/bin:$PATH

# Create the data directories for RAG storage and input documents
RUN mkdir -p /app/data/rag_storage /app/data/inputs

# Expose the default server port
EXPOSE 9621

# Start the LightRAG API server
ENTRYPOINT ["python", "-m", "lightrag.api.lightrag_server"]
docker-compose.yml ADDED
@@ -0,0 +1,21 @@
version: '3.8'

services:
  lightrag:
    build: .
    ports:
      - "${PORT:-9621}:9621"
    volumes:
      - ./data/rag_storage:/app/data/rag_storage
      - ./data/inputs:/app/data/inputs
    env_file:
      - .env
    environment:
      - TZ=UTC
    restart: unless-stopped
    networks:
      - lightrag_net

networks:
  lightrag_net:
    driver: bridge
docs/DockerDeployment.md ADDED
@@ -0,0 +1,174 @@
# LightRAG

A lightweight Knowledge Graph Retrieval-Augmented Generation system with multiple LLM backend support.

## 🚀 Installation

### Prerequisites
- Python 3.10+
- Git
- Docker (required only for Docker deployment)

### Native Installation

1. Clone the repository:
```bash
# Linux/macOS
git clone https://github.com/ParisNeo/LightRAG.git
cd LightRAG
```
```powershell
# Windows PowerShell
git clone https://github.com/ParisNeo/LightRAG.git
cd LightRAG
```

2. Configure your environment:
```bash
# Linux/macOS
cp .env.example .env
# Edit .env with your preferred configuration
```
```powershell
# Windows PowerShell
Copy-Item .env.example .env
# Edit .env with your preferred configuration
```

3. Create and activate a virtual environment:
```bash
# Linux/macOS
python -m venv venv
source venv/bin/activate
```
```powershell
# Windows PowerShell
python -m venv venv
.\venv\Scripts\Activate
```

4. Install dependencies:
```bash
# Both platforms
pip install -r requirements.txt
```

## 🐳 Docker Deployment

The Docker instructions work the same on all platforms with Docker Desktop (or Docker Engine plus Compose) installed.

1. Build and start the container:
```bash
docker-compose up -d
```

### Configuration Options

LightRAG can be configured using environment variables in the `.env` file:

#### Server Configuration
- `HOST`: Server host (default: 0.0.0.0)
- `PORT`: Server port (default: 9621)

#### LLM Configuration
- `LLM_BINDING`: LLM backend to use (lollms/ollama/openai)
- `LLM_BINDING_HOST`: LLM server host URL
- `LLM_MODEL`: Model name to use

#### Embedding Configuration
- `EMBEDDING_BINDING`: Embedding backend (lollms/ollama/openai)
- `EMBEDDING_BINDING_HOST`: Embedding server host URL
- `EMBEDDING_MODEL`: Embedding model name

#### RAG Configuration
- `MAX_ASYNC`: Maximum number of concurrent async operations
- `MAX_TOKENS`: Maximum number of tokens
- `EMBEDDING_DIM`: Embedding dimensions
- `MAX_EMBED_TOKENS`: Maximum number of tokens per embedding input
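A common way to enforce a concurrency bound like `MAX_ASYNC` in Python is an `asyncio.Semaphore`. The sketch below is purely illustrative — it does not reproduce LightRAG's internals — and uses `asyncio.sleep` as a stand-in for an LLM or embedding request:

```python
import asyncio

MAX_ASYNC = 4  # value from the example .env


async def bounded_call(sem: asyncio.Semaphore, counter: dict, i: int) -> int:
    # Acquire the semaphore so at most MAX_ASYNC calls run concurrently
    async with sem:
        counter["active"] += 1
        counter["peak"] = max(counter["peak"], counter["active"])
        await asyncio.sleep(0.01)  # stand-in for an LLM/embedding request
        counter["active"] -= 1
    return i


async def main() -> int:
    sem = asyncio.Semaphore(MAX_ASYNC)
    counter = {"active": 0, "peak": 0}
    await asyncio.gather(*(bounded_call(sem, counter, i) for i in range(16)))
    return counter["peak"]


if __name__ == "__main__":
    peak = asyncio.run(main())
    print(f"peak concurrency: {peak}")
```

Sixteen tasks are launched at once, but the semaphore caps how many are inside the `async with` block at any moment, so the recorded peak never exceeds `MAX_ASYNC`.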

#### Security
- `LIGHTRAG_API_KEY`: API key for authentication (leave empty to disable)
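An API-key check along these lines is typical for this kind of header-based authentication. This is a hypothetical sketch, not LightRAG's actual validation code; it follows the `.env.example` convention that an empty or unset key disables authentication:

```python
import hmac
import os
from typing import Optional


def api_key_ok(provided: Optional[str]) -> bool:
    """Check an X-API-Key header value against LIGHTRAG_API_KEY.

    Hypothetical sketch: an empty or unset key disables authentication.
    """
    expected = os.environ.get("LIGHTRAG_API_KEY", "")
    if not expected:
        return True  # authentication disabled
    if provided is None:
        return False
    # hmac.compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(provided, expected)


if __name__ == "__main__":
    os.environ["LIGHTRAG_API_KEY"] = "secret"
    print(api_key_ok("secret"), api_key_ok("wrong"))
```

Using `hmac.compare_digest` instead of `==` keeps the comparison constant-time, so an attacker cannot infer key prefixes from response latency.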

### Data Storage Paths

The system uses the following paths for data storage:
```
data/
├── rag_storage/    # RAG data persistence
└── inputs/         # Input documents
```

### Example Deployments

1. Using with Ollama:
```env
LLM_BINDING=ollama
LLM_BINDING_HOST=http://localhost:11434
LLM_MODEL=mistral
EMBEDDING_BINDING=ollama
EMBEDDING_BINDING_HOST=http://localhost:11434
EMBEDDING_MODEL=bge-m3
```

2. Using with OpenAI:
```env
LLM_BINDING=openai
LLM_MODEL=gpt-3.5-turbo
EMBEDDING_BINDING=openai
EMBEDDING_MODEL=text-embedding-ada-002
OPENAI_API_KEY=your-api-key
```

### API Usage

Once deployed, you can interact with the API at `http://localhost:9621`.

Example query using PowerShell:
```powershell
$headers = @{
    "X-API-Key" = "your-api-key"
    "Content-Type" = "application/json"
}
$body = @{
    query = "your question here"
} | ConvertTo-Json

Invoke-RestMethod -Uri "http://localhost:9621/query" -Method Post -Headers $headers -Body $body
```

Example query using curl:
```bash
curl -X POST "http://localhost:9621/query" \
     -H "X-API-Key: your-api-key" \
     -H "Content-Type: application/json" \
     -d '{"query": "your question here"}'
```
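The same request can be issued from Python with only the standard library. The URL and headers below mirror the curl example; the `build_query_request` helper is illustrative, and the response schema depends on the server, so it is not shown:

```python
import json
import urllib.request


def build_query_request(base_url: str, api_key: str, query: str) -> urllib.request.Request:
    """Build the POST /query request shown in the curl example above."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/query",
        data=body,
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    req = build_query_request("http://localhost:9621", "your-api-key", "your question here")
    # Uncomment to send against a running server:
    # with urllib.request.urlopen(req) as resp:
    #     print(resp.read().decode())
    print(req.full_url)
```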

## 🔒 Security

Remember to:
1. Set a strong API key in production
2. Use SSL in production environments
3. Configure proper network security

## 📦 Updates

To update the Docker container (the image is built from this repository, so pull the source and rebuild):
```bash
git pull
docker-compose up -d --build
```

To update a native installation:
```bash
# Linux/macOS
git pull
source venv/bin/activate
pip install -r requirements.txt
```
```powershell
# Windows PowerShell
git pull
.\venv\Scripts\Activate
pip install -r requirements.txt
```