ParisNeo committed
Commit 02420d0 · 1 Parent(s): 0c63b5c

Files changed (2):
  1. api/README.md +16 -17
  2. api/ollama_lightrag_server.py +1 -1
api/README.md CHANGED
@@ -47,7 +47,7 @@ pip install -r requirements.txt
 The server can be configured using command-line arguments:
 
 ```bash
-python ollama_lightrag_server.py --help
+python ollama_lightollama_lightrag_server.py --help
 ```
 
 Available options:
@@ -55,7 +55,7 @@ Available options:
 | Parameter | Default | Description |
 |-----------|---------|-------------|
 | --host | 0.0.0.0 | Server host |
-| --port | 8000 | Server port |
+| --port | 9621 | Server port |
 | --model | gemma2:2b | LLM model name |
 | --embedding-model | nomic-embed-text | Embedding model name |
 | --ollama-host | http://localhost:11434 | Ollama host URL |
@@ -71,20 +71,19 @@ Available options:
 
 1. Basic usage with default settings:
 ```bash
-python rag_server.py
+python ollama_lightrag_server.py
 ```
 
 2. Custom configuration:
 ```bash
-python rag_server.py --model llama2:13b --port 8080 --working-dir ./custom_rag
+python ollama_lightrag_server.py --model llama2:13b --port 8080 --working-dir ./custom_rag
 ```
 
-3. Using the launch script:
 ```bash
-chmod +x launch_rag_server.sh
-./launch_rag_server.sh
+python ollama_lightrag_server.py --model mistral-nemo:latest --embedding-dim 1024 --embedding-model bge-m3
 ```
 
+
 ## API Endpoints
 
 ### Query Endpoints
@@ -93,7 +92,7 @@ chmod +x launch_rag_server.sh
 Query the RAG system with options for different search modes.
 
 ```bash
-curl -X POST "http://localhost:8000/query" \
+curl -X POST "http://localhost:9621/query" \
 -H "Content-Type: application/json" \
 -d '{"query": "Your question here", "mode": "hybrid"}'
 ```
@@ -102,7 +101,7 @@ curl -X POST "http://localhost:8000/query" \
 Stream responses from the RAG system.
 
 ```bash
-curl -X POST "http://localhost:8000/query/stream" \
+curl -X POST "http://localhost:9621/query/stream" \
 -H "Content-Type: application/json" \
 -d '{"query": "Your question here", "mode": "hybrid"}'
 ```
@@ -113,7 +112,7 @@ curl -X POST "http://localhost:8000/query/stream" \
 Insert text directly into the RAG system.
 
 ```bash
-curl -X POST "http://localhost:8000/documents/text" \
+curl -X POST "http://localhost:9621/documents/text" \
 -H "Content-Type: application/json" \
 -d '{"text": "Your text content here", "description": "Optional description"}'
 ```
@@ -122,7 +121,7 @@ curl -X POST "http://localhost:8000/documents/text" \
 Upload a single file to the RAG system.
 
 ```bash
-curl -X POST "http://localhost:8000/documents/file" \
+curl -X POST "http://localhost:9621/documents/file" \
 -F "file=@/path/to/your/document.txt" \
 -F "description=Optional description"
 ```
@@ -131,7 +130,7 @@ curl -X POST "http://localhost:8000/documents/file" \
 Upload multiple files at once.
 
 ```bash
-curl -X POST "http://localhost:8000/documents/batch" \
+curl -X POST "http://localhost:9621/documents/batch" \
 -F "files=@/path/to/doc1.txt" \
 -F "files=@/path/to/doc2.txt"
 ```
@@ -140,7 +139,7 @@ curl -X POST "http://localhost:8000/documents/batch" \
 Clear all documents from the RAG system.
 
 ```bash
-curl -X DELETE "http://localhost:8000/documents"
+curl -X DELETE "http://localhost:9621/documents"
 ```
 
 ### Utility Endpoints
@@ -149,7 +148,7 @@ curl -X DELETE "http://localhost:8000/documents"
 Check server health and configuration.
 
 ```bash
-curl "http://localhost:8000/health"
+curl "http://localhost:9621/health"
 ```
 
 ## Development
@@ -157,14 +156,14 @@ curl "http://localhost:8000/health"
 ### Running in Development Mode
 
 ```bash
-uvicorn rag_server:app --reload --port 8000
+uvicorn ollama_lightrag_server:app --reload --port 9621
 ```
 
 ### API Documentation
 
 When the server is running, visit:
-- Swagger UI: http://localhost:8000/docs
-- ReDoc: http://localhost:8000/redoc
+- Swagger UI: http://localhost:9621/docs
+- ReDoc: http://localhost:9621/redoc
 
 ## Contributing
 
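After this commit, clients must target port 9621 by default instead of 8000. A minimal sketch of composing the `POST /query` call documented in the README, using only the Python standard library (the helper name `build_query_request` and the choice of `urllib` are illustrative, not part of the repository):

```python
import json
import urllib.request

def build_query_request(question, mode="hybrid", host="localhost", port=9621):
    """Compose (without sending) the POST /query call shown in the README."""
    url = f"http://{host}:{port}/query"
    payload = json.dumps({"query": question, "mode": mode}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_query_request("Your question here")
print(req.full_url)      # http://localhost:9621/query
print(req.get_method())  # POST
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) of course requires the server to be running on the new default port.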
api/ollama_lightrag_server.py CHANGED
@@ -21,7 +21,7 @@ def parse_args():
 
     # Server configuration
     parser.add_argument('--host', default='0.0.0.0', help='Server host (default: 0.0.0.0)')
-    parser.add_argument('--port', type=int, default=8000, help='Server port (default: 8000)')
+    parser.add_argument('--port', type=int, default=9621, help='Server port (default: 9621)')
 
     # Directory configuration
    parser.add_argument('--working-dir', default='./rag_storage',
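The effect of the changed default can be sketched with a minimal reconstruction of `parse_args`, limited to the options visible in this hunk (the real script defines more arguments; the `description` string here is a placeholder):

```python
import argparse

def parse_args(argv=None):
    """Sketch of parse_args, restricted to the options shown in the diff."""
    parser = argparse.ArgumentParser(description='LightRAG Ollama server (sketch)')
    # Server configuration
    parser.add_argument('--host', default='0.0.0.0', help='Server host (default: 0.0.0.0)')
    parser.add_argument('--port', type=int, default=9621, help='Server port (default: 9621)')
    # Directory configuration
    parser.add_argument('--working-dir', default='./rag_storage', help='Working directory')
    return parser.parse_args(argv)

print(parse_args([]).port)                  # 9621 (new default)
print(parse_args(['--port', '8080']).port)  # 8080 (explicit override still works)
```

Because only the `default=` value changed, existing invocations that pass `--port` explicitly are unaffected; only users relying on the implicit default move from 8000 to 9621.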