Switched to using DeepSeek-V3 for both Engineer and Analyst pipelines for text generation. Updated references to ensure compatibility and performance with the model.
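The commit points both cached loaders at a single model id. A minimal sketch of that shared configuration (`MODEL_ID` and `pipeline_kwargs` are hypothetical helper names for illustration; the actual loaders hard-code the id and call `transformers.pipeline` directly):

```python
# Illustration-only helper; not part of the repository. The app hard-codes
# the model id inside load_model_engineer() and load_model_analyst().
MODEL_ID = "unsloth/DeepSeek-V3"

def pipeline_kwargs() -> dict:
    # Both the Engineer and Analyst loaders pass the same arguments to
    # transformers.pipeline(); sharing them keeps the two roles in sync.
    return {
        "task": "text-generation",
        "model": MODEL_ID,
        "trust_remote_code": True,
    }
```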
README.md
CHANGED
@@ -1,14 +1,51 @@
 ---
 title: MultiAgent XAI Demo
-emoji:
+emoji: 🤖
 colorFrom: blue
-colorTo:
+colorTo: green
 sdk: streamlit
 sdk_version: 1.41.1
 app_file: app.py
 pinned: false
 license: mit
-short_description:
+short_description: A demo showcasing multi-agent conversational AI using Microsoft PHI-4 for generating technical and analytical insights.
 ---
 
-
+## MultiAgent XAI Demo
+
+This demo leverages the `microsoft/phi-4` language model for simulating a conversation between two roles: an Engineer and an Analyst. The goal is to collaboratively address user-provided queries and produce actionable insights.
+
+### Features
+- **Engineer Role**: Provides concise, technical solutions.
+- **Analyst Role**: Offers data-driven recommendations to complement the Engineer's response.
+- **Natural Dialogue**: Facilitates a three-turn conversation between the roles.
+- **Actionable Summary**: Generates a final plan summarizing key insights.
+
+### How It Works
+1. The user enters a query.
+2. The Engineer and Analyst respond alternately, building on each other's inputs.
+3. A final summary is generated, integrating technical and analytical perspectives.
+
+### Technology Stack
+- **Streamlit**: Interactive web interface.
+- **Hugging Face Transformers**: Using `pipeline` with the `microsoft/phi-4` model for text generation.
+
+### Getting Started
+1. Clone this repository:
+   ```bash
+   git clone <repository-url>
+   ```
+2. Install dependencies:
+   ```bash
+   pip install -r requirements.txt
+   ```
+3. Run the application:
+   ```bash
+   streamlit run app.py
+   ```
+
+### Configuration Reference
+Refer to the [Hugging Face Spaces Config Reference](https://huggingface.co/docs/hub/spaces-config-reference) for deployment options.
+
+### License
+This project is licensed under the MIT License. See the LICENSE file for details.
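The alternating Engineer/Analyst exchange described in the README can be sketched as follows, with plain callables standing in for the text-generation pipelines (`run_conversation` is an illustration-only name, not a function in `app.py`):

```python
# Sketch of the three-turn exchange, assuming each role's reply feeds the
# next role's prompt. Stub callables replace the actual model pipelines.
from typing import Callable, List, Tuple

def run_conversation(query: str,
                     engineer: Callable[[str], str],
                     analyst: Callable[[str], str],
                     turns: int = 3) -> List[Tuple[str, str]]:
    conversation: List[Tuple[str, str]] = [("User", query)]
    prompt = query
    for _ in range(turns):
        reply = engineer(prompt)
        conversation.append(("Engineer", reply))
        prompt = analyst(reply)  # each role builds on the previous response
        conversation.append(("Analyst", prompt))
    return conversation

log = run_conversation("Reduce API latency",
                       engineer=lambda p: f"Engineer take: {p}",
                       analyst=lambda p: f"Analyst view: {p}")
```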
app.py
CHANGED
@@ -8,20 +8,20 @@ from transformers import pipeline
 
 @st.cache_resource
 def load_model_engineer():
-    # Engineer:
+    # Engineer: DeepSeek-V3 via pipeline
     engineer_pipeline = pipeline(
         "text-generation",
-        model="
+        model="unsloth/DeepSeek-V3",
         trust_remote_code=True
     )
     return engineer_pipeline
 
 @st.cache_resource
 def load_model_analyst():
-    # Analyst:
+    # Analyst: DeepSeek-V3 via pipeline
     analyst_pipeline = pipeline(
         "text-generation",
-        model="
+        model="unsloth/DeepSeek-V3",
         trust_remote_code=True
     )
     return analyst_pipeline

@@ -103,6 +103,4 @@ if st.button("Generate Responses"):
 
     # Summarize the final plan
     with st.spinner("Generating the final plan..."):
-        final_plan = summarize_conversation
-        st.session_state.conversation.append(("Summary", final_plan))
-        st.markdown(final_plan)
+        final_plan = summarize_conversation
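The last hunk assigns the function object (`final_plan = summarize_conversation`) without calling it, and the new side drops the lines that store and render the result. A hedged sketch of the likely intent, with a placeholder `summarize_conversation` body standing in for the app's model-backed implementation:

```python
# Placeholder implementation for illustration; the app generates the
# summary with the language model, not by joining strings.
def summarize_conversation(conversation):
    return "Final Plan: " + " | ".join(text for _, text in conversation)

conversation = [("Engineer", "Cache hot paths"),
                ("Analyst", "Metrics support caching")]
final_plan = summarize_conversation(conversation)  # note the call parentheses
conversation.append(("Summary", final_plan))
# In the app, the result would then be rendered with st.markdown(final_plan).
```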