Commit History

Enhanced model response handling: adjusted max_length dynamically and added a lower bound for max_new_tokens to ensure optimal output quality. Improved logging of device usage for better debugging.
e8336f7

6Genix committed on
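
A sketch of the pattern commit e8336f7 describes: clamp max_new_tokens to the context remaining after the prompt, never letting it drop below a floor, and log which device generation runs on. The constants and the helper name are illustrative, not taken from the repository.

```python
import logging

import torch

logging.basicConfig(level=logging.INFO)

MIN_NEW_TOKENS = 64    # illustrative floor so replies are never cut unreasonably short
MAX_CONTEXT = 4096     # illustrative context window (e.g. a 4k-context model)

device = "cuda" if torch.cuda.is_available() else "cpu"
logging.info("Generation device: %s", device)

def build_generation_kwargs(prompt_token_count: int) -> dict:
    """Clamp max_new_tokens to the context left after the prompt, with a lower bound."""
    remaining = MAX_CONTEXT - prompt_token_count
    max_new_tokens = max(MIN_NEW_TOKENS, min(512, remaining))
    return {"max_new_tokens": max_new_tokens, "do_sample": True}
```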

Several modifications
2a21c75

6Genix committed on

Added exception
a2b45c7

6Genix committed on

Refactor response generation to remove sentence truncation, increase max tokens, and adjust model parameters for more comprehensive multi-agent dialogue. Updated readme to reflect improvements.
ba262e3

6Genix committed on

Abandoned DeepSeek and went back to phi-3-mini-4k-instruct
0e7b360

6Genix committed on

Added explicit int8 quantization configuration using BitsAndBytes for DeepSeek-V3. Improved error handling and streamlined model loading for better compatibility.
796a98c

6Genix committed on
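
Commit 796a98c describes an explicit int8 load through BitsAndBytes. The sketch below mirrors only what the commit message says, not a verified setup: int8 loading requires a CUDA GPU with the bitsandbytes package, and a later commit abandons DeepSeek-V3 entirely.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Model id as referenced in the commit message; int8 loading requires a CUDA GPU
# and the bitsandbytes package.
model_id = "deepseek-ai/DeepSeek-V3"

quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True,
)
```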

Addressed quantization issue by enforcing fp16 precision for DeepSeek-V3 model loading. Updated error handling and improved compatibility for Multi-Agent XAI Demo.
42fdd8c

6Genix committed on

Removed Hugging Face pipeline dependency and implemented direct model loading for DeepSeek-V3 using AutoTokenizer and AutoModelForCausalLM. Improved fallback robustness and error handling for model operations.
10965a9

6Genix committed on
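
A sketch of the direct-loading approach in commit 10965a9, replacing the pipeline helper with AutoTokenizer and AutoModelForCausalLM; the prompt and generation settings are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V3"  # id as referenced in the commit; any causal LM id fits the pattern

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

inputs = tokenizer("Engineer: draft a deployment plan.", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```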

Updated model loading to include fp16 precision fallback for DeepSeek-V3. Enhanced error handling for quantization issues and improved robustness.
eb857d5

6Genix committed on

Enhanced error handling and fallback mechanism for DeepSeek-V3. Added detailed error messages, graceful termination, and support for unsupported quantization configurations.
e79e7ca

6Genix committed on

Added fallback mechanism to switch from pipeline to direct model loading for compatibility. Ensured robust handling for environments without pipeline support in the Transformers library.
9d15a33

6Genix committed on

Switched to using DeepSeek-V3 for both Engineer and Analyst pipelines for text generation. Updated references to ensure compatibility and performance with the model.
6efe883

6Genix committed on

Replaced GPT-Neo with Microsoft PHI-4 model using the pipeline method for improved loading and performance. Updated Engineer and Analyst roles to utilize PHI-4 for text generation, maintaining streamlined conversation flow and summarization.
8d4fe18

6Genix committed on

Refined Multi-Agent XAI Demo by hiding explicit prompt references, improving final plan clarity, and enhancing conversational flow with contextual variations.
945ae37

6Genix committed on

Enhanced Multi-Agent XAI Demo with hidden prompts, streamlined final plan summarization, and improved response diversity. Adjusted repetition penalties and conversation formatting for clarity.
ebf362e

6Genix committed on
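
Commit ebf362e mentions adjusting repetition penalties; the settings below sketch the kind of generation arguments involved. The specific values are illustrative, not the ones used in the Space.

```python
# Illustrative generation settings; passed as model.generate(**inputs, **generation_kwargs)
# or forwarded through a text-generation pipeline call.
generation_kwargs = {
    "max_new_tokens": 200,
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.9,
    "repetition_penalty": 1.3,   # values > 1.0 discourage the model from repeating itself
    "no_repeat_ngram_size": 3,   # blocks verbatim 3-gram repeats
}
```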

Refactored Multi-Agent XAI Demo to hide prompts, limit response length, and enable a natural conversational flow between the Engineer and Analyst. Finalized with a cohesive plan summarizing the dialogue.
52f0fa0

6Genix committed on

Revised Multi-Agent XAI Demo to simulate conversational interaction between Engineer and Analyst. Adjusted prompt design to enable natural dialogue flow based on user input and prior responses. Improved final plan summarization to balance technical and analytical perspectives.
1df2849

6Genix committed on

Enhanced Multi-Agent XAI Demo by improving response formatting and final plan synthesis. Ensured concise, actionable bullet points for Engineer and Analyst responses, with a cohesive final summary for clarity.
47676bc

6Genix committed on

Refactored Multi-Agent XAI Demo to improve output formatting, remove redundant content, and fix issues with tokenizer compatibility. Added fallback for AutoTokenizer import and ensured cohesive and actionable final plan generation.
6f2be1f

6Genix committed on

Added a fallback mechanism for the AutoTokenizer import to handle compatibility issues with older versions of the transformers library. This ensures continued functionality by defaulting to GPT2Tokenizer when AutoTokenizer is unavailable. Streamlined model loading and response generation for robustness in the demo application.
bc2367c

6Genix committed on
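
A sketch of the import fallback commit bc2367c describes, defaulting to GPT2Tokenizer when AutoTokenizer cannot be imported; the gpt2 checkpoint is just a small example.

```python
try:
    from transformers import AutoTokenizer
except ImportError:
    # Older transformers releases without AutoTokenizer: reuse GPT2Tokenizer under the same name.
    from transformers import GPT2Tokenizer as AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # example checkpoint
print(tokenizer("hello world")["input_ids"])
```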

Addressed redundancy, formatting, and notes while focusing on goals.
cd48cb7

6Genix committed on

ValueError: has to be a strictly positive float, but is 2
38ac42c

6Genix committed on
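
The ValueError in commit 38ac42c appears to come from the transformers sampling check: temperature must be a strictly positive Python float, so an integer value such as 2 is rejected even though it is positive. A minimal illustration, using gpt2 as a small stand-in model:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small stand-in model

# generator("Hello", do_sample=True, temperature=2)    # int temperature trips the float check
result = generator("Hello", do_sample=True, temperature=0.7, max_new_tokens=20)
print(result[0]["generated_text"])
```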

Enhance prompt diversity and response clarity, refine model roles for better differentiation, and address redundancy in outputs to improve multi-agent collaboration.
64aeb75

6Genix committed on

Replaced large models with smaller, faster-loading models (EleutherAI GPT-Neo-125M for Engineer and Microsoft DialoGPT-small for Analyst) to improve load times and maintain request-handling efficiency.
10f4f42

6Genix committed on

Updated code to use lightweight models (gpt-neo-125M and DialoGPT-small) to avoid timeouts and ensure compatibility with the HuggingFace free tier. Optimized generation parameters for better performance.
9f9c42d

6Genix committed on
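
A sketch of the two lightweight pipelines commit 9f9c42d describes (model ids as written in the commit message); the role prompts are placeholders.

```python
from transformers import pipeline

# Small models chosen for fast cold starts on the free tier.
engineer = pipeline("text-generation", model="EleutherAI/gpt-neo-125M")
analyst = pipeline("text-generation", model="microsoft/DialoGPT-small")

eng_reply = engineer(
    "Engineer: propose a caching layer for the API.",
    max_new_tokens=80, do_sample=True, temperature=0.8,
)[0]["generated_text"]
ana_reply = analyst(
    f"Analyst: review this proposal.\n{eng_reply}",
    max_new_tokens=80, do_sample=True, temperature=0.8,
)[0]["generated_text"]
print(ana_reply)
```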

Updated multi-agent system to use smaller, faster models (EleutherAI GPT-Neo-1.3B and Microsoft DialoGPT-medium) for efficient processing and implemented final plan summarization.
7070a77

6Genix committed on

Updated multi-agent system to use Databricks Dolly-v2-12b for Engineer and HuggingFace Zephyr-7b-alpha for Analyst. Improved conversation flow with concise responses, optimized model parameters, and enhanced final plan summarization.
38c6241

6Genix committed on

Refined response generation to limit iterations, reduce hallucinations, and enhance summarization for actionable insights in the multi-agent system demo
37af4c8

6Genix committed on

Restrict response scope, reduce iterations, prevent hallucination, improve summarization, and dynamically update UI for real-time feedback.
63e83de

6Genix committed on

Refactored multi-agent system to enforce concise responses, limit verbose outputs, improve real-time conversation streaming, and enhance final plan summarization for better user experience in the demo.
57645f1

6Genix committed on

Implemented response length constraints and improved summarization for concise and structured demo output.
99de7e3

6Genix committed on

Enhance XAI transparency and user experience: Added real-time conversation display during model interactions, limited model exchanges to 3 rounds, improved response quality and clarity, and refined summary generation. Integrated spinner feedback for better UI responsiveness.
8a76939

6Genix committed on
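
A sketch of the UI flow commit 8a76939 describes (spinner feedback, a three-round cap, per-round display), assuming the demo is a Streamlit app; generate_reply is a placeholder for the real model call.

```python
import streamlit as st

MAX_ROUNDS = 3  # exchanges are capped at three rounds, as the commit describes

def generate_reply(role: str, context: str) -> str:
    """Placeholder for the actual model call."""
    return f"{role} response based on: {context[:80]}"

st.title("Multi-Agent XAI Demo")
user_goal = st.text_input("Project goal", "Build an explainable churn model")

if st.button("Run conversation"):
    context = user_goal
    for round_no in range(1, MAX_ROUNDS + 1):
        with st.spinner(f"Round {round_no}: agents are thinking..."):
            eng = generate_reply("Engineer", context)
            ana = generate_reply("Analyst", eng)
        st.markdown(f"**Engineer:** {eng}")
        st.markdown(f"**Analyst:** {ana}")
        context = ana
    st.success("Conversation finished.")
```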

Enhance real-time conversation display for Engineer and Analyst, limit exchanges to 3 rounds, and improve XAI explanations for better user transparency.
79db9e6

6Genix committed on

Addressed identified issues in multi-agent system: Improved iterative model conversation, added clear XAI explanations, enhanced summarization logic, and integrated attention masks and padding tokens to ensure reliable model behavior.
6189c2b

6Genix committed on

Added attention mask and pad token id configuration to improve model reliability and updated thinking display for better user experience
32708fd

6Genix committed on
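
A sketch of the attention-mask and pad-token configuration commits 32708fd and acb398e mention, with GPT-2 as a small stand-in model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # small stand-in; the same pattern applies to the demo's models
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# GPT-2-style models ship without a pad token, so reuse EOS and pass an explicit mask.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

inputs = tokenizer("Engineer: outline the data pipeline.", return_tensors="pt")
output_ids = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    pad_token_id=tokenizer.pad_token_id,
    max_new_tokens=60,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```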

Updated the app to handle attention mask and pad token for reliable generation
acb398e

6Genix committed on

Updated Multi-Agent System to include summarized plan after conversation
b561e5a

6Genix committed on

Improving models' conversation
8cd5554

6Genix committed on

Updated max_new_tokens
0d8126f

6Genix committed on

Iterating to improve models' responses.
d753076

6Genix committed on

Issue with repeating text
8357c9c

6Genix committed on

Issue with repeating policies. Removed policies.
7ab699e

6Genix committed on

Swapped out large models for smaller ones
f6eb965

6Genix committed on

Removed the controller model due to memory limits
4d80322

6Genix committed on

Zephyr had an incorrect folder
4e24e07

6Genix committed on

Reconfigured all models
cf2cc49

6Genix committed on

Scrapped phi-3-mini-4k-instruct and switched to gpt4-alpaca
b255fcf

6Genix committed on

ValueError: not enough values to unpack (expected 3, got 2)
26ed70c

6Genix committed on

Removed the transformers pipeline and AutoConfig
d984bee

6Genix committed on

Removed pipeline
01bc7e8

6Genix committed on