Introduction

Medguide is an experimental language model distilled from [DeepSeek-R1-671B] using ZeroRL methods and built on [Qwen2.5-72B-Instruct], with a focus on enhancing reasoning capabilities in medical settings.

Deployment Scripts for Medguide (Built with Gradio)

This document provides instructions for deploying the Medguide model for inference using Gradio.

  1. Set up the Conda environment: Follow the instructions in the PKU-Alignment/align-anything repository to configure your Conda environment.

  2. Configure the model path: After setting up the environment, update the MODEL_PATH variable in deploy_medguide.sh to point to your local Medguide model directory.

  3. Verify inference script parameters: Check the following parameter in text_inference.py:

    # NOTE: Replace with your own model path if not loaded via the API base
    model = ''
    

    These scripts use an OpenAI-compatible server: deploy_medguide.sh launches the Medguide model locally and exposes it on port 8231 for external access via the specified API base URL.
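As a sketch of what such a request looks like, the snippet below builds an OpenAI-compatible chat-completions payload and posts it to the assumed local endpoint (`http://localhost:8231/v1`). The model name `medguide` and the temperature value are placeholders, not values confirmed by the scripts; only the Python standard library is used.

```python
import json
from urllib import request

# Assumed endpoint exposed by deploy_medguide.sh (port 8231); adjust if your
# deployment differs.
API_BASE = "http://localhost:8231/v1"

def build_chat_request(prompt: str, model: str = "medguide") -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": model,  # placeholder; use the name your server registers
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,
    }

def query(prompt: str) -> str:
    """POST the payload to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = request.Request(
        f"{API_BASE}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Any OpenAI-compatible client can substitute for this sketch, as long as its base URL points at the port deploy_medguide.sh opens.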

  4. Running Inference:

    • Streamed Output:
      bash deploy_medguide.sh
      python text_inference.py
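For streamed output, OpenAI-compatible servers typically return server-sent events: a sequence of `data: {...}` lines ending with `data: [DONE]`. Below is a minimal sketch of consuming such a stream with the standard library; the endpoint, port, and model name are assumptions based on the deployment description above, not values confirmed by text_inference.py.

```python
import json
from urllib import request

API_BASE = "http://localhost:8231/v1"  # assumed endpoint from deploy_medguide.sh

def parse_sse_line(line: str):
    """Extract the text delta from one 'data: ...' SSE line, else None."""
    if not line.startswith("data: "):
        return None
    data = line[len("data: "):]
    if data == "[DONE]":
        return None
    chunk = json.loads(data)
    return chunk["choices"][0]["delta"].get("content")

def stream_chat(prompt: str, model: str = "medguide"):
    """Yield text deltas from a streaming chat-completions request."""
    payload = json.dumps({
        "model": model,  # placeholder; use the name your server registers
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,  # ask the server for server-sent-event chunks
    }).encode("utf-8")
    req = request.Request(
        f"{API_BASE}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        for raw in resp:
            delta = parse_sse_line(raw.decode("utf-8").strip())
            if delta:
                yield delta
```

Printing each yielded delta as it arrives reproduces the streamed console output that text_inference.py provides.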
      