Update README.md
## III. Evaluation Results

Our II-Medical-8B model achieved a 41% score on [HealthBench](https://openai.com/index/healthbench/), a comprehensive open-source benchmark evaluating the performance and safety of large language models in healthcare. This performance is comparable to OpenAI's o1 reasoning model and GPT-4.5, OpenAI's largest and most advanced model to date. We provide a comparison to models available in ChatGPT below.

*(Figure: HealthBench score comparison with models available in ChatGPT.)*

Detailed results for HealthBench can be found [here](https://huggingface.co/datasets/Intelligent-Internet/OpenAI-HealthBench-II-Medical-8B-GPT-4.1).

| Model | MedMC | MedQA | PubMed | MMLU-P | HealthBench | Lancet | MedB-4 | MedB-5 | MedX | NEJM | Avg |
|-------|-------|-------|--------|--------|-------------|--------|--------|--------|------|------|-----|
| [HuatuoGPT-o1-72B](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-72B) | **76.76** | 88.85 | **79.90** | 80.46 | 22.73 | 70.87 | 77.27 | 73.05 | 23.53 | 76.29 | 66.97 |
| [M1](https://huggingface.co/UCSC-VLAA/m1-7B-23K) | 62.54 | 75.81 | 75.80 | 65.86 | 15.51 | 62.62 | 63.64 | 59.74 | 19.59 | 64.34 | 56.55 |
| [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) | 66.53 | 81.38 | 73.9 | 77.85 | 42.27 | 66.26 | 68.83 | 62.66 | 19.59 | 69.65 | 62.89 |
| [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) | 74.18 | 88.92 | 76.1 | **80.7** | **47.08** | 72.33 | 72.27 | 71.42 | 28.04 | 76.94 | 68.80 |
| [MedGemma-27B-IT](https://huggingface.co/google/medgemma-27b-text-it) | 73.24 | 87.27 | 70.9 | 80.13 | 46.54 | 70.14 | 75.32 | 73.37 | 25.55 | 76.28 | 67.87 |
| [II-Medical-8B](https://huggingface.co/Intelligent-Internet/II-Medical-8B) | 71.57 | 87.90 | 78.7 | 80.46 | 40.02 | 70.38 | 78.25 | 72.07 | 25.26 | 73.13 | 67.77 |
| [II-Medical-8B-1706](https://huggingface.co/Intelligent-Internet/II-Medical-8B-1706) | 74.73 | **90.26** | 79.6 | 80.52 | 41.09 | **75.00** | **80.19** | **76.30** | **29.51** | **79.77** | **70.70** |

## IV. Dataset Curation

The training dataset comprises 2.3M samples from the following sources:

### 1. Public Medical Reasoning Datasets

- [General Medical Reasoning](https://huggingface.co/datasets/GeneralReasoning/GeneralThought-430K)
- [Medical-R1-Distill-Data](https://huggingface.co/datasets/FreedomIntelligence/Medical-R1-Distill-Data)
- [Medical-R1-Distill-Data-Chinese](https://huggingface.co/datasets/FreedomIntelligence/Medical-R1-Distill-Data-Chinese)
- [UCSC-VLAA/m23k-tokenized](https://huggingface.co/datasets/UCSC-VLAA/m23k-tokenized)

### 2. Synthetic Medical QA Data with Qwen3-235B-A22B

Generated from established medical datasets:

- [MedMcQA](https://huggingface.co/datasets/openlifescienceai/medmcqa)
- [MedQA](https://huggingface.co/datasets/bigbio/med_qa)
- [MedReason](https://huggingface.co/datasets/UCSC-VLAA/MedReason)

### 3. Curated Medical R1 Traces (338,055 samples)

All R1 reasoning traces were processed through a domain-specific pipeline as follows:

- Minimum threshold: keep only prompts with more than 3 words.
- Wait Token Filter: remove traces with more than 47 occurrences of "Wait" (97th percentile threshold).

3. Response Deduplication

- N-gram: 4
- Jaccard threshold: 0.7

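The filtering and deduplication steps above can be sketched in Python. This is a minimal illustration, not the project's actual pipeline code: the function names, the greedy keep-first dedup strategy, and the use of word-level n-grams are assumptions.

```python
def keep_prompt(prompt: str, min_words: int = 3) -> bool:
    """Minimum-threshold filter: keep only prompts with more than `min_words` words."""
    return len(prompt.split()) > min_words

def keep_trace(trace: str, max_wait: int = 47) -> bool:
    """Wait-token filter: drop traces with more than `max_wait` occurrences of "Wait"."""
    return trace.count("Wait") <= max_wait

def ngrams(text: str, n: int = 4) -> set:
    """Word-level n-grams used for near-duplicate detection (word granularity is an assumption)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B| of two n-gram sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def deduplicate(responses, n: int = 4, threshold: float = 0.7):
    """Greedy dedup: keep a response only if its 4-gram Jaccard similarity
    to every previously kept response stays below the 0.7 threshold."""
    kept, kept_grams = [], []
    for r in responses:
        g = ngrams(r, n)
        if all(jaccard(g, kg) < threshold for kg in kept_grams):
            kept.append(r)
            kept_grams.append(g)
    return kept
```

Note that this greedy pass is O(n²) in the number of responses; at the 338K-trace scale above, a production pipeline would typically bucket candidates first (e.g. with MinHash) before exact Jaccard comparison.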
### Data Decontamination