---
library_name: transformers
base_model:
- facebook/sam2-hiera-tiny
pipeline_tag: image-segmentation
datasets:
- ayyuce/ACDCPreprocessed
tags:
- medical-imaging
- image-segmentation
- ultrasound
- foundation-models
- sam
---

# Model Card for Sam2Rad

<!-- Provide a quick summary of what the model is/does. -->

Sam2Rad is a prompt-learning framework that adapts the Segment Anything Model (SAM/SAM2) for autonomous segmentation of bony structures in ultrasound images. It eliminates the need for manual prompts through a lightweight Prompt Predictor Network (PPN) that generates learnable prompts directly from image features. Compatible with all SAM variants, it supports three modes: fully autonomous operation, semi-autonomous human-in-the-loop refinement, and fully manual prompting.
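
The repository contains the actual PPN implementation; purely as an illustration of the prompt-learning idea, the sketch below shows one plausible shape for such a module, in which a set of learnable queries cross-attends to frozen image-encoder features and is projected into SAM-style prompt embeddings. Every name and dimension here (`PromptPredictorNetwork`, `num_queries`, 256-wide tokens) is an assumption made for the sketch, not the repository's API.

```python
import torch
import torch.nn as nn

class PromptPredictorNetwork(nn.Module):
    """Illustrative PPN sketch (not the official implementation):
    learnable queries attend to image features to produce prompt
    embeddings that a SAM-style mask decoder could consume."""

    def __init__(self, embed_dim: int = 256, num_queries: int = 8, num_heads: int = 8):
        super().__init__()
        # Learnable queries stand in for manual point/box prompts.
        self.queries = nn.Parameter(torch.randn(num_queries, embed_dim))
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (B, N, C) tokens from a frozen SAM image encoder.
        batch = image_features.shape[0]
        queries = self.queries.unsqueeze(0).expand(batch, -1, -1)   # (B, Q, C)
        attended, _ = self.cross_attn(queries, image_features, image_features)
        return self.proj(attended)   # (B, Q, C) predicted prompt embeddings

# Toy example: a 32x32 feature map flattened to 1024 tokens of width 256.
features = torch.randn(2, 1024, 256)
prompts = PromptPredictorNetwork()(features)
print(prompts.shape)   # torch.Size([2, 8, 256])
```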

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Assefa Seyoum Wahd, Banafshe Felfeliyan, Yuyue Zhou, et al. (University of Alberta and McGill University)
- **Funded by [optional]:** TD Ready Grant, IC-IMPACTS, One Child Every Child, Arthritis Society, Alberta Innovates AICE Concepts
- **Shared by:** Ayyuce Demirbas
- **Model type:** Vision Transformer (ViT)-based segmentation model with prompt learning
- **Language(s) (NLP):** N/A (image-based model)
- **License:** [More Information Needed] (check GitHub for the exact license)
- **Finetuned from model [optional]:** SAM/SAM2 (Hiera-Tiny, Hiera-Small, Hiera-Base+, Hiera-Large)

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [GitHub](https://github.com/aswahd/SamRadiology)
- **Paper:** "Sam2Rad: A Segmentation Model for Medical Images with Learnable Prompts"
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- Automatic segmentation of bones in musculoskeletal ultrasound images (hip, wrist, shoulder)
- Integration into clinical workflows for real-time analysis or data labeling

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

- Active learning frameworks requiring rapid annotation
- Multi-class medical image segmentation with task-specific adaptations

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

- Non-ultrasound modalities (e.g., MRI, CT) without retraining
- Images with severe artifacts or non-anatomical structures
- Non-medical image segmentation

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

- **Domain specificity:** Trained on musculoskeletal ultrasound; performance degrades on unseen modalities.
- **Anatomical limitations:** May struggle with atypical anatomies or surgical implants.
- **Ethical considerations:** Not validated for diagnostic use without clinician oversight.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Validate outputs against expert annotations in clinical deployments, and retrain the PPN when applying the model to new anatomical regions or imaging protocols.
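
As a concrete illustration of that retraining recommendation, the skeleton below freezes the SAM backbone and updates only the prompt-predictor parameters. It is a minimal sketch under assumed names: `model`, its `prompt_predictor` attribute, `train_loader`, and the plain BCE loss are placeholders, not the repository's actual training interface.

```python
import torch

# Hypothetical retraining skeleton (placeholder names, not the repo's API):
# `model` is a loaded Sam2Rad-style model with a `prompt_predictor`
# submodule, and `train_loader` yields (image, mask) batches.
for param in model.parameters():
    param.requires_grad = False                   # freeze the SAM backbone
for param in model.prompt_predictor.parameters():
    param.requires_grad = True                    # train only the PPN

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = torch.nn.BCEWithLogitsLoss()

model.train()
for images, masks in train_loader:
    logits = model(images)                        # placeholder forward pass
    loss = criterion(logits, masks.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```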

## How to Get Started with the Model

Use the code below to get started with the model.

```python
# See the GitHub repository for the full implementation:
# https://github.com/aswahd/SamRadiology

from transformers import AutoModel

# Note: depending on how the checkpoint is packaged, loading the custom
# Sam2Rad architecture may additionally require trust_remote_code=True.
model = AutoModel.from_pretrained("ayyuce/sam2rad")
```
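
If `AutoModel` cannot resolve the custom Sam2Rad architecture from the hosted files, load the weights via the training and inference scripts in the [GitHub repository](https://github.com/aswahd/SamRadiology) instead; the exact entry points depend on how the checkpoint is packaged.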