Model Card for CMR-LLaMA
CMR-LLaMA is a large language model that automatically extracts 31 cardiovascular conditions from cardiac MRI (CMR) reports. For each condition, it also extracts the associated attributes: certainty, severity, location, and pattern.
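For example, a single finding in an impression might be represented along the lines below. The field names mirror the attributes above, but the condition and attribute values shown are purely illustrative, and the exact output schema is determined by the model's fine-tuning data and may differ.

# Illustrative record only; CMR-LLaMA's actual output format may differ.
finding = {
    "condition": "left ventricular hypertrophy",
    "certainty": "definite",
    "severity": "severe",
    "location": "septum",
    "pattern": "concentric",
}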
Model Details
Model Description
- Developed by: CCF AIIIH Lab
- Model type: large language model
- Language(s) (NLP): English
- Finetuned from model: meta-llama/Llama-3.3-70B-Instruct, via a custom LoRA adapter
Model Sources
- Repository: https://github.com/michelleUMD/cmr-llama
- Paper: TBD
- Demo: TBD
Uses
- Database generation from the impression sections of CMR reports (see the sketch after this list)
- Standardization of free-text reports
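A minimal sketch of the database-generation workflow is shown below. The extract_findings helper is hypothetical and stands in for a call to the fine-tuned model (loading and inference are covered in the next section); the CSV columns mirror the attributes listed in the description.

import csv

def extract_findings(impression: str) -> list[dict]:
    # Hypothetical stand-in: replace with a call to CMR-LLaMA (see the next section)
    # that parses the model output into one dict per detected condition.
    raise NotImplementedError

impressions = {
    "report_001": "Severe concentric left ventricular hypertrophy. No late gadolinium enhancement.",
}

rows = []
for report_id, text in impressions.items():
    for finding in extract_findings(text):
        rows.append({"report_id": report_id, **finding})

with open("cmr_findings.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["report_id", "condition", "certainty", "severity", "location", "pattern"]
    )
    writer.writeheader()
    writer.writerows(rows)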
How to Get Started with the Model
To load the fine-tuned LoRA adapter on top of the base model:
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer, then attach the CMR-LLaMA LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")
model = PeftModel.from_pretrained(base, "michelleUMD/cmr-llama")
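Once the base model and adapter are loaded, inference follows the standard transformers generation flow. The sketch below is illustrative: the prompt wording and the sample impression are assumptions, and whether the adapter expects a chat template depends on how it was fine-tuned.

import torch

# Illustrative prompt; the exact instruction template used during fine-tuning may differ.
impression = "Severe concentric left ventricular hypertrophy with patchy mid-wall late gadolinium enhancement."
prompt = (
    "Extract the cardiovascular conditions and their attributes "
    "(certainty, severity, location, pattern) from this CMR impression:\n"
    f"{impression}"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)

# Keep only the newly generated tokens, dropping the echoed prompt
response = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)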
Citation
TBD
BibTeX:
TBD
APA:
TBD
Framework versions
- PEFT 0.12.0
Model tree for michelleUMD/cmr-llama
- Base model: meta-llama/Llama-3.1-70B
- Finetuned: meta-llama/Llama-3.3-70B-Instruct