ShashidharSarvi committed (verified)
Commit 77eb245 · 1 Parent(s): 74c792d

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -24,10 +24,10 @@ This model integrates a Llama‑2 text backbone with a BLIP vision backbone to p
 
 ### Model Description
 
-EIRA‑0.2 is a fine‑tuned multimodal model designed to answer free‑form questions about medical images (e.g., radiographs, histology slides) in conjunction with accompanying text. Internally, it uses:
+EIRA‑0.2 is a multimodal model designed to answer free‑form questions about medical images (e.g., radiographs, histology slides) in conjunction with accompanying text. Internally, it uses:
 
 - A **text encoder/decoder** based on **meta‑llama/Llama‑2‑7b‑hf**, fine‑tuned for medical QA.
-- A **vision encoder** based on **Salesforce/blip-image-captioning-base**, fine‑tuned to extract descriptive features from medical imagery.
+- A **vision encoder** based on **Salesforce/blip-image-captioning-base**, which extracts descriptive features from medical imagery.
 - A **fusion module** that cross‑attends between vision features and text embeddings to generate coherent, context‑aware answers.
 
 - **Developed by:** BockBharath
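The fusion module the README describes (text embeddings cross‑attending to vision features) can be illustrated with a minimal single‑head cross‑attention sketch. All shapes and the random projection weights here are hypothetical stand‑ins for the model's learned parameters; this is not EIRA's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(text_emb, vision_feats, d_k=64, seed=0):
    """Fuse vision features into text embeddings via single-head
    cross-attention: text tokens query the image patch features.
    Shapes: text_emb (T, d), vision_feats (P, d)."""
    rng = np.random.default_rng(seed)
    d = text_emb.shape[-1]
    # Random projections stand in for learned weight matrices.
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)
    Q = text_emb @ Wq        # (T, d_k) queries come from the text side
    K = vision_feats @ Wk    # (P, d_k) keys come from image patches
    V = vision_feats @ Wv    # (P, d)   values come from image patches
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (T, P) attention weights
    return text_emb + attn @ V              # residual fusion, (T, d)

# Toy example: 5 text tokens attending over 9 image patches, hidden size 32.
fused = cross_attend(np.full((5, 32), 0.1), np.full((9, 32), 0.5))
print(fused.shape)  # (5, 32)
```

In the real model this projection is learned jointly with both backbones, so the attention map indicates which image regions inform each generated token.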