HuatuoGPT-II, One-stage Training for Medical Adaption of LLMs (arXiv:2311.09774, published Nov 16, 2023)
ALLaVA: Harnessing GPT4V-synthesized Data for A Lite Vision-Language Model (arXiv:2402.11684, published Feb 18, 2024)
Open-FinLLMs: Open Multimodal Large Language Models for Financial Applications (arXiv:2408.11878, published Aug 20, 2024)
LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture (arXiv:2409.02889, published Sep 4, 2024)
HuatuoGPT-Vision, Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale (arXiv:2406.19280, published Jun 27, 2024)
Silkie: Preference Distillation for Large Visual Language Models (arXiv:2312.10665, published Dec 17, 2023)