---
license: mit
language:
- en
tags:
- dataset
- document-processing
- multimodal
- vision-language
- information-retrieval
---

# 📊 Banque_Vision: A Multimodal Dataset for Document Understanding

## 📌 Overview

**Banque_Vision** is a **multimodal dataset** designed for **document-based question answering (QA) and information retrieval**. It combines **textual data** with **visual document representations**, enabling research on how vision models and language models interact for document comprehension.

🔗 **Created by**: Matteo Khan
🎓 **Affiliation**: TW3Partners
📍 **License**: MIT

🔗 [Connect with me on LinkedIn](https://www.linkedin.com/in/matteo-khan-a10309263/)
🔗 [Dataset on Hugging Face](https://huggingface.co/datasets/YourProfile/banque_vision)

## 📂 Dataset Structure

Each record contains:

- **Document Text**: The full text of the document related to the query.
- **Query**: The question or request for information.
- **Document Page**: The specific page containing the answer.
- **Document Image**: The visual representation (scan or screenshot) of the document page.

This structure lets models process and retrieve information across both textual and visual modalities, making the dataset highly relevant for **document AI research**.

## 🎯 Intended Use

This dataset is designed for:

- ✅ **Document-based QA** (e.g., answering questions about scanned documents)
- ✅ **Information retrieval** from structured and unstructured sources
- ✅ **Multimodal learning** that combines text and vision-based features
- ✅ **OCR-based research** and benchmarking
- ✅ **Fine-tuning vision-language models** such as Donut, LayoutLM, and BLIP

## ⚠️ Limitations & Considerations

Users should be aware of the following:

- ❌ **OCR errors**: Text extraction may be imperfect due to document quality.
- ⚠️ **Bias in document sources**: Some domains may be over- or under-represented.
- 🔄 **Labeling noise**: Question-answer alignments may contain inaccuracies.
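Because the dataset is distributed as JSONL, records can also be consumed directly with the standard library, without any special tooling. The sketch below uses an illustrative record with the dataset's field names (`document_text`, `query`, `document_page`, `document_image`); the values shown are examples, not real data.

```python
import json

# Illustrative Banque_Vision record in JSONL form (one JSON object per line).
jsonl_lines = [
    '{"document_text": "... The standard interest rate for savings accounts is 2.5% ...", '
    '"document_page": 5, '
    '"query": "What is the interest rate for savings accounts?", '
    '"document_image": "path/to/image.jpg"}'
]

# Parse each line into a dict, then index the records by page number
# so a query's source page can be looked up directly.
records = [json.loads(line) for line in jsonl_lines]
by_page = {r["document_page"]: r for r in records}

print(by_page[5]["query"])  # What is the interest rate for savings accounts?
```

In practice you would iterate over the file with `open(path)` instead of a hard-coded list; each line parses independently, which is what makes JSONL convenient for streaming large document collections.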
## 📊 Dataset Format

The dataset is stored in **JSONL** format, one record per line, with the following structure:

```json
{
  "document_text": "... The standard interest rate for savings accounts is 2.5% ...",
  "document_page": 5,
  "query": "What is the interest rate for savings accounts?",
  "document_image": "path/to/image.jpg"
}
```

## 🚀 How to Use

```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub
dataset = load_dataset("YourProfile/banque_vision")

# Inspect the first training example
sample = dataset["train"][0]
print("Query:", sample["query"])
```

## 🌍 Why It Matters

- **Bridges the gap** between text- and vision-based document processing.
- **Supports real-world applications** such as legal document analysis, financial records processing, and automated document retrieval.
- **Encourages innovation** in hybrid models that combine **LLMs with vision transformers**.

## 📝 Citation

```bibtex
@misc{banquevision2025,
  title={Banque_Vision: A Multimodal Dataset for Document Understanding},
  author={Your Name},
  year={2025},
  eprint={arXiv:XXXX.XXXXX},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

📩 **Feedback & Contributions**: Feel free to collaborate or provide feedback via [Hugging Face](https://huggingface.co/datasets/YourProfile/banque_vision).

🎉 **Happy Researching!** 🚀