---
dataset_info:
  features:
    - name: question_id
      dtype: int64
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: answer_type
      dtype:
        class_label:
          names:
            '0': yes/no
            '1': factoid
            '2': numerical
            '3': open-ended
    - name: image_0
      dtype: image
    - name: image_1
      dtype: image
    - name: image_2
      dtype: image
    - name: image_3
      dtype: image
  splits:
    - name: test
      num_bytes: 758293468.0465306
      num_examples: 1164
  download_size: 577561371
  dataset_size: 758293468.0465306
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

This unofficial dataset consists of the QA pairs of JDocQA, a Japanese document question answering dataset that requires understanding of text as well as figures and tables, paired with images converted from the original PDF files. The conversion was performed with pdf2image.
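
As a rough illustration of the conversion step, here is a minimal sketch using pdf2image (which requires poppler). The file paths, DPI value, and the helper name `render_pdf_pages` are assumptions for illustration, not the exact script used to build this dataset.

```python
# Minimal sketch of the PDF-to-image conversion, assuming pdf2image + poppler.
from pathlib import Path
from pdf2image import convert_from_path

def render_pdf_pages(pdf_path: str, out_dir: str, dpi: int = 144) -> list[str]:
    """Render every page of a PDF to a PNG file and return the output paths."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    pages = convert_from_path(pdf_path, dpi=dpi)  # list of PIL.Image objects, one per page
    paths = []
    for i, page in enumerate(pages):
        path = out / f"{Path(pdf_path).stem}_page{i}.png"
        page.save(path)
        paths.append(str(path))
    return paths
```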

The original data includes 1,176 examples, but 12 of them could not be converted to images, so this image dataset contains 1,164 examples in total.

We host it here for use in evaluation with llm-jp-eval-mm.
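
Below is a minimal sketch of loading the converted data with the Hugging Face `datasets` library. The repository id is a placeholder, and the handling of unused image slots is an assumption based on the feature schema above.

```python
# Minimal sketch, assuming the `datasets` library; "<this-repo-id>" is a placeholder.
from datasets import load_dataset

ds = load_dataset("<this-repo-id>", split="test")  # single "test" split, 1,164 examples

example = ds[0]
print(example["question_id"], example["question"])
# answer_type is a ClassLabel: yes/no, factoid, numerical, or open-ended
print(ds.features["answer_type"].int2str(example["answer_type"]))

# Up to four page images per question (image_0 .. image_3);
# slots without a page are assumed to be None here.
images = [example[f"image_{i}"] for i in range(4) if example[f"image_{i}"] is not None]
```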

Please see the official GitHub repository (https://github.com/mizuumi/JDocQA?tab=readme-ov-file#dataset-license) for license information.

@inproceedings{onami-etal-2024-jdocqa-japanese,
    title = "{JD}oc{QA}: {J}apanese Document Question Answering Dataset for Generative Language Models",
    author = "Onami, Eri  and
      Kurita, Shuhei  and
      Miyanishi, Taiki  and
      Watanabe, Taro",
    editor = "Calzolari, Nicoletta  and
      Kan, Min-Yen  and
      Hoste, Veronique  and
      Lenci, Alessandro  and
      Sakti, Sakriani  and
      Xue, Nianwen",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italy",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.lrec-main.830",
    pages = "9503--9514",
    abstract = "Document question answering is a task of question answering on given documents such as reports, slides, pamphlets, and websites, and it is a truly demanding task as paper and electronic forms of documents are so common in our society. This is known as a quite challenging task because it requires not only text understanding but also understanding of figures and tables, and hence visual question answering (VQA) methods are often examined in addition to textual approaches. We introduce Japanese Document Question Answering (JDocQA), a large-scale document-based QA dataset, essentially requiring both visual and textual information to answer questions, which comprises 5,504 documents in PDF format and annotated 11,600 question-and-answer instances in Japanese. Each QA instance includes references to the document pages and bounding boxes for the answer clues. We incorporate multiple categories of questions and \textit{unanswerable} questions from the document for realistic question-answering applications. We empirically evaluate the effectiveness of our dataset with text-based large language models (LLMs) and multimodal models. Incorporating \textit{unanswerable} questions in finetuning may contribute to harnessing the so-called hallucination generation.",
}