---
configs:
  - config_name: OnlyIntent
    data_files:
      - split: train
        path:
          - OnlyIntent/train_OnlyIntent.json
      - split: validation
        path:
          - OnlyIntent/validation_OnlyIntent.json
      - split: test
        path:
          - OnlyIntent/test_OnlyIntent.json
  - config_name: OnlyQuestion
    data_files:
      - split: train
        path:
          - OnlyQuestion/train_OnlyQuestion.json
      - split: validation
        path:
          - OnlyQuestion/validation_OnlyQuestion.json
      - split: test
        path:
          - OnlyQuestion/test_OnlyQuestion.json
  - config_name: Question+Intent
    data_files:
      - split: train
        path:
          - Question+Intent/train_Question+Intent.json
      - split: validation
        path:
          - Question+Intent/validation_Question+Intent.json
      - split: test
        path:
          - Question+Intent/test_Question+Intent.json
language:
  - es
---

# VirPat-2024

In this repository, you will find the QA corpora used in the following paper: https://aclanthology.org/2024.lrec-main.182/

## Introduction

Virtual patients (VPs) have emerged as potent educational tools in medical training and healthcare simulation. They offer medical students the opportunity to simulate authentic clinical consultations, allowing them to practice a wide range of scenarios and gain valuable experience before engaging with real patients during medical exams or interviews. These VPs are built upon dialogue systems, which are automated AI systems designed to interact with users through natural language conversations. The primary objective of a dialogue system is to facilitate effective communication between humans and computers, comprehend user input in the form of text or speech, and provide appropriate responses to user queries.

To support this objective, we have developed a Spanish medical-domain QA corpus that can be used to train and evaluate such models.

## VIR-PAT-QA corpus

The goal of the VIR-PAT-QA corpus is to align each dialogue with a corresponding clinical record in natural language, thereby enriching the dialogue dataset with a textual description of each patient. This approach paves the way for creating new virtual patients by simply adding new clinical records to the dataset. To build the corpus, we first translated from English to Spanish the doctor-patient consultation dialogues recorded in the format of the OSCE exams and subsequently transcribed by Fareez et al. (2022). After translation, we manually corrected any errors in the translations. We then created clinical records based on the doctor-patient dialogues. Finally, we associated each patient's answer with the clinical report in which it was given.


The final dataset comprises 6,290 question-answer pairs derived from 129 distinct clinical cases, stored as JSON following the SQuAD v2.0 format proposed by Rajpurkar et al. (2016). The dataset was partitioned into training, development, and test sets, constituting 75%, 10%, and 15% of the corpus, respectively. Within this dataset, three types of questions are present:

- Questions that need to be answered: questions whose answer must be sought in the clinical report.
  - Answered questions: the response appears in the report.
  - Unanswered questions: the information required to formulate a response is absent from the clinical report. Such questions may arise when the patient fails to comprehend the inquiry or when the answer given in the dialogue does not adequately address the question. In the dataset, these instances are marked with an empty answer span and the "is_impossible" attribute set to True, so the "answer" section remains empty.
- Questions that do not need an answer: instances where an answer is not required, typically occurring when the medical student or doctor makes a comment rather than solicits information. These comments include expressions like "Thank you" or "OK," or statements indicating the next course of action, such as "Now I will check your temperature." Detecting these utterances is crucial, as a standard question-answering system would otherwise attempt to generate a response for every input, which is inappropriate for such statements. In the dataset, these instances are denoted by an "I" in the "answer" section, distinguishing them from the other question types.
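The two markers described above ("is_impossible" for unanswerable questions and the "I" answer for questions that need none) make it possible to recover the question type programmatically. The sketch below is illustrative, not part of the corpus tooling: it assumes SQuAD-style `qas` entries with an `answers` list (the README refers to this as the "answer" section), and the function name is invented.

```python
def classify_question(qa):
    """Classify a VIR-PAT-QA qas entry into one of the three question types.

    Assumes a SQuAD v2.0-style entry: a dict with an "is_impossible" flag
    and an "answers" list of {"text", "answer_start"} dicts.
    """
    texts = [a["text"] for a in qa.get("answers", [])]
    # An answer consisting only of "I" marks a comment that needs no answer.
    if texts == ["I"]:
        return "no_answer_needed"
    # Unanswerable questions carry is_impossible=True and an empty span.
    if qa.get("is_impossible", False):
        return "unanswered"
    return "answered"
```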

In the following table you can find the distribution of each type of question:

| Question type                         | train | dev | test |
|---------------------------------------|------:|----:|-----:|
| Questions that need to be answered    | 4,573 | 496 |  915 |
| - Answered questions                  | 2,753 | 295 |  580 |
| - Unanswered questions                | 1,820 | 201 |  335 |
| Questions that do not need an answer  |   228 |  27 |   51 |
| Total                                 | 4,801 | 523 |  966 |

The corpus consists of several attributes:

- "data": contains all the information regarding clinical reports, questions, and answers.
- "paragraphs": groups the clinical report of a specific patient with its associated questions and answers.
  - "context": the clinical report itself.
  - "qas": the questions and answers. Each question includes its text ("question"), an identifier ("id"), and a flag indicating whether the question can be answered based on the context ("is_impossible").
    - The "answer" element includes the actual text of the answer ("text") and the starting position of the answer within the context ("answer_start").
- "title": the document ID or title; it is not utilized by the model.
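A minimal record following this layout might look as below. All values are invented for illustration; only the field names come from the description above (the key for the answer list is written here as `answers`, as in SQuAD v2.0 files).

```python
import json

# Invented, minimal example that mirrors the attribute layout of the corpus.
record = {
    "data": [
        {
            "title": "caso_001",  # document ID; not used by the model
            "paragraphs": [
                {
                    # "context" is the clinical report in natural language.
                    "context": "Paciente de 45 años que acude por tos seca",
                    "qas": [
                        {
                            "question": "¿Qué síntoma presenta el paciente?",
                            "id": "caso_001_q1",
                            "is_impossible": False,
                            "answers": [
                                # "answer_start" is the character offset of
                                # the answer span within "context".
                                {"text": "tos seca", "answer_start": 34},
                            ],
                        },
                    ],
                },
            ],
        },
    ],
}

# The splits are plain JSON files, so they round-trip through json as-is.
serialized = json.dumps(record, ensure_ascii=False)
```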

Moreover, the QA corpus is distributed in three question formats: one containing only the question itself, one combining the question with the doctor's intention (intent), and one containing only the doctor's intention.
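The three formats can be thought of as three ways of building a model input from a question and the doctor's intent. The helper below is only a sketch: the configuration names come from this repository's metadata, but the example values and the exact way question and intent are combined in the released files are assumptions.

```python
def build_model_input(question: str, intent: str, config: str) -> str:
    """Build a model input string for one of the three corpus configurations.

    The config names match this repository's configs; how intent and
    question are concatenated here is illustrative only.
    """
    if config == "OnlyQuestion":
        return question
    if config == "OnlyIntent":
        return intent
    if config == "Question+Intent":
        return f"{intent} {question}"
    raise ValueError(f"unknown config: {config}")
```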

## References

Faiha Fareez, Tishya Parikh, Christopher Wavell, Saba Shahab, Meghan Chevalier, Scott Good, Isabella De Blasi, Rafik Rhouma, Christopher McMahon, Jean-Paul Lam, Thomas Lo, and Christopher W. Smith. A dataset of simulated patient-physician medical interviews with a focus on respiratory cases. Scientific Data, 9(1):313, 06 2022. ISSN 2052-4463. doi: 10.1038/s41597-022-01423-1. URL https://doi.org/10.1038/s41597-022-01423-1.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, 2016. URL https://aclanthology.org/D16-1264/.