
PreDA-small (Prefix-Based Dream Reports Annotation)

This model is a fine-tuned version of google-t5/t5-small on the annotated Dreambank.net dataset. Its results on the evaluation set are reported in the Metrics and Evaluation sections below.

Intended uses & limitations

This model is designed for research purposes. See the disclaimer for more details.

Training procedure

The overall idea of our approach is to disentangle each dream report from its annotation as a whole and to create an augmented set of (dream report; single feature annotation) pairs. To make sure that, given the same report, the model produces a specific HVDC feature, we simply prepend to each report a string of the form "HVDC-Feature:", closely mimicking T5's task-specific prefix fine-tuning.
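
For illustration, a minimal sketch of this augmentation step (the item structure and field names are assumptions, not the exact preprocessing code; the annotation strings are taken from the Usage example further below):

# Turn each annotated report into one (prefixed report, single-feature annotation) pair per HVDC feature.
def augment(items):
    # items: list of {"report": str, "annotations": {HVDC feature name: annotation string}}
    augmented = []
    for item in items:
        for feature, annotation in item["annotations"].items():
            augmented.append({
                "feature": feature,
                # Prepend the target HVDC feature as a task prefix, mirroring the Usage example below.
                "source": "{} : {}".format(feature, item["report"]),
                "target": annotation,
            })
    return augmented

example = {
    "report": "I was talking with my brother about my birthday dinner. I was feeling sad.",
    "annotations": {"Emotion": "(Dreamer, sadness)", "Characters": "individual male brother teenager"},
}
augmented = augment([example])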

After applying this procedure to the original dataset (~1.8K reports), we obtain approximately 6.6K items. In the present study, we focused on a subset of six HVDC features: Characters, Activities, Emotion, Friendliness, Misfortune, and Good Fortune. This selection was made to exclude features representing less than 10% of the total instances. Notably, Good Fortune would have been excluded under this criterion, but we intentionally retained it to control against potential memorisation effects and to provide a counterbalance to the Misfortune feature. After filtering out instances whose annotated feature is not one of the six selected features, we are left with ~5.3K dream reports. We then generate a random 80%-20% split into training (4,311 reports) and testing (1,078 reports) sets, as sketched below.
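
A hedged sketch of the filtering and split, continuing from the augmented list sketched above and using the datasets library (the split seed here is illustrative, not necessarily the one used for the released model):

from datasets import Dataset

KEPT_FEATURES = {"Characters", "Activities", "Emotion", "Friendliness", "Misfortune", "Good Fortune"}

# Drop augmented items whose target feature is not one of the six selected HVDC features.
filtered = [x for x in augmented if x["feature"] in KEPT_FEATURES]

# Random 80%-20% train/test split.
splits = Dataset.from_list(filtered).train_test_split(test_size=0.2, seed=42)
train_set, test_set = splits["train"], splits["test"]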

Training

Hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 128
  • eval_batch_size: 128
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 20
  • mixed_precision_training: Native AMP
  • label_smoothing_factor: 0.1
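
These settings roughly correspond to the Seq2SeqTrainingArguments sketch below; it is not the exact training script, and output_dir and the evaluation strategy are assumptions:

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="preda-small",        # assumption
    learning_rate=1e-3,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=20,
    fp16=True,                       # Native AMP mixed-precision training
    label_smoothing_factor=0.1,
    eval_strategy="epoch",           # assumption: evaluation once per epoch, as in the table below
    predict_with_generate=True,
)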

Metrics

Training Loss Epoch Step Validation Loss Rouge1 Rouge2 RougeL RougeLsum
2.5334 1.0 34 2.2047 0.3655 0.1949 0.3570 0.3570
2.0989 2.0 68 2.0026 0.5321 0.3606 0.5169 0.5168
1.9755 3.0 102 1.9020 0.5873 0.4139 0.5639 0.5646
1.9445 4.0 136 1.8645 0.5968 0.4271 0.5800 0.5805
1.8995 5.0 170 1.8282 0.6438 0.4882 0.6216 0.6216
1.8616 6.0 204 1.7978 0.6675 0.5107 0.6473 0.6473
1.8316 7.0 238 1.7784 0.6890 0.5369 0.6638 0.6636
1.8049 8.0 272 1.7542 0.7191 0.5761 0.6934 0.6937
1.7977 9.0 306 1.7373 0.7322 0.5953 0.7049 0.7052
1.7642 10.0 340 1.7219 0.7545 0.6213 0.7248 0.7252
1.7562 11.0 374 1.7072 0.7664 0.6389 0.7418 0.7423
1.7437 12.0 408 1.6961 0.7777 0.6519 0.7494 0.7496
1.7271 13.0 442 1.6838 0.7893 0.6715 0.7636 0.7638
1.7238 14.0 476 1.6765 0.7946 0.6759 0.7701 0.7703
1.7151 15.0 510 1.6706 0.8065 0.6918 0.7830 0.7833
1.6997 16.0 544 1.6605 0.8143 0.7006 0.7889 0.7892
1.6937 17.0 578 1.6552 0.8202 0.7100 0.7965 0.7968
1.6919 18.0 612 1.6505 0.8238 0.7176 0.8019 0.8019
1.6826 19.0 646 1.6493 0.8262 0.7210 0.8039 0.8040
1.6811 20.0 680 1.6467 0.8305 0.7257 0.8080 0.8080

Evaluation

We selected the best model via validation loss. The table below reports overall and feature-specific scores.

Feature Rouge1 Rouge2 RougeL RougeLsum
Activities 72.1 59.8 66.3 66.3
Characters 90.9 86.2 88.5 88.5
Emotion 85.3 77.1 84.7 84.7
Friendliness 74.3 60.5 70.4 70.4
Good Fortune 79.1 11.5 78.6 78.6
Misfortune 65.5 48.9 65.3 65.2
Overall 83.1 72.6 80.8 80.8
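
ROUGE scores like those above can be computed, for instance, with the evaluate library (an assumption, since evaluate is not listed among the framework versions below):

import evaluate

rouge = evaluate.load("rouge")

# Hypothetical prediction and gold annotation for a single feature.
predictions = ["(Dreamer, sad)"]
references = ["(Dreamer, sadness)"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}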

Disclaimer

Dream reports and their annotations have been used in clinical settings and applied for diagnostic purposes. This does not apply in any way to our experimental results and outputs. Our work aims to provide experimental evidence of the feasibility of using PLMs to support humans in annotating dream reports for research purposes, as well as to detail their strengths and limitations when approaching such a task.

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.1.0+cu118
  • Datasets 3.0.1
  • Tokenizers 0.19.1

Usage

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "jrc-ai/PreDA-small"
device = "cpu"
encoder_max_length = 100
decoder_max_length = 50

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id).to(device)

dream = "I was talking with my brother about my birthday dinner. I was feeling sad."
prefixes = ["Emotion", "Activities", "Characters"]
# Prepend the target HVDC feature to the report, mirroring the training-time prefixes.
text_inputs = ["{} : {}".format(p, dream) for p in prefixes]

inputs = tokenizer(
    text_inputs,
    max_length=encoder_max_length,
    truncation=True,
    padding=True,
    return_tensors="pt"
)

output = model.generate(
    **inputs.to(device),
    do_sample=False,
    max_length=decoder_max_length,
)

for decode_dream in output:
    print(tokenizer.decode(decode_dream, skip_special_tokens=True))

# (Dreamer, sadness).
# (Dreamer, V= to, individual male brother teenager).
# individual male brother teenager.

Dual-Use Implication

Upon evaluation, we identified no dual-use implications for the present model.

Cite

Please note that the paper referring to this model, titled PreDA: Prefix-Based Dream Reports Annotation with Generative Language Models, has been accepted for publication at the LOD 2025 conference and will appear in the conference proceedings.
