---
base_model:
- google/flan-t5-large
- andgonzalez/flan-t5-large-samsum
- Varshitha/flan-t5-large-finetune-medicine-v5
- google/flan-t5-large
- jbochi/flan-t5-large-spelling-peft
library_name: transformers
tags:
- mergekit
- merge
---

# merged_t5

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) as the base model.

### Models Merged

The following models were included in the merge:
* [andgonzalez/flan-t5-large-samsum](https://huggingface.co/andgonzalez/flan-t5-large-samsum)
* [Varshitha/flan-t5-large-finetune-medicine-v5](https://huggingface.co/Varshitha/flan-t5-large-finetune-medicine-v5)
* [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) + [jbochi/flan-t5-large-spelling-peft](https://huggingface.co/jbochi/flan-t5-large-spelling-peft)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: task_arithmetic
base_model: google/flan-t5-large
models:
  - model: google/flan-t5-large
  - model: Varshitha/flan-t5-large-finetune-medicine-v5
    parameters:
      weight: 0.75
  - model: andgonzalez/flan-t5-large-samsum
    parameters:
      weight: 0.6
  - model: google/flan-t5-large+jbochi/flan-t5-large-spelling-peft
    parameters:
      weight: 0.3
```
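To make the merge method concrete: task arithmetic builds a "task vector" for each fine-tuned model (its parameters minus the base model's), scales each vector by its configured weight, and adds the weighted sum back onto the base. The sketch below is a toy illustration of that rule on plain Python lists — the parameter values are invented for demonstration, and a real merge applies this element-wise across every tensor in the checkpoints.

```python
# Toy stand-ins for a single parameter vector from each checkpoint.
# Values are invented; only the arithmetic mirrors the merge method.
base = [1.0, 2.0, 3.0]       # base model (e.g. flan-t5-large)
medicine = [1.5, 2.0, 3.5]   # fine-tuned variant, weight 0.75 in the config
samsum = [1.0, 2.5, 3.0]     # fine-tuned variant, weight 0.6 in the config

def task_vector(model, base):
    """Difference between a fine-tuned model and the base."""
    return [m - b for m, b in zip(model, base)]

def merge(base, weighted_tasks):
    """Add a weighted sum of task vectors back onto the base.

    weighted_tasks: list of (weight, task_vector) pairs.
    """
    merged = list(base)
    for weight, tv in weighted_tasks:
        merged = [m + weight * t for m, t in zip(merged, tv)]
    return merged

merged = merge(base, [(0.75, task_vector(medicine, base)),
                      (0.6, task_vector(samsum, base))])
print(merged)  # [1.375, 2.3, 3.375]
```

With mergekit installed, a configuration like the one above is typically applied with its `mergekit-yaml` command pointed at the config file and an output directory.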