---
base_model:
- ValiantLabs/Qwen3-14B-Esper3
- DMindAI/DMind-1-mini
- Qwen/Qwen3-14B
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- mergekit
- merge
- web3
- esper
- esper-3
- dmind
- dmind-1-mini
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-14b
- 14b
- reasoning
- code
- code-instruct
- python
- javascript
- dev-ops
- jenkins
- terraform
- scripting
- powershell
- azure
- aws
- gcp
- cloud
- problem-solving
- architect
- engineer
- developer
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
---
# sequelbox/Qwen3-14B-Esper3Web3

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit), combining the specialty skills of Esper 3 14B and [DMindAI/DMind-1-mini](https://huggingface.co/DMindAI/DMind-1-mini).

## Merge Details

### Merge Method

This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method, with [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) as the base model.

### Models Merged

The following models were included in the merge:

* [ValiantLabs/Qwen3-14B-Esper3](https://huggingface.co/ValiantLabs/Qwen3-14B-Esper3)
* [DMindAI/DMind-1-mini](https://huggingface.co/DMindAI/DMind-1-mini)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: della
dtype: bfloat16
parameters:
  normalize: true
models:
  - model: ValiantLabs/Qwen3-14B-Esper3
    parameters:
      density: 0.5
      weight: 0.3
  - model: DMindAI/DMind-1-mini
    parameters:
      density: 0.5
      weight: 0.25
base_model: Qwen/Qwen3-14B
```
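
For reference, here is a minimal sketch of loading the merged model for chat-style text generation with the Hugging Face `transformers` library. This usage example is not part of the original merge recipe; it assumes a `transformers` release with Qwen3 support, the `accelerate` package for `device_map="auto"`, and enough memory to hold a 14B model in bfloat16.

```python
# Minimal usage sketch (assumption: standard transformers text-generation workflow,
# not an official example from the model authors).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sequelbox/Qwen3-14B-Esper3Web3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bfloat16 dtype used for the merge
    device_map="auto",           # requires the accelerate package
)

# Example prompt; Qwen3's default chat template may emit a <think> reasoning block.
messages = [{"role": "user", "content": "Explain what a reentrancy attack is in Solidity."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```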