---
license: apache-2.0
datasets:
- ddrg/math_text
- ddrg/math_formulas
- ddrg/math_formula_retrieval
- ddrg/named_math_formulas
language:
- en
base_model:
- google-bert/bert-base-cased
---

# MAMUT-BERT (Math Mutator BERT)

MAMUT-BERT is a pretrained language model based on [bert-base-cased](https://huggingface.co/bert-base-cased), further pretrained on mathematical texts and formulas. It was introduced in [MAMUT: A Novel Framework for Modifying Mathematical Formulas for the Generation of Specialized Datasets for Language Model Training](https://arxiv.org/abs/2502.20855). The model aims to provide improved mathematical understanding by extending BERT with domain-specific knowledge from mathematical LaTeX formulas and terminology.

## Model Details

### Overview

MAMUT-BERT was pretrained on four math-specific tasks across four datasets:

- **[Mathematical Formulas (MF)](https://huggingface.co/datasets/ddrg/math_formulas):** A Masked Language Modeling (MLM) task on math formulas written in LaTeX.
- **[Mathematical Texts (MT)](https://huggingface.co/datasets/ddrg/math_text):** An MLM task on natural language text containing inline LaTeX math (*mathematical texts*). The masking probability was biased toward mathematical tokens (inside math environments `$...$`) and domain-specific terms (e.g., *sum*, *one*, ...).
- **[Named Math Formulas (NMF)](https://huggingface.co/datasets/ddrg/named_math_formulas):** A Next Sentence Prediction (NSP)-style task: given a formula and the name of a mathematical identity (e.g., the Pythagorean Theorem), classify whether they match.
- **[Math Formula Retrieval (MFR)](https://huggingface.co/datasets/ddrg/math_formula_retrieval):** Another NSP-style task: decide whether two formulas describe the same mathematical identity or concept.

![Training Overview](mamutbert-training.png)

To support mathematical syntax, *300 additional mathematical [LaTeX-specific tokens](added_tokens.json)* were added to the tokenizer, e.g., `\sum`, `\frac`, and `pmatrix`.

### Model Sources

- **Base Model:** [bert-base-cased](https://huggingface.co/google-bert/bert-base-cased)
- **Pretraining Code:** [aieng-lab/transformer-math-pretraining](https://github.com/aieng-lab/transformer-math-pretraining)
- **MAMUT Repository:** [aieng-lab/math-mutator](https://github.com/aieng-lab/math-mutator)
- **Paper:** [MAMUT: A Novel Framework for Modifying Mathematical Formulas for the Generation of Specialized Datasets for Language Model Training](https://arxiv.org/abs/2502.20855)

## Uses

MAMUT-BERT is intended for downstream tasks that require improved mathematical understanding, such as:

- Formula classification
- Retrieval of *semantically* similar formulas
- Math-related question answering

**Note: This model was saved without the MLM or NSP heads and requires fine-tuning before it can be used for downstream tasks** (see the usage example below).

Similarly trained models are [MAMUT-MathBERT, based on `tbs17/MathBERT`](https://huggingface.co/aieng-lab/MathBERT-mamut), and [MAMUT-MPBERT, based on `AnReu/math_structure_bert`](https://huggingface.co/ddrg/math_structure_bert) (the best of the three models according to our evaluation).

## Training Details

Training configurations are described in [Appendix C of the MAMUT paper](https://arxiv.org/abs/2502.20855).

## Evaluation

The model is evaluated in [Section 7 and Appendix C.4 of the MAMUT paper](https://arxiv.org/abs/2502.20855) (MAMUT-BERT).
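
## Usage Example

The following is a minimal sketch of how the published encoder could be loaded with a fresh classification head for a formula-pair task such as MFR. The Hub identifier `aieng-lab/bert-mamut` is only a placeholder assumption (use this repository's actual name), and `num_labels=2` assumes a binary match/no-match task.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder identifier -- replace with the actual Hub name of this repository.
model_id = "aieng-lab/bert-mamut"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The classification head is initialized randomly, since the model was saved
# without its MLM/NSP heads; num_labels=2 assumes a binary match/no-match task.
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# The extended tokenizer keeps common LaTeX commands such as \sum and \frac as single tokens.
print(tokenizer.tokenize(r"\sum_{i=1}^{n} \frac{1}{i^2}"))

# Encode a formula pair as a BERT sequence pair ([CLS] formula_a [SEP] formula_b [SEP]).
inputs = tokenizer(r"a^2 + b^2 = c^2", r"c = \sqrt{a^2 + b^2}", return_tensors="pt")
logits = model(**inputs).logits  # untrained head: fine-tune before interpreting these scores
```

Fine-tuning then proceeds as for any BERT-style sequence-pair classifier, e.g., with the `Trainer` API.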
## Environmental Impact

- **Hardware Type:** 8x A100
- **Hours Used:** 48
- **Compute Region:** Germany

## Citation

**BibTeX:**

```bibtex
@article{drechsel2025mamut,
  title={{MAMUT}: A Novel Framework for Modifying Mathematical Formulas for the Generation of Specialized Datasets for Language Model Training},
  author={Jonathan Drechsel and Anja Reusch and Steffen Herbold},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025},
  url={https://openreview.net/forum?id=khODmRpQEx}
}
```