ProGen2: Exploring the Boundaries of Protein Language Models
Abstract
Attention-based models trained on protein sequences have demonstrated incredible success at classification and generation tasks relevant for artificial intelligence-driven protein design. However, we lack a sufficient understanding of how very large-scale models and data play a role in effective protein model development. We introduce a suite of protein language models, named ProGen2, that are scaled up to 6.4B parameters and trained on different sequence datasets drawn from over a billion proteins from genomic, metagenomic, and immune repertoire databases. ProGen2 models show state-of-the-art performance in capturing the distribution of observed evolutionary sequences, generating novel viable sequences, and predicting protein fitness without additional finetuning. As large model sizes and raw numbers of protein sequences continue to become more widely accessible, our results suggest that a growing emphasis needs to be placed on the data distribution provided to a protein sequence model. We release the ProGen2 models and code at https://github.com/salesforce/progen.
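The fitness-prediction claim in the abstract refers to zero-shot scoring: ranking sequence variants by their likelihood under the autoregressive model, with no task-specific finetuning. Below is a minimal sketch of that idea using the Hugging Face transformers API; the checkpoint path, the toy sequences, and the tokenizer behavior are assumptions for illustration, not details taken from the paper or the official release at https://github.com/salesforce/progen.

```python
# Sketch: zero-shot fitness scoring with a causal protein language model.
# The checkpoint id below is a placeholder, not an official identifier.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "path/to/progen2-checkpoint"  # assumption: a ProGen2 checkpoint loadable via transformers

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True).eval()

@torch.no_grad()
def sequence_log_likelihood(sequence: str) -> float:
    """Sum of per-token log-probabilities of an amino-acid sequence."""
    ids = tokenizer(sequence, return_tensors="pt").input_ids
    logits = model(ids).logits[:, :-1, :]            # predict token t+1 from tokens <= t
    targets = ids[:, 1:]
    log_probs = torch.log_softmax(logits, dim=-1)
    token_ll = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_ll.sum().item()

# Rank variants by model likelihood as a proxy for fitness (toy sequences).
wild_type = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
variant   = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA"
print(sequence_log_likelihood(wild_type), sequence_log_likelihood(variant))
```

The same loaded model can also be sampled autoregressively for sequence generation; only the scoring use case is sketched here.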