Attention-based models trained on protein sequences have demonstrated incredible success at classification and generation tasks relevant for artificial intelligence-driven protein design. However, we lack a sufficient understanding of how model scale and data composition contribute to effective protein model development. We introduce a suite of protein language models, named ProGen2, that are scaled up to 6.4B parameters and trained on different sequence datasets drawn from over a billion proteins from genomic, metagenomic, and immune repertoire databases. ProGen2 models show state-of-the-art performance in capturing the distribution of observed evolutionary sequences, generating novel viable sequences, and predicting protein fitness without additional finetuning. As large model sizes and raw numbers of protein sequences continue to become more widely accessible, our results suggest that a growing emphasis needs to be placed on the data distribution provided to a protein sequence model. We release the ProGen2 models and code at https://github.com/salesforce/progen.
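To illustrate the zero-shot fitness prediction claim, the following is a minimal sketch (not the authors' published evaluation code) of scoring protein variants with an autoregressive protein language model: the fitness proxy is the sequence log-likelihood under the model. The checkpoint identifier and the Hugging Face loading path are assumptions; consult https://github.com/salesforce/progen for the released ProGen2 checkpoints and their official loading instructions.

```python
# Hedged sketch: zero-shot fitness scoring via sequence log-likelihood.
# The model identifier below is hypothetical; any causal protein LM loadable
# through transformers would follow the same pattern.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "progen2-checkpoint"  # hypothetical; substitute a released ProGen2 model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)
model.eval()

def sequence_log_likelihood(sequence: str) -> float:
    """Sum of per-token log-probabilities under the autoregressive model."""
    ids = tokenizer(sequence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Shift logits so each position predicts the following token.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    token_ll = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_ll.sum().item()

# Rank variants by log-likelihood as a zero-shot fitness proxy (no finetuning).
wild_type = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
variant   = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA"
print(sequence_log_likelihood(wild_type), sequence_log_likelihood(variant))
```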