Foundation Models For Biomedical NLP

In this talk we will present Stanford CRFM's efforts to train foundation models for biomedical NLP, culminating in a 2.7B-parameter GPT-style model trained on PubMed abstracts and full-text articles in partnership with MosaicML. We will walk through the training process, the results of our experiments, next steps, and the broader topic of recent advances in biomedical question answering with language models.

The model has achieved promising results on question-answering tasks at a relatively small scale, and we hope it will be useful both for studying domain-specific language models and for building biomedical NLP applications. We are aiming for a vibrant discussion with the biomedical NLP community so we can understand what users want in an open-source biomedical language model and gain insights for the future models we train.
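To illustrate the kind of application building the model is meant to enable, here is a minimal sketch of prompting a GPT-style biomedical checkpoint with the Hugging Face transformers library. The model identifier "stanford-crfm/BioMedLM" is an assumption for the released checkpoint, not something stated in this announcement; substitute the actual checkpoint name if it differs.

# Minimal sketch: prompting a GPT-style biomedical LM via Hugging Face transformers.
# The checkpoint name "stanford-crfm/BioMedLM" is an assumption; swap in the
# identifier of the released model if it differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stanford-crfm/BioMedLM")
model = AutoModelForCausalLM.from_pretrained("stanford-crfm/BioMedLM")

prompt = "Metformin is a first-line treatment for"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding keeps the example deterministic; tune max_new_tokens as needed.
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))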

About the speaker

Elliot Bolton

Research Engineer at Stanford CRFM

Elliot is a research engineer working on language models at Stanford CRFM. He is currently working on a large-scale, open-source foundation model for biomedical NLP. For nearly a decade, Elliot has helped build and maintain open-source Stanford projects including CoreNLP, Stanza, and Mistral.

NLP Summit

When

Online Event: April 4-5, 2023


Contact

nlpsummit@johnsnowlabs.com

Presented by

John Snow Labs