Measuring and Mitigating Bias in Healthcare Language Models

Algorithms have intrinsic biases that can create systematic errors in real-world applications. These errors often yield unintended outcomes that degrade performance, reliability, and trust. In this talk, Gaurav Kaushik, PhD (Founder, ScienceIO) will provide an overview of methodologies for measuring and mitigating bias to improve the performance and safety of large language models in real-world applications.

He will examine the state of benchmarks for healthcare language modeling, the gaps in those benchmarks, and the broader limitations of benchmarks built for general-purpose language models. Finally, he will describe a framework for diagnosing and correcting bias in healthcare AI systems, to deliver higher-quality models to healthcare operators and application developers.
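To give a flavor of the kind of measurement the talk covers, below is a minimal sketch of one common bias-probing technique, counterfactual substitution: comparing a masked language model's predictions for clinical templates that differ only in a demographic term. The model, template, and demographic terms here are illustrative assumptions, not material from the talk.

```python
# Minimal sketch of counterfactual substitution for bias measurement.
# The model, template, and group terms are illustrative assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

TEMPLATE = "The {group} patient was described as [MASK]."
GROUPS = ["white", "black", "asian", "hispanic"]

for group in GROUPS:
    predictions = fill(TEMPLATE.format(group=group), top_k=5)
    tokens = [p["token_str"].strip() for p in predictions]
    print(f"{group:>10}: {tokens}")

# Systematic differences in the top predictions (or their scores)
# across groups are one simple signal of demographic bias.
```

In practice, a benchmark would aggregate such comparisons over many templates and score the disparity, rather than eyeballing a handful of outputs.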

About the speaker

Gaurav Kaushik

Founder at ScienceIO

Gaurav Kaushik is the Co-Founder of ScienceIO, the language platform for healthcare based in New York, NY, where he leads AI and product. Previously, he led the real-world data group at Foundation Medicine (acquired by Roche), where his team built the most comprehensive clinico-genomic database for oncology.

He was also a product lead at Seven Bridges (acquired by Velsera) for the Cancer Genomics Cloud, an NCI Cloud Resource. Gaurav is a biomedical data scientist by training: he received a BS in Biomedical Engineering from Columbia University and a PhD in Bioengineering from UC San Diego, and completed an NIH fellowship at Harvard Medical School.

NLP Summit

When

Online Event: April 4-5, 2023


Contact

nlpsummit@johnsnowlabs.com

Presented by

John Snow Labs