Biases in NLP Datasets and How They Impact Model Results

October 7th at 3:30 PM ET – 4:00 PM ET

Register – Free

Model governance defines a collection of best practices for data science: versioning, reproducibility, experiment tracking, automated CI/CD, and more. In a high-compliance setting, where the data used for training or inference contains protected health information (PHI) or similarly sensitive data, additional requirements apply, such as strong identity management, role-based access control, approval workflows, and a full audit trail. This webinar summarizes the requirements and best practices for establishing a high-productivity data science team in a high-compliance environment. It then demonstrates how these requirements can be met using John Snow Labs’ Healthcare AI Platform.
About the speaker

Vamsi Sisla 

Director of Data Sciences at Unify Consulting / UC Berkeley

A proven leader in data science and applied AI, with experience managing large-scale projects and teams in data engineering, ML operations, and ML modeling. I have helped develop and implement enterprise AI strategy and have led cloud migration and integration efforts, with a strong understanding of state-of-the-art (SOTA) ML tools, frameworks, and algorithms.

I enjoy reading the latest ML papers and staying up to date with relevant trends in the field. I lead research on SOTA approaches in areas such as natural language inference, text entailment, question answering, text classification, and semantic similarity using BERT, LSTMs, Transformers, and other architectures.

I have established highly talented engineering teams at various enterprises and have a successful track record of tactical execution in fast-paced software and technology development environments. I am committed to delivering high-quality results, overcoming challenges, and focusing on what matters: execution.

When

Sessions: October 6 – 9
Trainings: October 13 – 16

Contact

nlpsummit@johnsnowlabs.com
