Biases in NLP Datasets and How They Impact Model Results
Biases have existed since the dawn of humanity. As a society, we have come a long way in addressing and correcting them. Can we live without biases in the world of AI?
What kinds of biases are making their way into your chatbots and automated systems built on pre-trained and fine-tuned models? And what can be done to address these challenges?
Biases in AI systems, including chatbots, highlight the need for ethical applications of generative AI in healthcare, where precision and fairness are crucial to improving patient outcomes.
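One way such biases surface is through the word embeddings that pre-trained models learn from their training data: occupation words can end up geometrically closer to one gendered term than another. As a minimal sketch (using hypothetical, hand-picked toy vectors for illustration, not real model outputs), a simple association score compares how close a word sits to "he" versus "she":

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d "embeddings" with hypothetical values chosen only to
# illustrate the kind of skew real pre-trained embeddings can show.
emb = {
    "doctor": [0.9, 0.3, 0.1],
    "nurse":  [0.2, 0.9, 0.1],
    "he":     [0.8, 0.2, 0.0],
    "she":    [0.1, 0.8, 0.0],
}

def association(word, a="he", b="she"):
    # Positive score: the word sits closer to "he";
    # negative score: closer to "she".
    return cosine(emb[word], emb[a]) - cosine(emb[word], emb[b])

print(association("doctor") > 0)  # skewed toward "he" in this toy data
print(association("nurse") < 0)   # skewed toward "she" in this toy data
```

Bias-measurement methods such as WEAT build on exactly this idea, aggregating association scores over sets of target and attribute words rather than single pairs.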
Vamsi Sisla
Director of Data Sciences at Unify Consulting / UC Berkeley
A proven leader in Data Sciences & Applied AI, with experience managing large-scale projects and teams in data engineering, ML operations, and ML modeling. I have helped develop and implement enterprise AI strategy and have led cloud migration and integration efforts, and I maintain a strong understanding of state-of-the-art (SOTA) ML tools, frameworks, and algorithms.
I enjoy reading the latest ML papers and staying up to date with relevant trends in the field. I lead research on SOTA approaches in areas such as Natural Language Inference, Text Entailment, Q&A, Text Classification, and Semantic Similarity using BERT, LSTMs, Transformers, and other models.
In the past, I have established highly talented engineering teams at various enterprises, and I have a successful track record of tactical execution in fast-paced software and technology development environments. I am committed to delivering high-quality results, overcoming challenges, and focusing on what matters: execution.