Measuring Hallucinations in Healthcare RAG

Retrieval Augmented Generation (RAG) is emerging as the architecture pattern of choice for enterprise-grade LLM applications, thanks to its ability to significantly reduce hallucinations and provide citations, both of which increase end users' trust in the results. This is of crucial importance for healthcare applications, where the implications of hallucinations can be dire.

In this talk I will describe what RAG is and how it is implemented, and introduce Vectara’s Hallucination Evaluation Model (HEM), which makes it possible to measure hallucinations in the response. I will demonstrate this with a sample RAG application over healthcare research data.
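To make the idea of hallucination measurement concrete, here is a toy sketch: given a retrieved passage and a generated answer, score how well the answer is supported by the passage. Note this is only an illustrative stand-in; the function names and the token-overlap heuristic are assumptions for this sketch, whereas HEM itself is a learned model that scores factual consistency far more robustly than word overlap.

```python
# Toy sketch of grounding a generated answer in a retrieved passage.
# A stand-in for a learned factual-consistency model such as HEM;
# the overlap heuristic and all names here are illustrative only.
import re

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, punctuation ignored."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def support_score(passage: str, answer: str) -> float:
    """Fraction of answer tokens that also appear in the passage.
    A real consistency model learns entailment; this is only a proxy."""
    answer_tokens = tokenize(answer)
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & tokenize(passage)) / len(answer_tokens)

passage = ("Metformin is a first-line medication for the treatment "
           "of type 2 diabetes.")
grounded = "Metformin is a first-line treatment for type 2 diabetes."
hallucinated = "Metformin cures hypertension in children."

print(f"grounded:     {support_score(passage, grounded):.2f}")
print(f"hallucinated: {support_score(passage, hallucinated):.2f}")
```

A grounded answer scores near 1.0 while the fabricated claim scores near 0.0; a production pipeline would flag or suppress low-scoring responses before showing them to the user.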

About the speaker

Ofer Mendelevitch

Head of Developer Relations at Vectara

Ofer Mendelevitch leads developer relations at Vectara. He has extensive hands-on experience in machine learning, data science and big data systems across multiple industries, and has focused on developing products using large language models since 2019.

Prior to Vectara he built and led data science teams at Syntegra, Helix, LendUp, Hortonworks and Yahoo!. Ofer holds a B.Sc. in Computer Science from the Technion and an M.Sc. in EE from Tel Aviv University, and is the author of “Practical Data Science with Hadoop” (Addison-Wesley).

NLP-Summit

When

Sessions: April 2nd – 3rd 2024
Trainings: April 15th – 19th 2024


Contact

nlpsummit@johnsnowlabs.com

Presented by

John Snow Labs