Trustworthy Health AI: Challenges & Lessons Learned
While generative AI models and applications hold enormous potential across healthcare, their successful deployment requires addressing several considerations around ethics, trustworthiness, and safety, including domain‑specific evaluation, hallucinations, truthfulness and grounding, safety and alignment, bias and fairness, robustness and security, privacy and unlearning, calibration and confidence, and transparency. In this talk, we first highlight the key challenges and opportunities for AI in healthcare, and then discuss the unique challenges associated with trustworthy deployment of generative AI in healthcare. Focusing on the clinical documentation use case, we present practical guidelines for applying responsible AI techniques effectively and discuss lessons learned from deploying responsible AI approaches for generative AI applications in practice. This talk is based on our KDD’24 Health Day talk (https://docs.google.com/presentation/d/1SfFXL1GTg0UfZWCHtOwEsfm19PYPzCdMN_CcSH5AdhQ/edit?usp=sharing) and our KDD’24 tutorial on LLM grounding and evaluation (https://docs.google.com/presentation/d/1Fdk5ONzUlKQTNz3h6rUz9FNK2llznJ176Vz4gAJ-mJw/edit?usp=sharing and https://arxiv.org/abs/2407.12858).
About the speaker

Krishnaram Kenthapadi
Chief Scientist, Clinical AI at Oracle
When
Sessions: April 2–3, 2024
Trainings: April 15–19, 2024