Building Reproducible Evaluation Processes for Spark NLP Models
Healthcare organizations face numerous challenges when developing high-quality machine learning models. Data is often noisy and unstructured, and building successful models involves experimenting with many parameter configurations, datasets, and model types. Tracking and analyzing the results of all these variations quickly becomes difficult as an ML team grows.
When building models in this environment, you must iterate quickly and frequently while preserving transparency into your process, and your choice of tools can limit your ability to do so. In this session, Comet Data Scientist Dhruv Nair will share how Spark NLP users can leverage the integration with Comet’s ML development platform to create robust evaluation processes for NLP models. You will learn how to use these tools to improve team collaboration, model reproducibility, and experimentation velocity.
By the end of this session, you will understand how to track your experiments, create visibility into your model development process, and share results and progress with your team.
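To give a flavor of what this kind of tracking looks like in practice, below is a minimal sketch that pairs a bare-bones Spark NLP pipeline with Comet’s core Experiment API. The project name, logged parameters, and metric are illustrative assumptions, not the exact workflow the session will demonstrate; recent Spark NLP releases also ship a dedicated Comet logging helper, which the session is likely to cover in more depth.

```python
# Minimal sketch: tracking a Spark NLP run with Comet.
# Assumes a Comet API key is configured (e.g. via the COMET_API_KEY
# environment variable). Names below are illustrative.
from comet_ml import Experiment  # import comet_ml before ML frameworks

import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer
from pyspark.ml import Pipeline

spark = sparknlp.start()

# Start a Comet experiment to record parameters, metrics, and artifacts.
experiment = Experiment(project_name="spark-nlp-evaluation")  # illustrative name

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

pipeline = Pipeline(stages=[document_assembler, tokenizer])

# Log the pipeline configuration so the run is reproducible.
experiment.log_parameters({
    "stages": [type(stage).__name__ for stage in pipeline.getStages()],
})

# ... fit the pipeline and evaluate on held-out data, then log results, e.g.:
# experiment.log_metric("f1", f1_score)

experiment.end()
```

Because every run records its configuration and results in one place, teammates can compare experiments side by side instead of reconstructing them from notebooks or ad hoc spreadsheets.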
Dhruv Nair
Data Scientist at Comet ML
Dhruv Nair is a Data Scientist at Comet working on community and growth initiatives, with experience building software tools for industrial R&D. He is responsible for expanding Comet’s Experiment Management capabilities by building integrations with tools and libraries from the broader ML community, and for defining how experiment management tools can be integrated into model development workflows to bring robustness, reproducibility, and transparency to the model creation process.
Previously, he was a Research Engineer with IBM’s Physical Analytics group, where he worked on autonomous environment monitoring systems aimed at understanding macroscopic physical systems using physics-based models combined with statistical and machine learning techniques.
When
Sessions: April 5–6, 2022
Trainings: April 12–15, 2022