JUNE 23 @ 1 PM ET | 10 AM PT

Monitoring your Production NLP Models



Today, NLP (Natural Language Processing) algorithms power a wide range of intelligent applications, from smart devices and customer service chatbots to document processing, search, and targeting. It's hard to develop a state-of-the-art NLP application, and it's even harder to monitor and guarantee quality and consistency in production.

With these models making key product and business decisions, it's imperative that we have access to specialized production monitoring tools and techniques designed with the complexity and unique approaches of NLP algorithms in mind. For example, knowing whether your production model is making inaccurate predictions requires ground truth, which is complex and time-consuming to obtain once you account for languages, geographies, context, emotions, and other NLP nuances. On top of that, ground truth for NLP is ambiguous and not always black and white.

In this talk, we will discuss why monitoring NLP models is a fundamentally complex problem and cover the key considerations for a model monitoring system. Finally, we will dig into a specific NLP use case and demonstrate how to leverage the new Verta Model Monitoring capability to easily monitor any NLP model's performance, identify model/data drift and errors, segment model inputs and outputs by cohort, and perform root cause analysis.
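To make "data drift" concrete ahead of the talk, here is a minimal, generic sketch of one check a monitoring system might run: the Population Stability Index (PSI), a standard drift metric, computed over a simple text feature (token count per request). This is an illustrative example only; the function, feature choice, and thresholds are assumptions for demonstration and do not reflect the Verta Model Monitoring API.

```python
# Hypothetical drift check: Population Stability Index (PSI) on a simple
# text feature (tokens per request). Generic sketch, not the Verta API.
import math

def psi(baseline, production, bins=5):
    """PSI between two numeric samples, binned on the baseline's range.

    Common rule of thumb: PSI < 0.1 little shift, 0.1-0.25 moderate,
    > 0.25 a significant distribution change worth investigating.
    """
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0] = float("-inf")   # catch production values outside the
    edges[-1] = float("inf")   # baseline's observed range

    def frac(sample, i):
        count = sum(1 for x in sample if edges[i] <= x < edges[i + 1])
        return max(count / len(sample), 1e-6)  # avoid log(0)

    return sum(
        (frac(production, i) - frac(baseline, i))
        * math.log(frac(production, i) / frac(baseline, i))
        for i in range(bins)
    )

# Token counts seen at training time vs. in production requests
# (made-up numbers for illustration).
baseline_lengths = [12, 15, 14, 13, 16, 15, 14, 12, 13, 15]
drifted_lengths = [25, 28, 27, 30, 26, 29, 28, 27, 31, 26]

print(psi(baseline_lengths, baseline_lengths))  # ~0: no drift
print(psi(baseline_lengths, drifted_lengths))   # large: inputs have shifted
```

In practice a production system would run checks like this over embeddings and prediction distributions as well, segmented by the same cohorts (language, geography, channel) mentioned above, so that a drift alert points at the slice of traffic that changed.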




Presented by Meeta Dash, VP Product at Verta

As VP Product at Verta, Meeta Dash is building MLOps tools to help data science teams track, deploy, operate, and monitor models and bring order to enterprise AI/ML chaos. Prior to Verta, Meeta held several product leadership roles at Appen, Figure Eight, Cisco Systems, Tokbox/Telefonica, and Computer Associates, building ML data platforms, voice and conversational AI products, and analytics/operational monitoring tools. Meeta has an MBA from UC Davis and an engineering degree from the National Institute of Technology, India.