Tackle the challenges of deploying, monitoring, and managing models in production, and of managing data science workflows and teams.
Sessions will also include best practices from domain experts on operationalizing ML at scale and cover the most current and common challenges in MLOps today. Connect & engage with others through live Q&A and discussions.
Founder & CEO
Software Engineer, Machine Learning Platform
Head of AI & ML
Manager & Lead ML Platform Engineer
ML Cloud Architect, MLOps
Sr. Engineering Manager
QuantumBlack, McKinsey & Company
Bringing ML to production with MLOps | Machine Learning Challenges in the Enterprise | Accelerating ML | Deploying serverless ML pipelines | Kubernetes | ML Pipelines | ML Workflows | Model Monitoring | Model Management | Collaboration and Teams | Cost Optimization of ML workflows
12:00 pm ET
Verta MLOps Salon Welcome and Kickoff
Manasi Vartak - Founder & CEO at Verta
12:10 pm ET
Deployment for free: removing the need to write model deployment code at Stitch Fix
Stefan Krawczyk - Manager & Lead ML Platform Engineer at Stitch Fix
In this talk I’ll cover how the Model Lifecycle team on the Data Platform built a system dubbed the “Model Envelope” to enable “deployment for free”: no code needs to be written by a data scientist to deploy any Python model to production, where production means either a micro-service or a batch Python/Spark job. With our approach we remove the need for data scientists to worry about Python dependencies or instrumenting model monitoring, since we can take care of these, along with other MLOps concerns, for them. Specifically, the talk will cover:
* The API interface we provide to data scientists and how it decouples deployment concerns.
* How we automatically infer a type-safe API for models of any shape.
* How we handle Python dependencies so data scientists don’t have to.
* How our relationship & approach enable us to inject & change MLOps approaches without having to coordinate much with data scientists.
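The abstract doesn’t show Stitch Fix’s actual API, but the idea of inferring a type-safe API from a model can be sketched hypothetically: introspect the type hints on a data scientist’s predict function to derive an input schema at save time, so the platform owns deployment concerns. All names here (`save_model`, the envelope dict) are illustrative assumptions, not the real system.

```python
import inspect
import pickle

def save_model(name, model, predict_fn):
    """Hypothetical 'model envelope' save call: bundles the model with an
    API schema inferred from the predict function's type hints, so no
    deployment code is written by the data scientist."""
    sig = inspect.signature(predict_fn)
    schema = {
        param: p.annotation.__name__
        for param, p in sig.parameters.items()
        if p.annotation is not inspect.Parameter.empty
    }
    return {
        "name": name,
        "model": pickle.dumps(model),   # serialized model artifact
        "api_schema": schema,           # inferred, typed request schema
    }

# A data scientist only writes the predict function, with type hints:
def predict(age: int, spend: float) -> float:
    return 0.1 * age + spend

env = save_model("churn_model", {"coef": 0.1}, predict)
print(env["api_schema"])  # {'age': 'int', 'spend': 'float'}
```

From a schema like this, a platform could generate both a micro-service endpoint and a batch-job interface without further input from the model author.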
12:45 pm ET
How to manage model lifecycle with a Model Registry
Meeta Dash - VP Product at Verta
1:20 pm ET
Drift detection on data for monitoring machine learning models in production
Adam Lieberman - Head of AI & ML at Finastra
Pushing a model into production is no small feat. From notebook to productized deployment, we have to not only develop a high-performing model but also consider latency, throughput, energy, memory usage, model size, deployment resources, and much more to get our model into the hands of our users. Once we get a model into the wild, we might think the fun is over, but we need constant monitoring to ensure models are functioning as intended. When monitoring models in production we need to constantly look out for drift: a change in our data that can cause models to behave in unintended ways. In this talk we will define the concept of drift, the many forms it can take, statistical measures to quantify drift, and some mitigation strategies for keeping models healthy and serving the needs of our users.
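One common statistical measure of the kind the abstract mentions is the two-sample Kolmogorov–Smirnov test, which compares a production feature’s distribution against the training-time reference. A minimal sketch, assuming SciPy is available (the function name and threshold are illustrative):

```python
import numpy as np
from scipy import stats

def detect_drift(reference, current, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test: flags drift when the current
    feature distribution differs significantly from the reference."""
    statistic, p_value = stats.ks_2samp(reference, current)
    return {"statistic": statistic,
            "p_value": p_value,
            "drift": bool(p_value < alpha)}

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)   # training-time distribution
shifted = rng.normal(0.5, 1.0, 5000)    # production data with a mean shift

print(detect_drift(baseline, baseline))  # identical data: no drift
print(detect_drift(baseline, shifted))   # mean shift: drift flagged
```

In practice a monitor would run a check like this per feature on a schedule, alerting (or triggering retraining) when drift is flagged.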
2:05 pm ET
Introduction to ML compilers
Chip Huyen - Adjunct Lecturer at Stanford University
2:40 pm ET
Scaling ML Platform Responsibly at DoorDash
Hien Luu - Sr. Engineering Manager at DoorDash
3:15 pm ET
Algorithmic Fairness: From Theory to Practice
Ines Marusic - Engagement Manager at QuantumBlack, McKinsey & Company
Recent advances in machine learning have enabled us to automate decisions and processes across many specific tasks. Machine learning is increasingly being used to make decisions that can severely affect people’s lives, for instance, in education, hiring, lending, and criminal risk assessment. In these areas, algorithms are used to make predictions for tasks such as screening job candidates, issuing insurance policies, or approving loans. However, the training data often contains bias that exists in our society. This bias can be absorbed or even amplified by the systems, leading to decisions that are unfair with respect to gender or other sensitive attributes (e.g., race). The goal of algorithmic fairness is to design algorithms that make fair predictions devoid of discrimination. I will discuss the recent advances coming from the machine learning research community on algorithmic fairness, including detection of bias in data, assessment of fairness of machine learning models, and post-processing methods for model predictions to achieve fairness. I will also provide a practitioner’s perspective through best practices for incorporating techniques from algorithmic fairness effectively into products across a variety of industries, including pharma, banking, and insurance.
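One of the simplest fairness assessments the abstract alludes to is demographic parity: comparing positive-prediction rates across groups defined by a sensitive attribute. A minimal sketch (the function name and toy data are illustrative, not from the talk):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.
    0 means parity; a larger gap indicates more disparate treatment."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Toy loan-approval predictions for two groups of four applicants each:
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, group))  # 0.75 vs 0.25 -> gap of 0.5
```

Real audits would use richer metrics (equalized odds, calibration by group) and, as the talk notes, post-processing methods to close such gaps.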
3:50 pm ET
Kubeflow pipelines and its operational challenges at scale
Mohan Muppidi - ML Cloud Architect - MLOps at iRobot
iRobot has been using Kubeflow Pipelines for the data, training, and validation needs of ML scientists for a year now. This infrastructure has been used at various stages of model development, from data prep to dataset building, training, validation, and testing. This talk is about the good, bad, and ugly sides of managing and using Kubeflow Pipelines infrastructure for machine learning research and development.
4:30 pm ET
45-Minute Panel Discussion
Manasi Vartak - Founder & CEO at Verta
5:25 pm ET