LIVE ONLINE EVENT | July 15 @ 12:00 PM ET

MLOps Salon:

Applying MLOps at Scale

Brought to you by Verta

Watch On-demand

Tackle the challenges of deploying, monitoring, and managing models in production, as well as managing data science workflows and teams.

Sessions also include best practices from domain experts on operationalizing ML at scale and cover today's most common MLOps challenges. Connect and engage with others through live Q&A and discussions.

The Verta MLOps Salon Series is a quarterly event focused on the operationalization of machine learning models in the real world. Bringing together experts from industry and research, the MLOps Salon events showcase best practices, real-world case studies, and community-oriented interactive panels. Join us for the live event and continue the discussion on Slack!

Speakers


Manasi Vartak

Founder & CEO

Verta

Kornel Csernai

Software Engineer, Machine Learning Platform

DoorDash

Adam Lieberman

Head of AI & ML

Finastra

Stefan Krawczyk

Manager & Lead ML Platform Engineer

Stitch Fix

Mohan Muppidi

ML Cloud Architect, MLOps

iRobot

Hien Luu

Sr. Engineering Manager

DoorDash


Meeta Dash

VP Product

Verta

Ines Marusic

Engagement Manager

QuantumBlack, McKinsey & Company

Chip Huyen

Adjunct Lecturer

Stanford University

Topics

Bringing ML to Production with MLOps  |  Machine Learning Challenges in the Enterprise  |  Accelerating ML  |  Deploying Serverless ML Pipelines  |  Kubernetes  |  ML Pipelines  |  ML Workflows  |  Model Monitoring  |  Model Management  |  Collaboration and Teams  |  Cost Optimization of ML Workflows

Schedule

12:00 pm ET

Verta MLOps Salon Welcome and Kickoff

Manasi Vartak - Founder & CEO at Verta

12:10 pm ET

Deployment for free: removing the need to write model deployment code at Stitch Fix

Stefan Krawczyk - Manager & Lead ML Platform Engineer at Stitch Fix

In this talk, I’ll cover how the Model Lifecycle team on Stitch Fix’s Data Platform built a system dubbed the “Model Envelope” to enable “deployment for free”: no code needs to be written by a data scientist to deploy any Python model to production, where production means either a microservice or a batch Python/Spark job. With our approach, data scientists no longer need to worry about Python dependencies or instrumenting model monitoring, since we take care of those and other MLOps concerns for them. Specifically, the talk will cover:

- The API interface we provide to data scientists and how it decouples deployment concerns.
- How we automatically infer a type-safe API for models of any shape.
- How we handle Python dependencies so data scientists don't have to.
- How our relationship and approach let us inject and change MLOps techniques without much coordination with data scientists.
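To make the auto-inferred API idea concrete, here is a minimal sketch of what a "save once, deploy for free" interface might look like. The `save_model` and `infer_api_signature` helpers below are hypothetical illustrations based on the abstract, not Stitch Fix's actual Model Envelope API.

```python
import inspect
from typing import Any, Callable, Dict

def infer_api_signature(predict_fn: Callable) -> Dict[str, Any]:
    # Read the predict function's type hints to derive a type-safe API
    # schema, so the data scientist never writes serving code by hand.
    # (Hypothetical sketch of the "auto-inferred API" idea.)
    sig = inspect.signature(predict_fn)
    inputs = {
        name: param.annotation
        for name, param in sig.parameters.items()
        if param.annotation is not inspect.Parameter.empty
    }
    return {"inputs": inputs, "output": sig.return_annotation}

def save_model(name: str, predict_fn: Callable) -> None:
    # Hypothetical "Model Envelope"-style save call: a real system would
    # also capture the model artifact and its Python dependencies so the
    # platform can deploy it as a microservice or batch job with no
    # extra code from the data scientist.
    schema = infer_api_signature(predict_fn)
    print(f"Saved {name!r} with inferred API schema: {schema}")

# The data scientist only writes model logic, never deployment code.
def predict(age: float, income: float) -> float:
    return 0.3 * age + 0.001 * income  # stand-in for a trained model

save_model("toy-regressor", predict)
```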

12:45 pm ET

How to manage model lifecycle with a Model Registry

Meeta Dash - VP Product at Verta

1:20 pm ET

Drift detection on data for monitoring machine learning models in production

Adam Lieberman - Head of AI & ML at Finastra

Pushing a model into production is no small feat. From notebook to productized deployment, we must not only develop a high-performing model but also consider latency, throughput, energy, memory usage, model size, deployment resources, and much more to get our model into the hands of our users. Once a model is in the wild, we might think the fun is over, but it needs constant monitoring to ensure it is functioning as intended. When monitoring models in production, we need to be on constant lookout for drift: a change in our data that can cause models to behave in unintended ways. In this talk, we will define the concept of drift, the many forms it can take, statistical measures to quantify it, and some mitigation strategies for keeping models healthy and serving the needs of our users.
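To give a flavor of the statistical measures such a talk might cover, here is a minimal sketch of drift detection on a single numeric feature using a two-sample Kolmogorov–Smirnov test; the toy data and significance threshold are illustrative assumptions, not Finastra's production setup.

```python
import numpy as np
from scipy import stats

def detect_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    # Flag drift when a two-sample KS test rejects the hypothesis that
    # the reference (training-time) and live feature samples were drawn
    # from the same distribution.
    _statistic, p_value = stats.ks_2samp(reference, live)
    return p_value < alpha

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # mean has shifted

print(detect_drift(train_feature, live_feature))  # True: the shift is flagged
```

In practice, teams run checks like this per feature on a schedule and alert only when drift persists, since a single rejected test on noisy data can be a false alarm.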

2:05 pm ET

Introduction to ML compilers

Chip Huyen - Adjunct Lecturer at Stanford University

2:40 pm ET

Scaling ML Platform Responsibly at DoorDash

Hien Luu - Sr. Engineering Manager at DoorDash
Kornel Csernai - Software Engineer, Machine Learning Platform at DoorDash

DoorDash is a three-sided marketplace consisting of merchants, consumers, and Dashers. As DoorDash's business grows, the ML Platform also needs to grow responsibly to meet the needs of a growing data science community. The platform has evolved tremendously over the last year: we've learned to collaborate closely with the DS community, worked through scaling challenges, and steadily and incrementally improved the platform. As the platform onboarded additional mission-critical use cases, we needed to ensure their integrity by applying ML observability best practices such as feature and model monitoring and continuous model training. In this session, we will share our ML platform journey, along with technical details of how we worked through the scaling challenges and our approach to ML observability.

3:15 pm ET

Algorithmic Fairness: From Theory to Practice

Ines Marusic - Engagement Manager at QuantumBlack, McKinsey & Company

Recent advances in machine learning have enabled us to automate decisions and processes across many tasks. Machine learning is increasingly used to make decisions that can severely affect people's lives, for instance in education, hiring, lending, and criminal risk assessment. In these areas, algorithms make predictions for things such as screening job candidates, issuing insurance, or approving loans. However, the training data often contains bias that exists in our society, and this bias can be absorbed or even amplified by the systems, leading to decisions that are unfair with respect to gender or other sensitive attributes (e.g., race). The goal of algorithmic fairness is to design algorithms that make fair predictions, free of discrimination. I will discuss recent advances from the machine learning research community on algorithmic fairness, including detecting bias in data, assessing the fairness of machine learning models, and post-processing model predictions to achieve fairness. I will also provide a practitioner's perspective through best practices for incorporating algorithmic fairness techniques effectively into products across a variety of industries, including pharma, banking, and insurance.
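To illustrate one common fairness assessment of the kind the talk mentions, here is a minimal sketch that computes the demographic parity difference of a model's binary predictions across two groups; the toy data and the choice of metric are illustrative assumptions, not the talk's prescribed method.

```python
import numpy as np

def demographic_parity_difference(predictions: np.ndarray, group: np.ndarray) -> float:
    # Absolute gap in positive-prediction rates between two groups;
    # 0.0 means both groups receive positive predictions at equal rates.
    # This is one common fairness notion among several, each with trade-offs.
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: binary loan-approval predictions for two demographic groups.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(preds, groups))  # 0.2 gap between groups
```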

3:50 pm ET

Kubeflow pipelines and its operational challenges at scale

Mohan Muppidi - ML Cloud Architect, MLOps at iRobot

iRobot has been using Kubeflow Pipelines for the data, training, and validation needs of its ML scientists for a year now. This infrastructure has been used at every stage of model development, from data prep to dataset building to training, validation, and testing. This talk covers the good, bad, and ugly sides of managing and using Kubeflow Pipelines infrastructure for machine learning research and development.
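For readers new to Kubeflow Pipelines, here is a minimal sketch of a two-step pipeline of the kind the talk discusses, written against the kfp v1 SDK; the step names and logic are illustrative, not iRobot's actual pipelines.

```python
from kfp import dsl
from kfp.components import create_component_from_func

def prepare_data() -> str:
    # Toy data-prep step; a real pipeline would read from object storage.
    return "/tmp/dataset"

def train_model(dataset_path: str) -> str:
    # Toy training step; returns the path of the trained model artifact.
    print(f"training on {dataset_path}")
    return "/tmp/model"

# Wrap plain Python functions as containerized pipeline components.
prepare_op = create_component_from_func(prepare_data, base_image="python:3.9")
train_op = create_component_from_func(train_model, base_image="python:3.9")

@dsl.pipeline(
    name="toy-training-pipeline",
    description="Illustrative data-prep then train pipeline.",
)
def training_pipeline():
    # Each task runs in its own container; outputs wire the steps together.
    data_task = prepare_op()
    train_op(dataset_path=data_task.output)
```

Compiling and submitting this (e.g., with `kfp.compiler.Compiler`) is where the operational story begins: per-step images, caching, and resource requests are the kinds of things that become challenging at scale.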

4:30 pm ET

45 min Panel Discussion

Manasi Vartak - Founder & CEO at Verta
Adam Lieberman - Head of AI & ML at Finastra
Stefan Krawczyk - Manager & Lead ML Platform Engineer at Stitch Fix
Hien Luu - Sr. Engineering Manager at DoorDash

5:25 pm ET

Wrap up!

Register to join our upcoming live webinars, or listen to on-demand webinars at any time.

View webinars