Holistic MLOps for better science

mlops
python
Presented at PyData NYC 2022
Published November 10, 2022

Abstract

Machine learning operations (MLOps) is often treated as synonymous with large, complex applications, but many MLOps practices help practitioners build better models regardless of scale. This talk shares best practices for operationalizing a model, with practical examples using the open-source MLOps framework vetiver to version, share, deploy, and monitor models.
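
As a rough illustration of that workflow (not code from the talk), the sketch below versions a small scikit-learn model with vetiver and pins, then serves it as an API. The board location, model name, and toy data are placeholder assumptions.

```python
import pandas as pd
import pins
from sklearn.linear_model import LinearRegression
from vetiver import VetiverModel, VetiverAPI, vetiver_pin_write

# Toy training data and model, standing in for whatever model you operationalize
X = pd.DataFrame({"x": [1.0, 2.0, 3.0, 4.0]})
y = [2.0, 4.0, 6.0, 8.0]
model = LinearRegression().fit(X, y)

# Wrap the fitted model with the metadata vetiver needs to deploy and monitor it
v = VetiverModel(model, model_name="toy-model", prototype_data=X)

# Version the model by pinning it to a board (a local folder here; could be S3, Connect, etc.)
board = pins.board_folder("./model-pins", allow_pickle_read=True)
vetiver_pin_write(board, v)

# Share/deploy the model as a FastAPI app that serves predictions
app = VetiverAPI(v, check_prototype=True)
# app.run(port=8080)  # uncomment to serve locally
```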

View the repository | Watch the recording

Post-talk notes

PyData always hosts such a great conference, and PyData NYC 2022 was no exception! I met so many cool people, learned a lot of lore about pandas from other talks, and ate some delicious NY-style pizza. This talk is a good reference for explaining why the vetiver framework emphasizes pinning a model to a board and reading it in from the Dockerfile, rather than copying the model binary into the Docker container itself. In short, baking the binary into the image makes it harder to update the model later and to pull it back out for ad hoc analysis.
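
As a rough sketch of that point, the snippet below continues from the pinned "toy-model" above and generates a Dockerfile that reads the pinned model from the board rather than copying the pickle into the image. The board path and pin name are placeholders, and it assumes vetiver's `prepare_docker` helper behaves as documented.

```python
import pins
from vetiver import prepare_docker

# The model binary lives on a pins board (a local folder here; S3, GCS, or
# Posit Connect in practice), not inside the container image itself.
board = pins.board_folder("./model-pins", allow_pickle_read=True)

# Generate an app file, a Dockerfile, and a requirements file. The resulting
# container reads the pinned model from the board, so you can pin a new model
# version (or pull one down for ad hoc analysis) without rebuilding the image.
prepare_docker(board, "toy-model")
```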