A year in review: vetiver

Categories: vetiver, mlops, r, python
Published: October 25, 2022

It has been almost a year of vetiver! The vetiver packages for Python (0.1.8) and R (0.1.8) provide fluent tooling to version, share, deploy, and monitor a trained model.

The journey

Recently, the team dedicated a month to reading about the world of MLOps as it is today. We all read the fantastic book Designing Machine Learning Systems by Chip Huyen and split up numerous other academic articles and web content among ourselves. The world of machine learning operations moves fast, and we wanted to ensure the choices we had made early on (e.g., focusing on versioning, deploying, and monitoring) would serve data practitioners best.

After reading many different definitions of MLOps, the one we found most useful is: “a set of practices to deploy and maintain machine learning models in production reliably and efficiently.” While not every MLOps practice is applicable to every team at every scale, these best practices can elevate a project of any size.

MLOps applies the same basic principles as DevOps (development operations) in a specialized machine learning context. One common point of tension in MLOps is that data science is highly experimental and iterative compared to general-purpose software delivery, yet deploying to a production environment still requires reliable software engineering practices. To ease this pain point, models are commonly deployed as APIs because of their stability and simplicity. APIs can be tested and behave nearly identically in every architecture, which allows standard software engineering practices to be applied to them. They are straightforward to configure and update, giving data scientists the agility to retrain and update models as needed.
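For example, here is a minimal R sketch of turning a model into a REST API with vetiver and plumber; the toy lm() model and the name "cars_mpg" are just for illustration:

```r
library(vetiver)
library(plumber)

# A toy model; tidymodels, lm/glm, xgboost, and other supported types work too
cars_lm <- lm(mpg ~ wt + cyl, data = mtcars)
v <- vetiver_model(cars_lm, model_name = "cars_mpg")

# Wrap the model in a Plumber REST API and serve it locally
pr() |>
  vetiver_api(v) |>
  pr_run(port = 8080)
```

Once running, the API exposes a /predict endpoint that accepts new observations and returns predictions, so the same testing and deployment practices used for other services apply.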

Tools labeled as “MLOps frameworks” have a broad scope. Tasks generally fall into one of a few different categories:

  • Orchestration or pipelines
  • Experiment tracking
  • Model versioning
  • Model deployment
  • Model monitoring

Vetiver spans a few of these tasks, but is not a tool built for orchestration or experiment tracking. Rather, Vetiver focuses on the practices of versioning, deploying, and monitoring and will continue building support for these tasks.

Where vetiver is now

Vetiver also leverages the versioning and sharing capabilities of the pins package in Python and R. This package brings a straightforward way to read and write objects to different locations, and it allows certain data types (namely CSV and Arrow files) to be passed fluently between Python and R.
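Here is a minimal sketch of that workflow in R, assuming a local folder board; the path and pin name are just for illustration:

```r
library(pins)

# A board can point to a local folder, Posit Connect, S3, Azure, GCS, and more
board <- board_folder("~/model-pins", versioned = TRUE)

# Write a data frame as a versioned pin, then read it back
board |> pin_write(mtcars, name = "mtcars", type = "csv")
board |> pin_read("mtcars")
```

Because the pin is written as a CSV file, the same object can be read back from the Python pins package as well.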

Within vetiver itself, the VetiverModel and VetiverAPI objects and the monitoring helper functions give practitioners lightweight support for bringing their models to many different locations, via one-line deployment functions (for Posit Connect) or Dockerfile generation (for numerous on-prem or public cloud locations). These objects can be extended to support more advanced use cases. With vetiver, you can quickly prototype REST APIs and then scale those prototypes safely.
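As a rough sketch of the one-line deployment path in R (the board setup and the "user.name/cars_mpg" pin name are assumptions for illustration):

```r
library(vetiver)
library(pins)

# Version the model on a board hosted by Posit Connect
board <- board_rsconnect()     # assumes Connect credentials are already configured
vetiver_pin_write(board, v)    # v is the VetiverModel from earlier

# One-line deployment of the model API to Posit Connect
vetiver_deploy_rsconnect(board, "user.name/cars_mpg")
```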

Where vetiver is going

Fluent monitoring practices are crucial for a robust deployment. While the CI/CD around monitoring can be infrastructure-dependent, it is important to close the loop between model prediction, monitoring, and retraining. Feedback loops are a place where bias in models can accumulate undetected, so any continuous monitoring support necessitates careful thought on how to uncover model (un)fairness.
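As a sketch of what the monitoring helpers look like in R today (new_data, its column names, and the metrics pin name are assumptions for illustration):

```r
library(vetiver)
library(pins)

# new_data: recent observations with a date column, the true outcome (mpg),
# and the model's predictions (.pred); these column names are hypothetical
metrics <- new_data |>
  vetiver_compute_metrics(date, "week", truth = mpg, estimate = .pred)

# Version the metrics alongside the model, then plot them over time
vetiver_pin_metrics(board, metrics, "cars_mpg_metrics")
vetiver_plot_metrics(metrics)
```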

The composability of vetiver with other projects, such as MLflow or Metaflow, is needed to allow practitioners to build an MLOps framework that is flexible and meets the needs of their team. Composability in this sense also includes platform agnosticism for public clouds such as Amazon Web Services, Azure, and Google Cloud Platform. Currently, vetiver can generate generic Dockerfiles whose images can be hosted on these platforms, but extended documentation is needed for specific workflows.
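A minimal sketch of that Dockerfile generation path in R (the pin name is again just for illustration):

```r
library(vetiver)

# Generate a plumber.R that serves the versioned model from the board,
# plus a Dockerfile and renv lockfile for building the API image
vetiver_write_plumber(board, "user.name/cars_mpg")
vetiver_write_docker(v)

# The resulting image can be built and pushed to any container service,
# for example on AWS, Azure, or GCP
```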

Where vetiver is not going

DAG creation is currently out of scope for vetiver. If we support this later on, it would likely live in a new tool, for maximum flexibility.

Support for automatic creation of model registries is not currently in our short-term plans. However, using pins and Quarto together can create a beautiful document to track your deployed objects. If you’re interested in this topic, this demo shows how to use pins + Quarto to track your models.
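For instance, an R chunk like the following, rendered in a Quarto document, gives a lightweight, shareable view of your pinned models (the board and pin names are assumptions for illustration):

```r
library(pins)

board <- board_rsconnect()   # or wherever your models are pinned

# Search for pinned models and list the versions of one of them
pin_search(board, "cars_mpg")
pin_versions(board, "user.name/cars_mpg")
```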

In all, we have learned so much from you all this year, and we look forward to another year of helping data scientists bring models into production!