Development tools such as Jupyter are prevalent among data scientists because they provide an environment to explore data visually and interactively. However, deploying a project requires the analysis to run reliably in a production environment such as Airflow or Argo, which forces data scientists to move code back and forth between their notebooks and these production tools. Worse, they must learn an unfamiliar framework and write pipeline code, which severely delays deployment.
Ploomber solves this problem by providing:
1. A workflow orchestrator that automatically infers task execution order using static analysis.
2. A sensible layout to bootstrap projects.
3. A development environment integrated with Jupyter.
4. Capabilities to export to production systems (Airflow and Argo) without code changes.
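To make the first point concrete, the following is a minimal, hypothetical sketch of how execution order can be inferred from source code alone: each task script declares its dependencies in an `upstream` variable, which is extracted by parsing the script (never executing it) and fed into a topological sort. The task names and script contents here are invented for illustration; this is the general idea, not Ploomber's actual implementation.

```python
import ast
import graphlib  # stdlib topological sorting (Python 3.9+)

# Hypothetical task scripts. Each declares an `upstream` variable naming
# the tasks it depends on; the scripts are only parsed, never executed.
TASKS = {
    "load": "upstream = None\ndf = fetch_data()",
    "clean": "upstream = ['load']\ndf = clean(products['load'])",
    "train": "upstream = ['clean']\nmodel = fit(products['clean'])",
}

def inferred_upstream(source: str) -> list:
    """Statically extract the `upstream` declaration from a script."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name) and target.id == "upstream":
                    return ast.literal_eval(node.value) or []
    return []

def execution_order(tasks: dict) -> list:
    """Topologically sort tasks by their statically inferred dependencies."""
    graph = {name: inferred_upstream(src) for name, src in tasks.items()}
    return list(graphlib.TopologicalSorter(graph).static_order())

print(execution_order(TASKS))  # dependencies first: load, clean, train
```

Because the dependency graph is recovered from the code itself, the data scientist never writes orchestration logic by hand; the same declarations drive both local execution and export to a production scheduler.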
In this talk, we develop and deploy a Machine Learning pipeline in 30 minutes to demonstrate how Ploomber streamlines the Machine Learning development and deployment process.
Who and why
This talk is for data scientists (with experience developing Machine Learning projects) looking to enhance their workflow. Experience with production tools such as Airflow or Argo is not necessary.
The talk has two objectives:
1. Advocate for more development-friendly tools that let data scientists focus on analyzing data by removing the overhead of popular production tools.
2. Demonstrate an example workflow using Ploomber where a pipeline is developed interactively (using Jupyter) and deployed without code changes.
Bio: Eduardo is interested in developing tools to deliver reliable Machine Learning products. Towards that end, he created Ploomber, an open-source Python library to compose production-ready data workflows. Eduardo holds an M.S. in Data Science from Columbia University, where he took part in Computational Neuroscience research. He started his Data Science career in 2015 at the Center for Data Science and Public Policy at The University of Chicago.