Abstract: In recent years, ML/AI has made tremendous progress, yet designing large-scale data science and machine learning applications still remains challenging. The variety of machine learning frameworks, hardware accelerators, and cloud vendors, as well as the complexity of data science workflows, brings new challenges to MLOps. One particular challenge is that it's non-trivial to build an inference system that suits models of different sizes, especially LLMs or large models in general.
This talk presents best practices for, and challenges in, building large, efficient, scalable, and reliable AI/ML model inference platforms using cloud-native technologies such as Kubernetes and KServe that are production-ready for models of any size.
Bio: Yuan is a principal software engineer at Red Hat, working on OpenShift AI. He's a project lead of Argo and Kubeflow, a maintainer of TensorFlow and XGBoost, and the author of several popular open source projects. In addition, Yuan has written three machine learning books and published numerous impactful papers. He's a regular conference speaker and serves as a technical advisor, leader, and mentor at various organizations.