Keynote

9:15 am to 10:15 am
SESSION CHAIR: David Redmiles

Engineering the End-to-End AI Lifecycle
Evelyn Duesterwald
Principal Research Staff Member
Manager, AI Lifecycle Acceleration
Keynote Abstract

Machine learning continues to advance, reaching or even surpassing human-level performance on many tasks in areas such as automated speech recognition, computer vision, and natural language processing. Along with these advances, we see growing interest in incorporating AI into virtually all application domains, from health care, retail, and finance to science and engineering. Yet, once we leave the realm of the data scientist, we often struggle to bring out the full potential of AI in production. On closer inspection, it quickly becomes evident that existing processes for delivering machine learning capabilities into production tend to be ad hoc and lack scalability, repeatability, and automation. Often, significant manual effort is invested by a set of stakeholders, including data scientists, developers, and DevOps engineers, to set up a "one-off" delivery pipeline that doesn't scale and cannot easily be re-used. Unfortunately, established software CI/CD (continuous integration/continuous delivery) methodologies do not readily apply. Due to the statistical nature of machine learning, the familiar lifecycle stages of testing, validation, and monitoring need to be re-envisioned. New and potentially very long-running lifecycle stages such as feature engineering and model training also need to be accommodated. Finally, compared to software, the rate of artifact change can be dramatically higher: data scientists sometimes explore tens or even hundreds of model versions in order to produce one satisfying model that can be moved to staging.

In this talk, we discuss what constitutes an end-to-end lifecycle for machine learning models and how we can operationalize it using the concept of re-usable machine learning pipelines. We will present several new lifecycle capabilities we are working on that are aimed at continuous improvement of models, including stress-testing, performance prediction, drift detection, and goal-driven active learning. We show how to operationalize these new capabilities as re-usable pipeline components so they integrate easily into the lifecycle of production AI. Our ultimate goal is to provide an AI lifecycle toolkit that enables data scientists, developers, and DevOps engineers to collaboratively create and operate the end-to-end lifecycle for their application models using configurable lifecycle templates constructed from a set of re-usable components.
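To make the idea of re-usable pipeline components concrete, the sketch below models a pipeline as a list of interchangeable components, each a callable over a shared state dictionary, with a toy drift-detection step as one such component. All names, thresholds, and the drift heuristic are illustrative assumptions for this abstract, not the speaker's actual toolkit or API.

```python
import statistics
from typing import Callable, Dict, List

# Hypothetical sketch: a lifecycle pipeline is a list of re-usable
# components; each component reads and updates a shared state dict.
Component = Callable[[Dict], Dict]

def drift_check(state: Dict) -> Dict:
    """Toy drift detector (assumed heuristic): flag drift when the mean of
    the current score window shifts from the baseline mean by more than one
    baseline standard deviation."""
    base = state["baseline_scores"]
    cur = state["current_scores"]
    shift = abs(statistics.mean(cur) - statistics.mean(base))
    state["drift_detected"] = shift > statistics.stdev(base)
    return state

def run_pipeline(state: Dict, components: List[Component]) -> Dict:
    # Run each re-usable component in order over the shared state.
    for component in components:
        state = component(state)
    return state

# Usage: a clear downward shift in model scores is flagged as drift.
state = run_pipeline(
    {"baseline_scores": [0.90, 0.91, 0.89, 0.92, 0.90],
     "current_scores": [0.75, 0.74, 0.73, 0.76, 0.74]},
    [drift_check],
)
print(state["drift_detected"])  # → True
```

Because each component shares the same callable signature, stress-testing or performance-prediction steps could in principle be swapped into the same `components` list, which is the composability the talk's template idea relies on.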

About the Speaker

Evelyn Duesterwald is a Principal Research Staff Member and Manager of the AI Lifecycle Acceleration Group at IBM Research in Yorktown Heights, New York. Her team is currently working on operationalizing the AI lifecycle with a focus on continuous improvement of production-deployed AI models. In prior roles at IBM, Evelyn earned IBM Research Accomplishment awards in the areas of AI Security, High Performance Computing, and Software Technologies. Before coming to IBM, Evelyn worked at HP Labs, where she and her colleagues conducted seminal research in the area of dynamic binary optimization, earning both a Best Paper award and, ten years later, a retrospective Most Influential Paper award at the ACM PLDI Conference (Programming Languages Design and Implementation). Holding MS and PhD degrees in Computer Science, she has been granted over 30 patents and has published on a broad range of topics in machine learning safety, parallel programming, compilers, dynamic and static program optimization, and software engineering. Evelyn is an ACM Distinguished Scientist.