Talk

The 80/20 of ML Monitoring: What Actually Matters

Thursday, May 28

11:05 - 11:35
Room: Gnocchi
Language: English
Audience level: Intermediate
Elevator pitch

Monitoring is overhyped and badly implemented: I’m going to show you the five signals that prevent most ML disasters, and how to implement them in plain Python, as in the sketch below. You’ll leave knowing exactly what to monitor, what to ignore, and how to build a reliable stack.
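
As a taste of the "plain Python" approach, here is a minimal sketch of one of those signals: data drift on a single numeric feature, measured with the Population Stability Index (PSI). The bin count and the 0.2 alert threshold are common rules of thumb, used here as illustrative assumptions rather than the talk's prescriptions.

```python
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between reference and current samples."""
    # Bin edges come from the reference window so both samples share them.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions; a small floor avoids log(0).
    eps = 1e-6
    ref_pct = np.clip(ref_counts / max(ref_counts.sum(), 1), eps, None)
    cur_pct = np.clip(cur_counts / max(cur_counts.sum(), 1), eps, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    reference = rng.normal(0.0, 1.0, 10_000)  # training-time distribution
    current = rng.normal(0.5, 1.0, 10_000)    # shifted production data
    score = psi(reference, current)
    # Rule of thumb: PSI above 0.2 signals meaningful drift.
    print(f"PSI = {score:.3f} -> {'DRIFT' if score > 0.2 else 'stable'}")
```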

Abstract

ML systems rarely fail dramatically: they fail quietly. This session cuts through the noise and shows the small set of monitoring signals that prevents the majority of real-world model failures. We’ll build a practical monitoring architecture in Python using Evidently, River, FastAPI, and MLflow, and learn how to detect data drift, feature staleness, performance degradation, pipeline breakage, and silent data shifts. No overengineering, no vendor lock-in: just the minimum viable monitoring stack that teams can deploy in days, not months. If you want monitoring that works rather than dashboards that look good, this talk is for you.
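
To make a second signal concrete, here is a minimal sketch of a feature-staleness check exposed as a FastAPI endpoint, in the spirit of the stack described above. The route name, the hard-coded timestamps, and the one-hour freshness threshold are illustrative assumptions; in practice the timestamps would come from your feature store or pipeline metadata.

```python
from datetime import datetime, timedelta, timezone

from fastapi import FastAPI

app = FastAPI()

# Stand-in for feature-store metadata: last successful refresh per feature.
LAST_REFRESH = {
    "user_activity": datetime.now(timezone.utc) - timedelta(minutes=10),
    "account_balance": datetime.now(timezone.utc) - timedelta(hours=3),
}
MAX_AGE = timedelta(hours=1)  # illustrative freshness threshold

@app.get("/monitoring/staleness")
def staleness() -> dict:
    """List features whose last refresh is older than MAX_AGE."""
    now = datetime.now(timezone.utc)
    stale = {
        name: f"{(now - ts).total_seconds() / 3600:.1f}h old"
        for name, ts in LAST_REFRESH.items()
        if now - ts > MAX_AGE
    }
    return {"ok": not stale, "stale_features": stale}

# Run with: uvicorn staleness:app --reload
```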

Tags: Data Engineering, Scientific Python, Data Science & Data Visualisation
Participant

Pietro Mascolo

Pragmatic Data Scientist.