AI agents are having a moment, but most of them are little more than fragile prototypes that break under pressure. Together, we’ll explore why so many agentic systems fail in practice, and how to fix that with real engineering principles.
Let’s face it: most AI agents are glorified demos. They look flashy, but they’re brittle, hard to debug, and rarely make it into real products. Why? Because wiring an LLM to a few tools is easy. Engineering a robust, testable, and scalable system is hard.
This talk is for practitioners: data scientists, AI engineers, and developers who want to stop tinkering and start shipping. We'll take a candid look at the common reasons agent systems fail and introduce practical patterns to fix them using Haystack, an open-source framework purpose-built for production-grade LLM pipelines.
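To make that concrete, here is a minimal sketch of what a Haystack 2.x pipeline looks like; the component names, prompt template, and model choice are illustrative, and it assumes an OpenAI API key is set in your environment:

```python
# Minimal Haystack 2.x pipeline sketch (illustrative, not prescriptive).
# Components are wired together explicitly, which is what makes the
# resulting pipeline inspectable and testable.
from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator

pipe = Pipeline()
pipe.add_component("prompt", PromptBuilder(template="Answer briefly: {{ question }}"))
pipe.add_component("llm", OpenAIGenerator(model="gpt-4o-mini"))

# Connect the builder's rendered prompt to the generator's input.
pipe.connect("prompt.prompt", "llm.prompt")

result = pipe.run({"prompt": {"question": "Why do agent demos break in production?"}})
print(result["llm"]["replies"][0])
```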
You'll learn how to design agents that are:

- Robust: able to handle failures gracefully instead of breaking under pressure
- Testable: built from components you can exercise in isolation (see the sketch after this list)
- Debuggable: observable enough that you can trace exactly why a run went wrong
- Scalable: structured so a working prototype can grow into a real product
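Testability, for example, is concrete here: a Haystack component is a plain Python object with a `run()` method, so you can unit-test your agent's custom logic without ever calling a model. A hedged sketch, using a hypothetical routing component:

```python
# Illustrative sketch: the ToolRouter name and its keyword rule are
# hypothetical, but the pattern is real: custom components can be
# unit-tested in isolation, with no LLM call involved.
from haystack import component

@component
class ToolRouter:
    """Routes a user query to a tool name based on a trivial keyword rule."""

    @component.output_types(tool=str)
    def run(self, query: str):
        tool = "web_search" if "latest" in query.lower() else "knowledge_base"
        return {"tool": tool}

# Plain unit test: call run() directly and assert on the dict it returns.
def test_router_picks_search_for_fresh_queries():
    router = ToolRouter()
    assert router.run(query="latest Haystack release?")["tool"] == "web_search"
    assert router.run(query="What is RAG?")["tool"] == "knowledge_base"
```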
We'll also cover advanced topics like the Model Context Protocol (MCP) and Human-in-the-Loop workflows to push your agents into more capable territory.
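Human-in-the-Loop can be as simple as a confirmation gate in front of any side-effecting tool. A framework-agnostic sketch, where the function names are hypothetical placeholders for your own tools:

```python
# Framework-agnostic Human-in-the-Loop sketch: gate any side-effecting tool
# behind an explicit human confirmation. with_human_approval and
# delete_records are hypothetical names, not a library API.
from typing import Callable

def with_human_approval(tool: Callable, description: str) -> Callable:
    """Wrap a tool so it only runs after a human explicitly approves the call."""
    def gated(*args, **kwargs):
        answer = input(f"Agent wants to run {description} with {args} {kwargs}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return {"status": "rejected_by_human"}
        return tool(*args, **kwargs)
    return gated

def delete_records(table: str) -> dict:
    # A destructive action we never want an agent to trigger unsupervised.
    return {"status": "deleted", "table": table}

safe_delete = with_human_approval(delete_records, "delete_records")
# The agent calls safe_delete(...); nothing happens until a person approves.
```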
Whether you’re just starting to explore agents or trying to tame an unruly prototype, you’ll leave with a clear, actionable blueprint to build something that’s not just smart, but also reliable.