Would you like to build LLM-based systems that think, decide and act autonomously? Learn how to build AI Agents from scratch! We’ll explore three architectures, each with a practical Python implementation: Reflection for self-correction, ReAct for external tool use and Multi-Agent collaboration for complex tasks.
The Large Language Model era is evolving rapidly: it’s no longer just about generating text in response to prompts, but about building intelligent systems capable of making autonomous decisions, using external tools and collaborating to solve complex problems. These systems are called AI Agents, and over the past year they have been one of the most discussed topics in LLM-based application development. The talk aims to guide the audience through understanding and practically implementing three advanced agentic architectures: Reflection, ReAct and Multi-Agent systems. Beyond the theory, it provides working code and reusable patterns for developing agents in your own projects.
In traditional applications we’re used to building pipelines that follow a predefined, deterministic flow: a fixed sequence of operations always executed the same way. AI Agents instead introduce the concept of decision-making autonomy: they can dynamically choose which tools to use, when to stop execution, when to retry an operation and how to combine information from different sources. This flexibility makes them ideal for scenarios where the path to the solution cannot be defined upfront but emerges from the interaction between the agent’s reasoning and its operating environment.
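To make the contrast concrete, here is a minimal sketch under stated assumptions: `llm` is a hypothetical stub standing in for a real model client, and the `CALL`/`FINISH` strings are a toy decision protocol invented for illustration, not any framework’s API. The pipeline always executes the same steps; the agent loop lets the model pick the next action, retry, or stop at every iteration.

```python
def llm(prompt: str) -> str:
    """Placeholder for a real LLM call; swap in your model client."""
    return "FINISH: example answer"  # canned reply so the sketch runs standalone

# Hypothetical tool registry for illustration.
TOOLS = {"search": lambda query: f"results for {query}"}

def fixed_pipeline(query: str) -> str:
    # Deterministic flow: always search first, then summarize.
    # The order never changes, regardless of the query.
    docs = TOOLS["search"](query)
    return llm(f"Summarize: {docs}")

def agent_loop(task: str, max_steps: int = 5) -> str:
    # Autonomous flow: at each step the model emits either
    # "CALL <tool>: <input>" or "FINISH: <answer>" (a toy protocol).
    history = f"Task: {task}"
    for _ in range(max_steps):
        decision = llm(history)
        if decision.startswith("FINISH:"):
            return decision.removeprefix("FINISH:").strip()
        name, _, arg = decision.removeprefix("CALL ").partition(":")
        result = TOOLS.get(name.strip(), lambda _: "unknown tool")(arg.strip())
        history += f"\n{decision}\nObservation: {result}"  # feed result back
    return "step budget exhausted"
```

The key difference is that in `agent_loop` the order of tool calls, the retries and the stopping condition are decided at runtime by the model rather than hard-coded in advance.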
The talk focuses on three architectural patterns representing the state of the art in intelligent agent development. The first pattern, Reflection, introduces a self-reflection mechanism: the agent critically evaluates its own output, identifies potential errors and iterates to improve response quality. This “self-critique” approach significantly increases accuracy on tasks requiring multi-step reasoning, reducing common errors such as incorrect calculations or unverified assumptions.
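As a minimal sketch of the pattern (again assuming a hypothetical `llm` placeholder for your model client), a Reflection loop can be as simple as draft, critique, revise, repeated until the critique passes or a retry budget runs out:

```python
def llm(prompt: str) -> str:
    """Placeholder for a real LLM call; swap in your model client."""
    return "OK"  # canned reply so the sketch runs standalone

def reflect(task: str, max_rounds: int = 3) -> str:
    # 1. Draft an initial answer.
    draft = llm(f"Answer the task:\n{task}")
    for _ in range(max_rounds):
        # 2. Ask the model to critique its own output.
        critique = llm(
            "Review the answer below for calculation errors and unverified "
            "assumptions. Reply exactly 'OK' if it is correct.\n\n"
            f"Task: {task}\nAnswer: {draft}"
        )
        if critique.strip() == "OK":
            break  # the self-critique found nothing left to fix
        # 3. Revise the draft using the critique, then loop again.
        draft = llm(
            f"Task: {task}\nPrevious answer: {draft}\n"
            f"Critique: {critique}\nWrite an improved answer."
        )
    return draft
```

The retry budget (`max_rounds`) matters in practice: without it, a model that keeps finding small flaws in its own output can loop indefinitely.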
The second pattern is ReAct (Reasoning + Acting), an architecture that combines explicit reasoning with external tool use. Unlike a system that executes actions blindly, a ReAct agent alternates between “thinking” phases, where it plans what to do, and “action” phases, where it interacts with external tools to obtain information. This pattern is particularly powerful when working with dynamic information or when interaction with external systems is necessary.
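A minimal sketch of that loop follows, under the same assumptions: `llm` is a hypothetical stub, and the `Action: tool[input]` syntax is a toy format modeled on the original ReAct prompting style. The runtime parses each action, executes the tool and appends the observation, so the next reasoning step can build on it.

```python
import re

def llm(prompt: str) -> str:
    """Placeholder for a real LLM call; swap in your model client."""
    return "Thought: I already know this.\nFinal Answer: 42"  # canned reply

TOOLS = {"search": lambda q: f"snippet about {q}"}  # hypothetical tool

def react(question: str, max_steps: int = 6) -> str:
    # The transcript accumulates Thought/Action/Observation lines so each
    # new model call sees the full trajectory so far.
    transcript = (
        "Answer by alternating Thought and Action lines. Actions look like "
        "'Action: search[query]'. Finish with 'Final Answer: ...'.\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        step = llm(transcript)
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action:\s*(\w+)\[(.*?)\]", step)
        if match:
            name, arg = match.groups()
            observation = TOOLS.get(name, lambda _: "unknown tool")(arg)
            transcript += f"Observation: {observation}\n"  # acting feeds reasoning
    return "step budget exhausted"
```

A production version would also bound the transcript by token limits and parse actions more defensively, but the thought/action/observation alternation is the whole pattern.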
The third pattern concerns Multi-Agent systems, where multiple specialized agents collaborate toward a common goal. As complexity grows, dividing responsibilities among different agents yields more robust and maintainable results. This orchestration demonstrates how agents can work in parallel, exchange information and coordinate to produce complex outputs that would be difficult to obtain from a single monolithic agent.
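Here is a minimal sketch of that orchestration, with hypothetical researcher, writer and reviewer roles built on the same `llm` placeholder; frameworks such as LangGraph or CrewAI formalize this hand-off as a graph with conditional routing, but the core pattern is specialists passing a shared state:

```python
def llm(prompt: str) -> str:
    """Placeholder for a real LLM call; swap in your model client."""
    return "stub output"  # canned reply so the sketch runs standalone

def researcher(state: dict) -> dict:
    # Specialist 1: gather raw material for the task.
    state["notes"] = llm(f"Collect key facts about: {state['task']}")
    return state

def writer(state: dict) -> dict:
    # Specialist 2: turn the notes into a draft.
    state["draft"] = llm(f"Write a report from these notes:\n{state['notes']}")
    return state

def reviewer(state: dict) -> dict:
    # Specialist 3: check the draft and record issues.
    state["review"] = llm(f"List problems in this draft:\n{state['draft']}")
    return state

def orchestrate(task: str) -> dict:
    # Shared state passed between specialists.
    state = {"task": task}
    for agent in (researcher, writer, reviewer):
        state = agent(state)
    return state
```

The hand-off order here is a fixed sequence for clarity; in a fuller design the orchestrator can itself be an LLM that decides which specialist to invoke next, or can run independent specialists in parallel.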
For each architecture, we’ll start with a conceptual schema illustrating the decision flow in abstract form, move on to an implementation in working Python code using the main open-source orchestration frameworks, and conclude with a discussion of advantages, limitations and optimal use cases. This will give the audience the tools to make an informed choice about which architecture to adopt in their own projects.
The talk’s central motivation is to bridge the gap between AI agent theory and practical implementation. Agents represent a paradigm shift: no longer simple “conversational interfaces” to LLMs, but autonomous systems capable of solving complex problems through iteration, tool use and collaboration. Understanding how to build them means acquiring increasingly important skills, the ones behind personal assistants that navigate the web autonomously, automation systems that adapt to changing contexts, and development tools that write and test code on their own.
In a technological landscape where static LLM invocation is no longer sufficient to tackle complex problems, this talk offers concrete tools to take your projects to the next level: intelligent systems capable of reasoning autonomously, interacting with the external world and collaborating to achieve ambitious goals.
After earning a degree in experimental physics at the University of Pisa, I completed a PhD in Data Science at Scuola Normale Superiore. During my studies I spent research periods at Fermilab in Chicago and at CERN in Geneva. I’m currently an AI Engineer at Var Group, where I work on software development in artificial intelligence and Large Language Models.