AI frameworks like LangChain, LlamaIndex, and Haystack are making you a worse developer: bloat, leaky abstractions, debugging hell. Build real features with plain Python and simple libraries first. Skip the crutches! Prove you understand AI before you trust the framework “magic”.
The AI framework ecosystem has stabilized around a familiar set of tools: LangChain for chaining calls, LlamaIndex for data indexing, Haystack for search pipelines, DSPy for prompt optimization, plus agent frameworks like CrewAI. Tutorials and blog posts push developers straight into these tools, but for Python programmers who don't specialize in AI research, this often means starting with unnecessary complexity. Heavy abstractions wrap simple operations in layers of configuration, magic methods, and framework-specific concepts that obscure rather than clarify.
This talk demonstrates the opposite approach. Starting from plain Python and HTTP requests to an LLM, we build concrete functionality (a text analyzer, a simple multi-step processor, a basic Q&A helper) using only standard libraries and lightweight helpers. The result stays readable, debuggable, and dependency-minimal. No sprawling object graphs, no opaque chain executions, no vendor-specific loaders. From this foundation, we examine what popular frameworks layer on top: LangChain’s chaining abstractions that hide HTTP calls behind fluent syntax, LlamaIndex’s data connectors that bundle dozens of loaders you might never use, Haystack’s pipeline components that compose elegantly but demand framework fluency to debug. Each adds real value for specific problems of scale, but also introduces bloat, leaky abstractions, and debugging friction.
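To make that starting point concrete, here is a minimal sketch of the kind of framework-free text analyzer the talk builds: one function, one HTTP call via `requests`. The endpoint URL, model name, and `OPENAI_API_KEY` environment variable are assumptions for illustration; any OpenAI-compatible chat-completions API would slot in the same way.

```python
# A minimal, framework-free "text analyzer": plain HTTP, no chains, no loaders.
# The endpoint, model name, and environment variable below are assumptions;
# swap them for whatever provider you actually use.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"  # assumed endpoint
API_KEY = os.environ["OPENAI_API_KEY"]                   # assumed env var


def analyze_text(text: str) -> str:
    """Ask the model for a one-line sentiment + keyword summary."""
    payload = {
        "model": "gpt-4o-mini",  # assumed model name
        "messages": [
            {"role": "system",
             "content": "Reply with the sentiment (positive/negative/neutral) "
                        "and three keywords, as a single line."},
            {"role": "user", "content": text},
        ],
    }
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()  # HTTP errors surface immediately, not inside a chain
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(analyze_text("The new release fixed every bug I cared about."))
```

Everything a framework would wrap in a chain object is visible here: the prompt, the request, and the shape of the response.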
Most importantly, the talk reveals how starting framework-free builds intuition for when frameworks become justified. Simple Python code exposes exactly what happens (network latency, parsing failures, API rate limits), while frameworks abstract these away until something breaks. Developers learn to recognize the concrete triggers for framework adoption: when you need dozens of tools orchestrated, complex retry logic across multiple LLMs, or systematic prompt optimization across datasets. Until those needs emerge, explicit code with lightweight libraries proves more maintainable, performant, and understandable than framework “magic” that trades clarity for perceived productivity.
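As one illustration of that visibility, the sketch below (again assuming the same hypothetical chat-completions endpoint as above) handles latency measurement, 429 rate limits, and response-parsing failures in explicit Python, the kind of logic frameworks typically bury inside their executors.

```python
# Explicit failure handling in plain sight: latency is measured, rate limits
# are retried with backoff, and parsing problems raise clear errors.
# call_llm() is a hypothetical helper for the assumed endpoint above.
import time
import requests


def call_llm(payload: dict, url: str, api_key: str, max_retries: int = 3) -> str:
    """POST to the LLM endpoint with visible retries and parsing checks."""
    for attempt in range(max_retries):
        start = time.perf_counter()
        resp = requests.post(
            url,
            headers={"Authorization": f"Bearer {api_key}"},
            json=payload,
            timeout=30,
        )
        # Network latency is measured right where the call happens.
        print(f"attempt {attempt + 1}: {time.perf_counter() - start:.2f}s")

        if resp.status_code == 429:        # rate limit: back off and retry
            time.sleep(2 ** attempt)
            continue
        resp.raise_for_status()            # other HTTP errors surface immediately

        try:                               # parsing failures are explicit, not swallowed
            return resp.json()["choices"][0]["message"]["content"]
        except (ValueError, KeyError, IndexError) as exc:
            raise RuntimeError(f"Unexpected response shape: {resp.text[:200]}") from exc

    raise RuntimeError(f"Still rate-limited after {max_retries} attempts")
```

When retries need to be coordinated across multiple providers or dozens of tools, this is exactly the code a framework starts to earn its keep by replacing; until then, it stays short, explicit, and easy to debug.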