Brains Behind Bots: Sessions, Memory, and Context Engineering
- mirglobalacademy
- Nov 20, 2025
- 3 min read
Chapter 1: Why AI Needs a Brain
Most people think AI is smart out of the box. It’s not.
Large Language Models (LLMs) are like goldfish: smart in the moment, but forgetful. Without memory, they don’t remember what you said five minutes ago. Without context, they don’t understand what you're really asking. This chapter kicks off our journey into context engineering – the secret sauce behind intelligent, persistent, and helpful AI agents.
Let’s introduce three pivotal concepts:
Context Engineering: Dynamically managing and assembling the info the model needs right now.
Sessions: The conversation thread, like a transcript of an ongoing chat.
Memory: Long-term understanding across chats. Think of it as the AI’s diary.
Chapter 2: Context Engineering – The Mise en Place of AI
In cooking, mise en place (French for "everything in its place") is when a chef prepares all ingredients before starting to cook. That’s what context engineering is for AI.
LLMs are stateless (without memory of the past). So, every time you ask something, you have to give it the recipe, ingredients, and cooking tools. Context engineering fixes this by dynamically assembling:
System instructions: Who the agent is and what it can do.
Few-shot examples: Demonstrations of how to behave.
External knowledge: Pulled in real-time from databases or documents.
Memory: Persisted facts about you or the task.
Conversation history: What just happened.
The goal? Provide no more and no less than what’s relevant for that moment.
Think of it like packing for a trip. You don’t bring your whole closet. Just what you need, depending on where you’re going and what you’re doing.
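The assembly step can be sketched as a small function that packs only the relevant pieces into the prompt. This is a minimal sketch: the name `build_context`, the field list, and the formatting are illustrative, not the API of any particular framework.

```python
def build_context(system_prompt, few_shots, retrieved_docs,
                  memories, history, max_turns=6):
    """Assemble a prompt from the pieces context engineering manages.

    Only the last max_turns turns of history are included --
    pack for the trip, don't bring the whole closet.
    """
    parts = [system_prompt]
    parts += [f"Example:\n{ex}" for ex in few_shots]
    parts += [f"Reference:\n{doc}" for doc in retrieved_docs]
    parts += [f"Known about the user: {m}" for m in memories]
    parts += history[-max_turns:]
    return "\n\n".join(parts)

prompt = build_context(
    system_prompt="You are a travel-booking assistant.",
    few_shots=["User: book a flight -> Agent: asks for dates first."],
    retrieved_docs=["Flights to Lisbon depart daily at 09:00."],
    memories=["prefers window seats"],
    history=["User: I want to fly to Lisbon.", "Agent: Which dates?"],
)
```

Each ingredient is prepped separately and combined at the last moment, so swapping in a different retriever or memory store doesn't change the assembly logic.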
Chapter 3: Sessions – The Workbench
A session is the ephemeral (short-lived but essential) workspace where the agent tracks a single ongoing conversation. It includes:
Events: User inputs, AI responses, tool calls, etc.
State: The scratchpad or working memory (e.g., a shopping cart).
It’s like the desk you use to work on a project. When the project ends, you clean it up and store only the good stuff.
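In code, a session can be as simple as an event log plus a state dict. The `Session` class below is a hypothetical sketch, not the session object of ADK, LangGraph, or any other framework.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """Ephemeral workspace for one conversation."""
    events: list = field(default_factory=list)  # user inputs, AI replies, tool calls
    state: dict = field(default_factory=dict)   # scratchpad, e.g. a shopping cart

session = Session()
session.events.append({"role": "user", "content": "Add a blue mug to my cart."})
session.state.setdefault("cart", []).append("blue mug")
session.events.append({"role": "agent", "content": "Added a blue mug to your cart."})
```

When the conversation ends, the events can be discarded or compacted, and only the durable facts get promoted to long-term memory.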
There are two types of session models in multi-agent systems:
Shared: All agents contribute to one history.
Private: Each agent has its own log and shares only what’s necessary.
Different frameworks (like ADK, LangGraph) have different approaches, but the mission is the same: track the evolving dialogue and working state reliably.
Chapter 4: Memory – The Filing Cabinet
Unlike sessions, memory is long-term. It’s what lets an AI remember your name, your favorite airline seat, or that you hate pineapple on pizza.
Types of memory include:
Declarative (knowing what): Facts and preferences.
Procedural (knowing how): Sequences or steps for tasks.
Memory is built through a process that resembles a gardener tending a garden:
Extraction: Pulling out seeds of useful information.
Consolidation: Weeding out duplicates or contradictions.
Storage: Filing it in the right drawer.
Retrieval: Knowing when to pull it back out.
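A toy version of that gardening pipeline, assuming a plain dict as the store and naive keyword matching for extraction and retrieval (a real system would use an LLM or embeddings for both):

```python
store = {}

def extract(turn):
    # Extraction: pull a candidate fact out of a conversation turn.
    if "i prefer" in turn.lower():
        return ("seating", turn.split("prefer", 1)[1].strip(" ."))
    return None

def consolidate(store, key, value):
    # Consolidation: a newer statement overwrites a contradicting older one.
    store[key] = value  # Storage: file it under the right key.

def retrieve(store, query):
    # Retrieval: pull back facts whose key appears in the query.
    return {k: v for k, v in store.items() if k in query.lower()}

fact = extract("I prefer window seats.")
if fact:
    consolidate(store, *fact)
prefs = retrieve(store, "Any seating preference on file?")
```

The four steps stay cleanly separated, so each can be upgraded independently without rewriting the others.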
Memory can be stored in different ways:
Vector databases: For fuzzy, semantic search.
Knowledge graphs: For structured relationships.
And can be scoped by:
User: Personalized experience.
Session: Contextual to a single task.
Application: Shared global knowledge.
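One hedged way to picture scoping is a store keyed by (scope, scope id); the key names and helper below are illustrative, not a real memory API:

```python
# Memories keyed by (scope, scope_id): user, session, or application.
memory = {
    ("user", "alice"): ["prefers window seats"],
    ("session", "s-42"): ["currently booking a Lisbon trip"],
    ("application", "global"): ["refunds take 5 business days"],
}

def memories_for(user_id, session_id):
    """Gather everything visible to one user in one session."""
    return (memory.get(("user", user_id), [])
            + memory.get(("session", session_id), [])
            + memory.get(("application", "global"), []))
```

A different user in a different session still sees the application-scoped facts, but none of Alice's personal ones.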
Think of memory as the AI's personal assistant. It whispers in the agent’s ear: "Hey, remember last time, the user said they liked window seats."

Chapter 5: The Dance Between Session and Memory
Sessions feed memory. Memory supports sessions.
As conversations grow, agents must compact (reduce) history using smart strategies:
Truncation: Keep the last N turns.
Summarization: Replace older conversation turns with a summary.
Semantic pruning: Keep what matters.
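The first two strategies are easy to sketch (semantic pruning is omitted here, since it needs embeddings). In a real agent the summary line would come from an LLM call; here it is a placeholder string:

```python
def truncate(history, n=4):
    # Truncation: keep only the last n turns.
    return history[-n:]

def summarize_then_truncate(history, n=2):
    # Summarization: replace everything older than the last n turns
    # with a one-line summary (a real agent would ask the LLM for it).
    old, recent = history[:-n], history[-n:]
    if not old:
        return history
    summary = f"[Summary of {len(old)} earlier turns]"
    return [summary] + recent

history = ["t1", "t2", "t3", "t4", "t5"]
short = truncate(history)
compacted = summarize_then_truncate(history)
```

Truncation is cheap but lossy; summarization keeps the gist of the discarded turns at the cost of an extra model call.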
Memory is then built from these sessions, curated, and used again in future chats. It’s an elegant cycle, powered by:
Triggering rules: When should memory be created?
Confidence scores: How reliable is that memory?
Lineage tracking: Where did this memory come from?
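Those three signals map naturally onto fields of a memory record. The trigger rule, the fixed confidence value, and the field names below are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class MemoryRecord:
    fact: str
    confidence: float    # how reliable is this memory?
    source_session: str  # lineage: which session produced it?

def maybe_remember(turn, session_id):
    # Triggering rule: only explicit "I always ..." statements create memories.
    if "i always" in turn.lower():
        return MemoryRecord(fact=turn, confidence=0.9, source_session=session_id)
    return None

rec = maybe_remember("I always fly economy.", "s-42")
# Keep the record only if its confidence clears a threshold.
keep = rec is not None and rec.confidence >= 0.7
```

Lineage makes memories auditable: if a fact turns out wrong, you can trace it back to the session that created it and prune everything it spawned.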
All of this ensures the bot becomes not just reactive, but reflective (thoughtfully aware and evolving).
Chapter 6: Designing Bots That Remember Like Humans
To design a bot that feels intelligent, you must:
Build smart sessions that track interaction states.
Create memories that evolve and consolidate.
Engineer the right context for every reply.
Handle multimodal inputs (text, image, audio).
Compact and prune data over time.
And above all, balance privacy (user trust) with performance (snappy replies).
Your AI should feel less like a search engine, and more like a thoughtful assistant that knows you.
Chapter 7: Final Thoughts
AI isn’t just about predicting the next word. It’s about building systems that understand, adapt, and evolve.
Sessions are the transient now. Memory is the persistent past. Context is the crafted present.
Together, they create the brain behind the bot.
Welcome to the age of stateful AI.