
Commit c6c3446

ofriw and claude committed
Update lesson 2 with AbstractShapesVisualization and streamlined context explanation
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
1 parent c657755 commit c6c3446

File tree

1 file changed: +8 −36 lines


website/docs/understanding-the-tools/lesson-2-understanding-agents.md

Lines changed: 8 additions & 36 deletions
@@ -4,6 +4,8 @@ sidebar_label: 'Lesson 2: Understanding Agents'
 title: 'Understanding Agents'
 ---
 
+import AbstractShapesVisualization from '@site/src/components/VisualElements/AbstractShapesVisualization';
+
 In Lesson 1, we established that **LLMs are brains** (token prediction engines) and **agent frameworks are bodies** (execution layers). Now let's understand how these components work together to create autonomous coding agents that can complete complex tasks.
 
 ## The Agent Execution Loop
@@ -181,47 +183,17 @@ Here's a crucial insight that transforms how you work with AI coding agents: **T
 
 The LLM doesn't "remember" previous conversations. It has no hidden internal state. Each response is generated solely from the text currently in the context. When the conversation continues, the LLM sees its previous responses as text in the context, not as memories it recalls.
 
-**This is a massive advantage, not a limitation.**
-
-**Advantage 1: Total control over agent "memory"**
-
-You decide what the agent knows by controlling what's in the context:
-
-```
-# First conversation
-You: "Implement user authentication using JWT"
-Agent: [Implements auth with JWT, stores tokens in localStorage]
-
-# Later, new conversation (fresh context)
-You: "Implement user authentication using sessions"
-Agent: [Implements auth with sessions, no JWT bias]
-```
+**This is a massive advantage, not a limitation.** You control what the agent knows by controlling what's in the context.
 
-The agent doesn't carry baggage from previous decisions. Each conversation is a clean slate. You can explore alternative approaches without the agent defending its earlier choices.
-
-**Advantage 2: Unbiased verification**
-
-The agent can review its own work with fresh eyes:
-
-```
-# Step 1: Implementation
-You: "Add email validation to registration endpoint"
-Agent: [Writes validation code]
-
-# Step 2: Verification (agent doesn't "remember" writing this)
-You: "Review the validation logic in src/handlers/user.ts for security issues"
-Agent: [Analyzes code objectively, finds potential regex DoS vulnerability]
-```
+**Clean-slate exploration:** Start a new conversation, and the agent has no bias from previous decisions. Ask it to implement authentication with JWT in one context, sessions in another - each gets evaluated on merit without defending earlier choices.
 
-**The agent doesn't know it wrote that code** unless you tell it. It reviews the code as objectively as if someone else wrote it. No ego, no defensive justification of past decisions.
+**Unbiased code review:** The agent can critically audit its own work. Don't reveal code authorship, and it applies full scrutiny with no defensive bias.
 
-This enables powerful workflows:
+<AbstractShapesVisualization />
 
-- **Generate → Review → Iterate** - Agent writes code, then critically reviews it
-- **Multi-perspective analysis** - Ask for security review in one context, performance review in another
-- **A/B testing approaches** - Explore different implementations without cross-contamination
+The same code that gets "looks sound overall" in one context triggers "Critical security vulnerabilities: localStorage exposes tokens to XSS attacks" in a fresh context. This enables Generate → Review → Iterate workflows where the agent writes code then objectively audits it, or multi-perspective analysis (security review in one context, performance in another).
 
-**Production implication:** Design your prompts to control what context the agent sees. Want unbiased code review? Don't tell it who wrote the code. Want it to follow existing patterns? Include examples in the context. The agent's "knowledge" is entirely what you engineer into the conversation.
+**How you engineer context determines agent behavior.** This manipulation happens through tools - the mechanisms agents use to read files, run commands, and observe results.
 
 ## Tools: Built-In vs External
 
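The statelessness the edited lesson describes can be sketched in a few lines of code: a chat-style LLM API keeps no server-side memory, so the agent's "knowledge" on any call is exactly the message list the client sends. The sketch below illustrates the idea under stated assumptions - `fake_llm` is a hypothetical stand-in for a real completion endpoint, not an actual API.

```python
# Chat APIs are stateless: on each call, the model is conditioned only
# on the `messages` it receives - there is no hidden memory between calls.
# `fake_llm` is a hypothetical stand-in for a real completion endpoint.
def fake_llm(messages):
    # The response can depend only on what's in `messages`.
    return f"response conditioned on {len(messages)} message(s)"

# Conversation state lives entirely on the client side.
history = [{"role": "user", "content": "Implement auth using JWT"}]
history.append({"role": "assistant", "content": fake_llm(history)})
history.append({"role": "user", "content": "Now add token refresh"})
print(fake_llm(history))  # the model sees all 3 messages - the full context

# A fresh context is just a new list: no earlier decisions carry over,
# which is what enables clean-slate exploration and unbiased review.
fresh = [{"role": "user", "content": "Review this auth code for security issues"}]
print(fake_llm(fresh))    # the model sees only 1 message - a clean slate
```

Controlling what goes into `history` (or starting a new list) is the whole mechanism behind the "engineer the context" advice in the diff above.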
