**scripts/output/podcasts/intro.md** (+31 −45)
@@ -7,79 +7,65 @@ speakers:
  - name: Sam
    role: Senior Engineer
    voice: Charon
- generatedAt: 2025-12-21T12:23:17.140Z
+ generatedAt: 2025-12-30T13:21:26.115Z
  model: claude-opus-4.5
- tokenCount: 1504
+ tokenCount: 1535
  ---

- Alex: Welcome to Agentic Coding. I'm Alex, and with me is Sam. We're going to teach you how to actually operate AI coding assistants in production—not the demo version, the real thing.
+ Alex: Welcome to Agentic Coding. I'm Alex, and joining me is Sam. We're here to talk about something that's become impossible to ignore in 2025—AI coding assistants and why most engineers are using them wrong.

- Sam: Before we dive in, I have to point out something a bit meta here. This script we're reading? It was generated using Claude Code and the exact techniques we're about to teach.
+ Sam: Before we dive in, I have to point out something a bit recursive here. This course, including the script we're reading right now, was developed using the exact AI-assisted techniques we're about to teach. We're AI-generated voices reading an AI-generated script about how to work with AI.

- Alex: Right, and the audio you're hearing is synthesized through Google's Gemini API. So you've got AI-generated voices reading an AI-generated script about how to use AI effectively.
+ Alex: It's meta, I know. But that's actually the point. If these techniques can produce production-grade training material about their own application, they're robust enough for your codebase. Consider it validation, not just instruction.

- Sam: There's something almost absurd about that recursion. AI teaching humans how to use AI, using AI.
+ Sam: Fair enough. So let's get into the actual problem. What's going wrong out there?

- Alex: It is recursive, but here's why it matters: if these techniques can produce production-grade training material about their own application, they're robust enough for your codebase. This isn't a demo. It's validation. Now, let's get into it.
+ Alex: The numbers tell the story. Over 77,000 organizations have adopted GitHub Copilot. 51% of developers use AI tools daily. Companies are shipping features faster. Peer-reviewed research shows baseline efficiency gains over 55%, and practitioners with proper methodology—including the author of this course—report 10x improvements. The technology works.

- Sam: So what's the actual problem we're solving here? AI coding assistants are everywhere in 2025. Companies are shipping faster, engineers are claiming 10x productivity gains. But most developers I know hit a wall within weeks.
+ Sam: But there's a catch, right? I've seen the frustration firsthand.

- Alex: The problem isn't the tools. It's the operating model. Most engineers treat AI agents like junior developers—waiting for them to "understand," fixing their code line-by-line, fighting context limits. That mental model is fundamentally wrong.
+ Alex: 66% of developers say AI solutions are "almost right, but not quite." Only 3% highly trust the output. The tools aren't the problem. The operating model is.

- Sam: What's the right mental model then?
+ Sam: What do you mean by operating model?

- Alex: AI agents aren't teammates. They're CNC machines for code. Think about it—a CNC machine doesn't "understand" what you want. You give it precise specifications, it executes. If the output is wrong, you don't coach the machine, you fix your specifications.
+ Alex: Most engineers treat AI agents like junior developers. They wait for the AI to "understand" the task. They fix code line-by-line. They fight context limits constantly. That's the wrong mental model entirely. AI agents aren't teammates—they're power tools. You don't wait for a power drill to understand what you want. You learn to operate it.

- Sam: That reframe is significant. Instead of managing a junior dev, you're operating industrial equipment.
+ Sam: That reframe is significant. I've definitely fallen into the "just let it figure it out" trap.

- Alex: Exactly. And that's what this course is—operator training. We teach a systematic approach used in production environments. Three phases: Plan, Execute, Validate.
+ Alex: And the research confirms the consequences. Developers without proper methodology are actually 19% slower with AI tools. Meanwhile, practitioners using systematic approaches report up to 10x efficiency gains. The difference is entirely operator skill.

- Sam: Break those down for me.
+ Sam: So this course is operator training.

- Alex: Planning means breaking work into agent-appropriate tasks, researching architecture, grounding prompts in context. Execution is crafting precise prompts, delegating to specialized sub-agents, running operations in parallel. Validation uses tests as guardrails, reviews generated code critically, and requires evidence of correctness.
+ Alex: Exactly. We teach a systematic approach used in production environments. Four phases: Research, Plan, Execute, Validate. Research means grounding agents in codebase patterns and domain knowledge before they act. Planning means designing changes strategically—exploring when you're uncertain, being directive when you're clear. Execution means running agents in supervised or autonomous mode based on trust and task criticality. Validation means verifying against your mental model, then iterating or regenerating.
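A minimal sketch of that four-phase loop in code, purely as an editorial illustration; every name in it (Grounding, run_agent, tests_pass) is hypothetical and stands in for whatever your agent and test runner actually provide:

```python
# Hypothetical sketch of the Research -> Plan -> Execute -> Validate loop.
# None of these names come from a real tool; they are placeholders.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Grounding:
    codebase_patterns: str  # Research: how this repo already solves similar problems
    domain_notes: str       # Research: external knowledge the task depends on

def run_task(task: str, ground: Grounding,
             run_agent: Callable[[str], str],
             tests_pass: Callable[[str], bool]) -> str:
    # Plan: ask for a plan only, grounded in the research artifacts.
    plan = run_agent(
        f"Plan only, no code yet.\n{ground.codebase_patterns}\n"
        f"{ground.domain_notes}\nTask: {task}"
    )
    # Execute: supervised or autonomous, depending on trust and criticality.
    diff = run_agent(f"Implement this plan:\n{plan}")
    # Validate: automated checks plus your own mental model of the change.
    if tests_pass(diff):
        return diff
    # Close misses get iterated; fundamental misses should be regenerated
    # from a revised plan rather than patched line-by-line.
    return run_agent(f"The tests failed. Revise this change:\n{diff}")
```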

- Sam: That sounds like a proper engineering workflow, just with agents as the execution layer.
+ Sam: Let's set expectations. What is this course not?

- Alex: That's precisely what it is. You're still the architect. You're still responsible for quality. But the agent handles execution at a pace you couldn't match manually.
+ Alex: It's not AI theory—we cover enough internals to operate effectively, nothing more. It's not prompt templates—copying prompts doesn't work; understanding principles does. It's not a replacement for fundamentals—you still need architecture, design patterns, and system design knowledge. And it's explicitly not for beginners. If you don't have production experience, start there first.

- Sam: Let's be clear about what this course isn't, though. I've seen a lot of "prompt engineering" content that's basically template copying.
+ Sam: Who should be listening then?

- Alex: This isn't that. Copying prompts doesn't work because context matters. We're teaching principles, not templates. This also isn't AI theory—we cover enough internals to operate effectively, nothing more. And critically, this isn't a replacement for fundamentals. You still need architecture knowledge, design patterns, system design. The agent amplifies your skills; it doesn't replace them.
+ Alex: You're the target audience if you have three or more years of professional engineering experience. If you've already tried AI coding assistants and hit frustration points. If you want to move faster without sacrificing code quality. If you need to understand codebases, debug issues, or plan features more efficiently. And critically, if you care about production-readiness, not demos.

- Sam: So who's this actually for?
+ Sam: How should people approach the material?

- Alex: Engineers with three or more years of professional experience who've already tried AI assistants and hit frustration points. People who want to move faster without sacrificing code quality. If you don't have production experience yet, go get that first. This course assumes you know how to engineer software—we're teaching you how to orchestrate agents that execute it.
+ Alex: This is a reference manual, not a traditional course with exercises. I recommend reading sequentially first—Module 1 covers fundamentals and mental models; Module 2 covers methodology, including prompting, grounding, and workflow design; Module 3 covers practical techniques for onboarding, planning, testing, reviewing, and debugging. Then return to specific lessons as you encounter relevant situations in your actual work. The value comes from having the right mental models when you need them.

- Sam: What's the structure?
+ Sam: What outcomes should people expect?

- Alex: Three modules, sequential. Module one covers fundamentals—mental models and architecture. Module two is methodology—prompting, grounding, workflow design. Module three is practical techniques—onboarding, planning, testing, reviewing, debugging. Each module builds on the previous. Don't skip ahead.
+ Alex: After completing this course, you'll be able to onboard to unfamiliar codebases 5 to 10x faster using agentic research. You'll refactor complex features reliably with test-driven validation. You'll debug production issues by delegating log and database analysis to agents. You'll review code systematically with AI assistance while maintaining critical judgment. And you'll plan and execute features with parallel sub-agent delegation.

- Sam: And the exercises?
+ Sam: What do people need to get started?

- Alex: Mandatory. Reading won't build operating skills. Work through the exercises on real codebases, not the toy examples we provide. You learn to operate by operating.
+ Alex: Three things. First, experience—three or more years of professional software engineering. Second, tools—access to a CLI coding agent like Claude Code, OpenAI Codex, or Copilot CLI. If you haven't picked one yet, Claude Code is recommended at time of writing for its plan mode, sub-agents, slash commands, hierarchical configuration, and status bar support. Third, mindset—willingness to unlearn "AI as teammate" and adopt "AI as tool."

- Sam: What should engineers expect to gain by the end?
+ Sam: There's something deeper here though, isn't there? Beyond just tool proficiency.

- Alex: Concrete capabilities. Onboard to unfamiliar codebases five to ten times faster using agentic research. Refactor complex features reliably with test-driven validation. Debug production issues by delegating log and database analysis to agents. Review code systematically with AI assistance while maintaining critical judgment. Plan and execute features with parallel sub-agent delegation.
+ Alex: This course isn't really about AI. It's about rigorous engineering with tools that happen to be stochastic systems. AI agents are amplifiers—of your architectural clarity, your testing discipline, your code patterns. Good or bad, they compound what exists. Research shows AI-assisted code has 8x more duplication—not because agents create it, but because they amplify existing patterns in your codebase.

- Sam: That's a substantial list. But I'd argue the most valuable skill isn't on it.
+ Sam: So the quality of what you put in determines what you get out.

- Alex: You're thinking of judgment—knowing when to use agents and when to write code yourself.
+ Alex: You are the circuit breaker. Every accepted line becomes pattern context for future agents. Your engineering judgment—in review, in architecture, in pattern design—determines which direction the exponential curve bends. The tools changed. The fundamentals didn't.

- Sam: Exactly. That's what separates someone who's productive from someone who's fighting the tools constantly.
+ Sam: Where do we start?

- Alex: And that judgment is what we're really teaching. The techniques are learnable. The judgment comes from understanding the underlying principles deeply enough to make good calls in novel situations.
-
- Sam: What do people need before starting?
-
- Alex: Three or more years of professional software engineering experience. Access to a CLI coding agent—Claude Code, OpenAI Codex, Copilot CLI, whatever you prefer. If you haven't picked one, Claude Code is recommended at time of writing for its plan mode, sub-agents, slash commands, hierarchical CLAUDE.md configuration, and status bar support. And most importantly, a willingness to unlearn "AI as teammate" and adopt "AI as tool."
-
- Sam: That mindset shift is probably the hardest part.
-
- Alex: It is. Engineers have spent years developing collaboration skills for working with humans. Those instincts actively interfere with operating AI effectively. You have to consciously override them.
-
- Sam: Alright. Where do we start?
-
- Alex: Lesson one: LLMs Demystified. We need to understand just enough about how these systems work to operate them effectively. Not the theory—the practical implications for your workflow.
-
- Sam: Let's get into it.
+ Alex: Lesson 1: LLMs Demystified. We'll cover exactly enough about how these systems work to operate them effectively. No more, no less.
**website/docs/faq.md** (+3 −3)
@@ -18,7 +18,7 @@ export const faqData = {
  { question: "What is vibe engineering?", answer: "Vibe engineering, coined by Simon Willison (October 2025), describes AI-assisted coding where you thoroughly review and understand generated code before shipping. It's a disciplined approach that amplifies established software engineering practices: automated testing, documentation, code review, and validation. Agentic coding shares this accountability principle but adds explicit structure: the four-phase workflow (research, plan, execute, validate) that makes AI-assisted development predictable and repeatable." },
  { question: "Which AI coding tools support agentic coding?", answer: "This course recommends CLI coding agents—Claude Code, Aider, or Codex CLI—because terminal-based tools enable parallelism: run multiple agent instances across different terminal tabs or git worktrees simultaneously. IDE agents (Cursor, Copilot) work but are coupled to single windows, limiting concurrent workflows. The methodology applies broadly, but CLI agents unlock the parallel execution model this course teaches." },
  { question: "Is agentic coding better than prompt engineering?", answer: "Prompt engineering optimizes individual prompts. Agentic coding is a complete methodology: research, plan, execute, validate. You still write prompts, but within a structured workflow that includes grounding agents in your codebase, reviewing plans before execution, and validating output against your mental model." },
- { question: "Why CNC machine instead of junior developer?", answer: "A CNC machine doesn't 'understand' the part it's making—it executes instructions with precision. You don't get frustrated when it fails to interpret vague coordinates; you provide exact specifications. Same with AI agents. You own the results, validate through testing, and debug your input when output fails." },
+ { question: "Why power tool instead of junior developer?", answer: "A power tool doesn't 'understand' what you're building—it executes based on how you operate it. You don't get frustrated when a circular saw cuts the wrong angle; you adjust your setup and technique. Same with AI agents. You own the results, validate through testing, and debug your input when output fails." },
  { question: "What is the agentic coding workflow?", answer: "Four phases: Research (ground agents in your codebase patterns and external domain knowledge), Plan (choose exploration or exact planning strategy, make architectural decisions), Execute (delegate to agents in supervised or autonomous mode), Validate (decide iterate or regenerate based on alignment with your mental model and automated checks). Skipping any phase dramatically increases failure rate." },
  { question: "Why do some developers report being slower with AI tools?", answer: "Studies show experienced developers are often slower on individual tasks with AI—despite believing they're faster. Speed per task is the wrong metric. This methodology teaches that the real productivity gain comes from parallelism: running multiple agents on different tasks while you attend meetings, review PRs, or handle other work. A senior engineer with three parallel agents ships more than one babysitting a single conversation." },
  { question: "Why does my AI agent hallucinate incorrect code?", answer: "Agents don't know your codebase exists. Without grounding, they generate from training data patterns frozen at their knowledge cutoff. The fix: inject your real code patterns, architectural constraints, and current documentation into context before asking for generation. Grounding significantly reduces hallucination by anchoring generation in your actual codebase." },
@@ -136,9 +136,9 @@ Prompt engineering optimizes individual prompts. Agentic coding is a complete me

  *Learn more in [Lesson 4: Prompting 101](/docs/methodology/lesson-4-prompting-101).*

- ### Why "CNC machine" instead of "junior developer"?
+ ### Why "power tool" instead of "junior developer"?

- A CNC machine doesn't 'understand' the part it's making—it executes instructions with precision. You don't get frustrated when it fails to interpret vague coordinates; you provide exact specifications. Same with AI agents. You own the results, validate through testing, and debug your input when output fails.
+ A power tool doesn't 'understand' what you're building—it executes based on how you operate it. You don't get frustrated when a circular saw cuts the wrong angle; you adjust your setup and technique. Same with AI agents. You own the results, validate through testing, and debug your input when output fails.

  *Learn more in [Lesson 1: How LLMs Work](/docs/fundamentals/lesson-1-how-llms-work).*
- **Result:** Massive gains in bandwidth, repeatability, and precision

  **Software engineering transformation:**
@@ -99,9 +99,9 @@ Understanding the machinery prevents three critical errors:
  - Reality: It's a precision instrument that speaks English
  - Your fix: Maintain tool mindset (Principle 3, covered in Lesson 3)

- **Analogy: LLM is to software engineers what CNC/3D printers are to mechanical engineers**
+ **Analogy: LLMs are power tools for code**

- A CNC machine doesn't "understand" the part it's making. It executes instructions precisely. You don't get mad at it for misinterpreting vague coordinates - you provide exact specifications.
+ A power tool doesn't "understand" what you're building. It executes based on how you operate it. You don't blame a circular saw for a bad cut—you adjust your technique and setup.

  Same with LLMs. They're tools that execute language-based instructions with impressive fluency but zero comprehension.