scripts/output/podcasts/intro.md

speakers:
- name: Sam
  role: Senior Engineer
  voice: Charon
generatedAt: 2025-12-21T12:23:17.140Z
model: claude-opus-4.5
tokenCount: 1504
---

Alex: Welcome to Agentic Coding. I'm Alex, and with me is Sam. We're going to teach you how to actually operate AI coding assistants in production—not the demo version, the real thing.

Sam: Before we dive in, I have to point out something a bit meta here. This script we're reading? It was generated using Claude Code and the exact techniques we're about to teach.

Alex: Right, and the audio you're hearing is synthesized through Google's Gemini API. So you've got AI-generated voices reading an AI-generated script about how to use AI effectively.

Sam: There's something almost absurd about that recursion. AI teaching humans how to use AI, using AI.

Alex: It is recursive, but here's why it matters: if these techniques can produce production-grade training material about their own application, they're robust enough for your codebase. This isn't a demo. It's validation. Now, let's get into it.

Sam: So what's the actual problem we're solving here? AI coding assistants are everywhere in 2025. Companies are shipping faster, engineers are claiming 10x productivity gains. But most developers I know hit a wall within weeks.

Alex: The problem isn't the tools. It's the operating model. Most engineers treat AI agents like junior developers—waiting for them to "understand," fixing their code line-by-line, fighting context limits. That mental model is fundamentally wrong.

Sam: What's the right mental model then?

Alex: AI agents aren't teammates. They're CNC machines for code. Think about it—a CNC machine doesn't "understand" what you want. You give it precise specifications, it executes. If the output is wrong, you don't coach the machine, you fix your specifications.

Sam: That reframe is significant. Instead of managing a junior dev, you're operating industrial equipment.

Alex: Exactly. And that's what this course is—operator training. We teach a systematic approach used in production environments. Three phases: Plan, Execute, Validate.

Sam: Break those down for me.

Alex: Planning means breaking work into agent-appropriate tasks, researching architecture, grounding prompts in context. Execution is crafting precise prompts, delegating to specialized sub-agents, running operations in parallel. Validation uses tests as guardrails, reviews generated code critically, and requires evidence of correctness.
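
To make those phases concrete, here is a minimal sketch of one Plan, Execute, Validate iteration with a CLI agent. The feature, file paths, prompts, and commands are hypothetical stand-ins, not output from any real session:

```text
# Plan: have the agent research before it edits anything
"Read src/billing/ and summarize how invoices are generated.
Propose a step-by-step plan for adding proration. Do not modify files yet."

# Execute: delegate one scoped task from the approved plan
"Implement step 1 only: add a prorate() helper in src/billing/proration.ts
with unit tests that follow the existing test patterns."

# Validate: demand evidence, then review the changes yourself
npm test    # or whatever your project's test command is
git diff    # read the generated diff critically before accepting it
```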

Sam: That sounds like a proper engineering workflow, just with agents as the execution layer.

Alex: That's precisely what it is. You're still the architect. You're still responsible for quality. But the agent handles execution at a pace you couldn't match manually.

Sam: Let's be clear about what this course isn't, though. I've seen a lot of "prompt engineering" content that's basically template copying.

Alex: This isn't that. Copying prompts doesn't work because context matters. We're teaching principles, not templates. This also isn't AI theory—we cover enough internals to operate effectively, nothing more. And critically, this isn't a replacement for fundamentals. You still need architecture knowledge, design patterns, system design. The agent amplifies your skills; it doesn't replace them.

Sam: So who's this actually for?

Alex: Engineers with three or more years of professional experience who've already tried AI assistants and hit frustration points. People who want to move faster without sacrificing code quality. If you don't have production experience yet, go get that first. This course assumes you know how to engineer software—we're teaching you how to orchestrate agents that execute it.

Sam: What's the structure?

Alex: Three modules, sequential. Module one covers fundamentals—mental models and architecture. Module two is methodology—prompting, grounding, workflow design. Module three is practical techniques—onboarding, planning, testing, reviewing, debugging. Each module builds on the previous. Don't skip ahead.

Sam: And the exercises?

Alex: Mandatory. Reading won't build operating skills. Work through the exercises on real codebases, your own projects, not just the toy examples we provide. You learn to operate by operating.

Sam: What should engineers expect to gain by the end?

Alex: Concrete capabilities. Onboard to unfamiliar codebases five to ten times faster using agentic research. Refactor complex features reliably with test-driven validation. Debug production issues by delegating log and database analysis to agents. Review code systematically with AI assistance while maintaining critical judgment. Plan and execute features with parallel sub-agent delegation.

Sam: That's a substantial list. But I'd argue the most valuable skill isn't on it.

Alex: You're thinking of judgment—knowing when to use agents and when to write code yourself.

Sam: Exactly. That's what separates someone who's productive from someone who's fighting the tools constantly.

Alex: And that judgment is what we're really teaching. The techniques are learnable. The judgment comes from understanding the underlying principles deeply enough to make good calls in novel situations.

Sam: What do people need before starting?

Alex: Three or more years of professional software engineering experience. Access to a CLI coding agent—Claude Code, OpenAI Codex, Copilot CLI, whatever you prefer. If you haven't picked one, Claude Code is recommended at time of writing for its plan mode, sub-agents, slash commands, hierarchical CLAUDE.md configuration, and status bar support. And most importantly, a willingness to unlearn "AI as teammate" and adopt "AI as tool."
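
For reference, a CLAUDE.md is just a markdown file the agent reads as standing project context, and subdirectories can carry their own for more local guidance. Here is a minimal sketch of a root-level one, with hypothetical commands and conventions standing in for your project's own:

```markdown
# CLAUDE.md (repository root)

## Commands
- Build: `npm run build`
- Test: `npm test`

## Conventions
- TypeScript strict mode; avoid `any`
- Every behavior change ships with a unit test

## Architecture
- `src/api/` holds HTTP handlers; `src/core/` holds domain logic; keep them separate
```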

Sam: That mindset shift is probably the hardest part.

Alex: It is. Engineers have spent years developing collaboration skills for working with humans. Those instincts actively interfere with operating AI effectively. You have to consciously override them.

Sam: Alright. Where do we start?

Alex: Lesson one: LLMs Demystified. We need to understand just enough about how these systems work to operate them effectively. Not the theory—the practical implications for your workflow.