Regenerate podcast scripts and audio with unified generator
- Update all podcast scripts with improved dialog generation
- Regenerate all audio files using unified pipeline
- Update manifests for both scripts and audio
- Ensure consistency across all course modules
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
scripts/output/podcasts/intro.md (31 additions, 51 deletions)
@@ -7,85 +7,65 @@ speakers:
 - name: Sam
   role: Senior Engineer
   voice: Charon
-generatedAt: 2025-11-04T07:17:19.414Z
+generatedAt: 2025-11-05T11:46:27.245Z
 model: claude-haiku-4.5
-tokenCount: 1799
+tokenCount: 1192
 ---
 
-Alex: Welcome to the AI Coding Course. I'm Alex, and I'll be walking you through how to actually operate AI agents in production. This isn't theory—it's practical training for engineers who've already tried these tools and hit the inevitable frustration wall.
+Alex: Welcome to the AI Coding Course. I'm Alex, and this is Sam. We're going to spend the next few modules walking through how to actually operate AI coding agents in production. Not theory—practice.
 
-Sam: I'm Sam. I've been building systems for about eight years now, and I'll be asking the questions I assume you're thinking. So let's start with the obvious one: why do we need a course for this? The tools work. I've used them. But something always feels... off.
+Sam: So this is operator training, not AI 101?
 
-Alex: Exactly. And here's the thing that makes this worth acknowledging up front—this course was built using the exact techniques we're about to teach you. The content structure, the lessons, even this podcast script... all created with AI tools working alongside the process. Which is both ironic and validating.
+Alex: Exactly. By 2025, AI coding assistants are already standard practice in most shops. Companies ship features faster, engineers are 10x their output. The technology works. But the frustration wall comes fast—usually within weeks.
 
-Sam: So we're teaching you how to operate AI by showing you AI in operation. I appreciate the transparency, but it also feels like we need to acknowledge the weird loop here before we move on.
+Sam: Where does that frustration come from, if the tools are working?
 
-Alex: Agreed. The recursive nature is real—using AI to teach about AI is either terrifying or obvious, depending on how you look at it. But what matters is this: if these techniques can produce production-grade training material on their own application, they're robust enough for your codebase. That's the validation.
+Alex: The operating model. Most developers treat these agents like junior developers. You wait for them to understand, you fix their code line-by-line, you fight context limits. That's the wrong mental model entirely. Think of AI agents as CNC machines for code, not teammates. You don't manage a CNC machine—you learn to operate it with precision.
 
-Sam: Fair point. So beyond the meta-commentary, what's actually broken about how most engineers use these tools right now?
+Sam: So we're not trying to make the agent "smarter" or more collaborative. We're learning how to task it effectively.
 
-Alex: The operating model is fundamentally wrong. Most people treat AI agents like junior developers. They send a task, wait for understanding, review the output line-by-line, fight context limits, get frustrated. That's not how this works.
+Alex: Precisely. And that changes everything about how you work. Instead of back-and-forth refinement, you plan deliberately, break work into agent-appropriate units, ground your context, execute with precision, then validate rigorously. It's a systematic approach—plan, execute, validate.
 
-Think of it differently: AI agents aren't teammates. They're CNC machines for code. You don't expect a CNC machine to understand your intent or learn from experience. You give it precise specifications, run the operation, validate the output. That's it.
+Sam: That sounds like it requires a different mindset than treating it like a colleague.
 
-Sam: That's a useful reframing. But there's a skill gap there, right? Most engineers have been trained to collaborate with humans. You're saying we need to completely flip our mental model.
+Alex: Completely different. And that's actually why this course exists. We assume you've already tried AI coding assistants. You've hit the frustration wall. You want to move faster without sacrificing code quality. That's the audience.
 
-Alex: Completely. And that's what this course is really about. We're teaching you how to operate, not how to collaborate. There's a systematic approach—three phases that work consistently in production: Plan, Execute, Validate.
+Sam: What's not in scope here?
 
-Sam: Break that down for me. What does planning actually look like when your "teammate" is a machine?
+Alex: We're not covering AI theory. We'll explain enough internals to operate effectively, but we're not going deep on transformers or token optimization. We're also not selling you prompt templates—copying templates doesn't generalize. We're teaching principles that adapt to your codebase, your architecture, your constraints.
 
-Alex: Planning is about decomposition and grounding. You break work into agent-appropriate tasks, research the architecture of what you're working with, understand the constraints and context. You're not asking the agent to figure this out. You're doing it.
+Sam: And it's not a replacement for knowing your craft.
 
-Then in execution, you craft precise prompts based on that groundwork. You delegate to specialized sub-agents. You run operations in parallel where possible. You're orchestrating, not waiting.
+Alex: Right. You need to understand architecture, design patterns, system design. If you don't have production experience, that's the prerequisite. This course assumes you can engineer software. We're teaching you how to orchestrate agents that execute it autonomously.
 
-Finally, validation is where most teams stumble. Tests become guardrails. You review generated code critically—not assuming it's correct, but looking for evidence. That's the discipline that separates effective operators from frustrated ones.
+Sam: So who's actually sitting through this?
 
-Sam: Okay, so this is fundamentally about work discipline. The tools are capable, but you have to approach them systematically.
+Alex: Anyone with 3+ years professional experience who wants to actually get the value out of these tools. If you've already tried agents and hit the ceiling, this is for you. If you need to onboard unfamiliar codebases faster, debug production issues, refactor complex features reliably—this changes how you operate.
 
-Alex: Exactly. And here's what that discipline enables: engineers who've learned this approach consistently report 5-10x improvements in specific areas. Onboarding to unfamiliar codebases, refactoring complex features, debugging production issues, reviewing code systematically, planning and executing features in parallel. These aren't theoretical improvements.
+Sam: And the practical outcome? What can someone actually do after this?
 
-But there's a gate: you need to know when to use agents and when to write code yourself. That judgment is what separates people who benefit from this versus people who get frustrated.
+Alex: You'll onboard to unfamiliar codebases 5-10x faster using agentic research. You'll refactor complex features reliably with test-driven validation. Debug production issues by delegating log and database analysis to agents. Review code with AI assistance while maintaining critical judgment. Plan and execute features with parallel sub-agent delegation.
 
-Sam: I'm curious about the audience you're targeting. This isn't for everyone, is it?
+Sam: That's significant. But I imagine the real skill is knowing when to use agents and when to just write code yourself.
 
-Alex: No. We're assuming 3+ years of professional engineering experience. You know architecture, design patterns, system design fundamentals. You've tried AI tools and hit walls. You want to move faster without sacrificing code quality. You care about production-readiness.
+Alex: That's exactly it. That's what separates effective operators from frustrated ones. The judgment of when to delegate and when to stay hands-on.
 
-If you're new to engineering, this course assumes you already know how to engineer software. We're teaching orchestration, not fundamentals.
+Sam: So how's this structured?
 
-Sam: And the practical element—I'm assuming we're not just lecturing at you?
+Alex: Three modules. Module One: Understanding the Tools—mental models and architecture. Module Two: Methodology—prompting, grounding, workflow design. Module Three: Practical Techniques—how to actually onboard, plan, test, review, debug with agents. Each builds on the previous.
 
-Alex: Hands-on exercises are mandatory. Reading alone doesn't build operating skills. You need to work through problems on real codebases, not examples. That's where the judgment develops.
+Sam: And exercises are mandatory, I assume.
 
-Sam: So what will someone actually be able to do after finishing this?
+Alex: Non-negotiable. Reading alone won't build operating skills. You need to work through the exercises on real codebases, not the examples we provide. That's where the pattern recognition actually develops.
 
-Alex: Concrete outcomes. Onboarding to unfamiliar codebases 5-10x faster using systematic agentic research. Refactoring complex features reliably with test-driven validation. Debugging production issues by delegating analysis to agents while maintaining critical judgment. Reviewing code systematically with AI assistance. Planning and executing features with parallel sub-agent delegation.
+Sam: Before we get started—I noticed something meta about this course. It was developed using AI.
 
-The meta-outcome is knowing your boundaries. You'll know when to use agents, when to code yourself, and how to validate that you're making the right choice.
+Alex: Yes. The entire curriculum—structure, progression, code examples, documentation—was built using the same techniques you're about to learn. Every module was planned, researched, drafted, and refined through systematic prompting and agentic research. The podcast versions were generated using Claude Code and Gemini, including the voices you're hearing.
 
-Sam: I think what you're describing is operating discipline combined with tool mastery. Most courses focus on the tool mastery part.
+Sam: That's actually the perfect validation for what you're teaching.
 
-Alex: Exactly. And that's why most people plateau. You can learn prompting templates. But principles teach you to operate in novel contexts. That's the difference between training and education.
+Alex: It is. If these techniques can produce production-grade training material about their own application, they're robust enough for your codebase. This isn't marketing—it's evidence.
 
-Sam: So what's the actual structure here?
+Sam: Ready to start?
 
-Alex: Three modules, building sequentially. First, we cover the mental models and architecture—understanding the tools well enough to operate them effectively. Not deep theory, just what matters.
-
-Second, we dive into methodology: how to plan work, how to ground agents in context, how to execute systematically, how to validate output.
-
-Third, practical techniques. We work through real scenarios: onboarding to codebases, planning features, using tests as guardrails, reviewing code, debugging issues.
-
-Sam: No theory for theory's sake, then.
-
-Alex: None. And we're explicitly not covering AI theory, prompt template libraries, or beginner fundamentals. If you need those, we point you elsewhere. This is focused on one thing: making you a more effective operator.
-
-Sam: One more practical question. What do you need to get started?
-
-Alex: Three things, really. You need access to a CLI coding agent—Claude Code, Copilot, whatever you have available. You need 3+ years of production experience. And you need to unlearn the "AI as teammate" mental model and adopt "AI as tool" instead.
-
-That third one is the hardest. Most of the frustration isn't technical. It's conceptual. Once you shift that frame, everything else becomes operational.
-
-Sam: Alright. I think that gives people a clear sense of what they're walking into. Where do we actually start?
-
-Alex: Next module: Understanding the Tools. We'll build the mental models that make everything else click. No hand-holding, but fundamentally clear. That's where the framework starts.
-
-Sam: Let's go.
+Alex: Ready. Let's begin with Understanding the Tools.