Commit 3438d06 (1 parent: eeebbf8)

Regenerated podcasts using latest gemini tts

25 files changed: +663 −789 lines

scripts/output/podcasts/intro.md

Lines changed: 52 additions & 28 deletions

@@ -7,57 +7,81 @@ speakers:
   - name: Sam
     role: Senior Engineer
     voice: Charon
-generatedAt: 2025-11-08T08:16:32.274Z
-model: claude-haiku-4.5
-tokenCount: 1763
+generatedAt: 2025-12-12T07:57:45.738Z
+model: claude-opus-4.5
+tokenCount: 1581
 ---
 
-Alex: Welcome to the AI Coding Course. I'm Alex, and I'll be walking us through this together with Sam, who's a senior engineer we'll be learning alongside. Before we dive into the fundamentals, I want to address something important: this course exists because of the exact techniques we're going to teach you. The curriculum, the structure, every lesson—we built it using AI-assisted workflows. Even this podcast script was generated using Claude Code and synthesized into dialogue. So in a way, you're about to learn from a course that proves these methods work at scale. There's something recursive about that, but it's also the strongest validation we could offer.
+Alex: Welcome to the AI Coding Course. I'm Alex, and joining me is Sam. Before we dive into the actual content, there's something worth acknowledging upfront.
 
-Sam: That's a great setup, actually. It means the techniques are battle-tested enough to build something substantial, not just toy examples. So this isn't theoretical—this is what's actually happening in production right now?
+Sam: The meta thing?
 
-Alex: Exactly. That's the core premise. AI coding assistants are production-standard in 2025. Companies are shipping features faster. Individual engineers are 10-xing their output. The technology fundamentally works. But here's what we keep seeing: developers hit a frustration wall within weeks.
+Alex: Exactly. This script you're listening to right now was generated using the same AI-assisted workflow we're about to teach you. The course content, the lesson structure, even this dialog—all developed using Claude Code and the techniques covered in later modules.
 
-Sam: What does that look like? People aren't getting value anymore, or are they hitting bugs?
+Sam: So we're AI-generated voices reading an AI-generated script about how to use AI to generate code. That's... recursive.
 
-Alex: Both, but the real issue is deeper. Most developers treat AI agents like junior developers. You describe what you want, wait for it to "understand," then spend hours fixing code line-by-line, fighting context limits, tweaking prompts. It feels like the assistant should just get it, so when it doesn't, it feels broken. But the problem isn't the tools—it's the operating model. You're using the wrong mental framework entirely.
+Alex: It is. And honestly, that's the point. If these techniques can produce production-grade training material about their own application, they're robust enough for your codebase. But enough meta-commentary—let's talk about why this course exists.
 
-Sam: So how should we think about them differently?
+Sam: Right. AI coding assistants are everywhere now. It's 2025, they're production-standard. But I've talked to a lot of engineers who tried them, got frustrated, and either gave up or settled into this awkward pattern where they're basically babysitting the AI.
 
-Alex: Think of AI agents as CNC machines for code. A CNC machine isn't a teammate. It's a tool. You don't wait for it to "understand" what you want. You develop precision—you write exact specifications, you run operations efficiently, you validate output critically. That's operator training, not team management. This course is about learning to operate these machines systematically.
+Alex: That frustration wall is real, and it's predictable. The problem isn't the tools—it's the operating model. Most developers approach AI agents like they're junior developers. They wait for the AI to "understand" their intent, they fix generated code line-by-line, they fight context limits constantly.
 
-Sam: That reframes everything. So instead of "help me build this feature," it's more like "here's the problem, here's the architecture context, here's how to break this into executable tasks"?
+Sam: Which is exhausting. You end up spending more time correcting the AI than you would have spent just writing the code yourself.
 
-Alex: Precisely. The framework is straightforward: Plan, Execute, Validate. During planning, you break work into agent-appropriate tasks, research the architecture you're working with, and ground the agent in context. During execution, you craft precise prompts, delegate to specialized sub-agents, and run operations in parallel when possible. And then you validate relentlessly—tests are your guardrails, you review generated code with actual critical judgment, and you require evidence that the work is correct before moving forward. No assumptions.
+Alex: Exactly. And that's because "AI as junior developer" is the wrong mental model entirely. AI agents aren't teammates. They're CNC machines for code. You don't negotiate with a CNC machine or hope it understands your vision. You learn to operate it—you give it precise instructions, you set up the right fixtures, you validate the output.
 
-Sam: I notice you're not talking about prompt templates or magic phrasing.
+Sam: That's a useful reframe. A CNC machine is incredibly capable, but only if you know how to program it correctly.
 
-Alex: Right. And that's intentional. Copying prompts from the internet doesn't scale. Understanding the principles does. What works for my codebase might fail on yours because the architecture is different, the patterns are different, the constraints are different. We're teaching you how to think about orchestrating agents, not how to memorize prompts.
+Alex: Right. And that's what this course is: operator training. We're teaching the systematic approach used in production environments. Three phases: Plan, Execute, Validate.
 
-Sam: What about the people who haven't done the work yet? Should they take this course?
+Sam: Walk me through those.
 
-Alex: No. This course assumes you have 3+ years of professional engineering experience and that you already know how to engineer software. If you don't have that foundation—if you're still learning data structures, design patterns, system design—finish that first. We're not replacing fundamentals. We're teaching you how to orchestrate agents that execute code autonomously. That only makes sense if you can already evaluate whether that code is correct.
+Alex: Planning means breaking work into agent-appropriate tasks, researching architecture, grounding the agent in context. Execution is about crafting precise prompts, delegating to specialized sub-agents, running operations in parallel when possible. Validation uses tests as guardrails, requires critical code review, demands evidence of correctness.
 
-Sam: So someone with production experience who's tried these tools and hit that frustration wall you mentioned—that's the core audience?
+Sam: So it's not just "write better prompts"—it's a complete workflow.
 
-Alex: Exactly. You've already got hands-on experience. You've probably tried Claude or ChatGPT Code or GitHub Copilot and thought, "This is amazing... for about two weeks." You want to understand why it breaks, how to get more consistent results, how to apply it to real problems at scale. That's where this course adds value. It's also useful if you need to understand unfamiliar codebases faster, if you're debugging production issues and want to delegate log analysis to an agent, or if you're planning a complex refactor and want systematic validation.
+Alex: Correct. And let me be clear about what this course isn't. It's not AI theory. We cover enough internals to operate effectively, but we're not doing a deep dive on transformer architectures. It's not prompt templates—copying prompts doesn't work; understanding principles does.
 
-Sam: The course is sequential, right? You need to understand the fundamentals before you can apply the techniques?
+Sam: That's important. I've seen engineers collect prompt templates like recipes, but they never get consistent results because they don't understand why a particular prompt structure works.
 
-Alex: Yes. It breaks into three modules. Module one covers mental models and architecture—understanding how these tools work under the hood, just enough to operate them effectively. Module two is methodology: how to structure prompts, how to ground agents in context, how to design workflows that actually work. Module three is practical techniques—onboarding to unfamiliar codebases, planning complex features, testing, code review with AI assistance, debugging. You build each skill on top of the previous one.
+Alex: Exactly. This course also isn't a replacement for fundamentals. You still need to know architecture, design patterns, system design. And it's explicitly not for beginners—if you don't have production experience, you need to get that first.
 
-Sam: Are these modules something you can jump between, or do you really need to go start to finish?
+Sam: So who is this for?
 
-Alex: Start to finish. Each section assumes you've internalized the concepts from the previous one. And critically, there are exercises throughout. Reading alone won't build the operating skill. You need to apply these techniques to real codebases—not the examples we provide, but actual projects you're working on. That hands-on practice is where the learning happens.
+Alex: Engineers with three or more years of professional experience who've already tried AI coding assistants and hit those frustration points. People who want to move faster without sacrificing code quality. Engineers who need to understand codebases, debug issues, or plan features more efficiently—and care about production-readiness, not demos.
 
-Sam: What changes by the end? What's the practical difference in how someone works?
+Sam: That's a specific audience. What do they get out of completing this?
 
-Alex: Several concrete things. You can onboard to unfamiliar codebases 5 to 10 times faster using agentic research instead of manual exploration. You can refactor complex features reliably because you're using test-driven validation with agent assistance. You can delegate log and database analysis to agents when debugging production issues—the agent runs the queries, interprets the results, you maintain the judgment. You can review code systematically with AI assistance while still maintaining critical thinking. And you can plan and execute features with parallel sub-agent delegation instead of sequential bottlenecks. But more importantly, you develop judgment about when to use agents and when to write code yourself. That judgment is what separates people who actually move faster from people who just feel frustrated.
+Alex: Concrete capabilities. Onboarding to unfamiliar codebases five to ten times faster using agentic research. Refactoring complex features reliably with test-driven validation. Debugging production issues by delegating log and database analysis to agents. Reviewing code systematically with AI assistance while maintaining critical judgment. Planning and executing features with parallel sub-agent delegation.
 
-Sam: So the course proves itself by being built with its own methodology?
+Sam: Those are significant multipliers.
 
-Alex: That's the idea. If these techniques can produce production-grade training material from the ground up—content structure, lesson progression, code examples, documentation—then they're robust enough for your codebase. This isn't a marketing claim. It's validation through application.
+Alex: They are. But the most important skill you'll develop is judgment—knowing when to use agents and when to write code yourself. That's what separates effective operators from frustrated ones.
 
-Sam: When do we start?
+Sam: Let's talk about prerequisites. You mentioned three-plus years of experience. What else?
 
-Alex: Let's dive into Module One. Understanding the Tools.
+Alex: You need access to a CLI coding agent. Claude Code, OpenAI Codex, Copilot CLI—any of them will work. If you haven't picked one yet, Claude Code is recommended at time of writing because of features like plan mode, sub-agents, slash commands, hierarchical CLAUDE.md files, and status bar support.
+
+Sam: And mindset?
+
+Alex: Willingness to unlearn "AI as teammate" and adopt "AI as tool." That shift is harder than it sounds for a lot of engineers.
+
+Sam: I can see that. We're trained to collaborate, to explain context, to work with people. Treating something that responds in natural language as a machine you operate—that requires a mental reset.
+
+Alex: It does. But once you make that shift, everything else in this course clicks into place.
+
+Sam: How should people work through the material?
+
+Alex: Sequential consumption is recommended. Each module builds on previous concepts. Module one covers understanding the tools—mental models and architecture. Module two is methodology—prompting, grounding, workflow design. Module three is practical techniques—onboarding, planning, testing, reviewing, debugging.
+
+Sam: And the exercises?
+
+Alex: Mandatory. Reading alone won't build operating skills. Work through the exercises on real codebases—your own projects, not the examples we provide. The goal is to develop muscle memory for this workflow.
+
+Sam: Makes sense. If you only practice on toy examples, you'll still struggle when you hit a real codebase with messy dependencies and unclear architecture.
+
+Alex: Exactly. This course is designed for engineers who ship production code. The exercises reflect that.
+
+Sam: Alright. Where do we start?
+
+Alex: Module one: Understanding the Tools. We'll build the mental models you need before we get into methodology. Let's go.
Lines changed: 36 additions & 36 deletions

@@ -1,74 +1,74 @@
 {
   "methodology/lesson-4-prompting-101.md": {
     "scriptPath": "methodology/lesson-4-prompting-101.md",
-    "size": 13153,
-    "tokenCount": 3216,
-    "generatedAt": "2025-11-08T08:32:09.277Z"
+    "size": 10964,
+    "tokenCount": 2667,
+    "generatedAt": "2025-12-12T08:16:21.162Z"
   },
   "methodology/lesson-3-high-level-methodology.md": {
     "scriptPath": "methodology/lesson-3-high-level-methodology.md",
-    "size": 10394,
-    "tokenCount": 2521,
-    "generatedAt": "2025-12-09T08:56:15.283Z"
+    "size": 12117,
+    "tokenCount": 2949,
+    "generatedAt": "2025-12-12T08:03:08.688Z"
   },
   "practical-techniques/lesson-10-debugging.md": {
     "scriptPath": "practical-techniques/lesson-10-debugging.md",
-    "size": 9853,
-    "tokenCount": 2391,
-    "generatedAt": "2025-11-09T18:04:31.219Z"
+    "size": 10894,
+    "tokenCount": 2646,
+    "generatedAt": "2025-12-12T08:46:28.642Z"
   },
   "methodology/lesson-5-grounding.md": {
     "scriptPath": "methodology/lesson-5-grounding.md",
-    "size": 13419,
-    "tokenCount": 3280,
-    "generatedAt": "2025-11-08T14:21:41.584Z"
+    "size": 15657,
+    "tokenCount": 3837,
+    "generatedAt": "2025-12-12T08:28:36.376Z"
   },
   "practical-techniques/lesson-6-project-onboarding.md": {
     "scriptPath": "practical-techniques/lesson-6-project-onboarding.md",
-    "size": 12911,
-    "tokenCount": 3150,
-    "generatedAt": "2025-11-08T09:11:35.965Z"
+    "size": 8264,
+    "tokenCount": 1992,
+    "generatedAt": "2025-12-12T09:02:24.950Z"
   },
   "practical-techniques/lesson-7-planning-execution.md": {
     "scriptPath": "practical-techniques/lesson-7-planning-execution.md",
-    "size": 11291,
-    "tokenCount": 2745,
-    "generatedAt": "2025-12-07T19:20:22.116Z"
+    "size": 9737,
+    "tokenCount": 2359,
+    "generatedAt": "2025-12-12T09:09:11.841Z"
   },
   "practical-techniques/lesson-8-tests-as-guardrails.md": {
     "scriptPath": "practical-techniques/lesson-8-tests-as-guardrails.md",
-    "size": 13441,
-    "tokenCount": 3281,
-    "generatedAt": "2025-11-08T09:34:38.785Z"
+    "size": 11613,
+    "tokenCount": 2823,
+    "generatedAt": "2025-12-12T09:29:08.344Z"
   },
   "practical-techniques/lesson-9-reviewing-code.md": {
     "scriptPath": "practical-techniques/lesson-9-reviewing-code.md",
-    "size": 14682,
-    "tokenCount": 3594,
-    "generatedAt": "2025-11-10T08:50:52.364Z"
+    "size": 9083,
+    "tokenCount": 2191,
+    "generatedAt": "2025-12-12T09:53:56.197Z"
   },
   "understanding-the-tools/lesson-2-understanding-agents.md": {
     "scriptPath": "understanding-the-tools/lesson-2-understanding-agents.md",
-    "size": 7828,
-    "tokenCount": 1889,
-    "generatedAt": "2025-11-08T09:57:51.535Z"
+    "size": 9621,
+    "tokenCount": 2337,
+    "generatedAt": "2025-12-12T10:17:17.452Z"
   },
   "understanding-the-tools/lesson-1-intro.md": {
     "scriptPath": "understanding-the-tools/lesson-1-intro.md",
-    "size": 10095,
-    "tokenCount": 2459,
-    "generatedAt": "2025-11-08T09:49:49.970Z"
+    "size": 6820,
+    "tokenCount": 1641,
+    "generatedAt": "2025-12-12T10:10:28.138Z"
   },
   "intro.md": {
     "scriptPath": "intro.md",
-    "size": 7306,
-    "tokenCount": 1763,
-    "generatedAt": "2025-11-08T08:16:32.275Z"
+    "size": 6579,
+    "tokenCount": 1581,
+    "generatedAt": "2025-12-12T07:57:45.739Z"
   },
   "practical-techniques/lesson-11-agent-friendly-code.md": {
     "scriptPath": "practical-techniques/lesson-11-agent-friendly-code.md",
-    "size": 8251,
-    "tokenCount": 1983,
-    "generatedAt": "2025-12-09T09:53:20.989Z"
+    "size": 8141,
+    "tokenCount": 1958,
+    "generatedAt": "2025-12-12T08:55:23.116Z"
   }
 }