@@ -106,7 +106,7 @@ PRESENTATION STRUCTURE REQUIREMENTS:
 - Discussion prompts or questions to ask students
 - Real-world examples to reference
 ✓ DO: Preserve important code examples as slide content
-✓ DO: Identify which visual components to use (CapabilityMatrix, UShapeAttentionCurve, etc.)
+✓ DO: Identify which visual components to use (CapabilityMatrix, UShapeAttentionCurve, WorkflowCircle, GroundingComparison, ContextWindowMeter, AbstractShapesVisualization, etc.)
 
 ✗ AVOID: Long paragraphs on slides (slides are visual anchors, not reading material)
 ✗ AVOID: More than 5 bullet points per slide
@@ -137,9 +137,41 @@ For presentation slides:
 ✓ Add context in speaker notes about what the code demonstrates
 ✓ For comparison slides, show ineffective and effective side-by-side
 ✓ Keep code snippets under 15 lines for readability
+✓ EXCEPTION: Textual examples showing agent conversation flows should use the "codeExecution" slide type regardless of length (see section below)
 ✗ Don't include every code example from the lesson
 ✗ Don't show code without explaining its purpose
 
+COMPONENT DETECTION (CRITICAL):
+
+The source content contains markers for visual React components in the format:
+[VISUAL_COMPONENT: ComponentName]
+
+Examples you will see:
+- [VISUAL_COMPONENT: AbstractShapesVisualization]
+- [VISUAL_COMPONENT: CapabilityMatrix]
+- [VISUAL_COMPONENT: UShapeAttentionCurve]
+- [VISUAL_COMPONENT: ContextWindowMeter]
+
+**MANDATORY RULE:** When you encounter a [VISUAL_COMPONENT: X] marker, you MUST:
+1. Generate a "visual" slide type (NOT a "concept" slide)
+2. Set the "component" field to the exact component name from the marker
+3. Use the surrounding context to write a descriptive caption
+
+Example:
+{
+  "type": "visual",
+  "component": "AbstractShapesVisualization",
+  "caption": "Visual comparison showing cluttered vs clean context"
+}
+
+**DO NOT:**
+- Convert component markers into text bullet points
+- Skip component markers
+- Change the component name
+- Generate a "concept" slide when you see a component marker
+
+If you see [VISUAL_COMPONENT: X] anywhere in the content, it MUST become a visual slide.
+
 CODE EXECUTION SLIDES:
 
 Use the "codeExecution" slide type to visualize step-by-step processes like:
@@ -171,6 +203,40 @@ SEMANTIC RULES (critical for correct color coding):
 ✗ Don't create more than 10 steps (split into multiple slides if needed)
 ✗ Don't confuse "LLM receives data and predicts" (prediction) with "data returned" (feedback)
 
+RECOGNIZING TEXTUAL CONTEXT FLOW PATTERNS (CRITICAL):
+
+When you see code blocks showing conversation/execution flows with patterns like:
+- "SYSTEM: ... USER: ... ASSISTANT: ... TOOL_RESULT: ..."
+- Sequential back-and-forth between human, LLM, and tools
+- Full execution traces showing how text flows through agent context
+- Examples demonstrating the actual content of the context window
+
+→ These are PEDAGOGICALLY CRITICAL and must be included as "codeExecution" slides
+
+Why these matter MORE than config examples:
+- They show the fundamental mental model of how agents operate
+- They demystify what "context" actually contains
+- They're the core learning insight, not just implementation details
+
+How to handle them:
+1. Break the flow into 8-12 logical steps (not necessarily every line)
+2. Map conversation elements to highlightTypes:
+   - "SYSTEM:" or system instructions → human
+   - "USER:" or task specification → human
+   - "ASSISTANT:" thinking/reasoning → prediction
+   - "<tool_use>" or tool calls → execution
+   - "TOOL_RESULT:" or outputs → feedback
+3. Add annotations explaining the significance of each step
+4. Focus on the FLOW of text through the context, not just the code
+
+Example transformation:
+- Source: 67-line conversation showing full agent execution
+- Slide: 10 steps highlighting key moments in the conversation flow
+- Annotations: "Notice how the tool result becomes input to the next prediction"
+
+PRIORITIZATION: Textual flow examples showing context mechanics trump configuration
+examples like MCP setup. Configuration is implementation; textual flow is understanding.
+
 SPEAKER NOTES GUIDELINES:
 
 For each slide, provide speaker notes with:
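The role-to-highlightType mapping above can be sketched as a small classifier. This is an illustrative sketch only (the function name `classifyFlowLine` and the exact prefixes are assumptions, not part of the script):

```javascript
// Hypothetical sketch: map one line of a conversation-flow code block to a
// highlightType, following the mapping rules above. Unmatched lines return
// null and would inherit the surrounding step's type.
function classifyFlowLine(line) {
  const trimmed = line.trim();
  if (/^(SYSTEM|USER):/.test(trimmed)) return 'human';       // instructions / task spec
  if (/^ASSISTANT:/.test(trimmed)) return 'prediction';      // model reasoning
  if (/<tool_use>/.test(trimmed)) return 'execution';        // tool call
  if (/^TOOL_RESULT:/.test(trimmed)) return 'feedback';      // tool output
  return null;
}
```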
@@ -278,15 +344,34 @@ You must generate a valid JSON file with this structure:
     ],
     "speakerNotes": { ... }
   },
+
+COMPARISON SLIDE CONVENTION (CRITICAL - HARDCODED IN UI):
+
+The comparison slide type has HARDCODED styling in the presentation component:
+- LEFT side → RED background, RED heading, ✗ icons (ineffective/worse/limited)
+- RIGHT side → GREEN background, GREEN heading, ✓ icons (effective/better/superior)
+
+YOU MUST ALWAYS follow this convention:
+- LEFT: the worse/ineffective/traditional/limited approach
+- RIGHT: the better/effective/modern/superior approach
+
+Correct examples:
+- "Chat Interface" (left) vs "Agent Workflow" (right)
+- "Heavy Mocking" (left) vs "Sociable Tests" (right)
+- "Chat/IDE Agents" (left) vs "CLI Agents" (right)
+- "Traditional RAG" (left) vs "Agentic RAG" (right)
+
+INCORRECT: Putting the better option on the left will show it with RED ✗ styling!
+
   {
     "type": "comparison",
     "title": "Ineffective vs Effective",
     "left": {
-      "label": "Ineffective",
+      "label": "Ineffective", // MANDATORY: LEFT = worse/ineffective/limited (RED ✗)
       "content": ["Point 1", "Point 2"]
     },
     "right": {
-      "label": "Effective",
+      "label": "Effective", // MANDATORY: RIGHT = better/effective/superior (GREEN ✓)
       "content": ["Point 1", "Point 2"]
     },
     "speakerNotes": { ... }
@@ -307,7 +392,7 @@ You must generate a valid JSON file with this structure:
   {
     "type": "visual",
     "title": "Visual Component",
-    "component": "CapabilityMatrix",
+    "component": "CapabilityMatrix | UShapeAttentionCurve | WorkflowCircle | GroundingComparison | ContextWindowMeter | AbstractShapesVisualization",
     "caption": "Description of what the visual shows",
     "speakerNotes": { ... }
   },
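The marker-to-slide rule above is mechanical: one `[VISUAL_COMPONENT: X]` marker becomes one "visual" slide. A minimal sketch of that transformation (the helper name `markerToVisualSlide` is hypothetical and not part of the script):

```javascript
// Hypothetical sketch: turn a [VISUAL_COMPONENT: X] marker into the "visual"
// slide object the schema above requires. The caption comes from surrounding
// context, supplied here by the caller.
function markerToVisualSlide(marker, caption) {
  const match = /\[VISUAL_COMPONENT: ([A-Za-z]+)\]/.exec(marker);
  if (!match) return null; // not a component marker
  return { type: 'visual', component: match[1], caption };
}
```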
@@ -507,6 +592,90 @@ async function promptSelectFile(files, baseDir) {
 // PROCESSING
 // ============================================================================
 
+/**
+ * Extract visual component names from parsed content
+ * @param {string} content - Parsed markdown content
+ * @returns {string[]} Array of component names
+ */
+function extractExpectedComponents(content) {
+  const componentRegex = /\[VISUAL_COMPONENT: ([A-Za-z]+)\]/g;
+  const components = [];
+  let match;
+
+  while ((match = componentRegex.exec(content)) !== null) {
+    components.push(match[1]);
+  }
+
+  return components;
+}
+
+/**
+ * Validate that all expected visual components appear in the presentation
+ * @param {string} content - Parsed markdown content
+ * @param {object} presentation - Generated presentation object
+ * @returns {object} Validation result with missing components
+ */
+function validateComponents(content, presentation) {
+  const expectedComponents = extractExpectedComponents(content);
+  const visualSlides = presentation.slides.filter(s => s.type === 'visual');
+  const renderedComponents = visualSlides.map(s => s.component);
+
+  const missing = expectedComponents.filter(c => !renderedComponents.includes(c));
+
+  return {
+    expected: expectedComponents,
+    rendered: renderedComponents,
+    missing,
+    allPresent: missing.length === 0
+  };
+}
+
+/**
+ * Validate semantic correctness of comparison slides
+ * Checks that better/effective options are on the RIGHT (green ✓)
+ * and worse/ineffective options are on the LEFT (red ✗)
+ * @param {object} presentation - Generated presentation object
+ * @returns {object} Validation result with potential ordering issues
+ */
+function validateComparisonSemantics(presentation) {
+  const comparisonSlides = presentation.slides.filter(s => s.type === 'comparison');
+  const issues = [];
+
+  // Keywords that indicate a "positive/better" option
+  const positiveKeywords = ['cli', 'effective', 'better', 'modern', 'agentic', 'sociable', 'agent workflow'];
+  // Keywords that indicate a "negative/worse" option
+  const negativeKeywords = ['chat', 'ide', 'ineffective', 'worse', 'traditional', 'mocked', 'chat interface'];
+
+  for (const slide of comparisonSlides) {
+    if (!slide.left || !slide.right) continue;
+
+    const leftLabel = slide.left.label?.toLowerCase() || '';
+    const rightLabel = slide.right.label?.toLowerCase() || '';
+
+    // A side is flagged only if it matches one keyword list and not the other;
+    // this guards against substring overlap ("ineffective" contains "effective")
+    const leftIsPositive = positiveKeywords.some(k => leftLabel.includes(k)) &&
+      !negativeKeywords.some(k => leftLabel.includes(k));
+    const rightIsNegative = negativeKeywords.some(k => rightLabel.includes(k)) &&
+      !positiveKeywords.some(k => rightLabel.includes(k));
+
+    if (leftIsPositive || rightIsNegative) {
+      issues.push({
+        slide: slide.title,
+        left: slide.left.label,
+        right: slide.right.label,
+        reason: leftIsPositive
+          ? `"${slide.left.label}" appears positive/better but is on LEFT (will show RED ✗)`
+          : `"${slide.right.label}" appears negative/worse but is on RIGHT (will show GREEN ✓)`
+      });
+    }
+  }
+
+  return {
+    valid: issues.length === 0,
+    issues,
+    totalComparisons: comparisonSlides.length
+  };
+}
+
 /**
  * Generate presentation for a file
  */
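When `validateComparisonSemantics` flags a slide as reversed, the fix implied by the convention is simply to swap the two sides. A minimal sketch of such an auto-fix (the helper `swapComparisonSides` is hypothetical; the script itself only warns):

```javascript
// Hypothetical auto-fix: swap a comparison slide's left and right sides so the
// better option lands on the RIGHT (GREEN ✓) and the worse on the LEFT (RED ✗).
// Returns a new slide object; the input is not mutated.
function swapComparisonSides(slide) {
  return { ...slide, left: slide.right, right: slide.left };
}
```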
@@ -549,6 +718,31 @@ async function generatePresentation(filePath, manifest, config) {
   // Generate presentation using Claude
   const presentation = await generatePresentationWithClaude(prompt, outputPath);
 
+  // Validate that all visual components were included
+  const validation = validateComponents(content, presentation);
+  if (!validation.allPresent) {
+    console.log(`  ⚠️ WARNING: ${validation.missing.length} visual component(s) not rendered:`);
+    validation.missing.forEach(c => console.log(`     - ${c}`));
+    console.log(`  ℹ️ Expected: [${validation.expected.join(', ')}]`);
+    console.log(`  ℹ️ Rendered: [${validation.rendered.join(', ')}]`);
+  } else if (validation.expected.length > 0) {
+    console.log(`  ✅ All ${validation.expected.length} visual component(s) rendered correctly`);
+  }
+
+  // Validate comparison slide semantics
+  const semanticValidation = validateComparisonSemantics(presentation);
+  if (!semanticValidation.valid) {
+    console.log(`  ⚠️ WARNING: ${semanticValidation.issues.length} comparison slide(s) may have reversed order:`);
+    semanticValidation.issues.forEach(issue => {
+      console.log(`     - "${issue.slide}"`);
+      console.log(`       LEFT: "${issue.left}" | RIGHT: "${issue.right}"`);
+      console.log(`       ${issue.reason}`);
+    });
+    console.log(`  ℹ️ Remember: LEFT = ineffective/worse (RED ✗), RIGHT = effective/better (GREEN ✓)`);
+  } else if (semanticValidation.totalComparisons > 0) {
+    console.log(`  ✅ All ${semanticValidation.totalComparisons} comparison slide(s) follow correct convention`);
+  }
+
   // Copy to static directory for deployment
   const staticPath = join(STATIC_OUTPUT_DIR, dirname(relativePath), outputFileName);
   mkdirSync(dirname(staticPath), { recursive: true });