# Feat/barrel enhancements #10
base: `main`
Conversation
Commits (titles truncated as shown):
- …trol feat: add file size validation in FileSystemService; test: add contract validation tests for Barrel types and enums; chore: update ESLint rules for TypeScript annotations
- …el file generation
- …te logic and fix critical bug
Pull request overview
This pull request enhances the barrel file generation system with performance optimizations, robustness safeguards, and improved content preservation. The changes focus on making the system more reliable and scalable for large codebases.
Changes:
- Added preservation of direct definitions (functions, types, constants) in `index.ts` files when updating barrel files, ensuring only re-exports are sanitized
- Implemented batching, concurrency control, and LRU-style caching for export parsing to optimize performance and memory usage
- Added safeguards, including a recursion depth limit (20 levels) and a file size check (10 MB maximum), to prevent infinite loops and memory issues
Reviewed changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| `src/types/contracts.test.ts` | New comprehensive contract validation tests for enums, types, and behavioral contracts |
| `src/extension.ts` | Added `BarrelCommandQueue` to prevent concurrent barrel generation operations |
| `src/core/io/file-system.service.ts` | Added file size validation and a new `getFileStats` method for caching support |
| `src/core/io/file-system.service.test.ts` | Added tests for file size validation and updated existing tests for stat calls |
| `src/core/barrel/barrel-file.generator.ts` | Major refactoring with caching, batching, concurrency control, recursion depth limiting, and definition preservation logic |
| `src/core/barrel/barrel-file.generator.test.ts` | Added a test for preserving direct definitions during barrel file updates |
| `scripts/run-tests.cjs` | Added test patterns for `dist/types/*.test.js` and `dist/src/types/*.test.js` |
| `eslint.config.mjs` | Added a linting rule to discourage inline object types in type annotations |
Comments suppressed due to low confidence (1)
`src/core/io/file-system.service.ts:142`
- The error handling at lines 139-142 will catch and re-wrap errors from the file size check at lines 131-136. This means if a file is too large, the original descriptive error message about the file size will be wrapped in a generic "Failed to read file" message, making it less clear to the user what the actual problem is.
Consider checking the error type and re-throwing the size error directly, or move the size check outside the try-catch block to avoid double-wrapping the error message.
```typescript
} catch (error) {
  const errorMessage = getErrorMessage(error);
  throw new Error(`Failed to read file ${filePath}: ${errorMessage}`);
}
```
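A minimal sketch of the second option (performing the size check outside the try/catch so its descriptive error is never re-wrapped). This is illustrative only: `MAX_FILE_SIZE_BYTES`, the function name, and the inline error-message extraction are assumptions, since the actual service method and its `getErrorMessage` helper are not shown in full here.

```typescript
import { promises as fs } from 'node:fs';

const MAX_FILE_SIZE_BYTES = 10 * 1024 * 1024; // 10 MB, per the PR's safeguard

async function readFileChecked(filePath: string): Promise<string> {
  // The size check runs before the try/catch, so an oversized file surfaces
  // its own descriptive error instead of a generic "Failed to read file".
  const stats = await fs.stat(filePath);
  if (stats.size > MAX_FILE_SIZE_BYTES) {
    throw new Error(
      `File ${filePath} exceeds the maximum size of ${MAX_FILE_SIZE_BYTES} bytes`,
    );
  }
  try {
    return await fs.readFile(filePath, 'utf8');
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error);
    // Only genuine read failures get wrapped here.
    throw new Error(`Failed to read file ${filePath}: ${message}`);
  }
}
```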
```typescript
const exportPath = this.extractExportPath(trimmedLine);
if (exportPath) {
  // This is a re-export, check if it's still valid
  if (this.isValidExportPath(exportPath, directoryPath, validTsFiles, validSubdirectories)) {
    sanitizedExports.push(exportPath);
  }
  // Skip adding to preservedLines since we'll regenerate valid re-exports
} else if (trimmedLine.length > 0) {
  // This is not a re-export (direct definition, comment, etc.), preserve it
  preservedLines.push(line);
}
```
**Copilot AI** · Feb 3, 2026
The `extractExportPath` method (called at line 488) only recognizes re-exports in the format `export * from '...'`. However, TypeScript also supports other re-export patterns like `export { foo, bar } from './module'`, `export { default } from './module'`, and `export { foo as bar } from './module'`.
These other re-export patterns will return `null` from `extractExportPath`, causing them to be treated as direct definitions and preserved at line 497, even though they are re-exports that should be sanitized and validated. This could lead to stale or invalid re-exports being retained in the barrel file.
Consider extending the `extractExportPath` logic to handle all TypeScript re-export patterns, or document this limitation clearly.
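A hedged sketch of a broader matcher covering the re-export forms named above. The real `extractExportPath` is not shown in this excerpt, so this standalone function only illustrates the idea; note it still ignores `export type { ... } from` and multi-line export statements.

```typescript
// Returns the module specifier of a re-export line, or null for anything else.
// Matches: export * from '...',  export { foo, bar } from '...',
//          export { default } from '...',  export { foo as bar } from '...'
function extractExportPath(line: string): string | null {
  const match = line.match(
    /^export\s+(?:\*|\{[^}]*\})\s+from\s+['"]([^'"]+)['"]/,
  );
  return match ? match[1] : null;
}
```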
```typescript
  }
  // Skip adding to preservedLines since we'll regenerate valid re-exports
} else if (trimmedLine.length > 0) {
  // This is not a re-export (direct definition, comment, etc.), preserve it
```
**Copilot AI** · Feb 3, 2026
At line 497, the original line (not the trimmed line) is preserved when it's not a re-export. This is correct for maintaining the original formatting, including indentation. However, at line 485, the code checks if trimmedLine.length > 0, which means empty lines in the original content will be skipped. This is intentional according to line 202 which filters out empty lines anyway.
However, there's an inconsistency: if the original line has only whitespace, it won't be added at line 497 (because trimmedLine.length === 0), but if it had mixed content, the original spacing is preserved. This could lead to inconsistent whitespace handling. Consider documenting this behavior or normalizing the whitespace handling approach.
Suggested change:

```diff
-// This is not a re-export (direct definition, comment, etc.), preserve it
+// This is not a re-export (direct definition, comment, etc.), preserve it.
+// Note: Lines that are only whitespace are intentionally skipped to normalize
+// empty lines, while lines with content keep their original indentation/spacing.
```
```typescript
  options: NormalizedGenerationOptions,
  depth: number,
): Promise<void> {
  const maxDepth = 20; // Prevent infinite recursion in deeply nested directory structures
```
**Copilot AI** · Feb 3, 2026
The recursion depth check at line 308 uses `>=`, which means processing stops when `depth` equals `maxDepth`. However, the initial call at line 99 starts with `depth = 0`, so the maximum actual depth processed is 19, not 20 as suggested by the comment at line 306.
For example: depths 0, 1, 2, ..., 19 are processed (20 levels), and depth 20 is rejected. This is correct for 20 levels, but the comment could be clearer that this allows 20 levels starting from 0. Consider clarifying the comment to say "Maximum 20 levels of nesting", or changing the check to `> maxDepth` if you want to allow 21 levels (0-20).
Suggested change:

```diff
-const maxDepth = 20; // Prevent infinite recursion in deeply nested directory structures
+const maxDepth = 20; // Maximum 20 levels of nesting (depth 0–19) to prevent infinite recursion
```
```typescript
release(): void {
  this.permits++;
  if (this.waiting.length > 0) {
    const resolve = this.waiting.shift()!;
    this.permits--;
    resolve();
  }
}
```
**Copilot AI** · Feb 3, 2026
The `Semaphore` implementation has a subtle ordering issue in the `release` method. At line 668 it increments `permits`; then at lines 669-672, if there are waiters, it decrements `permits` again. If a waiter exists, the permit count temporarily increases by 1 and then decreases by 1, resulting in no net change. If `release()` is called with no waiters, the permit count simply increases, and a subsequent `acquire()` will correctly decrement it.
The issue is that `permits` is incremented at line 668 before checking for waiters, so there is a brief window where `permits` is higher than it should be. If another `acquire()` call is interleaved between lines 668 and 670, it could pass the check at line 654 when it shouldn't.
A safer implementation would check for waiters first before incrementing permits.
```typescript
try {
  await operation();
} catch (error) {
  // Log error but continue processing queue
  console.error('Barrel command failed:', error);
}
```
**Copilot AI** · Feb 3, 2026
The error handling in the `BarrelCommandQueue` has a problem. When an error occurs in the operation at lines 52-53, it calls `reject(error)`, which correctly rejects the promise returned by `enqueue()`. However, at line 77, if an error occurs during operation execution in the while loop, it is caught and logged, even though the error has already been propagated via `reject` at line 53. This creates a double error-handling path.
The bigger issue is that at line 74, `await operation()` is called, and if that operation throws, it will be caught at lines 75-78. But the operation itself (lines 48-55) already has a try-catch that calls `reject()`. So if the operation's inner code throws, it will:
- Be caught at line 52, calling `reject(error)` at line 53
- Then propagate up to line 74, get caught at line 75, and be logged at line 77
This means errors are handled twice, which could lead to confusion. The `processQueue` catch at lines 75-78 is redundant since each operation already has its own try-catch.
Suggested change:

```diff
-try {
-  await operation();
-} catch (error) {
-  // Log error but continue processing queue
-  console.error('Barrel command failed:', error);
-}
+await operation();
```
```typescript
// Limit cache size to prevent memory issues
if (this.exportCache.size > 1000) {
  const firstKey = this.exportCache.keys().next().value;
  if (firstKey) {
    this.exportCache.delete(firstKey);
  }
}
```
**Copilot AI** · Feb 3, 2026
The cache eviction strategy is not truly LRU (Least Recently Used). The current implementation simply deletes the first entry in the `Map` when the size exceeds 1000. JavaScript `Map`s maintain insertion order, so this approach removes the oldest *inserted* entry, not the least recently *accessed* entry.
To implement true LRU, the cache should track access order and remove the least recently accessed item. The current implementation will remove entries that were added first, even if they were recently used. This could lead to cache thrashing where frequently accessed files that were cached early get evicted while less frequently accessed newer entries remain.
Consider using a proper LRU cache implementation or updating the access order when cache hits occur at lines 585-586.
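One common way to get true LRU from a plain `Map` is to re-insert an entry on every hit, so iteration order tracks recency and the first key is always the genuine LRU victim. A minimal generic sketch (the real `exportCache` presumably maps file paths to parsed export lists; the class shape here is an assumption):

```typescript
class LruCache<K, V> {
  private readonly map = new Map<K, V>();

  constructor(private readonly maxSize: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Refresh recency: delete + re-insert moves the key to the end.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // First key is now the least recently USED, not just first inserted.
      const lruKey = this.map.keys().next().value as K;
      this.map.delete(lruKey);
    }
  }
}
```

The delete/re-insert trick is O(1) per access and avoids a separate access-order bookkeeping structure.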
```typescript
console.warn(
  `Maximum recursion depth (${maxDepth}) reached at depth ${depth}. Skipping deeper directories.`,
);
```
**Copilot AI** · Feb 3, 2026
The code uses `console.warn` for logging when the maximum recursion depth is reached. The codebase has a proper logging infrastructure (`OutputChannelLogger`) that should be used instead of console methods for consistency and proper integration with VS Code's output channel.
Consider injecting the logger into `BarrelFileGenerator` or using the `OutputChannelLogger` so warnings are visible in the extension's output channel rather than just the console.
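A hypothetical sketch of that constructor injection. The `Logger` interface name comes from the PR's type definitions, but its exact shape, the real `BarrelFileGenerator` constructor signature, and the method name below are all assumptions for illustration.

```typescript
interface Logger {
  warn(message: string): void;
  error(message: string): void;
}

class BarrelFileGenerator {
  // The logger (e.g. an OutputChannelLogger instance) is injected rather than
  // hard-coding console calls, so warnings reach VS Code's output channel.
  constructor(private readonly logger: Logger) {}

  warnDepthExceeded(maxDepth: number, depth: number): void {
    this.logger.warn(
      `Maximum recursion depth (${maxDepth}) reached at depth ${depth}. Skipping deeper directories.`,
    );
  }
}
```

Injection also makes the warning path testable with a stub logger instead of spying on `console`.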
```typescript
  return { relativePath, entry: { kind: BarrelEntryKind.File, exports } };
} catch (error) {
  // Log error but continue processing other files
  console.warn(`Failed to process file ${filePath}:`, error);
```
**Copilot AI** · Feb 3, 2026
The code uses `console.warn` for logging file processing errors. The codebase has a proper logging infrastructure (`OutputChannelLogger`) that should be used instead for consistency and to ensure errors are properly visible in the VS Code output channel.
Consider injecting the logger into `BarrelFileGenerator` or using the `OutputChannelLogger` to log these warnings.
```typescript
  await operation();
} catch (error) {
  // Log error but continue processing queue
  console.error('Barrel command failed:', error);
```
**Copilot AI** · Feb 3, 2026
The code uses `console.error` for logging queue processing errors. The codebase has a proper logging infrastructure (`OutputChannelLogger`) that should be used instead for consistency.
Consider using the `OutputChannelLogger` so errors are properly captured in the VS Code output channel.
```typescript
const results: R[] = [];
const semaphore = new Semaphore(concurrencyLimit);

const promises = items.map(async (item) => {
  await semaphore.acquire();
  try {
    return await processor(item);
  } finally {
    semaphore.release();
  }
});

const resolvedResults = await Promise.all(promises);
results.push(...resolvedResults);
return results;
```
**Copilot AI** · Feb 3, 2026
The `processConcurrently` method has redundant code: lines 619 and 632 are unnecessary. The method creates an empty `results` array at line 619, then at line 631 resolves all promises into `resolvedResults`, and at line 632 pushes all of them into `results` before returning it at line 633.
This can be simplified to return the result of `Promise.all()` directly without the intermediate `results` array, i.e., `return await Promise.all(promises);`. The current code adds unnecessary operations and memory allocation.
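A minimal sketch of the simplified method. The `Semaphore` included below is a compact stand-in for the PR's class (acquire/release with a waiting queue) so the snippet is self-contained; `Promise.all` already preserves input order, so no intermediate array is needed.

```typescript
// Compact stand-in for the PR's Semaphore, included so this sketch runs alone.
class Semaphore {
  private permits: number;
  private readonly waiting: Array<() => void> = [];

  constructor(permits: number) { this.permits = permits; }

  acquire(): Promise<void> {
    if (this.permits > 0) { this.permits--; return Promise.resolve(); }
    return new Promise<void>((resolve) => this.waiting.push(resolve));
  }

  release(): void {
    const next = this.waiting.shift();
    if (next) { next(); } else { this.permits++; }
  }
}

async function processConcurrently<T, R>(
  items: readonly T[],
  processor: (item: T) => Promise<R>,
  concurrencyLimit: number,
): Promise<R[]> {
  const semaphore = new Semaphore(concurrencyLimit);
  const promises = items.map(async (item) => {
    await semaphore.acquire();
    try {
      return await processor(item);
    } finally {
      semaphore.release();
    }
  });
  // Promise.all resolves to results in input order; return it directly.
  return Promise.all(promises);
}
```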
…r concurrency control
- Added unit tests for array utilities, including `isEmptyArray`.
- Implemented comprehensive tests for custom assertion utilities.
- Created tests for error handling utilities, including `getErrorMessage` and `formatErrorForLog`.
- Developed tests for ESLint plugin helpers and string utilities.
- Introduced a `Semaphore` class to manage concurrency with a limit on concurrent operations.
- Added tests for the `Semaphore` class to ensure correct behavior in queuing and releasing permits.
- Updated string utility functions with improved sorting logic and added tests for various scenarios.
- Added type definitions for `Logger` and `OutputChannel` interfaces.

… and update tests for consistency
This pull request introduces several improvements and safeguards to the barrel file generation system, focusing on preserving direct definitions in `index.ts`, optimizing performance and memory usage, and enhancing robustness against large files and deep directory structures. It also updates tests and linting rules to support these changes.

Barrel file generation and content preservation:
`BarrelFileGenerator` now preserves direct definitions (such as functions, types, and constants) in `index.ts` files when updating, instead of overwriting or removing them. Only re-exports are sanitized and regenerated as needed. [1] [2] [3] [4]

Performance and memory optimizations:

Robustness and safeguards:

Testing and linting improvements:
Test patterns were added for `dist/types` and `dist/src/types`.

These changes enhance the reliability, maintainability, and scalability of the barrel file generation system, especially for large or complex codebases.