forked from vercel/next.js
[pull] canary from vercel:canary #722
Merged
The recent change to always run all tests without aborting on failure (#88435) inadvertently broke manifest generation. Previously, test output was emitted for all tests when `NEXT_TEST_CONTINUE_ON_ERROR` was set, but that variable was removed. Now test output is only emitted for failing tests, causing the manifest to lose all passing test entries. This adds a new `NEXT_TEST_EMIT_ALL_OUTPUT` environment variable that restores the full output behavior specifically for the manifest generation workflows.
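The gating described above can be sketched as follows. This is a hypothetical helper, not the actual Next.js test-runner code; the function name and shape are assumptions, only the `NEXT_TEST_EMIT_ALL_OUTPUT` variable comes from the change itself:

```javascript
// Sketch (hypothetical helper, not the real runner code): decide whether a
// test's output should be emitted.
function shouldEmitTestOutput(testPassed, env = process.env) {
  // With NEXT_TEST_EMIT_ALL_OUTPUT set, emit output for every test so the
  // manifest generation workflow also sees the passing entries.
  if (env.NEXT_TEST_EMIT_ALL_OUTPUT) return true
  // Otherwise only failing tests emit output.
  return !testPassed
}
```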
### What?

Improve the feature flag to show compressed size
…88854) When deployed, the sentinels were showing `at runtime` instead of `at buildtime`. There are two reasons for this:

1. The page sentinel was accidentally rendered in a client component, which caused a hydration mismatch: the value toggled from `at buildtime` to `at runtime` when the page was hydrated before the assertion ran.
2. The middleware was setting sentinel cookies that are intended for the `/cookies/*` pages, but not for the `/server-action-inline` page. This caused the server action (which is triggered in the second test) to consider the page revalidated, which then led the client router to refetch the page.

With `next start` this is not a problem, because the RSC response is prerendered at build time and returned as a static file, which contains the correct sentinel values. However, when deployed on Vercel, the RSC request results in an ISR cache miss for some reason, which causes the page to be revalidated (with runtime sentinel values), and the third test then fails when it receives the regenerated response instead of the build-time prerender result. This needs to be investigated separately; for now we disable the middleware from setting cookies on this page to prevent the ISR revalidation request.

closes #88817
closes NAR-740

> [!TIP]
> Best reviewed with hidden whitespace changes.
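The first failure mode above can be illustrated with a minimal sketch (hypothetical code, not the actual test fixture) of a sentinel whose value depends on where it is evaluated:

```javascript
// Minimal sketch (hypothetical, not the actual sentinel code): a value that
// differs between the server prerender pass and the browser.
function sentinelValue() {
  // During the server/prerender pass there is no `window`, so the build-time
  // value is what gets baked into the prerendered HTML.
  return typeof window === 'undefined' ? 'at buildtime' : 'at runtime'
}
// If a *client* component re-runs this during hydration, `window` exists,
// the value flips to 'at runtime', and React reports a hydration mismatch.
// Rendering the sentinel in a server component avoids the client re-run.
```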
…#88556) When generating PPR fallback shells, a cache miss during warmup was being treated as a dynamic hole, so the warmup render never filled new cache entries. This caused root-param fallback shells (e.g. `/en/blog/[slug]`) to suspend even though root params were already known. This change always creates a fresh prerender RDC for warmup and copies any seed hits into it. The final render then uses the warmed cache, so fallback shells can reuse populated entries without short-circuiting. On the export side, we now choose a seed RDC by matching known params (e.g. locale), so `/en/...` fallback shells don't get seeded with `/fr/...` data. (Previously, the RDC seed was "last-write-wins".) Fixes NAR-716
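The seed-selection change can be sketched like this. The shapes and the function name are assumptions for illustration; the real logic operates on RDCs inside Next.js's export code:

```javascript
// Sketch (hypothetical shapes): pick a seed whose recorded params agree with
// every known root param, instead of the previous last-write-wins behavior.
function pickSeed(seeds, knownParams) {
  return (
    seeds.find((seed) =>
      Object.entries(knownParams).every(([key, value]) => seed.params[key] === value)
    ) ?? null
  )
}
```

With known params `{ locale: 'en' }`, a seed recorded for `/fr/...` is skipped and the `/en/...` seed is chosen; if no seed matches, none is used rather than falling back to whichever was written last.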
…#88602)

# What

Closes #84782
Closes #74842
Closes #82520
Closes PACK-3723

Replace blob URLs for web workers with real entrypoint files, fixing relative URL resolution inside workers. This also implements support for `SharedWorker`s, since that is now a simple variation of the `Worker` case. Note that all generated files (and shared-worker query strings) are appropriately content-hashed, do not rely on deployment IDs, and will work with immutable resources.

# Why

Workers created via `new Worker(new URL('./worker.js', import.meta.url))` previously got blob URLs. This breaks relative URL resolution inside workers: WASM imports like `new URL('./file.wasm', import.meta.url)` fail because `import.meta.url` returns `blob:http://...` instead of a real path.

# How

We emit a real worker entrypoint file instead of blob URLs. The entrypoint reads bootstrap config from URL hash params (`#params=...`), then loads the worker's chunks via `importScripts`. The `ChunkingContext` trait gets a new `worker_entrypoint()` method returning a singleton bootstrap asset. The Node.js implementation `bail!`s for now but could implement its own in the future. The browser implementation (`EcmascriptBrowserWorkerEntrypoint`) compiles and minifies `worker-entrypoint.ts`, which parses `#params={globals,chunks}` from the URL at runtime.

`WorkerLoaderChunkItem` now emits `__turbopack_worker_url__(entrypoint, chunks, shared)` instead of the old blob URL shim. `SharedWorker` passes `shared=true`, which adds a query string; the browser handles worker identity via URL, so same URL means same instance.

Generated code changes (pseudocode), before:

```javascript
__turbopack_export_value__(__turbopack_worker_blob_url__(["chunk1.js", "chunk2.js"]));
```

After:

```javascript
__turbopack_export_value__(__turbopack_worker_url__("worker-entrypoint.js", ["chunk1.js", "chunk2.js"], false));
```

# Notes

Webpack doesn't currently support `SharedWorker`s correctly. The e2e test behaves properly in Turbopack after this change, but webpack spins up two separate workers where it should only spin up one. I generated snapshot tests for workers in the first and last commits to help understand the generated code better.

# Testing

- Snapshot tests for `Worker`, `SharedWorker`, and minified output
- E2E tests for WASM loading in workers (`test/e2e/app-dir/worker/`)
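The hash-param bootstrap described above can be sketched as a pure parsing step. The function name and the exact payload shape are assumptions; the real entrypoint is generated from `worker-entrypoint.ts` and would go on to call `importScripts` with the parsed chunks:

```javascript
// Sketch (hypothetical names/shape): parse `#params={globals,chunks}` from a
// worker entrypoint URL.
function parseWorkerParams(workerUrl) {
  const hash = new URL(workerUrl).hash // e.g. '#params=%7B...%7D'
  const prefix = '#params='
  if (!hash.startsWith(prefix)) throw new Error('missing #params in worker URL')
  // The payload carries the globals to install and the chunk URLs that the
  // entrypoint would then load via importScripts().
  return JSON.parse(decodeURIComponent(hash.slice(prefix.length)))
}
```

Because the config travels in the URL fragment rather than a blob, `import.meta.url` inside the worker stays a real path, so relative resolution (e.g. for WASM files) works.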
… fuzzing (#88665) In addition to creating task invalidators for file reads, we also create invalidators for file writes. This extends the fuzzer to cover that. This is a bit trickier to test. Because the `write` call doesn't return anything, the caller of `write` never gets invalidated and re-executed, even though `write` does. So instead, we must observe that `write` was re-executed by looking at the contents of the file it wrote to. We write a sentinel value to files when we call the turbo-tasks-fs `write` function, and write random values to it to trigger an invalidation. I tested this on Windows and Linux. I don't care about the code quality here, so this was LLM-generated with very lightweight review.
Created by pull[bot] (v2.0.0-alpha.4)