forked from vercel/next.js
[pull] canary from vercel:canary #723
Merged
…ks in a tight loop (#88667)

In an attempt to reproduce #88382, this tries creating and updating a bunch of symlinks/junction points in a tight loop with some parallelism. However, I've been unable to reproduce the error reported in that discussion. I don't care about the code quality here, so this was LLM-generated with very lightweight review.

A simplified version of the fuzzer is also included as a unit test in `turbo-tasks-fs`. This version only does a few iterations and runs very quickly.
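The fuzzer itself isn't shown in this excerpt; a minimal single-threaded sketch of the churn pattern it exercises (hypothetical names, no parallelism) might look like:

```rust
use std::fs;
use std::path::Path;

// Repeatedly delete and recreate a symlink, flipping its target each
// iteration -- the delete-then-create sequence the fuzzer stresses.
fn churn_symlink(dir: &Path, iterations: usize) -> std::io::Result<usize> {
    let target_a = dir.join("a");
    let target_b = dir.join("b");
    fs::write(&target_a, b"a")?;
    fs::write(&target_b, b"b")?;
    let link = dir.join("link");
    let mut flips = 0;
    for i in 0..iterations {
        // Remove any existing link before recreating it.
        if fs::symlink_metadata(&link).is_ok() {
            fs::remove_file(&link)?;
        }
        let target = if i % 2 == 0 { &target_a } else { &target_b };
        #[cfg(unix)]
        std::os::unix::fs::symlink(target, &link)?;
        #[cfg(windows)]
        std::os::windows::fs::symlink_file(target, &link)?;
        flips += 1;
    }
    Ok(flips)
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join(format!("symlink-churn-{}", std::process::id()));
    fs::create_dir_all(&dir)?;
    let flips = churn_symlink(&dir, 16)?;
    assert_eq!(flips, 16);
    fs::remove_dir_all(&dir)?;
    println!("completed {flips} iterations");
    Ok(())
}
```

The real fuzzer runs many such loops concurrently (and uses junction points on Windows), which is what gives it a chance of hitting the race.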
# What

Fixes `test_mixed_cpu_bound_and_waiting_tasks` to avoid flakes.

# Why

`test_mixed_cpu_bound_and_waiting_tasks` was non-deterministic and relied on thread scheduling order. The previous test relied on timing-based synchronization using `sleep()` calls with varying durations to simulate CPU-bound and async work. This approach was inherently flaky because:

1. Thread scheduling is non-deterministic: a task sleeping for 10ms might complete before one sleeping for 5ms depending on OS scheduler behaviour
2. The assertions depended on a specific completion ordering that wasn't guaranteed by the timing
3. Test behaviour could vary based on system load, available cores, and Tokio runtime scheduling

# How

The new approach uses `std::sync::Barrier` pairs (start, finish) for each task, giving the test explicit control over when tasks can proceed. Because this is a priority-ordered system, we know which barriers should be available at any given time and can release them. If the given task is not currently waiting on its own corresponding barrier, the test will fail.
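The (start, finish) barrier-pair pattern described above can be sketched with plain threads standing in for the real tasks (all names here are illustrative, not the actual test code):

```rust
use std::sync::{Arc, Barrier, Mutex};
use std::thread;

// Spawn `task_count` gated tasks and release them one at a time.
// Returns the order in which the tasks ran.
fn run_in_order(task_count: usize) -> Vec<usize> {
    let log = Arc::new(Mutex::new(Vec::new()));
    let mut handles = Vec::new();
    let mut barriers = Vec::new();
    for id in 0..task_count {
        // Each barrier has two participants: the task and the test driver.
        let start = Arc::new(Barrier::new(2));
        let finish = Arc::new(Barrier::new(2));
        let (s, f, l) = (start.clone(), finish.clone(), log.clone());
        handles.push(thread::spawn(move || {
            s.wait();                   // block until the driver releases us
            l.lock().unwrap().push(id); // the "work"
            f.wait();                   // report completion deterministically
        }));
        barriers.push((start, finish));
    }
    // Release the tasks in priority order; completion order is now
    // guaranteed rather than left to the OS scheduler.
    for (start, finish) in &barriers {
        start.wait();
        finish.wait();
    }
    for h in handles {
        h.join().unwrap();
    }
    Arc::try_unwrap(log).unwrap().into_inner().unwrap()
}

fn main() {
    assert_eq!(run_in_order(3), vec![0, 1, 2]);
    println!("deterministic completion order verified");
}
```

Because the driver blocks on each `finish` barrier before releasing the next `start`, the observed order no longer depends on system load or scheduler behaviour.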
… string formatting (#88668)

- #87661 removed the `tokio::task::spawn_blocking` call from `retry_blocking`, so we can simplify things a lot by removing the `Send + 'static` bounds and eliminating a bunch of `Path`-to-`PathBuf` cloning.
- `retry_blocking` no longer takes a path argument. After removing the path-cloning requirements, it's now easier for callers to capture paths via a closure, and it's more flexible for IO operations that need to pass in more than a single path.
- Adds `retry_blocking_custom`, used in #88669, which allows customizing what error kinds we retry on.
- Adds documentation to `retry_blocking` explaining why we need it. It's often unclear to people new to the team why we have to retry filesystem operations (Windows, basically). This was unclear to me when I was new to this code, and other people have asked similar questions.
- We shouldn't use `Path::display` for error messages or tracing spans. It's easier to use `Debug`, and `Debug` ensures that if the error is related to unicode, that information isn't lost in the error message.
- More consistently applies tracing spans using `.instrument`.
- Uses the modern inline syntax for some format strings.
- `let foo = RcStr::from(bar)` is preferred over `let foo: RcStr = bar.into()`.
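A hedged sketch of the closure-based shape described above (the retry policy and constants are illustrative, not the actual `turbo-tasks-fs` code): the callback borrows whatever paths it needs, so there is no `Send + 'static` bound and no `Path`-to-`PathBuf` cloning.

```rust
use std::io;
use std::path::Path;

// Run a blocking filesystem operation, retrying a few times on the
// transient error kind Windows is known to produce.
fn retry_blocking<T>(mut op: impl FnMut() -> io::Result<T>) -> io::Result<T> {
    const MAX_RETRIES: u32 = 3;
    let mut attempt = 0;
    loop {
        match op() {
            Ok(value) => return Ok(value),
            // On Windows, transient failures (e.g. antivirus scanners
            // holding a handle) often surface as PermissionDenied.
            Err(e) if e.kind() == io::ErrorKind::PermissionDenied && attempt < MAX_RETRIES => {
                attempt += 1;
            }
            Err(e) => return Err(e),
        }
    }
}

fn main() -> io::Result<()> {
    // The closure borrows `path` directly; nothing is cloned or moved to
    // another thread, since there is no `spawn_blocking` anymore.
    let path = Path::new(".");
    let is_dir = retry_blocking(|| path.metadata().map(|m| m.is_dir()))?;
    assert!(is_dir);
    println!("{path:?} is a directory: {is_dir}");
    Ok(())
}
```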
…80 on Windows (#88669)

Though I've been unable to reproduce the issue (prior PRs in this stack are my attempts at that), I believe this *should* fix the OS Error 80 ([`ERROR_FILE_EXISTS`](https://learn.microsoft.com/en-us/windows/win32/debug/system-error-codes--0-499-)) that we've seen users report on Windows: #88382

Basically, instead of deleting the existing link in a retry loop and then trying to create the new link in a second retry loop, we should instead:

- Merge these two retry loops into one, in case another process creates the link after we delete it. This shouldn't typically happen, but could theoretically.
- Extend the retry logic to also retry on [`ErrorKind::AlreadyExists`](https://github.com/rust-lang/rust/blob/503745e9170b40841611aaaa634641edffd00b29/library/std/src/sys/io/error/windows.rs#L19). This might be possible if the filesystem is eventually consistent (which can happen on Windows), but I wasn't able to reproduce it.

There's a similar -- but different -- issue that users have been reporting where they get OS Error 1 (`ERROR_INVALID_FUNCTION`) on Windows. I believe this happens when we try to create a junction point on a non-NTFS filesystem. In that case, we need to report a clearer issue to the user instead of bubbling it up as an internal Turbopack error.

I tested this on Windows with the symlink fuzzer from #88667, as well as `cargo test -p turbo-tasks-fs`.
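An illustrative sketch of the merged retry loop (not the exact Turbopack code): delete-then-create happens inside a single loop, so a transient `AlreadyExists` from the create step simply triggers another delete-and-create pass.

```rust
use std::cell::Cell;
use std::io::{self, ErrorKind};

// One loop for both steps, injected as closures so the sketch stays
// filesystem-free; the real code would remove and create an actual link.
fn replace_link_with_retry(
    mut remove_existing: impl FnMut() -> io::Result<()>,
    mut create: impl FnMut() -> io::Result<()>,
) -> io::Result<()> {
    const MAX_ATTEMPTS: u32 = 5;
    let mut last_err = None;
    for _ in 0..MAX_ATTEMPTS {
        // A missing link is fine: we only need it gone before creating.
        match remove_existing() {
            Ok(()) => {}
            Err(e) if e.kind() == ErrorKind::NotFound => {}
            Err(e) => return Err(e),
        }
        match create() {
            Ok(()) => return Ok(()),
            // AlreadyExists: another process recreated the link, or the
            // delete isn't visible yet on an eventually consistent
            // filesystem; loop back and delete-then-create again.
            Err(e) if e.kind() == ErrorKind::AlreadyExists => last_err = Some(e),
            Err(e) => return Err(e),
        }
    }
    Err(last_err.expect("loop ran at least once"))
}

fn main() {
    // Simulate a create that transiently fails once with AlreadyExists.
    let failures_left = Cell::new(1);
    let result = replace_link_with_retry(
        || Err(io::Error::new(ErrorKind::NotFound, "no existing link")),
        || {
            if failures_left.get() > 0 {
                failures_left.set(failures_left.get() - 1);
                Err(io::Error::new(ErrorKind::AlreadyExists, "transient"))
            } else {
                Ok(())
            }
        },
    );
    assert!(result.is_ok());
    println!("link created after a transient AlreadyExists");
}
```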
## What

Replace `InnerStorage` with the `TaskStorage` generated by the new derive macro from #88338. Extend the `TaskStorage` derive macro to generate `CachedDataItem` adapter methods on the `TaskStorageAccessors` trait so we can maintain API compatibility. Replace the serialization layer to use the native `TaskStorage` representations now that `CachedDataItem` is only used for API access.

## Why

`TaskStorage` will enable more performant serialization and access patterns. From testing on vercel-site, this saves ~1% of persistent cache size. For this PR there is no benefit to access-pattern performance (in fact, I would anticipate a small regression due to the adapter layer).

## How

1. **Extended macro attribute parsing** - Added `variant` and `key_field` attributes to specify the `CachedDataItem` variant mapping per field.
2. **Generated adapter methods on the `TaskStorageAccessors` trait**:
   - `insert_kv(item)` - Insert a `CachedDataItem` into typed storage
   - `get(key)` - Look up by `CachedDataItemKey`
   - `remove(key)` - Remove by key
   - `get_mut(key)` - Mutable access (returns `None` for flags/sets/multimaps)
   - `iter(type)` - Iterate over a specific variant type
   - ...

   This enables current access patterns to keep working.
3. **Migrated the serialization and deserialization flow** - Use `TaskStorage` throughout the serialization flow, using APIs like `restore_from` and `clone_data` to create targeted copies for lock-free IO.
4. **Removed now-dead code** - Delete the `Storage` layer now that it is dead.

On its own this is a small savings in serialized data size (~20 MB for vercel-site) and is slightly faster to snapshot/restore. Access performance should be nearly equivalent.
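A heavily simplified, hypothetical sketch of the adapter idea: the method names (`insert_kv`, `get`, `remove`) come from the list above, but the types and fields here are invented for illustration; the real macro generates these methods on the `TaskStorageAccessors` trait from per-field attributes.

```rust
use std::collections::HashSet;

// Enum-style item and key types, as used by the existing API surface.
#[derive(Debug, Clone, PartialEq)]
enum CachedDataItem {
    Output(String),
    Dependency(u32),
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum CachedDataItemKey {
    Output,
    Dependency(u32),
}

// Typed storage: one native field per variant instead of one big
// enum-keyed map, which is what enables compact serialization.
#[derive(Default)]
struct TaskStorage {
    output: Option<String>,
    dependencies: HashSet<u32>,
}

impl TaskStorage {
    // insert_kv: route an enum-style item into its typed field.
    fn insert_kv(&mut self, item: CachedDataItem) {
        match item {
            CachedDataItem::Output(s) => self.output = Some(s),
            CachedDataItem::Dependency(d) => {
                self.dependencies.insert(d);
            }
        }
    }

    // get: look up by key, reconstructing the enum view on the way out.
    fn get(&self, key: CachedDataItemKey) -> Option<CachedDataItem> {
        match key {
            CachedDataItemKey::Output => self.output.clone().map(CachedDataItem::Output),
            CachedDataItemKey::Dependency(d) => self
                .dependencies
                .contains(&d)
                .then(|| CachedDataItem::Dependency(d)),
        }
    }

    // remove: delete by key, returning whether anything was removed.
    fn remove(&mut self, key: CachedDataItemKey) -> bool {
        match key {
            CachedDataItemKey::Output => self.output.take().is_some(),
            CachedDataItemKey::Dependency(d) => self.dependencies.remove(&d),
        }
    }
}

fn main() {
    let mut storage = TaskStorage::default();
    storage.insert_kv(CachedDataItem::Output("out.js".to_string()));
    storage.insert_kv(CachedDataItem::Dependency(7));
    assert_eq!(
        storage.get(CachedDataItemKey::Output),
        Some(CachedDataItem::Output("out.js".to_string()))
    );
    assert!(storage.remove(CachedDataItemKey::Dependency(7)));
    assert_eq!(storage.get(CachedDataItemKey::Dependency(7)), None);
    println!("adapter methods round-trip through typed storage");
}
```

The adapter layer does a match per call, which is the small access-pattern regression anticipated above; the payoff is that serialization can use the typed fields directly.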
…ier on Linux for bare-bones Linux images without root CA stores (#88869)

This should fix the user-reported issue here: #88290 (comment)

That issue occurs when building with bare-bones docker images that contain no root CA store. It's inconvenient to test inside a bare-bones docker image, but I did at least verify that this works on a normal (Debian) Linux install when fetching fonts for shadcn-ui.
Created by
pull[bot] (v2.0.0-alpha.4)