threads: Implement asymmetric atomic fences #60311
Conversation
Force-pushed from b683c64 to 44b50e2.
Will this be documented as part of #46739?
src/signals-unix.c (outdated)
        jl_mprotect_membarrier();
    }
}
#endif
Claude:
2. Missing jl_membarrier for non-Linux/FreeBSD Unix (signals-unix.c)
The jl_membarrier function is only defined inside the #if defined(_OS_LINUX_) || defined(_OS_FREEBSD_) block. Other Unix systems (like OpenBSD, NetBSD, Solaris) will have a missing symbol at link time.
Suggested change (replacing the final #endif):

#else
JL_DLLEXPORT void jl_membarrier(void) {
    jl_mprotect_membarrier();
}
#endif
(and move jl_mprotect_membarrier implementation)
Alternatively, we could use our existing default-sound implementation instead of the default-unsound one:
JL_DLLEXPORT void jl_membarrier(void)
{
    bt_context_t signal_context;
    for (int tid = 0; tid < jl_atomic_load_acquire(&jl_n_threads); tid++) {
        while (!jl_thread_suspend(tid, &signal_context)) { // work to try to interrupt for up to 1 second each iteration
            jl_ptls_t ptls2 = jl_atomic_load_relaxed(&jl_all_tls_states)[tid];
            jl_task_t *ct2 = ptls2 ? jl_atomic_load_relaxed(&ptls2->current_task) : NULL;
            if (ct2 == NULL) {
                // this thread is not yet alive or already dead
                if (++tid >= jl_atomic_load_acquire(&jl_n_threads))
                    return;
            }
        }
        jl_thread_resume(tid, &signal_context);
    }
}
(we could even have fun and optimize that similarly to the kernel implementation, since we might not care about threads in certain gc_state (as they have already done a barrier) or threads with sleep_check_state != not_sleeping, but there isn't all that much point to that)
Asymmetric atomic fences are a performance optimization of regular atomic fences (the seq_cst version of which we expose as `Base.Threads.atomic_fence`). The problem with these regular fences is that they require a CPU fence instruction, which can be very expensive and is thus unsuitable for code on the hot path. Asymmetric fences on the other hand split an ordinary fence into two: a `light` side where the fence is extremely cheap (only a compiler reordering barrier) and a `heavy` side where the fence is very expensive. Basically the way it works is that the heavy side does a system call that issues an inter-processor interrupt (IPI), which then issues the appropriate barrier instruction on the other CPU (i.e. both CPUs will have issued a barrier instruction, one of them just does it asynchronously due to the interrupt). The `light` and `heavy` naming here is taken from C++ P1202R5 [1], which is the proposal for the same feature in the C++ standard library (to appear in the next iteration of the C++ concurrency spec).

On the julia side, these functions are exposed as `Threads.atomic_fence_light` and `Threads.atomic_fence_heavy`. The light side lowers to `fence singlethread` in llvm IR (the Core.Intrinsic atomic_fence is adjusted appropriately to facilitate this). The heavy side has OS-specific implementations, where:
1. Linux/FreeBSD try to use the `membarrier` syscall, or fall back to `mprotect` on systems that don't have it.
2. Windows uses the `FlushProcessWriteBuffers` syscall.
3. macOS uses an implementation from the dotnet runtime (dotnet/runtime#44670), which the dotnet folks have checked with Apple does the right thing by happenstance (i.e. an IPI/memory barrier is needed to execute the syscall), but looks a little nonsensical by itself. However, since it's what Apple recommended to dotnet, I don't see much risk here, though I wouldn't be surprised if Apple added a proper syscall for this in the future (since freebsd has it now).

Note that unlike the C++ spec, I have specified that `atomic_fence_heavy` does synchronize with `atomic_fence`. This matches the underlying system call. I suspect C++ chose to omit this for a hypothetical future architecture that has instruction support for doing this from userspace that would then not synchronize with ordinary barriers, but I think I would rather cross that bridge when we get there.

I intend to use this in #60281, but it's an independently useful feature.

[1] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2022/p1202r5.pdf
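For illustration, here is a minimal Linux-only sketch of the light/heavy mechanism described above, written directly against the membarrier(2) syscall rather than the Threads API added by this PR; all names in it are made up for the example and nothing here is code from the patch.

#include <stdatomic.h>
#include <linux/membarrier.h>
#include <sys/syscall.h>
#include <unistd.h>

static atomic_int fast_flag, slow_flag;

static void heavy_fence_init(void)
{
    // PRIVATE_EXPEDITED must be registered once per process before use
    syscall(SYS_membarrier, MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0);
}

static void light_fence(void)
{
    // light side: only a compiler reordering barrier, no CPU fence instruction
    atomic_signal_fence(memory_order_seq_cst);
}

static void heavy_fence(void)
{
    // heavy side: the kernel IPIs every CPU running a thread of this process,
    // so each of them executes a full memory barrier on our behalf
    syscall(SYS_membarrier, MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0);
}

void fast_path(void)   // hot path: relaxed store, cheap fence, relaxed load
{
    atomic_store_explicit(&fast_flag, 1, memory_order_relaxed);
    light_fence();
    if (atomic_load_explicit(&slow_flag, memory_order_relaxed)) {
        /* defer to the slow path */
    }
}

void slow_path(void)   // rare path: pays for the syscall/IPI
{
    atomic_store_explicit(&slow_flag, 1, memory_order_relaxed);
    heavy_fence();
    if (atomic_load_explicit(&fast_flag, memory_order_relaxed)) {
        /* a fast-path invocation may still be in flight */
    }
}

The store/fence/load handshake above is the usual Dekker-style pairing: either the fast path sees slow_flag set, or the slow path sees fast_flag set, without the fast path ever executing a real fence instruction.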
Force-pushed from 625b2ea to 848340a.
Addressed review, added NEWS, added docstring ref to the appropriate section.
dotnet/runtime#111776 makes a claim that the mprotect thing doesn't work on FreeBSD.
Ah, I guess that's arm64-specific and linux has the same issue (https://lwn.net/Articles/728795/).
Co-authored-by: Cody Tapscott <84105208+topolarity@users.noreply.github.com>
So should we use a sound (but slower) implementation as fallback instead of the mprotect hack? #60311 (comment)
Sure, especially since mlock can fail.
The mprotect-based membarrier fallback is known not to work on AArch64 (see https://lwn.net/Articles/728795/). This commit disables the mprotect fallback on AArch64 and ARM and instead uses a thread-suspension-based fallback which is sound (but slower).

Additionally, this changes the mprotect fallback initialization to return a status code rather than aborting if mmap or mlock fails. This allows graceful fallback to the thread suspension mechanism if mlock fails (e.g. due to low RLIMIT_MEMLOCK limits).

The thread suspension fallback works by iterating through all threads and using the existing jl_thread_suspend/jl_thread_resume mechanism, which forces each thread to execute memory barriers via the signal handler mechanism. If suspension fails, we retry until either it succeeds or we determine the thread is no longer alive.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
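To make the dispatch this commit message describes concrete, here is a rough sketch; jl_membarrier_init, jl_mprotect_membarrier_init and jl_suspend_membarrier are hypothetical names used only for this illustration (the actual patch may structure this differently), and the JL_DLLEXPORT macro is assumed to come from the runtime headers.

static int use_mprotect_membarrier = 0;

void jl_membarrier_init(void)
{
#if !defined(_CPU_AARCH64_) && !defined(_CPU_ARM_)
    // hypothetical init that now returns a status code instead of aborting:
    // 0 on success, nonzero if mmap or mlock failed (e.g. low RLIMIT_MEMLOCK)
    use_mprotect_membarrier = (jl_mprotect_membarrier_init() == 0);
#endif
}

JL_DLLEXPORT void jl_membarrier(void)
{
    if (use_mprotect_membarrier)
        jl_mprotect_membarrier();   // cheap mprotect-based IPI trick
    else
        jl_suspend_membarrier();    // sound but slower: suspend/resume every thread,
                                    // forcing barriers via the signal handlers
}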