Conversation

@apozharski

As discussed in JuliaGPU/CUDA.jl#2998 and #611, GPUCompiler.jl currently leaks nonexistent gpu_* LLVM functions into the CPU cache, making ahead-of-time compilation impossible for any package that uses it.

I am currently fixing this by moving these runtime methods into the method table defined in the GPUCompiler module and having the CPU versions throw errors, as is done in CUDA.jl. This feels like somewhat of a hack; however, it seems to work, and without a better understanding of what this might break, it seems to be the simplest solution.
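
For readers unfamiliar with the pattern, here is a minimal sketch of the approach described above, using hypothetical names (the real code lives in GPUCompiler's runtime): the regular method throws on the CPU, while the device implementation is registered in an overlay method table that only GPU compilation consults.

# Sketch only; `gpu_malloc` and the table name are illustrative.
Base.Experimental.@MethodTable(SOME_METHOD_TABLE)

# Regular (CPU) method: errors instead of leaking an undefined gpu_* symbol.
gpu_malloc(sz) = error("gpu_malloc is only available in GPU-compiled code")

# Overlay (device) method: lowers to an `extern` call resolved by the GPU runtime.
Base.Experimental.@overlay SOME_METHOD_TABLE @inline gpu_malloc(sz) =
    ccall("extern gpu_malloc", llvmcall, Ptr{Cvoid}, (Csize_t,), sz)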

@github-actions
Contributor

github-actions bot commented Dec 22, 2025

Your PR requires formatting changes to meet the project's style guidelines.
Please consider running Runic (git runic master) to apply these changes.

diff --git a/src/runtime.jl b/src/runtime.jl
index 2f7312e..55affb1 100644
--- a/src/runtime.jl
+++ b/src/runtime.jl
@@ -83,10 +83,11 @@ function compile(def, return_type, types, llvm_return_type=nothing, llvm_types=n
     #        using the new nonrecursive codegen to handle function lookup ourselves?
     if def isa Symbol
         args = [gensym() for typ in types]
-        @eval GPUCompiler.@device_function($return_type,
-                                           @inline $def($(args...)) =
-                                               ccall($("extern $llvm_name"), llvmcall, $return_type, ($(types...),), $(args...))
-                                           )
+        @eval GPUCompiler.@device_function(
+            $return_type,
+            @inline $def($(args...)) =
+                ccall($("extern $llvm_name"), llvmcall, $return_type, ($(types...),), $(args...))
+        )
     end
 
     return
diff --git a/src/utils.jl b/src/utils.jl
index 4de1081..48096e0 100644
--- a/src/utils.jl
+++ b/src/utils.jl
@@ -200,12 +200,14 @@ macro device_function(rt, ex)
         $rt(1)
     end
 
-    esc(quote
-        $(combinedef(def))
+    return esc(
+        quote
+            $(combinedef(def))
 
-        # NOTE: no use of `@consistent_overlay` here because the regular function errors
-        Base.Experimental.@overlay($(GPUCompiler).GLOBAL_METHOD_TABLE, $ex)
-    end)
+            # NOTE: no use of `@consistent_overlay` here because the regular function errors
+            Base.Experimental.@overlay($(GPUCompiler).GLOBAL_METHOD_TABLE, $ex)
+        end
+    )
 end
 
 

@KSepetanc

KSepetanc commented Dec 23, 2025

I loaded both the forked CUDA.jl and this PR and tried to compile my full code, but got an error. The stack trace is massive, so I copied the first several lines:

ERROR: LoadError: Invalid return type for runtime function 'box_bool': expected LLVM.PointerType(ptr addrspace(10)), got LLVM.VoidType(void)
Stacktrace:
  [1] error(s::String)
    @ Base .\error.jl:44
  [2] emit_function!(mod::LLVM.Module, config::GPUCompiler.CompilerConfig{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}, f::Type, method::GPUCompiler.Runtime.RuntimeMethodInstance)
    @ GPUCompiler C:\Users\karlo\.julia\packages\GPUCompiler\vRm9U\src\rtlib.jl:81
  [3] build_runtime(job::GPUCompiler.CompilerJob)
    @ GPUCompiler C:\Users\karlo\.julia\packages\GPUCompiler\vRm9U\src\rtlib.jl:117
  [4] (::GPUCompiler.var"#load_runtime##0#load_runtime##1"{GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}})()
    @ GPUCompiler C:\Users\karlo\.julia\packages\GPUCompiler\vRm9U\src\rtlib.jl:159
  [5] lock(f::GPUCompiler.var"#load_runtime##0#load_runtime##1"{GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}}, l::ReentrantLock)
    @ Base .\lock.jl:335

Otherwise, I was able to compile GPUCompiler into the image.

Do you have any idea where this could come from?

@apozharski
Author

@KSepetanc yep, I caught that in the tests for the PR as well (somehow they were passing for me locally, but I suspect that was just poor environment management on my part).

Unsurprisingly, my hack seems to break things in GPUCompiler at runtime. I have some ideas, namely that I am replacing the stub LLVM call with an exception in the CPU cache, which simply returns void. It has taken me a while to get something I can test with (since my machine at home has a quite broken CUDA installation 😅), but it seems I am able to test with OpenCL, so hopefully I can get something a bit less hacky working soon, depending on how much time I have over the holidays.

I will turn this PR into a draft until then.
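
To make the void-return hypothesis above concrete, here is a minimal standalone sketch (the function names are illustrative, not GPUCompiler's actual runtime code): a CPU stub whose body returns nothing infers as Nothing and lowers to an LLVM function returning void, whereas a stub that throws infers as Union{} and does not pin the return type to void.

using InteractiveUtils

returns_nothing(x) = nothing                      # inferred ::Nothing
throws_instead(x) = error("not callable on CPU")  # inferred ::Union{} (always throws)

@code_llvm returns_nothing(true)   # note the `ret void`
@code_llvm throws_instead(true)    # note the unreachable after the throw

This would be consistent with the "expected LLVM.PointerType(...), got LLVM.VoidType(void)" error reported above for box_bool.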

@apozharski apozharski marked this pull request as draft December 23, 2025 11:49
@KSepetanc

KSepetanc commented Dec 23, 2025

@apozharski are you using the CUDA 590 driver branch (i.e., CUDA 13.1)? I have seen the maintainers preparing support for it, but as of a few days ago it still had not been released. Without knowing more about your system, I presume you just need to downgrade to the 580-series driver that comes with CUDA 13.0. I had this issue too. (A quick way to check is sketched below.)
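
A one-liner to check which CUDA version the installed driver supports, assuming CUDA.jl is installed (CUDA.driver_version is part of CUDA.jl's API):

using CUDA
CUDA.driver_version()   # e.g. v"13.0.0" with a 580-series driver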

I will soon have more questions, as it seems more fixes are needed beyond GPUCompiler.jl and CUDA.jl to AOT-compile MadNLPGPU, which I need. It is still WIP, so I will wait a bit.

@codecov

codecov bot commented Dec 31, 2025

Codecov Report

❌ Patch coverage is 85.71429% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 74.72%. Comparing base (e4a697f) to head (de4599b).

Files with missing lines   Patch %   Lines
src/runtime.jl             50.00%    1 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##           master     #749   +/-   ##
=======================================
  Coverage   74.72%   74.72%           
=======================================
  Files          24       24           
  Lines        3675     3680    +5     
=======================================
+ Hits         2746     2750    +4     
- Misses        929      930    +1     

@apozharski apozharski marked this pull request as ready for review January 4, 2026 14:12
@apozharski
Author

Marking as ready. I seem to have fixed the issues I can repro on my own machine (it turns out that including @warn in the CPU cache was breaking some things, though in truth I do not understand why).

@apozharski
Author

apozharski commented Jan 5, 2026

(it turns out that including @warn in the CPU cache was breaking some things, though in truth I do not understand why)

To clarify: this was causing a bunch of ccalls to CPU (Julia) functions to be emitted in the IR for the compile job, and digging through it showed that these were all the components necessary for the warning to be executed, even though the gpu_* runtime method was being correctly pulled only from the GLOBAL_METHOD_TABLE overlay.

I did some more digging this morning and I still don't understand why this was happening, but my debugging is fairly primitive.
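
A quick way to observe the effect on the plain CPU side (a sketch with illustrative functions, not GPUCompiler's runtime): a method body containing @warn references the logging machinery, which shows up as extra Julia runtime calls in the emitted IR, while a plain error lowers to a single throw path.

using InteractiveUtils

with_warn() = (@warn "not callable on the CPU"; nothing)
with_error() = error("not callable on the CPU")

@code_llvm with_warn()    # many calls into the Julia runtime for the logger
@code_llvm with_error()   # a single call into the error/throw path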

@KSepetanc

It works! I loaded both the CUDA.jl PR and this one, and my full code case passes (tried the exe files and all is good). @apozharski thank you!!

@KSepetanc

I came across one hurdle after all. Compiling with JuliaC worked as stated above, but I got an error when compiling with PackageCompiler instead. @apozharski could you run your debug case with PackageCompiler and check whether you are getting it too?

PackageCompiler has three stages:

  1. creating the compiler sysimage (completes)
  2. compiling the fresh sysimage (completes)
  3. precompilation fails at the start (roughly the invocation sketched below)
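
For reference, a minimal sketch of the kind of PackageCompiler invocation involved; the package path and output directory are placeholders, not the actual project:

using PackageCompiler

# Hypothetical app layout; substitute the real project path and output directory.
create_app("path/to/MyGPUApp", "MyGPUAppCompiled"; incremental = false)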

Most of the errors are of this type (a small part of the stack trace is below).

ERROR: LoadError: Failed to precompile GPUCompiler [61eb1bfa-7361-4325-ad38-22787b887f55] to "C:\\Users\\karlo\\.julia\\compiled\\v1.12\\GPUCompiler\\jl_11DD.tmp".
Stacktrace:
  [1] error(s::String)
    @ Base .\error.jl:44
  [2] compilecache(pkg::Base.PkgId, path::String, internal_stderr::IO, internal_stdout::IO, keep_loaded_modules::Bool; flags::Cmd, cacheflags::Base.CacheFlags, reasons::Dict{String, Int64}, loadable_exts::Nothing)
    @ Base .\loading.jl:3311
  [3] (::Base.var"#__require_prelocked##0#__require_prelocked##1"{Base.PkgId, String, Dict{String, Int64}})()
    @ Base .\loading.jl:2679
  [4] maybe_cachefile_lock(f::Base.var"#__require_prelocked##0#__require_prelocked##1"{Base.PkgId, String, Dict{String, Int64}}, pkg::Base.PkgId, srcpath::String; stale_age::Int64)
    @ Base .\loading.jl:3898
