diff --git a/docs/codeflash-concepts/benchmarking-gpu-code.mdx b/docs/codeflash-concepts/benchmarking-gpu-code.mdx
new file mode 100644
index 000000000..41d4f1d89
--- /dev/null
+++ b/docs/codeflash-concepts/benchmarking-gpu-code.mdx
@@ -0,0 +1,117 @@
+---
+title: "How Codeflash Measures Code Runtime on GPUs"
+description: "Learn how Codeflash accurately measures code performance on GPUs"
+icon: "microchip"
+sidebarTitle: "GPU Benchmarking"
+keywords: ["benchmarking", "performance", "timing", "measurement", "runtime", "noise reduction", "GPU", "MPS"]
+---
+
+## Accurate Benchmarking on GPU Devices
+
+When a GPU (Graphics Processing Unit) operation is launched, it runs **asynchronously**. The CPU (Central Processing Unit) queues up work for the GPU and immediately continues to the next line of code - it doesn't wait for the GPU to finish. Accurately measuring GPU code therefore requires inserting synchronization barriers: one before the timer starts, so that no previously queued GPU work is still running, and one before the timer stops, so that all of the work being measured has actually finished.
+
+## Illustration
+
+### Without Synchronization
+
+```mermaid actions={false}
+%%{init: {'gantt': {'useWidth': 1200}}}%%
+gantt
+    title CPU vs CUDA Stream Timeline (Without Synchronization)
+    dateFormat X
+    axisFormat %s
+
+    section CPU
+    Timer Start :milestone, m1, 0, 0
+    Launch Kernel 1 :active, cpu0, 0, 4
+    Launch Kernel 2 :active, cpu1, 4, 8
+    Launch Kernel 3 :active, cpu2, 8, 12
+    Timer End :milestone, m2, 12, 12
+
+    section CUDA Stream
+    Waiting :done, wait, 0, 4
+    Kernel 1 :active, k1, 4, 11
+    Kernel 2 :active, k2, 11, 18
+    Kernel 3 :active, k3, 18, 25
+
+    section Problem
+    Timer ends too early :done, p1, after m2, 25
+```
+
+Here the timer only measures the duration up to the end of the final kernel *launch*. The GPU computation hasn't completed yet, so the measurement is inaccurate and any conclusion drawn from it would be misleading.
+
+### With Synchronization
+
+```mermaid actions={false}
+%%{init: {'gantt': {'useWidth': 1200}}}%%
+gantt
+    title CPU vs CUDA Stream Timeline (With Synchronization)
+    dateFormat X
+    axisFormat %s
+
+    section CPU
+    Device Synchronization :done, sync1, 0, 4
+    Timer Start :milestone, m1, 4, 4
+    Launch Kernel 1 :active, cpu0, 4, 8
+    Launch Kernel 2 :active, cpu1, 8, 12
+    Launch Kernel 3 :active, cpu2, 12, 16
+    Device Synchronization :done, sync2, 16, 33
+    Timer End :milestone, m2, 33, 33
+
+    section CUDA Stream
+    Previous Work :done, prev, 0, 4
+    Waiting :done, wait2, 4, 8
+    Kernel 1 :active, k1, 8, 15
+    Kernel 2 :active, k2, 15, 22
+    Kernel 3 :active, k3, 22, 33
+```
+
+Here a device synchronization call is made before the measured code runs. This ensures that the CPU waits for any pending GPU tasks to finish before starting the timer. After the final kernel launch, another device synchronization call ensures that all pending GPU tasks have finished before the runtime is recorded.
+
+## PyTorch Example
+
+Execute the following code in your Python interpreter to measure just the kernel launch time (replace `cuda` with `mps` everywhere to run on your Mac).
+```python
+import torch
+import time
+device = "cuda"
+x = torch.randn(8192, 8192, device=device)
+y = torch.randn(8192, 8192, device=device)
+t0 = time.perf_counter_ns()
+z = torch.matmul(x, y)
+t1 = time.perf_counter_ns()
+print(f"Without synchronize: {(t1 - t0) / 1e6:.3f} ms")
+```
+
+Now **restart** your interpreter, so both measurements start from a fresh state, and execute the following code to measure the actual kernel execution time (replace `cuda` with `mps` everywhere to run on your Mac).
+```python
+import torch
+import time
+device = "cuda"
+x = torch.randn(8192, 8192, device=device)
+y = torch.randn(8192, 8192, device=device)
+torch.cuda.synchronize()  # clear any pending work
+t0 = time.perf_counter_ns()
+z = torch.matmul(x, y)
+torch.cuda.synchronize()  # wait for the GPU to finish
+t1 = time.perf_counter_ns()
+print(f"With synchronize: {(t1 - t0) / 1e6:.3f} ms")
+```
+
+Expected Output on CUDA
+
+```
+Without synchronize: 69.157 ms
+With synchronize: 152.277 ms
+```
+
+## How Codeflash Measures Execution Time on GPUs
+
+Codeflash automatically inserts synchronization barriers before measuring performance. It currently supports GPU code written in `PyTorch`, `TensorFlow`, and `JAX`, running on NVIDIA GPUs (`CUDA`) or Apple's Metal Performance Shaders (`MPS`) on macOS.
+
+- **PyTorch**: Uses `torch.cuda.synchronize()` (`CUDA`) or `torch.mps.synchronize()` (`MPS`) depending on the device.
+- **JAX**: Uses `jax.block_until_ready()` to wait for computation to complete. It works for both `CUDA` and `MPS` devices.
+- **TensorFlow**: Uses `tf.test.experimental.sync_devices()` for device synchronization. It works for both `CUDA` and `MPS` devices.
diff --git a/docs/docs.json b/docs/docs.json
index a36fc82dc..87236e236 100644
--- a/docs/docs.json
+++ b/docs/docs.json
@@ -66,7 +66,9 @@
         "group": "🧠 Core Concepts",
         "pages": [
           "codeflash-concepts/how-codeflash-works",
-          "codeflash-concepts/benchmarking"
+          "codeflash-concepts/benchmarking",
+          "codeflash-concepts/benchmarking-gpu-code",
+          "support-for-jit/index"
         ]
       },
       {
diff --git a/docs/support-for-jit/index.mdx b/docs/support-for-jit/index.mdx
new file mode 100644
index 000000000..970ecf934
--- /dev/null
+++ b/docs/support-for-jit/index.mdx
@@ -0,0 +1,257 @@
+---
+title: "Just-in-Time Compilation"
+description: "Learn how Codeflash optimizes code using JIT compilation with Numba, PyTorch, TensorFlow, and JAX"
+icon: "bolt"
+sidebarTitle: "JIT Compilation"
+keywords: ["JIT", "just-in-time", "numba", "pytorch", "tensorflow", "jax", "GPU", "CUDA", "MPS", "compilation", "performance"]
+---
+
+# Just-in-Time Compilation
+
+Just-in-time (JIT) compilation is a runtime technique where code is compiled into machine code on the fly, right before it is executed, to improve performance. Codeflash can optimize numerical code by leveraging the JIT compilers provided by the **Numba**, **PyTorch**, **TensorFlow**, and **JAX** frameworks.
+
+## When JIT Compilation Helps
+
+JIT compilation is most effective for:
+
+- Numerical computations with loops that can't be easily vectorized (see the sketch below).
+- Custom algorithms not covered by existing optimized libraries.
+- Functions that are called repeatedly with consistent input types.
+- Code that benefits from hardware-specific optimizations (e.g., SIMD acceleration).
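+
+For instance, a pairwise-distance search with an explicit triple loop is hard to vectorize cleanly but compiles well with Numba. The following is a minimal sketch (the function, array sizes, and flag choices are illustrative, not output produced by Codeflash):
+
+```python
+import numpy as np
+from numba import jit
+
+@jit(nopython=True, fastmath=True)  # compile to machine code; no Python-object fallback
+def pairwise_min_distance(points):
+    """Smallest Euclidean distance between any two points - an O(n^2) loop."""
+    n, dims = points.shape
+    best = np.inf
+    for i in range(n):
+        for j in range(i + 1, n):
+            d = 0.0
+            for k in range(dims):
+                diff = points[i, k] - points[j, k]
+                d += diff * diff
+            if d < best:
+                best = d
+    return np.sqrt(best)
+
+points = np.random.rand(2000, 3)
+print(pairwise_min_distance(points))  # first call compiles; later calls run at native speed
+```
+
+The first call pays the compilation cost; subsequent calls with the same argument types reuse the compiled machine code. The GPU-focused example below shows the analogous effect with `torch.compile`.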
+
+### Example: Kernel Fusion with `torch.compile`
+
+#### Function Definition
+
+```python
+import torch
+
+def complex_activation(x):
+    """A custom activation with many small operations - compile makes a huge difference"""
+    # Many sequential element-wise ops create kernel launch overhead
+    x = torch.sin(x)
+    x = x * torch.cos(x)
+    x = x + torch.exp(-x.abs())
+    x = x / (1 + x.pow(2))
+    x = torch.tanh(x) * torch.sigmoid(x)
+    x = x - 0.5 * x.pow(3)
+    return x
+```
+
+#### Benchmarking Snippet (replace `cuda` with `mps` to run on your Mac)
+
+```python
+import time
+
+# Create compiled version
+complex_activation_compiled = torch.compile(complex_activation)
+
+# Benchmark
+x = torch.randn(1000, 1000, device='cuda')
+
+# Warmup: the first calls are slower while the JIT compiler traces and compiles the function
+for _ in range(10):
+    _ = complex_activation(x)
+    _ = complex_activation_compiled(x)
+
+# Time uncompiled
+torch.cuda.synchronize()
+start = time.time()
+for _ in range(100):
+    y = complex_activation(x)
+torch.cuda.synchronize()
+uncompiled_time = time.time() - start
+
+# Time compiled
+torch.cuda.synchronize()
+start = time.time()
+for _ in range(100):
+    y = complex_activation_compiled(x)
+torch.cuda.synchronize()
+compiled_time = time.time() - start
+
+print(f"Uncompiled: {uncompiled_time:.4f}s")
+print(f"Compiled: {compiled_time:.4f}s")
+print(f"Speedup: {uncompiled_time/compiled_time:.2f}x")
+```
+
+Expected Output on CUDA
+
+```
+Uncompiled: 0.0176s
+Compiled: 0.0063s
+Speedup: 2.80x
+```
+
+Here, JIT compilation via `torch.compile` is the only viable optimization because:
+1. Already vectorized - All operations are already PyTorch tensor ops.
+2. Multiple kernel launches - The uncompiled code launches ~10 separate kernels. `torch.compile` fuses them into 1-2 kernels, eliminating most of the kernel launch overhead.
+3. No algorithmic improvement - The computation itself is already optimal.
+4. Python overhead elimination - Removes Python interpreter overhead between operations.
+
+## When JIT Compilation May Not Help
+
+JIT compilation may not provide speedups when:
+
+- The code already uses highly optimized libraries (e.g., `NumPy` with `MKL`, `cuBLAS`, `cuDNN`).
+- Functions have variable input types or shapes that prevent effective compilation.
+- The compilation overhead exceeds the runtime savings for short-running functions.
+- The code relies heavily on Python objects or dynamic features that JIT compilers can't optimize.
+
+### Example: Data-Dependent Control Flow
+
+#### Function Definition
+
+```python
+def adaptive_processing(x, threshold=0.5):
+    """Function with data-dependent control flow - compile struggles here"""
+    # Check how many values exceed threshold (data-dependent!)
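+    # (the .item() call below also copies the count back to the CPU, forcing a host-device sync)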
+    mask = x > threshold
+    num_large = mask.sum().item()  # .item() causes a graph break
+
+    if num_large > x.numel() * 0.3:
+        # Path 1: Many large values - use expensive operation
+        result = torch.matmul(x, x.T)  # Already optimized by cuBLAS
+        result = result.mean(dim=0)
+    else:
+        # Path 2: Few large values - use cheap operation
+        result = x.mean(dim=1)
+
+    return result
+```
+
+#### Benchmarking Snippet (replace `cuda` with `mps` to run on your Mac)
+
+```python
+# Create compiled version
+adaptive_processing_compiled = torch.compile(adaptive_processing)
+
+# Test with data that causes branch variation
+x = torch.randn(500, 500, device='cuda')
+
+# Warmup: the first calls are slower while the JIT compiler traces and compiles the function
+for _ in range(10):
+    _ = adaptive_processing(x)
+    _ = adaptive_processing_compiled(x)
+
+# Benchmark with varying data (causes recompilation)
+torch.cuda.synchronize()
+start = time.time()
+for i in range(100):
+    # Vary the data to trigger different branches
+    x_test = torch.randn(500, 500, device='cuda') + (i % 2)
+    y = adaptive_processing(x_test)
+torch.cuda.synchronize()
+uncompiled_time = time.time() - start
+
+torch.cuda.synchronize()
+start = time.time()
+for i in range(100):
+    x_test = torch.randn(500, 500, device='cuda') + (i % 2)
+    y = adaptive_processing_compiled(x_test)  # Recompiles frequently!
+torch.cuda.synchronize()
+compiled_time = time.time() - start
+
+print(f"Uncompiled: {uncompiled_time:.4f}s")
+print(f"Compiled: {compiled_time:.4f}s")
+print(f"Slowdown: {compiled_time/uncompiled_time:.2f}x")
+```
+
+Expected Output on CUDA
+
+```
+Uncompiled: 0.0296s
+Compiled: 0.2847s
+Slowdown: 9.63x
+```
+
+Why `torch.compile` is detrimental here:
+
+1. Graph breaks - `.item()` forces a graph break, negating compile benefits.
+2. Recompilation overhead - Different branches cause expensive recompilation each time.
+3. Dynamic control flow - Data-dependent conditionals can't be optimized away.
+4. Already optimized ops - `matmul` already uses `cuBLAS`; compile adds overhead without benefit.
+
+#### Better Optimization Strategy
+
+```python
+def optimized_version(x, threshold=0.5):
+    """Remove data-dependent control flow - vectorize instead"""
+    mask = (x > threshold).float()
+    weight = (mask.mean() > 0.3).float()  # Keep on GPU
+
+    # Compute both paths, blend based on weight (branchless)
+    expensive = torch.matmul(x, x.T).mean(dim=0)
+    cheap = x.mean(dim=1).squeeze()
+
+    # Pad cheap result to match expensive dimensions
+    cheap_padded = cheap.expand(expensive.shape[0])
+
+    result = weight * expensive + (1 - weight) * cheap_padded
+    return result
+```
+
+Expected Output on CUDA
+
+```
+Optimized: 0.0277s
+Speedup compared to Uncompiled: 1.07x
+```
+
+Key improvements:
+
+1. Eliminate `.item()` - Keep computation on GPU.
+2. Branchless execution - Compute both paths, blend results.
+3. Vectorization - Replace conditionals with masked operations.
+4. Reduce Python overhead - Minimize host-device synchronization.
+
+## Supported JIT Frameworks
+
+Each framework uses a different compilation strategy to accelerate Python code:
+
+### Numba (CPU Code)
+
+Numba compiles Python functions to optimized machine code using the LLVM compiler infrastructure. Codeflash can suggest Numba optimizations that use:
+
+- **`@jit`** - General-purpose JIT compilation with optional flags.
+  - **`nopython=True`** - Compiles to machine code without falling back to the Python interpreter.
+  - **`fastmath=True`** - Uses aggressive floating-point optimizations via LLVM's fastmath flag.
+  - **`cache=True`** - Caches the compiled function to disk, avoiding recompilation on subsequent runs.
+  - **`parallel=True`** - Enables automatic parallelization of supported operations and `prange` loops across CPU cores.
+
+### PyTorch
+
+PyTorch provides JIT compilation through `torch.compile()`, the recommended compilation API introduced in PyTorch 2.0. It uses TorchDynamo to capture Python bytecode and TorchInductor to generate optimized kernels.
+
+- **`torch.compile()`** - Compiles a function or module for optimized execution.
+  - **`mode`** - Controls the compilation strategy:
+    - `"default"` - Balanced compilation with moderate optimization.
+    - `"reduce-overhead"` - Minimizes Python overhead using CUDA graphs, ideal for small batches.
+    - `"max-autotune"` - Spends more time auto-tuning to find the fastest kernels.
+  - **`fullgraph=True`** - Requires the entire function to be captured as a single graph. Raises an error if graph breaks occur, which is useful for ensuring complete optimization.
+  - **`dynamic=True`** - Enables dynamic shape support, allowing the compiled function to handle varying input sizes without recompilation.
+
+### TensorFlow
+
+TensorFlow uses `@tf.function` to compile Python functions into optimized TensorFlow graphs. When combined with XLA (Accelerated Linear Algebra), it can generate highly optimized machine code for both CPU and GPU.
+
+- **`@tf.function`** - Converts Python functions into TensorFlow graphs for optimized execution.
+  - **`jit_compile=True`** - Enables XLA compilation, which performs whole-function optimization including operation fusion, memory layout optimization, and target-specific code generation.
+
+### JAX
+
+JAX uses XLA to JIT compile pure functions into optimized machine code. It emphasizes functional programming patterns and captures side-effect-free operations for optimization.
+
+- **`@jax.jit`** - JIT compiles functions using XLA with automatic operation fusion.
+
+## How Codeflash Optimizes with JIT
+
+When Codeflash identifies a function that could benefit from JIT compilation, it:
+
+1. Rewrites the code in a JIT-compatible format, which may involve breaking complex functions down into separate JIT-compiled components.
+2. Generates appropriate tests that are compatible with JIT-compiled code, carefully handling data types since JIT compilers have stricter input type requirements.
+3. Disables JIT compilation when running coverage and the tracer. This ensures accurate coverage and trace data, since both rely on Python bytecode execution; JIT-compiled code bypasses Python bytecode and would otherwise prevent proper tracking.
+4. Disables the line profiler for JIT-compiled code. It would be possible to disable JIT compilation and run the line profiler, but the resulting line timings would not reflect the compiled code and could misguide the optimization process.
+
+## Configuration
+
+JIT compilation support is **enabled automatically** in Codeflash. You don't need to modify any configuration to enable JIT-based optimizations. Codeflash automatically detects when JIT compilation could improve performance and suggests appropriate optimizations.
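+
+To make the `torch.compile()` options listed above concrete, here is a minimal sketch; the toy function and the particular option combinations are illustrative only:
+
+```python
+import torch
+
+def scale_and_activate(x):
+    return torch.relu(x * 2.0 + 1.0)
+
+# Balanced default compilation
+fast_default = torch.compile(scale_and_activate)
+
+# Low-overhead variant using CUDA graphs; error out if the graph ever breaks
+fast_strict = torch.compile(scale_and_activate, mode="reduce-overhead", fullgraph=True)
+
+# Allow varying input shapes without triggering recompilation
+fast_dynamic = torch.compile(scale_and_activate, dynamic=True)
+
+device = "cuda" if torch.cuda.is_available() else "cpu"
+x = torch.randn(64, 64, device=device)
+print(fast_default(x).shape, fast_strict(x).shape, fast_dynamic(x).shape)
+```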
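+
+As an illustration of points 3 and 4 under "How Codeflash Optimizes with JIT", each framework exposes a public switch for turning JIT compilation off so that bytecode-level tooling (coverage, tracing, line profiling) can observe the original Python code. The snippet below is only a sketch of those switches, not Codeflash's internal implementation:
+
+```python
+import os
+
+# Numba: set before importing numba so @jit-decorated functions run as plain Python
+os.environ["NUMBA_DISABLE_JIT"] = "1"
+
+# PyTorch: set before importing torch to disable torch.compile / TorchDynamo (eager fallback)
+os.environ["TORCHDYNAMO_DISABLE"] = "1"
+
+import jax
+import tensorflow as tf
+
+# JAX: inside this context manager, jax.jit-decorated functions run op-by-op
+with jax.disable_jit():
+    pass  # call the functions under test here
+
+# TensorFlow: execute @tf.function bodies eagerly instead of as compiled graphs
+tf.config.run_functions_eagerly(True)
+```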