
Conversation


@tongyuantongyu tongyuantongyu commented Jan 20, 2026

Description

Add filter_source_cuda_architectures to replace filter_cuda_archs. The new utility correctly handles arch-conditional and family-conditional cubins.

Previous: enabling only 103 compiles the 100f, 103a, and 100a cubins.
Now: enabling only 103 compiles only the 100f and 103a cubins.
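For illustration, a hypothetical invocation of the new utility (the ARCHS and TARGET keyword names come from the change summary later in this page; the target name and architecture list are made up, and the exact signature is defined in cpp/cmake/modules/cuda_configuration.cmake):

# Hypothetical sketch: restrict family-/arch-conditional cubin sources on a
# target so that enabling only SM 103 builds just the 100f and 103a variants.
filter_source_cuda_architectures(TARGET fmha_objs ARCHS 100f 103a)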

Test Coverage

Covered by existing CI tests.

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
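For example, the following run invocation (used verbatim later in this thread) disables fail-fast and restricts testing to specific GPU types:

/bot run --disable-fail-fast --gpu-type "B300, DGX_B300"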

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping tests without care and validation can break top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing results without care and validation can break top of tree.

Summary by CodeRabbit

  • New Features

    • Added cmake-format configuration for consistent code formatting standards across the project.
    • Introduced enhanced CUDA architecture management with improved filtering and family-aware support.
  • Improvements

    • Refined kernel trait initialization with default member initializers.
    • Updated kernel configuration list handling to derive sizes dynamically from array definitions rather than using separate constants.
    • Expanded CUDA architecture compatibility across GEMM, FMHA, and attention kernel implementations.

✏️ Tip: You can customize this high-level summary in your review settings.


#if !defined(EXCLUDE_SM_100) || !defined(EXCLUDE_SM_103)
#ifndef EXCLUDE_SM_100F
Collaborator


It should return nullptr only when SM_100, SM_100F and SM_103 are all excluded.

Member Author

@tongyuantongyu tongyuantongyu Jan 21, 2026


It is guaranteed from the CMake side that SM_100F is only excluded when no sm10x arch is enabled. This test should be sufficient.
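A sketch of that CMake-side invariant, assuming architecture names appear verbatim in CMAKE_CUDA_ARCHITECTURES (the sm10x arch list here is hypothetical; the actual logic lives in cpp/cmake/modules/cuda_configuration.cmake):

# Hypothetical: define EXCLUDE_SM_100F only when no sm10x arch is enabled,
# so a single "#ifndef EXCLUDE_SM_100F" covers SM_100, SM_100F and SM_103.
set(SM10X_ARCHS 100 100a 100f 103 103a) # hypothetical list
set(HAS_SM10X FALSE)
foreach(arch IN LISTS SM10X_ARCHS)
  if(arch IN_LIST CMAKE_CUDA_ARCHITECTURES)
    set(HAS_SM10X TRUE)
  endif()
endforeach()
if(NOT HAS_SM10X)
  add_compile_definitions(EXCLUDE_SM_100F)
endif()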

@tongyuantongyu tongyuantongyu force-pushed the ytong/sm10x-compilation branch 2 times, most recently from 48428ce to ef29aca on January 22, 2026 09:49
@tongyuantongyu
Member Author

/bot run --disable-fail-fast --gpu-type "B300, DGX_B300"

@tensorrt-cicd
Collaborator

PR_Github #33150 [ run ] triggered by Bot. Commit: ef29aca

@tensorrt-cicd
Collaborator

PR_Github #33150 [ run ] completed with state FAILURE. Commit: ef29aca
/LLM/main/L0_MergeRequest_PR pipeline #25618 (Partly Tested) completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@tongyuantongyu tongyuantongyu force-pushed the ytong/sm10x-compilation branch from ef29aca to d9c2ca3 on January 23, 2026 05:39
@tongyuantongyu
Member Author

/bot run --disable-fail-fast --gpu-type "B300, DGX_B300"

@tensorrt-cicd
Collaborator

PR_Github #33314 [ run ] triggered by Bot. Commit: d9c2ca3

Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
@tongyuantongyu tongyuantongyu force-pushed the ytong/sm10x-compilation branch from d9c2ca3 to 8dc347f on January 23, 2026 09:10
@tongyuantongyu tongyuantongyu marked this pull request as ready for review January 23, 2026 09:26
@tongyuantongyu tongyuantongyu requested a review from a team as a code owner January 23, 2026 09:26
@tensorrt-cicd
Collaborator

PR_Github #33314 [ run ] completed with state SUCCESS. Commit: d9c2ca3
/LLM/main/L0_MergeRequest_PR pipeline #25718 (Partly Tested) completed with status: 'SUCCESS'

@coderabbitai
Contributor

coderabbitai bot commented Jan 23, 2026

📝 Walkthrough

Walkthrough

This pull request introduces a cmake-format configuration and refactors the CUDA architecture handling system. The cuda_configuration.cmake module is redesigned with new public APIs for managing CUDA architectures on targets, while CMakeLists.txt files across kernel modules are systematically updated to use the new filter_source_cuda_architectures function, adopt OBJECT library patterns, and add public linkage to trtllm_gen_fmha_interface. Additionally, kernel metadata is updated to reference SM100F variants instead of SM100.

Changes

  • Configuration & Build System (.cmake-format.json, cpp/cmake/modules/cuda_configuration.cmake): New cmake-format configuration file with comprehensive defaults for parsing, formatting, linting, and markup. The CUDA configuration module is extensively refactored with new public functions (add_cuda_architectures, set_cuda_architectures, filter_source_cuda_architectures, setup_cuda_architectures) replacing the legacy setup; it introduces CMAKE_CUDA_* constants with PARENT_SCOPE propagation and enhanced architecture-filtering logic.

  • Top-Level CMake Updates (cpp/tensorrt_llm/common/CMakeLists.txt, cpp/tensorrt_llm/kernels/CMakeLists.txt): The common library now links PUBLIC trtllm_gen_fmha_interface; the kernels CMakeLists adds multiple subdirectories, removes the legacy filter_cuda_archs function, and establishes a PUBLIC link to trtllm_gen_fmha_interface.

  • Attention Kernel CMakeLists Refactoring (cpp/tensorrt_llm/kernels/contextFusedMultiHeadAttention/CMakeLists.txt, cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/CMakeLists.txt): Both refactored to use filter_source_cuda_architectures with architecture specifications; added PUBLIC linkage of trtllm_gen_fmha_interface; contextFusedMultiHeadAttention introduces a per-arch object library pattern with dynamic architecture selection.

  • Decoder & Batched GEMM CMakeLists (cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/CMakeLists.txt, cpp/tensorrt_llm/kernels/trtllmGenKernels/batchedGemm/CMakeLists.txt): Converted to OBJECT library targets with target_sources attachment; replaced filter_cuda_archs with filter_source_cuda_architectures; added trtllm_gen_fmha_interface as a public dependency.

  • GEMM Family CMakeLists (cpp/tensorrt_llm/kernels/trtllmGenKernels/gemm/CMakeLists.txt, cpp/tensorrt_llm/kernels/trtllmGenKernels/gemmGatedAct/CMakeLists.txt): Both refactored to the OBJECT library pattern with target_sources; replaced legacy filter_cuda_archs calls with filter_source_cuda_architectures specifying ARCHS and TARGET; gemmGatedAct adds POSITION_INDEPENDENT_CODE and CUDA_RESOLVE_DEVICE_SYMBOLS properties.

  • Kernel Interface & Config Headers (cpp/tensorrt_llm/kernels/trtllmGenKernels/batchedGemm/trtllmGen_bmm_export/BatchedGemmInterface.h, cpp/tensorrt_llm/kernels/trtllmGenKernels/gemm/trtllmGen_gemm_export/GemmInterface.h, cpp/tensorrt_llm/kernels/trtllmGenKernels/gemmGatedAct/trtllmGen_gatedAct_export/GemmGatedActInterface.h): All three interface headers updated to compute getNumGemmConfigs/getNumBatchedGemmConfigs using array size division (sizeof array / sizeof element) instead of returning dedicated length constants.

  • Kernel Traits Headers (cpp/tensorrt_llm/kernels/trtllmGenKernels/batchedGemm/trtllmGen_bmm_export/KernelTraits.h, cpp/tensorrt_llm/kernels/trtllmGenKernels/gemm/trtllmGen_gemm_export/KernelTraits.h, cpp/tensorrt_llm/kernels/trtllmGenKernels/gemmGatedAct/trtllmGen_gatedAct_export/KernelTraits.h): Added default member initializers ({}) to public data members (mMmaKind, mFuseUtccpWithUtcmma, mUseMaxTmemOverlap, mNumEpilogueWarps) to ensure zero-initialization in default construction.

  • FMHA & GemmGatedAct SM Configuration (cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha/fmhaKernels.h, cpp/tensorrt_llm/kernels/trtllmGenKernels/gemmGatedAct/trtllmGen_gatedAct_export/KernelMetaInfo.h): fmhaKernels.h changes the preprocessor guard from EXCLUDE_SM_100/103 to EXCLUDE_SM_100F; KernelMetaInfo.h updates all kernel extern declarations and configuration-list entries from sm100/Sm100a variants to sm100f/Sm100f, affecting ~30 symbol references.
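As a minimal sketch of the OBJECT-library pattern described above (target and source names are hypothetical, and the filter_source_cuda_architectures keywords follow the summary rather than a confirmed signature):

# Hypothetical OBJECT library carrying arch-conditional cubin sources.
add_library(gemm_gated_act_objs OBJECT)
target_sources(gemm_gated_act_objs PRIVATE gemm_gated_act_sm100f.cu)
set_target_properties(gemm_gated_act_objs PROPERTIES
  POSITION_INDEPENDENT_CODE ON
  CUDA_RESOLVE_DEVICE_SYMBOLS ON)
# Compile the attached sources only for matching sm10x architectures.
filter_source_cuda_architectures(TARGET gemm_gated_act_objs ARCHS 100f 103a)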

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 40.00%, which is insufficient; the required threshold is 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Title check (✅ Passed): The PR title clearly describes the main change: introducing proper conditional compilation handling for SM10x cubins by replacing filter_cuda_archs with filter_source_cuda_architectures.
  • Description check (✅ Passed): The PR description adequately explains the issue (incorrect conditional compilation of cubins) and the solution (the new filter_source_cuda_architectures utility), with a concrete before/after behavior example.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/cmake/modules/cuda_configuration.cmake (1)

1-3: Update copyright year.

Per coding guidelines, the copyright header should include the year of the latest meaningful modification. This file has significant changes, so the year range should be updated.

Suggested fix
 #
-# SPDX-FileCopyrightText: Copyright (c) 1993-2022 NVIDIA CORPORATION &
+# SPDX-FileCopyrightText: Copyright (c) 1993-2026 NVIDIA CORPORATION &
 # AFFILIATES. All rights reserved. SPDX-License-Identifier: Apache-2.0
🧹 Nitpick comments (1)
cpp/cmake/modules/cuda_configuration.cmake (1)

382-403: Implicit ordering dependency on setup_cuda_architectures.

The variables CMAKE_CUDA_MIN_ARCHITECTURE_HAS_ACCEL and CMAKE_CUDA_MIN_ARCHITECTURE_HAS_FAMILY are set with PARENT_SCOPE here, but are later used by add_cuda_architectures (line 486), set_cuda_architectures (line 532), and filter_source_cuda_architectures (line 635). If these functions are called before setup_cuda_architectures, they will fail silently or produce incorrect results.

Consider adding a guard check at the start of dependent functions to provide a clear error message if the prerequisites aren't met.

Example guard for dependent functions
# At the start of add_cuda_architectures, set_cuda_architectures, 
# and filter_source_cuda_architectures:
if(NOT DEFINED CMAKE_CUDA_MIN_ARCHITECTURE_HAS_ACCEL)
  message(FATAL_ERROR 
    "setup_cuda_architectures() must be called before using this function")
endif()

@tongyuantongyu
Member Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #33360 [ run ] triggered by Bot. Commit: 8dc347f

@tensorrt-cicd
Collaborator

PR_Github #33360 [ run ] completed with state SUCCESS. Commit: 8dc347f
/LLM/main/L0_MergeRequest_PR pipeline #25750 completed with status: 'SUCCESS'

