
Conversation

@JintaoPengCS
Collaborator

@JintaoPengCS JintaoPengCS commented Jan 20, 2026

Summary by CodeRabbit

  • Refactor
    • Reorganized internal operator registration for RMS norm quantization across modules
    • Simplified RMS norm module to consistently use optimized fused computation
    • Streamlined tensor device and shape handling logic
    • Removed conditional runtime checks, reducing code complexity


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
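
As an illustration, a run restricted to a single stage with fail-fast disabled could be requested with the comment below (the stage name is the example used in the help text above, not one specific to this PR). As noted above, stage-limited runs do not update the GitHub check status.

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"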

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

@JintaoPengCS JintaoPengCS requested review from a team as code owners January 20, 2026 12:27
@JintaoPengCS JintaoPengCS requested review from hyukn and yuxianq January 20, 2026 12:27
@coderabbitai
Contributor

coderabbitai bot commented Jan 20, 2026

📝 Walkthrough

The changes reorganize the fake Torch operator registration for trtllm::fused_add_rms_norm_quant by moving it from torch_custom_ops.py to cpp_custom_ops.py, and refactor the RMSNorm module to enforce an always-fused NVFP4 quantization pathway, removing conditional checks and fallback code paths.
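
For context, registering a fake (meta) implementation for a C++-defined custom op generally looks like the sketch below. The argument list and output shapes here are illustrative assumptions only; the actual trtllm::fused_add_rms_norm_quant schema is defined by the C++ extension, and its fake implementation now lives in cpp_custom_ops.py.

import torch

# Hypothetical schema, for illustration only; the real op is defined by the
# C++ extension.
@torch.library.register_fake("trtllm::fused_add_rms_norm_quant")
def _fused_add_rms_norm_quant_fake(input, residual, weight, scale, eps):
    # A fake implementation only describes output metadata (shape, dtype,
    # device) so tracing and torch.compile can reason about the custom kernel
    # without running it.
    quant_out = input.new_empty(input.shape, dtype=torch.uint8)
    sf_out = input.new_empty((input.shape[0], input.shape[-1] // 16),
                             dtype=torch.uint8)
    return quant_out, sf_out, torch.empty_like(residual)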

Changes

Cohort / File(s) / Summary

  • Fake operator registration reorganization
    Files: tensorrt_llm/_torch/custom_ops/cpp_custom_ops.py, tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
    Summary: Moved the _fused_add_rms_norm_quant_fake implementation from torch_custom_ops.py (-19 lines) to cpp_custom_ops.py (+20 lines). The implementation remains functionally equivalent.

  • RMSNorm module simplification
    Files: tensorrt_llm/_torch/modules/rms_norm.py
    Summary: Removed the conditional fused-path gatekeeping and the unfused fallback path; the module now always uses the fused NVFP4 quantization path. Replaced dynamic dtype casting with strict contiguous-with-dtype validation. Simplified device/shape handling and enforces device consistency via a runtime error. Changed the operator import to a direct torch.ops.trtllm reference. Removed logger usage. (A rough sketch of the simplified call path follows below.)
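
For orientation, a rough sketch of the simplified fused call path is shown below. Only the operator name, the nvfp4_scale validation, the device-consistency error, and the sf_scale handling are taken from this PR's discussion; the argument order and return handling of the fused op are assumptions, not the actual code in rms_norm.py.

import torch

def _nvfp4_fused_rms_norm(hs_2d, residual, weight, nvfp4_scale, eps):
    # Illustrative only; the real module validates and reshapes its inputs
    # before reaching this point.
    if nvfp4_scale is None:
        raise ValueError("NVFP4 fused RMSNorm requires a non-None nvfp4_scale.")
    if hs_2d.device != weight.device:
        raise RuntimeError("hidden_states and weight must be on the same device.")
    sf_scale = nvfp4_scale.contiguous()  # already validated, no None check needed
    # Always-fused path, invoked through the direct torch.ops.trtllm reference.
    return torch.ops.trtllm.fused_add_rms_norm_quant(
        hs_2d, residual, weight, sf_scale, eps)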

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Description check: ⚠️ Warning. The PR description contains only the template placeholder text, with no actual description, test coverage details, or completed checklist items. Resolution: fill in the Description section explaining what was changed and why, list the relevant tests in the Test Coverage section, and complete the PR checklist, noting which guidelines, tests, and documentation updates were addressed.

✅ Passed checks (2 passed)
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate, so the docstring coverage check was skipped.
  • Title check: ✅ Passed. The title accurately describes the primary change: reorganizing the RMSNorm custom operator implementation by moving the fake registration to cpp_custom_ops.py and updating the calling code in rms_norm.py.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In tensorrt_llm/_torch/modules/rms_norm.py:
- Around lines 102-105: The ValueError raised in the RMSNorm NVFP4 fused path passes a `key=` keyword argument, which ValueError does not accept, so the raise itself will fail with a TypeError. Pass only positional arguments: fold the key into the error message (e.g., append `rmsnorm_nvfp4_cast_{key}` to the existing f-string "RMSNorm NVFP4 fused path: casting {key} from {t.dtype} to {hs_2d.dtype}."), or emit a logging call before raising if structured metadata is required.
- Around lines 99-107: In _ensure_contiguous_with_dtype, the assignment `t = t.to(dtype=hs_2d.dtype)` is unreachable because the preceding raise ValueError always fires on a dtype mismatch. Either remove the dead assignment and keep the strict raise-on-mismatch behavior, or, if lenient behavior was intended, replace the raise with a warning and then perform the cast. A sketch of the strict variant follows these items.
🧹 Nitpick comments (1)
tensorrt_llm/_torch/modules/rms_norm.py (1)

118-119: Redundant None check - nvfp4_scale is already validated.

Since lines 85-88 raise ValueError when nvfp4_scale is None, the ternary check here is unnecessary—nvfp4_scale is guaranteed to be non-None at this point.

♻️ Suggested simplification
-                sf_scale = nvfp4_scale.contiguous(
-                ) if nvfp4_scale is not None else None
+                sf_scale = nvfp4_scale.contiguous()

@JintaoPengCS JintaoPengCS changed the title [None][Feat] tensorrt_llm: update RMSNorm custom op plumbing [None][Feat] tensorrt llm update RMSNorm custom op plumbing Jan 20, 2026
@JintaoPengCS JintaoPengCS changed the title [None][Feat] tensorrt llm update RMSNorm custom op plumbing [None][feat] Adding torch ext API for FusedAddRMSNormQuant kernel1 Jan 20, 2026
@JintaoPengCS JintaoPengCS changed the title [None][feat] Adding torch ext API for FusedAddRMSNormQuant kernel1 [None][feat] Update RMSNorm custom op plumbing Jan 20, 2026
@JintaoPengCS JintaoPengCS changed the title [None][feat] Update RMSNorm custom op plumbing [None][fix] Update RMSNorm custom op plumbing Jan 20, 2026
@JintaoPengCS
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #32748 [ run ] triggered by Bot. Commit: e48303c

Squashed changes:
- update RMSNorm custom op plumbing
- tweak fake op stub name
- fix NVFP4 RMSNorm dtype-cast warning

Signed-off-by: jintaop <jintaop@nvidia.com>
Tidy up NVFP4 RMSNorm fused-path code formatting and remove stray blank
lines in torch custom ops.

Signed-off-by: jintaop <jintaop@nvidia.com>
@JintaoPengCS
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #32762 [ run ] triggered by Bot. Commit: 0498421

@tensorrt-cicd
Collaborator

PR_Github #32762 [ run ] completed with state SUCCESS. Commit: 0498421
/LLM/main/L0_MergeRequest_PR pipeline #25357 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@JintaoPengCS
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #32820 [ run ] triggered by Bot. Commit: 0498421

JintaoPengCS and others added 3 commits January 21, 2026 09:49
Avoid importing fused_add_rms_norm_quant inside RMSNorm; call it via torch.ops.trtllm.

Signed-off-by: jintaop <jintaop@nvidia.com>
Use has_residual to clarify when residual was provided, preserving tuple return behavior.

Signed-off-by: jintaop <jintaop@nvidia.com>
@JintaoPengCS JintaoPengCS requested a review from hyukn January 21, 2026 02:10
@JintaoPengCS
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #32844 [ run ] triggered by Bot. Commit: 981ca8d

Signed-off-by: jintaop <jintaop@nvidia.com>
@JintaoPengCS
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #32869 [ run ] triggered by Bot. Commit: e5d7e81

@tensorrt-cicd
Collaborator

PR_Github #32869 [ run ] completed with state SUCCESS. Commit: e5d7e81
/LLM/main/L0_MergeRequest_PR pipeline #25427 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@JintaoPengCS
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #32931 [ run ] triggered by Bot. Commit: e5d7e81

@tensorrt-cicd
Collaborator

PR_Github #32931 [ run ] completed with state SUCCESS. Commit: e5d7e81
/LLM/main/L0_MergeRequest_PR pipeline #25472 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@JintaoPengCS
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #33031 [ run ] triggered by Bot. Commit: 030e0ca

@tensorrt-cicd
Collaborator

PR_Github #33031 [ run ] completed with state SUCCESS. Commit: 030e0ca
/LLM/main/L0_MergeRequest_PR pipeline #25538 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@JintaoPengCS
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #33146 [ run ] triggered by Bot. Commit: 030e0ca

@tensorrt-cicd
Collaborator

PR_Github #33146 [ run ] completed with state SUCCESS. Commit: 030e0ca
/LLM/main/L0_MergeRequest_PR pipeline #25619 completed with status: 'SUCCESS'

@JintaoPengCS JintaoPengCS merged commit 9beb971 into NVIDIA:main Jan 22, 2026
5 checks passed
greg-kwasniewski1 pushed a commit to nv-auto-deploy/TensorRT-LLM that referenced this pull request Jan 22, 2026
Signed-off-by: jintaop <jintaop@nvidia.com>