MONet Bundle Integration into MONAI Deploy#574

Open
SimoneBendazzoli93 wants to merge 12 commits into Project-MONAI:main from SimoneBendazzoli93:main

Conversation


@SimoneBendazzoli93 SimoneBendazzoli93 commented Dec 12, 2025

This PR introduces support for the MONet Bundle (an nnUNet wrapper for the MONAI Bundle) into MONAI Deploy.

Key Features:

  • Added a new operator: MONetBundleInferenceOperator, extending MonaiBundleInferenceOperator

  • Included an example application demonstrating spleen segmentation using the MONetBundleInferenceOperator

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced MONetBundleInferenceOperator, a specialized inference operator for MONet Bundle nnUNet-style models with multimodal input support and automatic model validation.
  • Bug Fixes

    • Fixed YAML file extension handling in bundle configuration reading.
    • Improved metadata handling to ensure consistent dictionary validation.

@sonarqubecloud

Quality Gate failed

Failed conditions
7.3% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud

Contributor

This breaks the existing project structure by introducing a new folder, 'devel'; could this not be part of a file under the 'operators' folder? Also, could you add some links in the docstrings on how to generate the MONet bundle? Does this currently support all versions of nnUNet?

Author

The devel folder was accidentally included in the PR. I have now removed it and also added some references to the MONet Bundle in the docstrings.

SimoneBendazzoli93 and others added 10 commits January 26, 2026 11:01
- Included MONetBundleInferenceOperator in the __init__.py file for operator registration.
- Updated import statements to reflect the addition of the new operator.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Corrected the bundle suffixes tuple to include a period before 'yml'.
- Fixed a method call to ensure casefold() is invoked correctly.
- Initialized meta_data to an empty dictionary if not provided.

These changes enhance code clarity and prevent potential runtime errors.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Introduced a new operator, MONetBundleInferenceOperator, for performing inference using the MONet bundle.
- Extended functionality from MonaiBundleInferenceOperator to support nnUNet-specific configurations.
- Implemented methods for initializing configurations and performing predictions with multimodal data handling.

This addition enhances the inference capabilities within the MONAI framework.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Introduced a new file containing the implementation of the MONetBundleInferenceOperator.
- This operator extends the MonaiBundleInferenceOperator to facilitate inference with nnUNet-specific configurations.
- Implemented methods for configuration initialization and multimodal data prediction, enhancing the MONAI framework's inference capabilities.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Registered MONetBundleInferenceOperator in the __init__.py file to ensure it is included in the module's public API.
- This change facilitates easier access to the operator for users of the MONAI framework.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
… tested alone (Project-MONAI#573)

* Added saving of decoded pixels for in-depth review if needed

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Fixed linting complaints

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Fixed the code and improved the tests; failing tests still to be addressed.

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Force YBR for JPEG baseline, and test nvimgcodec without any default decoders

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Critical changes to make uncompressed images match pydicom default decoders.

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Removed support for 12bit "JPEG Extended, Process 2+4"

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Address review comments including from AI agent

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Added reason for ignoring dcm files known to fail to uncompress

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Updated the notes on perf test results

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Explicitly minimized lazy loading impact and added comments on it.

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Updated doc sentences

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Editorial changes made to comments

Signed-off-by: M Q <mingmelvinq@nvidia.com>

---------

Signed-off-by: M Q <mingmelvinq@nvidia.com>
Signed-off-by: Simone Bendazzoli <simben@kth.se>
* Release v3.5.0

Signed-off-by: M Q <mingmelvinq@nvidia.com>

* Bump version: 3.4.0 → 3.5.0

Signed-off-by: M Q <mingmelvinq@nvidia.com>

---------

Signed-off-by: M Q <mingmelvinq@nvidia.com>
Signed-off-by: Simone Bendazzoli <simben@kth.se>
…mplementation of the MONetBundleInferenceOperator. This deletion simplifies the codebase by eliminating unused or redundant components.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Enhanced the docstring for MONetBundleInferenceOperator to include a reference to the MONet bundle repository and provide additional context on its functionality.
- This update improves clarity for users regarding the operator's purpose and usage.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Improved the type checking for the model_network parameter to enhance readability and maintainability.
- Adjusted formatting in the predict method for better clarity and consistency in multimodal data handling.
- These changes contribute to cleaner code and improved functionality within the MONAI framework.

Signed-off-by: Simone Bendazzoli <simben@kth.se>
- Integrated TritonModel type checking into the MONetBundleInferenceOperator to enhance model compatibility.
- Updated the predict method to retain metadata from input data, improving the output structure for predictions.

These changes improve the operator's functionality and usability within the MONAI framework.
@coderabbitai
Copy link

coderabbitai bot commented Feb 17, 2026

Walkthrough

A new MONetBundleInferenceOperator class is introduced, extending MonaiBundleInferenceOperator to support nnUNet-style inference with multimodal input handling. Minor fixes correct YAML file extension handling and metadata validation. The operator is exposed in the public API.

Changes

Cohort / File(s) Summary
MONetBundleInferenceOperator Implementation
monai/deploy/operators/monet_bundle_inference_operator.py
New operator class extending MonaiBundleInferenceOperator with nnUNet predictor initialization, model network validation, and predict method supporting multimodal inputs via ResampleToMatch and ConcatItemsd composition.
MONetBundleInferenceOperator Export
monai/deploy/operators/__init__.py
Imports MONetBundleInferenceOperator and adds it to all and autosummary, exposing the new operator as part of the public API.
Bundle Configuration Fixes
monai/deploy/operators/monai_bundle_inference_operator.py
Corrects bundle_suffixes YAML extension from "yml" to ".yml"; ensures meta_data is guaranteed to be a dict via null-coalescing assignment.
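The metadata fix described above reduces to a null-coalescing default; a minimal stdlib sketch (the function name and keys below are illustrative, not the actual operator API):

```python
def load_with_meta(image, meta_data=None):
    # Guarantee meta_data is a dict so downstream dict operations
    # (e.g. .get()/.update()) never hit None -- the fix described above.
    meta_data = meta_data or {}
    meta_data.setdefault("source", "bundle")
    return image, meta_data

_, meta = load_with_meta("img.nii.gz")
# meta == {"source": "bundle"}
```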

Sequence Diagram

sequenceDiagram
    actor Client
    participant MONetOp as MONetBundleInferenceOperator
    participant Predictor as nnUNet Predictor
    participant Transform as ResampleToMatch/<br/>ConcatItemsd

    Client->>MONetOp: predict(data, **kwargs)
    MONetOp->>MONetOp: _set_model_network(model)
    alt Multimodal kwargs provided
        MONetOp->>Transform: Apply ResampleToMatch to extra modalities
        Transform-->>MONetOp: Transformed modalities
        MONetOp->>Transform: ConcatItemsd to combine into "image" key
        Transform-->>MONetOp: Multimodal input
    end
    MONetOp->>MONetOp: Ensure batch dimension for 4D tensors
    MONetOp->>Predictor: Execute predictor with input
    Predictor-->>MONetOp: Prediction output
    MONetOp->>MONetOp: Propagate input meta to prediction
    MONetOp-->>Client: Return prediction with metadata
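The multimodal branch in the diagram above can be sketched with a stdlib-only mimic (plain lists stand in for MetaTensor volumes, and the resample/stack helpers stand in for MONAI's ResampleToMatch and ConcatItemsd — this is an illustration of the control flow, not the operator's code):

```python
def predict_multimodal(data, **kwargs):
    """Stdlib mimic of the operator's multimodal merge: extra modality
    arrays are 'resampled' to the primary image shape, then stacked as
    channels. Lists of numbers stand in for MetaTensor volumes."""
    def resample_to_match(src, dst):
        # Placeholder for ResampleToMatch: pad/truncate to the
        # destination length.
        return (src + [0] * len(dst))[: len(dst)]

    modalities = {"image": data}
    for key, value in kwargs.items():
        if isinstance(value, list):  # stands in for the MetaTensor check
            modalities[key] = resample_to_match(value, data)
    # Stands in for ConcatItemsd: stack modalities into channels.
    return [modalities[k] for k in modalities]

channels = predict_multimodal([1, 2, 3], t2=[4, 5], meta="ignored")
# channels == [[1, 2, 3], [4, 5, 0]]
```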

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 Hop, hop! A new MONet arrives,
With nnUNet models that truly thrive!
Multimodal inputs blend with grace,
Bundle inference finds its place. ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 71.43%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title accurately reflects the main objective of the pull request: introducing MONet Bundle support into MONAI Deploy through a new operator and example application.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
monai/deploy/operators/monai_bundle_inference_operator.py (1)

149-149: ⚠️ Potential issue | 🔴 Critical

Bug: Missing leading dot on "yml" suffix in _read_directory_bundle_config.

bundle_suffixes here has "yml" without a leading dot, so constructing f"{config_name_base}{suffix}" at Line 170 would produce e.g. "inferenceyml" instead of "inference.yml". The archive-based reader at Line 189 was correctly fixed to ".yml", but this directory-based reader was missed.

🐛 Proposed fix
-    bundle_suffixes = (".json", ".yaml", "yml")  # The only supported file ext(s)
+    bundle_suffixes = (".json", ".yaml", ".yml")  # The only supported file ext(s)
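A quick stdlib demonstration of why the missing dot matters when filenames are built by simple concatenation:

```python
config_name_base = "inference"
bad_suffixes = (".json", ".yaml", "yml")    # "yml" is missing its dot
good_suffixes = (".json", ".yaml", ".yml")

bad_names = [f"{config_name_base}{suffix}" for suffix in bad_suffixes]
good_names = [f"{config_name_base}{suffix}" for suffix in good_suffixes]
# bad_names[-1] == "inferenceyml" (never matches a real config file)
# good_names[-1] == "inference.yml"
```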
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/deploy/operators/monai_bundle_inference_operator.py` at line 149, In
_read_directory_bundle_config the bundle_suffixes tuple is missing a leading dot
for "yml", causing f"{config_name_base}{suffix}" to produce filenames like
"inferenceyml"; update bundle_suffixes in monai_bundle_inference_operator.py to
include the leading dot (".yml") so that filenames built by config_name_base +
suffix are correct; confirm the change in the _read_directory_bundle_config
function where config_name_base and suffix are concatenated.
🧹 Nitpick comments (2)
monai/deploy/operators/monet_bundle_inference_operator.py (2)

90-95: Non-MetaTensor kwargs (e.g. from base class) are silently dropped from multimodal data.

The base class compute passes **other_inputs to predict, which may include non-tensor entries. The if len(kwargs) > 0 guard enters the multimodal path for any kwargs, but only MetaTensor values are added to multimodal_data. Non-MetaTensor kwargs are silently ignored. Consider filtering kwargs more explicitly — e.g. only enter multimodal path if there are actually MetaTensor values:

Proposed fix
-        if len(kwargs) > 0:
-            multimodal_data = {"image": data}
-            for key in kwargs.keys():
-                if isinstance(kwargs[key], MetaTensor):
-                    multimodal_data[key] = ResampleToMatch(mode="bilinear")(kwargs[key], img_dst=data)
-            data = ConcatItemsd(keys=list(multimodal_data.keys()), name="image")(multimodal_data)["image"]
+        meta_tensor_kwargs = {k: v for k, v in kwargs.items() if isinstance(v, MetaTensor)}
+        if meta_tensor_kwargs:
+            multimodal_data = {"image": data}
+            for key, value in meta_tensor_kwargs.items():
+                multimodal_data[key] = ResampleToMatch(mode="bilinear")(value, img_dst=data)
+            data = ConcatItemsd(keys=list(multimodal_data.keys()), name="image")(multimodal_data)["image"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/deploy/operators/monet_bundle_inference_operator.py` around lines 90 -
95, The multimodal branch currently triggers for any kwargs but only adds
MetaTensor values and silently drops others; change the logic in predict
(referencing kwargs, multimodal_data, MetaTensor, ResampleToMatch, ConcatItemsd)
to first filter kwargs for MetaTensor entries (e.g., meta_kwargs = {k:v for k,v
in kwargs.items() if isinstance(v, MetaTensor)}), only enter the multimodal path
when meta_kwargs is non-empty, build multimodal_data from meta_kwargs
(resampling via ResampleToMatch and concatenating with ConcatItemsd) and leave
non-MetaTensor kwargs untouched so compute/predict (and other_inputs) still
receive them.

17-17: Hard import of monai.transforms breaks the optional_import pattern used elsewhere.

The base operator and this file use optional_import for torch and MetaTensor, but ConcatItemsd and ResampleToMatch are imported directly. If monai is not installed (or partially installed), this will raise ImportError at module load time rather than deferring it to usage.

Proposed fix
-from monai.transforms import ConcatItemsd, ResampleToMatch
+ConcatItemsd, _ = optional_import("monai.transforms", name="ConcatItemsd")
+ResampleToMatch, _ = optional_import("monai.transforms", name="ResampleToMatch")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/deploy/operators/monet_bundle_inference_operator.py` at line 17,
Replace the hard import of ConcatItemsd and ResampleToMatch with the same
optional_import pattern used for torch/MetaTensor: use
optional_import("monai.transforms") to get the transforms module (or None), then
assign ConcatItemsd = transforms.ConcatItemsd and ResampleToMatch =
transforms.ResampleToMatch if transforms is not None; if they are None, ensure
any code that uses ConcatItemsd/ResampleToMatch checks for None and raises a
clear ImportError or defers functionality until monai is available. Reference
the symbols ConcatItemsd and ResampleToMatch in
monet_bundle_inference_operator.py and follow the existing optional_import usage
style in the file for consistency.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@monai/deploy/operators/monet_bundle_inference_operator.py`:
- Line 1: Update the file header in monet_bundle_inference_operator.py to
correct the copyright year: replace the incorrect "2002" with "2025" in the
top-of-file comment so the copyright line accurately reflects the current year.
- Around line 26-46: Remove the duplicated sentence in the module/class
docstring that repeats "A specialized operator for performing inference using
the MONet bundle"; edit the docstring (above MonetBundleInferenceOperator / the
class definition containing _init_config and predict) to keep a single, coherent
opening sentence, preserve the rest of the docstring content and formatting
(attributes/methods sections), and ensure the triple-quoted string remains
properly closed and PEP257-style spacing is preserved.
- Around line 58-64: The _init_config implementation is re-parsing the bundle
and overwriting self._parser after calling super()._init_config, which causes
double I/O and a mismatch with objects the parent initialized (e.g.,
self._device, self._inferer, self._preproc, self._postproc); remove the extra
get_bundle_config call and instead reuse the parser the parent already created
(use self._parser) to obtain network_def via
self._parser.get_parsed_content("network_def") and assign that to
self._nnunet_predictor without reassigning self._parser.
- Around line 75-81: The runtime type-check block for model_network is using
torch.jit.isinstance (meant for TorchScript refinement) which is incorrect for
eager Python; replace torch.jit.isinstance(model_network,
torch.jit.ScriptModule) with the standard isinstance(model_network,
torch.jit.ScriptModule) in the validation that checks model_network in the
MonetBundleInferenceOperator (the block referencing model_network,
torch.jit.ScriptModule, TorchScriptModel, TritonModel) so the condition uses
only Python isinstance checks and the TypeError remains unchanged.

---

Outside diff comments:
In `@monai/deploy/operators/monai_bundle_inference_operator.py`:
- Line 149: In _read_directory_bundle_config the bundle_suffixes tuple is
missing a leading dot for "yml", causing f"{config_name_base}{suffix}" to
produce filenames like "inferenceyml"; update bundle_suffixes in
monai_bundle_inference_operator.py to include the leading dot (".yml") so that
filenames built by config_name_base + suffix are correct; confirm the change in
the _read_directory_bundle_config function where config_name_base and suffix are
concatenated.

---

Nitpick comments:
In `@monai/deploy/operators/monet_bundle_inference_operator.py`:
- Around line 90-95: The multimodal branch currently triggers for any kwargs but
only adds MetaTensor values and silently drops others; change the logic in
predict (referencing kwargs, multimodal_data, MetaTensor, ResampleToMatch,
ConcatItemsd) to first filter kwargs for MetaTensor entries (e.g., meta_kwargs =
{k:v for k,v in kwargs.items() if isinstance(v, MetaTensor)}), only enter the
multimodal path when meta_kwargs is non-empty, build multimodal_data from
meta_kwargs (resampling via ResampleToMatch and concatenating with ConcatItemsd)
and leave non-MetaTensor kwargs untouched so compute/predict (and other_inputs)
still receive them.
- Line 17: Replace the hard import of ConcatItemsd and ResampleToMatch with the
same optional_import pattern used for torch/MetaTensor: use
optional_import("monai.transforms") to get the transforms module (or None), then
assign ConcatItemsd = transforms.ConcatItemsd and ResampleToMatch =
transforms.ResampleToMatch if transforms is not None; if they are None, ensure
any code that uses ConcatItemsd/ResampleToMatch checks for None and raises a
clear ImportError or defers functionality until monai is available. Reference
the symbols ConcatItemsd and ResampleToMatch in
monet_bundle_inference_operator.py and follow the existing optional_import usage
style in the file for consistency.

@@ -0,0 +1,100 @@
# Copyright 2002 MONAI Consortium

⚠️ Potential issue | 🟡 Minor

Typo in copyright year: 2002 should likely be 2025.

Proposed fix
-# Copyright 2002 MONAI Consortium
+# Copyright 2025 MONAI Consortium
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/deploy/operators/monet_bundle_inference_operator.py` at line 1, Update
the file header in monet_bundle_inference_operator.py to correct the copyright
year: replace the incorrect "2002" with "2025" in the top-of-file comment so the
copyright line accurately reflects the current year.

Comment on lines +26 to +46
"""
A specialized operator for performing inference using the MONet bundle (https://github.com/minnelab/MONet-Bundle).
For more details, please refer to the [MONet-Bundle](https://github.com/minnelab/MONet-Bundle) repository.
A specialized operator for performing inference using the MONet bundle.
This operator extends the `MonaiBundleInferenceOperator` to support nnUNet-specific
configurations and prediction logic. It initializes the nnUNet predictor and provides
a method for performing inference on input data.

Attributes
----------
_nnunet_predictor : torch.nn.Module
The nnUNet predictor module used for inference.

Methods
-------
_init_config(config_names)
Initializes the configuration for the nnUNet bundle, including parsing the bundle
configuration and setting up the nnUNet predictor.
predict(data, *args, **kwargs)
Performs inference on the input data using the nnUNet predictor.
"""

⚠️ Potential issue | 🟡 Minor

Duplicate opening sentence in the docstring.

Lines 27–29 repeat the description ("A specialized operator for performing inference using the MONet bundle") — the second occurrence (Line 29) appears to be a leftover.

Proposed fix — remove the duplicate line
     """
     A specialized operator for performing inference using the MONet bundle (https://github.com/minnelab/MONet-Bundle).
     For more details, please refer to the [MONet-Bundle](https://github.com/minnelab/MONet-Bundle) repository.
-    A specialized operator for performing inference using the MONet bundle.
     This operator extends the `MonaiBundleInferenceOperator` to support nnUNet-specific
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/deploy/operators/monet_bundle_inference_operator.py` around lines 26 -
46, Remove the duplicated sentence in the module/class docstring that repeats "A
specialized operator for performing inference using the MONet bundle"; edit the
docstring (above MonetBundleInferenceOperator / the class definition containing
_init_config and predict) to keep a single, coherent opening sentence, preserve
the rest of the docstring content and formatting (attributes/methods sections),
and ensure the triple-quoted string remains properly closed and PEP257-style
spacing is preserved.

Comment on lines +58 to +64
def _init_config(self, config_names):

super()._init_config(config_names)
parser = get_bundle_config(str(self._bundle_path), config_names)
self._parser = parser

self._nnunet_predictor = parser.get_parsed_content("network_def")

⚠️ Potential issue | 🟠 Major

Double parsing: _init_config parses the bundle config twice.

super()._init_config(config_names) already calls get_bundle_config and assigns self._parser. Lines 61–62 immediately re-parse the same bundle and overwrite self._parser, discarding the one the parent just set up. This doubles the I/O and parsing work. More importantly, the parent's _init_config configures self._device, self._inferer, self._preproc, self._postproc, etc., all using the first parser. Overwriting self._parser afterward creates a divergence between those cached objects and the active parser.

If the intent is just to get network_def, you can use the parser that super() already set:

Proposed fix
     def _init_config(self, config_names):
 
         super()._init_config(config_names)
-        parser = get_bundle_config(str(self._bundle_path), config_names)
-        self._parser = parser
-
-        self._nnunet_predictor = parser.get_parsed_content("network_def")
+        self._nnunet_predictor = self._parser.get_parsed_content("network_def")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/deploy/operators/monet_bundle_inference_operator.py` around lines 58 -
64, The _init_config implementation is re-parsing the bundle and overwriting
self._parser after calling super()._init_config, which causes double I/O and a
mismatch with objects the parent initialized (e.g., self._device, self._inferer,
self._preproc, self._postproc); remove the extra get_bundle_config call and
instead reuse the parser the parent already created (use self._parser) to obtain
network_def via self._parser.get_parsed_content("network_def") and assign that
to self._nnunet_predictor without reassigning self._parser.

Comment on lines +75 to +81
if (
not isinstance(model_network, torch.nn.Module)
and not torch.jit.isinstance(model_network, torch.jit.ScriptModule)
and not isinstance(model_network, TorchScriptModel)
and not isinstance(model_network, TritonModel)
):
raise TypeError("model_network must be an instance of torch.nn.Module or torch.jit.ScriptModule")

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

torch.jit.isinstance PyTorch documentation usage in Python

💡 Result:

torch.jit.isinstance(obj, target_type) is a TorchScript-only isinstance used for type refinement, especially for parameterized container types like List[Tensor], Dict[str, str], Optional[...], Tuple[...]. It returns True if TorchScript can refine obj to target_type (and refines the type within that branch), otherwise False with no refinement. [1]

Example (from the PyTorch docs):

import torch
from typing import Any, Dict, List

class MyModule(torch.nn.Module):
    def forward(self, input: Any):
        if torch.jit.isinstance(input, List[torch.Tensor]):
            for t in input:
                y = t.clamp(0, 0.5)
        elif torch.jit.isinstance(input, Dict[str, str]):
            for val in input.values():
                print(val)

m = torch.jit.script(MyModule())
m([torch.rand(3,3), torch.rand(4,3)])
m({"key1":"val1","key2":"val2"})

[1]

Related: TorchScript’s language/type system docs discuss Any and isinstance checks in scripted code (note: TorchScript is deprecated in favor of torch.export). [2]

Sources: [1] [2]


🏁 Script executed:

# Search for torch.jit.isinstance usage in the codebase
rg "torch\.jit\.isinstance" -A 2 -B 2



🌐 Web query:

torch.jit.isinstance behavior non-scripted objects regular Python runtime error

💡 Result:

torch.jit.isinstance is meant for TorchScript type refinement (e.g., refining an Any to List[Tensor] inside torch.jit.script). In TorchScript it participates in the compiler/type system, not just a runtime check. [1]

In regular eager Python (non-scripted execution), it does not refine types; it falls back to a pure-Python helper (torch._jit_internal._isinstance) that tries to approximate checks for parameterized containers using typing.get_origin/get_args, and otherwise just calls Python’s isinstance. [2]

Why you might see a “regular Python runtime error” in eager mode

In eager mode, _isinstance will raise RuntimeError for misuse of the second argument, for example: [2]

  • Passing something that is a container but not a tuple of types (i.e., second argument must be a type or a tuple of types).
  • Passing an un-parameterized container type (e.g., List instead of List[int] / List[Tensor]): it errors because the element types are missing.

A common “gotcha” (not an error, but surprising behavior)

If the value being checked is an empty container ([], {}, ()), eager-mode _isinstance emits a warning that the “inner type is lost”, meaning checks like List[int] can incorrectly behave like list and return True for the wrong element type. [2]

Practical guidance: in eager-only code, prefer Python’s built-in isinstance for non-typing checks, and use torch.jit.isinstance only when you actually need TorchScript-friendly container-type checks (typically inside code you plan to torch.jit.script). [1]

Sources: PyTorch API docs for torch.jit.isinstance [1], and the PyTorch implementation of _jit_internal._isinstance (used by torch.jit.isinstance in eager mode) [2].


🏁 Script executed:

# Check the file context around the problematic code to understand its usage
cat -n monai/deploy/operators/monet_bundle_inference_operator.py | sed -n '65,85p'



Replace torch.jit.isinstance with standard isinstance for runtime type checking.

torch.jit.isinstance is designed for TorchScript type refinement in scripted code, not for regular Python type checks. In eager Python, using it for simple types like torch.jit.ScriptModule is a misuse of the API and can lead to unexpected behavior. Replace with the standard isinstance check.

Proposed fix
         if (
             not isinstance(model_network, torch.nn.Module)
-            and not torch.jit.isinstance(model_network, torch.jit.ScriptModule)
+            and not isinstance(model_network, torch.jit.ScriptModule)
             and not isinstance(model_network, TorchScriptModel)
             and not isinstance(model_network, TritonModel)
         ):
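A stdlib-only sketch of the recommended eager-mode check (the placeholder classes below stand in for torch.nn.Module, torch.jit.ScriptModule, TorchScriptModel, and TritonModel, which are not assumed here):

```python
# Placeholder types standing in for the real model classes -- torch and
# the MONAI Deploy model wrappers are not assumed in this sketch.
class ModulePlaceholder: ...
class ScriptModulePlaceholder: ...

ACCEPTED_TYPES = (ModulePlaceholder, ScriptModulePlaceholder)

def validate_model_network(model_network):
    # Plain isinstance with a tuple of types replaces the chain of
    # "not isinstance(...) and not ..." checks in the reviewed block.
    if not isinstance(model_network, ACCEPTED_TYPES):
        raise TypeError("model_network must be an instance of an accepted model type")
    return model_network

validate_model_network(ScriptModulePlaceholder())  # passes silently
```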
🧰 Tools
🪛 Ruff (0.15.0)

[warning] 81-81: Avoid specifying long messages outside the exception class

(TRY003)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@monai/deploy/operators/monet_bundle_inference_operator.py` around lines 75 -
81, The runtime type-check block for model_network is using torch.jit.isinstance
(meant for TorchScript refinement) which is incorrect for eager Python; replace
torch.jit.isinstance(model_network, torch.jit.ScriptModule) with the standard
isinstance(model_network, torch.jit.ScriptModule) in the validation that checks
model_network in the MonetBundleInferenceOperator (the block referencing
model_network, torch.jit.ScriptModule, TorchScriptModel, TritonModel) so the
condition uses only Python isinstance checks and the TypeError remains
unchanged.
