
Conversation

@aymuos15

Summary

Addresses #6753

This PR adds an in_memory option to PersistentDataset that combines the benefits of persistent disk storage with fast RAM access:

  • First access: Data is loaded from disk cache (or preprocessed if not cached) and stored in RAM
  • Subsequent accesses: Data is served directly from RAM

This provides the best of both worlds:

  • vs default PersistentDataset: No repeated disk reads during training epochs
  • vs CacheDataset: No reprocessing on script restart

Changes

  • Add in_memory: bool = False parameter to PersistentDataset
  • Add memory_cache_size property for inspecting cache state
  • Optimize _cachecheck() to compute cache key once
  • Clear memory cache when set_data() is called
  • Add comprehensive unit tests, including an automatic hybrid caching test

Usage

from monai.data import PersistentDataset

# Hybrid mode: disk persistence + RAM caching
ds = PersistentDataset(
    data=data,
    transform=transform,
    cache_dir="./cache",
    in_memory=True  # NEW
)
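
A quick illustration of the expected access pattern (a hypothetical session; memory_cache_size is the new inspection property added by this PR):

# First access: loaded from the disk cache (or computed), then kept in RAM
item = ds[0]
print(ds.memory_cache_size)  # 1

# Subsequent accesses: served directly from RAM, no disk read
item = ds[0]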

Test plan

  • Unit tests for in_memory cache functionality
  • Unit tests for cache_dir=None (pure RAM mode)
  • Unit tests for automatic hybrid caching behavior
  • Verified set_data() clears memory cache
  • Benchmarked against default PersistentDataset and CacheDataset

- Add `in_memory` parameter to PersistentDataset that combines persistent
  disk storage with RAM caching for faster subsequent access
- Add `memory_cache_size` property for inspecting cache state
- Compute cache key once in _cachecheck() to avoid redundant hash computation
- Clear memory cache when set_data() is called
- Works with or without cache_dir (pure RAM cache mode)

Addresses: Project-MONAI#6753

Signed-off-by: Soumya Snigdha Kundu <soumya_snigdha.kundu@kcl.ac.uk>
@coderabbitai
Contributor

coderabbitai bot commented Jan 12, 2026

📝 Walkthrough

The pull request introduces an in-memory caching layer to PersistentDataset. A new in_memory parameter (default False) enables storage of cached items in a dictionary alongside disk-based caching. The memory cache is consulted before disk cache. A new memory_cache_size property reports current in-memory cache occupancy. Cache operations are updated to coordinate between both layers—memory cache clears on set_data, populates on cache reads or creations, and persists across data accesses. Three test cases validate hybrid caching, memory-only caching, and restart persistence scenarios.
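
In rough pseudocode, the lookup order described above (a sketch with a hypothetical helper, not the actual _cachecheck() body):

def _cachecheck_sketch(self, item):
    cache_key = self.hash_func(item)          # computed once per item
    if self.in_memory and cache_key in self._memory_cache:
        return self._memory_cache[cache_key]  # RAM hit, no disk read
    # hypothetical helper standing in for the disk-cache/transform logic
    result = self._load_from_disk_or_transform(item, cache_key)
    if self.in_memory:
        self._memory_cache[cache_key] = result
    return result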

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Title check: ✅ Passed. Title clearly and concisely summarizes the main change: adding an in_memory option for hybrid caching to PersistentDataset.
  • Description check: ✅ Passed. Description includes issue reference, clear explanation of changes, comprehensive test coverage, and usage example. Follows template structure with types of changes marked.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 90.00%, which is sufficient; the required threshold is 80.00%.


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @tests/data/test_persistentdataset.py:
- Around line 294-296: The test expects untransformed data but the used
transform class _InplaceXform mutates data by adding np.pi to data[0], so update
the assertion in the results loop to account for that transform: compute the
expected row as list(range(i)) then add np.pi to its first element (or use numpy
to build expected and add np.pi to expected[0,0]) and compare with results using
a numeric comparison (e.g., np.testing.assert_allclose) instead of plain
equality so cached, transformed values like [[np.pi, 1]] match the expectation.
🧹 Nitpick comments (3)
tests/data/test_persistentdataset.py (1)

231-234: Consider testing object identity for memory cache hits.

With in_memory caching, accessing the same index twice should return the same object reference. Using assertIs(result1, result2) would verify the cache is truly being used.
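
For example (a sketch, assuming ds is constructed with in_memory=True as in the tests):

result1 = ds[0]
result2 = ds[0]
self.assertIs(result1, result2)  # same object implies a memory-cache hit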

monai/data/dataset.py (2)

417-422: Static analysis hint: return in try block.

Ruff TRY300 suggests moving the return to an else block. Current code is correct, but restructuring would improve clarity by separating success path from exception handling.

Optional refactor
         if hashfile is not None and hashfile.is_file():  # cache hit
             try:
                 _item_transformed = torch.load(hashfile, weights_only=self.weights_only)
-                if self.in_memory:
-                    self._memory_cache[cache_key] = _item_transformed
-                return _item_transformed
             except PermissionError as e:
                 if sys.platform != "win32":
                     raise e
             except (UnpicklingError, RuntimeError) as e:  # corrupt or unloadable cached files are recomputed
                 if "Invalid magic number; corrupt file" in str(e) or isinstance(e, UnpicklingError):
                     warnings.warn(f"Corrupt cache file detected: {hashfile}. Deleting and recomputing.")
                     hashfile.unlink()
                 else:
                     raise e
+            else:
+                if self.in_memory:
+                    self._memory_cache[cache_key] = _item_transformed
+                return _item_transformed

444-461: Potential type inconsistency between disk and memory cache.

Line 445 converts to tensor before saving to disk with convert_to_tensor, but line 460 stores the original _item_transformed in memory. On reload, disk cache returns tensor types, but memory cache may have different types.

If consistency matters, store the converted version:

Proposed fix
+        _converted = convert_to_tensor(_item_transformed, convert_numeric=False, track_meta=self.track_meta)
         try:
             # NOTE: Writing to a temporary directory and then using a nearly atomic rename operation
             #       to make the cache more robust to manual killing of parent process
             #       which may leave partially written cache files in an incomplete state
             with tempfile.TemporaryDirectory() as tmpdirname:
                 temp_hash_file = Path(tmpdirname) / hashfile.name
                 torch.save(
-                    obj=convert_to_tensor(_item_transformed, convert_numeric=False, track_meta=self.track_meta),
+                    obj=_converted,
                     f=temp_hash_file,
                     pickle_module=look_up_option(self.pickle_module, SUPPORTED_PICKLE_MOD),
                     pickle_protocol=self.pickle_protocol,
                 )
                 if temp_hash_file.is_file() and not hashfile.is_file():
                     # On Unix, if target exists and is a file, it will be replaced silently if the user has permission.
                     # for more details: https://docs.python.org/3/library/shutil.html#shutil.move.
                     try:
                         shutil.move(str(temp_hash_file), hashfile)
                     except FileExistsError:
                         pass
         except PermissionError:  # project-monai/monai issue #3613
             pass
         if self.in_memory:
-            self._memory_cache[cache_key] = _item_transformed
+            self._memory_cache[cache_key] = _converted
         return _item_transformed
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 57fdd59 and e132236.

📒 Files selected for processing (2)
  • monai/data/dataset.py
  • tests/data/test_persistentdataset.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

⚙️ CodeRabbit configuration file

Review the Python code for quality and correctness. Ensure variable names adhere to PEP8 style guides and are sensible and informative in regards to their function, though permitting simple names for loop and comprehension variables. Ensure routine names are meaningful in regards to their function and use verbs, adjectives, and nouns in a semantically appropriate way. Docstrings should be present for all definitions and should describe each variable, return value, and raised exception in the appropriate section of the Google style of docstrings. Examine code for logical errors or inconsistencies, and suggest what may be changed to address these. Suggest any enhancements for code improving efficiency, maintainability, comprehensibility, and correctness. Ensure new or modified definitions will be covered by existing or new unit tests.

Files:

  • tests/data/test_persistentdataset.py
  • monai/data/dataset.py
🧬 Code graph analysis (1)
tests/data/test_persistentdataset.py (2)
monai/data/dataset.py (5)
  • memory_cache_size (311-313)
  • set_data (335-344)
  • set_data (642-648)
  • set_data (875-910)
  • set_data (1114-1128)
tests/data/test_gdsdataset.py (1)
  • _InplaceXform (65-68)
🪛 Ruff (0.14.10)
monai/data/dataset.py

422-422: Consider moving this statement to an else block

(TRY300)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
  • GitHub Check: packaging
  • GitHub Check: flake8-py3 (codeformat)
  • GitHub Check: flake8-py3 (mypy)
  • GitHub Check: quick-py3 (windows-latest)
  • GitHub Check: build-docs
  • GitHub Check: quick-py3 (ubuntu-latest)
  • GitHub Check: quick-py3 (macOS-latest)
  • GitHub Check: flake8-py3 (pytype)
  • GitHub Check: min-dep-pytorch (2.8.0)
  • GitHub Check: min-dep-py3 (3.12)
  • GitHub Check: min-dep-pytorch (2.5.1)
  • GitHub Check: min-dep-pytorch (2.7.1)
  • GitHub Check: min-dep-py3 (3.9)
  • GitHub Check: min-dep-py3 (3.10)
  • GitHub Check: min-dep-os (macOS-latest)
  • GitHub Check: min-dep-py3 (3.11)
  • GitHub Check: min-dep-os (windows-latest)
  • GitHub Check: min-dep-pytorch (2.6.0)
  • GitHub Check: min-dep-os (ubuntu-latest)
🔇 Additional comments (7)
tests/data/test_persistentdataset.py (2)

240-254: LGTM.

Pure RAM caching test correctly validates memory_cache_size behavior with cache_dir=None.


256-279: LGTM.

Good test coverage for the hybrid caching restart scenario. The use of Path.glob("*.pt") properly verifies disk cache population.

monai/data/dataset.py (5)

235-235: LGTM.

New in_memory parameter with sensible default False. Docstring is clear. Initialization is correct.

Also applies to: 277-280, 307-308


310-313: LGTM.

Simple property with appropriate docstring.
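
For reference, a minimal sketch of what such a property looks like (the actual implementation is at dataset.py lines 311-313):

@property
def memory_cache_size(self) -> int:
    """Number of items currently held in the in-memory cache."""
    return len(self._memory_cache)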


341-341: LGTM.

Memory cache correctly cleared on data update.


433-437: LGTM.

Pure RAM mode correctly stores transformed item when cache_dir=None.


413-422: Thread safety consideration for multi-worker DataLoader.

_memory_cache is a plain dict accessed without synchronization. With num_workers > 0 in DataLoader, each worker is a separate process with its own memory, so this works. However, if used with thread-based data loading or shared memory patterns, concurrent writes could race.

Current usage is safe for typical DataLoader patterns.
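
If thread-based loading ever needed to be supported, one option would be to guard cache writes with a lock (a sketch, not part of this PR):

import threading

# in __init__:
self._memory_cache_lock = threading.Lock()

# around each cache write:
with self._memory_cache_lock:
    self._memory_cache[cache_key] = _item_transformed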

Comment on lines +294 to +296
            # Verify: Results are correct
            for i, result in enumerate(results):
                self.assertEqual(result, [list(range(i))])
Contributor

⚠️ Potential issue | 🟠 Major

Assertion may fail due to transform behavior.

The _InplaceXform modifies data by adding np.pi to data[0] (line 60). The assertion self.assertEqual(result, [list(range(i))]) expects untransformed data, but cached results will have np.pi added.

For i=2, cached result would be [[np.pi, 1]], not [[0, 1]].

Proposed fix
-            for i, result in enumerate(results):
-                self.assertEqual(result, [list(range(i))])
+            for i, result in enumerate(results):
+                expected = [list(range(i))]
+                if expected[0]:
+                    expected[0][0] = expected[0][0] + np.pi
+                self.assertEqual(result, expected)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
             # Verify: Results are correct
             for i, result in enumerate(results):
-                self.assertEqual(result, [list(range(i))])
+                expected = [list(range(i))]
+                if expected[0]:
+                    expected[0][0] = expected[0][0] + np.pi
+                self.assertEqual(result, expected)

