[WIP][Newton] Migrate manager based workflow to warp #4480
hujc7 wants to merge 5 commits into isaac-sim:dev/newton from
Conversation
Force-pushed from d4796f8 to ef0e0f6
Force-pushed from 247d950 to 108a707
Training result (last step): iteration P1: 299/300; iteration P2: 299/300

| Section | Gap P1 | Σsub vs timed P1 (us/step) | Gap P2 | Σsub vs timed P2 (us/step) |
| --- | --- | --- | --- | --- |
| Step | -4.72% | 7941.26 vs 8334.90 | -13.46% | 1937.77 vs 2239.05 |
| Reset idx | -7.69% | 2763.60 vs 2993.70 | -61.19% | 133.11 vs 342.93 |
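The gap figures above are consistent with computing the relative difference between the summed per-section timers and the end-to-end step timer; a small check of that reading (the helper name is illustrative):

```python
def timer_gap(sum_sub_us: float, timed_us: float) -> float:
    """Relative gap between summed section timers and the end-to-end step timer, in percent."""
    return (sum_sub_us - timed_us) / timed_us * 100.0

# Reproduces the Step row for P1: (7941.26 - 8334.90) / 8334.90 * 100 ≈ -4.72
print(f"{timer_gap(7941.26, 8334.90):.2f}%")  # -4.72%
```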
Since the timer itself has overhead and also does a wp.sync, here is data with only the full step timed. P1: /home/jichuanh/workspaces/isaac/data/manager-based/cpv-torch-baseline-less.txt

Training result (last step): iteration P1: 299/300
Step gap P1: -100.00% (Σsub 0.00 us/step vs timed 6974.04 us/step)
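For context on why timing every section is not free, here is a generic sketch of a synchronizing section timer (illustrative only, not the isaaclab Timer implementation): each timed region forces a device sync so the wall-clock reading reflects finished GPU work, which serializes launches that would otherwise overlap.

```python
import time

import warp as wp


class SyncSectionTimer:
    """Hypothetical section timer that synchronizes the device around the timed region."""

    def __init__(self, name: str):
        self.name = name

    def __enter__(self):
        wp.synchronize()  # flush pending GPU work before starting the clock
        self.start = time.perf_counter()
        return self

    def __exit__(self, *exc):
        wp.synchronize()  # wait for the timed section's kernels to finish
        elapsed_us = (time.perf_counter() - self.start) * 1e6
        print(f"{self.name}: {elapsed_us:.2f} us")
```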
The convergence issue is fixed. However, with the event manager migrated, there is still a slower-to-converge stretch between mean episode lengths of 100-200, which is likely due to RNG usage: the torch implementation uses a single seed to sample events, while the warp version keeps a per-env RNG state, giving a slightly different distribution over short horizons. Below is the comparison of the torch run vs. the captured warp run.

Training result (last step): iteration P1: 299/300
Step gap P1: -100.00% (Σsub 0.00 us/step vs timed 8006.61 us/step)
Training convergence
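On the RNG point above, a minimal sketch of the two sampling patterns (the kernel below is illustrative, not the event manager's code): the torch path draws every event sample from one global generator, while the warp path derives one RNG state per environment, so short-horizon distributions differ even for the same seed.

```python
import torch
import warp as wp

num_envs, seed = 8, 42


@wp.kernel
def sample_per_env(seed: int, samples: wp.array(dtype=wp.float32)):
    # One RNG state per environment, derived from (seed, env_id).
    env_id = wp.tid()
    state = wp.rand_init(seed, env_id)
    samples[env_id] = wp.randf(state)


# Warp path: per-env random streams.
warp_samples = wp.zeros(num_envs, dtype=wp.float32)
wp.launch(sample_per_env, dim=num_envs, inputs=[seed, warp_samples])

# Torch path: a single global generator shared by all envs.
torch.manual_seed(seed)
torch_samples = torch.rand(num_envs)
```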
**Greptile Summary**

This PR migrates the manager-based RL workflow to use Warp (NVIDIA's Python framework for GPU-accelerated computing) for improved performance through CUDA graph capture. The implementation adds an experimental fork of the stable environment.

**Key Changes**

**Architecture**

The workflow uses persistent Warp buffers with stable pointers to enable CUDA graph capture; a rough sketch of this pattern follows the flowchart below. Manager terms write results in-place to pre-allocated buffers rather than returning values. Environment mask resolution uses shared scratch buffers.

**Previous Review Comments**

Several issues were identified in earlier review threads and are noted as known concerns. These are acknowledged limitations of the WIP implementation and do not block experimental development.

**Confidence Score: 3/5**
**Important Files Changed**
**Flowchart**

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
A[ManagerBasedRLEnvWarp] --> B[ManagerCallSwitch]
A --> C[Scene w/ Articulation]
A --> D[ActionManager]
A --> E[ObservationManager]
A --> F[RewardManager]
A --> G[TerminationManager]
A --> H[CommandManager]
A --> I[EventManager]
B -->|Mode 0| J[Stable Managers]
B -->|Mode 1| K[Warp Not Captured]
B -->|Mode 2| L[Warp CUDA Graph]
D --> M[JointEffortAction]
M --> N[Articulation.set_joint_effort_target]
E --> O[CircularBuffer History]
E --> P[Noise/Modifiers]
F --> Q[Per-term Warp Buffers]
F --> R[_reward_finalize Kernel]
G --> S[Boolean Masks]
G --> T[Episode Logging]
H --> U[RNG State Management]
I --> V[reset/interval/startup]
I --> W[reset_joints_by_offset]
N --> C
W --> C
style L fill:#90EE90
style J fill:#FFB6C1
style K fill:#FFE4B5
```

Last reviewed commit: a247e5d
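On the persistent-buffer architecture summarized above, a rough sketch of the Warp graph-capture pattern (generic kernel and buffer names, not the PR's code): buffers are allocated once with stable pointers, kernels write into them in place during capture, and the captured graph is replayed each step.

```python
import warp as wp


@wp.kernel
def scale_in_place(buf: wp.array(dtype=wp.float32), gain: float):
    i = wp.tid()
    buf[i] = buf[i] * gain  # write in place into a persistent buffer


device = "cuda:0"
buf = wp.zeros(4096, dtype=wp.float32, device=device)  # allocated once, pointer stays stable

# Capture: no allocations inside this region, only launches into pre-allocated buffers.
with wp.ScopedCapture(device) as capture:
    wp.launch(scale_in_place, dim=buf.shape[0], inputs=[buf, 0.99], device=device)

# Replay the captured graph every step without re-recording the launches.
for _ in range(3):
    wp.capture_launch(capture.graph)
```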
```python
# ids provided as python sequence (already normalized to list above)
if len(env_ids) == 0:
    return self.ENV_MASK
raise ValueError(f"Unsupported env_ids type: {type(env_ids)}")
```
resolve_env_mask fails for non-empty Python lists
When env_ids is a non-empty Python list (e.g. [0, 2, 5]), the code reaches line 490 and unconditionally raises ValueError. The empty-list case on line 488 is handled, but non-empty lists are never converted to a Warp array and fed to the kernel. This means any caller that passes a plain Python list of env ids (a common pattern) will crash.
You likely need to convert the Python list to a wp.array and launch the kernel, similar to the ArticulationData._resolve_1d_mask implementation:
Suggested change:

```diff
 # ids provided as python sequence (already normalized to list above)
 if len(env_ids) == 0:
     return self.ENV_MASK
-raise ValueError(f"Unsupported env_ids type: {type(env_ids)}")
+ids_wp = wp.array(env_ids, dtype=wp.int32, device=self.device)
+wp.launch(
+    kernel=_generate_env_mask_from_ids_int32,
+    dim=ids_wp.shape[0],
+    inputs=[self.ENV_MASK, ids_wp],
+    device=self.device,
+)
+return self.ENV_MASK
```
Note: this allocates a temporary wp.array, which is not CUDA-graph-capture friendly, but it matches the behavior of ArticulationData._resolve_1d_mask and is required for correctness on non-captured paths.
```python
    sim: SimulationCfg = SimulationCfg(
        newton_cfg=NewtonCfg(
            solver_cfg=MJWarpSolverCfg(
                nconmax=5,
            ),
        )
    )
```
```python
    # Scene settings
    scene: CartpoleSceneCfg = CartpoleSceneCfg(num_envs=4096, env_spacing=4.0, clone_in_fabric=True)
    # Basic settings
    observations: ObservationsCfg = ObservationsCfg()
    actions: ActionsCfg = ActionsCfg()
    events: EventCfg = EventCfg()
    # MDP settings
    rewards: RewardsCfg = RewardsCfg()
    terminations: TerminationsCfg = TerminationsCfg()
    # Simulation settings
    sim: SimulationCfg = SimulationCfg(
        newton_cfg=NewtonCfg(
            solver_cfg=MJWarpSolverCfg(
                njmax=5,
                nconmax=3,
                ls_iterations=10,
                cone="pyramidal",
                impratio=1,
                ls_parallel=True,
                integrator="implicit",
            ),
            num_substeps=1,
            debug_mode=False,
            use_cuda_graph=True,
        )
    )
```
Duplicate sim field: the first definition is silently overridden
CartpoleEnvCfg defines sim: SimulationCfg twice (lines 164-170 and lines 182-197). Because Python @configclass/@dataclass semantics allow field redefinition, the first assignment is silently overridden by the second, which makes the first definition on line 164 dead code. Consider removing it to avoid confusion.
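A standalone demonstration of the override semantics mentioned above, using a plain @dataclass (assuming @configclass inherits the same behavior):

```python
from dataclasses import dataclass, fields


@dataclass
class DemoCfg:
    sim: int = 1  # first definition
    other: str = "x"
    sim: int = 2  # redefinition: silently replaces the first default


print(DemoCfg().sim)                      # 2 -- the first definition never takes effect
print([f.name for f in fields(DemoCfg)])  # ['sim', 'other'] -- still a single 'sim' field
```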
```diff
@@ -149,6 +156,7 @@ def setup_manager_visualizers(self):
     Operations - MDP
     """
```
Always-on timer decorator on step() in the stable env
The @Timer(name="env_step", ..., enable=True) decorator is hardcoded to enable=True, meaning every call to step() in the stable ManagerBasedRLEnv now incurs timer overhead unconditionally. The TIMER_ENABLED_STEP flag only controls the inner section timers. This should likely use enable=TIMER_ENABLED_STEP or be controlled by the same flag to avoid always-on overhead in production use.
Suggested change:

```python
    """
    @Timer(name="env_step", msg="Step took:", enable=TIMER_ENABLED_STEP, format="us")
```
```python
        self.reset_terminated = self.termination_manager.terminated
        self.reset_time_outs = self.termination_manager.time_outs

    @Timer(name="env_step", msg="Step took:", enable=True, format="us")
```
Always-on timer decorator on step() in experimental env
Same as the stable env: the @Timer(name="env_step", ..., enable=True) decorator is hardcoded to enable=True. This means every call to step() incurs timer overhead unconditionally, even though the TIMER_ENABLED_STEP flag controls the inner section timers. Consider using enable=TIMER_ENABLED_STEP for consistency.
Suggested change:

```diff
-    @Timer(name="env_step", msg="Step took:", enable=True, format="us")
+    @Timer(name="env_step", msg="Step took:", enable=TIMER_ENABLED_STEP, format="us")
```
```python
    def _manager_name_from_stage(self, stage: str) -> str:
        if "_" not in stage:
            raise ValueError(f"Invalid stage '{stage}'. Expected '{{manager_name}}_{{function_name}}'.")
        return stage.split("_", 1)[0]
```
Stage name parsing assumes single underscore delimiter
_manager_name_from_stage splits on the first _ to derive the manager name (e.g., "ActionManager_process_action" → "ActionManager"). However, Scene_write_data_to_sim and Scene_reset use "Scene" as the prefix, which is not in MANAGER_NAMES. This means get_mode_for_manager("Scene") will always fall through to the default mode. While this works by accident (the default is WARP_CAPTURED), it means the Scene stage can never be independently configured to a different mode.
This isn't a bug per se, but worth being aware of since it creates an implicit coupling between the Scene call mode and the default configuration.
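A condensed sketch of the lookup behavior described above (the mode constants mirror the flowchart's three modes; MANAGER_NAMES is trimmed and the class is paraphrased, not the PR's implementation):

```python
STABLE, WARP_NOT_CAPTURED, WARP_CAPTURED = 0, 1, 2
MANAGER_NAMES = {"ActionManager", "ObservationManager", "RewardManager"}  # subset for illustration


class ManagerCallSwitchSketch:
    def __init__(self, default_mode: int = WARP_CAPTURED, overrides: dict | None = None):
        self.default_mode = default_mode
        self.overrides = overrides or {}

    def _manager_name_from_stage(self, stage: str) -> str:
        if "_" not in stage:
            raise ValueError(f"Invalid stage '{stage}'. Expected '{{manager_name}}_{{function_name}}'.")
        return stage.split("_", 1)[0]

    def get_mode_for_manager(self, manager_name: str) -> int:
        # "Scene" is not in MANAGER_NAMES, so it always falls back to the default mode.
        if manager_name in MANAGER_NAMES:
            return self.overrides.get(manager_name, self.default_mode)
        return self.default_mode


switch = ManagerCallSwitchSketch(overrides={"ObservationManager": STABLE})
print(switch.get_mode_for_manager(switch._manager_name_from_stage("Scene_write_data_to_sim")))  # 2 (default)
```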
```python
@wp.kernel
def _reward_pre_compute_reset(
    # output
    reward_buf: wp.array(dtype=wp.float32),
    step_reward: wp.array(dtype=wp.float32, ndim=2),
    term_outs: wp.array(dtype=wp.float32, ndim=2),
```
_reward_finalize stores weighted reward rate but uses it as step reward
The comment on line 68 says "store weighted reward rate (matches old: value/dt)", but the code stores raw * weight without dividing by dt. Line 73 then multiplies by dt to compute the actual step reward contribution (val = weighted * dt). The step_reward array appears to store weighted-but-not-dt-scaled values, which is indeed a "rate" interpretation. However, this could be confusing for downstream consumers of _step_reward_wp (used in get_active_iterable_terms) — the logged values represent reward rates, not actual per-step rewards. Worth verifying this matches the expected semantics.
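To spell out the relationship being discussed (an interpretation of the semantics above, not the kernel's code):

```python
def finalize_term(raw: float, weight: float, dt: float) -> tuple[float, float]:
    """Illustrative only: the reviewed semantics, not the _reward_finalize kernel."""
    weighted_rate = raw * weight            # stored into step_reward: a reward *rate*
    step_contribution = weighted_rate * dt  # accumulated into reward_buf for this step
    return weighted_rate, step_contribution


rate, step = finalize_term(raw=1.0, weight=-0.1, dt=0.02)
print(f"{rate:g} {step:g}")  # -0.1 -0.002 -> the logged value is step_contribution / dt
```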
```python
    s = rng_state[env_id]
    time_left[env_id] = wp.randf(s, lower, upper)
    rng_state[env_id] = s + wp.uint32(1)
```
RNG state advanced by manual addition rather than by wp.randf
In _resample_time_left_and_increment_counter, the RNG state is advanced by simple addition (rng_state[env_id] = s + wp.uint32(1)), unlike other kernels in this PR (e.g., in event_manager.py and noise_model.py) that advance the state implicitly through wp.randf / wp.randn calls. After wp.randf on line 75, the state s has already been advanced internally by Warp. Writing s + 1 back effectively skips one RNG output. While this won't cause a crash, it may produce slightly different random streams compared to the pattern used elsewhere in the codebase.
Suggested change:

```diff
     s = rng_state[env_id]
     time_left[env_id] = wp.randf(s, lower, upper)
-    rng_state[env_id] = s + wp.uint32(1)
+    rng_state[env_id] = s
```
```python
def resolve(self, scene: InteractiveScene):
    # run the stable resolution first (fills joint_ids/body_ids from names/regex)
    super().resolve(scene)

    # Build a Warp joint mask for articulations.
    entity: Articulation = scene[self.name]

    # Pre-allocate a full-length mask (all True for default selection).
    if self.joint_ids == slice(None):
        joint_ids_list = list(range(entity.num_joints))
        mask_list = [True] * entity.num_joints
    else:
        joint_ids_list = list(self.joint_ids)
        mask_list = [False] * entity.num_joints
        for idx in joint_ids_list:
            mask_list[idx] = True
    self.joint_mask = wp.array(mask_list, dtype=wp.bool, device=scene.device)
    self.joint_ids_wp = wp.array(joint_ids_list, dtype=wp.int32, device=scene.device)
```
resolve() assumes the entity is always an Articulation
SceneEntityCfg.resolve() unconditionally casts scene[self.name] to Articulation (line 39) and accesses entity.num_joints. If the entity is a different type (e.g., a RigidObject), this will fail with an AttributeError. The base class SceneEntityCfg supports non-articulation entities, so this override should either check the entity type or guard against it.
Since this is currently only used for the Cartpole task (which is always an Articulation), this isn't immediately broken, but it will be a problem when the experimental workflow is extended to other asset types.
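One possible mitigation along the lines the comment suggests, factored as a standalone helper (a hedged sketch; the Articulation import path is an assumption):

```python
import warp as wp
from isaaclab.assets import Articulation  # assumed import path for the stable Articulation class


def build_joint_mask(entity, joint_ids, device):
    """Sketch: the mask construction from the diff above, guarded by an isinstance check."""
    if not isinstance(entity, Articulation):
        # Non-articulation entities (e.g. RigidObject) have no joints to mask.
        return None, None
    if joint_ids == slice(None):
        joint_ids_list = list(range(entity.num_joints))
        mask_list = [True] * entity.num_joints
    else:
        joint_ids_list = list(joint_ids)
        mask_list = [False] * entity.num_joints
        for idx in joint_ids_list:
            mask_list[idx] = True
    joint_mask = wp.array(mask_list, dtype=wp.bool, device=device)
    joint_ids_wp = wp.array(joint_ids_list, dtype=wp.int32, device=device)
    return joint_mask, joint_ids_wp
```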
```python
    mask: wp.array | torch.Tensor | None,
    all_mask: wp.array,
    scratch_mask: wp.array,
) -> wp.array:
    """Resolve ids/mask into a warp boolean mask.

    Notes:
        - Returns ``all_mask`` when both ids and mask are None (or ids is slice(None)).
        - If ids are provided and mask is None, this populates ``scratch_mask`` in-place using Warp kernels.
        - Torch inputs are supported for compatibility, but are generally not CUDA-graph-capture friendly.
    """
    # Fast path: explicit mask provided.
    if mask is not None:
        if isinstance(mask, torch.Tensor):
            # Ensure boolean + correct device, then wrap for Warp.
            if mask.dtype != torch.bool:
                mask = mask.to(dtype=torch.bool)
            if str(mask.device) != self.device:
                mask = mask.to(self.device)
            return wp.from_torch(mask, dtype=wp.bool)
        return mask

    # Fast path: ids == all / not specified.
    if ids is None:
        return all_mask
    if isinstance(ids, slice) and ids == slice(None):
        return all_mask

    # Normalize ids into a 1D wp.int32 array.
    if isinstance(ids, slice):
        # Convert to explicit indices (supports partial slices).
        # We infer the valid range from the scratch mask length.
        start, stop, step = ids.indices(scratch_mask.shape[0])
        ids = list(range(start, stop, step))

    if isinstance(ids, wp.array):
        ids_wp = ids
    elif isinstance(ids, torch.Tensor):
        if ids.dtype != torch.int32:
            ids = ids.to(dtype=torch.int32)
        if str(ids.device) != self.device:
            ids = ids.to(self.device)
        ids_wp = wp.from_torch(ids, dtype=wp.int32)
    else:
        ids_list = list(ids)
        ids_wp = wp.array(ids_list, dtype=wp.int32, device=self.device) if len(ids_list) > 0 else None

    # Populate scratch mask.
    scratch_mask.fill_(False)
    if ids_wp is not None:
        wp.launch(
            kernel=generate_mask_from_ids,
            dim=ids_wp.shape[0],
            inputs=[scratch_mask, ids_wp],
            device=self.device,
```
Scratch mask is shared — concurrent callers will race
_resolve_1d_mask populates a shared scratch buffer (scratch_mask) in-place. If multiple callers resolve masks concurrently (e.g., resolve_joint_mask for two different action terms in the same step, since both use JOINT_MASK as the scratch), the second call will overwrite the first caller's mask before it's consumed.
This is acknowledged in manager_based_env_warp.py with the comment "Not thread-safe / re-entrant", but in the articulation data there's no such warning. The Newton articulation now uses resolve_joint_mask in many places (all the set_joint_* methods). If two action terms call set_joint_effort_target with different joint subsets in sequence, the first term's resolved joint mask could be overwritten by the second term before the first kernel launch consumes it. This is safe only if Warp kernel launches are queued (CUDA stream ordering), but worth documenting or verifying.
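To make the stream-ordering argument concrete, a self-contained sketch with generic kernels (not the articulation code): launches on the same CUDA stream execute in issue order, so a consumer launch enqueued before the scratch is refilled sees the intended mask, while interleaving two fills before either consumer is what breaks.

```python
import warp as wp


@wp.kernel
def fill_mask(mask: wp.array(dtype=wp.bool), ids: wp.array(dtype=wp.int32)):
    mask[ids[wp.tid()]] = True


@wp.kernel
def count_active(mask: wp.array(dtype=wp.bool), out: wp.array(dtype=wp.int32)):
    i = wp.tid()
    if mask[i]:
        wp.atomic_add(out, 0, 1)


device = "cuda:0"
scratch = wp.zeros(8, dtype=wp.bool, device=device)  # shared scratch mask
count = wp.zeros(1, dtype=wp.int32, device=device)
ids_a = wp.array([0, 1], dtype=wp.int32, device=device)
ids_b = wp.array([5, 6, 7], dtype=wp.int32, device=device)

# Safe: each consumer launch is enqueued before the scratch is reused,
# and the stream executes the launches in order.
scratch.fill_(False)
wp.launch(fill_mask, dim=ids_a.shape[0], inputs=[scratch, ids_a], device=device)
wp.launch(count_active, dim=scratch.shape[0], inputs=[scratch, count], device=device)

scratch.fill_(False)  # only now is it safe to overwrite the scratch for ids_b
wp.launch(fill_mask, dim=ids_b.shape[0], inputs=[scratch, ids_b], device=device)
wp.launch(count_active, dim=scratch.shape[0], inputs=[scratch, count], device=device)
```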
Force-pushed from 7a309b8 to 94ce315
Introduces ManagerBasedEnvWarp and ManagerBasedRLEnvWarp with ManagerCallSwitch for per-manager stable/warp/captured mode selection. Includes Newton articulation data extensions, RL wrapper adaptations, and train script integration.
Warp-first implementations of all 7 managers (action, observation, reward, termination, event, command, recorder) with mask-based reset for CUDA graph compatibility. Includes MDP term library (observations, rewards, terminations, events, actions), IO descriptors, and utility modules (noise, modifiers, circular buffers, warp kernels).
Cartpole env config for the warp manager-based RL env, registered as Isaac-Cartpole-Warp-v0. Includes warp-first custom reward term (joint_pos_target_l2).
Force-pushed from 94ce315 to a247e5d
@greptileai review updated.
Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context.
List any dependencies that are required for this change.
Fixes # (issue)
Type of change
Screenshots
Please attach before and after screenshots of the change if applicable.
Checklist

- I have run the `pre-commit` checks with `./isaaclab.sh --format`
- I have updated the changelog and the corresponding version in the extension's `config/extension.toml` file
- I have added my name to the `CONTRIBUTORS.md` or my name already exists there