HDDS-14646. SCM should not close Ratis pipelines on Finalize #9779

Open
sodonnel wants to merge 12 commits into apache:HDDS-14496-zdu from sodonnel:close-pipelines

Conversation

@sodonnel
Contributor

What changes were proposed in this pull request?

When SCM finalizes an upgrade, it should no longer close all the Ratis pipelines on the datanodes; instead, they should be kept open. The SCM finalize command now needs to wait for all healthy datanodes to report matching SLV and MLV values to know they have been finalized. Only when all datanodes are finalized should SCM complete the finalization process.

This change mostly removes the existing close-pipeline code and anything else that depended on it. The only new code added is the wait for DNs to report SLV == MLV, since previously the creation of new pipelines was the trigger to complete the process.
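
As a rough illustration of that condition (not code from this PR; the names below are hypothetical), the check amounts to:

```java
import java.util.Collection;

/** Minimal sketch of the SLV == MLV condition described above. The record is
 *  a hypothetical stand-in for the layout versions datanodes report in their
 *  heartbeats, not a class from this patch. */
final class FinalizationCheck {
  record ReportedLayout(String datanodeUuid, int mlv, int slv) { }

  /** A datanode counts as finalized once its metadata layout version (MLV)
   *  has caught up with its software layout version (SLV). */
  static boolean allFinalized(Collection<ReportedLayout> healthyNodes) {
    return healthyNodes.stream().allMatch(n -> n.mlv() == n.slv());
  }
}
```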

What is the link to the Apache JIRA?

https://issues.apache.org/jira/browse/HDDS-14646

How was this patch tested?

Existing tests, some of which had to be adapted slightly.

@github-actions github-actions bot added the zdu label (Pull requests for Zero Downtime Upgrade (ZDU), https://issues.apache.org/jira/browse/HDDS-14496) on Feb 17, 2026
"datanodes have finalized ({} remaining).",
finalizedNodes, totalHealthyNodes, unfinalizedNodes);
try {
Thread.sleep(5000);

We should decrease the sleep interval.

Contributor Author


The plan is to remove this entire section to be replaced with a polling mechanism in one of the followup PRs listed below.
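
For illustration, the follow-up polling approach could look roughly like the sketch below (HDDS-14669 tracks the real change; the names and wiring here are hypothetical, not the eventual implementation):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

/** Illustrative sketch only: re-check the finalization condition on a
 *  scheduler thread instead of parking a server handler in Thread.sleep(). */
final class FinalizationPoller {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  void pollUntil(BooleanSupplier allFinalized, Runnable onComplete) {
    scheduler.scheduleWithFixedDelay(() -> {
      if (allFinalized.getAsBoolean()) {
        onComplete.run();
        // Stop polling once finalization has completed.
        scheduler.shutdown();
      }
    }, 0, 5, TimeUnit.SECONDS);
  }
}
```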

  if (!allDatanodesFinalized) {
    LOG.info("Waiting for datanodes to finalize. Status: {}/{} healthy " +
        "datanodes have finalized ({} remaining).",
        finalizedNodes, totalHealthyNodes, unfinalizedNodes);


It would be nice to have the IDs of the unhealthy and unfinalized nodes. It would also be nice to log the time spent in the wait state.
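
Something like this, for illustration (not part of the patch; unfinalizedIds and waitStartMs are hypothetical locals tracked by the caller):

```java
// Illustrative only: include the unfinalized node IDs and the elapsed wait
// time in the log message. unfinalizedIds and waitStartMs are hypothetical.
long waitedMs = System.currentTimeMillis() - waitStartMs;
LOG.info("Waiting for datanodes to finalize ({} ms so far). Status: {}/{} " +
    "healthy datanodes have finalized. Unfinalized nodes: {}",
    waitedMs, finalizedNodes, totalHealthyNodes, unfinalizedIds);
```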

Contributor Author


The plan is to remove this entire section to be replaced with a polling mechanism in one of the followup PRs listed below.

@sodonnel
Contributor Author

sodonnel commented Feb 18, 2026

The integration tests seem to fail with:

2026-02-18 10:03:04,129 [main] ERROR volume.MutableVolumeSet (MutableVolumeSet.java:initializeVolumeSet(193)) - Failed to parse the storage location: file:///home/runner/work/ozone/ozone/hadoop-hdds/container-service/target/tmp/dfs/data
org.apache.hadoop.ozone.common.InconsistentStorageStateException: Mismatched DatanodeUUIDs. Version File : /home/runner/work/ozone/ozone/hadoop-hdds/container-service/target/tmp/dfs/data/hdds/VERSION has datanodeUuid: 37fe7285-51fc-49f0-8ced-b4814363d903 and Datanode has datanodeUuid: f2bcb13b-daa7-4306-9fad-7488b660dff6
	at org.apache.hadoop.ozone.container.common.utils.StorageVolumeUtil.getDatanodeUUID(StorageVolumeUtil.java:122)
	at org.apache.hadoop.ozone.container.common.volume.StorageVolume.readVersionFile(StorageVolume.java:381)
	at org.apache.hadoop.ozone.container.common.volume.StorageVolume.initializeImpl(StorageVolume.java:239)
	at org.apache.hadoop.ozone.container.common.volume.StorageVolume.initialize(StorageVolume.java:214)
	at org.apache.hadoop.ozone.container.common.volume.HddsVolume.<init>(HddsVolume.java:155)
	at org.apache.hadoop.ozone.container.common.volume.HddsVolume.<init>(HddsVolume.java:82)

Resulting in no storage locations being available on the DN. This was supposed to be fixed by HDDS-14632, which is on the branch, but it's still failing. It passes locally.

EDIT: the failure was from an old run. The latest doesn't fail with this error!

@adoroszlai
Contributor

This was supposed to be fixed by HDDS-14632, which is on the branch, but it's still failing.

It failed for this commit:

HEAD is now at 246e9a4 Merge 9acf575a86b50ff3c59360f1d2d539ec266de1f5 into af22a5c3a435f771c17307d325d7e80eae7361ec

where af22a5c was the target branch state, from two weeks ago.

It passes after feature branch HDDS-14496-zdu was updated to current master.

@sodonnel sodonnel requested a review from errose28 February 18, 2026 19:21
Contributor

@errose28 left a comment


Thanks for working on this @sodonnel. I assume we are planning to remove the healthy readonly state and finalization checkpoints in follow up changes. Can you file those Jiras to clarify the scope of this PR?

    logAndEmit(msg);
    throw new UpgradeException(msg, PREFINALIZE_VALIDATION_FAILED);
  }

  public void preFinalizeUpgrade(DatanodeStateMachine dsm) {
Contributor


We can remove this override; it is the same as the parent's.

    // outdated layout information.
    // This operation is not idempotent.
    if (checkpoint == FinalizationCheckpoint.MLV_EQUALS_SLV) {
      upgradeContext.getNodeManager().forceNodesToHealthyReadOnly();
Contributor


We can actually delete forceNodesToHealthyReadOnly since this was the only caller outside of tests.


  void onLeaderReady();

  static boolean shouldCreateNewPipelines(FinalizationCheckpoint checkpoint) {
Contributor


crossedCheckpoint can also be removed from this interface since it is now unused.

    LOG.info("testPostUpgradeConditionsSCM: container state is {}",
        ciState.name());
    // Containers can now be in any state since we no longer close pipelines
    assertTrue((ciState == HddsProtos.LifeCycleState.CLOSED) ||
Contributor


Can we remove this test on container states completely? The new upgrade flow should be independent of any container states. There are a few other places in this test where I think we can remove similar checks.

@sodonnel
Contributor Author

Thanks for working on this @sodonnel. I assume we are planning to remove the healthy readonly state and finalization checkpoints in follow up changes. Can you file those Jiras to clarify the scope of this PR?

I have filed the following Jiras to continue this work:

HDDS-14669 Waiting in finalization should not block a server thread
HDDS-14670 SCM should not finalize unless it is out of
HDDS-14671 Remove healthy_readonly state from SCM
HDDS-14672 Remove finalization checkpoints

