feat(compute): add compute consistency group clone #9693
Conversation
…group_delete samples, created test
minherz left a comment
We prefer simple code samples over complex ones. This one uses a heuristic that is not easy to comprehend to distinguish between regional and zonal disks. There is also a potentially confusing use of multiple region variables, despite the fact that the location of the group and of the disks must be the same (in the regional case).
```java
if (Character.isDigit(disksLocation.charAt(disksLocation.length() - 1))) {
  // Initialize client that will be used to send requests. This client only needs to be created
  // once, and can be reused for multiple requests.
  try (RegionDisksClient disksClient = RegionDisksClient.create()) {
    BulkInsertRegionDiskRequest request = BulkInsertRegionDiskRequest.newBuilder()
        .setProject(project)
        .setRegion(disksLocation)
        .setBulkInsertDiskResourceResource(
            BulkInsertDiskResource.newBuilder()
                .setSourceConsistencyGroupPolicy(sourceConsistencyGroupPolicy)
                .build())
        .build();

    response = disksClient.bulkInsertAsync(request).get(3, TimeUnit.MINUTES);
  }
} else {
  try (DisksClient disksClient = DisksClient.create()) {
    BulkInsertDiskRequest request = BulkInsertDiskRequest.newBuilder()
        .setProject(project)
        .setZone(disksLocation)
        .setBulkInsertDiskResourceResource(
            BulkInsertDiskResource.newBuilder()
                .setSourceConsistencyGroupPolicy(sourceConsistencyGroupPolicy)
                .build())
        .build();

    response = disksClient.bulkInsertAsync(request).get(3, TimeUnit.MINUTES);
  }
}
```
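The heuristic the reviewer objects to, treating a location whose last character is a digit as a region ("us-central1") and anything else as a zone ("us-central1-a"), can be isolated into a small helper to make the review point concrete. This is a minimal illustrative sketch; the class and method names are hypothetical and not part of the sample under review:

```java
public class DiskLocationKind {
  // Heuristic from the sample under review: Compute Engine region names such as
  // "us-central1" end in a digit, while zone names such as "us-central1-a"
  // end in a letter. The reviewer's point is that this convention is implicit
  // and hard to comprehend; an explicit regional/zonal parameter is clearer.
  static boolean isRegionalLocation(String location) {
    return Character.isDigit(location.charAt(location.length() - 1));
  }

  public static void main(String[] args) {
    System.out.println(isRegionalLocation("us-central1"));   // true (region)
    System.out.println(isRegionalLocation("us-central1-a")); // false (zone)
  }
}
```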
This code combines two code samples in one. Please fix this either by demonstrating cloning on a zonal disk only, or by creating two code samples for two different demos. I recommend reaching out to a tech writer to see whether it makes sense to have two samples.
Note that the current documentation's use of a pseudo "LOCATION" parameter, which a user then has to edit, is not a recommended practice.
Thank you! Created a sample for the zonal location.
```java
String disksLocation = "us-central1";
// Name of the consistency group you want to clone disks from.
String consistencyGroupName = "YOUR_CONSISTENCY_GROUP_NAME";
// Name of the region in which your consistency group is located.
String consistencyGroupLocation = "us-central1";
```
Note that a consistency group cannot include disks from a region other than the group's region. I suggest reflecting this in the code by using a single parameter for the disk location (for regional disks). Otherwise, use zonal disks in the same zone explicitly.
Thank you! Fixed
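Since the consistency group and its disks must share a region, a single location variable can drive both the request and the group's resource-policy URL (a consistency group is a Compute Engine resource policy). A minimal sketch of that idea; the class, method, and variable names are illustrative, not taken from the PR:

```java
public class ConsistencyGroupUrl {
  // Builds the relative resource URL of a consistency group (resource policy),
  // reusing one location value so the group and the disks cannot diverge.
  static String consistencyGroupUrl(String project, String location, String groupName) {
    return String.format(
        "projects/%s/regions/%s/resourcePolicies/%s", project, location, groupName);
  }

  public static void main(String[] args) {
    // Hypothetical values for demonstration only.
    String location = "us-central1";
    System.out.println(consistencyGroupUrl("my-project", location, "my-group"));
  }
}
```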
This PR cannot be merged. Merging the main branch into the PR's branch is a forbidden pattern: it introduces changes that cannot be effectively tracked in the commit history and increases the complexity of the review. When a PR's branch gets out of sync with main, it has to be rebased onto the main branch, not merged. I recommend closing this PR and submitting the original changes, which address all review feedback received so far, as a new PR. An alternative is to undo all changes caused by the merge. I converted the PR to draft pending your decision.
Description
Documentation - https://cloud.google.com/compute/docs/disks/async-pd/manage-async-disks?authuser=0#clone_all_disks_in_a_consistency_group
Node.js sample - GoogleCloudPlatform/nodejs-docs-samples#3920
Fixes #
Note: Before submitting a pull request, please open an issue for discussion if you are not associated with Google.
Checklist
- `pom.xml` parent set to latest `shared-configuration`
- `mvn clean verify` (required)
- `mvn -P lint checkstyle:check` (required)
- `mvn -P lint clean compile pmd:cpd-check spotbugs:check` (advisory only)