NFS Cloudstack volume and file and export policy utils #18
piyush5netapp wants to merge 9 commits into main from
Conversation
Pull Request Overview
This PR adds support for NFS CloudStack volume creation and management utilities for the ONTAP storage plugin. The implementation includes file operations, export policy management, and integration with the existing iSCSI workflow.
Key Changes:
- Added NFS protocol support alongside existing iSCSI implementation
- Implemented file and export policy management utilities for NFS volumes
- Added host listener infrastructure for storage pool attachment workflows
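To make the protocol branching in the list above concrete, a minimal sketch follows. It is illustrative only: the method name, the `getStorageStrategy` retrieval helper, the `createFile`/`createExportPolicy` helpers, the `VolumeObject` parameter type, and the `ISCSI` enum constant are assumptions, not the code as merged; the real logic lives in Utility.java and the strategy classes listed in the file summary.

```java
// Hypothetical sketch of the NFS/iSCSI branching this PR introduces; names and
// signatures are assumptions and may differ from the actual Utility.java code.
public Volume createCloudStackVolume(VolumeObject volumeObject, ProtocolType protocol) {
    // Assumed use of the "storage strategy retrieval utility" from the file summary.
    StorageStrategy strategy = getStorageStrategy(protocol);
    switch (protocol) {
        case NFS:
            // NFS workflow: create the ONTAP volume, then the file and export policy
            // for it (helper names below are assumptions, not actual PR code).
            Volume nfsVolume = strategy.createVolume(volumeObject);
            strategy.createFile(volumeObject, nfsVolume);         // assumed helper
            strategy.createExportPolicy(volumeObject, nfsVolume); // assumed helper
            return nfsVolume;
        case ISCSI:
            // Existing iSCSI workflow, which backs the volume with a LUN.
            return strategy.createVolume(volumeObject);
        default:
            throw new UnsupportedOperationException("Unsupported protocol: " + protocol);
    }
}
```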
Reviewed Changes
Copilot reviewed 14 out of 17 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| Utility.java | Refactored volume creation to support both NFS and iSCSI protocols using switch statement; added storage strategy retrieval utility |
| Constants.java | Added volume UUID and name constants for storage pool mapping |
| ProtocolType.java | Renamed NFS3 enum value to NFS |
| CloudStackVolume.java | Added Volume and cloudstackVolName fields to support NFS workflow |
| AccessGroup.java | Added fields for host connection and scope information |
| UnifiedNASStrategy.java | Implemented NFS volume creation and file/export policy management operations |
| StorageStrategy.java | Enhanced volume creation to return Volume object with UUID for storage pool mapping |
| StorageProviderFactory.java | Updated protocol enum reference from NFS3 to NFS |
| OntapPrimaryDatastoreProvider.java | Integrated OntapHostListener for host attachment events |
| OntapHostListener.java | Added stub implementation of HypervisorHostListener interface |
| OntapPrimaryDatastoreLifecycle.java | Implemented access group creation in attachCluster and attachZone methods |
| VolumeFeignClient.java | Added volume query method and simplified updateVolumeRebalancing signature |
| NASFeignClient.java | Cleaned up parameter formatting and removed unnecessary headers |
| OntapPrimaryDatastoreDriver.java | Added NFS protocol handling in volume creation workflow |
Comments suppressed due to low confidence (2)
plugins/storage/volume/ontap/src/main/java/org/apache/cloudstack/storage/lifecycle/OntapPrimaryDatastoreLifecycle.java:1
- Corrected duplicated word 'volume' in error message.
/*
plugins/storage/volume/ontap/src/main/java/org/apache/cloudstack/storage/lifecycle/OntapPrimaryDatastoreLifecycle.java:267
- Missing space between 'catch' and the opening parenthesis. Should be '} catch (Exception e) {'.
} catch (Exception e) {
```java
FileInfo file = new FileInfo();
//file.setName("test1"); // to be replaced with volume name // this should not be passed for dir
//file.setName(volumeObject.getName()); // to check whether this needs to be sent or not
file.setSize(Long.parseLong("10000"));
```
Dead code: This line sets a hardcoded size that is immediately overwritten on the next line. Remove this line as the actual size is set from volumeObject.getSize() on line 74.
Suggested change:
```diff
- file.setSize(Long.parseLong("10000"));
```
```java
 * @return the created AccessGroup object
 */
abstract AccessGroup createAccessGroup(AccessGroup accessGroup);
abstract public AccessGroup createAccessGroup(AccessGroup accessGroup);
```
The modifier 'public' should come before 'abstract' following Java convention. Change to 'public abstract AccessGroup createAccessGroup(AccessGroup accessGroup);'.
Suggested change:
```diff
- abstract public AccessGroup createAccessGroup(AccessGroup accessGroup);
+ public abstract AccessGroup createAccessGroup(AccessGroup accessGroup);
```
Pull Request Overview
Copilot reviewed 17 out of 20 changed files in this pull request and generated 9 comments.
Comments suppressed due to low confidence (1)
plugins/storage/volume/ontap/src/main/java/org/apache/cloudstack/storage/utils/Utility.java:1
- Export policy name generation should be extracted to a utility method as indicated by the TODO comment.
/*
```java
FileInfo file = new FileInfo();
//file.setName("test1"); // to be replaced with volume name // this should not be passed for dir
//file.setName(volumeObject.getName()); // to check whether this needs to be sent or not
file.setSize(Long.parseLong("10000"));
```
Line 73 sets a hardcoded size that is immediately overwritten on line 74. Remove the hardcoded assignment on line 73.
Suggested change:
```diff
- file.setSize(Long.parseLong("10000"));
```
| //file.setName("test1"); // to be replaced with volume name // this should not be passed for dir | ||
| //file.setName(volumeObject.getName()); // to check whether this needs to be sent or not | ||
| file.setSize(Long.parseLong("10000")); | ||
| file.setSize(volumeObject.getSize()); | ||
| file.setUnixPermissions(755); // check if it is needed only for dir ? it is needed for dir | ||
| file.setType(FileInfo.TypeEnum.DIRECTORY); // We are creating file for a cloudstack volume . Should it be dir ? // TODO change once multipart is done |
There was a problem hiding this comment.
TODO comments indicate uncertainty about implementation details. These should be resolved or clarified before merging, particularly whether the type should be DIRECTORY and whether permissions are appropriate.
| //file.setName("test1"); // to be replaced with volume name // this should not be passed for dir | |
| //file.setName(volumeObject.getName()); // to check whether this needs to be sent or not | |
| file.setSize(Long.parseLong("10000")); | |
| file.setSize(volumeObject.getSize()); | |
| file.setUnixPermissions(755); // check if it is needed only for dir ? it is needed for dir | |
| file.setType(FileInfo.TypeEnum.DIRECTORY); // We are creating file for a cloudstack volume . Should it be dir ? // TODO change once multipart is done | |
| file.setName(volumeObject.getName()); // Set file name to volume name | |
| file.setSize(volumeObject.getSize()); | |
| file.setUnixPermissions(644); // Set permissions appropriate for a file | |
| file.setType(FileInfo.TypeEnum.FILE); // Set type to FILE for NFS volume |
| s_logger.debug("Successfully created file in ONTAP under volume with path {} or name {} ", cloudstackVolume.getVolume().getUuid(), cloudstackVolume.getCloudstackVolName()); | ||
| FileInfo responseFile = cloudstackVolume.getFile(); | ||
| responseFile.setPath(cloudstackVolume.getCloudstackVolName()); | ||
| }catch (Exception e) { |
There was a problem hiding this comment.
Missing space between closing brace and 'catch' keyword. Should be '} catch (Exception e) {'.
| }catch (Exception e) { | |
| } catch (Exception e) { |
| s_logger.info("Successfully assigned exportPolicy {} to volume {}", exportPolicy.getName(), volumeName); | ||
| accessGroup.setPolicy(exportPolicy); | ||
| return accessGroup; | ||
| }catch (Exception e){ |
There was a problem hiding this comment.
Missing space between closing brace and 'catch' keyword, and between 'e)' and '{'. Should be '} catch (Exception e) {'.
| }catch (Exception e){ | |
| } catch (Exception e) { |
```java
private FileInfo file;
private Lun lun;
private Volume volume;
// will be replaced after testing
```
Comment indicates temporary field. Either remove the comment if the field is permanent, or create a tracking issue for its removal if it's truly temporary.
Suggested change:
```diff
- // will be replaced after testing
```
```java
private String addExportRule(String policyName, String clientMatch, String[] protocols, String[] roRule, String[] rwRule) {
    return "";
}
```
Unused private method 'addExportRule' with stub implementation should be removed or implemented if needed.
Suggested change:
```diff
- private String addExportRule(String policyName, String clientMatch, String[] protocols, String[] roRule, String[] rwRule) {
-     return "";
- }
```
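If addExportRule is kept instead of being deleted, one possible direction is sketched below, following the Feign-based pattern the plugin already uses for export policies. Everything project-specific here is an assumption for illustration: the ExportRuleRequest model, the nasFeignClient.addExportRule method, and the authHeader field are not necessarily part of the plugin, and java.util.Arrays is assumed to be imported.

```java
// Hypothetical sketch; the request model and feign client method are assumed, not real plugin code.
private String addExportRule(String policyName, String clientMatch,
                             String[] protocols, String[] roRule, String[] rwRule) {
    ExportRuleRequest rule = new ExportRuleRequest();      // assumed request model class
    rule.setClientMatch(clientMatch);                      // client match pattern, e.g. a CIDR or host IP
    rule.setProtocols(Arrays.asList(protocols));
    rule.setRoRule(Arrays.asList(roRule));
    rule.setRwRule(Arrays.asList(rwRule));
    // Assumed feign method, mirroring the existing getExportPolicy* style in NASFeignClient.
    JobResponse response = nasFeignClient.addExportRule(authHeader, policyName, rule);
    return response != null ? response.toString() : "";
}
```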
| @RequestLine("GET /") | ||
| @Headers({"Authorization: {authHeader}"}) | ||
| OntapResponse<ExportPolicy> getExportPolicyResponse(@Param("authHeader") String authHeader); | ||
| ExportPolicy getExportPolicyResponse(@Param("authHeader") String authHeader); |
There was a problem hiding this comment.
Return type changed from 'OntapResponse' to 'ExportPolicy'. This could be a breaking change if the method is used elsewhere expecting the wrapped response type.
| ExportPolicy getExportPolicyResponse(@Param("authHeader") String authHeader); | |
| OntapResponse<ExportPolicy> getExportPolicyResponse(@Param("authHeader") String authHeader); |
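To illustrate why this is flagged as breaking: any caller written against the wrapped return type stops compiling once the method returns ExportPolicy directly. The getRecords() accessor below is an assumption about OntapResponse, used only to show the shape of the problem.

```java
// Caller written against the old signature, which returned the wrapper:
OntapResponse<ExportPolicy> response = nasFeignClient.getExportPolicyResponse(authHeader);
ExportPolicy policy = response.getRecords().get(0);   // accessor name assumed for illustration

// With the new signature the assignment above no longer compiles, because the
// method returns ExportPolicy directly:
// ExportPolicy policy = nasFeignClient.getExportPolicyResponse(authHeader);
```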
| @Headers({"Authorization: {authHeader}"}) | ||
| OntapResponse<ExportPolicy> getExportPolicyById(@Param("authHeader") String authHeader, | ||
| @Param("id") String id); | ||
| ExportPolicy getExportPolicyById(@Param("authHeader") String authHeader, |
There was a problem hiding this comment.
Return type changed from 'OntapResponse' to 'ExportPolicy'. This could be a breaking change if the method is used elsewhere expecting the wrapped response type.
| ExportPolicy getExportPolicyById(@Param("authHeader") String authHeader, | |
| OntapResponse<ExportPolicy> getExportPolicyById(@Param("authHeader") String authHeader, |
| @Headers({ "Authorization: {authHeader}"}) | ||
| JobResponse updateVolumeRebalancing(@Param("authHeader") String authHeader, @Param("uuid") String uuid, Volume volumeRequest); |
There was a problem hiding this comment.
Removed 'acceptHeader' parameter from method signature. This is a breaking change that could affect existing callers of this method.
| @Headers({ "Authorization: {authHeader}"}) | |
| JobResponse updateVolumeRebalancing(@Param("authHeader") String authHeader, @Param("uuid") String uuid, Volume volumeRequest); | |
| @Headers({ "Authorization: {authHeader}", "Accept: {acceptHeader}" }) | |
| JobResponse updateVolumeRebalancing(@Param("authHeader") String authHeader, @Param("acceptHeader") String acceptHeader, @Param("uuid") String uuid, Volume volumeRequest); |
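For context on the breaking-change concern: Feign fills {acceptHeader} from the annotated method parameter, so dropping the parameter changes the method's arity and breaks every existing call site. A small caller-side illustration, assuming callers were passing a plain media type string such as "application/json":

```java
// Call site compiled against the old four-argument signature:
JobResponse job = volumeFeignClient.updateVolumeRebalancing(
        authHeader, "application/json", volumeUuid, volumeRequest);

// After the acceptHeader parameter is removed, this call no longer compiles and
// must become: updateVolumeRebalancing(authHeader, volumeUuid, volumeRequest).
```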
Pull Request Overview
Copilot reviewed 17 out of 20 changed files in this pull request and generated 13 comments.
```java
cloudStackVolumeRequest = new CloudStackVolume();
FileInfo file = new FileInfo();
//file.setName("test1"); // to be replaced with volume name // this should not be passed for dir
//file.setName(volumeObject.getName()); // to check whether this needs to be sent or not
```
Remove these commented-out lines. If this logic needs investigation, track it with a TODO comment instead of leaving commented code.
Suggested change:
```diff
- //file.setName(volumeObject.getName()); // to check whether this needs to be sent or not
+ // TODO: Investigate whether file.setName(volumeObject.getName()) needs to be sent for NFS file creation.
```
```java
//file.setName(volumeObject.getName()); // to check whether this needs to be sent or not
file.setSize(Long.parseLong("10000"));
file.setSize(volumeObject.getSize());
file.setUnixPermissions(755); // check if it is needed only for dir ? it is needed for dir
```
The comment is unclear and contradictory ('check if it is needed only for dir ? it is needed for dir'). Either remove the question or clarify whether this is always required or only for directories.
Suggested change:
```diff
- file.setUnixPermissions(755); // check if it is needed only for dir ? it is needed for dir
+ file.setUnixPermissions(755); // Set permissions because the file is a directory
```
| file.setSize(Long.parseLong("10000")); | ||
| file.setSize(volumeObject.getSize()); | ||
| file.setUnixPermissions(755); // check if it is needed only for dir ? it is needed for dir | ||
| file.setType(FileInfo.TypeEnum.DIRECTORY); // We are creating file for a cloudstack volume . Should it be dir ? // TODO change once multipart is done |
There was a problem hiding this comment.
The comment contains conflicting information about whether this should be a file or directory. Update the TODO to be more specific about what needs to change and when.
| file.setType(FileInfo.TypeEnum.DIRECTORY); // We are creating file for a cloudstack volume . Should it be dir ? // TODO change once multipart is done | |
| file.setType(FileInfo.TypeEnum.DIRECTORY); // TODO: Currently set to DIRECTORY for NFS volumes. Change to FILE when multipart file support is implemented. |
```java
private FileInfo file;
private Lun lun;
private Volume volume;
// will be replaced after testing
```
The comment 'will be replaced after testing' is vague. Specify what will replace this field and under what conditions, or create a TODO if this is temporary.
Suggested change:
```diff
- // will be replaced after testing
+ // TODO: Replace cloudstackVolName with the final volume naming scheme after testing is complete
```
```java
this.volumeFeignClient = feignClientFactory.createClient(VolumeFeignClient.class,baseURL );
this.jobFeignClient = feignClientFactory.createClient(JobFeignClient.class, baseURL );
```
[nitpick] Inconsistent spacing before closing parentheses. Line 69 has a space before the closing parenthesis while line 70 has a space after 'baseURL'. Standardize the formatting.
Suggested change:
```diff
- this.volumeFeignClient = feignClientFactory.createClient(VolumeFeignClient.class,baseURL );
- this.jobFeignClient = feignClientFactory.createClient(JobFeignClient.class, baseURL );
+ this.volumeFeignClient = feignClientFactory.createClient(VolumeFeignClient.class, baseURL);
+ this.jobFeignClient = feignClientFactory.createClient(JobFeignClient.class, baseURL);
```
```java
// CloudStack will construct the full mount path as: hostAddress + ":" + path
path = "/" + storagePoolName;
s_logger.info("Setting NFS path for storage pool: " + path);
host = "10.193.192.136"; // TODO hardcoded for now
```
Hardcoded IP address will cause failures in different environments. This must be replaced with the actual management LIF from details.get(Constants.MANAGEMENT_LIF) before merging.
Suggested change:
```diff
- host = "10.193.192.136"; // TODO hardcoded for now
+ host = details.get(Constants.MANAGEMENT_LIF);
```
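If the suggested lookup is adopted, it may also be worth failing fast when the detail is absent, so a misconfigured pool does not produce an unusable mount host. A minimal sketch, assuming the details map and Constants.MANAGEMENT_LIF from the suggestion above; CloudRuntimeException is CloudStack's usual unchecked exception, but any exception type the plugin already uses would do:

```java
// Resolve the NFS host from the storage pool details instead of a hardcoded address.
String managementLif = details.get(Constants.MANAGEMENT_LIF);
if (managementLif == null || managementLif.trim().isEmpty()) {
    throw new CloudRuntimeException("Management LIF is not configured for storage pool " + storagePoolName);
}
host = managementLif;
```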
```java
PrimaryDataStoreInfo primaryStore = (PrimaryDataStoreInfo)dataStore;
List<HostVO> hostsToConnect = _resourceMgr.getEligibleUpAndEnabledHostsInClusterForStorageConnection(primaryStore);

logger.debug(" datastore object received is {} ",primaryStore );
```
[nitpick] Extra spaces in the log message format string. Change ' datastore object received is {} ' to 'Datastore object received is {}'.
Suggested change:
```diff
- logger.debug(" datastore object received is {} ",primaryStore );
+ logger.debug("Datastore object received is {}", primaryStore);
```
```java
AccessGroup accessGroupRequest = new AccessGroup();
accessGroupRequest.setHostsToConnect(hostsToConnect);
accessGroupRequest.setScope(scope);
primaryStore.setDetails(details);// setting details as it does not come from cloudstack
```
[nitpick] Missing space after the semicolon on line 282. Add a space between '//' and 'setting' for consistent comment formatting.
Suggested change:
```diff
- primaryStore.setDetails(details);// setting details as it does not come from cloudstack
+ primaryStore.setDetails(details); // setting details as it does not come from cloudstack
```
```java
accessGroupRequest.setPolicy(exportPolicy);
strategy.createAccessGroup(accessGroupRequest);

logger.debug("attachCluster: Attaching the pool to each of the host in the cluster: {}", primaryStore.getClusterId());
```
[nitpick] Missing space after the semicolon on line 282. Add a space between '//' and 'setting' for consistent comment formatting.
```java
@Override
public Object decode(Response response, Type type) throws IOException, DecodeException {
    if (response.body() == null) {
        logger.debug("Response body is null, returning null");
```
Excessive debug logging added throughout the decoder (lines 140-169). This level of verbose logging should be removed or converted to trace level before merging to production to avoid log pollution.
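One low-risk way to address this is to downgrade the decoder diagnostics to trace level and guard them, so the detail stays available for deep debugging without flooding normal debug output. A minimal sketch using the message from the excerpt above:

```java
// Inside the custom Feign decoder: keep the diagnostic, but only at trace level.
if (response.body() == null) {
    if (logger.isTraceEnabled()) {
        logger.trace("Response body is null, returning null");
    }
    return null;
}
```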
Description
This PR...
Types of changes
Feature/Enhancement Scale or Bug Severity
Feature/Enhancement Scale
Bug Severity
Screenshots (if appropriate):
How Has This Been Tested?
How did you try to break this feature and the system with this change?