Added support for lifecycle prevent_destroy option #3448
    schema: test-schema
    catalog: main

    <<: *pipeline_base
Curious: why use the YAML anchors if they aren't used anywhere?
I use them in the test to remove the whole resource from the configuration
An alternative that could be easier to manage and follow than anchors:
- have each resource in its own YAML file
- the main databricks.yml includes "resources/*.yml"
- removing a resource: rm resources/filename.yml
- duplicating a resource: cp resources/a.yml resources/b.yml, followed by update_file.py
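A hedged sketch of that layout (file and resource names here are illustrative, not from the PR):

```yaml
# databricks.yml — top-level bundle file (hypothetical layout)
bundle:
  name: acceptance-test

include:
  - resources/*.yml
```

```yaml
# resources/pipeline_a.yml — one resource per file; deleting this file
# removes the resource, copying it duplicates one
resources:
  pipelines:
    pipeline_a:
      catalog: main
      schema: test-schema
```

With this layout the test harness can add or remove resources with plain file operations instead of editing anchors inside one large YAML document.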
I don't have a preference here. If you think this makes it easier to understand, I'll change it to that
Done here 65bc5dd
…uct (#3571)

Changes

Extract common resource fields into a separate embedded struct `BaseResource`

Why

Based on feedback from #3448 (comment)

Tests

Existing tests pass
    Permissions []ClusterPermission `json:"permissions,omitempty"`

    compute.ClusterSpec
Curious about this move -- was it needed for this PR to work?
No, just grouping them together; it's less readable when these structs are separated by custom fields.
    testStruct(t,
        reflect.TypeOf(config.Root{}),
        3800, 4200, // 4001 at the time of the update   (before)
        4000, 4300, // 4003 at the time of the update   (after)
FYI, it's okay to set the higher bound to 10k or something. It's a sanity check, but I did not intend for it to be updated frequently.
    lifecycle:
      prevent_destroy: true

    pipeline: &pipeline_base
Would it work if we had a large databricks.yml with all the resources? Then we don't loop but test all at once. This requires all error messages to be printed rather than just the first one, which may be a good idea anyway.
It could work, but if you add a new type of resource, you need to remember to add it to this databricks.yml file.
It depends. If the existing resources provide enough variety and the implementation is general (it is), we can say we've covered our ground.
The unit test I added will fail if the new resource does not handle prevent destroy correctly.
It only covers TF, right?
Overall I agree we can proceed without full coverage, but it would be interesting to cover >1 resource to see what kind of error messages we get. Do we get them all? One at random?
    // If there is no prevent_destroy, skip
    preventDestroyV, err := dyn.GetByPath(root, path)
    if err != nil {
        return nil
Should this be a return or a continue? It looks like we're going to miss actions that come afterwards.
Changes

Added support for the lifecycle prevent_destroy option.

In TF, if lifecycle.prevent_destroy is set to true, `terraform plan` will fail with a corresponding error. That's why we can't really reuse the same code logic for both direct and TF. The direct implementation mimics the TF one.

A. When lifecycle.prevent_destroy is set to true:
- `bundle destroy` will fail
- `bundle deploy` will fail

B. When lifecycle.prevent_destroy is switched to false, destroy and deploy will succeed.

C. When the whole resource is removed from the bundle configuration, destroy and deploy will succeed.
Why
Allows specifying resources as non-destructible, which is already supported by TF
Tests
Added acceptance tests