
Conversation

Contributor

@andrewnester andrewnester commented Aug 20, 2025

Changes

Added support for lifecycle prevent_destroy option

In TF, if `lifecycle.prevent_destroy` is set to true, `terraform plan` fails with a corresponding error. That's why we can't reuse the same logic for both the direct and TF deployment modes.

Direct implementation mimics the TF one.

A. When `lifecycle.prevent_destroy` is set to true:

  1. `bundle destroy` will fail
  2. if there are any configuration changes that recreate the resource, `bundle deploy` will fail

B. When `lifecycle.prevent_destroy` is switched back to false, destroy and deploy will succeed

C. When the whole resource is removed from the bundle configuration, destroy and deploy will succeed
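
A sketch of how the option appears in a bundle configuration (the resource name and surrounding fields are illustrative; only the `lifecycle.prevent_destroy` key is the new option):

```yaml
resources:
  pipelines:
    my_pipeline:
      name: my_pipeline
      catalog: main
      lifecycle:
        prevent_destroy: true
```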

Why

Allows marking resources as non-destroyable, which is already supported by TF

Tests

Added acceptance tests

@eng-dev-ecosystem-bot
Collaborator

eng-dev-ecosystem-bot commented Aug 20, 2025

Run: 17584931660

| Env | ✅ pass | 🔄 flaky | 🙈 skip |
| --- | --- | --- | --- |
| ✅ aws linux | 311 | | 525 |
| ✅ aws windows | 312 | | 524 |
| 🔄 aws-ucws linux | 421 | 2 | 423 |
| ✅ aws-ucws windows | 424 | | 422 |
| ✅ azure linux | 311 | | 524 |
| ✅ azure windows | 312 | | 523 |
| ✅ azure-ucws linux | 423 | | 422 |
| ✅ azure-ucws windows | 424 | | 421 |
| ✅ gcp linux | 310 | | 526 |
| ✅ gcp windows | 311 | | 525 |

| Test Name | aws-ucws linux |
| --- | --- |
| TestAccept | 🔄 flaky |
| TestAccept/bundle/deployment/bind/job/job-spark-python-task | 🔄 flaky |

@andrewnester andrewnester requested a review from denik September 2, 2025 10:44
@andrewnester andrewnester marked this pull request as ready for review September 2, 2025 10:44
schema: test-schema
catalog: main

<<: *pipeline_base
Contributor

Curious: why use the YAML anchors if they aren't used anywhere?

Contributor Author

I use them in the test to remove the whole resource from the configuration
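
A sketch of that pattern (names are illustrative, matching the `pipeline: &pipeline_base` anchor visible in the diff context): the resource body lives under a top-level anchor, so the test can delete just the `resources` entry while the anchored definition remains valid YAML.

```yaml
# sketch; names are illustrative
pipeline: &pipeline_base
  name: test-pipeline
  catalog: main
  schema: test-schema

resources:
  pipelines:
    my_pipeline:
      <<: *pipeline_base
      lifecycle:
        prevent_destroy: true
```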

Contributor

An alternative that could be easier to manage and follow than anchors:

  • have each resource in its own yaml
  • the main databricks.yml includes "resources/*.yml"

removing a resource: rm resources/filename.yml
duplicating a resource: cp resources/a.yml resources/b.yml, followed by update_file.py
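
The suggested layout, as a sketch (bundle and file names are illustrative):

```yaml
# databricks.yml
bundle:
  name: my_bundle

include:
  - resources/*.yml
```

Each file under `resources/` then holds a single resource, so removing a resource from the configuration is just deleting its file.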

Contributor Author

I don't have a preference here. If you think this makes it easier to understand, I'll change it to that

Contributor Author

Done here 65bc5dd

@andrewnester andrewnester requested a review from pietern September 8, 2025 13:07
github-merge-queue bot pushed a commit that referenced this pull request Sep 9, 2025
…uct (#3571)

## Changes
Extract common resource fields into a separate embedded struct
`BaseResource`

## Why
Based on feedback from
#3448 (comment)

## Tests
Existing tests pass

Permissions []ClusterPermission `json:"permissions,omitempty"`

compute.ClusterSpec
Contributor

Curious about this move -- was it needed for this PR to work?

Contributor Author

No, just grouping them together, it's less readable when these structs are separated by custom fields

testStruct(t,
	reflect.TypeOf(config.Root{}),
	3800, 4200, // 4001 at the time of the update
	4000, 4300, // 4003 at the time of the update
Contributor

FYI, it's okay to set the higher bound to 10k or something. It's a sanity check, but I did not intend it to be updated frequently.

lifecycle:
  prevent_destroy: true

pipeline: &pipeline_base
Contributor

Would it work if we had one large databricks.yml with all the resources? Then we don't loop but test all at once. This requires all error messages to be printed rather than just the first one, which may be a good idea anyway.

lifecycle:
  prevent_destroy: true

pipeline: &pipeline_base
Contributor
@denik denik Sep 9, 2025

> It could work, but if you add a new type of resource, you need to remember to add it to this databricks.yml file.

It depends. If the existing resources provide enough variety and the implementation is general (it is), we can say we've covered our ground.

> The unit test I added will fail if the new resource does not handle prevent_destroy correctly.

It only covers TF, right?

Overall I agree we can proceed without full coverage, but it would be interesting to cover >1 resource to see what kind of error messages we get. Do we get them all? One at random?

@andrewnester andrewnester added this pull request to the merge queue Sep 9, 2025
Merged via the queue into main with commit e5a4da5 Sep 9, 2025
19 checks passed
@andrewnester andrewnester deleted the feature/prevent-destroy branch September 9, 2025 14:34
// If there is no prevent_destroy set for this resource, skip it
preventDestroyV, err := dyn.GetByPath(root, path)
if err != nil {
	return nil
}
Contributor

Should this be a return or a continue? Looks like we're going to miss actions that come afterwards?
