2 changes: 1 addition & 1 deletion .codegen/_openapi_sha
@@ -1 +1 @@
-8f5eedbc991c4f04ce1284406577b0c92d59a224
+8b2cd47cbac64b32e120601110a5fc70b8189ba4
1 change: 1 addition & 0 deletions .gitattributes
@@ -168,6 +168,7 @@ cmd/workspace/volumes/volumes.go linguist-generated=true
cmd/workspace/warehouses/warehouses.go linguist-generated=true
cmd/workspace/workspace-bindings/workspace-bindings.go linguist-generated=true
cmd/workspace/workspace-conf/workspace-conf.go linguist-generated=true
cmd/workspace/workspace-entity-tag-assignments/workspace-entity-tag-assignments.go linguist-generated=true
cmd/workspace/workspace-iam-v2/workspace-iam-v2.go linguist-generated=true
cmd/workspace/workspace-settings-v2/workspace-settings-v2.go linguist-generated=true
cmd/workspace/workspace/workspace.go linguist-generated=true
1 change: 1 addition & 0 deletions NEXT_CHANGELOG.md
@@ -10,6 +10,7 @@

### Dependency updates

* Upgrade Go SDK to 0.93.0 ([#4112](https://github.com/databricks/cli/pull/4112))
* Bump Go toolchain to 1.25.5.

### API Changes
1 change: 1 addition & 0 deletions acceptance/help/output.txt
@@ -147,6 +147,7 @@ Database Instances

Tags
tag-policies The Tag Policy API allows you to manage policies for governed tags in Databricks.
workspace-entity-tag-assignments Manage tag assignments on workspace-scoped objects.

Developer Tools
bundle Databricks Asset Bundles let you express data/AI/analytics projects as code.
22 changes: 21 additions & 1 deletion bundle/direct/dresources/dashboard.go
@@ -214,6 +214,12 @@ func (r *ResourceDashboard) DoCreate(ctx context.Context, config *resources.Dash

createResp, err := r.client.Lakeview.Create(ctx, dashboards.CreateDashboardRequest{
Dashboard: dashboard,

// Note: these remain unset until there is a TF release with support for these fields.
DatasetCatalog: "",
DatasetSchema: "",

ForceSendFields: nil,
})

// The API returns 404 if the parent directory doesn't exist.
@@ -223,7 +229,15 @@ if err != nil {
if err != nil {
return "", nil, fmt.Errorf("failed to create parent directory: %w", err)
}
-createResp, err = r.client.Lakeview.Create(ctx, dashboards.CreateDashboardRequest{Dashboard: dashboard})
+createResp, err = r.client.Lakeview.Create(ctx, dashboards.CreateDashboardRequest{
+Dashboard: dashboard,
+
+// Note: these remain unset until there is a TF release with support for these fields.
+DatasetCatalog: "",
+DatasetSchema: "",
+
+ForceSendFields: nil,
+})
}
if err != nil {
return "", nil, err
@@ -256,6 +270,12 @@ func (r *ResourceDashboard) DoUpdate(ctx context.Context, id string, config *res
updateResp, err := r.client.Lakeview.Update(ctx, dashboards.UpdateDashboardRequest{
DashboardId: id,
Dashboard: dashboard,

// Note: these remain unset until there is a TF release with support for these fields.
DatasetCatalog: "",
DatasetSchema: "",

ForceSendFields: nil,
})
if err != nil {
return nil, err
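
The explicit zero-valued fields and `ForceSendFields: nil` in the hunks above follow the Go SDK convention that zero-valued request fields are dropped from the wire payload unless their names are listed in `ForceSendFields`. Below is a minimal, self-contained sketch of that convention; the `createRequest` type and its `MarshalJSON` method are illustrative stand-ins under that assumption, not the SDK's actual implementation.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// createRequest is a hypothetical stand-in for a generated SDK request struct.
// Zero-valued fields are normally dropped from the JSON payload (omitempty);
// naming a field in ForceSendFields forces it to be sent even at its zero value.
type createRequest struct {
	DatasetCatalog  string   `json:"dataset_catalog,omitempty"`
	DatasetSchema   string   `json:"dataset_schema,omitempty"`
	ForceSendFields []string `json:"-"`
}

// MarshalJSON is a minimal illustration of the convention, not the SDK's code:
// marshal with omitempty, then re-add any zero-valued fields that were forced.
func (r createRequest) MarshalJSON() ([]byte, error) {
	type plain createRequest // same fields, without this method, to avoid recursion
	raw, err := json.Marshal(plain(r))
	if err != nil {
		return nil, err
	}
	m := map[string]any{}
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	for _, f := range r.ForceSendFields {
		switch f {
		case "DatasetCatalog":
			m["dataset_catalog"] = r.DatasetCatalog
		case "DatasetSchema":
			m["dataset_schema"] = r.DatasetSchema
		}
	}
	return json.Marshal(m)
}

func main() {
	// Empty strings and a nil ForceSendFields: both fields are omitted entirely,
	// matching the intent of the dashboard.go change above.
	a, _ := json.Marshal(createRequest{})
	fmt.Println(string(a)) // {}

	// Forcing DatasetCatalog sends an explicit empty string instead.
	b, _ := json.Marshal(createRequest{ForceSendFields: []string{"DatasetCatalog"}})
	fmt.Println(string(b)) // {"dataset_catalog":""}
}
```

Under that convention, keeping DatasetCatalog/DatasetSchema empty with a nil ForceSendFields means the new fields never appear in the request, which matches the stated intent of leaving them unset until a Terraform release supports them.
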
4 changes: 3 additions & 1 deletion bundle/direct/dresources/synced_database_table.go
@@ -47,6 +47,8 @@ func (r *ResourceSyncedDatabaseTable) DoUpdate(ctx context.Context, id string, c

func (r *ResourceSyncedDatabaseTable) DoDelete(ctx context.Context, id string) error {
return r.client.Database.DeleteSyncedDatabaseTable(ctx, database.DeleteSyncedDatabaseTableRequest{
-Name: id,
+Name:            id,
+PurgeData:       false,
+ForceSendFields: nil,
Review comment from the PR author:
@shreyas-goenka @denik I suspect we need to break this out in the configuration schema proper.

What do we do for the other resources that contain data (schemas, registered models)?

Review comment from the PR author:
And fwiw, this is a net new field, so no precedent for this resource.

})
}
10 changes: 5 additions & 5 deletions bundle/internal/schema/annotations_openapi.yml
Expand Up @@ -243,8 +243,7 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
"enable_elastic_disk":
"description": |-
Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk
-space when its Spark workers are running low on disk space. This feature requires specific AWS
-permissions to function correctly - refer to the User Guide for more details.
+space when its Spark workers are running low on disk space.
"enable_local_disk_encryption":
"description": |-
Whether to enable LUKS on cluster VMs' local disks
@@ -1702,8 +1701,7 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
"enable_elastic_disk":
"description": |-
Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk
-space when its Spark workers are running low on disk space. This feature requires specific AWS
-permissions to function correctly - refer to the User Guide for more details.
+space when its Spark workers are running low on disk space.
"enable_local_disk_encryption":
"description": |-
Whether to enable LUKS on cluster VMs' local disks
@@ -4961,7 +4959,9 @@ github.com/databricks/databricks-sdk-go/service/sql.AlertV2Notification:
Whether to notify alert subscribers when alert returns back to normal.
"retrigger_seconds":
"description": |-
-Number of seconds an alert must wait after being triggered to rearm itself. After rearming, it can be triggered again. If 0 or not specified, the alert will not be triggered again.
+Number of seconds an alert waits after being triggered before it is allowed to send another notification.
+If set to 0 or omitted, the alert will not send any further notifications after the first trigger.
+Setting this value to 1 allows the alert to send a notification on every evaluation where the condition is met, effectively making it always retrigger for notification purposes.
"subscriptions": {}
github.com/databricks/databricks-sdk-go/service/sql.AlertV2Operand:
"column": {}
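
To make the reworded retrigger_seconds semantics above concrete, here is a small, self-contained model of the re-notification rule. The `canNotify` helper and the timestamps are purely illustrative assumptions, not part of the SDK or the alerts backend.

```go
package main

import (
	"fmt"
	"time"
)

// canNotify models the retrigger_seconds rule described above: after an alert
// has notified at lastNotified, it may notify again only once retriggerSeconds
// have elapsed; a value of 0 means it never re-notifies after the first trigger.
func canNotify(lastNotified, now time.Time, retriggerSeconds int) bool {
	if lastNotified.IsZero() {
		return true // first trigger always notifies
	}
	if retriggerSeconds == 0 {
		return false // 0 or omitted: no further notifications
	}
	return now.Sub(lastNotified) >= time.Duration(retriggerSeconds)*time.Second
}

func main() {
	first := time.Date(2024, 1, 1, 9, 0, 0, 0, time.UTC)

	// retrigger_seconds = 0: a later evaluation never notifies again.
	fmt.Println(canNotify(first, first.Add(10*time.Minute), 0)) // false

	// retrigger_seconds = 1: any later evaluation where the condition still
	// holds notifies again, since evaluations are more than a second apart.
	fmt.Println(canNotify(first, first.Add(10*time.Minute), 1)) // true

	// retrigger_seconds = 3600: only evaluations at least an hour after the
	// last notification re-notify.
	fmt.Println(canNotify(first, first.Add(30*time.Minute), 3600)) // false
	fmt.Println(canNotify(first, first.Add(2*time.Hour), 3600))    // true
}
```
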
6 changes: 3 additions & 3 deletions bundle/schema/jsonschema.json

Some generated files are not rendered by default.

2 changes: 2 additions & 0 deletions cmd/workspace/cmd.go

Some generated files are not rendered by default.

2 changes: 2 additions & 0 deletions cmd/workspace/database/database.go

Some generated files are not rendered by default.

4 changes: 4 additions & 0 deletions cmd/workspace/lakeview/lakeview.go

Some generated files are not rendered by default.
