Commit 48a01ca
Convert TestDeployBundleWithCluster to an acceptance test (#2929)
## Why

One change in the series of changes converting integration tests into acceptance tests. This will allow for easier testing of various backing solutions for bundle deployment.
1 parent 72bafdb commit 48a01ca

File tree

10 files changed: +115 additions, -77 deletions


integration/bundle/bundles/clusters/template/databricks.yml.tmpl renamed to acceptance/bundle/resources/clusters/deploy/simple/databricks.yml.tmpl

Lines changed: 7 additions & 7 deletions

@@ -1,24 +1,24 @@
 bundle:
-  name: basic
+  name: test-deploy-cluster-simple
 
 workspace:
-  root_path: "~/.bundle/{{.unique_id}}"
+  root_path: ~/.bundle/$UNIQUE_NAME
 
 resources:
   clusters:
     test_cluster:
-      cluster_name: "test-cluster-{{.unique_id}}"
-      spark_version: "{{.spark_version}}"
-      node_type_id: "{{.node_type_id}}"
+      cluster_name: test-cluster-$UNIQUE_NAME
+      spark_version: $DEFAULT_SPARK_VERSION
+      node_type_id: $NODE_TYPE_ID
       num_workers: 2
       spark_conf:
         "spark.executor.memory": "2g"
 
   jobs:
     foo:
-      name: test-job-with-cluster-{{.unique_id}}
+      name: test-job-with-cluster-$UNIQUE_NAME
       tasks:
-        - task_key: my_notebook_task
+        - task_key: my_spark_python_task
           existing_cluster_id: "${resources.clusters.test_cluster.cluster_id}"
           spark_python_task:
             python_file: ./hello_world.py
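The template now uses plain environment-variable placeholders instead of Go-template fields, so it can be expanded with `envsubst` before deployment. A minimal sketch of that expansion, with made-up values for `UNIQUE_NAME` and `NODE_TYPE_ID` (the acceptance harness supplies the real ones), emulated here with `sed` so it runs even without the gettext `envsubst` binary:

```shell
# Hypothetical values; in the real test these come from the harness.
UNIQUE_NAME="abc123"
NODE_TYPE_ID="i3.xlarge"

# envsubst would substitute every exported variable; sed stands in
# for just the two placeholders shown here.
sed -e "s|\$UNIQUE_NAME|$UNIQUE_NAME|g" \
    -e "s|\$NODE_TYPE_ID|$NODE_TYPE_ID|g" <<'EOF'
cluster_name: test-cluster-$UNIQUE_NAME
node_type_id: $NODE_TYPE_ID
EOF
# prints:
# cluster_name: test-cluster-abc123
# node_type_id: i3.xlarge
```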

integration/bundle/bundles/clusters/template/hello_world.py renamed to acceptance/bundle/resources/clusters/deploy/simple/hello_world.py

File renamed without changes.
Lines changed: 22 additions & 0 deletions

@@ -0,0 +1,22 @@
+
+>>> [CLI] bundle deploy
+Uploading bundle files to /Workspace/Users/[USERNAME]/.bundle/[UNIQUE_NAME]/files...
+Deploying resources...
+Updating deployment state...
+Deployment complete!
+
+=== Cluster should exist after bundle deployment:
+{
+  "cluster_name": "test-cluster-[UNIQUE_NAME]",
+  "num_workers": 2
+}
+
+>>> [CLI] bundle destroy --auto-approve
+The following resources will be deleted:
+  delete cluster test_cluster
+  delete job foo
+
+All files and directories at the following location will be deleted: /Workspace/Users/[USERNAME]/.bundle/[UNIQUE_NAME]
+
+Deleting files...
+Destroy complete!
Lines changed: 12 additions & 0 deletions

@@ -0,0 +1,12 @@
+envsubst < databricks.yml.tmpl > databricks.yml
+
+cleanup() {
+  trace $CLI bundle destroy --auto-approve
+}
+trap cleanup EXIT
+
+trace $CLI bundle deploy
+
+title "Cluster should exist after bundle deployment:\n"
+CLUSTER_ID=$($CLI bundle summary -o json | jq -r '.resources.clusters.test_cluster.id')
+$CLI clusters get "${CLUSTER_ID}" | jq '{cluster_name,num_workers}'
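The script relies on a shell `trap … EXIT` so the deployed resources are destroyed on every exit path, including failures partway through. A stripped-down sketch of that pattern, with `echo` standing in for the real `$CLI bundle destroy` call:

```shell
#!/bin/sh
# cleanup runs on every exit path: normal completion, an explicit
# `exit`, or a command failing under `set -e`.
cleanup() {
  echo "cleanup ran"   # stands in for: trace $CLI bundle destroy --auto-approve
}
trap cleanup EXIT

echo "deploy step"
# prints:
# deploy step
# cleanup ran
```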
Lines changed: 11 additions & 0 deletions

@@ -0,0 +1,11 @@
+Local = false
+Cloud = true
+RecordRequests = false
+
+Ignore = [
+  "databricks.yml",
+]
+
+[[Repls]]
+Old = "[0-9]{4}-[0-9]{6}-[0-9a-z]{8}"
+New = "[CLUSTER-ID]"
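The `[[Repls]]` entry keeps recorded output stable across runs by rewriting real cluster IDs (four digits, six digits, eight lowercase alphanumerics) to a fixed placeholder. The same substitution expressed with `sed -E`, using a made-up sample ID:

```shell
# "0123-456789-abcdefgh" is an invented ID matching the Repls pattern.
echo 'cluster 0123-456789-abcdefgh is ready' \
  | sed -E 's/[0-9]{4}-[0-9]{6}-[0-9a-z]{8}/[CLUSTER-ID]/g'
# prints: cluster [CLUSTER-ID] is ready
```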
Lines changed: 27 additions & 0 deletions

@@ -0,0 +1,27 @@
+
+>>> cp [TESTROOT]/bundle/resources/clusters/run/spark_python_task/../../deploy/simple/hello_world.py .
+
+>>> cp [TESTROOT]/bundle/resources/clusters/run/spark_python_task/../../deploy/simple/databricks.yml.tmpl .
+
+>>> [CLI] bundle deploy
+Uploading bundle files to /Workspace/Users/[USERNAME]/.bundle/[UNIQUE_NAME]/files...
+Deploying resources...
+Updating deployment state...
+Deployment complete!
+
+>>> [CLI] bundle run foo
+Run URL: [DATABRICKS_URL]/?o=[NUMID]#job/[NUMID]/run/[NUMID]
+
+[TIMESTAMP] "test-job-with-cluster-[UNIQUE_NAME]" RUNNING
+[TIMESTAMP] "test-job-with-cluster-[UNIQUE_NAME]" TERMINATED SUCCESS
+Hello World!
+
+>>> [CLI] bundle destroy --auto-approve
+The following resources will be deleted:
+  delete cluster test_cluster
+  delete job foo
+
+All files and directories at the following location will be deleted: /Workspace/Users/[USERNAME]/.bundle/[UNIQUE_NAME]
+
+Deleting files...
+Destroy complete!
Lines changed: 11 additions & 0 deletions

@@ -0,0 +1,11 @@
+trace cp $TESTDIR/../../deploy/simple/hello_world.py .
+trace cp $TESTDIR/../../deploy/simple/databricks.yml.tmpl .
+envsubst < databricks.yml.tmpl > databricks.yml
+
+cleanup() {
+  trace $CLI bundle destroy --auto-approve
+}
+trap cleanup EXIT
+
+trace $CLI bundle deploy
+trace $CLI bundle run foo
Lines changed: 25 additions & 0 deletions

@@ -0,0 +1,25 @@
+Local = false
+CloudSlow = true
+RecordRequests = false
+
+Ignore = [
+  "databricks.yml",
+  "databricks.yml.tmpl",
+  "hello_world.py",
+]
+
+[[Repls]]
+Old = '2\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d'
+New = "[TIMESTAMP]"
+
+[[Repls]]
+Old = '\d{5,}'
+New = "[NUMID]"
+
+[[Repls]]
+Old = '\\\\'
+New = '/'
+
+[[Repls]]
+Old = '\\'
+New = '/'
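These replacements normalize run-specific noise in the recorded output: timestamps, numeric job/run IDs, and Windows path separators. A `sed -E` sketch of the first two rules on an invented output line (Go-regexp `\d` is written as `[0-9]`, which POSIX sed understands); the harness itself applies them in Go, not sed:

```shell
echo '2025-05-20 10:11:12 "test-job" run 123456789 finished' \
  | sed -E -e 's/2[0-9]{3}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}/[TIMESTAMP]/g' \
           -e 's/[0-9]{5,}/[NUMID]/g'
# prints: [TIMESTAMP] "test-job" run [NUMID] finished
```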

integration/bundle/bundles/clusters/databricks_template_schema.json

Lines changed: 0 additions & 16 deletions
This file was deleted.

integration/bundle/clusters_test.go

Lines changed: 0 additions & 54 deletions
This file was deleted.
