| <aname="input_allow_cluster_create"></a> [allow\_cluster\_create](#input\_allow\_cluster\_create)| This is a field to allow the group to have cluster create privileges. More fine grained permissions could be assigned with databricks\_permissions and cluster\_id argument. Everyone without allow\_cluster\_create argument set, but with permission to use Cluster Policy would be able to create clusters, but within boundaries of that specific policy. |`bool`|`true`| no |
381
402
| <aname="input_allow_instance_pool_create"></a> [allow\_instance\_pool\_create](#input\_allow\_instance\_pool\_create)| This is a field to allow the group to have instance pool create privileges. More fine grained permissions could be assigned with databricks\_permissions and instance\_pool\_id argument. |`bool`|`true`| no |
382
403
| <aname="input_allow_sql_analytics_access"></a> [allow\_sql\_analytics\_access](#input\_allow\_sql\_analytics\_access)| This is a field to allow the group to have access to SQL Analytics feature through databricks\_sql\_endpoint. |`bool`|`true`| no |
…
| <a name="input_cluster_access_control"></a> [cluster\_access\_control](#input\_cluster\_access\_control) | Cluster access control | `any` | `null` | no |
| <a name="input_cluster_autotermination_minutes"></a> [cluster\_autotermination\_minutes](#input\_cluster\_autotermination\_minutes) | Cluster auto-termination duration in minutes | `number` | `30` | no |
| <a name="input_cluster_id"></a> [cluster\_id](#input\_cluster\_id) | Existing cluster id | `string` | `null` | no |
| <a name="input_cluster_name"></a> [cluster\_name](#input\_cluster\_name) | Cluster name | `string` | `null` | no |
| <a name="input_cluster_policy_id"></a> [cluster\_policy\_id](#input\_cluster\_policy\_id) | Existing cluster policy id | `string` | `null` | no |
| <a name="input_create_group"></a> [create\_group](#input\_create\_group) | Create a new group; if the group already exists, the deployment will fail. | `bool` | `false` | no |
| <a name="input_create_user"></a> [create\_user](#input\_create\_user) | Create a new user; if the user already exists, the deployment will fail. | `bool` | `false` | no |
…
| <a name="input_deploy_worker_instance_pool"></a> [deploy\_worker\_instance\_pool](#input\_deploy\_worker\_instance\_pool) | Deploy a worker instance pool | `bool` | `false` | no |
| <a name="input_driver_node_type_id"></a> [driver\_node\_type\_id](#input\_driver\_node\_type\_id) | The node type of the Spark driver. This field is optional; if unset, the API sets the driver node type to the same value as node\_type\_id. | `string` | `null` | no |
| <a name="input_email_notifications"></a> [email\_notifications](#input\_email\_notifications) | Email notification block. | `any` | `null` | no |
| <a name="input_existing_cluster"></a> [existing\_cluster](#input\_existing\_cluster) | Use an existing cluster for the job | `bool` | `false` | no |
| <a name="input_fixed_value"></a> [fixed\_value](#input\_fixed\_value) | Number of nodes in the cluster. | `number` | `0` | no |
| <a name="input_gb_per_core"></a> [gb\_per\_core](#input\_gb\_per\_core) | Number of gigabytes per core available on the instance. Conflicts with min\_memory\_gb. Defaults to 0. | `string` | `0` | no |
| <a name="input_gpu"></a> [gpu](#input\_gpu) | GPU required or not. | `bool` | `false` | no |
…
| <a name="input_group_can_restart"></a> [group\_can\_restart](#input\_group\_can\_restart) | Group allowed to access the platform. | `string` | `""` | no |
| <a name="input_idle_instance_autotermination_minutes"></a> [idle\_instance\_autotermination\_minutes](#input\_idle\_instance\_autotermination\_minutes) | Idle instance auto-termination duration in minutes | `number` | `20` | no |
| <a name="input_instance_pool_access_control"></a> [instance\_pool\_access\_control](#input\_instance\_pool\_access\_control) | Instance pool access control | `any` | `null` | no |
| <a name="input_instance_profile_arn"></a> [instance\_profile\_arn](#input\_instance\_profile\_arn) | ARN attribute of the aws\_iam\_instance\_profile output, the EC2 instance profile association to an AWS IAM role. This ARN is validated upon resource creation and validation cannot be skipped. | `any` | `null` | no |
| <a name="input_is_meta_instance_profile"></a> [is\_meta\_instance\_profile](#input\_is\_meta\_instance\_profile) | Whether the instance profile is a meta instance profile. Used only in IAM credential passthrough. | `any` | `false` | no |
| <a name="input_jobs_access_control"></a> [jobs\_access\_control](#input\_jobs\_access\_control) | Jobs access control | `any` | `null` | no |
| <a name="input_language"></a> [language](#input\_language) | Notebook language | `string` | `"PYTHON"` | no |
| <a name="input_local_disk"></a> [local\_disk](#input\_local\_disk) | Pick only nodes with local storage. | `string` | `true` | no |
| <a name="input_local_notebooks"></a> [local\_notebooks](#input\_local\_notebooks) | Local path to the notebook(s) that will be used by the job | `any` | `[]` | no |
| <a name="input_local_path"></a> [local\_path](#input\_local\_path) | Notebook(s) location on the user's machine | `string` | `null` | no |
| <a name="input_max_capacity"></a> [max\_capacity](#input\_max\_capacity) | Instance pool maximum capacity | `number` | `3` | no |
| <a name="input_max_concurrent_runs"></a> [max\_concurrent\_runs](#input\_max\_concurrent\_runs) | An optional maximum allowed number of concurrent runs of the job. | `number` | `null` | no |
| <a name="input_max_retries"></a> [max\_retries](#input\_max\_retries) | An optional maximum number of times to retry an unsuccessful run. A run is considered unsuccessful if it completes with a FAILED result\_state or INTERNAL\_ERROR life\_cycle\_state. The value -1 means retry indefinitely and the value 0 means never retry. The default behavior is to never retry. | `number` | `0` | no |
…
| <a name="input_min_memory_gb"></a> [min\_memory\_gb](#input\_min\_memory\_gb) | Minimum amount of memory per node in gigabytes. Defaults to 0. | `string` | `0` | no |
| <a name="input_min_retry_interval_millis"></a> [min\_retry\_interval\_millis](#input\_min\_retry\_interval\_millis) | An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried. | `number` | `null` | no |
| <a name="input_ml"></a> [ml](#input\_ml) | ML required or not. | `bool` | `false` | no |
| <a name="input_notebooks"></a> [notebooks](#input\_notebooks) | Local path to the notebook(s) that will be deployed | `any` | `[]` | no |
| <a name="input_notebooks_access_control"></a> [notebooks\_access\_control](#input\_notebooks\_access\_control) | Notebook access control | `any` | `null` | no |
| <a name="input_num_workers"></a> [num\_workers](#input\_num\_workers) | Number of workers for the job | `number` | `1` | no |
| <a name="input_policy_access_control"></a> [policy\_access\_control](#input\_policy\_access\_control) | Policy access control | `any` | `null` | no |
| <a name="input_policy_overrides"></a> [policy\_overrides](#input\_policy\_overrides) | Cluster policy overrides | `any` | `null` | no |
| <a name="input_prjid"></a> [prjid](#input\_prjid) | (Required) Name of the project/stack, e.g. mystack, nifieks, demoaci. Should not be changed after running 'tf apply'. | `string` | n/a | yes |
| <a name="input_remote_notebooks"></a> [remote\_notebooks](#input\_remote\_notebooks) | Path to notebook(s) in the Databricks workspace that will be used by the job | `any` | `[]` | no |
| <a name="input_retry_on_timeout"></a> [retry\_on\_timeout](#input\_retry\_on\_timeout) | An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout. | `bool` | `false` | no |
| <a name="input_schedule"></a> [schedule](#input\_schedule) | Job schedule configuration. | `map(any)` | `null` | no |
| <a name="input_spark_conf"></a> [spark\_conf](#input\_spark\_conf) | Optional Spark configuration block | `any` | `null` | no |
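
The various `*_access_control` inputs are typed `any`, and their expected shape is not documented in this table. As a hedged sketch only, the key names below mirror the Databricks provider's `databricks_permissions` arguments (`group_name`, `permission_level`) and are an assumption about the structure the module accepts; check the module's `variables.tf` and examples before relying on them:

```hcl
# Assumed shape -- verify against the module's variables.tf before use.
notebooks_access_control = [
  {
    group_name       = "data-engineers"   # hypothetical group
    permission_level = "CAN_RUN"
  }
]

jobs_access_control = [
  {
    group_name       = "data-engineers"
    permission_level = "CAN_MANAGE_RUN"
  }
]
```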