| [databricks_current_user.me](https://registry.terraform.io/providers/databricks/databricks/latest/docs/data-sources/current_user) | data source |
| [databricks_node_type.cluster_node_type](https://registry.terraform.io/providers/databricks/databricks/latest/docs/data-sources/node_type) | data source |
| [databricks_spark_version.latest](https://registry.terraform.io/providers/databricks/databricks/latest/docs/data-sources/spark_version) | data source |
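
These data sources let the module pick defaults at plan time: the caller's identity, a node type matching the selection inputs below, and a current runtime version. A minimal sketch of how such data sources are typically declared against the Databricks provider (the filter values shown are illustrative and the module's internal declarations may differ):

```hcl
# Smallest node type matching the module's node filters; the arguments
# mirror this module's category/local_disk style inputs.
data "databricks_node_type" "cluster_node_type" {
  category   = "General purpose"
  local_disk = true
}

# Latest supported Databricks runtime; ml/gpu arguments select
# specialized runtimes.
data "databricks_spark_version" "latest" {
  latest = true
}

# Identity of the caller used to provision resources.
data "databricks_current_user" "me" {}
```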
## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <aname="input_allow_cluster_create"></a> [allow\_cluster\_create](#input\_allow\_cluster\_create)| This is a field to allow the group to have cluster create privileges. More fine grained permissions could be assigned with databricks\_permissions and cluster\_id argument. Everyone without allow\_cluster\_create argument set, but with permission to use Cluster Policy would be able to create clusters, but within boundaries of that specific policy. |`bool`|`true`| no |
366
+
| <aname="input_allow_instance_pool_create"></a> [allow\_instance\_pool\_create](#input\_allow\_instance\_pool\_create)| This is a field to allow the group to have instance pool create privileges. More fine grained permissions could be assigned with databricks\_permissions and instance\_pool\_id argument. |`bool`|`true`| no |
367
+
| <aname="input_always_running"></a> [always\_running](#input\_always\_running)| Whenever the job is always running, like a Spark Streaming application, on every update restart the current active run or start it again, if nothing it is not running. False by default. |`bool`|`false`| no |
368
+
| <aname="input_auto_scaling"></a> [auto\_scaling](#input\_auto\_scaling)| Number of min and max workers in auto scale. |`list(any)`|`null`| no |
369
+
| <aname="input_aws_attributes"></a> [aws\_attributes](#input\_aws\_attributes)| Optional configuration block contains attributes related to clusters running on AWS. |`any`|`null`| no |
370
+
| <aname="input_azure_attributes"></a> [azure\_attributes](#input\_azure\_attributes)| Optional configuration block contains attributes related to clusters running on Azure. |`any`|`null`| no |
371
+
| <aname="input_category"></a> [category](#input\_category)| Node category, which can be one of: General purpose, Memory optimized, Storage optimized, Compute optimized, GPU |`string`|`"General purpose"`| no |
372
+
| <aname="input_cluster_access_control"></a> [cluster\_access\_control](#input\_cluster\_access\_control)| Cluster access control |`any`|`null`| no |
373
+
| <aname="input_cluster_autotermination_minutes"></a> [cluster\_autotermination\_minutes](#input\_cluster\_autotermination\_minutes)| cluster auto termination duration |`number`|`30`| no |
374
+
| <aname="input_cluster_id"></a> [cluster\_id](#input\_cluster\_id)| Existing cluster id |`string`|`null`| no |
375
+
| <aname="input_cluster_name"></a> [cluster\_name](#input\_cluster\_name)| Cluster name |`string`|`null`| no |
376
+
| <aname="input_cluster_policy_id"></a> [cluster\_policy\_id](#input\_cluster\_policy\_id)| Exiting cluster policy id |`string`|`null`| no |
377
+
| <aname="input_create_group"></a> [create\_group](#input\_create\_group)| Create a new group, if group already exists the deployment will fail. |`bool`|`false`| no |
378
+
| <aname="input_create_user"></a> [create\_user](#input\_create\_user)| Create a new user, if user already exists the deployment will fail. |`bool`|`false`| no |
379
+
| <aname="input_custom_tags"></a> [custom\_tags](#input\_custom\_tags)| Extra custom tags |`any`|`null`| no |
380
+
| <aname="input_databricks_username"></a> [databricks\_username](#input\_databricks\_username)| User allowed to access the platform. |`string`|`""`| no |
381
+
| <aname="input_deploy_cluster"></a> [deploy\_cluster](#input\_deploy\_cluster)| feature flag, true or false |`bool`|`false`| no |
382
+
| <aname="input_deploy_cluster_policy"></a> [deploy\_cluster\_policy](#input\_deploy\_cluster\_policy)| feature flag, true or false |`bool`|`false`| no |
383
+
| <aname="input_deploy_driver_instance_pool"></a> [deploy\_driver\_instance\_pool](#input\_deploy\_driver\_instance\_pool)| Driver instance pool |`bool`|`false`| no |
384
+
| <aname="input_deploy_job_cluster"></a> [deploy\_job\_cluster](#input\_deploy\_job\_cluster)| feature flag, true or false |`bool`|`false`| no |
385
+
| <aname="input_deploy_jobs"></a> [deploy\_jobs](#input\_deploy\_jobs)| feature flag, true or false |`bool`|`false`| no |
386
+
| <aname="input_deploy_worker_instance_pool"></a> [deploy\_worker\_instance\_pool](#input\_deploy\_worker\_instance\_pool)| Worker instance pool |`bool`|`false`| no |
387
+
| <aname="input_driver_node_type_id"></a> [driver\_node\_type\_id](#input\_driver\_node\_type\_id)| The node type of the Spark driver. This field is optional; if unset, API will set the driver node type to the same value as node\_type\_id. |`string`|`null`| no |
388
+
| <aname="input_email_notifications"></a> [email\_notifications](#input\_email\_notifications)| Email notification block. |`any`|`null`| no |
389
+
| <aname="input_fixed_value"></a> [fixed\_value](#input\_fixed\_value)| Number of nodes in the cluster. |`number`|`0`| no |
390
+
| <aname="input_gb_per_core"></a> [gb\_per\_core](#input\_gb\_per\_core)| Number of gigabytes per core available on instance. Conflicts with min\_memory\_gb. Defaults to 0. |`string`|`0`| no |
391
+
| <aname="input_gcp_attributes"></a> [gcp\_attributes](#input\_gcp\_attributes)| Optional configuration block contains attributes related to clusters running on GCP. |`any`|`null`| no |
392
+
| <aname="input_gpu"></a> [gpu](#input\_gpu)| GPU required or not. |`bool`|`false`| no |
393
+
| <aname="input_idle_instance_autotermination_minutes"></a> [idle\_instance\_autotermination\_minutes](#input\_idle\_instance\_autotermination\_minutes)| idle instance auto termination duration |`number`|`20`| no |
394
+
| <aname="input_instance_pool_access_control"></a> [instance\_pool\_access\_control](#input\_instance\_pool\_access\_control)| Instance pool access control |`any`|`null`| no |
395
+
| <aname="input_jobs_access_control"></a> [jobs\_access\_control](#input\_jobs\_access\_control)| Jobs access control |`any`|`null`| no |
396
+
| <aname="input_libraries"></a> [libraries](#input\_libraries)| Installs a library on databricks\_cluster |`map(any)`|`{}`| no |
397
+
| <aname="input_local_disk"></a> [local\_disk](#input\_local\_disk)| Pick only nodes with local storage. Defaults to false. |`string`|`true`| no |
398
+
| <aname="input_local_notebooks"></a> [local\_notebooks](#input\_local\_notebooks)| Local path to the notebook(s) that will be used by the job |`any`|`[]`| no |
399
+
| <aname="input_max_capacity"></a> [max\_capacity](#input\_max\_capacity)| instance pool maximum capacity |`number`|`3`| no |
400
+
| <aname="input_max_concurrent_runs"></a> [max\_concurrent\_runs](#input\_max\_concurrent\_runs)| An optional maximum allowed number of concurrent runs of the job. |`number`|`null`| no |
401
+
| <aname="input_max_retries"></a> [max\_retries](#input\_max\_retries)| An optional maximum number of times to retry an unsuccessful run. A run is considered to be unsuccessful if it completes with a FAILED result\_state or INTERNAL\_ERROR life\_cycle\_state. The value -1 means to retry indefinitely and the value 0 means to never retry. The default behavior is to never retry. |`number`|`0`| no |
402
+
| <aname="input_min_cores"></a> [min\_cores](#input\_min\_cores)| Minimum number of CPU cores available on instance. Defaults to 0. |`string`|`0`| no |
403
+
| <aname="input_min_gpus"></a> [min\_gpus](#input\_min\_gpus)| Minimum number of GPU's attached to instance. Defaults to 0. |`string`|`0`| no |
404
+
| <aname="input_min_idle_instances"></a> [min\_idle\_instances](#input\_min\_idle\_instances)| instance pool minimum idle instances |`number`|`1`| no |
405
+
| <aname="input_min_memory_gb"></a> [min\_memory\_gb](#input\_min\_memory\_gb)| Minimum amount of memory per node in gigabytes. Defaults to 0. |`string`|`0`| no |
406
+
| <aname="input_min_retry_interval_millis"></a> [min\_retry\_interval\_millis](#input\_min\_retry\_interval\_millis)| An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried. |`number`|`null`| no |
407
+
| <aname="input_ml"></a> [ml](#input\_ml)| ML required or not. |`bool`|`false`| no |
408
+
| <aname="input_notebooks"></a> [notebooks](#input\_notebooks)| Local path to the notebook(s) that will be deployed |`any`|`[]`| no |
409
+
| <aname="input_notebooks_access_control"></a> [notebooks\_access\_control](#input\_notebooks\_access\_control)| Notebook access control |`any`|`null`| no |
410
+
| <aname="input_policy_access_control"></a> [policy\_access\_control](#input\_policy\_access\_control)| Policy access control |`any`|`null`| no |
411
+
| <aname="input_policy_overrides"></a> [policy\_overrides](#input\_policy\_overrides)| Cluster policy overrides |`any`|`null`| no |
412
+
| <aname="input_prjid"></a> [prjid](#input\_prjid)| (Required) Name of the project/stack e.g: mystack, nifieks, demoaci. Should not be changed after running 'tf apply' |`string`| n/a | yes |
413
+
| <aname="input_remote_notebooks"></a> [remote\_notebooks](#input\_remote\_notebooks)| Path to notebook(s) in the databricks workspace that will be used by the job |`any`|`[]`| no |
414
+
| <aname="input_retry_on_timeout"></a> [retry\_on\_timeout](#input\_retry\_on\_timeout)| An optional policy to specify whether to retry a job when it times out. The default behavior is to not retry on timeout. |`bool`|`false`| no |
415
+
| <aname="input_schedule"></a> [schedule](#input\_schedule)| Job schedule configuration. |`map(any)`|`null`| no |
416
+
| <aname="input_spark_conf"></a> [spark\_conf](#input\_spark\_conf)| Map with key-value pairs to fine-tune Spark clusters, where you can provide custom Spark configuration properties in a cluster configuration. |`any`|`null`| no |
417
+
| <aname="input_spark_env_vars"></a> [spark\_env\_vars](#input\_spark\_env\_vars)| Map with environment variable key-value pairs to fine-tune Spark clusters. Key-value pairs of the form (X,Y) are exported (i.e., X='Y') while launching the driver and workers. |`any`|`null`| no |
418
+
| <aname="input_spark_version"></a> [spark\_version](#input\_spark\_version)| Runtime version of the cluster. Any supported databricks\_spark\_version id. We advise using Cluster Policies to restrict the list of versions for simplicity while maintaining enough control. |`string`|`null`| no |
419
+
| <aname="input_task_parameters"></a> [task\_parameters](#input\_task\_parameters)| Base parameters to be used for each run of this job. |`map(any)`|`{}`| no |
420
+
| <aname="input_teamid"></a> [teamid](#input\_teamid)| (Required) Name of the team/group e.g. devops, dataengineering. Should not be changed after running 'tf apply' |`string`| n/a | yes |
421
+
| <aname="input_timeout"></a> [timeout](#input\_timeout)| An optional timeout applied to each run of this job. The default behavior is to have no timeout. |`number`|`null`| no |
422
+
| <aname="input_worker_node_type_id"></a> [worker\_node\_type\_id](#input\_worker\_node\_type\_id)| The node type of the Spark worker. |`string`|`null`| no |
## Outputs

| Name | Description |
|------|-------------|
| <a name="output_cluster_id"></a> [cluster\_id](#output\_cluster\_id) | Databricks cluster id |
| <a name="output_cluster_name"></a> [cluster\_name](#output\_cluster\_name) | Databricks cluster name |
| <a name="output_databricks_user"></a> [databricks\_user](#output\_databricks\_user) | Databricks user name |
| <a name="output_databricks_user_id"></a> [databricks\_user\_id](#output\_databricks\_user\_id) | Databricks user id |
| <a name="output_existing_cluster_new_job_existing_notebooks_id"></a> [existing\_cluster\_new\_job\_existing\_notebooks\_id](#output\_existing\_cluster\_new\_job\_existing\_notebooks\_id) | Databricks job id (existing cluster, existing notebooks) |
| <a name="output_existing_cluster_new_job_existing_notebooks_job"></a> [existing\_cluster\_new\_job\_existing\_notebooks\_job](#output\_existing\_cluster\_new\_job\_existing\_notebooks\_job) | Databricks job url (existing cluster, existing notebooks) |
| <a name="output_existing_cluster_new_job_new_notebooks_id"></a> [existing\_cluster\_new\_job\_new\_notebooks\_id](#output\_existing\_cluster\_new\_job\_new\_notebooks\_id) | Databricks job id (existing cluster, new notebooks) |
| <a name="output_existing_cluster_new_job_new_notebooks_job"></a> [existing\_cluster\_new\_job\_new\_notebooks\_job](#output\_existing\_cluster\_new\_job\_new\_notebooks\_job) | Databricks job url (existing cluster, new notebooks) |