
Commit eda41d1

terraform-docs: automated action
1 parent e577e70 commit eda41d1

README.md

Lines changed: 140 additions & 1 deletion
@@ -304,5 +304,144 @@ Error: Scope <scope name> does not exist!
```

<!-- BEGIN_TF_DOCS -->
## Requirements

| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.0.1 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 4.14 |
| <a name="requirement_databricks"></a> [databricks](#requirement\_databricks) | >= 0.5.7 |

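These constraints pin Terraform itself plus the two providers the module touches. A minimal sketch of the matching `required_providers` block for a calling root module (the registry source addresses are assumed to be the standard `hashicorp/aws` and `databricks/databricks`):

```hcl
terraform {
  # Version constraints mirror the Requirements table above.
  required_version = ">= 1.0.1"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.14"
    }
    databricks = {
      source  = "databricks/databricks"
      version = ">= 0.5.7"
    }
  }
}
```
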
## Providers

| Name | Version |
|------|---------|
| <a name="provider_databricks"></a> [databricks](#provider\_databricks) | >= 0.5.7 |

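Only the databricks provider is instantiated by the module's resources. A sketch of configuring it in the calling configuration; the host and token shown are placeholders, not inputs of this module:

```hcl
# Placeholder configuration; credentials can equally come from
# ~/.databrickscfg or the DATABRICKS_HOST / DATABRICKS_TOKEN
# environment variables.
provider "databricks" {
  host  = "https://my-workspace.cloud.databricks.com" # placeholder
  token = var.databricks_token                        # placeholder variable
}
```
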
## Modules

No modules.

## Resources

| Name | Type |
|------|------|
| [databricks_cluster.cluster](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/cluster) | resource |
| [databricks_cluster_policy.this](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/cluster_policy) | resource |
| [databricks_group.this](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/group) | resource |
| [databricks_group_member.group_members](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/group_member) | resource |
| [databricks_instance_pool.driver_instance_nodes](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/instance_pool) | resource |
| [databricks_instance_pool.worker_instance_nodes](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/instance_pool) | resource |
| [databricks_instance_profile.shared](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/instance_profile) | resource |
| [databricks_job.existing_cluster_new_job_existing_notebooks](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) | resource |
| [databricks_job.existing_cluster_new_job_new_notebooks](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) | resource |
| [databricks_job.new_cluster_new_job_existing_notebooks](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) | resource |
| [databricks_job.new_cluster_new_job_new_notebooks](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) | resource |
| [databricks_library.maven](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/library) | resource |
| [databricks_library.python_wheel](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/library) | resource |
| [databricks_notebook.notebook_file](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/notebook) | resource |
| [databricks_notebook.notebook_file_deployment](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/notebook) | resource |
| [databricks_permissions.cluster](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/permissions) | resource |
| [databricks_permissions.driver_pool](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/permissions) | resource |
| [databricks_permissions.existing_cluster_new_job_existing_notebooks](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/permissions) | resource |
| [databricks_permissions.existing_cluster_new_job_new_notebooks](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/permissions) | resource |
| [databricks_permissions.jobs_notebook](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/permissions) | resource |
| [databricks_permissions.new_cluster_new_job_existing_notebooks](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/permissions) | resource |
| [databricks_permissions.new_cluster_new_job_new_notebooks](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/permissions) | resource |
| [databricks_permissions.notebook](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/permissions) | resource |
| [databricks_permissions.policy](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/permissions) | resource |
| [databricks_permissions.worker_pool](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/permissions) | resource |
| [databricks_secret_acl.spectators](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/secret_acl) | resource |
| [databricks_user.users](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/user) | resource |
| [databricks_current_user.me](https://registry.terraform.io/providers/databricks/databricks/latest/docs/data-sources/current_user) | data source |
| [databricks_node_type.cluster_node_type](https://registry.terraform.io/providers/databricks/databricks/latest/docs/data-sources/node_type) | data source |
| [databricks_spark_version.latest](https://registry.terraform.io/providers/databricks/databricks/latest/docs/data-sources/spark_version) | data source |

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_add_instance_profile_to_workspace"></a> [add\_instance\_profile\_to\_workspace](#input\_add\_instance\_profile\_to\_workspace) | Whether to add an existing AWS instance profile to the workspace. | `bool` | `false` | no |
| <a name="input_allow_cluster_create"></a> [allow\_cluster\_create](#input\_allow\_cluster\_create) | Allow the group to have cluster-create privileges. More fine-grained permissions can be assigned with databricks\_permissions and the cluster\_id argument. Users without allow\_cluster\_create set, but with permission to use a cluster policy, can still create clusters within the boundaries of that policy. | `bool` | `true` | no |
| <a name="input_allow_instance_pool_create"></a> [allow\_instance\_pool\_create](#input\_allow\_instance\_pool\_create) | Allow the group to have instance-pool-create privileges. More fine-grained permissions can be assigned with databricks\_permissions and the instance\_pool\_id argument. | `bool` | `true` | no |
| <a name="input_always_running"></a> [always\_running](#input\_always\_running) | Whether the job should always be running, like a Spark Streaming application: on every update, restart the current active run, or start a new one if none is running. Defaults to false. | `bool` | `false` | no |
| <a name="input_auto_scaling"></a> [auto\_scaling](#input\_auto\_scaling) | Minimum and maximum number of workers for autoscaling. | `list(any)` | `null` | no |
| <a name="input_aws_attributes"></a> [aws\_attributes](#input\_aws\_attributes) | Optional configuration block with attributes for clusters running on AWS. | `any` | `null` | no |
| <a name="input_azure_attributes"></a> [azure\_attributes](#input\_azure\_attributes) | Optional configuration block with attributes for clusters running on Azure. | `any` | `null` | no |
| <a name="input_category"></a> [category](#input\_category) | Node category, one of: General purpose, Memory optimized, Storage optimized, Compute optimized, GPU. | `string` | `"General purpose"` | no |
| <a name="input_cluster_access_control"></a> [cluster\_access\_control](#input\_cluster\_access\_control) | Cluster access control. | `any` | `null` | no |
| <a name="input_cluster_autotermination_minutes"></a> [cluster\_autotermination\_minutes](#input\_cluster\_autotermination\_minutes) | Cluster auto-termination duration in minutes. | `number` | `30` | no |
| <a name="input_cluster_id"></a> [cluster\_id](#input\_cluster\_id) | Existing cluster ID. | `string` | `null` | no |
| <a name="input_cluster_name"></a> [cluster\_name](#input\_cluster\_name) | Cluster name. | `string` | `null` | no |
| <a name="input_cluster_policy_id"></a> [cluster\_policy\_id](#input\_cluster\_policy\_id) | Existing cluster policy ID. | `string` | `null` | no |
| <a name="input_create_group"></a> [create\_group](#input\_create\_group) | Create a new group; if the group already exists, the deployment will fail. | `bool` | `false` | no |
| <a name="input_create_user"></a> [create\_user](#input\_create\_user) | Create a new user; if the user already exists, the deployment will fail. | `bool` | `false` | no |
| <a name="input_custom_tags"></a> [custom\_tags](#input\_custom\_tags) | Extra custom tags. | `any` | `null` | no |
| <a name="input_databricks_username"></a> [databricks\_username](#input\_databricks\_username) | User allowed to access the platform. | `string` | `""` | no |
| <a name="input_deploy_cluster"></a> [deploy\_cluster](#input\_deploy\_cluster) | Feature flag to deploy a cluster. | `bool` | `false` | no |
| <a name="input_deploy_cluster_policy"></a> [deploy\_cluster\_policy](#input\_deploy\_cluster\_policy) | Feature flag to deploy a cluster policy. | `bool` | `false` | no |
| <a name="input_deploy_driver_instance_pool"></a> [deploy\_driver\_instance\_pool](#input\_deploy\_driver\_instance\_pool) | Feature flag to deploy a driver instance pool. | `bool` | `false` | no |
| <a name="input_deploy_job_cluster"></a> [deploy\_job\_cluster](#input\_deploy\_job\_cluster) | Feature flag to deploy a job cluster. | `bool` | `false` | no |
| <a name="input_deploy_jobs"></a> [deploy\_jobs](#input\_deploy\_jobs) | Feature flag to deploy jobs. | `bool` | `false` | no |
| <a name="input_deploy_worker_instance_pool"></a> [deploy\_worker\_instance\_pool](#input\_deploy\_worker\_instance\_pool) | Feature flag to deploy a worker instance pool. | `bool` | `false` | no |
| <a name="input_driver_node_type_id"></a> [driver\_node\_type\_id](#input\_driver\_node\_type\_id) | The node type of the Spark driver. Optional; if unset, the API sets the driver node type to the same value as node\_type\_id. | `string` | `null` | no |
| <a name="input_email_notifications"></a> [email\_notifications](#input\_email\_notifications) | Email notification block. | `any` | `null` | no |
| <a name="input_fixed_value"></a> [fixed\_value](#input\_fixed\_value) | Number of nodes in the cluster. | `number` | `0` | no |
| <a name="input_gb_per_core"></a> [gb\_per\_core](#input\_gb\_per\_core) | Number of gigabytes per core available on the instance. Conflicts with min\_memory\_gb. Defaults to 0. | `string` | `0` | no |
| <a name="input_gcp_attributes"></a> [gcp\_attributes](#input\_gcp\_attributes) | Optional configuration block with attributes for clusters running on GCP. | `any` | `null` | no |
| <a name="input_gpu"></a> [gpu](#input\_gpu) | Whether a GPU is required. | `bool` | `false` | no |
| <a name="input_idle_instance_autotermination_minutes"></a> [idle\_instance\_autotermination\_minutes](#input\_idle\_instance\_autotermination\_minutes) | Idle instance auto-termination duration in minutes. | `number` | `20` | no |
| <a name="input_instance_pool_access_control"></a> [instance\_pool\_access\_control](#input\_instance\_pool\_access\_control) | Instance pool access control. | `any` | `null` | no |
| <a name="input_jobs_access_control"></a> [jobs\_access\_control](#input\_jobs\_access\_control) | Jobs access control. | `any` | `null` | no |
| <a name="input_libraries"></a> [libraries](#input\_libraries) | Installs a library on databricks\_cluster. | `map(any)` | `{}` | no |
| <a name="input_local_disk"></a> [local\_disk](#input\_local\_disk) | Pick only nodes with local storage. | `string` | `true` | no |
| <a name="input_local_notebooks"></a> [local\_notebooks](#input\_local\_notebooks) | Local path to the notebook(s) that will be used by the job. | `any` | `[]` | no |
| <a name="input_max_capacity"></a> [max\_capacity](#input\_max\_capacity) | Instance pool maximum capacity. | `number` | `3` | no |
| <a name="input_max_concurrent_runs"></a> [max\_concurrent\_runs](#input\_max\_concurrent\_runs) | An optional maximum allowed number of concurrent runs of the job. | `number` | `null` | no |
| <a name="input_max_retries"></a> [max\_retries](#input\_max\_retries) | An optional maximum number of times to retry an unsuccessful run. A run is considered unsuccessful if it completes with a FAILED result\_state or INTERNAL\_ERROR life\_cycle\_state. The value -1 means retry indefinitely; the value 0 means never retry. The default behavior is to never retry. | `number` | `0` | no |
| <a name="input_min_cores"></a> [min\_cores](#input\_min\_cores) | Minimum number of CPU cores available on the instance. Defaults to 0. | `string` | `0` | no |
| <a name="input_min_gpus"></a> [min\_gpus](#input\_min\_gpus) | Minimum number of GPUs attached to the instance. Defaults to 0. | `string` | `0` | no |
| <a name="input_min_idle_instances"></a> [min\_idle\_instances](#input\_min\_idle\_instances) | Instance pool minimum idle instances. | `number` | `1` | no |
| <a name="input_min_memory_gb"></a> [min\_memory\_gb](#input\_min\_memory\_gb) | Minimum amount of memory per node in gigabytes. Defaults to 0. | `string` | `0` | no |
| <a name="input_min_retry_interval_millis"></a> [min\_retry\_interval\_millis](#input\_min\_retry\_interval\_millis) | An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. By default, unsuccessful runs are retried immediately. | `number` | `null` | no |
| <a name="input_ml"></a> [ml](#input\_ml) | Whether an ML runtime is required. | `bool` | `false` | no |
| <a name="input_notebooks"></a> [notebooks](#input\_notebooks) | Local path to the notebook(s) that will be deployed. | `any` | `[]` | no |
| <a name="input_notebooks_access_control"></a> [notebooks\_access\_control](#input\_notebooks\_access\_control) | Notebook access control. | `any` | `null` | no |
| <a name="input_policy_access_control"></a> [policy\_access\_control](#input\_policy\_access\_control) | Policy access control. | `any` | `null` | no |
| <a name="input_policy_overrides"></a> [policy\_overrides](#input\_policy\_overrides) | Cluster policy overrides. | `any` | `null` | no |
| <a name="input_prjid"></a> [prjid](#input\_prjid) | (Required) Name of the project/stack, e.g. mystack, nifieks, demoaci. Should not be changed after running `terraform apply`. | `string` | n/a | yes |
| <a name="input_remote_notebooks"></a> [remote\_notebooks](#input\_remote\_notebooks) | Path to notebook(s) in the Databricks workspace that will be used by the job. | `any` | `[]` | no |
| <a name="input_retry_on_timeout"></a> [retry\_on\_timeout](#input\_retry\_on\_timeout) | An optional policy specifying whether to retry a job when it times out. The default behavior is to not retry on timeout. | `bool` | `false` | no |
| <a name="input_schedule"></a> [schedule](#input\_schedule) | Job schedule configuration. | `map(any)` | `null` | no |
| <a name="input_spark_conf"></a> [spark\_conf](#input\_spark\_conf) | Map of key-value pairs with custom Spark configuration properties to fine-tune the cluster. | `any` | `null` | no |
| <a name="input_spark_env_vars"></a> [spark\_env\_vars](#input\_spark\_env\_vars) | Map of environment variable key-value pairs to fine-tune Spark clusters. Key-value pairs of the form (X,Y) are exported (i.e., X='Y') while launching the driver and workers. | `any` | `null` | no |
| <a name="input_spark_version"></a> [spark\_version](#input\_spark\_version) | Runtime version of the cluster. Any supported databricks\_spark\_version id. We advise using cluster policies to restrict the list of versions for simplicity while maintaining enough control. | `string` | `null` | no |
| <a name="input_task_parameters"></a> [task\_parameters](#input\_task\_parameters) | Base parameters to be used for each run of this job. | `map(any)` | `{}` | no |
| <a name="input_teamid"></a> [teamid](#input\_teamid) | (Required) Name of the team/group, e.g. devops, dataengineering. Should not be changed after running `terraform apply`. | `string` | n/a | yes |
| <a name="input_timeout"></a> [timeout](#input\_timeout) | An optional timeout applied to each run of this job. The default behavior is to have no timeout. | `number` | `null` | no |
| <a name="input_worker_node_type_id"></a> [worker\_node\_type\_id](#input\_worker\_node\_type\_id) | The node type of the Spark worker. | `string` | `null` | no |

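Putting the inputs together, a hypothetical invocation of the module: the `source` address and all values below are illustrative, and only `prjid` and `teamid` are required.

```hcl
module "databricks_workspace_management" {
  source = "github.com/example-org/terraform-databricks-workspace-management" # hypothetical source

  # The only required inputs:
  prjid  = "mystack"
  teamid = "devops"

  # Everything else is optional and feature-flagged off by default:
  deploy_cluster                  = true
  cluster_name                    = "shared-cluster"
  cluster_autotermination_minutes = 30
  auto_scaling                    = [1, 3] # [min workers, max workers]
}
```
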
## Outputs

| Name | Description |
|------|-------------|
| <a name="output_cluster_id"></a> [cluster\_id](#output\_cluster\_id) | databricks cluster id |
| <a name="output_cluster_name"></a> [cluster\_name](#output\_cluster\_name) | databricks cluster name |
| <a name="output_cluster_policy_id"></a> [cluster\_policy\_id](#output\_cluster\_policy\_id) | databricks cluster policy id |
| <a name="output_databricks_group"></a> [databricks\_group](#output\_databricks\_group) | databricks group name |
| <a name="output_databricks_group_member"></a> [databricks\_group\_member](#output\_databricks\_group\_member) | databricks group members |
| <a name="output_databricks_secret_acl"></a> [databricks\_secret\_acl](#output\_databricks\_secret\_acl) | databricks secret acl |
| <a name="output_databricks_user"></a> [databricks\_user](#output\_databricks\_user) | databricks user name |
| <a name="output_databricks_user_id"></a> [databricks\_user\_id](#output\_databricks\_user\_id) | databricks user id |
| <a name="output_existing_cluster_new_job_existing_notebooks_id"></a> [existing\_cluster\_new\_job\_existing\_notebooks\_id](#output\_existing\_cluster\_new\_job\_existing\_notebooks\_id) | databricks job id |
| <a name="output_existing_cluster_new_job_existing_notebooks_job"></a> [existing\_cluster\_new\_job\_existing\_notebooks\_job](#output\_existing\_cluster\_new\_job\_existing\_notebooks\_job) | databricks job url |
| <a name="output_existing_cluster_new_job_new_notebooks_id"></a> [existing\_cluster\_new\_job\_new\_notebooks\_id](#output\_existing\_cluster\_new\_job\_new\_notebooks\_id) | databricks job id |
| <a name="output_existing_cluster_new_job_new_notebooks_job"></a> [existing\_cluster\_new\_job\_new\_notebooks\_job](#output\_existing\_cluster\_new\_job\_new\_notebooks\_job) | databricks job url |
| <a name="output_instance_profile"></a> [instance\_profile](#output\_instance\_profile) | databricks instance profile ARN |
| <a name="output_new_cluster_new_job_existing_notebooks_id"></a> [new\_cluster\_new\_job\_existing\_notebooks\_id](#output\_new\_cluster\_new\_job\_existing\_notebooks\_id) | databricks job id |
| <a name="output_new_cluster_new_job_existing_notebooks_job"></a> [new\_cluster\_new\_job\_existing\_notebooks\_job](#output\_new\_cluster\_new\_job\_existing\_notebooks\_job) | databricks job url |
| <a name="output_new_cluster_new_job_new_notebooks_id"></a> [new\_cluster\_new\_job\_new\_notebooks\_id](#output\_new\_cluster\_new\_job\_new\_notebooks\_id) | databricks job id |
| <a name="output_new_cluster_new_job_new_notebooks_job"></a> [new\_cluster\_new\_job\_new\_notebooks\_job](#output\_new\_cluster\_new\_job\_new\_notebooks\_job) | databricks job url |
| <a name="output_notebook_url"></a> [notebook\_url](#output\_notebook\_url) | databricks notebook url |
| <a name="output_notebook_url_standalone"></a> [notebook\_url\_standalone](#output\_notebook\_url\_standalone) | databricks notebook url standalone |

<!-- END_TF_DOCS -->
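
Outputs are consumed from the calling configuration with the usual `module.<label>.<output>` references; a sketch, assuming the hypothetical module label used above:

```hcl
output "shared_cluster_id" {
  description = "ID of the Databricks cluster created by the module"
  value       = module.databricks_workspace_management.cluster_id
}

output "deployed_notebook_url" {
  description = "URL of the deployed Databricks notebook"
  value       = module.databricks_workspace_management.notebook_url
}
```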
