Databricks: REST API

This post covers how to communicate with Azure Databricks using its REST API.

Databricks Resource ID = 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d (this application ID is the same for every Azure Databricks workspace)

Get Bearer Token for Service Principal

  curl -X POST https://login.microsoftonline.com/<TENANTID>/oauth2/token -H 'Content-Type: application/x-www-form-urlencoded' -d 'grant_type=client_credentials&client_id=<CLIENTID>&resource=2ff814a6-3304-4ab8-85cb-cd0e6f879c1d&client_secret=<SECRET>'
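The token endpoint returns JSON with the token in the access_token field. A minimal sketch for capturing it into a shell variable, assuming jq is installed:

  # Request a token and pull access_token out of the JSON response (requires jq).
  TOKEN=$(curl -s -X POST https://login.microsoftonline.com/<TENANTID>/oauth2/token \
    -H 'Content-Type: application/x-www-form-urlencoded' \
    -d 'grant_type=client_credentials&client_id=<CLIENTID>&resource=2ff814a6-3304-4ab8-85cb-cd0e6f879c1d&client_secret=<SECRET>' \
    | jq -r '.access_token')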

Get Bearer Token for Service Principal Using management.core.windows.net

  curl -X POST https://login.microsoftonline.com/<TENANTID>/oauth2/token -H 'Content-Type: application/x-www-form-urlencoded' -d 'grant_type=client_credentials&client_id=<CLIENTID>&resource=https://management.core.windows.net/&client_secret=<SECRET>'
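This management-plane token is typically needed when calling the Databricks API as a service principal that is not yet a workspace user; it is passed alongside the Databricks token in dedicated headers. A sketch, assuming the standard X-Databricks-Azure-* header names and a hypothetical workspace resource path:

  # Both tokens are sent: the Databricks token as the bearer, the management token in its own header.
  curl -X GET https://<DATABRICKS_URL>/api/2.0/clusters/list \
    -H 'Authorization: Bearer <DATABRICKS_TOKEN>' \
    -H 'X-Databricks-Azure-SP-Management-Token: <MGMT_TOKEN>' \
    -H 'X-Databricks-Azure-Workspace-Resource-Id: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Databricks/workspaces/<WORKSPACE_NAME>'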

Start Cluster

  curl --location --request POST -H 'Authorization: Bearer <TOKEN>' https://<DATABRICKS_URL>/api/2.0/clusters/start -d '{ "cluster_id": "<CLUSTER_ID>" }'

Stop Cluster

  curl --location --request POST -H 'Authorization: Bearer <TOKEN>' https://<DATABRICKS_URL>/api/2.0/clusters/stop -d '{ "cluster_id": "<CLUSTER_ID>" }'

List Clusters

  curl --location --request GET -H 'Authorization: Bearer <TOKEN>' https://<DATABRICKS_URL>/api/2.0/clusters/list
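The cluster IDs needed by the start/stop calls can be pulled straight from the list response. A sketch, assuming jq is installed:

  # Print the id and current state of each cluster (requires jq).
  curl -s -H 'Authorization: Bearer <TOKEN>' https://<DATABRICKS_URL>/api/2.0/clusters/list \
    | jq -r '.clusters[] | "\(.cluster_id)  \(.state)"'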

Job List

  curl --location --request GET -H 'Authorization: Bearer <TOKEN>' https://<DATABRICKS_URL>/api/2.0/jobs/list

Job Python Run

  curl --location --request POST -H 'Authorization: Bearer <TOKEN>' https://<DATABRICKS_URL>/api/2.0/jobs/run-now -d '{ "job_id": <JOB_ID>, "python_params": [] }'
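The python_params values are handed to the task as positional arguments. A sketch with two hypothetical argument values:

  # Trigger the job, passing "--env dev" to the task (argument values are illustrative).
  curl --location --request POST -H 'Authorization: Bearer <TOKEN>' \
    https://<DATABRICKS_URL>/api/2.0/jobs/run-now \
    -d '{ "job_id": <JOB_ID>, "python_params": ["--env", "dev"] }'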

Job Run Get

  curl --location --request GET -H 'Authorization: Bearer <TOKEN>' 'https://<DATABRICKS_URL>/api/2.0/jobs/runs/get?run_id=<JOB_RUN_ID>'
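The run's status is nested under the state object in the response. A minimal polling sketch, assuming jq is installed:

  # Poll the run every 30 seconds until it leaves PENDING/RUNNING (requires jq).
  while true; do
    STATE=$(curl -s -H 'Authorization: Bearer <TOKEN>' \
      'https://<DATABRICKS_URL>/api/2.0/jobs/runs/get?run_id=<JOB_RUN_ID>' | jq -r '.state.life_cycle_state')
    echo "life_cycle_state: $STATE"
    [ "$STATE" != "PENDING" ] && [ "$STATE" != "RUNNING" ] && break
    sleep 30
  done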

Create Job

  curl --location --request POST -H 'Authorization: Bearer <TOKEN>' https://<DATABRICKS_URL>/api/2.0/jobs/create -d '<PAYLOAD>'

Create Job Payload

  {
    "name": "<NAME>",
    "max_concurrent_runs": 1,
    "tasks": [
      {
        "task_key": "<TASK_KEY>",
        "run_if": "ALL_SUCCESS",
        "max_retries": 1,
        "timeout_seconds": <TIMEOUT_SECONDS>,
        "notebook_task": {
          "notebook_path": "<PATH>",
          "source": "WORKSPACE",
          "base_parameters": {
            "<KEY>": "<VALUE>",
            "<KEY2>": "<VALUE2>"
          }
        },
        "libraries": [
          {
            "pypi": {
              "package": "<PACKAGE_NAME==VERSION>"
            }
          },
          {
            "jar": "<LOCATION>"
          }
        ],
        "new_cluster": {
          "custom_tags": {
            "<TAG_NAME>": "<TAG_VALUE>"
          },
          "azure_attributes": {
            "first_on_demand": 1,
            "availability": "SPOT_AZURE",
            "spot_bid_max_price": 75
          },
          "instance_pool_id": "<WORKER_INSTANCE_POOL_ID>",
          "driver_instance_pool_id": "<DRIVER_INSTANCE_POOL_ID>",
          "data_security_mode": "SINGLE_USER",
          "spark_version": "<SPARK_VERSION>",
          "node_type_id": "<NODE_TYPE_ID>",
          "runtime_engine": "STANDARD",
          "policy_id": "<POLICY_ID>",
          "autoscale": {
            "min_workers": <MIN_WORKERS>,
            "max_workers": <MAX_WORKERS>
          },
          "spark_conf": {
            "<CONFIG_KEY>": "<CONFIG_VALUE>"
          },
          "cluster_log_conf": {
            "dbfs": {
              "destination": "<LOG_DESTINATION>"
            }
          },
          "spark_env_vars": {
            "<ENV_NAME>": "<ENV_VALUE>"
          },
          "init_scripts": [
            {
              "volumes": {
                "destination": "<INIT_SCRIPT_LOCATION>"
              }
            }
          ]
        }
      }
    ],
    "format": "SINGLE_TASK"
  }
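Rather than escaping the payload inline, save it to a file and reference it with curl's @ syntax (job.json is a hypothetical filename):

  # Create the job from a payload stored on disk.
  curl --location --request POST -H 'Authorization: Bearer <TOKEN>' \
    -H 'Content-Type: application/json' \
    https://<DATABRICKS_URL>/api/2.0/jobs/create -d @job.json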

Job Permission Patch

  curl --location --request PATCH -H 'Authorization: Bearer <TOKEN>' https://<DATABRICKS_URL>/api/2.0/permissions/jobs/<JOB_ID> -d '{ "access_control_list": [{ "group_name": "<GROUP_NAME>", "permission_level": "<PERMISSION>" }] }'

Get Service Principal List

  curl -X GET -H 'Authorization: Bearer <TOKEN>' https://<DATABRICKS_URL>/api/2.0/preview/scim/v2/ServicePrincipals

Delete Service Principal From Databricks Only

The path parameter is the id field returned by the list call above, not the Azure application ID.

  curl --location --request DELETE -H 'Authorization: Bearer <TOKEN>' https://<DATABRICKS_URL>/api/2.0/preview/scim/v2/ServicePrincipals/<SCIM_ID>

Add Service Principal To Databricks

  curl --location --request POST 'https://<DATABRICKS_URL>/api/2.0/preview/scim/v2/ServicePrincipals' --header 'Authorization: Bearer <TOKEN>' --header 'Content-Type: application/json' --data-raw '{ "schemas": ["urn:ietf:params:scim:schemas:core:2.0:ServicePrincipal"], "applicationId": "<CLIENTID>", "displayName": "<DISPLAYNAME>", "groups": [{ "value": "<GROUP_ID>" }], "entitlements": [{ "value": "allow-cluster-create" }] }'

List Secret Scopes

  curl --location --request GET -H 'Authorization: Bearer <TOKEN>' https://<DATABRICKS_URL>/api/2.0/secrets/scopes/list

Create KeyVault Secret Scope

  curl --location --request POST -H 'Authorization: Bearer <TOKEN>' https://<DATABRICKS_URL>/api/2.0/secrets/scopes/create -d '{ "scope": "<KEYVAULT_NAME>", "scope_backend_type": "AZURE_KEYVAULT", "backend_azure_keyvault": { "resource_id": "<RESOURCE_ID>", "dns_name": "<KEYVAULT_URL>" }, "initial_manage_principal": "users" }'

IP Access Lists

  curl -X GET -H 'Authorization: Bearer <TOKEN>' https://<DATABRICKS_URL>/api/2.0/ip-access-lists

List Git Repos

  curl --location --request GET -H 'Authorization: Bearer <TOKEN>' https://<DATABRICKS_URL>/api/2.0/repos

Update Git Repo

  curl --location --request PATCH -H 'Authorization: Bearer <TOKEN>' https://<DATABRICKS_URL>/api/2.0/repos/<REPO_ID> -d '{ "branch": "<BRANCH_NAME>" }'
Postgres: Tables

Below are some common statements for table creation and maintenance.

Table Creation:

  CREATE TABLE public.mytable (
      id bigserial PRIMARY KEY,
      text_column varchar NOT NULL,
      int_column integer NOT NULL,
      date_column timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
  );
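These statements can be run from a file or inline with psql; a minimal sketch, assuming a local server and a hypothetical database named mydb:

  # Run the DDL above from a file, then check the result with an inline statement.
  psql -h localhost -U myUser -d mydb -f create_mytable.sql
  psql -h localhost -U myUser -d mydb -c 'SELECT COUNT(*) FROM public.mytable;'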

Create Schema:

  CREATE SCHEMA IF NOT EXISTS test;

Create Schema with Authorization:

  CREATE SCHEMA IF NOT EXISTS test AUTHORIZATION myUser;

Drop Schema Cascade:

  DROP SCHEMA IF EXISTS test CASCADE;

Comment On Table:

  COMMENT ON TABLE public.mytable IS 'A list of data.';

Vacuum:
VACUUM accepts several options; it is best to review them before running.

  VACUUM (ANALYZE, VERBOSE);

Drop Constraint:

  ALTER TABLE mytable DROP CONSTRAINT mytable_id_pkey;

Add Constraint:

  ALTER TABLE public.mytable ADD CONSTRAINT mytable_id_pkey PRIMARY KEY (id);

Rename Constraint:

  ALTER TABLE mytable RENAME CONSTRAINT mytable_id2_fkey TO mytable_id3_fkey;

Rename Table Column:

  ALTER TABLE mytable RENAME COLUMN text_column TO text_column2;

Rename Table:

  ALTER TABLE mytable RENAME TO mytable2;

Drop Table:

  DROP TABLE public.mytable;

Add Column to Table:

  ALTER TABLE public.mytable ADD COLUMN column_name boolean NOT NULL DEFAULT false;

Alter Column Data Type Json:

  ALTER TABLE public.mytable ALTER COLUMN json_col TYPE json USING (json_col::json);

Rename Sequence:

  ALTER SEQUENCE mytable_id_seq RENAME TO mytable2_id_seq;

Sequence Table Owner:

  ALTER SEQUENCE mytable_id_seq OWNED BY mytable.id;

Sequence Next Value:

  ALTER TABLE mytable ALTER COLUMN id SET DEFAULT nextval('mytable_id_seq');

Add Foreign Key:

  ALTER TABLE mytable ADD FOREIGN KEY (foreign_id) REFERENCES public.mytable2 (foreign_id);

Create Index Json:

  CREATE INDEX mytable_json_idx ON public.mytable ((data->'Key'->'Key'->>'value'));
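An expression index is only used by queries that repeat the same expression. A hypothetical lookup this index can serve, run through psql:

  # The WHERE clause matches the indexed expression exactly, so the planner can use it.
  psql -d mydb -c "SELECT * FROM public.mytable WHERE data->'Key'->'Key'->>'value' = '<VALUE>';"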

Create Index:

  CREATE INDEX mytable_idx ON public.mytable (id);

Drop Index:

  DROP INDEX public.mytable_idx;

Re-Cluster Table:

  CLUSTER mytable USING mytable_pkey;

Trigger:

  CREATE TRIGGER tg_mytrigger BEFORE UPDATE OF my_column OR INSERT ON public.mytable FOR EACH ROW EXECUTE PROCEDURE public.mytablestrigger();
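The trigger above assumes the function public.mytablestrigger() already exists. A minimal plpgsql sketch of such a function, with an illustrative body that stamps the row's timestamp column:

  # Create the trigger function inline via psql (function body is illustrative).
  psql -d mydb -c '
  CREATE OR REPLACE FUNCTION public.mytablestrigger()
  RETURNS trigger AS $$
  BEGIN
      -- Keep date_column current on every insert/update of the row.
      NEW.date_column := CURRENT_TIMESTAMP;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;'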