
Commit d09aa07

[HWORKS-2322] Add defaultArgs field to job docs and clarify nested fields (#511)
1 parent 147b097 commit d09aa07

6 files changed (+126, -72 lines)

.github/workflows/mkdocs-release.yml

Lines changed: 4 additions & 0 deletions
@@ -4,6 +4,10 @@ on:
   push:
     branches: [branch-*\.*]
 
+concurrency:
+  group: ${{ github.workflow }}
+  cancel-in-progress: false
+
 jobs:
   publish-release:
     runs-on: ubuntu-latest

docs/user_guides/projects/jobs/notebook_job.md

Lines changed: 22 additions & 14 deletions
@@ -6,13 +6,18 @@ description: Documentation on how to configure and execute a Jupyter Notebook jo
 
 ## Introduction
 
+This guide describes how to configure a job to execute a Jupyter Notebook (.ipynb) and visualize the evaluated notebook.
+
 All members of a project in Hopsworks can launch the following types of applications through a project's Jobs service:
 
 - Python
 - Apache Spark
+- Ray
 
 Launching a job of any type is a very similar process; what mostly differs between job types is
-the various configuration parameters each job type comes with. After following this guide you will be able to create a Jupyter Notebook job.
+the various configuration parameters each job type comes with. Hopsworks supports scheduling jobs to run on a regular basis,
+e.g. backfilling a Feature Group by running your feature engineering pipeline nightly. Scheduling can be done both through the UI and the Python API;
+check out [our Scheduling guide](schedule_job.md).
 
 ## UI
 
@@ -173,19 +178,22 @@ execution = job.run(args='-p a 2 -p b 5', await_termination=True)
 ```
 
 ## Configuration
-The following table describes the JSON payload returned by `jobs_api.get_configuration("PYTHON")`
-
-| Field | Type | Description | Default |
-|-------|------|-------------|---------|
-| `type` | string | Type of the job configuration | `"pythonJobConfiguration"` |
-| `appPath` | string | Project path to notebook (e.g `Resources/foo.ipynb`) | `null` |
-| `environmentName` | string | Name of the python environment | `"pandas-training-pipeline"` |
-| `resourceConfig.cores` | number (float) | Number of CPU cores to be allocated | `1.0` |
-| `resourceConfig.memory` | number (int) | Number of MBs to be allocated | `2048` |
-| `resourceConfig.gpus` | number (int) | Number of GPUs to be allocated | `0` |
-| `logRedirection` | boolean | Whether logs are redirected | `true` |
-| `jobType` | string | Type of job | `"PYTHON"` |
-| `files` | string | HDFS path(s) to files to be provided to the Notebook Job. Multiple files can be included in a single string, separated by commas. <br>Example: `"hdfs:///Project/<project_name>/Resources/file1.py,hdfs:///Project/<project_name>/Resources/file2.txt"` | `null` |
+The following table describes the job configuration parameters for a PYTHON job.
+
+`conf = jobs_api.get_configuration("PYTHON")`
+
+| Field | Type | Description | Default |
+|-------|------|-------------|---------|
+| <nobr>`conf['type']`</nobr> | string | Type of the job configuration | `"pythonJobConfiguration"` |
+| <nobr>`conf['appPath']`</nobr> | string | Project relative path to notebook (e.g., `Resources/foo.ipynb`) | `null` |
+| <nobr>`conf['defaultArgs']`</nobr> | string | Arguments to pass to the notebook.<br>Will be overridden if arguments are passed explicitly via `Job.run(args="...")`.<br>Must conform to Papermill format `-p arg1 val1` | `null` |
+| <nobr>`conf['environmentName']`</nobr> | string | Name of the project Python environment to use | `"pandas-training-pipeline"` |
+| <nobr>`conf['resourceConfig']['cores']`</nobr> | float | Number of CPU cores to be allocated | `1.0` |
+| <nobr>`conf['resourceConfig']['memory']`</nobr> | int | Number of MBs to be allocated | `2048` |
+| <nobr>`conf['resourceConfig']['gpus']`</nobr> | int | Number of GPUs to be allocated | `0` |
+| <nobr>`conf['logRedirection']`</nobr> | boolean | Whether logs are redirected | `true` |
+| <nobr>`conf['jobType']`</nobr> | string | Type of job | `"PYTHON"` |
+| <nobr>`conf['files']`</nobr> | string | Comma-separated string of HDFS path(s) to files to be made available to the application. Example: `hdfs:///Project/<project>/Resources/file1.py,...` | `null` |
 
 
 ## Accessing project data
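
As a rough sketch of how the PYTHON configuration documented above, including the new `defaultArgs` field, might be used from the Hopsworks Python client (the notebook path, job name, and argument values are illustrative assumptions, not taken from this commit):

```python
import hopsworks

project = hopsworks.login()
jobs_api = project.get_jobs_api()

# Start from the default PYTHON job configuration and override the documented fields.
conf = jobs_api.get_configuration("PYTHON")
conf['appPath'] = "Resources/foo.ipynb"        # notebook already uploaded to the project
conf['defaultArgs'] = "-p a 2 -p b 5"          # Papermill-style defaults, used when run() gets no args
conf['resourceConfig']['cores'] = 2.0          # resource fields are nested under resourceConfig
conf['resourceConfig']['memory'] = 4096

job = jobs_api.create_job("notebook_job_sketch", conf)

# Explicit arguments override defaultArgs for this execution.
execution = job.run(args="-p a 10 -p b 20", await_termination=True)
```

Calling `run()` without `args` would fall back to `defaultArgs`, as described in the table above.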

docs/user_guides/projects/jobs/pyspark_job.md

Lines changed: 27 additions & 21 deletions
@@ -6,10 +6,13 @@ description: Documentation on how to configure and execute a PySpark job on Hops
 
 ## Introduction
 
+This guide will describe how to configure a job to execute a PySpark script inside the cluster.
+
 All members of a project in Hopsworks can launch the following types of applications through a project's Jobs service:
 
 - Python
 - Apache Spark
+- Ray
 
 Launching a job of any type is a very similar process; what mostly differs between job types is
 the various configuration parameters each job type comes with. Hopsworks clusters support scheduling to run jobs on a regular basis,
216219
```
217220

218221
## Configuration
219-
The following table describes the JSON payload returned by `jobs_api.get_configuration("PYSPARK")`
220-
221-
| Field | Type | Description | Default |
222-
| ------------------------------------------ | -------------- |-----------------------------------------------------| -------------------------- |
223-
| `type` | string | Type of the job configuration | `"sparkJobConfiguration"` |
224-
| `appPath` | string | Project path to script (e.g `Resources/foo.py`) | `null` |
225-
| `environmentName` | string | Name of the project spark environment | `"spark-feature-pipeline"` |
226-
| `spark.driver.cores` | number (float) | Number of CPU cores allocated for the driver | `1.0` |
227-
| `spark.driver.memory` | number (int) | Memory allocated for the driver (in MB) | `2048` |
228-
| `spark.executor.instances` | number (int) | Number of executor instances | `1` |
229-
| `spark.executor.cores` | number (float) | Number of CPU cores per executor | `1.0` |
230-
| `spark.executor.memory` | number (int) | Memory allocated per executor (in MB) | `4096` |
231-
| `spark.dynamicAllocation.enabled` | boolean | Enable dynamic allocation of executors | `true` |
232-
| `spark.dynamicAllocation.minExecutors` | number (int) | Minimum number of executors with dynamic allocation | `1` |
233-
| `spark.dynamicAllocation.maxExecutors` | number (int) | Maximum number of executors with dynamic allocation | `2` |
234-
| `spark.dynamicAllocation.initialExecutors` | number (int) | Initial number of executors with dynamic allocation | `1` |
235-
| `spark.blacklist.enabled` | boolean | Whether executor/node blacklisting is enabled | `false` |
236-
| `files` | string | HDFS path(s) to files to be provided to the Spark application. Multiple files can be included in a single string, separated by commas. <br>Example: `"hdfs:///Project/<project_name>/Resources/file1.py,hdfs:///Project/<project_name>/Resources/file2.txt"` | `null` |
237-
| `pyFiles` | string | HDFS path(s) to Python files to be provided to the Spark application. These will be added to the `PYTHONPATH` so they can be imported as modules. Multiple files can be included in a single string, separated by commas. <br>Example: `"hdfs:///Project/<project_name>/Resources/module1.py,hdfs:///Project/<project_name>/Resources/module2.py"` | `null` |
238-
| `jars` | string | HDFS path(s) to JAR files to be provided to the Spark application. These will be added to the classpath. Multiple files can be included in a single string, separated by commas. <br>Example: `"hdfs:///Project/<project_name>/Resources/lib1.jar,hdfs:///Project/<project_name>/Resources/lib2.jar"` | `null` |
239-
| `archives` | string | HDFS path(s) to archive files to be provided to the Spark application. Multiple files can be included in a single string, separated by commas. <br>Example: `"hdfs:///Project/<project_name>/Resources/archive1.zip,hdfs:///Project/<project_name>/Resources/archive2.tar.gz"` | `null` |
222+
The following table describes the job configuration parameters for a PYSPARK job.
223+
224+
`conf = jobs_api.get_configuration("PYSPARK")`
225+
226+
| Field | Type | Description | Default |
227+
|----------------------------------------------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------|
228+
| <nobr>`conf['type']`</nobr> | string | Type of the job configuration | `"sparkJobConfiguration"` |
229+
| <nobr>`conf['appPath']`</nobr> | string | Project path to spark program (e.g `Resources/foo.py`) | `null` |
230+
| <nobr>`conf['defaultArgs']`</nobr> | string | Arguments to pass to the program. Will be overridden if arguments are passed explicitly via `Job.run(args="...")` | `null` |
231+
| <nobr>`conf['environmentName']`</nobr> | string | Name of the project spark environment to use | `"spark-feature-pipeline"` |
232+
| <nobr>`conf['spark.driver.cores']`</nobr> | float | Number of CPU cores allocated for the driver | `1.0` |
233+
| <nobr>`conf['spark.driver.memory']`</nobr> | int | Memory allocated for the driver (in MB) | `2048` |
234+
| <nobr>`conf['spark.executor.instances']`</nobr> | int | Number of executor instances | `1` |
235+
| <nobr>`conf['spark.executor.cores']`</nobr> | float | Number of CPU cores per executor | `1.0` |
236+
| <nobr>`conf['spark.executor.memory']`</nobr> | int | Memory allocated per executor (in MB) | `4096` |
237+
| <nobr>`conf['spark.dynamicAllocation.enabled']`</nobr> | boolean | Enable dynamic allocation of executors | `true` |
238+
| <nobr>`conf['spark.dynamicAllocation.minExecutors']`</nobr> | int | Minimum number of executors with dynamic allocation | `1` |
239+
| <nobr>`conf['spark.dynamicAllocation.maxExecutors']`</nobr> | int | Maximum number of executors with dynamic allocation | `2` |
240+
| <nobr>`conf['spark.dynamicAllocation.initialExecutors']`</nobr> | int | Initial number of executors with dynamic allocation | `1` |
241+
| <nobr>`conf['spark.blacklist.enabled']`</nobr> | boolean | Whether executor/node blacklisting is enabled | `false` |
242+
| <nobr>`conf['files']`</nobr> | string | Comma-separated string of HDFS path(s) to files to be made available to the application. Example: `hdfs:///Project/<project_name>/Resources/file1.py,...` | `null` |
243+
| <nobr>`conf['pyFiles']`</nobr> | string | Comma-separated string of HDFS path(s) to python modules to be made available to the application. Example: `hdfs:///Project/<project_name>/Resources/file1.py,...` | `null` |
244+
| <nobr>`conf['jars']`</nobr> | string | Comma-separated string of HDFS path(s) to jars to be included in CLASSPATH. Example: `hdfs:///Project/<project_name>/Resources/app.jar,...` | `null` |
245+
| <nobr>`conf['archives']`</nobr> | string | Comma-separated string of HDFS path(s) to archives to be made available to the application. Example: `hdfs:///Project/<project_name>/Resources/archive.zip,...` | `null` |
240246

241247

242248
## Accessing project data
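
Similarly, a minimal sketch of using the PYSPARK configuration documented above, where the Spark properties are flat, dotted keys of the configuration dict (the script path, job name, and values are illustrative assumptions):

```python
import hopsworks

project = hopsworks.login()
jobs_api = project.get_jobs_api()

# Start from the default PYSPARK job configuration and override selected fields.
conf = jobs_api.get_configuration("PYSPARK")
conf['appPath'] = "Resources/foo.py"                  # PySpark program in the project
conf['defaultArgs'] = "--rows 1000"                   # used when run() is called without args
conf['spark.executor.instances'] = 2                  # Spark properties are flat, dotted keys
conf['spark.executor.memory'] = 8192
conf['spark.dynamicAllocation.maxExecutors'] = 4

job = jobs_api.create_job("pyspark_job_sketch", conf)
execution = job.run(await_termination=True)           # falls back to defaultArgs
```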
