Commit a6a1e48

Corrected reference to blockwise to refer to apply_gufunc instead (#5383)
1 parent f9a535c commit a6a1e48

File tree

1 file changed (+3, -3 lines)

doc/examples/apply_ufunc_vectorize_1d.ipynb

Lines changed: 3 additions & 3 deletions
@@ -494,7 +494,7 @@
 "source": [
 "So far our function can only handle numpy arrays. A real benefit of `apply_ufunc` is the ability to easily parallelize over dask chunks _when needed_. \n",
 "\n",
-"We want to apply this function in a vectorized fashion over each chunk of the dask array. This is possible using dask's `blockwise` or `map_blocks`. `apply_ufunc` wraps `blockwise` and asking it to map the function over chunks using `blockwise` is as simple as specifying `dask=\"parallelized\"`. With this level of flexibility we need to provide dask with some extra information: \n",
+"We want to apply this function in a vectorized fashion over each chunk of the dask array. This is possible using dask's `blockwise`, `map_blocks`, or `apply_gufunc`. Xarray's `apply_ufunc` wraps dask's `apply_gufunc` and asking it to map the function over chunks using `apply_gufunc` is as simple as specifying `dask=\"parallelized\"`. With this level of flexibility we need to provide dask with some extra information: \n",
 " 1. `output_dtypes`: dtypes of all returned objects, and \n",
 " 2. `output_sizes`: lengths of any new dimensions. \n",
 " \n",
@@ -711,7 +711,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.6"
+"version": "3.8.10"
 },
 "nbsphinx": {
 "allow_errors": true
@@ -732,5 +732,5 @@
 }
 },
 "nbformat": 4,
-"nbformat_minor": 1
+"nbformat_minor": 4
 }
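
For reference, the sentence changed above describes xarray's `apply_ufunc` with `dask="parallelized"`, which hands the chunk-by-chunk mapping to dask's `apply_gufunc`. Below is a minimal, self-contained sketch of that pattern. It is not the notebook's code: the `interp1d_np` helper, the random toy data, and the `new_lat` dimension are invented for illustration, and passing `output_sizes` through `dask_gufunc_kwargs` assumes a reasonably recent xarray (0.16.1 or later).

import numpy as np
import xarray as xr

def interp1d_np(data, x, xi):
    # Interpolate one 1D slice of `data` (sampled at points `x`) onto the new points `xi`.
    return np.interp(xi, x, data)

# Toy (time, lat) data, chunked along "time" so every chunk holds complete "lat" slices.
da = xr.DataArray(
    np.random.rand(8, 25),
    dims=("time", "lat"),
    coords={"lat": np.linspace(-60, 60, 25)},
).chunk({"time": 2})

newlat = np.linspace(-60, 60, 100)

interped = xr.apply_ufunc(
    interp1d_np,                  # a plain NumPy function
    da,
    da.lat,
    kwargs={"xi": newlat},        # forwarded to interp1d_np unchanged
    input_core_dims=[["lat"], ["lat"]],
    output_core_dims=[["new_lat"]],
    exclude_dims={"lat"},         # "lat" is consumed and does not appear on the output
    vectorize=True,               # loop the 1D function over the remaining "time" dimension
    dask="parallelized",          # map over dask chunks via apply_gufunc
    output_dtypes=[da.dtype],     # 1. dtypes of all returned objects
    dask_gufunc_kwargs={"output_sizes": {"new_lat": newlat.size}},  # 2. lengths of any new dimensions
)

print(interped.compute().sizes)   # expected sizes: time=8, new_lat=100

Because `new_lat` does not appear on any input in this sketch, dask cannot infer its length from the arguments, which is exactly the case where `output_sizes` is needed; `output_dtypes` covers the first piece of extra information the paragraph lists.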
