
Commit 085ce9a (2 parents: 66fe4c4 + 3820fb7)

Merge remote-tracking branch 'upstream/master' into map-blocks-schema

* upstream/master: (39 commits)
  Pint support for DataArray (pydata#3643)
  Apply blackdoc to the documentation (pydata#4012)
  ensure Variable._repr_html_ works (pydata#3973)
  Fix handling of abbreviated units like msec (pydata#3998)
  full_like: error on non-scalar fill_value (pydata#3979)
  Fix some code quality and bug-risk issues (pydata#3999)
  DOC: add pandas.DataFrame.to_xarray (pydata#3994)
  Better chunking error messages for zarr backend (pydata#3983)
  Silence sphinx warnings (pydata#3990)
  Fix distributed tests on upstream-dev (pydata#3989)
  Add multi-dimensional extrapolation example and mention different behavior of kwargs in interp (pydata#3956)
  keep attrs in interpolate_na (pydata#3970)
  actually use preformatted text in the details summary (pydata#3978)
  facetgrid: Ensure that colormap params are only determined once. (pydata#3915)
  RasterioDeprecationWarning (pydata#3964)
  Empty line missing for DataArray.assign_coords doc (pydata#3963)
  New coords to existing dim (doc) (pydata#3958)
  implement a more threadsafe call to colorbar (pydata#3944)
  Fix wrong order of coordinate converted from pd.series with MultiIndex (pydata#3953)
  Updated list of core developers (pydata#3943)
  ...


63 files changed: +3112 additions, -1199 deletions

.deepsource.toml

Lines changed: 18 additions & 0 deletions

@@ -0,0 +1,18 @@
+version = 1
+
+test_patterns = [
+  "*/tests/**",
+  "*/test_*.py"
+]
+
+exclude_patterns = [
+  "doc/**",
+  "ci/**"
+]
+
+[[analyzers]]
+name = "python"
+enabled = true
+
+[analyzers.meta]
+runtime_version = "3.x.x"

.github/ISSUE_TEMPLATE/bug_report.md

Lines changed: 1 addition & 1 deletion

@@ -27,7 +27,7 @@ assignees: ''
 
 #### Versions
 
-<details><summary>Output of `xr.show_versions()`</summary>
+<details><summary>Output of <tt>xr.show_versions()</tt></summary>
 
 <!-- Paste the output here xr.show_versions() here -->
 

.github/PULL_REQUEST_TEMPLATE.md

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 <!-- Feel free to remove check-list items aren't relevant to your change -->
 
-- [ ] Fixes #xxxx
+- [ ] Closes #xxxx
 - [ ] Tests added
 - [ ] Passes `isort -rc . && black . && mypy . && flake8`
 - [ ] Fully documented, including `whats-new.rst` for all changes and `api.rst` for new API

.pre-commit-config.yaml

Lines changed: 3 additions & 2 deletions

@@ -5,13 +5,14 @@ repos:
     rev: 4.3.21-2
     hooks:
       - id: isort
+        files: .+\.py$
   # https://github.com/python/black#version-control-integration
   - repo: https://github.com/python/black
     rev: stable
     hooks:
      - id: black
-  - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v2.2.3
+  - repo: https://gitlab.com/pycqa/flake8
+    rev: 3.7.9
     hooks:
       - id: flake8
   - repo: https://github.com/pre-commit/mirrors-mypy

azure-pipelines.yml

Lines changed: 11 additions & 10 deletions

@@ -20,6 +20,8 @@ jobs:
         conda_env: py37
       py38:
         conda_env: py38
+      py38-all-but-dask:
+        conda_env: py38-all-but-dask
       py38-upstream-dev:
         conda_env: py38
         upstream_dev: true
@@ -32,16 +34,15 @@ jobs:
   steps:
     - template: ci/azure/unit-tests.yml
 
-# excluded while waiting for https://github.com/conda-forge/libwebp-feedstock/issues/26
-# - job: MacOSX
-#   strategy:
-#     matrix:
-#       py38:
-#         conda_env: py38
-#   pool:
-#     vmImage: 'macOS-10.15'
-#   steps:
-#     - template: ci/azure/unit-tests.yml
+- job: MacOSX
+  strategy:
+    matrix:
+      py38:
+        conda_env: py38
+  pool:
+    vmImage: 'macOS-10.15'
+  steps:
+    - template: ci/azure/unit-tests.yml
 
 - job: Windows
   strategy:

ci/requirements/py38-all-but-dask.yml

Lines changed: 44 additions & 0 deletions

@@ -0,0 +1,44 @@
+name: xarray-tests
+channels:
+  - conda-forge
+dependencies:
+  - python=3.8
+  - black
+  - boto3
+  - bottleneck
+  - cartopy
+  - cdms2
+  - cfgrib
+  - cftime
+  - coveralls
+  - flake8
+  - h5netcdf
+  - h5py
+  - hdf5
+  - hypothesis
+  - isort
+  - lxml  # Optional dep of pydap
+  - matplotlib
+  - mypy=0.761  # Must match .pre-commit-config.yaml
+  - nc-time-axis
+  - netcdf4
+  - numba
+  - numpy
+  - pandas
+  - pint
+  - pip
+  - pseudonetcdf
+  - pydap
+  - pynio
+  - pytest
+  - pytest-cov
+  - pytest-env
+  - rasterio
+  - scipy
+  - seaborn
+  - setuptools
+  - sparse
+  - toolz
+  - zarr
+  - pip:
+      - numbagg

doc/api-hidden.rst

Lines changed: 4 additions & 0 deletions

@@ -18,6 +18,8 @@
    Dataset.any
    Dataset.argmax
    Dataset.argmin
+   Dataset.idxmax
+   Dataset.idxmin
    Dataset.max
    Dataset.min
    Dataset.mean
@@ -160,6 +162,8 @@
    DataArray.any
    DataArray.argmax
    DataArray.argmin
+   DataArray.idxmax
+   DataArray.idxmin
    DataArray.max
    DataArray.min
    DataArray.mean

doc/api.rst

Lines changed: 4 additions & 0 deletions

@@ -181,6 +181,8 @@ Computation
    :py:attr:`~Dataset.any`
    :py:attr:`~Dataset.argmax`
    :py:attr:`~Dataset.argmin`
+   :py:attr:`~Dataset.idxmax`
+   :py:attr:`~Dataset.idxmin`
    :py:attr:`~Dataset.max`
    :py:attr:`~Dataset.mean`
    :py:attr:`~Dataset.median`
@@ -365,6 +367,8 @@ Computation
    :py:attr:`~DataArray.any`
    :py:attr:`~DataArray.argmax`
    :py:attr:`~DataArray.argmin`
+   :py:attr:`~DataArray.idxmax`
+   :py:attr:`~DataArray.idxmin`
    :py:attr:`~DataArray.max`
    :py:attr:`~DataArray.mean`
    :py:attr:`~DataArray.median`

doc/combining.rst

Lines changed: 32 additions & 30 deletions

@@ -4,11 +4,12 @@ Combining data
 --------------
 
 .. ipython:: python
-   :suppress:
+    :suppress:
 
     import numpy as np
     import pandas as pd
     import xarray as xr
+
     np.random.seed(123456)
 
 * For combining datasets or data arrays along a single dimension, see concatenate_.
@@ -28,11 +29,10 @@ that dimension:
 
 .. ipython:: python
 
-    arr = xr.DataArray(np.random.randn(2, 3),
-                       [('x', ['a', 'b']), ('y', [10, 20, 30])])
+    arr = xr.DataArray(np.random.randn(2, 3), [("x", ["a", "b"]), ("y", [10, 20, 30])])
     arr[:, :1]
     # this resembles how you would use np.concatenate
-    xr.concat([arr[:, :1], arr[:, 1:]], dim='y')
+    xr.concat([arr[:, :1], arr[:, 1:]], dim="y")
 
 In addition to combining along an existing dimension, ``concat`` can create a
 new dimension by stacking lower dimensional arrays together:
@@ -41,30 +41,30 @@ new dimension by stacking lower dimensional arrays together:
 
     arr[0]
     # to combine these 1d arrays into a 2d array in numpy, you would use np.array
-    xr.concat([arr[0], arr[1]], 'x')
+    xr.concat([arr[0], arr[1]], "x")
 
 If the second argument to ``concat`` is a new dimension name, the arrays will
 be concatenated along that new dimension, which is always inserted as the first
 dimension:
 
 .. ipython:: python
 
-    xr.concat([arr[0], arr[1]], 'new_dim')
+    xr.concat([arr[0], arr[1]], "new_dim")
 
 The second argument to ``concat`` can also be an :py:class:`~pandas.Index` or
 :py:class:`~xarray.DataArray` object as well as a string, in which case it is
 used to label the values along the new dimension:
 
 .. ipython:: python
 
-    xr.concat([arr[0], arr[1]], pd.Index([-90, -100], name='new_dim'))
+    xr.concat([arr[0], arr[1]], pd.Index([-90, -100], name="new_dim"))
 
 Of course, ``concat`` also works on ``Dataset`` objects:
 
 .. ipython:: python
 
-    ds = arr.to_dataset(name='foo')
-    xr.concat([ds.sel(x='a'), ds.sel(x='b')], 'x')
+    ds = arr.to_dataset(name="foo")
+    xr.concat([ds.sel(x="a"), ds.sel(x="b")], "x")
 
 :py:func:`~xarray.concat` has a number of options which provide deeper control
 over which variables are concatenated and how it handles conflicting variables
@@ -84,16 +84,16 @@ To combine variables and coordinates between multiple ``DataArray`` and/or
 
 .. ipython:: python
 
-    xr.merge([ds, ds.rename({'foo': 'bar'})])
-    xr.merge([xr.DataArray(n, name='var%d' % n) for n in range(5)])
+    xr.merge([ds, ds.rename({"foo": "bar"})])
+    xr.merge([xr.DataArray(n, name="var%d" % n) for n in range(5)])
 
 If you merge another dataset (or a dictionary including data array objects), by
 default the resulting dataset will be aligned on the **union** of all index
 coordinates:
 
 .. ipython:: python
 
-    other = xr.Dataset({'bar': ('x', [1, 2, 3, 4]), 'x': list('abcd')})
+    other = xr.Dataset({"bar": ("x", [1, 2, 3, 4]), "x": list("abcd")})
     xr.merge([ds, other])
 
 This ensures that ``merge`` is non-destructive. ``xarray.MergeError`` is raised
@@ -116,7 +116,7 @@ used in the :py:class:`~xarray.Dataset` constructor:
 
 .. ipython:: python
 
-    xr.Dataset({'a': arr[:-1], 'b': arr[1:]})
+    xr.Dataset({"a": arr[:-1], "b": arr[1:]})
 
 .. _combine:
 
@@ -131,8 +131,8 @@ are filled with ``NaN``. For example:
 
 .. ipython:: python
 
-    ar0 = xr.DataArray([[0, 0], [0, 0]], [('x', ['a', 'b']), ('y', [-1, 0])])
-    ar1 = xr.DataArray([[1, 1], [1, 1]], [('x', ['b', 'c']), ('y', [0, 1])])
+    ar0 = xr.DataArray([[0, 0], [0, 0]], [("x", ["a", "b"]), ("y", [-1, 0])])
+    ar1 = xr.DataArray([[1, 1], [1, 1]], [("x", ["b", "c"]), ("y", [0, 1])])
     ar0.combine_first(ar1)
     ar1.combine_first(ar0)
@@ -152,7 +152,7 @@ variables with new values:
 
 .. ipython:: python
 
-    ds.update({'space': ('space', [10.2, 9.4, 3.9])})
+    ds.update({"space": ("space", [10.2, 9.4, 3.9])})
 
 However, dimensions are still required to be consistent between different
 Dataset variables, so you cannot change the size of a dimension unless you
@@ -170,7 +170,7 @@ syntax:
 
 .. ipython:: python
 
-    ds['baz'] = xr.DataArray([9, 9, 9, 9, 9], coords=[('x', list('abcde'))])
+    ds["baz"] = xr.DataArray([9, 9, 9, 9, 9], coords=[("x", list("abcde"))])
     ds.baz
 
 Equals and identical
@@ -193,16 +193,16 @@ object:
 
 .. ipython:: python
 
-    arr.identical(arr.rename('bar'))
+    arr.identical(arr.rename("bar"))
 
 :py:attr:`~xarray.Dataset.broadcast_equals` does a more relaxed form of equality
 check that allows variables to have different dimensions, as long as values
 are constant along those new dimensions:
 
 .. ipython:: python
 
-    left = xr.Dataset(coords={'x': 0})
-    right = xr.Dataset({'x': [0, 0, 0]})
+    left = xr.Dataset(coords={"x": 0})
+    right = xr.Dataset({"x": [0, 0, 0]})
     left.broadcast_equals(right)
 
 Like pandas objects, two xarray objects are still equal or identical if they have
@@ -231,9 +231,9 @@ coordinates as long as any non-missing values agree or are disjoint:
 
 .. ipython:: python
 
-    ds1 = xr.Dataset({'a': ('x', [10, 20, 30, np.nan])}, {'x': [1, 2, 3, 4]})
-    ds2 = xr.Dataset({'a': ('x', [np.nan, 30, 40, 50])}, {'x': [2, 3, 4, 5]})
-    xr.merge([ds1, ds2], compat='no_conflicts')
+    ds1 = xr.Dataset({"a": ("x", [10, 20, 30, np.nan])}, {"x": [1, 2, 3, 4]})
+    ds2 = xr.Dataset({"a": ("x", [np.nan, 30, 40, 50])}, {"x": [2, 3, 4, 5]})
+    xr.merge([ds1, ds2], compat="no_conflicts")
 
 Note that due to the underlying representation of missing values as floating
 point numbers (``NaN``), variable data type is not always preserved when merging
@@ -273,10 +273,12 @@ datasets into a doubly-nested list, e.g:
 
 .. ipython:: python
 
-    arr = xr.DataArray(name='temperature', data=np.random.randint(5, size=(2, 2)), dims=['x', 'y'])
+    arr = xr.DataArray(
+        name="temperature", data=np.random.randint(5, size=(2, 2)), dims=["x", "y"]
+    )
     arr
     ds_grid = [[arr, arr], [arr, arr]]
-    xr.combine_nested(ds_grid, concat_dim=['x', 'y'])
+    xr.combine_nested(ds_grid, concat_dim=["x", "y"])
 
 :py:func:`~xarray.combine_nested` can also be used to explicitly merge datasets
 with different variables. For example if we have 4 datasets, which are divided
@@ -286,10 +288,10 @@ we wish to use ``merge`` instead of ``concat``:
 
 .. ipython:: python
 
-    temp = xr.DataArray(name='temperature', data=np.random.randn(2), dims=['t'])
-    precip = xr.DataArray(name='precipitation', data=np.random.randn(2), dims=['t'])
+    temp = xr.DataArray(name="temperature", data=np.random.randn(2), dims=["t"])
+    precip = xr.DataArray(name="precipitation", data=np.random.randn(2), dims=["t"])
     ds_grid = [[temp, precip], [temp, precip]]
-    xr.combine_nested(ds_grid, concat_dim=['t', None])
+    xr.combine_nested(ds_grid, concat_dim=["t", None])
 
 :py:func:`~xarray.combine_by_coords` is for combining objects which have dimension
 coordinates which specify their relationship to and order relative to one
@@ -302,8 +304,8 @@ coordinates, not on their position in the list passed to ``combine_by_coords``.
 .. ipython:: python
     :okwarning:
 
-    x1 = xr.DataArray(name='foo', data=np.random.randn(3), coords=[('x', [0, 1, 2])])
-    x2 = xr.DataArray(name='foo', data=np.random.randn(3), coords=[('x', [3, 4, 5])])
+    x1 = xr.DataArray(name="foo", data=np.random.randn(3), coords=[("x", [0, 1, 2])])
+    x2 = xr.DataArray(name="foo", data=np.random.randn(3), coords=[("x", [3, 4, 5])])
     xr.combine_by_coords([x2, x1])
 
 These functions can be used by :py:func:`~xarray.open_mfdataset` to open many
0 commit comments
