@@ -28,20 +28,24 @@ The categories below are as follows:
### deprecation
### new features
### improvements
- Support deterministic `torch.nn.Upsample` `mode="trilinear"` backward ([#154239](https://github.com/pytorch/pytorch/pull/154239))
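A minimal sketch of what this enables, assuming a CUDA device (before this change the trilinear upsample backward had no deterministic implementation, so it raised under deterministic mode):

```python
import torch

# Request deterministic kernels; before #154239 the trilinear upsample
# backward would raise here on CUDA for lack of a deterministic path.
torch.use_deterministic_algorithms(True)

x = torch.randn(1, 1, 4, 4, 4, device="cuda", requires_grad=True)
up = torch.nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False)
up(x).sum().backward()
```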
### bug fixes
- Add ownership token when needed on GradientEdge ([#160098](https://github.com/pytorch/pytorch/pull/160098))
- Fix `torch.autograd.Function` memory leak due to `torch.utils.checkpoint` early stopping ([#161171](https://github.com/pytorch/pytorch/pull/161171)); see the sketch after this list
- Fix `torch.autograd.graph.GradientEdge` for `torch.autograd.Function` ([#160098](https://github.com/pytorch/pytorch/pull/160098))
- Match 0-dim gradients device type regardless of subclass-ness ([#160165](https://github.com/pytorch/pytorch/pull/160165))
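A minimal sketch of the checkpoint pattern behind the memory-leak fix above, using a toy `Square` function (hypothetical, for illustration only): tensors saved via `ctx.save_for_backward` under non-reentrant checkpointing with early stopping could previously outlive the backward pass.

```python
import torch
from torch.utils.checkpoint import checkpoint

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)  # saved tensors could leak under early stopping
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out

x = torch.randn(8, requires_grad=True)
y = checkpoint(Square.apply, x, use_reentrant=False)
y.sum().backward()  # saved tensors are released once backward completes
```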

### performance
- Fix SVD forward-mode AD multiplication priority ([#161027](https://github.com/pytorch/pytorch/pull/161027))

### docs
- Improve `torch.inference_mode` docs and error message ([#161164](https://github.com/pytorch/pytorch/pull/161164))
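For context, a short sketch (not taken from the PR) of the behavior the improved docs and error message cover: tensors created under `torch.inference_mode` cannot be used in autograd afterwards.

```python
import torch

with torch.inference_mode():
    t = torch.ones(3)  # t is an inference tensor

try:
    t.requires_grad_(True)  # not allowed on inference tensors
except RuntimeError as e:
    print(e)  # the error message improved by the PR above
```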

### devs
### Untopiced
- Add basic torch.hash_tensor op ([#154149](https://github.com/pytorch/pytorch/pull/154149))
- [autograd] match 0-dim gradients device type regardless of subclassness ([#160165](https://github.com/pytorch/pytorch/pull/160165))
- Fix SVD forward-mode AD multiplication priority ([#161027](https://github.com/pytorch/pytorch/pull/161027))
- [BE] Improve torch.inference_mode docs and error message ([#161164](https://github.com/pytorch/pytorch/pull/161164))
- Clear custom autograd Function ctx.to_save earlier ([#161171](https://github.com/pytorch/pytorch/pull/161171))
- [while_loop][autograd] add hop while_loop_stack_output ([#160467](https://github.com/pytorch/pytorch/pull/160467))
- Remove guard_size_oblivious from default contiguity python check, and add aten.sym_is_contiguous. [attempt2] ([#160869](https://github.com/pytorch/pytorch/pull/160869))
### not user facing
- [autograd] torch._C._set_view_replay_enabled state leaking into other tests ([#159840](https://github.com/pytorch/pytorch/pull/159840))
- Add new parameter for gen_pyi.py to make it more configurable. ([#161772](https://github.com/pytorch/pytorch/pull/161772))
- Remove guard_size_oblivious from default contiguity python check, and add aten.sym_is_contiguous. [attempt2] ([#160869](https://github.com/pytorch/pytorch/pull/160869))
- [while_loop][autograd] add hop while_loop_stack_output ([#160467](https://github.com/pytorch/pytorch/pull/160467))

### security
2.9.0/miscategorized.md (6 changes: 3 additions & 3 deletions)
@@ -25,9 +25,6 @@ StableABI:
- Enable generating generic c_shim that doesn't bypass dispatcher ([#158974](https://github.com/pytorch/pytorch/pull/158974))
- Cut a version of TORCH_ERROR_CODE_CHECK in headeronly from AOTI ([#159604](https://github.com/pytorch/pytorch/pull/159604))

Autograd:
- Support deterministic upsample trilinear backward ([#154239](https://github.com/pytorch/pytorch/pull/154239))

CUDA:
- [CUDA] Fix missing `__syncthreads` in MultiMarginLoss backward ([#158994](https://github.com/pytorch/pytorch/pull/158994))

@@ -40,6 +37,9 @@ MPS:
- Add `avg_pool3d` backward pass for MPS ([#159089](https://github.com/pytorch/pytorch/pull/159089))
- Enable _int_mm on Intel GPU ([#157769](https://github.com/pytorch/pytorch/pull/157769))

Python Frontend:
- Add basic torch.hash_tensor op ([#154149](https://github.com/pytorch/pytorch/pull/154149))
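A usage sketch; the exact signature and any reduction modes are assumptions based only on the PR title, which describes a basic hashing op:

```python
import torch

t = torch.arange(8)
h = torch.hash_tensor(t)  # reduce the tensor's contents to a single hash value
print(h)
```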

## not user facing
- Fix Pandas version mismatch upon reinstalling numpy ([#158584](https://github.com/pytorch/pytorch/pull/158584))
- [CUDA-13] Implement workaround for cudaErrorNotSupported ([#162412](https://github.com/pytorch/pytorch/pull/162412))