Commit 038a5de ("Update readme"), committed May 12, 2022, parent be129e9

File tree: 2 files changed, +8 −6 lines
‎.gitignore (+2 −1)

```diff
@@ -129,4 +129,5 @@ dmypy.json
 .pyre/
 
 # raw data
-data/
+data/
+.DS_Store
```

‎README.md (+6 −5)

```diff
@@ -85,7 +85,7 @@ Hence, we study different variants of extracting the diagonal and measure their
 
 ### Details
 
-1. [`torch.diagonal`](https://pytorch.org/docs/stable/generated/torch.diagonal.html) on the dense
+1. **Torch**: [`torch.diagonal`](https://pytorch.org/docs/stable/generated/torch.diagonal.html) on the dense
    version of the matrix, obtained via
    [`torch.Tensor.to_dense`](https://pytorch.org/docs/stable/generated/torch.Tensor.to_dense.html).
    Due to materializing the dense matrix, this method requires a large amount of memory (O(n^2)).
```
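As a runnable end-to-end sketch of this dense variant (the small 3×3 matrix below is made-up demonstration data, not taken from the repository):

```python
import torch

# Hypothetical sparse COO matrix for illustration only
indices = torch.tensor([[0, 1, 2, 0],
                        [0, 1, 2, 2]])
values = torch.tensor([1.0, 2.0, 3.0, 4.0])
matrix = torch.sparse_coo_tensor(indices, values, (3, 3))

# Variant 1: materialize the dense matrix (O(n^2) memory), then take its diagonal
d = torch.diagonal(matrix.to_dense())
```

This is the simplest variant, but the `to_dense()` call allocates the full n×n matrix, which is what the O(n^2) memory note above refers to.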
```diff
@@ -95,7 +95,7 @@ Hence, we study different variants of extracting the diagonal and measure their
    d = torch.diagonal(matrix.to_dense())
    ```
 
-2. Python for-loop, and item access. Due to using a Python loop, this variant is likely to be
+2. **Python for-loop and item access**: Due to using a Python loop, this variant is likely to be
    inefficient. Moreover, it is only applicable to the COO format and fails for CSR adjacency
    matrices (cf. details collapsible).
 
```
```diff
@@ -145,7 +145,7 @@ Hence, we study different variants of extracting the diagonal and measure their
 
 </details>
 
-3. Python for-loop (CSR): Here, we iterate over the rows, select the corresponding column indices, and
+3. **Python for-loop (CSR)**: Here, we iterate over the rows, select the corresponding column indices, and
    determine whether one of them corresponds to the diagonal entry. In that case, we select the
    corresponding value and copy it into the result.
 
```
```diff
@@ -165,7 +165,7 @@ Hence, we study different variants of extracting the diagonal and measure their
    d[i] = v
    ```
 
-4. Coalesce: In this variant, we perform an element-wise multiplication with a sparse identity
+4. **Coalesce**: In this variant, we perform an element-wise multiplication with a sparse identity
    matrix, and then use [`torch.Tensor.values`](https://pytorch.org/docs/stable/generated/torch.Tensor.values.html)
    and [`torch.Tensor.indices`](https://pytorch.org/docs/stable/generated/torch.Tensor.indices.html)
    to obtain the values and indices of the non-zero elements. This operation only supports the COO format.
```
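A runnable sketch of this coalesce variant, under the assumption that element-wise multiplication of two coalesced sparse COO tensors is available; the example matrix is hypothetical:

```python
import torch

# Hypothetical COO matrix for illustration
indices = torch.tensor([[0, 1, 2, 0],
                        [0, 1, 2, 2]])
values = torch.tensor([1.0, 2.0, 3.0, 4.0])
matrix = torch.sparse_coo_tensor(indices, values, (3, 3)).coalesce()

# Variant 4: mask with a sparse identity matrix, coalesce, and scatter the result
n = matrix.shape[0]
eye = torch.sparse_coo_tensor(torch.arange(n).repeat(2, 1), torch.ones(n), (n, n))
masked = (matrix * eye).coalesce()          # only diagonal entries survive
d = torch.zeros(n)
d[masked.indices()[0]] = masked.values()    # row index == column index here
```

The multiplication with the identity zeroes out all off-diagonal entries, so after coalescing, `indices()` and `values()` describe exactly the diagonal.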
```diff
@@ -182,11 +182,12 @@ Hence, we study different variants of extracting the diagonal and measure their
    d[indices] = values
    ```
 
-6. Manual Coalesce: Since the most expensive part of the previous solution is the coalesce step,
+5. **Manual Coalesce**: Since the most expensive part of the previous solution is the coalesce step,
    we investigate a variant which directly operates on the raw, uncoalesced `_values` and `_indices`.
    Hereby, we avoid having to coalesce non-diagonal entries. Since we operate on the uncoalesced
    view, there may be multiple entries for the same index, which need to be aggregated via
    [`scatter_add_`](https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add_.html).
+
    ```python
    n = matrix.shape[0]
    d = torch.zeros(n, device=matrix.device)
```
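The manual-coalesce variant can be sketched end to end as below; the deliberately uncoalesced example matrix (with a duplicated `(0, 0)` entry) is made up to show why the aggregation step is needed:

```python
import torch

# Hypothetical, deliberately uncoalesced matrix: the (0, 0) entry appears twice
indices = torch.tensor([[0, 0, 1, 2],
                        [0, 0, 1, 2]])
values = torch.tensor([0.5, 0.5, 2.0, 3.0])
matrix = torch.sparse_coo_tensor(indices, values, (3, 3))

# Variant 5: read the raw entries and aggregate duplicates ourselves
idx = matrix._indices()          # raw indices; duplicates possible
val = matrix._values()
mask = idx[0] == idx[1]          # keep only diagonal entries
n = matrix.shape[0]
d = torch.zeros(n, device=matrix.device)
d.scatter_add_(0, idx[0][mask], val[mask])   # sums duplicate diagonal entries
```

Filtering with the mask first means only diagonal entries are ever aggregated, which is exactly the saving over coalescing the full matrix.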

0 commit comments