@@ -85,7 +85,7 @@ Hence, we study different variants of extracting the diagonal and measure their
 ### Details

-1. [`torch.diagonal`](https://pytorch.org/docs/stable/generated/torch.diagonal.html) on the dense
+1. **Torch**: [`torch.diagonal`](https://pytorch.org/docs/stable/generated/torch.diagonal.html) on the dense
   version of the matrix, obtained via
   [`torch.Tensor.to_dense`](https://pytorch.org/docs/stable/generated/torch.Tensor.to_dense.html).
   Due to materializing the dense matrix, this method requires a large amount of memory (O(n^2)).
@@ -95,7 +95,7 @@ Hence, we study different variants of extracting the diagonal and measure their
   d = torch.diagonal(matrix.to_dense())
   ```

-2. Python for-loop, and item access. Due to using a Python loop, this variant is likely to be
+2. **Python for-loop, and item access**: Due to using a Python loop, this variant is likely to be
   inefficient. Moreover, it is only applicable to the COO format, and fails for CSR adjacency
   matrices (cf. details collapsible).
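The code for this item-access variant falls in the elided lines of the diff; a minimal, self-contained sketch (with hypothetical example data) could look like this:

```python
import torch

# hypothetical example data: a small sparse COO matrix
indices = torch.tensor([[0, 1, 1, 2],
                        [0, 0, 1, 2]])
values = torch.tensor([1.0, 2.0, 3.0, 4.0])
matrix = torch.sparse_coo_tensor(indices, values, (3, 3)).coalesce()

n = matrix.shape[0]
d = torch.zeros(n, device=matrix.device)
for i in range(n):
    # integer item access works on sparse COO tensors, but not on CSR
    d[i] = matrix[i, i]
```

The per-element indexing inside a Python loop is exactly what makes this variant slow for large matrices.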
@@ -145,7 +145,7 @@ Hence, we study different variants of extracting the diagonal and measure their
   </details>

-3. Python for-loop (CSR): Here, we iterate over the rows, select the corresponding column indices, and
+3. **Python for-loop (CSR)**: Here, we iterate over the rows, select the corresponding column indices, and
   determine whether one of them corresponds to the diagonal entry. In that case, we select the
   corresponding value and copy it into the result.
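The document's own loop for this variant sits in the elided lines; a self-contained sketch of the CSR row iteration, using hypothetical example data, might read:

```python
import torch

# hypothetical example data: a small sparse CSR matrix
crow_indices = torch.tensor([0, 2, 3, 4])
col_indices = torch.tensor([0, 2, 1, 2])
values = torch.tensor([1.0, 5.0, 3.0, 4.0])
matrix = torch.sparse_csr_tensor(crow_indices, col_indices, values, size=(3, 3))

n = matrix.shape[0]
d = torch.zeros(n, device=matrix.device)
crow = matrix.crow_indices()
col = matrix.col_indices()
val = matrix.values()
for i in range(n):
    # the entries of row i live in the half-open slice [crow[i], crow[i + 1])
    for j, v in zip(col[crow[i]:crow[i + 1]], val[crow[i]:crow[i + 1]]):
        if j == i:  # found the diagonal entry of row i
            d[i] = v
```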
@@ -165,7 +165,7 @@ Hence, we study different variants of extracting the diagonal and measure their
   d[i] = v
   ```

-4. Coalesce: In this variant, we perform an element-wise multiplication with a sparse identity
+4. **Coalesce**: In this variant, we perform an element-wise multiplication with a sparse identity
   matrix, and then use [`torch.Tensor.values`](https://pytorch.org/docs/stable/generated/torch.Tensor.values.html)
   and [`torch.Tensor.indices`](https://pytorch.org/docs/stable/generated/torch.Tensor.indices.html)
   to obtain the values and indices of the non-zero elements. This operation only supports the COO format.
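Most of the coalesce variant's code falls in the elided lines of the diff; a standalone sketch of the approach, assuming hypothetical example data, could be:

```python
import torch

# hypothetical example data: a small sparse COO matrix
indices = torch.tensor([[0, 1, 1, 2],
                        [0, 0, 1, 2]])
values = torch.tensor([1.0, 2.0, 3.0, 4.0])
matrix = torch.sparse_coo_tensor(indices, values, (3, 3)).coalesce()

n = matrix.shape[0]
# sparse identity matrix: the element-wise product zeroes out everything
# except the diagonal entries of `matrix`
eye = torch.sparse_coo_tensor(
    torch.arange(n).repeat(2, 1), torch.ones(n), (n, n)
).coalesce()
diag = (matrix * eye).coalesce()

d = torch.zeros(n, device=matrix.device)
# scatter the surviving (diagonal) values into a dense result vector
d[diag.indices()[0]] = diag.values()
```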
@@ -182,11 +182,12 @@ Hence, we study different variants of extracting the diagonal and measure their
   d[indices] = values
   ```

-6. Manual Coalesce: Since the most expensive part of the previous solution is the coalesce step,
+5. **Manual Coalesce**: Since the most expensive part of the previous solution is the coalesce step,
   we investigate a variant which directly operates on the raw, uncoalesced `_values` and `_indices`.
   Hereby, we avoid having to coalesce non-diagonal entries. Since we operate on the uncoalesced
   view, there may be multiple entries for the same index, which need to be aggregated via
   [`scatter_add_`](https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add_.html).
+
   ```python
   n = matrix.shape[0]
   d = torch.zeros(n, device=matrix.device)