Commit 164b19e (parent 285874e)

[Project] Docs minor update & add perf section

5 files changed: +52 −7 lines

CHANGELOG.md

Lines changed: 0 additions & 1 deletion

@@ -22,7 +22,6 @@ Added new vector C API, exposed vector primitive into python-package.
 - Vector creation (empty, from data, with random data)
 - Matrix-vector operations (matrix-vector and vector-matrix multiplication)
 - Vector-vector operations (element-wise addition)
-- Matrix operations (equality, reduce to value, extract sub-vector)
 - Vector data extraction (as list of indices)
 - Vector syntax sugar (pretty string printing, slicing, iterating through non-zero indices)
 - Matrix operations (extract row or matrix column as sparse vector, reduce matrix (optionally transposed) to vector)
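The vector primitives listed in this changelog have straightforward set semantics. As an illustrative sketch only (not cuBool's actual implementation, which runs on sparse GPU storage), a sparse Boolean vector can be modeled as the set of its non-zero indices, and a matrix as a map from row index to such a set:

```python
# Illustrative sketch of the sparse Boolean vector semantics listed above.
# A vector is a set of non-zero indices; a matrix is a dict mapping
# row index -> set of column indices. NOT cuBool's implementation.

def mxv(matrix_rows, v, n_rows):
    # Matrix-vector multiply over the Boolean semiring:
    # result[i] = OR_j (M[i, j] AND v[j]),
    # i.e. row i is set iff its non-zeros intersect v.
    return {i for i in range(n_rows) if matrix_rows.get(i, set()) & v}

def ewise_add(u, v):
    # Element-wise Boolean addition is set union.
    return u | v

m = {0: {1}, 2: {0, 2}}   # 3x3 matrix with non-zeros (0,1), (2,0), (2,2)
v = {1, 2}                # vector with non-zeros at indices 1 and 2
print(sorted(mxv(m, v, 3)))       # → [0, 2]
print(sorted(ewise_add({0}, v)))  # → [0, 1, 2]
```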

README.md

Lines changed: 27 additions & 4 deletions

@@ -51,8 +51,8 @@ prototyping algorithms on a local computer for later running on a powerful serve
 ### Platforms

 - Linux based OS (tested on Ubuntu 20.04)
-- Windows (not tested yet)
-- macOS (not tested yet)
+- Windows (coming soon)
+- macOS (coming soon)

 ### Simple example

@@ -74,9 +74,32 @@ b[2, 1] = True
 print(a, b, a.mxm(b), sep="\n")
 ```
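The example above ends with `a.mxm(b)`, the Boolean matrix-matrix product. As a plain-Python sketch of the semantics `mxm` computes over the Boolean semiring (illustration only, not the library's CUDA implementation):

```python
# Boolean matrix-matrix product: c[i][j] = OR_k (a[i][k] AND b[k][j]).
# Dense lists of bools stand in for cuBool's sparse storage; illustration only.

def bool_mxm(a, b):
    n, k, m = len(a), len(b), len(b[0])
    return [[any(a[i][t] and b[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

a = [[True, False],
     [False, True]]
b = [[False, True],
     [True, False]]
print(bool_mxm(a, b))  # → [[False, True], [True, False]]
```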
+### Performance
+
+Sparse Boolean matrix-matrix multiplication evaluation results are listed below.
+Machine configuration: PC with Ubuntu 20.04, Intel Core i7-6700 3.40GHz CPU, 64 GB DDR4 RAM, GeForce GTX 1070 GPU with 8 GB VRAM.
+
+![time](https://github.com/JetBrains-Research/cuBool/raw/master/docs/pictures/mxm-perf-time.svg?raw=true&sanitize=true)
+![mem](https://github.com/JetBrains-Research/cuBool/raw/master/docs/pictures/mxm-perf-mem.svg?raw=true&sanitize=true)
+
+The matrix data is taken from the [SuiteSparse Matrix Collection](https://sparse.tamu.edu).
+
+| Matrix name              |    # Rows |     Nnz M | Nnz/row | Max Nnz/row |    Nnz M^2 |
+|---                       |      ---: |      ---: |    ---: |        ---: |       ---: |
+| SNAP/amazon0312          |   400,727 | 3,200,440 |     7.9 |          10 | 14,390,544 |
+| LAW/amazon-2008          |   735,323 | 5,158,388 |     7.0 |          10 | 25,366,745 |
+| SNAP/web-Google          |   916,428 | 5,105,039 |     5.5 |         456 | 29,710,164 |
+| SNAP/roadNet-PA          | 1,090,920 | 3,083,796 |     2.8 |           9 |  7,238,920 |
+| SNAP/roadNet-TX          | 1,393,383 | 3,843,320 |     2.7 |          12 |  8,903,897 |
+| SNAP/roadNet-CA          | 1,971,281 | 5,533,214 |     2.8 |          12 | 12,908,450 |
+| DIMACS10/netherlands_osm | 2,216,688 | 4,882,476 |     2.2 |           7 |  8,755,758 |
+
+A detailed comparison is available in the [full paper](https://github.com/YaccConstructor/articles/blob/master/2021/GRAPL/Sparse_Boolean_Algebra_on_GPGPU/Sparse_Boolean_Algebra_on_GPGPU.pdf).
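A note on reading the table: `Nnz/row` appears to be the average `Nnz M / # Rows`, truncated to one decimal place. A small check against the table's own values (the helper name is ours, not the library's):

```python
# Verify that the Nnz/row column equals Nnz M / # Rows truncated
# to one decimal place, using values taken from the table above.

def avg_nnz_per_row(nnz, rows):
    return int(10 * nnz / rows) / 10  # truncate, not round

table = [
    ("SNAP/amazon0312", 400_727, 3_200_440, 7.9),
    ("SNAP/web-Google", 916_428, 5_105_039, 5.5),
    ("DIMACS10/netherlands_osm", 2_216_688, 4_882_476, 2.2),
]
for name, rows, nnz, expected in table:
    assert avg_nnz_per_row(nnz, rows) == expected, name
print("Nnz/row column consistent")  # → Nnz/row column consistent
```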
 ### Installation

-If you are running **Linux based** OS (tested on Ubuntu 20.04) you can download the official
+If you are running a **Linux-based** OS (tested on Ubuntu 20.04), you can download the official
 PyPI **pycubool** python package, which includes compiled library source code
 with Cuda and Sequential computations support. Installation process
 requires only `python3` to be installed on your machine. Python can be installed

@@ -102,7 +125,7 @@ These steps are required if you want to build library for your specific platform
 ### Requirements

-- Linux based OS (tested on Ubuntu 20.04)
+- Linux-based OS (tested on Ubuntu 20.04)
 - CMake Version 3.15 or higher
 - CUDA Compatible GPU device (to run Cuda computations)
 - GCC Compiler

docs/pictures/mxm-perf-mem.svg

Lines changed: 1 addition & 0 deletions (image diff not rendered)

docs/pictures/mxm-perf-time.svg

Lines changed: 1 addition & 0 deletions (image diff not rendered)

python/README.md

Lines changed: 23 additions & 2 deletions

@@ -31,8 +31,6 @@ prototyping algorithms on a local computer for later running on a powerful serve
 ### Features

-- C API for performance-critical computations
-- Python package for every-day tasks
 - Cuda backend for computations
 - Cpu backend for computations
 - Matrix/vector creation (empty, from data, with random data)

@@ -47,6 +45,29 @@ prototyping algorithms on a local computer for later running on a powerful serve
 - GraphViz (export single matrix or set of matrices as a graph with custom color and label settings)
 - Debug (matrix string debug markers, logging)

+### Performance
+
+Sparse Boolean matrix-matrix multiplication evaluation results are listed below.
+Machine configuration: PC with Ubuntu 20.04, Intel Core i7-6700 3.40GHz CPU, 64 GB DDR4 RAM, GeForce GTX 1070 GPU with 8 GB VRAM.
+
+![time](https://github.com/JetBrains-Research/cuBool/raw/master/docs/pictures/mxm-perf-time.svg?raw=true&sanitize=true)
+![mem](https://github.com/JetBrains-Research/cuBool/raw/master/docs/pictures/mxm-perf-mem.svg?raw=true&sanitize=true)
+
+The matrix data is taken from the [SuiteSparse Matrix Collection](https://sparse.tamu.edu).
+
+| Matrix name              |    # Rows |     Nnz M | Nnz/row | Max Nnz/row |    Nnz M^2 |
+|---                       |      ---: |      ---: |    ---: |        ---: |       ---: |
+| SNAP/amazon0312          |   400,727 | 3,200,440 |     7.9 |          10 | 14,390,544 |
+| LAW/amazon-2008          |   735,323 | 5,158,388 |     7.0 |          10 | 25,366,745 |
+| SNAP/web-Google          |   916,428 | 5,105,039 |     5.5 |         456 | 29,710,164 |
+| SNAP/roadNet-PA          | 1,090,920 | 3,083,796 |     2.8 |           9 |  7,238,920 |
+| SNAP/roadNet-TX          | 1,393,383 | 3,843,320 |     2.7 |          12 |  8,903,897 |
+| SNAP/roadNet-CA          | 1,971,281 | 5,533,214 |     2.8 |          12 | 12,908,450 |
+| DIMACS10/netherlands_osm | 2,216,688 | 4,882,476 |     2.2 |           7 |  8,755,758 |
+
+A detailed comparison is available in the [full paper](https://github.com/YaccConstructor/articles/blob/master/2021/GRAPL/Sparse_Boolean_Algebra_on_GPGPU/Sparse_Boolean_Algebra_on_GPGPU.pdf).

 ### Simple example

 Create sparse matrices, compute matrix-matrix product and print the result to the output:
