Add segmented_reduce Python API (#3906)
* Add algorithms.segmented_reduce Python API. Also avoid recomputing the cccl_value of init in both segmented_reduce and reduce.
* Changes to the input_array fixture:
  1. Include np.complex64.
  2. Keep the output size in a variable and reuse it to avoid repeated occurrences of literal values.
  3. Generate real/imag values for complex arrays in a single call to the sampling function, for efficiency.
  4. Change the range of generated integral arrays based on the signedness of the integral data type: for unsigned types we continue to sample in the interval [0, 10), for signed types we sample from [-5, 5].
* Correct the docstring of the segmented_reduce function.
* Add initial tests for segmented_reduce.
* Improve readability of the test_segmented_reduce_api example.
* TransformIteratorKind need not override the __eq__/__hash__ methods of its base. Additionally, change the __hash__ of IteratorKind to mix the hash of its value with the hash of type(self).
* Add an AdvancedIterator(it, offset=1) function. This is used to advance a given iterator `it` by `offset` steps without running into multiple definitions of the advance/dereference methods.
* Add an example of summing the rows of a matrix using segmented_reduce (a sketch follows this list).
* Implement IteratorBase.__add__(self, offset: int) using make_advanced_iterator.
* Use end_offsets = start_offsets + 1. This calls IteratorBase.__add__ to produce an iterator whose state is advanced by 1 but which shares the same advance/dereference methods.
* Add a test for segmented_reduce on gpu_struct.
* Change the hash of the transform iterator to mix in its kind.
* Rename the variable n to sample_size. Also make generation of the complex array in test_reduce.py more efficient by generating the real and imaginary components in a single call to np.random.random instead of two calls.
* Remove the __hash__ and __eq__ special methods from some iterator classes: these were defined only for the TransformIterator and AdvancedIterator classes, not for the others. Also implement the review suggestion to use type(self) instead of self.__class__.
* Tweak test_scan_array_input to avoid integer overflows during host accumulation. For short-range data types we take a small slice of the input array to avoid the overflow problem; this works because the input_array fixture samples from a uniform discrete distribution with a small upper bound (8), so a sum of 31 uint8 elements can reach at most 31 * 7 = 217 (< 255) and fits in the type.
* Add a cccl.set_cccl_iterator_state utility function and use it in segmented_reduce.py.
* Introduce a _bindings.call_build utility that finds the compute capability and include paths and appends them to the algorithm-specific arguments; use it in segmented_reduce.
* Make call_build take *args, **kwargs.
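
As a quick orientation, here is a minimal sketch (not part of the commit) of the matrix-row-summing example mentioned above. The segmented_reduce signature and the two-phase temporary-storage protocol are taken from the diff below; the CountingIterator/TransformIterator helpers come from the package's iterators module, while the cupy allocations and the closure pattern are assumptions about how the shipped example is written.

    # Sketch only: sums each row of a C-contiguous matrix. Names not in the
    # diff (CountingIterator, TransformIterator, cupy usage) are assumptions.
    import cupy as cp
    import numpy as np

    from cuda.parallel.experimental import algorithms
    from cuda.parallel.experimental.iterators import CountingIterator, TransformIterator

    def add_op(a, b):
        return a + b

    n_rows, n_cols = 128, 1024
    mat = cp.random.rand(n_rows, n_cols)
    d_in = mat.reshape(-1)  # row-major flattening: row i occupies [i*n_cols, (i+1)*n_cols)
    d_out = cp.empty(n_rows, dtype=mat.dtype)
    h_init = np.zeros(1, dtype=mat.dtype)

    def make_scaler(step):
        def scale(row_id):
            return row_id * step
        return scale

    # Segment start offsets as a transformed counting iterator; the new
    # IteratorBase.__add__ derives the end offsets by advancing one step,
    # so end_offsets dereferences to (i + 1) * n_cols at position i.
    start_offsets = TransformIterator(CountingIterator(np.int64(0)), make_scaler(np.int64(n_cols)))
    end_offsets = start_offsets + 1

    alg = algorithms.segmented_reduce(d_in, d_out, start_offsets, end_offsets, add_op, h_init)

    # Two-phase protocol: the first call (temp_storage=None) returns the
    # scratch size, the second call performs the reduction.
    nbytes = alg(None, d_in, d_out, n_rows, start_offsets, end_offsets, h_init)
    temp_storage = cp.empty(nbytes, dtype=np.uint8)
    alg(temp_storage, d_in, d_out, n_rows, start_offsets, end_offsets, h_init)

    assert np.allclose(d_out.get(), mat.sum(axis=1).get())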
1 parent 83ba38c · commit 0183959 · 11 changed files with 490 additions and 27 deletions
python/cuda_parallel/cuda/parallel/experimental/algorithms/segmented_reduce.py (new file, 169 additions, 0 deletions)

import ctypes
from typing import Callable

import numba
import numpy as np
from numba.cuda.cudadrv import enums

from .. import _cccl as cccl
from .._bindings import call_build, get_bindings
from .._caching import CachableFunction, cache_with_key
from .._utils import protocols
from ..iterators._iterators import IteratorBase
from ..typing import DeviceArrayLike, GpuStruct


class _SegmentedReduce:
    def __del__(self):
        # Nothing to clean up if the build step never completed.
        if self.build_result is None:
            return
        bindings = get_bindings()
        bindings.cccl_device_segmented_reduce_cleanup(ctypes.byref(self.build_result))

    def __init__(
        self,
        d_in: DeviceArrayLike | IteratorBase,
        d_out: DeviceArrayLike,
        start_offsets_in: DeviceArrayLike | IteratorBase,
        end_offsets_in: DeviceArrayLike | IteratorBase,
        op: Callable,
        h_init: np.ndarray | GpuStruct,
    ):
        self.build_result = None
        self.d_in_cccl = cccl.to_cccl_iter(d_in)
        self.d_out_cccl = cccl.to_cccl_iter(d_out)
        self.start_offsets_in_cccl = cccl.to_cccl_iter(start_offsets_in)
        self.end_offsets_in_cccl = cccl.to_cccl_iter(end_offsets_in)
        self.h_init_cccl = cccl.to_cccl_value(h_init)
        if isinstance(h_init, np.ndarray):
            value_type = numba.from_dtype(h_init.dtype)
        else:
            value_type = numba.typeof(h_init)
        # The binary operator maps two values of the reduction type to one.
        sig = (value_type, value_type)
        self.op_wrapper = cccl.to_cccl_op(op, sig)
        self.build_result = cccl.DeviceSegmentedReduceBuildResult()
        self.bindings = get_bindings()
        # call_build appends the compute capability and include paths to the
        # algorithm-specific arguments before invoking the C binding.
        error = call_build(
            self.bindings.cccl_device_segmented_reduce_build,
            ctypes.byref(self.build_result),
            self.d_in_cccl,
            self.d_out_cccl,
            self.start_offsets_in_cccl,
            self.end_offsets_in_cccl,
            self.op_wrapper,
            self.h_init_cccl,
        )
        if error != enums.CUDA_SUCCESS:
            raise ValueError("Error building segmented_reduce")
||
def __call__( | ||
self, | ||
temp_storage, | ||
d_in, | ||
d_out, | ||
num_segments: int, | ||
start_offsets_in, | ||
end_offsets_in, | ||
h_init, | ||
stream=None, | ||
): | ||
set_state_fn = cccl.set_cccl_iterator_state | ||
set_state_fn(self.d_in_cccl, d_in) | ||
set_state_fn(self.d_out_cccl, d_out) | ||
set_state_fn(self.start_offsets_in_cccl, start_offsets_in) | ||
set_state_fn(self.end_offsets_in_cccl, end_offsets_in) | ||
self.h_init_cccl.state = h_init.__array_interface__["data"][0] | ||
|
||
stream_handle = protocols.validate_and_get_stream(stream) | ||
|
||
if temp_storage is None: | ||
temp_storage_bytes = ctypes.c_size_t() | ||
d_temp_storage = None | ||
else: | ||
temp_storage_bytes = ctypes.c_size_t(temp_storage.nbytes) | ||
d_temp_storage = protocols.get_data_pointer(temp_storage) | ||
|
||
error = self.bindings.cccl_device_segmented_reduce( | ||
self.build_result, | ||
ctypes.c_void_p(d_temp_storage), | ||
ctypes.byref(temp_storage_bytes), | ||
self.d_in_cccl, | ||
self.d_out_cccl, | ||
ctypes.c_ulonglong(num_segments), | ||
self.start_offsets_in_cccl, | ||
self.end_offsets_in_cccl, | ||
self.op_wrapper, | ||
self.h_init_cccl, | ||
ctypes.c_void_p(stream_handle), | ||
) | ||
|
||
if error != enums.CUDA_SUCCESS: | ||
raise ValueError("Error reducing") | ||
|
||
return temp_storage_bytes.value | ||
|
||
|
||
def _to_key(d_in: DeviceArrayLike | IteratorBase): | ||
"Return key for an input array-like argument or an iterator" | ||
d_in_key = ( | ||
d_in.kind if isinstance(d_in, IteratorBase) else protocols.get_dtype(d_in) | ||
) | ||
return d_in_key | ||
|
||
|
||
def make_cache_key( | ||
d_in: DeviceArrayLike | IteratorBase, | ||
d_out: DeviceArrayLike, | ||
start_offsets_in: DeviceArrayLike | IteratorBase, | ||
end_offsets_in: DeviceArrayLike | IteratorBase, | ||
op: Callable, | ||
h_init: np.ndarray, | ||
): | ||
d_in_key = _to_key(d_in) | ||
d_out_key = protocols.get_dtype(d_out) | ||
start_offsets_in_key = _to_key(start_offsets_in) | ||
end_offsets_in_key = _to_key(end_offsets_in) | ||
op_key = CachableFunction(op) | ||
h_init_key = h_init.dtype | ||
return ( | ||
d_in_key, | ||
d_out_key, | ||
start_offsets_in_key, | ||
end_offsets_in_key, | ||
op_key, | ||
h_init_key, | ||
) | ||
|
||
|
||


@cache_with_key(make_cache_key)
def segmented_reduce(
    d_in: DeviceArrayLike | IteratorBase,
    d_out: DeviceArrayLike,
    start_offsets_in: DeviceArrayLike | IteratorBase,
    end_offsets_in: DeviceArrayLike | IteratorBase,
    op: Callable,
    h_init: np.ndarray,
):
    """Computes a device-wide segmented reduction using the specified binary ``op`` and initial value ``h_init``.

    Example:
        Below, ``segmented_reduce`` is used to compute the minimum value in each segment of a sequence of integers.

        .. literalinclude:: ../../python/cuda_parallel/tests/test_segmented_reduce_api.py
            :language: python
            :dedent:
            :start-after: example-begin segmented-reduce-min
            :end-before: example-end segmented-reduce-min

    Args:
        d_in: Device array or iterator containing the input sequence of data items
        d_out: Device array that will store the result of the reduction
        start_offsets_in: Device array or iterator containing offsets to the start of each segment
        end_offsets_in: Device array or iterator containing offsets to the end of each segment
        op: Callable representing the binary operator to apply
        h_init: Numpy array storing the initial value of the reduction

    Returns:
        A callable object that can be used to perform the reduction
    """
    return _SegmentedReduce(d_in, d_out, start_offsets_in, end_offsets_in, op, h_init)
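
The docstring's literalinclude pulls the segmented-reduce-min snippet from test_segmented_reduce_api.py, which is not part of this diff. Here is a minimal sketch of what such a snippet could look like, assuming cupy for device allocation (the actual test may differ):

    import cupy as cp
    import numpy as np

    from cuda.parallel.experimental import algorithms

    def min_op(a, b):
        return a if a < b else b

    # Three segments of three elements each: [8, 6, 7], [5, 3, 0], [9, -1, 4].
    data = cp.asarray([8, 6, 7, 5, 3, 0, 9, -1, 4], dtype=np.int32)
    start_offsets = cp.asarray([0, 3, 6], dtype=np.int64)
    end_offsets = cp.asarray([3, 6, 9], dtype=np.int64)
    num_segments = start_offsets.size
    d_out = cp.empty(num_segments, dtype=np.int32)
    h_init = np.asarray([np.iinfo(np.int32).max], dtype=np.int32)

    alg = algorithms.segmented_reduce(data, d_out, start_offsets, end_offsets, min_op, h_init)

    # Query the scratch size, then run the reduction.
    nbytes = alg(None, data, d_out, num_segments, start_offsets, end_offsets, h_init)
    alg(cp.empty(nbytes, np.uint8), data, d_out, num_segments, start_offsets, end_offsets, h_init)

    # d_out now holds the per-segment minima: [6, 0, -1].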