
Commit c41749f

Author: chchiu (committed)
updated pipeline docs
1 parent a7fcabe commit c41749f

File tree

483 files changed, +10276 -6471 lines


docs/Algorithms.html

Lines changed: 2 additions & 2 deletions
@@ -48,7 +48,7 @@
 <h1>
 Taskflow Algorithms
 </h1>
-<p>Taskflow defines a collection of algorithm functions especially designed to be used on ranges of elements.</p><ul><li><a href="ParallelIterations.html" class="m-doc">Parallel Iterations</a></li><li><a href="ParallelTransforms.html" class="m-doc">Parallel Transforms</a></li><li><a href="ParallelReduction.html" class="m-doc">Parallel Reduction</a></li><li><a href="ParallelSort.html" class="m-doc">Parallel Sort</a></li><li>PipelineParallelism</li></ul>
+<p>Taskflow defines a collection of algorithm functions especially designed to be used on ranges of elements.</p><ul><li><a href="ParallelIterations.html" class="m-doc">Parallel Iterations</a></li><li><a href="ParallelTransforms.html" class="m-doc">Parallel Transforms</a></li><li><a href="ParallelReduction.html" class="m-doc">Parallel Reduction</a></li><li><a href="ParallelSort.html" class="m-doc">Parallel Sort</a></li><li><a href="ParallelPipeline.html" class="m-doc">Parallel Pipeline</a></li></ul>
 </div>
 </div>
 </div>
@@ -93,7 +93,7 @@ <h1>
 <div class="m-container">
 <div class="m-row">
 <div class="m-col-l-10 m-push-l-1">
-<p>Taskflow handbook is part of the <a href="https://taskflow.github.io">Taskflow project</a>, copyright © <a href="https://tsung-wei-huang.github.io/">Dr. Tsung-Wei Huang</a>, 2018&ndash;2021.<br />Generated by <a href="https://doxygen.org/">Doxygen</a> 1.8.13 and <a href="https://mcss.mosra.cz/">m.css</a>.</p>
+<p>Taskflow handbook is part of the <a href="https://taskflow.github.io">Taskflow project</a>, copyright © <a href="https://tsung-wei-huang.github.io/">Dr. Tsung-Wei Huang</a>, 2018&ndash;2021.<br />Generated by <a href="https://doxygen.org/">Doxygen</a> 1.8.14 and <a href="https://mcss.mosra.cz/">m.css</a>.</p>
 </div>
 </div>
 </div>
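
The change above replaces the plain "PipelineParallelism" placeholder with a link to the new Parallel Pipeline page, in line with the commit message "updated pipeline docs". For orientation, here is a minimal, hedged sketch of how a Taskflow pipeline is typically assembled; the three stage bodies, the 100-token stop condition, and the exact header path are illustrative assumptions, not content taken from this commit.

// Minimal sketch of a three-stage Taskflow pipeline (illustrative only;
// stop condition and stage bodies are placeholders).
#include <taskflow/taskflow.hpp>
#include <taskflow/algorithm/pipeline.hpp>  // header location may vary by Taskflow version

int main() {
  tf::Taskflow taskflow;
  tf::Executor executor;

  const size_t num_lines = 4;  // number of concurrent pipeline lines

  tf::Pipeline pipeline(num_lines,
    // first stage: serial, generates tokens and stops after 100 of them
    tf::Pipe{tf::PipeType::SERIAL, [](tf::Pipeflow& pf) {
      if(pf.token() == 100) {
        pf.stop();
      }
    }},
    // second stage: parallel, processes tokens independently
    tf::Pipe{tf::PipeType::PARALLEL, [](tf::Pipeflow&) {
      // ... per-token work ...
    }},
    // third stage: serial, e.g., emits results in token order
    tf::Pipe{tf::PipeType::SERIAL, [](tf::Pipeflow&) {
      // ... ordered output ...
    }}
  );

  // a pipeline runs as a composable module task inside a taskflow
  taskflow.composed_of(pipeline).name("pipeline");
  executor.run(taskflow).wait();
}

The first pipe is serial so it can decide when token generation stops; later pipes can be serial or parallel depending on whether per-token ordering must be preserved.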

docs/AsyncTasking.html

Lines changed: 53 additions & 46 deletions
Large diffs are not rendered by default.

docs/BenchmarkTaskflow.html

Lines changed: 48 additions & 34 deletions
Large diffs are not rendered by default.

docs/CUDASTDExecutionPolicy.html

Lines changed: 13 additions & 5 deletions
@@ -49,13 +49,21 @@ <h1>
 <span class="m-breadcrumb"><a href="cudaStandardAlgorithms.html">CUDA Standard Algorithms</a> &raquo;</span>
 Execution Policy
 </h1>
-<p>Taskflow provides standalone template methods for expressing common parallel algorithms on a GPU. Each of these methods is governed by an <em>execution policy object</em> to configure the kernel execution parameters.</p><section id="CUDASTDParameterizePerformance"><h2><a href="#CUDASTDParameterizePerformance">Parameterize Performance</a></h2><p>Taskflow parameterizes most CUDA algorithms in terms of <em>the number of threads per block</em> and <em>units of work per thread</em>, which can be specified in the execution policy template type, <a href="classtf_1_1cudaExecutionPolicy.html" class="m-doc">tf::<wbr />cudaExecutionPolicy</a>. The design is inspired by <a href="https://moderngpu.github.io/">Modern GPU Programming</a> authored by Sean Baxter to achieve high-performance GPU computing.</p></section><section id="CUDASTDDefineAnExecutionPolicy"><h2><a href="#CUDASTDDefineAnExecutionPolicy">Define an Execution Policy</a></h2><p>The following example defines an execution policy object, <code>policy</code>, which configures (1) each block to invoke 512 threads and (2) each of these <code>512</code> threads to perform <code>11</code> units of work. Block size must be a power of two. It is always a good idea to specify an odd number in the second parameter to avoid bank conflicts.</p><pre class="m-code"><span class="cp">#include</span> <span class="cpf">&lt;taskflow/cudaflow.hpp&gt;</span><span class="cp"></span>
+<div class="m-block m-default">
+<h3>Contents</h3>
+<ul>
+<li><a href="#CUDASTDParameterizePerformance">Parameterize Performance</a></li>
+<li><a href="#CUDASTDDefineAnExecutionPolicy">Define an Execution Policy</a></li>
+<li><a href="#CUDASTDAllocateMemoryBufferForAlgorithms">Allocate Memory Buffer for Algorithms</a></li>
+</ul>
+</div>
+<p>Taskflow provides standalone template methods for expressing common parallel algorithms on a GPU. Each of these methods is governed by an <em>execution policy object</em> to configure the kernel execution parameters.</p><section id="CUDASTDParameterizePerformance"><h2><a href="#CUDASTDParameterizePerformance">Parameterize Performance</a></h2><p>Taskflow parameterizes most CUDA algorithms in terms of <em>the number of threads per block</em> and <em>units of work per thread</em>, which can be specified in the execution policy template type, <a href="classtf_1_1cudaExecutionPolicy.html" class="m-doc">tf::<wbr />cudaExecutionPolicy</a>. The design is inspired by <a href="https://moderngpu.github.io/">Modern GPU Programming</a> authored by Sean Baxter to achieve high-performance GPU computing.</p></section><section id="CUDASTDDefineAnExecutionPolicy"><h2><a href="#CUDASTDDefineAnExecutionPolicy">Define an Execution Policy</a></h2><p>The following example defines an execution policy object, <code>policy</code>, which configures (1) each block to invoke 512 threads and (2) each of these <code>512</code> threads to perform <code>11</code> units of work. Block size must be a power of two. It is always a good idea to specify an odd number in the second parameter to avoid bank conflicts.</p><pre class="m-code"><span class="cp">#include</span><span class="w"> </span><span class="cpf">&lt;taskflow/cudaflow.hpp&gt;</span><span class="cp"></span>

-<span class="n">tf</span><span class="o">::</span><span class="n">cudaExecutionPolicy</span><span class="o">&lt;</span><span class="mi">512</span><span class="p">,</span> <span class="mi">11</span><span class="o">&gt;</span> <span class="n">policy</span><span class="p">;</span></pre><aside class="m-note m-info"><h4>Note</h4><p>To use CUDA standard algorithms, you need to include the header <a href="cudaflow_8hpp.html" class="m-doc">taskflow/<wbr />cudaflow.hpp</a>.</p></aside><p>By default, the execution policy object is associated with the CUDA <em>default stream</em> (i.e., 0). Default stream can incur significant overhead due to the global synchronization. You can associate an execution policy with another stream as shown below:</p><pre class="m-code"><span class="c1">// assign a stream to a policy at construction time</span>
-<span class="n">tf</span><span class="o">::</span><span class="n">cudaExecutionPolicy</span><span class="o">&lt;</span><span class="mi">512</span><span class="p">,</span> <span class="mi">11</span><span class="o">&gt;</span> <span class="n">policy</span><span class="p">(</span><span class="n">my_stream</span><span class="p">);</span>
+<span class="n">tf</span><span class="o">::</span><span class="n">cudaExecutionPolicy</span><span class="o">&lt;</span><span class="mi">512</span><span class="p">,</span><span class="w"> </span><span class="mi">11</span><span class="o">&gt;</span><span class="w"> </span><span class="n">policy</span><span class="p">;</span><span class="w"></span></pre><aside class="m-note m-info"><h4>Note</h4><p>To use CUDA standard algorithms, you need to include the header <a href="cudaflow_8hpp.html" class="m-doc">taskflow/<wbr />cudaflow.hpp</a>.</p></aside><p>By default, the execution policy object is associated with the CUDA <em>default stream</em> (i.e., 0). Default stream can incur significant overhead due to the global synchronization. You can associate an execution policy with another stream as shown below:</p><pre class="m-code"><span class="c1">// assign a stream to a policy at construction time</span>
+<span class="n">tf</span><span class="o">::</span><span class="n">cudaExecutionPolicy</span><span class="o">&lt;</span><span class="mi">512</span><span class="p">,</span><span class="w"> </span><span class="mi">11</span><span class="o">&gt;</span><span class="w"> </span><span class="n">policy</span><span class="p">(</span><span class="n">my_stream</span><span class="p">);</span><span class="w"></span>

 <span class="c1">// assign another stream to the policy</span>
-<span class="n">policy</span><span class="p">.</span><span class="n">stream</span><span class="p">(</span><span class="n">another_stream</span><span class="p">);</span></pre><p>All the CUDA standard algorithms in Taskflow are asynchronous with respect to the stream assigned to the execution policy. This enables high execution efficiency for large GPU workloads that call for many different algorithms. You can synchronize the execution at your own wish by calling <code>synchronize</code>.</p><pre class="m-code"><span class="n">policy</span><span class="p">.</span><span class="n">synchronize</span><span class="p">();</span> <span class="c1">// synchronize the associated stream</span></pre><p>The best-performing configurations for each algorithm, each GPU architecture, and each data type can vary significantly. You should experiment different configurations and find the optimal tuning parameters for your applications. A default policy is given in <a href="namespacetf.html#aa18f102977c3257b75e21fde05efdb68" class="m-doc">tf::<wbr />cudaDefaultExecutionPolicy</a>.</p><pre class="m-code"><span class="n">tf</span><span class="o">::</span><span class="n">cudaDefaultExecutionPolicy</span> <span class="n">default_policy</span><span class="p">;</span></pre></section><section id="CUDASTDAllocateMemoryBufferForAlgorithms"><h2><a href="#CUDASTDAllocateMemoryBufferForAlgorithms">Allocate Memory Buffer for Algorithms</a></h2><p>A key difference between our CUDA standard algorithms and others (e.g., Thrust) is the <em>memory management</em>. Unlike CPU-parallel algorithms, many GPU-parallel algorithms require extra buffer to store the temporary results during the multi-phase computation, for instance, <a href="namespacetf.html#a8a872d2a0ac73a676713cb5be5aa688c" class="m-doc">tf::<wbr />cuda_reduce</a> and <a href="namespacetf.html#a06804cb1598e965febc7bd35fc0fbbb0" class="m-doc">tf::<wbr />cuda_sort</a>. We <em>DO NOT</em> allocate any memory during these algorithms call but ask you to provide the memory buffer required for each of such algorithms. This decision seems to complicate the code a little bit, but it gives applications freedom to optimize the memory; also, it makes all algorithm calls capturable to a CUDA graph to improve the execution efficiency.</p></section>
+<span class="n">policy</span><span class="p">.</span><span class="n">stream</span><span class="p">(</span><span class="n">another_stream</span><span class="p">);</span><span class="w"></span></pre><p>All the CUDA standard algorithms in Taskflow are asynchronous with respect to the stream assigned to the execution policy. This enables high execution efficiency for large GPU workloads that call for many different algorithms. You can synchronize the execution at your own wish by calling <code>synchronize</code>.</p><pre class="m-code"><span class="n">policy</span><span class="p">.</span><span class="n">synchronize</span><span class="p">();</span><span class="w"> </span><span class="c1">// synchronize the associated stream</span></pre><p>The best-performing configurations for each algorithm, each GPU architecture, and each data type can vary significantly. You should experiment different configurations and find the optimal tuning parameters for your applications. A default policy is given in <a href="namespacetf.html#aa18f102977c3257b75e21fde05efdb68" class="m-doc">tf::<wbr />cudaDefaultExecutionPolicy</a>.</p><pre class="m-code"><span class="n">tf</span><span class="o">::</span><span class="n">cudaDefaultExecutionPolicy</span><span class="w"> </span><span class="n">default_policy</span><span class="p">;</span><span class="w"></span></pre></section><section id="CUDASTDAllocateMemoryBufferForAlgorithms"><h2><a href="#CUDASTDAllocateMemoryBufferForAlgorithms">Allocate Memory Buffer for Algorithms</a></h2><p>A key difference between our CUDA standard algorithms and others (e.g., Thrust) is the <em>memory management</em>. Unlike CPU-parallel algorithms, many GPU-parallel algorithms require extra buffer to store the temporary results during the multi-phase computation, for instance, <a href="namespacetf.html#a8a872d2a0ac73a676713cb5be5aa688c" class="m-doc">tf::<wbr />cuda_reduce</a> and <a href="namespacetf.html#a06804cb1598e965febc7bd35fc0fbbb0" class="m-doc">tf::<wbr />cuda_sort</a>. We <em>DO NOT</em> allocate any memory during these algorithms call but ask you to provide the memory buffer required for each of such algorithms. This decision seems to complicate the code a little bit, but it gives applications freedom to optimize the memory; also, it makes all algorithm calls capturable to a CUDA graph to improve the execution efficiency.</p></section>
 </div>
 </div>
 </div>
@@ -100,7 +108,7 @@ <h1>
 <div class="m-container">
 <div class="m-row">
 <div class="m-col-l-10 m-push-l-1">
-<p>Taskflow handbook is part of the <a href="https://taskflow.github.io">Taskflow project</a>, copyright © <a href="https://tsung-wei-huang.github.io/">Dr. Tsung-Wei Huang</a>, 2018&ndash;2021.<br />Generated by <a href="https://doxygen.org/">Doxygen</a> 1.8.13 and <a href="https://mcss.mosra.cz/">m.css</a>.</p>
+<p>Taskflow handbook is part of the <a href="https://taskflow.github.io">Taskflow project</a>, copyright © <a href="https://tsung-wei-huang.github.io/">Dr. Tsung-Wei Huang</a>, 2018&ndash;2021.<br />Generated by <a href="https://doxygen.org/">Doxygen</a> 1.8.14 and <a href="https://mcss.mosra.cz/">m.css</a>.</p>
 </div>
 </div>
 </div>
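
Beyond adding a table of contents and re-rendering whitespace spans, the prose carried by this diff describes a concrete workflow: construct a tf::cudaExecutionPolicy, bind it to a stream, synchronize when needed, and supply temporary buffers to the algorithms yourself. The sketch below condenses only what that prose states; the two stream handles are caller-created assumptions, and the buffer-size query API for algorithms such as tf::cuda_reduce is deliberately not shown because it varies by Taskflow version.

// Minimal sketch of the execution-policy usage described in the page above.
// Assumes a CUDA toolkit is available; error checking is omitted for brevity.
#include <cuda_runtime.h>
#include <taskflow/cudaflow.hpp>

int main() {
  cudaStream_t my_stream, another_stream;
  cudaStreamCreate(&my_stream);
  cudaStreamCreate(&another_stream);

  // 512 threads per block, 11 units of work per thread
  tf::cudaExecutionPolicy<512, 11> policy(my_stream);

  // rebind the policy to a different stream
  policy.stream(another_stream);

  // algorithms launched through this policy run asynchronously;
  // block until the associated stream has finished
  policy.synchronize();

  // a pre-tuned default configuration is also provided
  tf::cudaDefaultExecutionPolicy default_policy;
  (void)default_policy;

  // NOTE: algorithms such as tf::cuda_reduce and tf::cuda_sort take a policy
  // like the ones above plus a caller-allocated temporary buffer, as the
  // "Allocate Memory Buffer for Algorithms" section explains.

  cudaStreamDestroy(my_stream);
  cudaStreamDestroy(another_stream);
}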
