T.-W. Huang, C.-X. Lin, G. Guo, and M. D. F. Wong, "<a href="ipdps19.pdf">Cpp-Taskflow: Fast Task-based Parallel Programming using Modern C++</a>," <em>IEEE International Parallel and Distributed Processing Symposium (IPDPS)</em>, pp. 974-983, Rio de Janeiro, Brazil, 2019. </li>
<li>
C.-X. Lin, T.-W. Huang, G. Guo, and M. D. F. Wong, "<a href="mm19.pdf">A Modern C++ Parallel Task Programming Library</a>," <em>ACM Multimedia Conference (MM)</em>, pp. 2284-2287, Nice, France, 2019. </li>
<li>
C.-X. Lin, T.-W. Huang, G. Guo, and M. D. F. Wong, "<a href="hpec19.pdf">An Efficient and Composable Parallel Task Programming Library</a>," <em>IEEE High-performance and Extreme Computing Conference (HPEC)</em>, pp. 1-7, Waltham, MA, 2019. </li>
<p>Cpp-Taskflow addresses a long-standing problem, <em>how can we make it easier">
<div class="textblock"><p>Cpp-Taskflow addresses a long-standing problem: <em>how can we make it easier for C++ developers to quickly write parallel and heterogeneous programs with high performance scalability and simultaneous high productivity?</em></p>
<h1><a class="anchor" id="TheEraOfMulticore"></a>
The Era of Multicore</h1>
<p>In the past, we embraced <em>free</em> performance scaling in our software thanks to advances in manufacturing technologies and micro-architectural innovations. Approximately every 1.5 years, we could speed up our programs simply by switching to new hardware and compiler vendors that brought 2x more transistors, faster clock rates, and higher instruction-level parallelism. However, this paradigm has been challenged by the power wall and the increasing difficulty of exploiting instruction-level parallelism. Since then, the boost to computing performance has stemmed from the shift to multicore chip designs.</p>
<p>The above sweeping visualization (thanks to Prof. Mark Horowitz and his group) shows that the evolution of computer architectures is moving toward multicore designs. Today, multicore processors and multiprocessor systems are common in many electronic products such as mobiles, laptops, desktops, and servers. To keep up with performance scaling, it is becoming necessary for software developers to write parallel programs that utilize all available cores.</p>
<p>With the influence of artificial intelligence (AI) through new and merged workloads, heterogeneous computing is becoming increasingly important and will remain so for years to come. We now have not just CPUs but also GPUs, TPUs, FPGAs, and ASICs to accelerate a wide variety of scientific computing problems.</p>
<p>The question is: <em>How are we going to program these beasts?</em> Writing a high-performance sequential program is hard. Parallel programming is harder. Parallel programming of heterogeneous devices is extremely challenging if we care about performance and power efficiency. Programming models need to balance productivity against performance.</p>
<p>The most basic concept of parallel programming is <em>loop-level parallelism</em>: exploiting the parallelism that exists among the iterations of a loop. The program typically partitions a loop of iterations into a set of blocks, either fixed or dynamic in size, and runs each block in parallel. The figure below illustrates this pattern.</p>
<p>The main advantage of the loop-based approach is its simplicity in speeding up a regular workload in line with Amdahl's Law. Programmers only need to discover the independence of iterations within a loop; once that is established, the parallel decomposition strategy is easy to implement. Many existing libraries have built-in support for writing a parallel-for loop.</p>
<p>The above figure shows an example <em>task dependency graph</em>. Each node in the graph represents a task unit at the function level, and each edge indicates a dependency between a pair of tasks. The task-based model offers a powerful means to express both regular and irregular parallelism in a top-down manner, and it provides transparent scaling to a large number of cores. In fact, it has been shown, both by the research community and by the evolution of parallel programming standards, that the task-based approach scales best with future processor generations and architectures.</p>
<h1>Challenges of Task-based Parallel Programming</h1>
<p>Parallel programs are notoriously hard to write correctly, regardless of whether a loop-based approach or a task-based model is used. A primary reason is <em>data dependency</em>: some data cannot be accessed until other data becomes available. This constraint introduces a number of challenges, such as data races, thread contention, and consistency issues, when writing a correct parallel program. Through the evolution of parallel programming standards, it has been shown that the most effective way to overcome these obstacles is a suitable task-based programming model. The programming model affects software development in many aspects, such as programmability, debugging effort, development cost, and efficiency.</p>
<h1><a class="anchor" id="TheProjectMantra"></a>
The Project Mantra</h1>
<p>The goal of Cpp-Taskflow is simple: <em>We help developers quickly write parallel programs with high performance scalability and simultaneous high productivity</em>. We want developers to write simple and effective parallel code, specifically with the following objectives:</p>
<ul>
<li>Expressiveness </li>
<li>Readability </li>
<li>Transparency</li>
</ul>
<p>In a nutshell, code written with Cpp-Taskflow explains itself. The transparency allows developers to focus on the development of application algorithms and parallel decomposition strategies, rather than low-level, system-specific details. </p>
<p>Cpp-Taskflow helps you quickly write parallel task programs with high performance scalability and simultaneous high productivity. It is faster, more expressive, requires fewer lines of code, and is easier to drop into existing projects than existing parallel task programming frameworks such as <a href="https://www.openmp.org/spec-html/5.0/openmpsu99.html">OpenMP Tasking</a> and Intel <a href="https://www.threadingbuildingblocks.org/tutorial-intel-tbb-flow-graph">Thread Building Block (TBB) FlowGraph</a>.</p>
<p>Cpp-Taskflow is <em>header-only</em> and there is no need for installation. Simply download the source and copy the headers under the directory <code>taskflow/</code> to your project.</p>
<div class="fragment"><div class="line">~$ git clone https://github.com/cpp-taskflow/cpp-taskflow.git</div><div class="line">~$ cd cpp-taskflow/</div><div class="line">~$ cp -r taskflow myproject/include/</div></div><!-- fragment --><h1><a class="anchor" id="ASimpleFirstProgram"></a>
A Simple First Program</h1>
<p>Here is a rather simple program to get you started.</p>