tf::Task A = tf.emplace([&] (tf::SubflowBuilder& subflow) {
  subflow.emplace([&]() { value = 10; });
  subflow.detach();
});

// create a task B after A
tf::Task B = tf.emplace([&] () {
  // no guarantee for value to be 10, because the subflow of A is detached
});

A.precede(B);
```
The graph is not dispatched yet and you can dump it to a GraphViz format.
```cpp
// debug.cpp
tf::Taskflow tf(0); // use only the master thread

tf::Task A = tf.emplace([] () {}).name("A");
tf::Task B = tf.emplace([] () {}).name("B");
tf::Task C = tf.emplace([] () {}).name("C");
tf::Task D = tf.emplace([] () {}).name("D");
tf::Task E = tf.emplace([] () {}).name("E");

A.precede(B, C, E);
C.precede(D);
B.precede(D, E);

tf.dump(std::cout);
```
Run the program and inspect whether the dependencies are expressed correctly.
and then use the `dump_topologies` method.
```cpp
tf::Taskflow tf(0); // use only the master thread

tf::Task A = tf.emplace([](){}).name("A");

// create a subflow of two tasks B1->B2
tf::Task B = tf.emplace([] (tf::SubflowBuilder& subflow) {
  tf::Task B1 = subflow.emplace([](){}).name("B1");
  tf::Task B2 = subflow.emplace([](){}).name("B2");
  B1.precede(B2);
}).name("B");

A.precede(B);

tf.dispatch().get();

// dump the entire graph (including dynamic tasks)
tf.dump_topologies(std::cout);
```
# API Reference
Visit [documentation][wiki] to see the complete list.
| -------- | --------- | ------- | ----------- |
| Taskflow | none | none | construct a taskflow with the worker count equal to the max hardware concurrency |
| Taskflow | size | none | construct a taskflow with a given number of workers |
| emplace | callables | tasks | create tasks to execute the given callables |
| placeholder | none | task | insert a node without any work; work can be assigned later |
| linearize | task list | none | create a linear dependency in the given task list |
| parallel_for | beg, end, callable, group | task pair | apply the callable in parallel, group by group, to the result of dereferencing every iterator in the range |
| dump | none | string | dump the current graph to a string of GraphViz format |
| dump_topologies | none | string | dump dispatched topologies to a string of GraphViz format |
### *emplace/placeholder*

You can use `emplace` to create a task from a target callable.

```cpp
// create a task through emplace
tf::Task task = tf.emplace([](){});
```
When the task cannot be determined beforehand, you can create a placeholder and assign the callable later.
```cpp
// create a placeholder and use it to build dependency
tf::Task A = tf.emplace([](){});
tf::Task B = tf.placeholder();

A.precede(B);

// assign the callable later in the control flow
```
The folder `example/` contains several examples and is a great place to learn to use Cpp-Taskflow.
| ------- | ----------- |
|[simple.cpp](./example/simple.cpp)| uses basic task building blocks to create a trivial taskflow graph |
|[debug.cpp](./example/debug.cpp)| inspects a taskflow through the dump method |
|[matrix.cpp](./example/matrix.cpp)| creates two sets of matrices and multiplies each individually in parallel |
|[dispatch.cpp](./example/dispatch.cpp)| demonstrates how to dispatch a task dependency graph and assign a callback to execute |
|[multiple_dispatch.cpp](./example/multiple_dispatch.cpp)| illustrates dispatching multiple taskflow graphs as independent batches (which all run on the same threadpool) |
**docs/FAQ.dox**
Try the tf::Taskflow::dump method to debug the graph before dispatching your tasks.
@subsection ProgrammingQuestions5 Q5: In the following example where B spawns a joined subflow of two tasks B1 and B2, do they run concurrently with task A?

@image html images/dynamic_graph.png width=60%

No. The subflow is spawned during the execution of B, and at this point A must have finished
because A precedes B. It follows that B1 and B2 must run after A.
**docs/QuickStart.dox**
namespace tf {
Cpp-Taskflow is by far faster and more expressive, requires fewer lines of code, and is easier to integrate
than existing parallel task programming libraries such as <a href="http://www.nersc.gov/users/software/programming-models/openmp/openmp-tasking/">OpenMP Tasking</a> and Intel <a href="https://www.threadingbuildingblocks.org/tutorial-intel-tbb-flow-graph">Thread Building Block (TBB) FlowGraph</a>.

@image html images/performance.jpg width=95%

Cpp-Taskflow is committed to supporting both academic and industry research projects,
making it reliable and cost-effective for long-term and large-scale developments.
<p>Traditional loop-level parallelism is simple but hardly allows users to exploit parallelism in more irregular applications such as graph algorithms, incremental flows, recursion, and dynamically allocated data structures. To address these challenges, parallel programming models and libraries are evolving from traditional loop-based parallelism to the <em>task-based</em> model.</p>
<p>The figure above shows an example <em>task dependency graph</em>. Each node in the graph represents a task unit at the function level, and each edge indicates the dependency between a pair of tasks. The task-based model offers a powerful means to express both regular and irregular parallelism in a top-down manner, and provides transparent scaling to a large number of cores. In fact, it has been proven, both by the research community and by the evolution of parallel programming standards, that the task-based approach scales best with future processor generations and architectures.</p>