`README.md` — 26 additions & 9 deletions
````diff
@@ -213,7 +213,8 @@ The table below summarizes its commonly used methods.
 | placeholder | none | task | insert a node without any work; work can be assigned later |
 | linearize | task list | none | create a linear dependency in the given task list |
 | parallel_for | beg, end, callable, group | task pair | apply the callable in parallel and group-by-group to the result of dereferencing every iterator in the range |
-| reduce | beg, end, res, op, group | task pair | apply a binary operator group-by-group to reduce a range of elements to a single result |
+| reduce | beg, end, res, bop, group | task pair | apply a binary operator group-by-group to reduce a range of elements to a single result |
+| transform_reduce | beg, end, res, bop, uop, group | task pair | apply a unary operator to each element in the range and reduce the returns to a single result group-by-group through a binary operator |
 | dispatch | none | future | dispatch the current graph and return a shared future to block on completeness |
 | silent_dispatch | none | none | dispatch the current graph |
 | wait_for_all | none | none | dispatch the current graph and block until all graphs including previously dispatched ones finish |
````
````diff
@@ -279,13 +280,13 @@ auto [S, T] = tf.parallel_for(
   v.end(),    // end of range
   [] (int i) {
     std::cout << "parallel in " << i << '\n';
-  }
+  },
+  1           // execute one task at a time
 );
-
 // add dependencies via S and T.
 ```
 
-By default, the group size is 1. Changing the group size can force intra-group tasks to run sequentially
+Changing the group size can force intra-group tasks to run sequentially
 and inter-group tasks to run in parallel.
 Depending on applications, different group sizes can result in significant performance hit.
 
````
````diff
@@ -304,13 +305,13 @@ auto [S, T] = tf.parallel_for(
 );
 ```
 
-### *reduce*
+### *reduce/transform_reduce*
 
 The method `reduce` creates a subgraph that applies a binary operator to a range of items in a container.
 The result will be stored in the referenced `res` object passed to the method.
 It is your responsibility to assign it a correct initial value to reduce.
````