gh-120496: Use CAS approach for rangeiter_next #120534
corona10 wants to merge 8 commits into python:main
Conversation
Thanks for doing this! Do you have any benchmark results on how much the perf impact is, versus, say, using a critical section?
I will try to prepare :)
I am not sure this will be a fair benchmark, but it shows different strengths in terms of technique and size of N.

Single thread

import pyperf

runner = pyperf.Runner()
runner.timeit(name="bench iter single",
              stmt="""
it = iter(range(N))  # recreate the iterator each round so it is never exhausted
data = list(it)
""",
              setup="""
N = 100
""")

Multi thread

import pyperf

runner = pyperf.Runner()
runner.timeit(name="bench iter multi",
              stmt="""
for _ in range(100):
    it = iter(range(N))
    with concurrent.futures.ThreadPoolExecutor() as executor:
        data = list(executor.map(lambda _: next(it), range(N)))
""",
              setup="""
import concurrent.futures
N = 100
""")
Hmmm, the perf difference is not that large. That might imply that we should just use a critical section for simplicity, and hope that someone on PyPI publishes a "prange()" or something like that.
Yeah, I prefer this one.
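For reference, a minimal sketch of the critical-section variant being discussed (not necessarily what the follow-up PR does), assuming the internal Py_BEGIN_CRITICAL_SECTION/Py_END_CRITICAL_SECTION macros and the _PyRangeIterObject fields (start, step, len):

/* Minimal sketch: serialize rangeiter_next with a per-object critical
 * section.  On the default (GIL) build the macros are effectively no-ops;
 * on the free-threaded build they lock the iterator object. */
static PyObject *
rangeiter_next_critical_section_sketch(_PyRangeIterObject *r)
{
    PyObject *result = NULL;
    Py_BEGIN_CRITICAL_SECTION(r);
    if (r->len > 0) {
        long value = r->start;
        r->start = value + r->step;
        r->len--;
        result = PyLong_FromLong(value);
    }
    Py_END_CRITICAL_SECTION();
    return result;
}

Returning NULL without setting an exception signals exhaustion to the tp_iternext protocol, the same as the existing code below.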
if (len <= 0) {
    return NULL;
}
long result = _Py_atomic_load_long_relaxed(&r->start);
What if we have a range object with len=1 and many threads simultaneously at this point? The check on len has already been done, so the threads each increment the value of start and we end up with a value for result larger than end.
Maybe it would fail at _Py_atomic_compare_exchange_long?
Ah.. yes, I understand what you pointed out.
There will be a slight timing issue, then. Yes, let's just use the critical section approach that @Fidget-Spinner also preferred.
To guard against the overshoot issue, we could validate the result by comparing it against the end value (see the sketch below).
But let's just use the critical section approach.
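For illustration, a rough sketch of that bounds-validated CAS loop (not the actual patch). It assumes a hypothetical immutable end field (start + len * step, fixed when the iterator is created) so that only start is mutated, and it uses the _Py_atomic_compare_exchange_long helper mentioned above:

/* Rough sketch: retry the CAS and re-check the bound on every attempt, so a
 * thread that loses the len=1 race described above returns NULL instead of
 * handing out a value past the end.  `r->end` is a hypothetical field. */
static PyObject *
rangeiter_next_cas_sketch(_PyRangeIterObject *r)
{
    long step = r->step;   /* immutable after creation */
    long end = r->end;     /* hypothetical immutable bound, see above */
    for (;;) {
        long cur = _Py_atomic_load_long_relaxed(&r->start);
        /* Exhausted?  The bound is checked against the freshly loaded value
         * on every attempt, so a late thread cannot overshoot. */
        if ((step > 0 && cur >= end) || (step < 0 && cur <= end)) {
            return NULL;
        }
        /* Claim `cur` only if no other thread advanced start in between;
         * if the CAS fails, reload and retry. */
        if (_Py_atomic_compare_exchange_long(&r->start, &cur, cur + step)) {
            return PyLong_FromLong(cur);
        }
    }
}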
See: #120540