Switch free_at_safepoint linked list (back) to a vector #1806
Conversation
This way we don't need to alloc every time something is added, and actually performing the free of the elements should be faster since we don't have to pointer chase through a linked list.
I think this is a good case for per-thread lists (or arrays). MVM_alloc_safepoint is only called on the GC orchestrator thread, at a point where all other threads are just waiting, so it would be safe to go through all thread contexts and process their free lists. The only thing I'm worried about with MVM_VECTOR is that it only grows and never shrinks, so after just one extreme situation we could keep paying the memory cost for the rest of the program's runtime. Therefore I'd destroy and re-initialize those vectors in MVM_alloc_safepoint if they have grown past some sane value (maybe 64?).
The thread safety is exactly why it was a linked list.
I'll try that.
When compiling CORE.c, the size of the free_at_safepoint linked list reached a maximum of 22k entries. Of the 1982 times MVM_alloc_safepoint was called, only 116 had a list size under 100. So if we do this, I think the sane value would have to be much higher than 64. But I'm not sure what cost we'd be paying even without destroying the vector. Sure, it only grows, but each MVM_VECTOR_POP decrements its index, so after MVM_alloc_safepoint the next MVM_VECTOR_PUSHes start filling it in again from the beginning. It seems to me that this way we're reusing the memory and reducing churn.
That's what I suspected, but thanks for confirming. I wonder if an MVM_ATOMIC_VECTOR would be useful in more places than just here?
Trouble is that lock-free dynamic arrays are vastly more complex beasts. There's one paper describing an approach, but it seems that design is vulnerable to the ABA problem, although that probably isn't a concern for the use case mentioned here.
Heaptrack reports ~11 million fewer calls to allocation functions, and CORE.c's stage parse seems to be about 1s faster. However, this is not safe: adding to the list with just a bare MVM_VECTOR_PUSH causes failures in random spectests. But wrapping it in
```c
uv_mutex_lock(&(tc->instance->mutex_free_at_safepoint));
/* ... */
uv_mutex_unlock(&(tc->instance->mutex_free_at_safepoint));
```
is incredibly slow. Is there a safe-but-faster way?