Do not initialize the pinned mdarray at construction time #2478
Conversation
The problem is encountered in rapidsai/cuvs#261, which uses a custom container as a workaround at the moment.
I'm pretty sure I introduced pinned mdarrays with the mdbuffer PR. I was squeamish about this at the time but was trying to stick to "official" container types from Thrust/RMM. If we still don't have a good pinned uvector type, though, I'm definitely in support of this change.
Would we want to do the allocation through RMM's pinned memory resource, or are we okay with a raw cudaMallocHost here?
Thanks, that's a good point! I've looked at the RMM resources and there were two pinned memory resources, so I was not sure which one to use. But perhaps more importantly, they have stream semantics in their API, which is a bit misleading in the context of pinned arrays: these are created on the host and do not offer any stream ordering.
It's also worth noting that we don't need the
Good question on which pinned memory resource to use. As near as I can tell, the only difference is that `pinned_host_memory_resource` uses `cudaHostAlloc`, while `pinned_memory_resource` uses `cudaMallocHost`. For modern platforms, I believe we should prefer the `cudaHostAlloc` version. In terms of stream ordering, I'm not really sure why we pass a stream there; maybe just for API compatibility with other memory resources?
Regardless, I'd definitely support merging as is. How we perform the allocation is an implementation detail we can revisit later unless we have easy answers right now.
/merge
`thrust::host_vector` initializes its elements at creation and requires the element type to be default-constructible. This translates to `raft::pinned_mdarray` and makes the mdarray unusable for non-default-constructible element types, like `cuda::atomic<>` (and many user-defined types). This is at odds with all other mdarray types in raft, which are based on `rmm::device_uvector` and are not initialized at construction time. The PR changes the underlying container to a plain pointer plus `cudaMallocHost`/`cudaFreeHost`.
Breaking change: if anyone relies on the `pinned_mdarray` to initialize itself, their code will break (but mdarrays should not initialize at construction in raft anyway). The affected classes have different private members now, so the ABI changes as well.
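The idea behind the change can be sketched in plain C++. The container below is hypothetical (names are not raft's actual code), and it uses `std::malloc`/`std::free` where raft's container would call `cudaMallocHost`/`cudaFreeHost`; the key property is that no element is ever value-initialized, so the element type need not be default-constructible:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <new>
#include <utility>

// Hypothetical sketch: own raw storage and never construct elements at
// allocation time. Elements are constructed explicitly, in place, only
// when the caller needs them. (Element destructors are not run here, so
// this simple version suits trivially destructible types.)
template <typename T>
class uninitialized_buffer {
 public:
  explicit uninitialized_buffer(std::size_t n)
    : size_{n}, data_{static_cast<T*>(std::malloc(n * sizeof(T)))} {
    if (n != 0 && data_ == nullptr) { throw std::bad_alloc{}; }
  }
  ~uninitialized_buffer() { std::free(data_); }

  uninitialized_buffer(const uninitialized_buffer&) = delete;
  uninitialized_buffer& operator=(const uninitialized_buffer&) = delete;

  T* data() noexcept { return data_; }
  std::size_t size() const noexcept { return size_; }

  // Construct element i in place with the given arguments.
  template <typename... Args>
  T* construct_at(std::size_t i, Args&&... args) {
    return ::new (static_cast<void*>(data_ + i))
        T(std::forward<Args>(args)...);
  }

 private:
  std::size_t size_;
  T* data_;
};
```

With this shape, a type with no default constructor is usable, which is exactly what `thrust::host_vector` disallows:

```cpp
struct no_default { explicit no_default(int v) : value{v} {} int value; };
uninitialized_buffer<no_default> buf{4};  // compiles; nothing constructed
buf.construct_at(0, 42);
```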