
nvgetcommunicator returning pointer to MPI_Comm leads to complicated designs. #275

Closed
@bangerth

Description

For the deal.II wrappers of the N_Vector interface, we've had several lengthy discussions about how to deal with the ops.nvgetcommunicator callback. The basic problem is that this interface requires one to write a function that returns a pointer to a communicator -- which in turn requires that the communicator live at some memory location that can be accessed.
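
For reference, the callback slot in question is declared roughly along these lines in the generic N_Vector ops table (paraphrased; everything other than the member that matters here is elided):

struct _generic_N_Vector_Ops
{
   /* ...many other operations elided... */
   void *(*nvgetcommunicator)(N_Vector v);   /* expected to return a pointer usable as an MPI_Comm* */
   /* ... */
};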

But that's not always the case. Imagine a case where we're trying to wrap PETSc vectors. Then one might be tempted to write a function of this sort:

void * GetCommunicator (N_Vector v)
{
   // ...unpack 'v' to get at the underlying PETSc Vec 'petsc_vec'...
   MPI_Comm comm = PetscObjectComm(reinterpret_cast<PetscObject>(petsc_vec));
   return &comm;   // returns the address of a local variable!
}

...
v->ops->nvgetcommunicator = &GetCommunicator;

Except, of course, this doesn't work: we are now returning a pointer to a local variable. In other words, short of finding out where exactly PETSc happens to store the communicator internally, we don't know what address to return.

There needs to be a better design than this. MPI_Comm objects are designed to be passed around by value, and asking for a void * to a specific memory location cannot be satisfied without an unnecessarily large amount of extra bookkeeping.
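
One workaround -- and an example of the complicated designs the title alludes to -- is for the wrapper to keep its own copy of the communicator inside the vector's content, purely so that there is a stable address to hand back. A minimal sketch, assuming a hypothetical content struct WrapperContent that the wrapper attaches to v->content:

struct WrapperContent
{
   Vec      petsc_vec;   // the wrapped PETSc vector
   MPI_Comm comm;        // copy of PetscObjectComm(petsc_vec), stored only so nvgetcommunicator has something to point to
};

void * GetCommunicator (N_Vector v)
{
   WrapperContent *content = static_cast<WrapperContent *>(v->content);
   return &content->comm;   // this address stays valid for as long as 'v' lives
}

This works, but it duplicates information PETSc already owns and has to be kept consistent with it -- exactly the kind of unnecessary extra work described above. Returning the MPI_Comm by value would make the extra copy superfluous.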

@sebproell @stefanozampini FYI.
A pointer to one of the discussions we're having: dealii/dealii#15086 (comment)
