Conversation
Open brainstorming on whether we want to document which calls are collective and synchronizing.
On collective operations:
On the distinction between collective and synchronizing:
Our usage diverges from both terms:
Suggestion:
> A **collective** operation needs to be executed by *all* MPI ranks of the MPI communicator that was passed to ``openPMD::Series``.
> Contrarily, **independent** operations can also be called by a subset of these MPI ranks.
> A **synchronizing** operation will synchronize the MPI ranks that participate in it.
Suggested change for the last sentence:

> A **synchronizing** operation will synchronize the MPI ranks that participate in it. All synchronizing operations are collective.
EDIT: I think we should not use this term at all, see here.
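To make the distinction concrete, here is a minimal C++ sketch of a parallel write with openPMD-api (file name, dataset shape and the dataset-declaration step are illustrative assumptions; whether declaration and flushing are collective is backend-specific):

```cpp
#include <openPMD/openPMD.hpp>
#include <mpi.h>

#include <cstdint>
#include <vector>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Collective: every rank of the communicator passed here must open the Series.
    openPMD::Series series("data_%T.h5", openPMD::Access::CREATE, MPI_COMM_WORLD);

    auto E_x = series.iterations[0].meshes["E"]["x"];

    // Dataset declaration: backend-specific behavior, so the safe pattern is to
    // call it on all ranks with consistent arguments.
    std::uint64_t const localExtent = 10;
    openPMD::Extent const globalExtent{std::uint64_t(size) * localExtent};
    E_x.resetDataset(openPMD::Dataset(openPMD::Datatype::DOUBLE, globalExtent));

    // Independent: each rank writes only its own chunk; a subset of ranks may skip this.
    std::vector<double> local(localExtent, double(rank));
    openPMD::Offset const chunkOffset{std::uint64_t(rank) * localExtent};
    openPMD::Extent const chunkExtent{localExtent};
    E_x.storeChunk(local, chunkOffset, chunkExtent);

    // Collective: the flush must be executed by all ranks that opened the Series.
    series.flush();

    MPI_Finalize();
    return 0;
}
```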
Wouldn't "non-blocking collective" be the right description for our DetailsI think we just do not have as MPI has a "corresponding completion operation" (test/wait). (We kind of would need one generally, e.g., to ensure a checkpoint is done. So far I have closed, in doubt, to be sure. Of course, there is always the additional challenge that a consistent MPI view does not correspond to a consistent PFS view.) |
> | ``::makeConstant`` [3]_ | *backend-specific* | no | declare, write |
> | ``::storeChunk`` [1]_ | independent | no | write |
> | ``::loadChunk`` | independent | no | read |
> | ``::availableChunks`` [4]_ | independent | no | read, immediate result |
@franzpoeschel can you double check the details on availableChunks and potentially fix this doc line in a separate PR?
Or maybe even simpler: consistent data declaration?
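For reference while double-checking, a read-side sketch of how ``availableChunks`` and ``loadChunk`` typically go together (file name, iteration and dataset path are placeholders; whether ``availableChunks`` is really independent is exactly the open question above):

```cpp
#include <openPMD/openPMD.hpp>
#include <mpi.h>

#include <iostream>
#include <memory>
#include <vector>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    // Collective: all ranks open the Series for reading.
    openPMD::Series series(
        "data_%T.h5", openPMD::Access::READ_ONLY, MPI_COMM_WORLD);

    auto E_x = series.iterations[0].meshes["E"]["x"];

    // Listed above as independent with an immediate result: the chunks that
    // the backend reports as actually written.
    auto chunks = E_x.availableChunks();

    // Enqueue a read per chunk (here every rank reads everything, so that the
    // collective flush below is called consistently on all ranks).
    std::vector<std::shared_ptr<double>> buffers;
    for (auto const &chunk : chunks)
        buffers.push_back(E_x.loadChunk<double>(chunk.offset, chunk.extent));

    // Collective: the flush performs the enqueued reads.
    series.flush();

    if (!buffers.empty())
        std::cout << "first value: " << buffers.front().get()[0] << std::endl;

    MPI_Finalize();
    return 0;
}
```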
The wording loosely follows the MPI standard, section 2.4:
https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf