Message Passing Interface - a standard and highly portable parallel programming interface for distributed memory systems.
Questions tagged [mpi]
106 questions
9 votes · 4 answers
Can MPI messages be prioritized?
As far as I understand, the order in which non-blocking point-to-point MPI messages (Isend and Irecv) are received is consistent with the order in which they are sent. Are there any techniques for giving certain messages priority over others?
For…
Matthew Emmett (2,076)
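MPI itself has no priority field on messages, so any scheme must be layered on top. A minimal sketch of one common workaround, assuming two hypothetical tag values: reserve a tag for urgent traffic and probe for it first. Ordering is only guaranteed per (source, tag, communicator), so the two classes stay independent.

/* Sketch: emulating two-level message priority with tags.
 * TAG_URGENT and TAG_NORMAL are hypothetical application tags. */
#include <mpi.h>

#define TAG_URGENT 1  /* hypothetical tag for high-priority messages */
#define TAG_NORMAL 2  /* hypothetical tag for everything else */

void receive_by_priority(MPI_Comm comm, int *buf, int count) {
    int flag;
    MPI_Status status;
    /* Probe for urgent traffic first ... */
    MPI_Iprobe(MPI_ANY_SOURCE, TAG_URGENT, comm, &flag, &status);
    if (!flag)  /* ... and fall back to normal traffic. */
        MPI_Iprobe(MPI_ANY_SOURCE, TAG_NORMAL, comm, &flag, &status);
    if (flag)
        MPI_Recv(buf, count, MPI_INT, status.MPI_SOURCE,
                 status.MPI_TAG, comm, MPI_STATUS_IGNORE);
}

Splitting the classes across separate communicators works the same way and additionally isolates the tag spaces.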
8 votes · 4 answers
What is an efficient way of notifying MPI processes that they have messages to receive?
In MPI, is there any built-in mechanism to notify a group of processes that they need to receive messages from other processes?
In my application every process needs to send data to a group of processes with known rank IDs (which potentially…
mmirzadeh (1,435)
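For context, the building block most answers to this kind of question lean on is probing for unexpected messages: the receiver learns both the sender and the message size from the probe, so no separate notification is needed. A minimal sketch, with a hypothetical application tag:

/* Sketch: discovering incoming messages without prior notification
 * by probing MPI_ANY_SOURCE; the size comes from the probe status. */
#include <mpi.h>
#include <stdlib.h>

#define TAG_DATA 42  /* hypothetical application tag */

void poll_for_messages(MPI_Comm comm) {
    int flag;
    MPI_Status status;
    MPI_Iprobe(MPI_ANY_SOURCE, TAG_DATA, comm, &flag, &status);
    if (flag) {
        int count;
        MPI_Get_count(&status, MPI_DOUBLE, &count);
        double *buf = malloc(count * sizeof *buf);
        MPI_Recv(buf, count, MPI_DOUBLE, status.MPI_SOURCE,
                 TAG_DATA, comm, MPI_STATUS_IGNORE);
        /* ... process buf ... */
        free(buf);
    }
}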
8 votes · 2 answers
Nonblocking version of MPI_Barrier in MPI 2
I have a bunch of MPI processes exchanging request messages back and forth. Processes do not know which other processes will send them messages, or how many. Given this situation, I want an efficient way to know whether all other processes…
Geoffrey Irving (3,969)
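MPI-3 later standardized exactly this as MPI_Ibarrier. A minimal sketch of the consensus pattern it enables, assuming an MPI-3 library and a hypothetical hook for servicing incoming requests: each process enters the barrier once its own sends are done, then keeps serving requests until everyone has entered.

/* Sketch of the NBX-style termination pattern, assuming MPI-3. */
#include <mpi.h>

void wait_for_all_done(MPI_Comm comm) {
    MPI_Request barrier;
    int done = 0;
    MPI_Ibarrier(comm, &barrier);  /* signal: my own sends are finished */
    while (!done) {
        /* service_incoming_requests();  (hypothetical application hook) */
        MPI_Test(&barrier, &done, MPI_STATUS_IGNORE);
    }
}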
6 votes · 2 answers
Simulating the MPI_Isend/Irecv/Wait model with one-sided communication
I have an MPI computation with the following structure: each processor has a large region of read-only memory divided into chunks. During a compute epoch, each processor performs a (different) number of steps of the form:
Gather several chunks of…
Geoffrey Irving (3,969)
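A sketch of the one-sided building block such a restructuring would rest on, assuming the chunks have a hypothetical fixed size and are exposed through an RMA window created with disp_unit equal to sizeof(double); a passive-target MPI_Get plays the role of the Isend/Irecv/Wait triple:

/* Sketch: fetching a remote read-only chunk with passive-target RMA.
 * CHUNK_DOUBLES and the chunk layout are hypothetical. */
#include <mpi.h>

#define CHUNK_DOUBLES 1024  /* hypothetical chunk size */

void fetch_chunk(MPI_Win win, int owner, int chunk_index, double *dest) {
    MPI_Win_lock(MPI_LOCK_SHARED, owner, 0, win);
    MPI_Get(dest, CHUNK_DOUBLES, MPI_DOUBLE, owner,
            (MPI_Aint)chunk_index * CHUNK_DOUBLES,  /* disp_unit = sizeof(double) */
            CHUNK_DOUBLES, MPI_DOUBLE, win);
    MPI_Win_unlock(owner, win);  /* completes the MPI_Get */
}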
4 votes · 0 answers
Can I MPI_Test for an Ibarrier?
In this code, all processes post a barrier, sleep a while for good measure, then first Test and then Wait for the barrier. The Test says no and the wait succeeds. A test should be like a non-blocking wait, so it should succeed. What am I missing?
…
Victor Eijkhout (1,330)
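For reference, a minimal self-contained version of the observation. Per the standard, a single MPI_Test is only obliged to make progress, not to complete the operation, so reporting "not complete" once is legal; the portable pattern is to test in a loop.

/* Sketch: one MPI_Test after MPI_Ibarrier may legitimately say "no";
 * repeated testing is what guarantees eventual completion. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    MPI_Request req;
    int flag = 0;
    MPI_Ibarrier(MPI_COMM_WORLD, &req);
    MPI_Test(&req, &flag, MPI_STATUS_IGNORE);  /* may report 0 */
    printf("first test: %d\n", flag);
    while (!flag)  /* loop until the barrier completes */
        MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
    MPI_Finalize();
    return 0;
}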
4 votes · 2 answers
How does one write MPI-implementation-independent code?
Specifically, how does one go about writing code that works with, say, both MPICH and OpenMPI?
I am currently in the process of cleaning up the build scripts and code for a distributed-memory stochastic mixed-integer linear programming package we…
Geoff Oxberry (30,394)
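One portable tactic is to code strictly to the MPI standard and fence off any vendor-specific branch behind the macros each mpi.h is known to define (OPEN_MPI for Open MPI, MPICH_VERSION for MPICH). A sketch, assuming an MPI-3 library for MPI_Get_library_version:

/* Sketch: identifying the MPI library portably at run time, with
 * compile-time branches isolated behind implementation macros. */
#include <mpi.h>
#include <stdio.h>

void report_mpi_library(void) {
    char version[MPI_MAX_LIBRARY_VERSION_STRING];
    int len;
    MPI_Get_library_version(version, &len);  /* MPI-3 */
    printf("MPI library: %s\n", version);
#if defined(OPEN_MPI)
    /* Open MPI-specific workaround would go here. */
#elif defined(MPICH_VERSION)
    /* MPICH-specific workaround would go here. */
#endif
}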
3 votes · 3 answers
Writing a parallel version of an algorithm of which only some parts are worth distributing
Say I wanted to parallelise an algorithm. Only certain parts of this algorithm can be parallelised; the other parts are trivial and not worth distributing. Do I do all of these trivial calculations on my master node and then broadcast the data to…
HMPARTICLE (147)
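A minimal sketch of the pattern the question describes, with hypothetical compute_serial_part and expensive_kernel hooks: rank 0 does the cheap serial work, MPI_Bcast ships the result, and every rank joins in for the expensive part.

/* Sketch: serial work on rank 0, broadcast, then parallel work. */
#include <mpi.h>

void run_step(double *params, int nparams, MPI_Comm comm) {
    int rank;
    MPI_Comm_rank(comm, &rank);
    if (rank == 0) {
        /* trivial part, not worth distributing:
           compute_serial_part(params, nparams);  (hypothetical) */
    }
    MPI_Bcast(params, nparams, MPI_DOUBLE, 0, comm);
    /* expensive_kernel(params, nparams);  (hypothetical, every rank) */
}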
3 votes · 3 answers
Is busy waiting on both MPI_Iprobe and MPI_Testsome efficient?
I have an MPI application that needs to asynchronously respond to both incoming messages and request completions inside a dedicated communication thread. The obvious way to do this is a busy wait that alternately calls MPI_Iprobe and MPI_Testsome. …
Geoffrey Irving (3,969)
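For context, a minimal sketch of the loop being described, assuming the pending sends live in reqs[0..n-1] and a hypothetical tag for incoming requests:

/* Sketch: busy wait alternating MPI_Iprobe (new arrivals) with
 * MPI_Testsome (retiring completed sends). */
#include <mpi.h>
#include <stdlib.h>

#define TAG_REQ 7  /* hypothetical tag for incoming requests */

void comm_loop(MPI_Request *reqs, int n, MPI_Comm comm,
               volatile int *stop) {
    int *indices = malloc(n * sizeof *indices);
    while (!*stop) {
        int flag, outcount;
        MPI_Status status;
        MPI_Iprobe(MPI_ANY_SOURCE, TAG_REQ, comm, &flag, &status);
        if (flag) {
            /* handle_message(&status);  (hypothetical handler) */
        }
        MPI_Testsome(n, reqs, &outcount, indices, MPI_STATUSES_IGNORE);
    }
    free(indices);
}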
3 votes · 0 answers
What is the best way to stop an asynchronous-heterogeneous MPI program?
I want to immediately stop an MPI program that has N processors, each running a different algorithm (with different computational complexities).
Processors share information with each other using a unidirectional token-ring topology. As…
user5244097 (31)
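For reference, the blunt but standard mechanism for immediate shutdown is MPI_Abort: any rank that detects the termination condition can tear down the whole job, rather than waiting for a token to complete a lap of the ring. A one-call sketch:

/* Sketch: MPI_Abort terminates every process in comm's job,
 * not just the caller. */
#include <mpi.h>

void stop_everything(MPI_Comm comm, int errorcode) {
    MPI_Abort(comm, errorcode);
}

The trade-off is that MPI_Abort gives the other ranks no chance to flush results or clean up, which is why cooperative schemes (like the token ring above) exist at all.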
2 votes · 1 answer
Passing an `MPI_Request*` to the send/recv functions
Below are two ways of writing what is seemingly (to me at least) exactly the same thing:
void do_some_work(MPI_Request* send_reqs, int* send_counter) {
    for (int i = 0; i < someNumber; ++i) {
        // working version
        MPI_Request req =…
Gilles Poncelet (103)
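For context, a sketch of the two forms being compared, with hypothetical buffer, destination, and tag. Both are legal: a request handle is an opaque value, so copying it into the array is fine as long as only one copy is later handed to MPI_Wait or MPI_Test.

/* Sketch: local request handle copied into the array, versus writing
 * straight into the array slot. someNumber, buf, dest, tag hypothetical. */
#include <mpi.h>

void post_sends(double *buf, int someNumber, MPI_Request *send_reqs,
                int *send_counter, MPI_Comm comm) {
    for (int i = 0; i < someNumber; ++i) {
        /* form 1: local handle, copied into the array afterwards */
        MPI_Request req;
        MPI_Isend(&buf[i], 1, MPI_DOUBLE, /*dest=*/1, /*tag=*/0,
                  comm, &req);
        send_reqs[(*send_counter)++] = req;

        /* form 2, equivalent: hand MPI_Isend the array slot directly
        MPI_Isend(&buf[i], 1, MPI_DOUBLE, 1, 0, comm,
                  &send_reqs[(*send_counter)++]);
        */
    }
}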
2 votes · 2 answers
Is it possible to configure Linux OpenMPI v1.6.5 to use multiple working directories on a single node?
I am running some quantum chemical calculations on a refurbished PowerEdge rack server, with dual quad-core Xeons, 16GB RAM, and a single 500GB SATA HDD. OS is vanilla Debian Wheezy. For one of the calculation types I'm running, I think I'm…
hBy2Py (123)
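Open MPI's mpirun does have a -wdir option for changing the working directory at launch, but a launcher-independent sketch is for each rank to chdir() into its own scratch directory right after MPI_Init; the path layout below is hypothetical.

/* Sketch: per-rank working directories without launcher support. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    char dir[64];
    snprintf(dir, sizeof dir, "/scratch/work.%d", rank);  /* hypothetical path */
    if (chdir(dir) != 0)
        perror("chdir");
    /* ... run the calculation in the per-rank directory ... */
    MPI_Finalize();
    return 0;
}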
1 vote · 2 answers
MapReduce with MPI question
I am doing an exercise using MPI to count the frequencies of words distributed across several different files, following steps similar to those in this instruction.
But I ran into a problem at step 2. In my implementation, I first sent out locally counted word-count…
user123 (679)
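A sketch of the combine step under a simplifying assumption that is not in the original question: if every rank indexes its tallies against the same fixed vocabulary, the per-word counts combine with a single element-wise MPI_Reduce.

/* Sketch: element-wise sum of per-rank word tallies onto rank 0.
 * VOCAB_SIZE and the shared vocabulary indexing are hypothetical. */
#include <mpi.h>

#define VOCAB_SIZE 10000  /* hypothetical shared vocabulary size */

void combine_counts(const int *local_counts, int *global_counts,
                    MPI_Comm comm) {
    MPI_Reduce(local_counts, global_counts, VOCAB_SIZE, MPI_INT,
               MPI_SUM, 0, comm);
}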
1 vote · 1 answer
Is there an implementation of MPI_AllReduce which handles sparse data better?
I need to synchronize intermediate solutions of an optimization problem solved distributively over a number of worker processors. The solution vector is known to be sparse.
I have noticed that if I use MPI_AllReduce, the performance is good compared…
Soumitra (11)
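One alternative expressible in standard MPI, sketched below: exchange only (index, value) pairs with MPI_Allgatherv and accumulate locally. A dense MPI_Allreduce moves O(n) data per rank regardless of sparsity, so this can win when the vectors are very sparse.

/* Sketch: sparse "allreduce" via allgather of nonzeros. */
#include <mpi.h>
#include <stdlib.h>

void sparse_allreduce(const int *idx, const double *val, int nnz,
                      double *dense_out, int n, MPI_Comm comm) {
    int size;
    MPI_Comm_size(comm, &size);
    int *counts = malloc(size * sizeof *counts);
    int *displs = malloc(size * sizeof *displs);
    MPI_Allgather(&nnz, 1, MPI_INT, counts, 1, MPI_INT, comm);
    int total = 0;
    for (int r = 0; r < size; ++r) { displs[r] = total; total += counts[r]; }
    int *all_idx = malloc(total * sizeof *all_idx);
    double *all_val = malloc(total * sizeof *all_val);
    MPI_Allgatherv(idx, nnz, MPI_INT, all_idx, counts, displs,
                   MPI_INT, comm);
    MPI_Allgatherv(val, nnz, MPI_DOUBLE, all_val, counts, displs,
                   MPI_DOUBLE, comm);
    for (int i = 0; i < n; ++i) dense_out[i] = 0.0;
    for (int i = 0; i < total; ++i) dense_out[all_idx[i]] += all_val[i];
    free(counts); free(displs); free(all_idx); free(all_val);
}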
1 vote · 1 answer
Gather data from processors in FFTW3 MPI
The parallel FFTW3 distributes different portions of a large data set across processors, so that each processor holds and manipulates only a small fraction of the data to be Fourier transformed.
I am wondering, at the end…
Katuru (19)
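For reference, a sketch of the gather step, assuming FFTW's default slab decomposition (each rank holds local_n0 contiguous rows of an n0 x n1 complex array, as reported by fftw_mpi_local_size_2d) and that the per-rank element counts fit in an int; MPI_Gatherv collects the full array on rank 0.

/* Sketch: collecting a slab-distributed FFTW MPI result on rank 0.
 * Requires linking against fftw3-mpi; "full" is significant only there. */
#include <fftw3-mpi.h>
#include <stdlib.h>

void gather_result(fftw_complex *local, ptrdiff_t n0, ptrdiff_t n1,
                   ptrdiff_t local_n0, fftw_complex *full, MPI_Comm comm) {
    int size, rank;
    MPI_Comm_size(comm, &size);
    MPI_Comm_rank(comm, &rank);
    int sendcount = (int)(local_n0 * n1);  /* assumed to fit in int */
    int *counts = NULL, *displs = NULL;
    if (rank == 0) {
        counts = malloc(size * sizeof *counts);
        displs = malloc(size * sizeof *displs);
    }
    MPI_Gather(&sendcount, 1, MPI_INT, counts, 1, MPI_INT, 0, comm);
    if (rank == 0) {
        int off = 0;
        for (int r = 0; r < size; ++r) { displs[r] = off; off += counts[r]; }
    }
    /* fftw_complex (double[2]) is layout-compatible with MPI_C_DOUBLE_COMPLEX */
    MPI_Gatherv(local, sendcount, MPI_C_DOUBLE_COMPLEX,
                full, counts, displs, MPI_C_DOUBLE_COMPLEX, 0, comm);
    if (rank == 0) { free(counts); free(displs); }
}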