In the next webpage, we will study the (more difficult) NON-blocking send and receive.
Function Name | Usage |
---|---|
MPI_Send(void *buff, int count, MPI_Datatype type, int dest, int tag, MPI_Comm comm) | Send a point-to-point message to process dest in the communication group comm. The message is stored at memory location buff and consists of count items of datatype type. The message is tagged with the tag value tag. MPI_Send() returns only when the message has been copied out of buff (delivered to the destination or buffered by the MPI system), so it is safe to reuse the buffer buff right away. |
MPI_Recv(void *buff, int count, MPI_Datatype type, int source, int tag, MPI_Comm comm, MPI_Status *status) | Receive a point-to-point message. The message MUST BE from the process source in the communication group comm AND MUST BE tagged with the tag value tag. The received message is stored at memory location buff, which must have space for count items of datatype type. When the function returns, information about the received message (sender, tag, error code) is stored in status; see the table of MPI_Status fields below. In most cases you know the structure of the data received and can ignore the status value: pass MPI_STATUS_IGNORE as the status parameter and MPI will not return the status information. MPI_Recv() returns only when the desired message (from source with tag tag) has been received, or it exits with an error code. |
#include "mpi.h" int main(int argc, char **argv) { char reply[100]; char buff[128]; int numprocs; int myid; int i; MPI_Status stat; MPI_Init(&argc,&argv); MPI_Comm_size(MPI_COMM_WORLD,&numprocs); MPI_Comm_rank(MPI_COMM_WORLD,&myid); /* ----------------------------------------- Master process ----------------------------------------- */ if(myid == 0) { printf("WE have %d processors\n", numprocs); /* ----------------------------------------- Master process: send msg with tag 1234 ----------------------------------------- */ for(i=1;i < numprocs;i++) { sprintf(buff, "Hello %d", i); MPI_Send(buff, 128, MPI_CHAR, i, 1234, MPI_COMM_WORLD); } /* --------------------------------------------- Master process: wait for msg with tag 4444 --------------------------------------------- */ for(i=1;i < numprocs;i++) { MPI_Recv(buff, 128, MPI_CHAR, i, 4444, MPI_COMM_WORLD, &stat); cout << buff << endl; } } else /* ----------------------------------------- Slave process: receive msg with tag 1234 ----------------------------------------- */ { /* ----------------------------------------- Slave process: receive msg with tag 1234 ----------------------------------------- */ MPI_Recv(buff, 128, MPI_CHAR, 0, 1234, MPI_COMM_WORLD, &stat); sprintf(reply, " |--> Hello 0, Processor %d is present and accounted for !", myid); strcat(buff, reply); /* -------------------------------------------- Slave process: send back msg with tag 4444 -------------------------------------------- */ MPI_Send(buff, 128, MPI_CHAR, 0, 4444, MPI_COMM_WORLD); } MPI_Finalize(); } |
To compile the program:
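A typical invocation with the standard MPI wrapper compilers (hello.cc is a placeholder file name; the wrapper name may differ on your MPI installation):

```sh
mpic++ -o hello hello.cc      # or mpiCC / mpicxx, depending on the MPI installation
```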
To run the program:
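Again a typical example, starting 4 processes (the process count here is arbitrary):

```sh
mpirun -np 4 ./hello
```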
Name of field | Usage |
---|---|
MPI_SOURCE | id of processor sending the message (integer) |
MPI_TAG | tag of the message (integer) |
MPI_ERROR | error code (integer) |
MPI_Send(buff, N, TYPE, dest, tag, comm);

Effect of the function: the N consecutive items of datatype TYPE stored starting at address buff are sent, tagged with the value tag, to process dest in the communication group comm. The datatype TYPE must be one of the MPI symbolic constants below, matching the C/C++ type of the items in buff:
C/C++ Type | MPI symbolic constant |
---|---|
char | MPI_CHAR |
int | MPI_INT |
float | MPI_FLOAT |
double | MPI_DOUBLE |
MPI_Recv(buff, N, TYPE, source, tag, comm, status);

Effect of the function: the call waits for a message from process source with tag value tag in the communication group comm; the received items of datatype TYPE (at most N of them) are stored starting at address buff, and information about the message is recorded in status.
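As a small illustration of how the count and datatype arguments pair up, here is a sketch (my own example, not from the course page; it assumes at least 2 processes and uses an arbitrary tag value of 99) that sends an array of 5 doubles from process 0 to process 1:

```cpp
#include "mpi.h"
#include <iostream>
using namespace std;

int main(int argc, char **argv)
{
   int    myid;
   double data[5];                    // buffer for 5 items of datatype MPI_DOUBLE

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &myid);

   if (myid == 0)
   {
      for (int i = 0; i < 5; i++)     // fill the buffer ...
         data[i] = 1.1 * i;
      MPI_Send(data, 5, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD);   // ... and send all 5 doubles to process 1
   }
   else if (myid == 1)
   {
      MPI_Status stat;
      MPI_Recv(data, 5, MPI_DOUBLE, 0, 99, MPI_COMM_WORLD, &stat);
      for (int i = 0; i < 5; i++)
         cout << data[i] << " ";
      cout << endl;
   }

   MPI_Finalize();
}
```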
```cpp
#include "mpi.h"
#include <iostream>
using namespace std;

int main(int argc, char **argv)
{
   char in[4];          // Send 4 characters
   int  out;            // Interpret them as an integer
   int  numprocs;
   int  myid;

   MPI_Init(&argc, &argv);
   MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
   MPI_Comm_rank(MPI_COMM_WORLD, &myid);

   if (myid == 0)
   {
      cout << "We have " << numprocs << " processors" << endl;

      /* Receive the 4 bytes sent by process 1, but as a single int */
      MPI_Recv(&out, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      cout << "Received this number from proc 1: " << out << endl;
   }
   else if (myid == 1)
   {
      in[0] = '2';      // ASCII code 50 goes into the lowest-order byte
      in[1] = 1;
      in[2] = 0;
      in[3] = 0;
      MPI_Send(in, 4, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
   }

   MPI_Finalize();
}
```
Result: the four bytes { 50, 1, 0, 0 } sent by process 1 ('2' has ASCII code 50) are interpreted by process 0 as one little-endian int, namely 50 + 1*256 = 306, so process 0 prints "Received this number from proc 1: 306".
(Note: puma is an Intel-based machine and uses little-endian byte order; that is why the '2' must be put in the first (lowest-order) byte.)
To compile and run the program, use the same commands as for the first example (with this source file's name).
Example: approximating Pi by numerical integration. Since the integral of 2/sqrt(1 - x*x) from 0 to 1 equals Pi, each process applies the mid-point rule to its own share of the N sub-intervals and process 0 adds up the partial sums:
```cpp
#include "mpi.h"
#include <iostream>
#include <cmath>
#include <cstdlib>
using namespace std;

double f(double a)
{
   return( 2.0 / sqrt(1.0 - a*a) );
}

int main(int argc, char *argv[])
{
   int    N;                    // Number of intervals
   double w, x;                 // Interval width and sample point
   int    i, myid, num_procs;
   double mypi, others_pi;

   MPI_Init(&argc, &argv);                       // Initialize
   MPI_Comm_size(MPI_COMM_WORLD, &num_procs);    // Get # processors
   MPI_Comm_rank(MPI_COMM_WORLD, &myid);

   N = atoi(argv[1]);
   w = 1.0/(double) N;

   mypi = 0.0;
   for (i = myid; i < N; i = i + num_procs)      // Each process handles every num_procs-th interval
   {
      x = w*(i + 0.5);
      mypi = mypi + w*f(x);
   }

   /* ----------------------------------------------------
      Proc 0 collects and others send data to proc 0
      ---------------------------------------------------- */
   if ( myid == 0 )
   {
      for (i = 1; i < num_procs; i++)
      {
         MPI_Recv(&others_pi, 1, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
         mypi += others_pi;
      }
      cout << "Pi = " << mypi << endl << endl;
   }
   else
   {
      MPI_Send(&mypi, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
   }

   MPI_Finalize();
}
```
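Note how the work is divided: process myid handles sub-intervals myid, myid + num_procs, myid + 2*num_procs, ..., so the N sub-intervals are dealt out to the num_procs processes in round-robin fashion, and each process accumulates its partial sum in its own local variable mypi.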
Memory is NOT shared by different MPI processes. In other words: every process has its own private copy of every variable (each process above has its own mypi), and the only way to get a value from one process to another is to transmit it explicitly with MPI_Send() and MPI_Recv(), as the program above does. A small sketch of this is given below.
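A minimal sketch (my own illustration, not from the course page): each process changes only its own copy of x, and no other process ever sees that change unless the value is sent explicitly.

```cpp
#include "mpi.h"
#include <iostream>
using namespace std;

int main(int argc, char **argv)
{
   int myid;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &myid);

   int x = 0;           // every process has its own x, initialized to 0

   if (myid == 1)
      x = 999;          // only process 1's copy of x changes

   /* Every process prints its own copy:
      process 1 prints 999, all other processes still print 0 */
   cout << "Process " << myid << ": x = " << x << endl;

   MPI_Finalize();
}
```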
To compile and run the program, use the same commands as for the first example (with this source file's name).
MPI_Recv(buff, count, datatype, source, tag, comm, status);

The source and tag arguments may be the wildcards MPI_ANY_SOURCE and MPI_ANY_TAG, which match a message from any sender and/or with any tag:

MPI_Recv(buff, count, type, MPI_ANY_SOURCE, MPI_ANY_TAG, comm, status);

After such a receive, the status variable tells you what actually arrived: status.MPI_SOURCE holds the rank of the sender and status.MPI_TAG holds the tag of the message.
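A small sketch of a wildcard receive (my own example; it assumes at least 2 processes, a single-int message, and an arbitrary tag value of 7):

```cpp
#include "mpi.h"
#include <iostream>
using namespace std;

int main(int argc, char **argv)
{
   int myid, value;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &myid);

   if (myid == 0)
   {
      MPI_Status status;
      /* Accept one int from ANY sender with ANY tag */
      MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
      cout << "Got " << value
           << " from process " << status.MPI_SOURCE      // actual sender
           << " with tag " << status.MPI_TAG << endl;    // actual tag
   }
   else if (myid == 1)
   {
      value = 42;
      MPI_Send(&value, 1, MPI_INT, 0, 7, MPI_COMM_WORLD);   // tag 7, matched by MPI_ANY_TAG
   }

   MPI_Finalize();
}
```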
MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);

The MPI_Probe() function blocks a process until a message with a matching source and a matching tag has arrived.
The MPI_Probe() function does not receive the message.
Instead, it fills in a status variable containing all information about the pending message (its source, its tag and, via MPI_Get_count(), its size). This is useful when the receiver does not know the message length in advance and must allocate a buffer before calling MPI_Recv(), as in the following example:
```cpp
MPI_Status status;
int   nbytes;
char *buff;

/* Wait until some message is pending (from any source, with any tag) */
MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);

/* Find out how many chars the pending message contains */
MPI_Get_count(&status, MPI_CHAR, &nbytes);

if ( nbytes != MPI_UNDEFINED )
   buff = (char *) malloc( nbytes );    // Allocate buffer to receive the message (needs <cstdlib>)

/* Now actually receive the message that was probed */
MPI_Recv(buff, nbytes, MPI_CHAR, status.MPI_SOURCE, status.MPI_TAG, MPI_COMM_WORLD, &status);
```
To compile and run the program, use the same commands as for the first example (with this source file's name).