CFD Online Discussion Forums
Main CFD Forum: Point-to-point communication with MPI
https://www.cfd-online.com/Forums/main/145520-point-point-communication-mpi.html

Joachim December 7, 2014 12:11

Point-to-point communication with MPI
 
Hey everyone!

I have a basic question regarding the best way to transfer data between multiple blocks using MPI. To make it simple, let's assume the following:

1. you have 9 blocks/processors (3x3 domain)

2. each block must send an integer to each of its neighbors (the IDs of the neighbors are known)

How do you do that properly? Right now, I take each face one after another and do the following (roughly as in the sketch below):

- mpi_isend to the other block to send my integer
- mpi_irecv from that same block to receive its integer
- mpi_wait to make sure the integer has been received.
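
Roughly, a minimal sketch of what I mean (the names n_faces, neighbour(:), my_id and their_id(:) are just placeholders, not my actual code):

Code:

! one face at a time: non-blocking send/receive, then wait for both
! (my_id, n_faces, neighbour(:) and their_id(:) are assumed to be set up elsewhere)
use mpi
integer :: n, ierr, req_send, req_recv
integer :: stat(mpi_status_size)

do n = 1, n_faces
  ! send my integer to the neighbour on face n
  call mpi_isend( my_id, 1, mpi_integer, neighbour(n), 11, mpi_comm_world, req_send, ierr )
  ! receive the neighbour's integer
  call mpi_irecv( their_id(n), 1, mpi_integer, neighbour(n), 11, mpi_comm_world, req_recv, ierr )
  ! complete both operations before moving on to the next face
  call mpi_wait( req_send, stat, ierr )
  call mpi_wait( req_recv, stat, ierr )
enddo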

Now that I think about it, this method could hang under certain conditions. Is there a better way to do it? Should I first post the sends from all blocks using mpi_isend, and only then receive everything using mpi_recv?

Thank you very much for your help!

Joachim

valgrinda December 8, 2014 09:12

MPI
 
Hi Joachim,

Your approach sounds good, and it shouldn't hang if it is implemented properly. If this is the only MPI communication you have and performance is not an issue, you could also use MPI_Alltoall and simply discard the non-neighbor values.
Here is a link to a very good and compact manual about MPI:
http://www.ia.pw.edu.pl/~ens/epnm/mpi_course.pdf

Your problem is described in the pseudo code on page 21 for the non-blocking send/receive implementation. The different types of collective communication can be found on page 42, including MPI_Alltoall.
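
Just to illustrate the MPI_Alltoall variant, a rough sketch (not tested; every rank contributes one integer per rank, and you simply ignore the entries that do not come from a neighbor):

Code:

! rough sketch of the MPI_Alltoall variant
use mpi
integer :: ierr, nprocs, myrank
integer, allocatable :: sendbuf(:), recvbuf(:)

call mpi_comm_size(mpi_comm_world, nprocs, ierr)
call mpi_comm_rank(mpi_comm_world, myrank, ierr)

allocate(sendbuf(nprocs), recvbuf(nprocs))
sendbuf = myrank   ! the integer each block wants to distribute (here simply its rank)

! every rank sends one integer to every rank and receives one from every rank
call mpi_alltoall( sendbuf, 1, mpi_integer, recvbuf, 1, mpi_integer, mpi_comm_world, ierr )

! recvbuf(k) now holds the integer sent by rank k-1; discard the entries
! that do not correspond to a neighboring block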

Regards
Hans

Joachim December 8, 2014 09:47

Thank you for your answer, Hans! The pdf is very useful!

However, I am using this for the boundary-condition treatment in my CFD code, so I am not sure MPI_Alltoall is the right choice in this case.

What I was thinking is that if I have, let's say, 4 blocks (2x2):

1. block 0 sends data to 1, wait for 1
2. block 1 sends data to 2, wait for 2
3. block 3 sends data to...

Does that really guarantee that block 1 will eventually send its data to 0, for example? Couldn't the code hang in some cases? The way I am doing it now is basically a non-blocking send followed by a blocking receive...

Joachim December 8, 2014 10:19

I read the pdf you sent me in more detail. Would this code be alright, in your opinion? :D

Code:

! non-blocking send (integer i)
do n = 1, block%n_patches
  call mpi_isend( i, 1, mpi_integer, patch(n)%block2, 11, mpi_comm_world, isend(n), ierr )
enddo

! non-blocking receive (one entry of the array j per patch,
! so the incoming messages do not overwrite each other)
do n = 1, block%n_patches
  call mpi_irecv( j(n), 1, mpi_integer, patch(n)%block2, 11, mpi_comm_world, irecv(n), ierr )
enddo

! do other stuff, without touching either i or j
....

! make sure that the transfers are complete
! (statmpi is dimensioned (mpi_status_size, block%n_patches))
call mpi_waitall(block%n_patches, isend, statmpi, ierr)
call mpi_waitall(block%n_patches, irecv, statmpi, ierr)

I only have two questions left:

1. is it ok to use the same tag for different mpi_isend calls (to different processors, though)?
2. is the mpi_waitall necessary for the isends?

Thank you so much for your help!

Joachim

valgrinda December 8, 2014 12:01

Re: MPI
 
Hi Joachim,

I would write the code like this:

! non-blocking send (integer i) and receive (one entry of j per patch)
do n = 1, block%n_patches
  call mpi_isend( i, 1, mpi_integer, patch(n)%block2, 11, mpi_comm_world, isend(n), ierr )
  call mpi_irecv( j(n), 1, mpi_integer, patch(n)%block2, 11, mpi_comm_world, irecv(n), ierr )
enddo

! make sure that the transfer is complete (C-style pseudo code)
MPI_Wait(&send_request,&status);
MPI_Wait(&recv_request,&status);


! do other stuff
....


To your questions:
1. Using the same tag for sends to different processors is fine: a message is matched by source/destination, communicator and tag together. Tags are mainly an additional help for the user to distinguish different types of data.
2. The wait is necessary for the isends as well; otherwise you cannot be sure a send has completed before you reuse its buffer. As you can see, I would use the basic Wait function and test each of the send/recv requests separately.

Regards
Hans

Joachim December 8, 2014 12:17

Thank you Hans! I really appreciate your help!

What is the advantage of checking the isend/irecv separately (wait instead of waitall)?
Also, in Fortran, would you have something like this instead?

Code:

! make sure that the transfer is complete
! (statsend/statrecv dimensioned (mpi_status_size, block%n_patches))
do n = 1, block%n_patches
  call mpi_wait( isend(n), statsend(:,n), ierr )
  call mpi_wait( irecv(n), statrecv(:,n), ierr )
enddo

Isn't that what waitall is doing anyway?

Joachim December 9, 2014 08:46

OK, I played with the code and everything seems to be working just fine.

One more question now! For the real case, I do not want to send an integer but an array, which could have a different size for each face. My question: will this code work?

Code:

! non-blocking send and receive
do n = 1, block%n_patches

  ! allocate temporary array to concatenate the data to be sent
  allocate(tempS(sizeS))
  tempS = ...

  ! send data
  call mpi_isend( tempS, sizeS, mpi_double_precision, patch(n)%block2, 11, mpi_comm_world, isend(n), ierr )

  ! allocate temporary array to receive data
  allocate(tempR(sizeR))

  ! receive the data
  call mpi_irecv( tempR, sizeR, mpi_double_precision, patch(n)%block2, 11,  mpi_comm_world, irecv(n), ierr )

  ! do whatever with it
  lala = tempR

  ! deallocate temporary arrays
  deallocate (tempS, tempR)

enddo

! wait for transfer to be complete
call mpi_wait...

Can I deallocate the temporary arrays before the transfers are complete?

Thanks!

Joachim


---------------------------------
EDIT: hmm, I just thought about it, and it won't work, since tempR has not necessarily been received yet when I do lala = tempR.
Does that mean that I have to allocate all the temporary arrays first and deallocate them only after the wait statements?
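
Something like this, maybe? (a rough sketch only; the per-patch buf_send/buf_recv buffers and the size fields are placeholder names I just made up)

Code:

! rough sketch: keep one send and one receive buffer per patch alive
! until all the transfers have completed
do n = 1, block%n_patches
  allocate(patch(n)%buf_send(patch(n)%size_send))
  allocate(patch(n)%buf_recv(patch(n)%size_recv))
  patch(n)%buf_send = 0.0d0   ! pack the data to be sent here (placeholder)

  call mpi_isend( patch(n)%buf_send, patch(n)%size_send, mpi_double_precision, &
                  patch(n)%block2, 11, mpi_comm_world, isend(n), ierr )
  call mpi_irecv( patch(n)%buf_recv, patch(n)%size_recv, mpi_double_precision, &
                  patch(n)%block2, 11, mpi_comm_world, irecv(n), ierr )
enddo

! wait for all transfers to complete before touching or freeing any buffer
call mpi_waitall(block%n_patches, isend, statmpi, ierr)
call mpi_waitall(block%n_patches, irecv, statmpi, ierr)

! only now use the received data and free the buffers
do n = 1, block%n_patches
  ! ... unpack patch(n)%buf_recv into the solution arrays ...
  deallocate(patch(n)%buf_send, patch(n)%buf_recv)
enddo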

