Point-to-point communication with MPI

#1, December 7, 2014, 12:11
Joachim (Senior Member, Paris, France)
Hey everyone!

I have a basic question regarding the best way to transfer data between multiple blocks using MPI. To make it simple, let's assume the following:

1. you have 9 blocks/processors (3x3 domain)

2. each block must send an integer to each of its neighbors (the IDs of the neighbors are known)

How do you do that properly? Right now, I take each face one after another, and I do:

- mpi_isend to the other block to send my integer
- mpi_irecv from this same block to receive its integer
- mpi_wait to make sure the integer has been received.
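
In code, the pattern looks roughly like this (just a sketch to show what I mean; ireq, jreq, istat and patch(n)%block2 are placeholder names):

Code:
! per-face exchange, one face at a time (sketch)
do n = 1, block%n_patches
  ! non-blocking send of my integer i to the neighbor of face n
  call mpi_isend( i, 1, mpi_integer, patch(n)%block2, 11, mpi_comm_world, ireq, ierr )
  ! non-blocking receive of that neighbor's integer j
  call mpi_irecv( j, 1, mpi_integer, patch(n)%block2, 11, mpi_comm_world, jreq, ierr )
  ! wait for both before moving on to the next face
  call mpi_wait( ireq, istat, ierr )
  call mpi_wait( jreq, istat, ierr )
enddo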

Now that I think about it, this method could hang under certain conditions. Is there a better way to do it? Should I first send the integers from all blocks using mpi_isend, and only then receive everything using mpi_recv?

Thank you very much for your help!

Joachim

#2, December 8, 2014, 09:12
Hans Bihs (Super Moderator, Trondheim, Norway)
Hi Joachim,

Your way sounds good, and it shouldn't hang if properly implemented. If this is the only MPI call you have and performance is not an issue, you could also use MPI_Alltoall and discard the non-neighbor values.
Here is a link to a very good and compact manual on MPI:
http://www.ia.pw.edu.pl/~ens/epnm/mpi_course.pdf

Your problem is described in the pseudo code on page 21 for the non-blocking send/receive implementation. The different types of collective communications can be found on page 42, including MPI_Alltoall.
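
For illustration, the MPI_Alltoall variant could look roughly like this (only a sketch; here every rank simply shares its own rank number):

Code:
program alltoall_sketch
  use mpi
  implicit none
  integer :: ierr, nprocs, myrank
  integer, allocatable :: sendbuf(:), all_ints(:)

  call mpi_init(ierr)
  call mpi_comm_size(mpi_comm_world, nprocs, ierr)
  call mpi_comm_rank(mpi_comm_world, myrank, ierr)

  allocate(sendbuf(nprocs), all_ints(nprocs))
  sendbuf = myrank              ! the same integer goes to every rank

  ! after the call, all_ints(k+1) holds the integer sent by rank k
  call mpi_alltoall(sendbuf, 1, mpi_integer, &
                    all_ints, 1, mpi_integer, mpi_comm_world, ierr)

  ! keep only the entries of the known neighbor ranks, discard the rest

  call mpi_finalize(ierr)
end program alltoall_sketch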

Regards
Hans

#3, December 8, 2014, 09:47
Joachim (Senior Member, Paris, France)
Thank you for your answer, Hans! The PDF is very useful!

However, I use this for the BC treatment in my CFD code, so I am not sure I should use MPI_Alltoall in this case.

What I was thinking is that if I have, let's say, 4 blocks (2x2):

1. block 0 sends data to 1, wait for 1
2. block 1 sends data to 2, wait for 2
3. block 3 sends data to...

Does it really guarantee that block 1 will eventually send its data to 0, for example? Couldn't the code hang in some cases? The way I am doing it now is basically a non-blocking send but a blocking receive...
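
To be concrete, this is the pattern I mean (again just a sketch; ireq and istat are local request/status variables):

Code:
! per-face: non-blocking send, then blocking receive
do n = 1, block%n_patches
  call mpi_isend( i, 1, mpi_integer, patch(n)%block2, 11, mpi_comm_world, ireq, ierr )
  call mpi_recv ( j, 1, mpi_integer, patch(n)%block2, 11, mpi_comm_world, istat, ierr )
  call mpi_wait ( ireq, istat, ierr )
enddo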

#4, December 8, 2014, 10:19
Joachim (Senior Member, Paris, France)
I read the PDF you sent in more detail. Would this code be alright, according to you?

Code:
! non-blocking sends (integer i)
do n = 1, block%n_patches
  call mpi_isend( i, 1, mpi_integer, patch(n)%block2, 11, mpi_comm_world, isend(n), ierr )
enddo

! non-blocking receives (integer j)
! (here every patch overwrites the same integer j; in the real code each patch would get its own receive buffer)
do n = 1, block%n_patches
  call mpi_irecv( j, 1, mpi_integer, patch(n)%block2, 11, mpi_comm_world, irecv(n), ierr )
enddo

! do other stuff, without touching either i or j
....

! make sure that all transfers are complete
! (statmpi is dimensioned (MPI_STATUS_SIZE, n_patches), or MPI_STATUSES_IGNORE can be passed)
call mpi_waitall(block%n_patches, isend, statmpi, ierr)
call mpi_waitall(block%n_patches, irecv, statmpi, ierr)
I only have two questions left:

1. Is it OK to use the same tag for different mpi_isend calls (to different processors, though)?
2. Is the mpi_waitall necessary for the isend requests?

Thank you so much for your help!

Joachim

#5, December 8, 2014, 12:01
Hans Bihs (Super Moderator, Trondheim, Norway)
Hi Joachim,

I would write the code like this:

Code:
! non-blocking send (integer i) and receive (integer j)
do n = 1, block%n_patches
  call mpi_isend( i, 1, mpi_integer, patch(n)%block2, 11, mpi_comm_world, isend(n), ierr )
  call mpi_irecv( j, 1, mpi_integer, patch(n)%block2, 11, mpi_comm_world, irecv(n), ierr )
enddo

! make sure that the transfer is complete
MPI_Wait(&send_request, &status);
MPI_Wait(&recv_request, &status);

! do other stuff
....


To your questions:
1. Using the same tag is fine for correct sending/receiving. Tags are mainly an additional help for the user to distinguish different types of data.
2. The Wait is necessary, otherwise your send/recv statements might not match. As you can see, I would use the basic Wait function and test each of the send/recv requests separately.

Regards
Hans

#6, December 8, 2014, 12:17
Joachim (Senior Member, Paris, France)
Thank you Hans! I really appreciate your help!

What is the advantage of checking the isend/irecv separately (wait instead of waitall)?
Also, in Fortran, would you write something like this instead?

Code:
! make sure that the transfer is complete
! (statsend/statrecv are dimensioned (MPI_STATUS_SIZE, n_patches))
do n = 1, block%n_patches
  call mpi_wait( isend(n), statsend(:,n), ierr )
  call mpi_wait( irecv(n), statrecv(:,n), ierr )
enddo
Isn't that what waitall is doing anyway?

#7, December 9, 2014, 08:46
Joachim (Senior Member, Paris, France)
OK, I played with the code, and everything seems to be working just fine.

One more question now! For the real case, I do not want to send an integer, but an array that can have a different size for each face. My question: will this code work?

Code:
! non-blocking send and receive
do n = 1, block%n_patches

  ! allocate temporary array to concatenate the data to be sent
  allocate(tempS(sizeS))
  tempS = ...

  ! send data
  call mpi_isend( tempS, sizeS, mpi_double_precision, patch(n)%block2, 11, mpi_comm_world, isend(n), ierr )

  ! allocate temporary array to receive data
  allocate(tempR(sizeR))

  ! receive the data
call mpi_irecv( tempR, sizeR, mpi_double_precision, patch(n)%block2, 11, mpi_comm_world, irecv(n), ierr )

  ! do whatever with it
  lala = tempR

  ! deallocate temporary arrays
  deallocate (tempS, tempR)

enddo

! wait for transfer to be complete
call mpi_wait...
Can I deallocate the temporary arrays before the transfers are complete?

Thanks!

Joachim


---------------------------------
EDIT: hmm, just thought about it, and it won't work, since tempR hasn't been received yet when I do lala = tempR.
Does that mean that I have to first allocate all the temporary arrays and deallocate them only after the wait statement?
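
Something like this, I suppose (just a sketch; the per-patch buffers patch(n)%sbuf / patch(n)%rbuf and the sizes sizeS(n), sizeR(n) are made-up names):

Code:
! allocate one send and one receive buffer per patch and keep them
! alive until the corresponding requests have completed
do n = 1, block%n_patches
  allocate( patch(n)%sbuf(sizeS(n)), patch(n)%rbuf(sizeR(n)) )
  patch(n)%sbuf = ...   ! pack the data to be sent

  call mpi_isend( patch(n)%sbuf, sizeS(n), mpi_double_precision, &
                  patch(n)%block2, 11, mpi_comm_world, isend(n), ierr )
  call mpi_irecv( patch(n)%rbuf, sizeR(n), mpi_double_precision, &
                  patch(n)%block2, 11, mpi_comm_world, irecv(n), ierr )
enddo

! wait for all transfers before touching or freeing any buffer
call mpi_waitall( block%n_patches, isend, statsend, ierr )
call mpi_waitall( block%n_patches, irecv, statrecv, ierr )

do n = 1, block%n_patches
  lala = patch(n)%rbuf   ! now it is safe to use the received data
  deallocate( patch(n)%sbuf, patch(n)%rbuf )
enddo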
