|
April 14, 2005, 08:36 |
MPI buffer size for Send/Recv
|
#1 |
Guest
Posts: n/a
|
I am using Scali MPI in my code. I use the MPI_Send() and MPI_Recv() routines for sending information between the nodes. As I increase the amount of data sent between nodes everything works fine, until at a certain message size the communication goes idle and the program gets stuck, as if the different processors never receive the information they are waiting for and simply keep waiting forever.
I have read something about the size of the buffer that is assigned for MPI communications. Could that be the problem? That I am sending more information than the maximum buffer size? In that case, if I switch to the nonblocking routines MPI_Isend() and MPI_Irecv(), does that solve the problem? Or is there a way of increasing the available buffer size? Thanks in advance. scicomex |
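This is not the poster's actual code, but a minimal sketch (assuming exactly two ranks exchanging N doubles; buffer names and N are illustrative) of the pattern that typically causes this hang. Small messages are copied into an internal buffer by the "eager" protocol, so both ranks sending first appears to work; above the eager threshold the library switches to a rendezvous protocol, MPI_Send blocks until the matching receive is posted, and both ranks wait on each other forever:

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int N = 1 << 20;                 /* 1M doubles: well past any eager limit */
    double *out = malloc(N * sizeof(double));
    double *in  = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++) out[i] = (double)rank;

    int peer = 1 - rank;                   /* assumes mpirun -np 2 */

    /* UNSAFE ordering: BOTH ranks call MPI_Send before either posts a
     * receive. For small N the sends complete eagerly and this "works";
     * for large N each MPI_Send blocks waiting for the matching MPI_Recv,
     * which is never reached on either side, so the program deadlocks. */
    MPI_Send(out, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
    MPI_Recv(in,  N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    free(out);
    free(in);
    MPI_Finalize();
    return 0;
}
```

Reordering so one rank receives first while the other sends first, using the combined MPI_Sendrecv() call, or using nonblocking MPI_Isend()/MPI_Irecv() all avoid the deadlock regardless of message size.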
|
April 15, 2005, 10:03 |
Re: MPI buffer size for Send/Recv
|
#2 |
Guest
Posts: n/a
|
I had a similar problem with my code. The fix for me was to use MPI_Isend and MPI_Irecv. Make sure you call MPI_Waitall at an appropriate place to ensure all the messages are complete.
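A minimal sketch of that fix (again assuming two ranks exchanging N doubles; names are illustrative, not from the original code). The nonblocking calls return immediately, so neither rank ends up stuck in a send waiting for the other's receive; MPI_Waitall then blocks until both operations have completed and the buffers are safe to reuse:

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int N = 1 << 20;                 /* large message: past any eager limit */
    double *out = malloc(N * sizeof(double));
    double *in  = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++) out[i] = (double)rank;

    int peer = 1 - rank;                   /* assumes mpirun -np 2 */
    MPI_Request reqs[2];

    /* Post the receive and the send without blocking... */
    MPI_Irecv(in,  N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(out, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ...then wait for BOTH to complete before touching either buffer. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    printf("rank %d exchanged %d doubles with rank %d\n", rank, N, peer);

    free(out);
    free(in);
    MPI_Finalize();
    return 0;
}
```

Posting the MPI_Irecv before the MPI_Isend is a deliberate choice: it lets the library deliver the incoming message straight into the user buffer instead of an internal one.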
|
|
April 15, 2005, 11:21 |
Re: MPI buffer size for Send/Recv
|
#3 |
Guest
Posts: n/a
|
Thanks very much! I will try that.
|
|
|
|