Lately I have several times been running into the following problem. It is repeatable with the same case on two different hardware architectures and with both the icc and gcc compilers:
During Shell refinement iteration (>1) an MPI error occurs:
[dagobah:01576] *** An error occurred in MPI_Bsend
[dagobah:01576] *** on communicator MPI_COMM_WORLD
[dagobah:01576] *** MPI_ERR_BUFFER: invalid buffer pointer
[dagobah:01576] *** MPI_ERRORS_ARE_FATAL (goodbye)
Here is the complete case:
foamJob -p -s snappyHexMesh
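(For context: foamJob's -p flag runs the application in parallel on the number of subdomains given in system/decomposeParDict, and -s echoes the output to the screen as well as to the log file, so the call above is roughly equivalent to something like:
mpirun -np <nProcs> snappyHexMesh -parallel | tee log
where <nProcs> is the subdomain count from decomposeParDict.)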
I do not know if this is to be regarded as a bug, or if it's only me...
It's just you :-)
OK I also get that error.
(maybe it's a username issue)
Have you tried 1.5.x? If it does not work in that one, please report it as a bug.
I am running a recent pull of 1.5.x. Reporting the bug!
Thanks for testing, and great suggestion, Niklas! I actually changed my IRL name to Bob The Builder and now everything works fine! :-)
Thanks Niklas and Mattijs
Hi, I also face this error in a snappyHexMesh parallel run:
*** An error occurred in MPI_Bsend
*** on communicator MPI_COMM_WORLD
*** MPI_ERR_BUFFER: invalid buffer pointer
*** MPI_ERRORS_ARE_FATAL (goodbye)
Would you tell me how and where I can change my name according to what Niklas said?
Hi, I have the same problem that you described above in connection with parallel meshing (blockMesh -> decomposePar -> snappyHexMesh in package version 1.5).
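The exact sequence I use is something like this (a sketch, assuming the number of subdomains is already set in system/decomposeParDict):
blockMesh
decomposePar
foamJob -p -s snappyHexMesh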
My geometry is built up from several STL files, let's say 20. If I use only 19 of the parts, everything works fine and I have no problem. But when I use all 20, I get that [MPI_ERRORS_ARE_FATAL...] message and snappyHexMesh crashes again and again.
I tried dividing the task between the processors in many different ways, with different memory settings, etc. I checked the user names, as you suggested, and ran the meshing process as different users/root. Sorry, nothing has helped. I also double-checked the STL files and tried different combinations; the result is the same: whenever all of them are included, the meshing crashes.
Do you have any ideas? Is it a bug, or is the problem in MPI itself?
Thanks in advance,
Make sure your MPI_BUFFER_SIZE is plenty big, 200000000 or larger. Also check on your nodes that you are not running out of memory altogether.
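For example, with a bash shell you would set the environment variable before launching the run (a sketch; csh/tcsh users would use setenv instead):
export MPI_BUFFER_SIZE=200000000
foamJob -p -s snappyHexMesh
To check memory on the nodes, something like free -m while the job runs will show whether you are getting close to the limit.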
Mattijs, thank you very much; your advice has been absolutely useful. Extending MPI_BUFFER_SIZE has helped me solve the [MPI_ERRORS_ARE_FATAL...] problem described in my last post.