Home > Forums > OpenFOAM Mesh Utilities

SnappyHexMesh in parallel openmpi

October 14, 2008, 07:18   #1
Niklas Wikstrom (wikstrom)
Member | Join Date: Mar 2009 | Posts: 85
Lately I have several times been running into the following problem. It is repeatable with the same case on two different hardware architectures and with both the icc and gcc compilers:

During shell refinement iteration (>1) an MPI error occurs:

[dagobah:01576] *** An error occurred in MPI_Bsend
[dagobah:01576] *** on communicator MPI_COMM_WORLD
[dagobah:01576] *** MPI_ERR_BUFFER: invalid buffer pointer
[dagobah:01576] *** MPI_ERRORS_ARE_FATAL (goodbye)


Here is the complete case
snappyHexMesh-coarse.tgz

To run:

blockMesh
decomposePar
foamJob -p -s snappyHexMesh

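For anyone reproducing this, the parallel decomposition that foamJob -p picks up comes from system/decomposeParDict. A minimal sketch of such a dictionary is below; the subdomain count and method are illustrative, not taken from the attached case:

```
// system/decomposeParDict -- illustrative values, not from the attached case
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  4;

method              simple;

simpleCoeffs
{
    n           (2 2 1);   // 2 x 2 x 1 split of the domain
    delta       0.001;
}
```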

I do not know if this is to be regarded as a bug, or if it's only me...

Cheers
Niklas

October 14, 2008, 07:24   #2
Niklas Nordin (niklas)
Super Moderator | Join Date: Mar 2009 | Location: Stockholm, Sweden | Posts: 693
It's just you.

OK, I also get that error.

Niklas
(maybe it's a username issue)

October 16, 2008, 03:05   #3
Mattijs Janssens (mattijs)
Super Moderator | Join Date: Mar 2009 | Posts: 1,416
Have you tried 1.5.x? If it does not work in that one, please report it as a bug.

October 17, 2008, 04:35   #4
Niklas Wikstrom (wikstrom)
Member | Join Date: Mar 2009 | Posts: 85
I am running a recent pull of 1.5.x. Reporting the bug!

Thanks for testing and for the great suggestions, Niklas! I actually changed my IRL name to Bob The Builder and now everything works fine! :-)

Thanks Niklas and Mattijs

October 21, 2008, 17:38   #5
mohd mojab (mou_mi)
Member | Join Date: Mar 2009 | Posts: 31
Hi

I also face this error in a snappyHexMesh parallel run:

*** An error occurred in MPI_Bsend
*** on communicator MPI_COMM_WORLD
*** MPI_ERR_BUFFER: invalid buffer pointer
*** MPI_ERRORS_ARE_FATAL (goodbye)

Would you tell me how and where I can change my name according to what Niklas said?

Thank you
mou

November 18, 2008, 12:31   #6
Attila Schwarczkopf (schwarczi)
New Member | Join Date: Mar 2009 | Location: Edinburgh / London / Budapest | Posts: 12
Hi,

I have the same problem that you described above in connection with parallel meshing (blockMesh -> decomposePar -> snappyHexMesh in package version 1.5).

My geometry is built up from several STL files, let's say 20. If I use only 19 parts, everything works fine and I have no problem. But when I use all 20, I get that [MPI_ERRORS_ARE_FATAL...] message and snappyHexMesh crashes again and again.
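For context, a geometry built from many STL parts is typically declared in the geometry sub-dictionary of snappyHexMeshDict along these lines; the file and surface names below are hypothetical, not the actual ones from this case:

```
// snappyHexMeshDict, geometry sub-dictionary (hypothetical names)
geometry
{
    part01.stl
    {
        type triSurfaceMesh;
        name part01;
    }

    part02.stl
    {
        type triSurfaceMesh;
        name part02;
    }

    // ... one entry per STL file, up to part20.stl
};
```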

I have tried dividing the task between the processors in many different ways, with different memory settings, etc. I checked the user names - as you suggested - and ran the meshing process as different users/root. Sorry, nothing has helped. I double-checked the STL files, too, and tried different combinations; the result is the same: whenever all of them are included, the meshing crashes.

Do you have any good ideas? Is it a bug? Or is the problem in MPI itself?


Thanks in advance,
Schwarczi

November 18, 2008, 15:39   #7
Mattijs Janssens (mattijs)
Super Moderator | Join Date: Mar 2009 | Posts: 1,416
Make sure your MPI_BUFFER_SIZE is plenty big, 200000000 or larger. Also check that your nodes are not running out of memory altogether.
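For OpenMPI builds of OpenFOAM, the buffered-send size is read from the MPI_BUFFER_SIZE environment variable (normally set via OpenFOAM's etc/settings.sh). A quick way to raise it for the current shell before launching the run, using the value Mattijs suggests:

```shell
# Raise the MPI buffered-send buffer for this shell session
# before decomposing and launching the parallel mesher.
export MPI_BUFFER_SIZE=200000000
echo "MPI_BUFFER_SIZE is now $MPI_BUFFER_SIZE"
```

Exporting in the shell only affects jobs launched from that session; to make it permanent, set it in your shell startup file or in the OpenFOAM environment scripts.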

November 24, 2008, 10:52   #8
Attila Schwarczkopf (schwarczi)
New Member | Join Date: Mar 2009 | Location: Edinburgh / London / Budapest | Posts: 12
Mattijs,

Thank you very much; your advice was absolutely useful. Extending MPI_BUFFER_SIZE solved the [MPI_ERRORS_ARE_FATAL...] problem described in my previous post.

Thanks,
Sch.
