CFD Online Discussion Forums


Giancarlo_IngChimico June 17, 2013 05:34

Problem running chtMultiRegionFoam in parallel
Hi FOAMers,
I have developed a new solver that can handle multi-region meshes. It has the same architecture as chtMultiRegionFoam, but it also solves the material and energy balances for reactive systems.
When I launch a simulation in parallel I get a strange problem: the first time step runs fine, but the solver then crashes in the code included from compressibleMultiRegionCourantNo.H.
This is very strange: why should there be problems when it only has to repeat the same operations as in the first time step?
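
For context, in OpenFOAM 2.1.x compressibleMultiRegionCourantNo.H essentially just loops over the fluid regions and keeps the maximum compressible Courant number, roughly like this (paraphrased and shortened, so treat it as a sketch rather than the exact source):

    // maximum Courant number over all fluid regions
    scalar CoNum = -GREAT;

    forAll(fluidRegions, regionI)
    {
        CoNum = max
        (
            compressibleCourantNo
            (
                fluidRegions[regionI],   // the region mesh
                runTime,
                rhoFluid[regionI],       // per-region density field
                phiFluid[regionI]        // per-region mass flux
            ),
            CoNum
        );
    }

Every time step this dereferences the per-region rhoFluid and phiFluid lists, so if any of those entries stops being valid after the first time step, this is exactly where a crash would surface.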

Can anyone help me?


Best regards


Giancarlo_IngChimico June 18, 2013 08:08

I am posting the error below.

Can anyone help me understand the nature of the error?




[1] #0  Foam::error::printStack(Foam::Ostream&) in "/home/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/"
[1] #1  Foam::sigSegv::sigHandler(int) in "/home/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/"
[1] #2  __restore_rt at sigaction.c:0
[1] #3  main in "/home/GentileG/gianca/Run/parallel/multiFinal_test_parallel"
[1] #4  __libc_start_main in "/lib64/"
[1] #5  Foam::regIOobject::writeObject(Foam::IOstream::streamFormat, Foam::IOstream::versionNumber, Foam::IOstream::compressionType) const in "/home/GentileG/gianca/Run/parallel/multiFinal_test_parallel"
[2] #0  Foam::error::printStack(Foam::Ostream&) in "/home/cfduser1/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/"
[2] #1  Foam::sigSegv::sigHandler(int) in "/home/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/"
[2] #2  __restore_rt at sigaction.c:0
[2] #3  main in "/home/GentileG/gianca/Run/parallel/multiFinal_test_parallel"
[2] #4  __libc_start_main in "/lib64/"
[2] #5  Foam::regIOobject::writeObject(Foam::IOstream::streamFormat, Foam::IOstream::versionNumber, Foam::IOstream::compressionType) const in "/home/GentileG/gianca/Run/parallel/multiFinal_test_parallel"
[compute:18709] *** Process received signal ***
[compute:18709] Signal: Segmentation fault (11)
[compute:18709] Signal code:  (-6)
[compute:18709] Failing at address: 0x623b00004915
[compute:18709] [ 0] /lib64/ [0x367c230280]
[compute:18709] [ 1] /lib64/ [0x367c230215]
[compute:18709] [ 2] /lib64/ [0x367c230280]
[compute:18709] [ 3] multiFinal_test_parallel [0x456912]
[compute:18709] [ 4] /lib64/ [0x367c21d974]
[compute:18709] [ 5] multiFinal_test_parallel(_ZNK4Foam11regIOobject11writeObjectENS_8IOstream12streamFormatENS1_13versionNumberENS1_15compressionTypeE+0x151) [0x4204a9]
[compute:18709] *** End of error message ***
mpirun noticed that process rank 1 with PID 18709 on node compute-3-11.local exited on signal 11 (Segmentation fault).
[compute.local:18707] 2 more processes have sent help message help-mpi-btl-base.txt / btl:no-nics
[compute.local:18707] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
vi log
[1]+  Exit 139                mpirun -np 3 multiFinal_test_parallel -parallel > log

mbay101 August 5, 2013 09:56

Hello Giancarlo,

I'm having exactly the same problem. I noticed in my simulation that the temperature values in the air region are too high, and at the second time step I get the same error that you posted. It seems that OpenFOAM has trouble when it tries to calculate h in the fluid region.

You can try to run the case serially, without decomposing it; see the commands below.
If you get your case working, please let me know how you did it.
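
To try the serial run, something like this should work (the solver name is taken from the log above; whether decomposePar needs the -allRegions option depends on your version and setup):

    # serial run, without decomposing the case
    multiFinal_test_parallel > log.serial 2>&1

    # parallel run, as in the log above
    decomposePar -allRegions
    mpirun -np 3 multiFinal_test_parallel -parallel > log 2>&1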


wyldckat August 25, 2013 07:50

Greetings to all!

mbay101's problem is being addressed here:

@Giancarlo: The key issue is the bad memory access your log reports:

[compute:18709] Signal: Segmentation fault (11)

SIGSEGV is further explained here:

From your description, it looks like at least one field/array is being destroyed when the iteration finishes, so the next time step accesses memory that has already been freed.
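
As an illustration of that failure mode, here is a minimal, self-contained sketch in plain C++ (not OpenFOAM code; all names are made up): a pointer to per-iteration storage survives past the iteration, and any later use of it reads freed memory.

    #include <iostream>
    #include <vector>

    int main()
    {
        const double* phiData = nullptr;   // cached pointer, outlives the loop body

        for (int timeStep = 0; timeStep < 2; ++timeStep)
        {
            std::vector<double> phi(16, 1.0); // "field" rebuilt every time step
            phiData = phi.data();             // pointer escapes the scope
            std::cout << "step " << timeStep
                      << ": " << phiData[0] << '\n'; // still valid here
        }   // phi is destroyed here; phiData now dangles

        // A later evaluation through the stale pointer is undefined behaviour;
        // in a large MPI run it typically shows up as SIGSEGV:
        // std::cout << phiData[0] << '\n';   // would read freed memory

        return 0;
    }

In your solver, the analogue would be a reference or pointer to a per-region field that is rebuilt or deallocated between time steps while something (for example the Courant-number loop) still holds on to the old storage.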

Best regards,
