CFD Online Discussion Forums

CFD Online Discussion Forums > OpenFOAM Programming & Development > Problem for parallel running chtMultiregionFoam

Giancarlo_IngChimico June 17, 2013 05:34

Problem for parallel running chtMultiregionFoam
Hi FOAMers,
I have developed a new solver that can handle multi-region meshes. It has the same architecture as chtMultiRegionFoam, but it also solves the material and energy balances for reactive systems.
When I launch a simulation in parallel I see a strange problem: the first time step runs without trouble, then the solver crashes when it reaches the code from compressibleMultiRegionCourantNo.H.
This is very strange: why should there be problems repeating the same operations as in the first time step?

Can anyone help me?


Best regards


Giancarlo_IngChimico June 18, 2013 08:08

I'm posting the error below.

Can anyone help me understand the nature of the error?




[1] #0  Foam::error::printStack(Foam::Ostream&) in "/home/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/"
[1] #1  Foam::sigSegv::sigHandler(int) in "/home/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/"
[1] #2  __restore_rt at sigaction.c:0
[1] #3  main in "/home/GentileG/gianca/Run/parallel/multiFinal_test_parallel"
[1] #4  __libc_start_main in "/lib64/"
[1] #5  Foam::regIOobject::writeObject(Foam::IOstream::streamFormat, Foam::IOstream::versionNumber, Foam::IOstream::compressionType) const in "/home/GentileG/gianca/Run/parallel/multiFinal_test_parallel"
[2] #0  Foam::error::printStack(Foam::Ostream&) in "/home/cfduser1/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/"
[2] #1  Foam::sigSegv::sigHandler(int) in "/home/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/"
[2] #2  __restore_rt at sigaction.c:0
[2] #3  main in "/home/GentileG/gianca/Run/parallel/multiFinal_test_parallel"
[2] #4  __libc_start_main in "/lib64/"
[2] #5  Foam::regIOobject::writeObject(Foam::IOstream::streamFormat, Foam::IOstream::versionNumber, Foam::IOstream::compressionType) const in "/home/GentileG/gianca/Run/parallel/multiFinal_test_parallel"
[compute:18709] *** Process received signal ***
[compute:18709] Signal: Segmentation fault (11)
[compute:18709] Signal code:  (-6)
[compute:18709] Failing at address: 0x623b00004915
[compute:18709] [ 0] /lib64/ [0x367c230280]
[compute:18709] [ 1] /lib64/ [0x367c230215]
[compute:18709] [ 2] /lib64/ [0x367c230280]
[compute:18709] [ 3] multiFinal_test_parallel [0x456912]
[compute:18709] [ 4] /lib64/ [0x367c21d974]
[compute:18709] [ 5] multiFinal_test_parallel(_ZNK4Foam11regIOobject11writeObjectENS_8IOstream12streamFormatENS1_13versionNumberENS1_15compressionTypeE+0x151) [0x4204a9]
[compute:18709] *** End of error message ***
mpirun noticed that process rank 1 with PID 18709 on node compute-3-11.local exited on signal 11 (Segmentation fault).
[compute.local:18707] 2 more processes have sent help message help-mpi-btl-base.txt / btl:no-nics
[compute.local:18707] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[1]+  Exit 139                mpirun -np 3 multiFinal_test_parallel -parallel > log
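As a side note, the shell exit status 139 above already names the culprit: a status greater than 128 means the process was killed by signal (status − 128). A quick sanity check (plain shell, nothing OpenFOAM-specific):

```shell
# 139 = 128 + 11, i.e. the process died on signal 11 (SIGSEGV)
sig=$((139 - 128))
echo "$sig"        # 11
kill -l "$sig"     # SEGV
```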

mbay101 August 5, 2013 09:56

Hello Giancarlo,

I'm having exactly the same problem. I noticed in my simulation that the temperature values in the air region are too high. At the second time step I get the same error that you posted. It seems OpenFOAM has trouble when it tries to calculate h in the fluid region.

You can try to run the case serially, without decomposing it.
If you get your case working, please let me know how you did it.
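A typical workflow for that test would look like the following (a sketch: `decomposePar`/`reconstructPar` take `-allRegions` for multi-region cases, and the solver name here is a placeholder for your own custom solver):

```shell
# 1. Run in serial first -- no decomposition involved:
chtMultiRegionFoam > log.serial 2>&1

# 2. If serial works, decompose every region and retry in parallel:
decomposePar -allRegions
mpirun -np 3 chtMultiRegionFoam -parallel > log.parallel 2>&1

# 3. Rebuild the serial fields from the processor directories:
reconstructPar -allRegions
```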


wyldckat August 25, 2013 07:50

Greetings to all!

mbay101's problem is being addressed here:

@Giancarlo: The key issue is a bad memory access:



Where SIGSEGV is further explained here:

From your description, it looks like at least one field/array had already been destroyed by the time the iteration finished.

Best regards,

manuc December 21, 2016 04:26


I have a new solver with the structure of chtMultiRegionFoam that behaves like buoyantBoussinesqPimpleFoam for the fluid region. I ran the simulation in serial and the solution converges. When I try to run it in parallel it crashes in the first step.

I used OF 2.4.0.

I tried to reduce the Courant number to attain initial stability, but then the solver crashes after the 2nd time step.



deltaT = 4.5530327e-107

--> FOAM Warning :
From function Time::operator++()
in file db/Time/Time.C at line 1061
Increased the timePrecision from 62 to 63 to distinguish between timeNames at time 2.0707573e-07
Time = 2.07075734119138104400662664383858668770699296146631240844726562e-07

Solving for fluid region air
DILUPBiCG: Solving for T, Initial residual = 0.010403611, Final residual = 2.7324966e-12, No Iterations 1
max(T) [0 0 0 1 0 0 0] 300.02011
DICPCG: Solving for p_rgh, Initial residual = 1, Final residual = 0.0099236724, No Iterations 251
time step continuity errors : sum local = 5.0400286e-07, global = 1.0693134e-19
mpirun noticed that process rank 4 with PID 17152 on node n11-42 exited on signal 8 (Floating point exception).

(I tried different decomposition methods, simple and scotch. I also varied the simple coeffs, delta 0.01, but with no success.)
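For the signal 8 crash above: a quick way to find which floating-point operation blows up is to let OpenFOAM trap it. A sketch, assuming the standard OpenFOAM environment switches `FOAM_SIGFPE` and `FOAM_SETNAN`; signal 8 itself decodes to SIGFPE:

```shell
# Signal 8 is SIGFPE (floating point exception):
kill -l 8                      # prints: FPE

# Trap the first invalid floating-point operation and get a stack trace:
export FOAM_SIGFPE=true
# Fill uninitialised fields with NaN so stale memory is caught early:
export FOAM_SETNAN=true
```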

problem solved:
