CFD Online (www.cfd-online.com)
Home > Forums > OpenFOAM Running, Solving & CFD

Foam::error::PrintStack
August 27, 2013, 06:17   #21
Regarding error while running rhoSimplecFoam

Member: sonu (yash.aesi)
Join Date: Jul 2013 | Location: Delhi | Posts: 79
Greetings Bruno and everyone,

I am trying to run a cold-flow simulation of my case with the rhoSimplecFoam solver, but after running for some time it stops with the following error.

Can you please help me understand where I am going wrong? Thanks in advance.


Code:
Time = 43

GAMG:  Solving for Ux, Initial residual = 0.19957, Final residual = 0.0130805, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.432992, Final residual = 0.030986, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.00150254, Final residual = 0.000111488, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.164657, Final residual = 0.122731, No Iterations 1000
time step continuity errors : sum local = 0.0390443, global = -0.0148295, cumulative = -0.639136
rho max/min : 1 0.382345
GAMG:  Solving for epsilon, Initial residual = 0.000488404, Final residual = 2.53703e-08, No Iterations 1
bounding epsilon, min: -4.33742e-06 max: 857.902 average: 1.75282
GAMG:  Solving for k, Initial residual = 0.128439, Final residual = 0.0118675, No Iterations 1
bounding k, min: 1e-16 max: 5005.77 average: 1.11456
ExecutionTime = 1274.69 s  ClockTime = 1275 s

Time = 44

GAMG:  Solving for Ux, Initial residual = 0.00778892, Final residual = 0.000427613, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.262673, Final residual = 0.0165556, No Iterations 1
GAMG:  Solving for e, Initial residual = 1.08917e-05, Final residual = 6.74763e-07, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.198508, Final residual = 0.14558, No Iterations 1000
time step continuity errors : sum local = 0.0445931, global = -0.0148271, cumulative = -0.653963
rho max/min : 1 0.382345
GAMG:  Solving for epsilon, Initial residual = 0.000403613, Final residual = 1.28867e-07, No Iterations 1
bounding epsilon, min: -3.4861e-08 max: 857.872 average: 1.76686
GAMG:  Solving for k, Initial residual = 0.0963778, Final residual = 0.00239046, No Iterations 2
bounding k, min: -2.2769e-08 max: 18855.3 average: 2.6373
ExecutionTime = 1302.92 s  ClockTime = 1303 s

Time = 45

GAMG:  Solving for Ux, Initial residual = 0.823156, Final residual = 0.0483317, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.673439, Final residual = 0.034677, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.00301749, Final residual = 0.000137252, No Iterations 1
#0  Foam::error::printStack(Foam::Ostream&) at ??:?
#1  Foam::sigFpe::sigHandler(int) at ??:?
#2   in "/lib/x86_64-linux-gnu/libc.so.6"
#3  double Foam::sumProd<double>(Foam::UList<double> const&, Foam::UList<double> const&) at ??:?
#4  Foam::PBiCG::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const at ??:?
#5  Foam::GAMGSolver::solveCoarsestLevel(Foam::Field<double>&, Foam::Field<double> const&) const at ??:?
#6  Foam::GAMGSolver::Vcycle(Foam::PtrList<Foam::lduMatrix::smoother> const&, Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::PtrList<Foam::Field<double> >&, Foam::PtrList<Foam::Field<double> >&, unsigned char) const at ??:?
#7  Foam::GAMGSolver::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const at ??:?
#8  Foam::fvMatrix<double>::solveSegregated(Foam::dictionary const&) at ??:?
#9  Foam::fvMatrix<double>::solve(Foam::dictionary const&) at ??:?
#10  Foam::fvMatrix<double>::solve() at ??:?
#11  
 at ??:?
#12  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
#13  
 at ??:?
Floating point exception (core dumped)

Last edited by wyldckat; August 27, 2013 at 17:28. Reason: Added [CODE][/CODE]

August 27, 2013, 17:37   #22

Super Moderator: Bruno Santos (wyldckat)
Join Date: Mar 2009 | Location: Lisbon, Portugal | Posts: 8,312 | Blog Entries: 34
Greetings yash.aesi,

There is not much information to work with, but from the output you have shown:
Quote:
Originally Posted by yash.aesi View Post
Code:
GAMG:  Solving for p, Initial residual = 0.164657, Final residual = 0.122731, No Iterations 1000
bounding epsilon, min: -4.33742e-06 max: 857.902 average: 1.75282
#3  double Foam::sumProd<double>(Foam::UList<double> const&, Foam::UList<double> const&) at ??:?
Floating point exception (core dumped)
The quoted lines (highlighted from your full log) are the things to take into account:
  1. The pressure equation is not being solved properly: after 1000 iterations it still has not converged to a solution.
  2. The minimum "epsilon" is negative. If this "epsilon" belongs to the turbulence model, its minimum value should be strictly positive.
  3. A SIGFPE was triggered in the sumProd function. In other words, the solver tried to do a sum of products of values and it went wrong, most likely because one of the values was NaN or Inf.
All of this indicates that the boundary conditions are incorrectly defined.
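As a side note on point 1: when the pressure equation grinds through its full iteration cap at every time step, it often helps to tighten the linear-solver settings and under-relax more aggressively in system/fvSolution. The sketch below uses standard fvSolution keywords, but the values (and the flat relaxationFactors layout, which varies between OpenFOAM versions) are illustrative assumptions, not settings from this case:

```
// system/fvSolution (illustrative sketch; not the poster's actual file)
solvers
{
    p
    {
        solver          GAMG;
        smoother        GaussSeidel;
        tolerance       1e-08;
        relTol          0.01;
        maxIter         200;    // fail earlier than grinding through 1000 iterations
    }
}

relaxationFactors
{
    p               0.3;    // stronger under-relaxation for a struggling pressure field
    U               0.7;
    e               0.7;
    k               0.7;
    epsilon         0.7;
}
```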

Best regards
Bruno

August 28, 2013, 03:58   #23

Member: sonu (yash.aesi)
Join Date: Jul 2013 | Location: Delhi | Posts: 79
Thanks Bruno,
I will try to check the boundary conditions.

August 28, 2013, 08:00   #24

Member: sonu (yash.aesi)
Join Date: Jul 2013 | Location: Delhi | Posts: 79
Hello Bruno,

I checked my boundary conditions. Now the run keeps going without crashing, but I think the problem is not solved yet: as you pointed out in your last post, the pressure equation is still not converging. The output (with no error this time) shows:

Code:
Time = 298

GAMG:  Solving for Ux, Initial residual = 0.00248195, Final residual = 2.77057e-05, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.00686025, Final residual = 0.000236565, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.00343494, Final residual = 5.85371e-05, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.20715, Final residual = 0.152599, No Iterations 1000
time step continuity errors : sum local = 0.0140403, global = -0.011032, cumulative = -0.311197
rho max/min : 1 0.352192
GAMG:  Solving for epsilon, Initial residual = 0.000197142, Final residual = 7.86035e-06, No Iterations 1
GAMG:  Solving for k, Initial residual = 0.00474871, Final residual = 0.000184764, No Iterations 1
ExecutionTime = 860.06 s  ClockTime = 860 s

Time = 299

GAMG:  Solving for Ux, Initial residual = 0.00247746, Final residual = 2.75936e-05, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.0068694, Final residual = 0.000236605, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.00342726, Final residual = 5.82198e-05, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.209186, Final residual = 0.153251, No Iterations 1000
time step continuity errors : sum local = 0.0140198, global = -0.0110261, cumulative = -0.322224
rho max/min : 1 0.352192
GAMG:  Solving for epsilon, Initial residual = 0.000196554, Final residual = 7.82037e-06, No Iterations 1
GAMG:  Solving for k, Initial residual = 0.00473606, Final residual = 0.000183578, No Iterations 1
ExecutionTime = 889.64 s  ClockTime = 890 s
I am attaching my boundary-condition files. Could you please have a look at them and tell me where I am going wrong?

Thanks a lot in advance.
Attached Files
File Type: zip 0.zip (4.3 KB, 23 views)

Last edited by wyldckat; August 31, 2013 at 14:56. Reason: Added [CODE][/CODE]

August 31, 2013, 15:05   #25

Super Moderator: Bruno Santos (wyldckat)
Join Date: Mar 2009 | Location: Lisbon, Portugal | Posts: 8,312 | Blog Entries: 34
Hi Sonu,

From what I can see, the pressure at the outlet should not be defined as type "calculated". Combined with the "zeroGradient" velocity you say you have there, that leaves the outlet boundary under-defined.

For ideas on what the boundary conditions should be, see the OpenFOAM tutorials and the "Boundary Conditions" link on this page: http://foam.sourceforge.net/docs/cpp/index.html
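As a rough illustration of a consistent pairing for a subsonic case (a sketch in the style of the standard tutorials; the patch names and values here are placeholders, not taken from the attached files):

```
// 0/p (sketch)
outlet
{
    type            fixedValue;     // pressure fixed where velocity is zeroGradient
    value           uniform 101325;
}
inlet
{
    type            zeroGradient;   // pressure free where velocity is fixed
}

// 0/U (sketch)
outlet
{
    type            zeroGradient;   // or inletOutlet to guard against backflow
}
inlet
{
    type            fixedValue;
    value           uniform (35 0 0);   // placeholder inlet velocity
}
```

The principle is that each patch fixes one of the pair (p, U) and leaves the other free, so no boundary is over- or under-defined.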

Best regards,
Bruno

August 31, 2013, 15:19   #26

Member: sonu (yash.aesi)
Join Date: Jul 2013 | Location: Delhi | Posts: 79
Greetings Bruno,
Thanks for suggesting those links; they are useful for my understanding.
However, I had already changed the outlet BC to zeroGradient, and the simulation keeps running without converging. Right now I don't have output to show, but I will post it another day.


Regards ,
sonu

August 31, 2013, 15:26   #27

Member: sonu (yash.aesi)
Join Date: Jul 2013 | Location: Delhi | Posts: 79
Hello,
here is what the output shows:

Code:
Time = 1499

GAMG:  Solving for Ux, Initial residual = 0.000370436, Final residual = 1.78681e-06, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.000186964, Final residual = 4.67884e-06, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.000322874, Final residual = 4.85704e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.0030172, Final residual = 0.000281682, No Iterations 8
time step continuity errors : sum local = 4.1905e-08, global = -8.58901e-19, cumulative = 2.08053e-17
rho max/min : 0.1 0.1
GAMG:  Solving for epsilon, Initial residual = 1.37661e-06, Final residual = 4.21716e-08, No Iterations 1
GAMG:  Solving for k, Initial residual = 0.000106335, Final residual = 3.04613e-06, No Iterations 1
ExecutionTime = 1238.57 s  ClockTime = 1404 s

Time = 1500

GAMG:  Solving for Ux, Initial residual = 0.000370261, Final residual = 1.78544e-06, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.000187208, Final residual = 4.67032e-06, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.000322533, Final residual = 4.85232e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.00300666, Final residual = 0.000292086, No Iterations 8
time step continuity errors : sum local = 4.34695e-08, global = -1.09357e-18, cumulative = 1.97117e-17
rho max/min : 0.1 0.1
GAMG:  Solving for epsilon, Initial residual = 1.37571e-06, Final residual = 4.21285e-08, No Iterations 1
GAMG:  Solving for k, Initial residual = 0.000106287, Final residual = 3.04439e-06, No Iterations 1
ExecutionTime = 1241.65 s  ClockTime = 1407 s.
This is the modified pressure BC file:
Code:
dimensions      [1 -1 -2 0 0 0 0];

internalField   uniform 101325;

boundaryField
{
    fuel_inlet
    {
        type            zeroGradient;
        //type            mixed;
        refValue        uniform 101325;
        refGradient     uniform 0;
        valueFraction   uniform 0.3;
    }

    coflow_inlet
    {
        type            zeroGradient;
        //type            mixed;
        refValue        uniform 101325;
        refGradient     uniform 0;
        valueFraction   uniform 0.3;
    }

    Outlet
    {
        type            zeroGradient;
        //type            mixed;
        //refValue        uniform 101325;
        //refGradient     uniform 0;
        //valueFraction   uniform 1;
        //type            transonicOutletPressure;
        //U               U;
        //phi             phi;
        //gamma           1.4;
        //psi             psi;
        //pInf            uniform 101325;
    }

    Axis
    {
        type            symmetryPlane;
    }

    Upper_wall
    {
        type            zeroGradient;
    }

    frontAndBack
    {
        type            empty;
    }
}
Thanks in advance for your concern.
Regards,
Sonu

Last edited by wyldckat; August 31, 2013 at 16:07. Reason: Added [CODE][/CODE]

August 31, 2013, 15:41   #28

Senior Member: Ehsan (immortality)
Join Date: Oct 2012 | Location: Iran | Posts: 2,186
The outlet BC for pressure should be fixedValue; assign a back pressure there.
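Applied to the pressure file quoted earlier in the thread, that would mean replacing the Outlet entry with something like the following (101325 Pa is only an assumed back pressure, chosen to match the internal field):

```
Outlet
{
    type            fixedValue;
    value           uniform 101325;   // assumed back pressure
}
```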

September 1, 2013, 06:31   #29

Member: sonu (yash.aesi)
Join Date: Jul 2013 | Location: Delhi | Posts: 79
Greetings Ehsan and Bruno,

I changed my outlet pressure BC from zeroGradient to fixedValue, but now it is giving an error again:

Code:
Time = 19

GAMG:  Solving for Ux, Initial residual = 0.140082, Final residual = 0.0129345, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.111632, Final residual = 0.00749058, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.111638, Final residual = 0.00888687, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.0145399, Final residual = 0.00133764, No Iterations 14
time step continuity errors : sum local = 0.000262946, global = 2.02579e-05, cumulative = -3.88151e-05
rho max/min : 1.35725 0.657245
GAMG:  Solving for epsilon, Initial residual = 0.0506423, Final residual = 0.000552153, No Iterations 1
bounding epsilon, min: -0.164911 max: 1392.55 average: 1.11572
GAMG:  Solving for k, Initial residual = 0.0791398, Final residual = 0.000638985, No Iterations 1
ExecutionTime = 112.35 s  ClockTime = 115 s

Time = 20

GAMG:  Solving for Ux, Initial residual = 0.163277, Final residual = 0.00968811, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.102144, Final residual = 0.0040134, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.999875, Final residual = 0.0148186, No Iterations 2
#0  Foam::error::printStack(Foam::Ostream&) at ??:?
#1  Foam::sigFpe::sigHandler(int) at ??:?
#2   in "/lib/x86_64-linux-gnu/libc.so.6"
#3  Foam::hePsiThermo<Foam::psiThermo, Foam::pureMixture<Foam::sutherlandTransport<Foam::species::thermo<Foam::hConstThermo<Foam::perfectGas<Foam::specie> >, Foam::sensibleInternalEnergy> > > >::calculate() at ??:?
#4  Foam::hePsiThermo<Foam::psiThermo, Foam::pureMixture<Foam::sutherlandTransport<Foam::species::thermo<Foam::hConstThermo<Foam::perfectGas<Foam::specie> >, Foam::sensibleInternalEnergy> > > >::correct() at ??:?
#5  
 at ??:?
#6  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
#7  
 at ??:?
Floating point exception (core dumped)
Now I am totally clueless about what is going wrong. Can you suggest something?

Thanks
Attached Files
File Type: zip 0.zip (4.3 KB, 2 views)

Last edited by wyldckat; September 1, 2013 at 07:11. Reason: Added [CODE][/CODE]

September 1, 2013, 07:46   #30

Super Moderator: Bruno Santos (wyldckat)
Join Date: Mar 2009 | Location: Lisbon, Portugal | Posts: 8,312 | Blog Entries: 34
Hi Sonu,

First, please follow the instructions in my second signature link whenever you need to post output or code, namely: How to post code using [CODE]

Second, you still have epsilon values outside the normal operating range, namely:
Code:
epsilon, min: -0.164911
Third, I'm guessing that by injecting fluid at 35 m/s and 5 m/s you have extremely high Reynolds numbers. You should first study the characteristics of your geometry and flow, to assess whether you are simulating in the subsonic, transonic, or supersonic regime. In addition, check whether your mesh is adequate for the simulation you are trying to perform.

Last but not least, you should not jump directly to such high flow rates. With OpenFOAM, as with anything you don't yet know well enough, do not jump straight into the final case set-up: it is very unlikely that the simulation will work on the first try, and you will then have too many possible causes of failure to fix in a single step. Instead, evolve gradually from the simplest form of your problem, increasing the level of complexity one detail at a time.

Best regards,
Bruno

Last edited by wyldckat; September 1, 2013 at 09:16. Reason: fixed typos

September 26, 2013, 14:11   #31
Some errors and doubts

New Member: guilha
Join Date: Jan 2013 | Location: Lisboa-Funchal | Posts: 20
I was running a compressible LES simulation when it stopped with a similar (or the same) error, shown below.

Code:
Mean and max Courant Numbers = 0.0327681 0.0831581
Time = 0.00128178

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 1.9902e-06, Final residual = 2.04476e-14, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 3.69969e-06, Final residual = 3.95952e-14, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 2.22835e-05, Final residual = 4.25565e-13, No Iterations 3
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for e, Initial residual = 9.02514e-06, Final residual = 8.30103e-13, No Iterations 3
ExecutionTime = 40520.5 s  ClockTime = 40657 s

Mean and max Courant Numbers = 0.0327679 0.083161
Time = 0.00128182

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 2.00094e-06, Final residual = 2.06337e-14, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 3.70838e-06, Final residual = 4.11045e-14, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 2.23139e-05, Final residual = 4.33116e-13, No Iterations 3
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
[24] #0  Foam::error::printStack(Foam::Ostream&)[56] #0  Foam::error::printStack(Foam::Ostream&)--------------------------------------------------------------------------
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process.  Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption.  The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.  

The process that invoked fork was:

  Local host:          g03 (PID 33539)
  MPI_COMM_WORLD rank: 24

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
[40] #0  Foam::error::printStack(Foam::Ostream&)[8] #0  Foam::error::printStack(Foam::Ostream&) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[24] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[8] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[56] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[40] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[24] #2   in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[8] #2   in "/lib/x86_64-linux-gnu/libc.so.6"
[8] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[56] #2   in "/lib/x86_64-linux-gnu/libc.so.6"
[24] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[8] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[8] #5   in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[24] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[40] #2   in "/lib/x86_64-linux-gnu/libc.so.6"
[56] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate()
 in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[24] #5   in "/lib/x86_64-linux-gnu/libc.so.6"
[40] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[56] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct()
[8]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[8] #6  __libc_start_main in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[56] #5  [24]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[24] #6  __libc_start_main in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[40] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/lib/x86_64-linux-gnu/libc.so.6"
[8] #7  
 in "/lib/x86_64-linux-gnu/libc.so.6"
[24] #7  

 in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[40] #5  [56]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[56] #6  __libc_start_main[8]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[g03:33523] *** Process received signal ***
[g03:33523] Signal: Floating point exception (8)
[g03:33523] Signal code:  (-6)
[g03:33523] Failing at address: 0x3f8000082f3
[g03:33523] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b8d0a824480]
[g03:33523] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x2b8d0a824405]
[g03:33523] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b8d0a824480]
[g03:33523] [ 3] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE9calculateEv+0x3f9) [0x2b8d081471e9]
[g03:33523] [ 4] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE7correctEv+0x32) [0x2b8d0814d262]
[g03:33523] [ 5] rhoCentralFoam() [0x4236cb]
[g03:33523] [ 6] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2b8d0a810ead]
[g03:33523] [ 7] rhoCentralFoam() [0x41c709]
[g03:33523] *** End of error message ***
[24]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[g03:33539] *** Process received signal ***
[g03:33539] Signal: Floating point exception (8)
[g03:33539] Signal code:  (-6)
[g03:33539] Failing at address: 0x3f800008303
[g03:33539] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2af004454480]
[g03:33539] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x2af004454405]
[g03:33539] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2af004454480]
[g03:33539] [ 3] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE9calculateEv+0x3f9) [0x2af001d771e9]
[g03:33539] [ 4] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE7correctEv+0x32) [0x2af001d7d262]
[g03:33539] [ 5] rhoCentralFoam() [0x4236cb]
[g03:33539] [ 6] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2af004440ead]
[g03:33539] [ 7] rhoCentralFoam() [0x41c709]
[g03:33539] *** End of error message ***
 in "/lib/x86_64-linux-gnu/libc.so.6"
[56] #7  

[40]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[40] #6  __libc_start_main[56]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[g02:10983] *** Process received signal ***
[g02:10983] Signal: Floating point exception (8)
[g02:10983] Signal code:  (-6)
[g02:10983] Failing at address: 0x3f800002ae7
[g02:10983] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b7f3585c480]
[g02:10983] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x2b7f3585c405]
[g02:10983] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b7f3585c480]
[g02:10983] [ 3] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE9calculateEv+0x3f9) [0x2b7f3317f1e9]
[g02:10983] [ 4] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE7correctEv+0x32) [0x2b7f33185262]
[g02:10983] [ 5] rhoCentralFoam() [0x4236cb]
[g02:10983] [ 6] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2b7f35848ead]
[g02:10983] [ 7] rhoCentralFoam() [0x41c709]
[g02:10983] *** End of error message ***
 in "/lib/x86_64-linux-gnu/libc.so.6"
[40] #7  
[40]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[g02:10967] *** Process received signal ***
[g02:10967] Signal: Floating point exception (8)
[g02:10967] Signal code:  (-6)
[g02:10967] Failing at address: 0x3f800002ad7
[g02:10967] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2ae44b8f5480]
[g02:10967] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x2ae44b8f5405]
[g02:10967] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2ae44b8f5480]
[g02:10967] [ 3] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE9calculateEv+0x3f9) [0x2ae4492181e9]
[g02:10967] [ 4] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE7correctEv+0x32) [0x2ae44921e262]
[g02:10967] [ 5] rhoCentralFoam() [0x4236cb]
[g02:10967] [ 6] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2ae44b8e1ead]
[g02:10967] [ 7] rhoCentralFoam() [0x41c709]
[g02:10967] *** End of error message ***
[g03:33510] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork
[g03:33510] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
--------------------------------------------------------------------------
mpirun noticed that process rank 56 with PID 10983 on node g02 exited on signal 8 (Floating point exception).
--------------------------------------------------------------------------
After reading this thread, I still could not figure out why I get the error. Regarding SIGFPE, the moderator wyldckat said it may indicate a maths problem. But my simulation stopped at t = 0.00128182 s when running on 64 processors across 2 machines (32 processors each), whereas the same simulation running on 32 processors (on a single machine) has already passed the time at which the previous run failed, with no errors so far (t = 0.0018 s). Likewise, the same case with a coarser mesh but the same Courant number gave a similar error on 64 processors, and after changing from 64 to 32 processors it ran fine to the end.
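For reference, the processor count in such a run is set in system/decomposeParDict before running decomposePar. A minimal sketch of the two configurations being compared here (the decomposition method shown is an illustrative assumption, not taken from this case):

```
// system/decomposeParDict (sketch)
numberOfSubdomains  32;     // the run that got past t = 0.00128182 s; 64 crashed

method              scotch; // illustrative; methods like simple/hierarchical
                            // need explicit coefficient sub-dictionaries
```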
I also checked my mesh with checkMesh; the output was:

Code:
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create polyMesh for time = 0

Time = 0

Mesh stats
    points:           5440550
    faces:            15827778
    internal faces:   15337566
    cells:            5194224
    boundary patches: 6
    point zones:      0
    face zones:       0
    cell zones:       0

Overall number of cells of each type:
    hexahedra:     5194224
    prisms:        0
    wedges:        0
    pyramids:      0
    tet wedges:    0
    tetrahedra:    0
    polyhedra:     0

Checking topology...
    Boundary definition OK.
    Cell to face addressing OK.
    Point usage OK.
    Upper triangular ordering OK.
    Face vertices OK.
    Number of regions: 1 (OK).

Checking patch topology for multiply connected surfaces ...
    Patch               Faces    Points   Surface topology                  
    entrada             6840     7150     ok (non-closed singly connected)  
    topo                14784    15425    ok (non-closed singly connected)  
    saida               6840     7150     ok (non-closed singly connected)  
    parede              28896    30125    ok (non-closed singly connected)  
    tras                216426   217622   ok (non-closed singly connected)  
    frente              216426   217622   ok (non-closed singly connected)  

Checking geometry...
    Overall domain bounding box (-0.05 0 -0.06) (0.2 0.22 0.06)
    Mesh (non-empty, non-wedge) directions (1 1 1)
    Mesh (non-empty) directions (1 1 1)
    Boundary openness (-2.66602e-15 1.51299e-14 4.75939e-14) OK.
    Max cell openness = 2.21102e-16 OK.
    Max aspect ratio = 15.8197 OK.
    Minumum face area = 1.00095e-07. Maximum face area = 2.88462e-06.  Face area magnitudes OK.
    Min volume = 5.00473e-10. Max volume = 1.2275e-09.  Total volume = 0.00372.  Cell volumes OK.
    Mesh non-orthogonality Max: 0 average: 0
    Non-orthogonality check OK.
    Face pyramids OK.
    Max skewness = 4.99527e-06 OK.

Mesh OK.

End
It seems ok...

And, as far as I can see from the residuals and the Courant number, everything seems alright.

Summing up: this error happens when I run simulations on two computers. So, is there a problem when the parallel computation uses processors from two machines, i.e. in the communication between them?
__________________
Se Urso Vires Foge Tocando Gaita Para Hamburgo
guilha is offline   Reply With Quote

Old   September 27, 2013, 18:35
Default
  #32
Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 8,312
Blog Entries: 34
Rep Power: 84
wyldckat is just really nicewyldckat is just really nicewyldckat is just really nicewyldckat is just really nice
Greetings guilha,

To assess whether the problem is related to the communication between the two machines, try running with 16 cores on each machine, for 32 cores in total. This way you can isolate whether the problem is due to using too many cores, to a bad decomposition, or to the inter-machine communication.
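As a sketch of how that test could be launched with Open MPI (the node names g01/g02 are taken from the log messages above; the exact mpirun flags depend on your MPI installation, so adapt as needed), a hostfile can cap the number of ranks per node:

```shell
# Sketch, assuming Open MPI and the node names g01/g02 seen in the logs.
# Limit each machine to 16 slots so the 32 ranks split evenly across both.
cat > machines.txt << 'EOF'
g01 slots=16
g02 slots=16
EOF

# The case must first be decomposed into 32 subdomains with decomposePar.
# The run would then be launched like this (shown, not executed here):
echo "mpirun -np 32 -hostfile machines.txt rhoCentralFoam -parallel"
```

If this 16+16 run crashes like the 32+32 one did, the inter-machine communication becomes the prime suspect; if it runs fine, the problem is more likely the core count or the decomposition.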

In addition, there are a few other things that can affect this:
  1. If your case has any cyclic patches, then those might be introducing the problem here.
  2. Check what parameter is being used in the file "$WM_PROJECT_DIR/etc/controlDict" for the keyword "commsType".
  3. What file sharing system is being used?
    1. Is it NFS? If it is, which version is it using? NFS 3 or 4?
  4. What kind of connection is being used for communicating between machines? In other words: is it Ethernet or Infiniband?
  5. Run a full check on the mesh:
    Code:
    checkMesh -allGeometry -allTopology
Best regards,
Bruno
wyldckat is offline   Reply With Quote

Old   September 30, 2013, 10:02
Default
  #33
New Member
 
Join Date: Jan 2013
Location: Lisboa-Funchal
Posts: 20
Rep Power: 4
guilha is on a distinguished road
Hello Bruno and all other FOAMers, thanks for your help and patience

I did not have time to test the communication between the machines.

Now, following your list:

1 - Yes, I have cyclic patches, my case is almost two dimensional and LES;

2 - The "commsType" is set to nonBlocking. If I change it, do I have to recompile anything ?

3 - I do not know; I talked to the administrator and neither of us knows, probably because I do not really understand what file sharing means. In any case, it seems not to be NFS, since she said we do not use it;

4 - The communication between the machines is Ethernet.

5 - It is ok, and the output is just below (checkMesh with more options).

Code:
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.0.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build  : 2.0.1-51f1de99a4bc
Exec   : checkMesh -allGeometry -allTopology
Date   : Sep 30 2013
Time   : 11:24:57
Host   : g01
PID    : 47387
Case   : /home/guilha/cavidade_216kx24_les_smagorinsky_galego_euler
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create polyMesh for time = 0

Time = 0

Mesh stats
    points:           5440550
    faces:            15827778
    internal faces:   15337566
    cells:            5194224
    boundary patches: 6
    point zones:      0
    face zones:       0
    cell zones:       0

Overall number of cells of each type:
    hexahedra:     5194224
    prisms:        0
    wedges:        0
    pyramids:      0
    tet wedges:    0
    tetrahedra:    0
    polyhedra:     0

Checking topology...
    Boundary definition OK.
    Cell to face addressing OK.
    Point usage OK.
    Upper triangular ordering OK.
    Face vertices OK.
    Topological cell zip-up check OK.
    Face-face connectivity OK.
    Number of regions: 1 (OK).

Checking patch topology for multiply connected surfaces ...
    Patch               Faces    Points   Surface topology                   Bounding box
    entrada             6840     7150     ok (non-closed singly connected)   (-0.05 0.12 -0.06) (-0.05 0.22 0.06)
    topo                14784    15425    ok (non-closed singly connected)   (-0.05 0.22 -0.06) (0.2 0.22 0.06)
    saida               6840     7150     ok (non-closed singly connected)   (0.2 0.12 -0.06) (0.2 0.22 0.06)
    parede              28896    30125    ok (non-closed singly connected)   (-0.05 0 -0.06) (0.2 0.12 0.06)
    tras                216426   217622   ok (non-closed singly connected)   (-0.05 0 -0.06) (0.2 0.22 -0.06)
    frente              216426   217622   ok (non-closed singly connected)   (-0.05 0 0.06) (0.2 0.22 0.06)

Checking geometry...
    Overall domain bounding box (-0.05 0 -0.06) (0.2 0.22 0.06)
    Mesh (non-empty, non-wedge) directions (1 1 1)
    Mesh (non-empty) directions (1 1 1)
    Boundary openness (-2.66602e-15 1.51299e-14 4.75939e-14) OK.
    Max cell openness = 2.21102e-16 OK.
    Max aspect ratio = 15.8197 OK.
    Minumum face area = 1.00095e-07. Maximum face area = 2.88462e-06.  Face area magnitudes OK.
    Min volume = 5.00473e-10. Max volume = 1.2275e-09.  Total volume = 0.00372.  Cell volumes OK.
    Mesh non-orthogonality Max: 0 average: 0
    Non-orthogonality check OK.
    Face pyramids OK.
    Max skewness = 4.99527e-06 OK.
    Face tets OK.
    Min/max edge length = 0.000316061 0.005 OK.
    All angles in faces OK.
    Face flatness (1 = flat, 0 = butterfly) : average = 1  min = 1
    All face flatness OK.
    Cell determinant (wellposedness) : minimum: 0.031003 average: 0.371808
    Cell determinant check OK.
    Concave cell check OK.

Mesh OK.

End
After talking to the administrator, I got some other views of the problem. The problem could be in the decomposition: she told me that if I decompose the case with a script (in order to use 64 processors on two machines), maybe the machines or the MPI would be properly prepared and it would run without any problems. While browsing thread titles, I saw this one:
How to "reconstructPar" with multiple cores? I do not need to do the decomposition in parallel, but it raised the question: is the problem in my decomposition ? I have been doing it by simply typing decomposePar -force; if I wish to do it for more processors, do I have to change anything in the command ? I ask because I thought I could simply write the number of processors for the decomposition in the script, although I am not using scripts to do the decomposition.

And to finish: of the two cases I was running, now on SINGLE MACHINES, one stopped with almost the same errors (almost, because I compared the messages and they have very few differences). I should note that the two cases are the same, except that one has a more refined mesh; the one that gave the error is the more refined case, and the simulation time at which the error appeared is roughly the time for a particle at speed U0 to travel the length of the whole domain twice. The other case I am running to get more samples for statistics, and so far there are no problems. So I am totally puzzled. It does not seem to be anything unphysical, judging by the residuals and the Courant number (and the coarser mesh gave great results). The new error message is this one:

Code:
Mean and max Courant Numbers = 0.0354678 0.0909748
Time = 0.00225237

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 1.52504e-06, Final residual = 2.11393e-14, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 2.81461e-06, Final residual = 2.76122e-14, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 1.2395e-05, Final residual = 1.6969e-13, No Iterations 3
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for e, Initial residual = 5.03938e-06, Final residual = 4.94582e-13, No Iterations 3
ExecutionTime = 214443 s  ClockTime = 216156 s

Mean and max Courant Numbers = 0.0354677 0.0909748
Time = 0.00225242

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 1.52931e-06, Final residual = 2.19558e-14, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 2.82524e-06, Final residual = 2.91866e-14, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 1.24457e-05, Final residual = 1.75215e-13, No Iterations 3
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
[16] #0  Foam::error::printStack(Foam::Ostream&)[0] #0  Foam::error::printStack(Foam::Ostream&)--------------------------------------------------------------------------
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process.  Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption.  The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.  

The process that invoked fork was:

  Local host:          g01 (PID 38756)
  MPI_COMM_WORLD rank: 16

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
 in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[16] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[0] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[16] #2   in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[0] #2   in "/lib/x86_64-linux-gnu/libc.so.6"
[16] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/lib/x86_64-linux-gnu/libc.so.6"
[0] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[16] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[0] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[0] #5   in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[16] #5  

[16]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[16] #6  __libc_start_main[0]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[0] #6  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
[16] #7   in "/lib/x86_64-linux-gnu/libc.so.6"
[0] #7  

[0]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[g01:38740] *** Process received signal ***
[g01:38740] Signal: Floating point exception (8)
[g01:38740] Signal code:  (-6)
[g01:38740] Failing at address: 0x3f800009754
[g01:38740] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b4c82bf3480]
[g01:38740] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x2b4c82bf3405]
[g01:38740] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b4c82bf3480]
[g01:38740] [ 3] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE9calculateEv+0x3f9) [0x2b4c805161e9]
[g01:38740] [ 4] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE7correctEv+0x32) [0x2b4c8051c262]
[g01:38740] [ 5] rhoCentralFoam() [0x4236cb]
[g01:38740] [ 6] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2b4c82bdfead]
[g01:38740] [ 7] rhoCentralFoam() [0x41c709]
[g01:38740] *** End of error message ***
[16]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[g01:38756] *** Process received signal ***
[g01:38756] Signal: Floating point exception (8)
[g01:38756] Signal code:  (-6)
[g01:38756] Failing at address: 0x3f800009764
[g01:38756] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2add71707480]
[g01:38756] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x2add71707405]
[g01:38756] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2add71707480]
[g01:38756] [ 3] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE9calculateEv+0x3f9) [0x2add6f02a1e9]
[g01:38756] [ 4] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE7correctEv+0x32) [0x2add6f030262]
[g01:38756] [ 5] rhoCentralFoam() [0x4236cb]
[g01:38756] [ 6] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2add716f3ead]
[g01:38756] [ 7] rhoCentralFoam() [0x41c709]
[g01:38756] *** End of error message ***
[g01:38739] 1 more process has sent help message help-mpi-runtime.txt / mpi_init:warn-fork
[g01:38739] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
--------------------------------------------------------------------------
mpirun noticed that process rank 16 with PID 38756 on node g01 exited on signal 8 (Floating point exception).
--------------------------------------------------------------------------
__________________
Se Urso Vires Foge Tocando Gaita Para Hamburgo
guilha is offline   Reply With Quote

Old   October 6, 2013, 11:58
Default
  #34
Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 8,312
Blog Entries: 34
Rep Power: 84
wyldckat is just really nicewyldckat is just really nicewyldckat is just really nicewyldckat is just really nice
Hi guilha,

Quote:
Originally Posted by guilha View Post
I did not have time to test the communication between the machines.
Have you managed to do the tests I suggested?

Quote:
Originally Posted by guilha View Post
1 - Yes, I have cyclic patches, my case is almost two dimensional and LES;
OK, special caution is necessary for cyclic patches, as explained here: Cyclic patches and parallel postprocessing problems #8

Either your case is 2D or it isn't. In OpenFOAM, "2D" is when we use front and back patches defined as "empty" and there is only one cell thickness in the Z direction.

As for LES in 2D... I vaguely remember that it's not exactly a good idea... because the turbulence is actually 3D. But then again, I vaguely remember that OpenFOAM has got one or two tutorials working with LES in 2D.

Quote:
Originally Posted by guilha View Post
2 - The "commsType" is set to nonBlocking. If I change it, do I have to recompile anything ?
No need to rebuild. This controls how the parallel processes will handle the waiting period between data exchanges. And that's the default option, as shown here: https://github.com/OpenFOAM/OpenFOAM...ontrolDict#L58 - so there shouldn't be any problems here.
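For reference, the relevant entry lives in the OptimisationSwitches block of "$WM_PROJECT_DIR/etc/controlDict" and looks roughly like this (an excerpt-style sketch of the stock file; editing it takes effect at the next run, with no recompilation needed):

```
// $WM_PROJECT_DIR/etc/controlDict (excerpt, sketch)
OptimisationSwitches
{
    // Parallel communication mode: blocking | nonBlocking | scheduled
    commsType       nonBlocking;
}
```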

Quote:
Originally Posted by guilha View Post
3 - I do not know; I talked to the administrator and neither of us knows, probably because I do not really understand what file sharing means. In any case, it seems not to be NFS, since she said we do not use it;
Soooo, how are you able to share the files between the two machines, while the simulation is running? I ask this because OpenFOAM does not do this automatically, unless you use the strategy described here: Running OpenFOAM in parallel with different locations for each process


Quote:
Originally Posted by guilha View Post
4 - The communication between the machines is Ethernet.
Since it's only two machines, I guess that's enough.

Quote:
Originally Posted by guilha View Post
5 - It is ok, and the output is just below (checkMesh with more options).
Yes, the mesh seems to be perfectly fine.

Quote:
Originally Posted by guilha View Post
After talking to the administrator, I got some other views of the problem. The problem could be in the decomposition: she told me that if I decompose the case with a script (in order to use 64 processors on two machines), maybe the machines or the MPI would be properly prepared and it would run without any problems. While browsing thread titles, I saw this one:
How to "reconstructPar" with multiple cores? I do not need to do the decomposition in parallel, but it raised the question: is the problem in my decomposition ? I have been doing it by simply typing decomposePar -force; if I wish to do it for more processors, do I have to change anything in the command ? I ask because I thought I could simply write the number of processors for the decomposition in the script, although I am not using scripts to do the decomposition.
I don't think that's the problem. The only possible problems I can think of are:
  • Problems with the cyclic patches, as indicated in the first point.
  • Trying scotch vs simple vs hierarchical might be a good exercise, to make sure whether it's a problem with the decomposition method itself.
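As a sketch of what that exercise could look like in system/decomposeParDict (illustrative only: the subdomain counts, coefficients and cyclic patch names are placeholders to adapt to your case, not values from your setup):

```
// system/decomposeParDict -- illustrative sketch, not a drop-in file
numberOfSubdomains 32;

// Try each method in turn to rule out the decomposition method itself:
method          scotch;        // or: simple, hierarchical

simpleCoeffs
{
    n           (4 4 2);       // subdomain split per direction
    delta       0.001;
}

hierarchicalCoeffs
{
    n           (4 4 2);
    delta       0.001;
    order       xyz;
}

// Keep owner and neighbour faces of cyclic patches on the same processor
// (replace with your actual cyclic patch names):
//preservePatches (cyclic_half0 cyclic_half1);
```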

Quote:
Originally Posted by guilha View Post
And to finish: of the two cases I was running, now on SINGLE MACHINES, one stopped with almost the same errors (almost, because I compared the messages and they have very few differences). I should note that the two cases are the same, except that one has a more refined mesh; the one that gave the error is the more refined case, and the simulation time at which the error appeared is roughly the time for a particle at speed U0 to travel the length of the whole domain twice. The other case I am running to get more samples for statistics, and so far there are no problems. So I am totally puzzled. It does not seem to be anything unphysical, judging by the residuals and the Courant number (and the coarser mesh gave great results). The new error message is this one:

Code:
[...]

[16] #0  Foam::error::printStack(Foam::Ostream&)[0] #0  Foam::error::printStack(Foam::Ostream&)--------------------------------------------------------------------------

[...]

[16] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[0] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[16] #2   in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[0] #2   in "/lib/x86_64-linux-gnu/libc.so.6"
[16] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/lib/x86_64-linux-gnu/libc.so.6"
[0] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[16] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[0] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[0] #5   in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[16] #5  

[16]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[16] #6  __libc_start_main[0]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[0] #6  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
[16] #7   in "/lib/x86_64-linux-gnu/libc.so.6"
[0] #7  

[0]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[g01:38740] *** Process received signal ***
[g01:38740] Signal: Floating point exception (8)
[g01:38740] Signal code:  (-6)

[...]
The last occurrence of "Signal: Floating point exception" relates to the first few error messages that say "Foam::sigFpe::sigHandler(int)". This means that there was a division by zero somewhere. More specifically, it happened when:
Code:
 Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas>  > > > >::calculate()
was called, namely this method: https://github.com/OpenFOAM/OpenFOAM...siThermo.C#L33
The problem is that there is no clear indication of which operation produced the division by zero.


I would write frequent time snapshots near the crash time and then visually inspect where the fields are reaching unusually high or low values.
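In system/controlDict that could look roughly like this (a sketch with illustrative values; the restart time is based on the crash at ~0.00225 s reported above, and writeInterval should be chosen relative to your deltaT):

```
// system/controlDict -- time-control sketch for capturing the crash
startFrom       latestTime;   // or restart from a saved time shortly before
stopAt          endTime;
endTime         0.00230;      // just past the crash time

writeControl    timeStep;
writeInterval   10;           // write every 10 steps near the crash

purgeWrite      20;           // keep only the last 20 snapshots to save disk
```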

Best regards,
Bruno
wyldckat is offline   Reply With Quote

Old   October 14, 2013, 12:37
Default
  #35
New Member
 
Join Date: Jan 2013
Location: Lisboa-Funchal
Posts: 20
Rep Power: 4
guilha is on a distinguished road
Hello Bruno, thank you for your analysis. I have been very busy lately.

The test between the machines I did not do, because I cannot use x processors of one machine and y processors of the other. However, I did run a test (the same case, only with a bigger time step) on a single processor, and it failed here:
Code:
Mean and max Courant Numbers = 0.200668 0.571628
Time = 2.12e-05

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 0.000425058, Final residual = 9.90843e-09, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 0.000855807, Final residual = 9.09303e-09, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 0.00843877, Final residual = 1.09127e-11, No Iterations 6
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for e, Initial residual = 0.00155423, Final residual = 9.73893e-12, No Iterations 6
ExecutionTime = 1511.9 s  ClockTime = 1513 s

Mean and max Courant Numbers = 0.200544 0.571627
Time = 2.16e-05

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 0.000381028, Final residual = 9.74391e-09, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 0.000849587, Final residual = 8.09615e-09, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 0.00710211, Final residual = 8.182e-12, No Iterations 6
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for e, Initial residual = 0.00131698, Final residual = 1.03579e-11, No Iterations 6
ExecutionTime = 1547.3 s  ClockTime = 1549 s

Mean and max Courant Numbers = 0.200412 0.593619
Time = 2.2e-05

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 0.000346998, Final residual = 8.89842e-09, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 0.000814717, Final residual = 7.78361e-09, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 0.00670942, Final residual = 6.52104e-12, No Iterations 6
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for e, Initial residual = 0.00145455, Final residual = 9.90931e-08, No Iterations 3
ExecutionTime = 1567.72 s  ClockTime = 1569 s

Mean and max Courant Numbers = 0.200281 0.711964
Time = 2.24e-05

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 0.000390677, Final residual = 8.75177e-09, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 0.000795014, Final residual = 7.67302e-09, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 0.0075196, Final residual = 8.23954e-12, No Iterations 6
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
#0  Foam::error::printStack(Foam::Ostream&) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#2   in "/lib/libc.so.6"
#3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
#4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
#5  
 in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
#6  __libc_start_main in "/lib/libc.so.6"
#7  
 in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
Floating point exception
I have included the previous time steps on purpose, because the Courant number was growing too much. Something tells me that this error is due to the time step. After running this case with a smaller time step there are no problems, at least up to where I stopped (3e-05 s).

About my simulations: they are 3D LES, of course. What I meant by almost 2D was the type of flow, which is quasi-2D.

For the cases where my time step is smaller than the one I posted above, running in parallel, the errors are random. And I cannot keep all the results stored, since that would require a lot of storage.
But for the last test I did (relatively big time step, on a single processor), the error is not random; I ran the case twice and confirmed it. From the post-processing, the velocity grows at a sharp corner, and this is the reason why the Courant number increases, but I think it is consistent with the perfect-fluid solution. In this simulation, though, I think it becomes unstable due to the Courant increase; that is, with a smaller time step it might stay within the stability limit.

I also looked at the function you told me about, ePsiThermo, where there is an alpha. For certain boundary conditions (alphaSGS, muSGS and muTilda) I used standard values (as I could not find any values for these variables in the literature) which I saw in an OpenFOAM tutorial, and they are essentially 0.

Regarding the cyclic patches, from this link Cyclic patches and parallel postprocessing problems, I think I have this on the computer:
Code:
//- Keep owner and neighbour on same processor for faces in patches:
//  (makes sense only for cyclic patches)
//preservePatches (cyclic_half0 cyclic_half1);
And the scotch method. Are you suggesting that I should uncomment the last line?
__________________
Se Urso Vires Foge Tocando Gaita Para Hamburgo
guilha is offline   Reply With Quote

Old   October 14, 2013, 17:02
Default
  #36
Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 8,312
Blog Entries: 34
Rep Power: 84
wyldckat is just really nicewyldckat is just really nicewyldckat is just really nicewyldckat is just really nice
Hi guilha,

Quote:
Originally Posted by guilha View Post
I have included the previous time steps on purpose, because the Courant number was growing too much. Something tells me that this error is due to the time step. After running this case with a smaller time step there are no problems, at least up to where I stopped (3e-05 s).
The rule of thumb is to only allow a maximum Courant number of 0.5; if it still diverges, try lower values.
Higher values should only be used if you know what you are doing.
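As an aside: rather than hand-tuning deltaT, the Courant limit can be imposed automatically. A minimal sketch of the relevant system/controlDict entries, assuming your solver supports adjustable time stepping (the maxDeltaT value here is just an illustrative placeholder):

```
// system/controlDict (fragment) -- a sketch only, assuming the
// solver reads the adjustable time-step controls
adjustTimeStep  yes;     // let the solver adapt deltaT at run time
maxCo           0.5;     // rule-of-thumb Courant limit
maxDeltaT       1e-05;   // upper bound on deltaT (placeholder value)
```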

Quote:
Originally Posted by guilha View Post
For the cases where my time step is smaller than the one I posted in the code, running in parallel, the errors are random. And I cannot keep the results stored, since that uses a lot of memory.
Honestly, I suspect that this is one of those bugs that have already been fixed in the more recent versions of OpenFOAM. If you were able to create a small test case that replicates the errors you are observing, I could test it with the more recent versions of OpenFOAM.

Quote:
Originally Posted by guilha View Post
But for the last test I did with a relatively big time step (on a single processor), the error is not random: I ran the case twice and confirmed it. From the post-processing, the velocity grows at a sharp corner, and this is the reason the Courant number increases, but I think that is compatible with the perfect-fluid solution. In this simulation, though, I think it becomes unstable due to the Courant increase, i.e. with a smaller time step it might stay within the stability limit.
Like I wrote above: the maximum Courant number that should be allowed is 0.5 or lower, depending on your simulation.

Quote:
Originally Posted by guilha View Post
I also looked at the function you told me about, ePsiThermo, where there is an alpha. For certain boundary conditions (alphaSGS, muSGS and muTilda) I used standard ones which I saw in an OpenFOAM tutorial (as I could not find any values for these variables in the literature), and they are essentially 0.
Yeah, about that... don't trust the name "alpha". It tends to appear in many shapes and forms and many of them aren't even related. If I'm not mistaken, there are at least 3 kinds of alpha in OpenFOAM: as turbulence, as heat and as phase (multiphase).

Quote:
Originally Posted by guilha View Post
Regarding the cyclic patch: following this link, Cyclic patches and parallel postprocessing problems, I think I have this on my machine,
Code:
//- Keep owner and neighbour on same processor for faces in patches:
//  (makes sense only for cyclic patches)
//preservePatches (cyclic_half0 cyclic_half1);
And I use the scotch method. Are you suggesting that I should uncomment the last line?
Yes, uncomment that before running decomposePar. And don't forget to replace the names therein with yours, for example:
Code:
preservePatches (
    up_patch
    downPatch
    patch_left
    thenRight
);
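To tie this together, a sketch of a system/decomposeParDict combining the scotch method with preservePatches might look like the fragment below (the patch names are taken from the commented-out example in the post above; replace them with your own):

```
// system/decomposeParDict (fragment) -- a sketch only;
// patch names are from the commented-out example quoted above
numberOfSubdomains  32;

method              scotch;

// keep owner and neighbour of faces on these (cyclic) patches
// on the same processor
preservePatches     (cyclic_half0 cyclic_half1);
```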
Best regards,
Bruno

Old   February 15, 2014, 17:32
Default
  #37
New Member
 
Join Date: Jan 2013
Location: Lisboa-Funchal
Posts: 20
Good evening,

I am back in this thread because I have recently had a weird error. When I run my case on 16 or 24 processors (the cases tested), no problems appear; however, with more processors, such as 30, 32 or 64 (the cases tested), this error appears:

Code:
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.0.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build  : 2.0.1-51f1de99a4bc
Exec   : rhoCentralFoam -parallel
Date   : Feb 15 2014
Time   : 21:23:14
Host   : g01
PID    : 35702
Case   : /home/guilha/cavidade_LES_130kx24_smagorinsky_v1_perfil_power_law_v4
nProcs : 32
Slaves : 
31
(
g01.35703
g01.35704
g01.35705
g01.35706
g01.35707
g01.35708
g01.35709
g01.35710
g01.35711
g01.35712
g01.35713
g01.35714
g01.35715
g01.35716
g01.35717
g01.35718
g01.35719
g01.35720
g01.35721
g01.35722
g01.35723
g01.35724
g01.35725
g01.35726
g01.35727
g01.35728
g01.35729
g01.35730
g01.35731
g01.35732
g01.35733
)

Pstream initialized with:
    floatTransfer     : 0
    nProcsSimpleSum   : 0
    commsType         : nonBlocking
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

[g01:35706] *** Process received signal ***
[g01:35706] Signal: Segmentation fault (11)
[g01:35706] Signal code: Address not mapped (1)
[g01:35706] Failing at address: 0xfffffffe03990ad8
[g01:35706] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2ad84978a480]
[g01:35706] [ 1] /lib/x86_64-linux-gnu/libc.so.6(+0x728fa) [0x2ad8497ca8fa]
[g01:35706] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x74d64) [0x2ad8497ccd64]
[g01:35706] [ 3] /lib/x86_64-linux-gnu/libc.so.6(__libc_malloc+0x70) [0x2ad8497cf420]
[g01:35706] [ 4] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_Znwm+0x1d) [0x2ad84907268d]
[g01:35706] [ 5] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_Znam+0x9) [0x2ad8490727a9]
[g01:35706] [ 6] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam5error10printStackERNS_7OstreamE+0x128b) [0x2ad848ad81db]
[g01:35706] [ 7] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam7sigSegv10sigHandlerEi+0x30) [0x2ad848acaec0]
[g01:35706] [ 8] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2ad84978a480]
[g01:35706] [ 9] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam18processorPolyPatch10updateMeshERNS_14PstreamBuffersE+0x2da) [0x2ad84894af3a]
[g01:35706] [10] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam16polyBoundaryMesh10updateMeshEv+0x2b1) [0x2ad848951fc1]
[g01:35706] [11] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam8polyMeshC2ERKNS_8IOobjectE+0x10ea) [0x2ad8489a316a]
[g01:35706] [12] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libfiniteVolume.so(_ZN4Foam6fvMeshC1ERKNS_8IOobjectE+0x19) [0x2ad8462d25f9]
[g01:35706] [13] rhoCentralFoam() [0x41f624]
[g01:35706] [14] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2ad849776ead]
[g01:35706] [15] rhoCentralFoam() [0x41c709]
[5] #0  Foam::error::printStack(Foam::Ostream&)[g01:35706] *** End of error message ***
--------------------------------------------------------------------------
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process.  Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption.  The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.  

The process that invoked fork was:

  Local host:          g01 (PID 35707)
  MPI_COMM_WORLD rank: 5

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
 in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[5] #1  Foam::sigSegv::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[5] #2   in "/lib/x86_64-linux-gnu/libc.so.6"
[5] #3  Foam::processorPolyPatch::updateMesh(Foam::PstreamBuffers&) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[5] #4  Foam::polyBoundaryMesh::updateMesh()--------------------------------------------------------------------------
mpirun noticed that process rank 4 with PID 35706 on node g01 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
I ran the checkMesh utility with the options -allGeometry and -allTopology and everything was fine.

Old   February 15, 2014, 17:50
Default
  #38
Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 8,312
Blog Entries: 34
Hi guilha,

It could be a problem in the installation of OpenFOAM on one of the machines. Try running checkMesh in parallel, the same way you run rhoCentralFoam.
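For reference, running checkMesh in parallel follows the same pattern as the solver run; a command sketch (the processor count of 32 is an assumption matching the decomposition above):

```
# after decomposePar has created the processor* directories
mpirun -np 32 checkMesh -parallel
```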

And a few questions (I don't remember the details):
  1. How many cells does the mesh have?
  2. Does it have any special patches/boundary conditions? Such as cyclics, mapped or wedges?
  3. How many machines are being used for each processor distribution?
  4. What does decomposePar tell you for the 32 and 64 decompositions?
Best regards,
Bruno

Old   February 15, 2014, 18:37
Default
  #39
New Member
 
Join Date: Jan 2013
Location: Lisboa-Funchal
Posts: 20
Bruno, thanks a lot for your replies and all the support.

Running checkMesh in parallel does give an error, yes:

Code:
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.0.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build  : 2.0.1-51f1de99a4bc
Exec   : checkMesh -parallel
Date   : Feb 15 2014
Time   : 22:29:05
Host   : g04
PID    : 3313
Case   : /home/guilha/testes
nProcs : 32
Slaves : 
31
(
g04.3314
g04.3315
g04.3316
g04.3317
g04.3318
g04.3319
g04.3320
g04.3321
g04.3322
g04.3323
g04.3324
g04.3325
g04.3326
g04.3327
g04.3328
g04.3329
g04.3330
g04.3331
g04.3332
g04.3333
g04.3334
g04.3335
g04.3336
g04.3337
g04.3338
g04.3339
g04.3340
g04.3341
g04.3342
g04.3343
g04.3344
)

Pstream initialized with:
    floatTransfer     : 0
    nProcsSimpleSum   : 0
    commsType         : nonBlocking
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
[0] 
[0] 
[0] --> FOAM FATAL ERROR: 
[0] checkMesh: cannot open case directory "/home/guilha/testes/processor0"
[0] 
[0] 
FOAM parallel run exiting
[0] 
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD 
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 3313 on
node g04 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
My mesh has 3,136,272 cells.
I have cyclic boundary conditions.
decomposePar, I think, works perfectly fine; this is the output for 32 processors:
Code:
Processor 0
    Number of cells = 97416
    Number of faces shared with processor 1 = 2736
    Number of faces shared with processor 2 = 1243
    Number of faces shared with processor 3 = 1205
    Number of faces shared with processor 30 = 720
    Number of processor patches = 4
    Number of processor faces = 5904
    Number of boundary faces = 9750

Processor 1
    Number of cells = 98736
    Number of faces shared with processor 0 = 2736
    Number of faces shared with processor 12 = 1632
    Number of faces shared with processor 13 = 864
    Number of faces shared with processor 14 = 768
    Number of faces shared with processor 30 = 1560
    Number of processor patches = 5
    Number of processor faces = 7560
    Number of boundary faces = 8444

Processor 2
    Number of cells = 97046
    Number of faces shared with processor 0 = 1243
    Number of faces shared with processor 3 = 2014
    Number of faces shared with processor 3 = 3
    Number of faces shared with processor 7 = 1272
    Number of processor patches = 4
    Number of processor faces = 4532
    Number of boundary faces = 10102

Processor 3
    Number of cells = 97426
    Number of faces shared with processor 0 = 1205
    Number of faces shared with processor 2 = 2014
    Number of faces shared with processor 2 = 3
    Number of faces shared with processor 6 = 1289
    Number of faces shared with processor 7 = 103
    Number of processor patches = 5
    Number of processor faces = 4614
    Number of boundary faces = 9914

Processor 4
    Number of cells = 97976
    Number of faces shared with processor 5 = 2142
    Number of faces shared with processor 5 = 3
    Number of faces shared with processor 5 = 1
    Number of faces shared with processor 6 = 1222
    Number of processor patches = 4
    Number of processor faces = 3368
    Number of boundary faces = 11280

Processor 5
    Number of cells = 97168
    Number of faces shared with processor 4 = 2142
    Number of faces shared with processor 4 = 1
    Number of faces shared with processor 4 = 3
    Number of faces shared with processor 6 = 74
    Number of faces shared with processor 7 = 1224
    Number of processor patches = 5
    Number of processor faces = 3444
    Number of boundary faces = 11238

Processor 6
    Number of cells = 98412
    Number of faces shared with processor 3 = 1289
    Number of faces shared with processor 4 = 1222
    Number of faces shared with processor 5 = 74
    Number of faces shared with processor 7 = 2178
    Number of faces shared with processor 7 = 3
    Number of processor patches = 5
    Number of processor faces = 4766
    Number of boundary faces = 10228

Processor 7
    Number of cells = 97044
    Number of faces shared with processor 2 = 1272
    Number of faces shared with processor 3 = 103
    Number of faces shared with processor 5 = 1224
    Number of faces shared with processor 6 = 2178
    Number of faces shared with processor 6 = 3
    Number of processor patches = 5
    Number of processor faces = 4780
    Number of boundary faces = 9894

Processor 8
    Number of cells = 99552
    Number of faces shared with processor 9 = 2040
    Number of faces shared with processor 10 = 216
    Number of faces shared with processor 11 = 1392
    Number of faces shared with processor 24 = 1392
    Number of processor patches = 4
    Number of processor faces = 5040
    Number of boundary faces = 9880

Processor 9
    Number of cells = 98677
    Number of faces shared with processor 8 = 2040
    Number of faces shared with processor 10 = 1152
    Number of faces shared with processor 12 = 1480
    Number of faces shared with processor 13 = 750
    Number of faces shared with processor 24 = 1008
    Number of faces shared with processor 29 = 600
    Number of processor patches = 6
    Number of processor faces = 7030
    Number of boundary faces = 8224

Processor 10
    Number of cells = 98044
    Number of faces shared with processor 8 = 216
    Number of faces shared with processor 9 = 1152
    Number of faces shared with processor 11 = 1728
    Number of faces shared with processor 13 = 528
    Number of faces shared with processor 15 = 1260
    Number of processor patches = 5
    Number of processor faces = 4884
    Number of boundary faces = 9534

Processor 11
    Number of cells = 98040
    Number of faces shared with processor 8 = 1392
    Number of faces shared with processor 10 = 1728
    Number of processor patches = 2
    Number of processor faces = 3120
    Number of boundary faces = 11242

Processor 12
    Number of cells = 99813
    Number of faces shared with processor 1 = 1632
    Number of faces shared with processor 9 = 1480
    Number of faces shared with processor 13 = 2268
    Number of faces shared with processor 29 = 1224
    Number of faces shared with processor 30 = 576
    Number of processor patches = 5
    Number of processor faces = 7180
    Number of boundary faces = 8322

Processor 13
    Number of cells = 100166
    Number of faces shared with processor 1 = 864
    Number of faces shared with processor 9 = 750
    Number of faces shared with processor 10 = 528
    Number of faces shared with processor 12 = 2268
    Number of faces shared with processor 14 = 1512
    Number of faces shared with processor 15 = 1968
    Number of processor patches = 6
    Number of processor faces = 7890
    Number of boundary faces = 8342

Processor 14
    Number of cells = 100368
    Number of faces shared with processor 1 = 768
    Number of faces shared with processor 13 = 1512
    Number of faces shared with processor 15 = 1944
    Number of processor patches = 3
    Number of processor faces = 4224
    Number of boundary faces = 11580

Processor 15
    Number of cells = 100364
    Number of faces shared with processor 10 = 1260
    Number of faces shared with processor 13 = 1968
    Number of faces shared with processor 14 = 1944
    Number of processor patches = 3
    Number of processor faces = 5172
    Number of boundary faces = 10312

Processor 16
    Number of cells = 96459
    Number of faces shared with processor 17 = 2438
    Number of faces shared with processor 17 = 2
    Number of faces shared with processor 18 = 1392
    Number of faces shared with processor 19 = 96
    Number of processor patches = 4
    Number of processor faces = 3928
    Number of boundary faces = 11084

Processor 17
    Number of cells = 97625
    Number of faces shared with processor 16 = 2438
    Number of faces shared with processor 16 = 2
    Number of faces shared with processor 19 = 1248
    Number of faces shared with processor 22 = 336
    Number of faces shared with processor 23 = 1707
    Number of faces shared with processor 23 = 2
    Number of processor patches = 6
    Number of processor faces = 5733
    Number of boundary faces = 9469

Processor 18
    Number of cells = 97056
    Number of faces shared with processor 16 = 1392
    Number of faces shared with processor 19 = 1944
    Number of faces shared with processor 31 = 1416
    Number of processor patches = 3
    Number of processor faces = 4752
    Number of boundary faces = 9816

Processor 19
    Number of cells = 97107
    Number of faces shared with processor 16 = 96
    Number of faces shared with processor 17 = 1248
    Number of faces shared with processor 18 = 1944
    Number of faces shared with processor 22 = 1986
    Number of faces shared with processor 28 = 1512
    Number of faces shared with processor 31 = 528
    Number of processor patches = 6
    Number of processor faces = 7314
    Number of boundary faces = 8090

Processor 20
    Number of cells = 95424
    Number of faces shared with processor 21 = 1824
    Number of faces shared with processor 23 = 1320
    Number of processor patches = 2
    Number of processor faces = 3144
    Number of boundary faces = 11048

Processor 21
    Number of cells = 96816
    Number of faces shared with processor 20 = 1824
    Number of faces shared with processor 22 = 1560
    Number of faces shared with processor 23 = 192
    Number of faces shared with processor 26 = 96
    Number of faces shared with processor 27 = 1512
    Number of processor patches = 5
    Number of processor faces = 5184
    Number of boundary faces = 9412

Processor 22
    Number of cells = 96477
    Number of faces shared with processor 17 = 336
    Number of faces shared with processor 19 = 1986
    Number of faces shared with processor 21 = 1560
    Number of faces shared with processor 23 = 1584
    Number of faces shared with processor 26 = 1848
    Number of faces shared with processor 28 = 48
    Number of processor patches = 6
    Number of processor faces = 7362
    Number of boundary faces = 8042

Processor 23
    Number of cells = 96844
    Number of faces shared with processor 17 = 1707
    Number of faces shared with processor 17 = 2
    Number of faces shared with processor 20 = 1320
    Number of faces shared with processor 21 = 192
    Number of faces shared with processor 22 = 1584
    Number of processor patches = 5
    Number of processor faces = 4805
    Number of boundary faces = 9659

Processor 24
    Number of cells = 98250
    Number of faces shared with processor 8 = 1392
    Number of faces shared with processor 9 = 1008
    Number of faces shared with processor 25 = 2225
    Number of faces shared with processor 29 = 1176
    Number of processor patches = 4
    Number of processor faces = 5801
    Number of boundary faces = 9339

Processor 25
    Number of cells = 96462
    Number of faces shared with processor 24 = 2225
    Number of faces shared with processor 26 = 1224
    Number of faces shared with processor 27 = 1176
    Number of faces shared with processor 28 = 216
    Number of faces shared with processor 29 = 744
    Number of processor patches = 5
    Number of processor faces = 5585
    Number of boundary faces = 9407

Processor 26
    Number of cells = 97678
    Number of faces shared with processor 21 = 96
    Number of faces shared with processor 22 = 1848
    Number of faces shared with processor 25 = 1224
    Number of faces shared with processor 27 = 2250
    Number of faces shared with processor 28 = 2232
    Number of processor patches = 5
    Number of processor faces = 7650
    Number of boundary faces = 8150

Processor 27
    Number of cells = 98402
    Number of faces shared with processor 21 = 1512
    Number of faces shared with processor 25 = 1176
    Number of faces shared with processor 26 = 2250
    Number of processor patches = 3
    Number of processor faces = 4938
    Number of boundary faces = 10182

Processor 28
    Number of cells = 99528
    Number of faces shared with processor 19 = 1512
    Number of faces shared with processor 22 = 48
    Number of faces shared with processor 25 = 216
    Number of faces shared with processor 26 = 2232
    Number of faces shared with processor 29 = 1656
    Number of faces shared with processor 30 = 192
    Number of faces shared with processor 31 = 1536
    Number of processor patches = 7
    Number of processor faces = 7392
    Number of boundary faces = 8294

Processor 29
    Number of cells = 98184
    Number of faces shared with processor 9 = 600
    Number of faces shared with processor 12 = 1224
    Number of faces shared with processor 24 = 1176
    Number of faces shared with processor 25 = 744
    Number of faces shared with processor 28 = 1656
    Number of faces shared with processor 30 = 1320
    Number of processor patches = 6
    Number of processor faces = 6720
    Number of boundary faces = 8182

Processor 30
    Number of cells = 98856
    Number of faces shared with processor 0 = 720
    Number of faces shared with processor 1 = 1560
    Number of faces shared with processor 12 = 576
    Number of faces shared with processor 28 = 192
    Number of faces shared with processor 29 = 1320
    Number of faces shared with processor 31 = 2136
    Number of processor patches = 6
    Number of processor faces = 6504
    Number of boundary faces = 8838

Processor 31
    Number of cells = 98856
    Number of faces shared with processor 18 = 1416
    Number of faces shared with processor 19 = 528
    Number of faces shared with processor 28 = 1536
    Number of faces shared with processor 30 = 2136
    Number of processor patches = 4
    Number of processor faces = 5616
    Number of boundary faces = 9486

Number of processor faces = 87968
Max number of cells = 100368 (2.407444% above average 98008.5)
Max number of processor patches = 7 (51.35135% above average 4.625)
Max number of faces between processors = 7890 (43.50673% above average 5498)


Processor 0: field transfer
Processor 1: field transfer
Processor 2: field transfer
Processor 3: field transfer
Processor 4: field transfer
Processor 5: field transfer
Processor 6: field transfer
Processor 7: field transfer
Processor 8: field transfer
Processor 9: field transfer
Processor 10: field transfer
Processor 11: field transfer
Processor 12: field transfer
Processor 13: field transfer
Processor 14: field transfer
Processor 15: field transfer
Processor 16: field transfer
Processor 17: field transfer
Processor 18: field transfer
Processor 19: field transfer
Processor 20: field transfer
Processor 21: field transfer
Processor 22: field transfer
Processor 23: field transfer
Processor 24: field transfer
Processor 25: field transfer
Processor 26: field transfer
Processor 27: field transfer
Processor 28: field transfer
Processor 29: field transfer
Processor 30: field transfer
Processor 31: field transfer

End.
It is 32 processors on the same machine.

Old   February 15, 2014, 18:41
Default
  #40
New Member
 
Join Date: Jan 2013
Location: Lisboa-Funchal
Posts: 20
In the previous post the checkMesh output was shown without the decomposition; after decomposing, there is an error:

Code:
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.0.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build  : 2.0.1-51f1de99a4bc
Exec   : checkMesh -parallel
Date   : Feb 15 2014
Time   : 22:38:21
Host   : g04
PID    : 3429
Case   : /home/guilha/testes
nProcs : 32
Slaves : 
31
(
g04.3430
g04.3431
g04.3432
g04.3433
g04.3434
g04.3435
g04.3436
g04.3437
g04.3438
g04.3439
g04.3440
g04.3441
g04.3442
g04.3443
g04.3444
g04.3445
g04.3446
g04.3447
g04.3448
g04.3449
g04.3450
g04.3451
g04.3452
g04.3453
g04.3454
g04.3455
g04.3456
g04.3457
g04.3458
g04.3459
g04.3460
)

Pstream initialized with:
    floatTransfer     : 0
    nProcsSimpleSum   : 0
    commsType         : nonBlocking
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create polyMesh for time = 0

[g04:03433] *** Process received signal ***
[g04:03433] Signal: Segmentation fault (11)
[g04:03433] Signal code: Address not mapped (1)
[g04:03433] Failing at address: 0xfffffffe0351be78
[g04:03433] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b6814c14480]
[g04:03433] [ 1] /lib/x86_64-linux-gnu/libc.so.6(+0x728fa) [0x2b6814c548fa]
[g04:03433] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x74d64) [0x2b6814c56d64]
[g04:03433] [ 3] /lib/x86_64-linux-gnu/libc.so.6(__libc_malloc+0x70) [0x2b6814c59420]
[g04:03433] [ 4] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_Znwm+0x1d) [0x2b68144fc68d]
[g04:03433] [ 5] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_Znam+0x9) [0x2b68144fc7a9]
[g04:03433] [ 6] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam5error10printStackERNS_7OstreamE+0x128b) [0x2b6813f631db]
[g04:03433] [ 7] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam7sigSegv10sigHandlerEi+0x30) [0x2b6813f55ec0]
[g04:03433] [ 8] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b6814c14480]
[g04:03433] [ 9] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam18processorPolyPatch10updateMeshERNS_14PstreamBuffersE+0x2da) [0x2b6813dd5f3a]
[g04:03433] [10] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam16polyBoundaryMesh10updateMeshEv+0x2b1) [0x2b6813ddcfc1]
[g04:03433] [11] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam8polyMeshC1ERKNS_8IOobjectE+0xd0b) [0x2b6813e2b8bb]
[5] #0  Foam::error::printStack(Foam::Ostream&)[g04:03433] [12] checkMesh() [0x41b1d4]
[g04:03433] [13] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2b6814c00ead]
[g04:03433] [14] checkMesh() [0x407f79]
[g04:03433] *** End of error message ***
--------------------------------------------------------------------------
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process.  Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption.  The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.  

The process that invoked fork was:

  Local host:          g04 (PID 3434)
  MPI_COMM_WORLD rank: 5

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
 in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[5] #1  Foam::sigSegv::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[5] #2   in "/lib/x86_64-linux-gnu/libc.so.6"
[5] #3  Foam::processorPolyPatch::updateMesh(Foam::PstreamBuffers&) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[5] #4  Foam::polyBoundaryMesh::updateMesh() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[5] #5  Foam::polyMesh::polyMesh(Foam::IOobject const&)--------------------------------------------------------------------------
mpirun noticed that process rank 4 with PID 3433 on node g04 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
