Differences between serial and parallel runs

September 11, 2008, 10:35   #1
Carsten Thorenz (Member)

Hi there!

I'm working at the Federal Waterway Research Institute in Germany and am currently evaluating the possibility of using OpenFOAM for our tasks. At the moment we're using a university code (NaSt3D) and a commercial code (Comet, CD-Adapco).

After successfully compiling OpenFOAM and all of its dependencies on our cluster (HP, ~550 cores, it was a $§%& to do it) I started a series of scaling tests. During these tests I observed that the serial version of lesInterFoam is more stable than the parallel version. In the appended logs you can see that the serial version keeps the timestep length (based on the CFL number) approximately constant, while the parallel version suddenly diverges and thus heavily decreases the timestep length.

I tried to tighten the solver tolerances a little (switched from relative to absolute tolerances) and observed that the parallel solver has more difficulty (i.e. needs more iterations) reaching the same residuals.

Can anybody help me out?

Thanks,

Carsten

*********************************************
Log of serial run:
*********************************************


thorenz@manager:/cwork/home/thorenz/OpenFOAM/thorenz-1.5/run/test/lesInter> lesInterFoam
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  1.5                                   |
|   \\  /    A nd           | Web:      http://www.OpenFOAM.org               |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Exec : lesInterFoam
Date : Sep 11 2008
Time : 15:16:43
Host : manager
PID : 3366
Case : /cwork/home/thorenz/OpenFOAM/thorenz-1.5/run/test/lesInter
nProcs : 1

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0


Reading environmentalProperties
Reading field pd

Reading field gamma

Reading field U

Reading/calculating face flux field phi

Reading transportProperties

Selecting incompressible transport model Newtonian
Selecting incompressible transport model Newtonian
Calculating field g.h

Selecting LES turbulence model oneEqEddy
oneEqEddyCoeffs
{
ck 0.07;
ce 1.05;
}

time step continuity errors : sum local = 1.90779e-06, global = -5.47287e-11, cumulative = -5.47287e-11
GAMG: Solving for pcorr, Initial residual = 1, Final residual = 0.00933394, No Iterations 2
GAMG: Solving for pcorr, Initial residual = 0.0512361, Final residual = 0.000329517, No Iterations 3
GAMG: Solving for pcorr, Initial residual = 0.0104888, Final residual = 8.43108e-05, No Iterations 3
time step continuity errors : sum local = 3.49924e-10, global = 5.72863e-20, cumulative = -5.47287e-11
Courant Number mean: 0.00198285 max: 0.0870045

Starting time loop

Courant Number mean: 0.00911609 max: 0.4
deltaT = 0.00459746
Time = 0.00459746

MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = 0 Max(gamma) = 1
MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = 0 Max(gamma) = 1
DILUPBiCG: Solving for k, Initial residual = 1, Final residual = 7.93338e-08, No Iterations 3
GAMG: Solving for pd, Initial residual = 1, Final residual = 0.00320971, No Iterations 2
GAMG: Solving for pd, Initial residual = 0.0167016, Final residual = 9.48718e-05, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.00380673, Final residual = 2.70428e-05, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.00432735, Final residual = 3.22664e-05, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.000806934, Final residual = 4.26216e-06, No Iterations 4
GAMG: Solving for pd, Initial residual = 0.000406143, Final residual = 3.9094e-06, No Iterations 2
GAMG: Solving for pd, Initial residual = 0.000464116, Final residual = 2.95828e-06, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.000104657, Final residual = 8.68852e-07, No Iterations 3
GAMG: Solving for pd, Initial residual = 2.20723e-05, Final residual = 1.40261e-07, No Iterations 4
time step continuity errors : sum local = 7.32954e-12, global = 3.03771e-19, cumulative = -5.47287e-11
ExecutionTime = 246.65 s ClockTime = 247 s

Courant Number mean: 0.00911652 max: 0.349656
deltaT = 0.0051234
Time = 0.00972087

MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = 0 Max(gamma) = 1
MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = 0 Max(gamma) = 1
DILUPBiCG: Solving for k, Initial residual = 0.0908774, Final residual = 2.74663e-07, No Iterations 2
GAMG: Solving for pd, Initial residual = 0.091115, Final residual = 0.000455897, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.0105126, Final residual = 5.57734e-05, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.0020783, Final residual = 1.13856e-05, No Iterations 4
GAMG: Solving for pd, Initial residual = 0.00458803, Final residual = 2.51287e-05, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.000711904, Final residual = 7.00776e-06, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.00020928, Final residual = 1.67668e-06, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.000257756, Final residual = 2.29231e-06, No Iterations 3
GAMG: Solving for pd, Initial residual = 5.52424e-05, Final residual = 3.29901e-07, No Iterations 4
GAMG: Solving for pd, Initial residual = 2.64538e-05, Final residual = 2.04274e-07, No Iterations 2
time step continuity errors : sum local = 7.20551e-12, global = 3.81516e-19, cumulative = -5.47287e-11
ExecutionTime = 425.79 s ClockTime = 426 s

Courant Number mean: 0.0101599 max: 0.38734
deltaT = 0.00529085
Time = 0.0150117

MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = 0 Max(gamma) = 1
MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = 0 Max(gamma) = 1
DILUPBiCG: Solving for k, Initial residual = 0.0281467, Final residual = 1.8522e-07, No Iterations 2
GAMG: Solving for pd, Initial residual = 0.110235, Final residual = 0.000596557, No Iterations 4
GAMG: Solving for pd, Initial residual = 0.0224509, Final residual = 0.000128465, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.00580166, Final residual = 4.29631e-05, No Iterations 2
GAMG: Solving for pd, Initial residual = 0.00530598, Final residual = 2.74172e-05, No Iterations 4
GAMG: Solving for pd, Initial residual = 0.00113712, Final residual = 6.09731e-06, No Iterations 4
GAMG: Solving for pd, Initial residual = 0.000345295, Final residual = 2.0767e-06, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.000285847, Final residual = 2.27749e-06, No Iterations 3
GAMG: Solving for pd, Initial residual = 6.69878e-05, Final residual = 5.03613e-07, No Iterations 4
GAMG: Solving for pd, Initial residual = 2.31656e-05, Final residual = 1.10384e-07, No Iterations 4
time step continuity errors : sum local = 4.11883e-12, global = 3.21917e-19, cumulative = -5.47287e-11
ExecutionTime = 614.52 s ClockTime = 615 s

Courant Number mean: 0.0104904 max: 0.374926
deltaT = 0.00564469
Time = 0.0206564

MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = 0 Max(gamma) = 1
MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = 0 Max(gamma) = 1
DILUPBiCG: Solving for k, Initial residual = 0.0149358, Final residual = 1.07743e-07, No Iterations 2
GAMG: Solving for pd, Initial residual = 0.111377, Final residual = 0.000546566, No Iterations 5
GAMG: Solving for pd, Initial residual = 0.0443143, Final residual = 0.000378587, No Iterations 2
GAMG: Solving for pd, Initial residual = 0.00348986, Final residual = 2.93438e-05, No Iterations 4
GAMG: Solving for pd, Initial residual = 0.00580005, Final residual = 3.25267e-05, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.00090779, Final residual = 5.57326e-06, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.000202187, Final residual = 8.91621e-07, No Iterations 4
GAMG: Solving for pd, Initial residual = 0.000261984, Final residual = 1.46064e-06, No Iterations 4
GAMG: Solving for pd, Initial residual = 8.81133e-05, Final residual = 6.75653e-07, No Iterations 3
GAMG: Solving for pd, Initial residual = 2.19288e-05, Final residual = 1.23339e-07, No Iterations 4
time step continuity errors : sum local = 5.2089e-12, global = -2.91032e-20, cumulative = -5.47287e-11
ExecutionTime = 806.01 s ClockTime = 807 s

End





*********************************************
Log of (small) parallel run:
*********************************************

thorenz@manager:/cwork/home/thorenz/OpenFOAM/thorenz-1.5/run/test/lesInter> ShowOutput caesar 8303.manager
JOBNAME: lesInter Running on cn113 in directory /cwork/home/thorenz/OpenFOAM/thorenz-1.5/run/test/lesInter
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  1.5                                   |
|   \\  /    A nd           | Web:      http://www.OpenFOAM.org               |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Exec : /home/thorenz/OpenFOAM/OpenFOAM-1.5/applications/bin/linux64GccDPOpt/lesInterFoam -parallel
Date : Sep 11 2008
Time : 15:58:06
Host : cn113
PID : 14560
Case : /cwork/home/thorenz/OpenFOAM/thorenz-1.5/run/test/lesInter
nProcs : 4
Slaves :
3
(
cn113.14561
cn113.14562
cn113.14563
)

Pstream initialized with:
floatTransfer : 1
nProcsSimpleSum : 0
commsType : nonBlocking

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0


Reading environmentalProperties
Reading field pd

Reading field gamma

Reading field U

Reading/calculating face flux field phi

Reading transportProperties

Selecting incompressible transport model Newtonian
Selecting incompressible transport model Newtonian
Calculating field g.h

Selecting LES turbulence model oneEqEddy
oneEqEddyCoeffs
{
ck 0.07;
ce 1.05;
}

time step continuity errors : sum local = 1.95216e-06, global = -2.19068e-08, cumulative = -2.19068e-08
GAMG: Solving for pcorr, Initial residual = 1, Final residual = 0.00789225, No Iterations 4
GAMG: Solving for pcorr, Initial residual = 0.124184, Final residual = 0.000873301, No Iterations 3
GAMG: Solving for pcorr, Initial residual = 0.00939945, Final residual = 5.94039e-05, No Iterations 5
time step continuity errors : sum local = 2.82378e-10, global = -9.42772e-16, cumulative = -2.19068e-08
Courant Number mean: 0.00197842 max: 0.0866421

Starting time loop

Courant Number mean: 0.00913374 max: 0.4
deltaT = 0.00461669
Time = 0.00461669

MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = -1.37317e-09 Max(gamma) = 1
MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = -2.09824e-08 Max(gamma) = 1
DILUPBiCG: Solving for k, Initial residual = 1, Final residual = 8.32085e-08, No Iterations 3
GAMG: Solving for pd, Initial residual = 1, Final residual = 0.00905161, No Iterations 2
GAMG: Solving for pd, Initial residual = 0.030631, Final residual = 0.000249962, No Iterations 5
GAMG: Solving for pd, Initial residual = 0.0279553, Final residual = 0.000215157, No Iterations 2
GAMG: Solving for pd, Initial residual = 0.0061856, Final residual = 3.29705e-05, No Iterations 4
GAMG: Solving for pd, Initial residual = 0.0010291, Final residual = 5.35965e-06, No Iterations 4
GAMG: Solving for pd, Initial residual = 0.000249627, Final residual = 2.24523e-06, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.000487803, Final residual = 3.99529e-06, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.000113732, Final residual = 8.44721e-07, No Iterations 4
GAMG: Solving for pd, Initial residual = 2.91779e-05, Final residual = 2.47841e-07, No Iterations 5
time step continuity errors : sum local = 1.42955e-11, global = -2.7288e-13, cumulative = -2.19071e-08
ExecutionTime = 107.67 s ClockTime = 111 s

Courant Number mean: 0.00913342 max: 0.350108
deltaT = 0.00514415
Time = 0.00976084

MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = -8.75307e-09 Max(gamma) = 1
MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = -2.95853e-09 Max(gamma) = 1
DILUPBiCG: Solving for k, Initial residual = 0.090851, Final residual = 2.60962e-07, No Iterations 2
GAMG: Solving for pd, Initial residual = 0.103055, Final residual = 0.000658385, No Iterations 7
GAMG: Solving for pd, Initial residual = 0.0534587, Final residual = 0.000471622, No Iterations 2
GAMG: Solving for pd, Initial residual = 0.00457859, Final residual = 3.11229e-05, No Iterations 6
GAMG: Solving for pd, Initial residual = 0.00674019, Final residual = 4.25671e-05, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.000957393, Final residual = 6.8116e-06, No Iterations 4
GAMG: Solving for pd, Initial residual = 0.000305495, Final residual = 2.13162e-06, No Iterations 4
GAMG: Solving for pd, Initial residual = 0.000338653, Final residual = 2.22752e-06, No Iterations 4
GAMG: Solving for pd, Initial residual = 9.83227e-05, Final residual = 8.53568e-07, No Iterations 4
GAMG: Solving for pd, Initial residual = 1.96595e-05, Final residual = 1.80463e-07, No Iterations 6
time step continuity errors : sum local = 7.49885e-12, global = -6.0042e-13, cumulative = -2.19077e-08
ExecutionTime = 195.31 s ClockTime = 199 s

Courant Number mean: 0.0101748 max: 0.387677
deltaT = 0.00530766
Time = 0.0150685

MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = -1.13155e-08 Max(gamma) = 1
MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = -3.14748e-09 Max(gamma) = 1
DILUPBiCG: Solving for k, Initial residual = 0.0280745, Final residual = 2.50708e-07, No Iterations 2
GAMG: Solving for pd, Initial residual = 0.135927, Final residual = 0.0008367, No Iterations 6
GAMG: Solving for pd, Initial residual = 0.12072, Final residual = 0.000949937, No Iterations 2
GAMG: Solving for pd, Initial residual = 0.0067283, Final residual = 6.02786e-05, No Iterations 6
GAMG: Solving for pd, Initial residual = 0.0110492, Final residual = 9.74167e-05, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.00167134, Final residual = 1.60604e-05, No Iterations 5
GAMG: Solving for pd, Initial residual = 0.000762118, Final residual = 5.91042e-06, No Iterations 4
GAMG: Solving for pd, Initial residual = 0.000432036, Final residual = 3.90235e-06, No Iterations 4
GAMG: Solving for pd, Initial residual = 0.0001005, Final residual = 9.58262e-07, No Iterations 5
GAMG: Solving for pd, Initial residual = 3.66069e-05, Final residual = 3.42478e-07, No Iterations 13
time step continuity errors : sum local = 1.73401e-11, global = -1.14339e-12, cumulative = -2.19088e-08
ExecutionTime = 291.55 s ClockTime = 296 s

Courant Number mean: 0.010502 max: 2.84169
deltaT = 0.000747113
Time = 0.0158156

MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = -4.97554e-09 Max(gamma) = 1
MULES: Solving for gamma
MULES: Solving for gamma
Liquid phase volume fraction = 0.634909 Min(gamma) = -2.17027e-08 Max(gamma) = 1
DILUPBiCG: Solving for k, Initial residual = 0.00198818, Final residual = 3.3322e-07, No Iterations 1
GAMG: Solving for pd, Initial residual = 0.279262, Final residual = 0.00205902, No Iterations 5
GAMG: Solving for pd, Initial residual = 0.25002, Final residual = 0.00172857, No Iterations 2
GAMG: Solving for pd, Initial residual = 0.017356, Final residual = 0.000112756, No Iterations 6
GAMG: Solving for pd, Initial residual = 0.0192683, Final residual = 0.00017176, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.00822677, Final residual = 6.5594e-05, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.00343701, Final residual = 2.67641e-05, No Iterations 4
GAMG: Solving for pd, Initial residual = 0.00187906, Final residual = 1.08599e-05, No Iterations 5
GAMG: Solving for pd, Initial residual = 0.00078521, Final residual = 7.42016e-06, No Iterations 3
GAMG: Solving for pd, Initial residual = 0.000177669, Final residual = 1.20193e-06, No Iterations 9
time step continuity errors : sum local = 2.14555e-12, global = -1.17363e-13, cumulative = -2.19089e-08
ExecutionTime = 378.51 s ClockTime = 383 s

Courant Number mean: 0.00147797 max: 0.109817
deltaT = 0.000896536
Time = 0.0167122

[snipped]

September 11, 2008, 11:15   #2
henry (Senior Member)

Solver convergence behaviour is indeed affected by running in parallel, because the effectiveness of the preconditioning and smoothing operations is reduced in the cells adjacent to the processor boundaries, which usually causes a modest increase in the number of iterations. But this does not look like the root of your problem; take a look at the solution and find out where in the domain the Courant number has jumped.

H

September 11, 2008, 11:24   #3
henry (Senior Member)

P.S. Your pdFinal tolerance appears to vary; are you running with a relative tolerance on pdFinal? It should be relTol = 0. Also, for efficiency you may benefit from running with fewer correctors and using the GAMG-preconditioned CG solver for pdFinal; take a look at the lesInterFoam tutorial.
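For readers landing on this thread later, a minimal sketch of the kind of fvSolution entry being suggested here, i.e. PCG with a GAMG preconditioner and relTol 0 for pdFinal. The tolerances and smoother settings below are placeholders written in 1.5-era dictionary syntax, not values from this case; the interFoam/lesInterFoam tutorials shipped with the installation are the authoritative reference.

    pdFinal PCG
    {
        preconditioner
        {
            preconditioner  GAMG;           // GAMG acts as preconditioner, PCG is the solver
            tolerance       1e-7;
            relTol          0;
            smoother        DICGaussSeidel;
            nPreSweeps      0;
            nPostSweeps     2;
            nFinestSweeps   2;
            cacheAgglomeration false;
            nCellsInCoarsestLevel 10;
            agglomerator    faceAreaPair;
            mergeLevels     1;
        };
        tolerance       1e-7;               // absolute tolerance
        relTol          0;                  // no relative tolerance on the final corrector
    };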

H

September 11, 2008, 11:32   #4
Carsten Thorenz (Member)

Hi Henry,

Thanks for your explanation about the effectiveness of preconditioning in parallel (I didn't know that).

Relative tolerance: yes. But the main problem is not influenced by this; when I switched to constant tolerances, the general behaviour stayed the same (i.e. more iterations in parallel than in serial runs, and sudden jumps in the velocity). I will try to find out where it occurs.

Many thanks,

Carsten

September 11, 2008, 12:08   #5
Mattijs Janssens (Senior Member)

You currently seem to be running with floatTransfer enabled (i.e. all doubles get converted to floats just before transfer). This will lose precision, which might affect the solver.
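Just to put a number on what gets thrown away (a standalone C++ sketch, not OpenFOAM code): a double carries roughly 15-16 significant decimal digits, a float only about 7, so values crossing a processor boundary are truncated to single precision when floatTransfer is on.

    // Standalone illustration of the precision lost when a double is sent as a float,
    // as happens for parallel transfers with floatTransfer enabled.
    #include <iostream>
    #include <iomanip>

    int main()
    {
        const double original = 1.2345678901234567e-6;        // some solver value
        const float  sent     = static_cast<float>(original); // what goes over the wire
        const double received = static_cast<double>(sent);    // what the neighbour computes with

        std::cout << std::setprecision(17)
                  << "original : " << original << '\n'
                  << "received : " << received << '\n'
                  << "abs error: " << original - received << '\n';
    }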

September 12, 2008, 02:48   #6
Carsten Thorenz (Member)

@Mattijs:

Thanks for your comment. That might result in severe truncation errors. I will try it.


@Henry:

Switching the pd solver to GAMG-preconditioned PCG increased reliability significantly. Thanks a lot. Now I can start some scaling tests on our machine.

Allow me one more question (though in the wrong forum): did I understand correctly that the pressure p is replaced by pd + rho*g*h? As I understand it, this removes the hydrostatic pressure from the solution and thus means "less work" for the solver. But won't this only be beneficial if the water level is located at z = 0, and otherwise be (strongly) counterproductive? Wouldn't it be more useful to remove the pressure field of the last timestep from the solution, i.e. express p_new = p_old + pd (with pd = p_new - p_old), and relax the updating of p_old?

Bye,

Carsten

September 12, 2008, 03:58   #7
henry (Senior Member)

Yes, pd is solved for rather than p, but this is not an approximation, it is a reformulation; note the corresponding change to the buoyancy term. This reformulation allows the buoyancy term to be handled in the pd equation in a much better way in the collocated grid arrangement we use.
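For readers following along, a sketch of the algebra behind this (generic notation; g.h here is the dot product of the gravity vector with the position vector, i.e. the field reported as "Calculating field g.h" in the logs above):

    p = p_d + \rho\,(\mathbf{g}\cdot\mathbf{h}),
    \qquad
    -\nabla p + \rho\,\mathbf{g} \;=\; -\nabla p_d - (\mathbf{g}\cdot\mathbf{h})\,\nabla\rho .

The combined pressure-gradient/buoyancy term on the right vanishes wherever the density is uniform, independent of where the free surface sits, so the reformulation does not rely on the water level being at z = 0.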

H

September 12, 2008, 05:51   #8
Carsten Thorenz (Member)

I agree that it is just a reformulation, but I'm not sure it is useful _in_general_. In my opinion it only has a positive effect if the water table is nearly horizontal?

Just a comment, no offence meant.

Carsten

September 12, 2008, 06:10   #9
Carsten Thorenz (Member)

@Mattijs:

I tried to set floatTransfer to 0 in my controlDict, but it is not recognized. Where should I set it? Is it hardcoded?

Thanks,

Carsten

September 12, 2008, 06:14   #10
henry (Senior Member)

It is a necessary reformulation for numerical reasons and does not introduce any error or restriction.

H

September 12, 2008, 09:58   #11
Carsten Thorenz (Member)

floatTransfer: I found that it must be changed in the _global_ controlDict.
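For reference, the entry in question looks roughly like this (a sketch of the OptimisationSwitches block in OpenFOAM-1.5/etc/controlDict; the surrounding entries and defaults may differ between versions):

    OptimisationSwitches
    {
        fileModificationSkew 10;

        // parallel transfer settings reported in the solver header above
        floatTransfer    0;            // 1 = truncate doubles to floats before transfer
        nProcsSimpleSum  0;
        commsType        nonBlocking;  // blocking, scheduled or nonBlocking
    }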

Finally, I think the behaviour is a bug. Well, let's say an over-optimization.

floatTransfer should default to 0. It is not reasonable to reduce the accuracy during MPI transmission in a double-precision build. Why should I use double precision at all if I lose it again during communication?

About performance: on our machine the code runs faster with floatTransfer set to 0, the stability is much better (forget all of the above about solver settings), and it needs fewer iterations to converge. The overhead of the MPI transmissions is negligible.

So, please change the floatTransfer default value to 0 (again).

Bye,

Carsten

September 12, 2008, 11:16   #12
henry (Senior Member)

floatTransfer is not a bug but a carefully formulated method to get the most accuracy for the least comms overhead, and it has proved very effective when running on cluster machines with Gbit networking, for which the overhead of the MPI transmissions is significant. However, it is not always appropriate, as you have found; I agree that it should not be the default, and I will change it in our git repository.

Thanks for sending details of your experience with floatTransfer.

H
