CFD Online Discussion Forums

[swak4Foam] swak4foam 0.4.2, groovyBC parallel diverging, openfoam extend 4.0 (https://www.cfd-online.com/Forums/openfoam-community-contributions/231889-swak4foam-0-4-2-groovybc-parallel-diverging-openfoam-extend-4-0-a.html)

Kombinator November 22, 2020 12:41

swak4foam 0.4.2, groovyBC parallel diverging, openfoam extend 4.0
 
Dear all,

I am using foam-extend 4.0 and swak4Foam 0.4.2. Recently I ran into a problem with parallel use of the groovyBC feature. I have the following boundary condition for a scalar field (the electric potential):

Code:

   
MENISCUS
    {
        type            groovyBC;
        gradientExpression "normal()&(U^B0)";
        fractionExpression "0";
        value          uniform 0;
    }
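In other words, the BC prescribes the normal gradient of the potential (PotE) as dPotE/dn = n · (U × B0) on MENISCUS; with fractionExpression "0" it acts as a pure Neumann condition.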

When I run the simulation in serial, everything looks normal and the residuals of the electric potential equation are fine (the electric potential equation uses the above-mentioned BC; see the PotE line below):

Code:

Courant Number mean: 0 max: 0.323067 velocity magnitude: 0.637633
Time = 0.0005

PIMPLE: iteration 1
BiCGStab:  Solving for Ux, Initial residual = 1, Final residual = 1.14672e-12, No Iterations 2
BiCGStab:  Solving for Uy, Initial residual = 1, Final residual = 3.16692e-12, No Iterations 2
BiCGStab:  Solving for Uz, Initial residual = 1, Final residual = 1.33198e-12, No Iterations 2
GAMG:  Solving for p, Initial residual = 1, Final residual = 0.00978763, No Iterations 370
time step continuity errors : sum local = 6.24569e-08, global = -9.6658e-09, cumulative = -9.6658e-09
GAMG:  Solving for p, Initial residual = 0.0240702, Final residual = 9.83996e-07, No Iterations 244
time step continuity errors : sum local = 1.4049e-09, global = -3.13278e-10, cumulative = -9.97908e-09
bounding k, min: -1.27743 max: 1.04867 average: 5.39138e-05
swak4Foam: Allocating new repository for sampledGlobalVariables
GAMG:  Solving for PotE, Initial residual = 1, Final residual = 8.72425e-07, No Iterations 119
ExecutionTime = 106.58 s  ClockTime = 106 s

However, if I switch the simulation to parallel, the electric potential equation diverges:

Code:

Courant Number mean: 0 max: 0.323067 velocity magnitude: 0.637633
Time = 0.0005

PIMPLE: iteration 1
BiCGStab:  Solving for Ux, Initial residual = 1, Final residual = 1.9242e-08, No Iterations 2
BiCGStab:  Solving for Uy, Initial residual = 1, Final residual = 2.27917e-08, No Iterations 2
BiCGStab:  Solving for Uz, Initial residual = 1, Final residual = 2.98058e-08, No Iterations 2
GAMG:  Solving for p, Initial residual = 1, Final residual = 0.00964027, No Iterations 352
time step continuity errors : sum local = 6.15166e-08, global = -1.67908e-08, cumulative = -1.67908e-08
GAMG:  Solving for p, Initial residual = 0.024065, Final residual = 9.83998e-07, No Iterations 228
time step continuity errors : sum local = 1.40527e-09, global = -5.68618e-10, cumulative = -1.73594e-08
bounding k, min: -1.27755 max: 1.04845 average: 5.3917e-05
[0] swak4Foam: Allocating new repository for sampledGlobalVariables
[1] swak4Foam: Allocating new repository for sampledGlobalVariables
[2] swak4Foam: Allocating new repository for sampledGlobalVariables
[3] swak4Foam: Allocating new repository for sampledGlobalVariables
GAMG:  Solving for PotE, Initial residual = 1, Final residual = 344.02, No Iterations 1000
ExecutionTime = 111.69 s  ClockTime = 112 s

I tried various processor counts but did not succeed: the simulation always diverges after a few iterations in parallel with groovyBC. To summarize:
1. The simulation works without groovyBC in both serial and parallel mode.
2. The simulation works with groovyBC in serial mode.
3. The simulation does not work with groovyBC in parallel mode.

Has anybody encountered the same problem? Thank you very much in advance.

Regards,
Artem

gschaider November 22, 2020 20:09

Quote:

Originally Posted by Kombinator (Post 788495)
[...]


Usually groovyBC works in parallel without problems, and that expression should be unproblematic because it is purely local. I know it is not good manners to try to shift the blame, but could you check two things:
  • whether this also happens if you use a different linear solver than GAMG (e.g. the PCG sketch below)
  • how the case is decomposed (for some decompositions there are cells whose number of boundary faces - to other processors or to boundary patches - is larger than their number of internal faces, and these sometimes behave badly)
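
For the first point, an entry along these lines in fvSolution should be enough for a quick test (just a sketch; the preconditioner choice is up to you):

Code:

PotE
{
    solver          PCG;
    preconditioner  DIC;
    tolerance       1e-06;
    relTol          0;
}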

Kombinator November 23, 2020 04:51

Quote:

Originally Posted by gschaider (Post 788519)
[...]

Dear Bernhard,

Thank you for your answer and for your hints.
I checked other linear solvers, and BiCGStab and PCG do indeed work with groovyBC in parallel, though they are much slower than GAMG (it now takes about 1500 iterations to reach a tolerance of 1e-7).
Regarding the decomposition - I use the simple method and 4 processors:

Code:

numberOfSubdomains 4;

method simple;

simpleCoeffs
{
    n                (1 1 4);
    delta        0.001;
}
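
If the slab-like split along one direction turns out to be part of the problem, I could also try a graph-based decomposition, for example something like this (assuming scotch is compiled into my foam-extend installation):

Code:

numberOfSubdomains 4;

method scotch;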



May I ask whether it is possible to use GAMG with groovyBC in parallel? That would make my simulation roughly twice as fast.
I tried changing the smoother and nCellsInCoarsestLevel in the GAMG settings, but as expected it did not change anything:

Code:

        solver          GAMG;
        tolerance      1e-06;
        relTol          0.0;
        smoother        GaussSeidel;
        cacheAgglomeration true;
        nCellsInCoarsestLevel 500;
        agglomerator    faceAreaPair;
        mergeLevels    1;

Regards,
Artem

gschaider November 23, 2020 08:40

Quote:

Originally Posted by Kombinator (Post 788544)
[...]


I'm afraid the problem is not groovyBC. It is the (physical) boundary condition in combination with AMG: I'm pretty sure that if you coded the same boundary condition in C++ it would fail in a similar way.


The problem is probably the way AMG works: it lumps cells together to get a coarser mesh, solves on that, and uses the solution as a starting point for the finer mesh. That works fine as long as the values inside a cell group don't change too fast (but if the cell group contains some kind of boundary layer, for instance, this assumption is broken). Parallel cases make things worse because the algorithm can only group cells from one processor and therefore can't merge cells that it would have merged in the serial case.
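(Roughly, in two-grid terms - a generic sketch, not foam-extend's exact implementation: the fine-level residual r = b - A x is restricted to the agglomerated level, a coarse system A_c e_c = R r is solved there, and the correction is prolonged back as x <- x + P e_c. With per-processor agglomeration the coarse operator A_c can never merge cells that sit on different processors, so the coarse levels near processor boundaries differ from the serial case.)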


Sorry, but that is where my wisdom ends. Maybe somebody with more knowledge about GAMG can help you (also: how old are your OF sources? Maybe that problem has already been addressed in foam-extend 4.1).

Kombinator November 24, 2020 08:42

Quote:

Originally Posted by gschaider (Post 788578)
[...]

Dear Bernhard,

It seems that there is definitely a problem below groovyBC. I tested different things and found that setReference behaves strangely after decomposition in foam-extend 4.0, which in turn also contributes to the divergence.
Anyway, I found a workaround: I added a piece of code to my solver that updates the boundary condition in the same way groovyBC would. So far this fix works, even with the GAMG solver. Here is the code for anyone who runs into the same problem; just replace "MENISCUS" with your patch name and replace PotE and my expressions with your own boundary field and expression:

Code:

//---This file describes dynamic boundary conditions

  //---Find a patch for the dynamic boundary condition
  label patchID = mesh.boundaryMesh().findPatchID("MENISCUS");
 
  if (patchID == -1)
  {
      FatalError << "Slip patch not found!" << exit(FatalError);
  }
 
  //---Cast the boundary field to its fixedGradient type
  fixedGradientFvPatchScalarField& PotEpatch =
      refCast<fixedGradientFvPatchScalarField>
      (
          PotE.boundaryField()[patchID]
      );
  //---Calculate the patch normal 
  const vectorField nf = PotEpatch.patch().nf();

  //---Calculate the boundary condition
  forAll(PotEpatch, faceI)
  {
    vector UB = U.boundaryField()[patchID][faceI] ^ B0.boundaryField()[patchID][faceI];
    PotEpatch.gradient()[faceI] = UB & nf[faceI];
  }
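
For context, the snippet needs to run inside the time loop before the potential equation is solved, roughly like this (the header name is just an example; the rest of the loop is the solver's own code):

Code:

while (runTime.loop())
{
    // ... momentum / pressure (PIMPLE) part of the solver ...

    #include "dynamicMeniscusBC.H"   // the snippet above

    // ... assemble and solve the PotE equation, which now sees the
    //     updated gradient on the MENISCUS patch ...
}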

Thank you, Bernhard, for your help.

Regards,
Artem

