[swak4Foam] swak4foam 0.4.2, groovyBC parallel diverging, openfoam extend 4.0

Old   November 22, 2020, 12:41
Default swak4foam 0.4.2, groovyBC parallel diverging, openfoam extend 4.0
  #1
New Member
 
Artem
Join Date: Apr 2014
Posts: 29
Dear all,

I am using foam-extend 4.0 and swak4Foam 0.4.2. Recently I ran into a problem when using the groovyBC feature in parallel. I have the following boundary condition for a scalar field (the electrical potential):

Code:
    
MENISCUS
{
    type            groovyBC;
    gradientExpression "normal()&(U^B0)";
    fractionExpression "0";
    value           uniform 0;
}
When I run the simulation in serial, everything looks normal and the residuals of the electrical potential equation are fine (the electrical potential equation uses the modified BC above; see the PotE line in the log below):

Code:
Courant Number mean: 0 max: 0.323067 velocity magnitude: 0.637633
Time = 0.0005

PIMPLE: iteration 1
BiCGStab:  Solving for Ux, Initial residual = 1, Final residual = 1.14672e-12, No Iterations 2
BiCGStab:  Solving for Uy, Initial residual = 1, Final residual = 3.16692e-12, No Iterations 2
BiCGStab:  Solving for Uz, Initial residual = 1, Final residual = 1.33198e-12, No Iterations 2
GAMG:  Solving for p, Initial residual = 1, Final residual = 0.00978763, No Iterations 370
time step continuity errors : sum local = 6.24569e-08, global = -9.6658e-09, cumulative = -9.6658e-09
GAMG:  Solving for p, Initial residual = 0.0240702, Final residual = 9.83996e-07, No Iterations 244
time step continuity errors : sum local = 1.4049e-09, global = -3.13278e-10, cumulative = -9.97908e-09
bounding k, min: -1.27743 max: 1.04867 average: 5.39138e-05
swak4Foam: Allocating new repository for sampledGlobalVariables
GAMG:  Solving for PotE, Initial residual = 1, Final residual = 8.72425e-07, No Iterations 119
ExecutionTime = 106.58 s  ClockTime = 106 s
However, if I switch the simulation to parallel mode, the electrical potential equation diverges:

Code:
Courant Number mean: 0 max: 0.323067 velocity magnitude: 0.637633
Time = 0.0005

PIMPLE: iteration 1
BiCGStab:  Solving for Ux, Initial residual = 1, Final residual = 1.9242e-08, No Iterations 2
BiCGStab:  Solving for Uy, Initial residual = 1, Final residual = 2.27917e-08, No Iterations 2
BiCGStab:  Solving for Uz, Initial residual = 1, Final residual = 2.98058e-08, No Iterations 2
GAMG:  Solving for p, Initial residual = 1, Final residual = 0.00964027, No Iterations 352
time step continuity errors : sum local = 6.15166e-08, global = -1.67908e-08, cumulative = -1.67908e-08
GAMG:  Solving for p, Initial residual = 0.024065, Final residual = 9.83998e-07, No Iterations 228
time step continuity errors : sum local = 1.40527e-09, global = -5.68618e-10, cumulative = -1.73594e-08
bounding k, min: -1.27755 max: 1.04845 average: 5.3917e-05
[0] swak4Foam: Allocating new repository for sampledGlobalVariables
[1] swak4Foam: Allocating new repository for sampledGlobalVariables
[2] swak4Foam: Allocating new repository for sampledGlobalVariables
[3] swak4Foam: Allocating new repository for sampledGlobalVariables
GAMG:  Solving for PotE, Initial residual = 1, Final residual = 344.02, No Iterations 1000
ExecutionTime = 111.69 s  ClockTime = 112 s
I tried various processor counts but did not succeed - the simulation always diverges after a few iterations in parallel mode with groovyBC. To summarise:
1. The simulation works without groovyBC in both serial and parallel mode.
2. The simulation works with groovyBC in serial mode.
3. The simulation does not work with groovyBC in parallel mode.

Has anybody encountered the same problem? Thank you very much in advance.

Regards,
Artem

Old   November 22, 2020, 20:09
Default
  #2
Assistant Moderator
 
Bernhard Gschaider
Join Date: Mar 2009
Posts: 4,225
Quote:
Originally Posted by Kombinator View Post

Usually groovyBC works in parallel without problems, and that expression should be unproblematic because it is purely local. I know it is not good manners to try to shift the blame, but could you check two things:
  • whether this also happens if you use a different linear solver than GAMG (for example PCG - see the sketch below this list)
  • how the case is decomposed (for big decompositions there are sometimes cells where the number of boundary faces - to other processors or to boundary conditions - is larger than the number of internal faces, and these can behave badly)
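For the first point, something along these lines for PotE in fvSolution would do - just a sketch, the preconditioner and tolerances are only a suggestion and should be adapted to your case:

Code:
PotE
{
    solver          PCG;
    preconditioner  DIC;
    tolerance       1e-06;
    relTol          0;
}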
__________________
Note: I don't use "Friend"-feature on this forum out of principle. Ah. And by the way: I'm not on Facebook either. So don't be offended if I don't accept your invitation/friend request

Old   November 23, 2020, 04:51
Default
  #3
New Member
 
Artem
Join Date: Apr 2014
Posts: 29
Quote:
Originally Posted by gschaider View Post
Dear Bernhard,

Thank you for your answer and for your hints.
I checked other solvers, and it seems that BiCGStab and PCG do indeed work with groovyBC in parallel, though they are much slower than GAMG (it now takes about 1500 iterations to reach a tolerance of 1e-7).
Regarding the decomposition - I use the simple method and 4 processors.

Code:
numberOfSubdomains 4;

method simple;

simpleCoeffs
{
    n		(1 1 4);
    delta	0.001;
}
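If the decomposition itself turns out to be part of the problem, I could also try the scotch method instead of simple (assuming scotch is available in this foam-extend build), for example:

Code:
numberOfSubdomains 4;

method scotch;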


May I ask whether it is possible to use GAMG with groovyBC in parallel? It would be roughly twice as fast for my simulation.
I tried changing the smoother and nCellsInCoarsestLevel in the GAMG settings, but as expected that did not change anything.

Code:
        solver          GAMG;
        tolerance       1e-06;
        relTol          0.0;
        smoother        GaussSeidel;
        cacheAgglomeration true;
        nCellsInCoarsestLevel 500;
        agglomerator    faceAreaPair;
        mergeLevels     1;
Regards,
Artem

Old   November 23, 2020, 08:40
Default
  #4
Assistant Moderator
 
Bernhard Gschaider
Join Date: Mar 2009
Posts: 4,225
Quote:
Originally Posted by Kombinator View Post

I'm afraid the problem is not groovyBC. It is the (physical) boundary condition in combination with AMG: I'm pretty sure that if you coded the same boundary condition directly in C++, it would fail in a similar way.


The problem is probably the way AMG works: it lumps cells together to get a coarser mesh, solves on that, and uses the solution as a starting point for the finer mesh. That works fine as long as the values do not change too fast inside a cell group (but if a group contains some kind of boundary layer, for instance, this breaks down). In parallel, things get worse because the algorithm can only group cells from a single processor and therefore cannot collect cells that it would have grouped together in serial.
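
For illustration only, here is a toy sketch of that idea on a 1D Poisson problem - plain C++, not the actual GAMG implementation in foam-extend; the pairwise agglomeration and the Jacobi smoother are simplified stand-ins for the real faceAreaPair agglomerator and smoothers:

Code:
// Toy two-level "agglomeration" cycle for -u'' = f on [0,1], u(0)=u(1)=0.
// Pairs of fine cells are lumped into coarse cells, the coarse problem is
// solved (here: heavily smoothed), and the result is injected back as the
// starting guess for the fine level - the idea described above.
#include <cstdio>
#include <vector>

// A few Jacobi sweeps on a uniform grid with spacing h.
static void smooth(std::vector<double>& u, const std::vector<double>& f,
                   double h, int sweeps)
{
    const int n = static_cast<int>(u.size());
    for (int s = 0; s < sweeps; ++s)
    {
        std::vector<double> uNew(u);
        for (int i = 1; i < n - 1; ++i)
        {
            uNew[i] = 0.5*(u[i - 1] + u[i + 1] + h*h*f[i]);
        }
        u.swap(uNew);
    }
}

int main()
{
    const int nFine = 64;
    const double h = 1.0/(nFine - 1);
    std::vector<double> u(nFine, 0.0), f(nFine, 1.0);

    smooth(u, f, h, 3);                        // pre-smoothing on the fine level

    // Agglomeration: lump pairs of fine cells into one coarse cell.
    // In parallel this pairing cannot cross processor boundaries, which is
    // why the coarse levels differ from the ones built in a serial run.
    const int nCoarse = nFine/2;
    const double hC = 1.0/(nCoarse - 1);       // coarse grid spacing
    std::vector<double> uC(nCoarse, 0.0), fC(nCoarse, 0.0);
    for (int i = 0; i < nCoarse; ++i)
    {
        uC[i] = 0.5*(u[2*i] + u[2*i + 1]);
        fC[i] = 0.5*(f[2*i] + f[2*i + 1]);
    }
    uC[0] = uC[nCoarse - 1] = 0.0;             // boundary values on the coarse level

    smooth(uC, fC, hC, 50);                    // "solve" the cheap coarse problem

    // Prolongation: inject the coarse result as the fine starting guess.
    for (int i = 0; i < nCoarse; ++i)
    {
        u[2*i] = u[2*i + 1] = uC[i];
    }
    u[0] = u[nFine - 1] = 0.0;                 // restore the fine boundary values

    smooth(u, f, h, 3);                        // post-smoothing on the fine level

    std::printf("u at the midpoint: %g\n", u[nFine/2]);
    return 0;
}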


Sorry. But that is where my wisdom ends. Maybe somebody with more knowledge about GAMG can help you (also: how old are your OF-sources? Maybe that problem was already addressed in foam-extend 4.1)
__________________
Note: I don't use "Friend"-feature on this forum out of principle. Ah. And by the way: I'm not on Facebook either. So don't be offended if I don't accept your invitation/friend request

Old   November 24, 2020, 08:42
Default
  #5
New Member
 
Artem
Join Date: Apr 2014
Posts: 29
Quote:
Originally Posted by gschaider View Post
Dear Bernhard,

It seems that there is indeed a problem somewhere below groovyBC. I tested different things and found that setReference behaves strangely after decomposition in foam-extend 4.0, which in turn also contributes to the divergence.
Anyway, I made a workaround - I added a piece of code to my solver that updates the boundary condition in the same way groovyBC would. So far this fix works, even with the GAMG solver. Here is the code for anyone who runs into the same problem; just replace "MENISCUS" with your patch name and replace PotE and my expression with your boundary field and expression:

Code:
//---This file updates the dynamic boundary condition for PotE
//---(the same gradient that groovyBC would compute from "normal()&(U^B0)")

//---Find the patch for the dynamic boundary condition
label patchID = mesh.boundaryMesh().findPatchID("MENISCUS");

if (patchID == -1)
{
    FatalError << "Patch MENISCUS not found!" << exit(FatalError);
}

//---Get the fixedGradient boundary field of PotE on that patch
fixedGradientFvPatchScalarField& PotEpatch =
    refCast<fixedGradientFvPatchScalarField>
    (
        PotE.boundaryField()[patchID]
    );

//---Patch face normals
const vectorField nf = PotEpatch.patch().nf();

//---Set the gradient on each face: grad(PotE) = n & (U ^ B0)
forAll(PotEpatch, faceI)
{
    vector UB = U.boundaryField()[patchID][faceI] ^ B0.boundaryField()[patchID][faceI];
    PotEpatch.gradient()[faceI] = UB & nf[faceI];
}
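For this to work, the MENISCUS entry for PotE in the 0 directory should then be a plain fixedGradient condition rather than groovyBC (the refCast above requires that), for example:

Code:
MENISCUS
{
    type            fixedGradient;
    gradient        uniform 0;
}
The gradient value here is only an initial placeholder; the code above overwrites it at run time.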
Thank you Bernhard for your help.

Regards,
Artem
