CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Bugs (https://www.cfd-online.com/Forums/openfoam-bugs/)
-   -   Bug in fvc discretization toolbox during parallel run (https://www.cfd-online.com/Forums/openfoam-bugs/62542-bug-fvc-descretization-toolbox-during-paralle-run.html)

adona058 September 27, 2007 13:59

Description:
I have run into a problem while parallelizing a solver which, unless I am mistaken, may be a bug in some of the discretization schemes / processor communications.

In the solver, I am performing the following higher-order derivative operations:

newFlux = fvc::snGrad ( fvc::laplacian(gamma)) * mesh.magSf();
gammaNew = gammaOld - fvc::div(newFlux, gamma);
gammaNew.correctBoundaryConditions();

I am using a Gauss cubic corrected scheme for the laplacian, corrected for the surface normal gradient, and a vanLeer scheme for the div calculation.
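
For reference, the scheme selection described above would typically correspond to fvSchemes entries along the following lines (a sketch only; the dictionary keywords are assumptions based on the field names in the snippet):

// Sketch of the relevant fvSchemes entries (keyword names are assumptions;
// equivalent "default" entries would select the same schemes).
laplacianSchemes
{
    laplacian(gamma)    Gauss cubic corrected;
}

snGradSchemes
{
    default             corrected;
}

divSchemes
{
    div(newFlux,gamma)  Gauss vanLeer;
}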

When run on a single processor there is no problem (i.e. the solution approaches that which is expected; see the attached profiles). When run on 4 processors, the solution changes depending on the way the grid is decomposed. I have shown results for the grid split 4 times in the x-direction, and for the grid split in the x and y directions. The results for a single processor are overlaid with the grid split locations of the other simulations for comparison.
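
For context, the two decompositions described above would correspond to decomposeParDict settings roughly like the following (a sketch; the method and delta value are assumptions for illustration):

// Sketch of decomposeParDict for the two layouts compared above.
numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n           (4 1 1);    // split 4 times in x; (2 2 1) for the x-y split
    delta       0.001;
}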

It appears as if either the divergence term or the snGrad of laplacian(gamma) is not being properly evaluated across the processor boundaries. I have traced every other calculation related to this portion of the code, and as far as I can tell the error is being introduced through these expressions. Note that when run in 3D, variations in the value of gamma appeared to propagate outwards from the locations where the processor boundaries were present between the decomposed sections of the grid. The grid in all cases is orthogonal, consisting of standard hex cells with equal widths along each face.

The simulations I am looking to perform are too large to run in a reasonable time on a single processor. If anyone knows how I can resolve this issue, your input would be appreciated.


Solver/Application:
Custom Solver, Bug related to fvc:: operations in parallel

Source file:
N/A

Testcase:
N/A

Platform:
Linux cluster

Version:
1.4

adona058 September 27, 2007 14:04

Single Processor Solution:

http://www.cfd-online.com/OpenFOAM_D...s/126/5511.gif

Split 4 times Horizontally:

http://www.cfd-online.com/OpenFOAM_D...s/126/5512.gif

Split once in each direction:

http://www.cfd-online.com/OpenFOAM_D...s/126/5513.gif

adona058 September 27, 2007 14:06

As it may be difficult to spot the differences by viewing the images like this, the difference becomes relatively easy to see if you download them and overlay them (or view them sequentially).

hjasak September 27, 2007 14:19

What happens if you use Gauss linear corrected instead?

Hrv

adona058 September 27, 2007 15:03

I applied the Gauss linear corrected scheme to the laplacian of gamma, and there was no visible change from the results shown above.

I have had similar results using the MUSCL, limitedVanLeer and limitedMUSCL schemes for the convection term.

I have only tried corrected for the snGrad term.
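
For reference, the alternatives tried above would correspond to entries such as the following (a sketch; keyword names are assumptions, and only one div scheme would be active at a time):

// Sketch of the alternative scheme entries tried.
laplacianSchemes
{
    laplacian(gamma)    Gauss linear corrected;
}

divSchemes
{
    div(newFlux,gamma)  Gauss MUSCL;    // limitedVanLeer / limitedMUSCL were also tried
}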

adona058 September 27, 2007 15:48

I have found a solution, although it seems rather odd. The newFlux variable I have been using was either defined locally (i.e. surfaceScalarField newFlux = expression) or defined in the createField.H file with an IOobject configuration including IOobject::NO_READ and IOobject::NO_WRITE (which I assumed would be fine, as I have no interest in writing these variables to the time folders). However, when IOobject::AUTO_WRITE is used, the issue is no longer present...
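
For context, the createField.H declaration being discussed presumably looks something like the following (a sketch only; the construction from the expression is an assumption based on the snippet earlier in the thread):

// Sketch of the field declaration; everything except the name newFlux and
// the IOobject read/write flags is an assumption for illustration.
surfaceScalarField newFlux
(
    IOobject
    (
        "newFlux",
        runTime.timeName(),
        mesh,
        IOobject::NO_READ,
        IOobject::NO_WRITE   // switching to IOobject::AUTO_WRITE appeared to change the behaviour
    ),
    fvc::snGrad(fvc::laplacian(gamma)) * mesh.magSf()
);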

Is that because the AUTO_WRITE property needs to be set in order for the processor boundaries to be taken into account when subsequently working with the variable...?

adona058 September 27, 2007 16:00

Scratch the last post; it was the other modification I had made.

I have a zero-flux condition on the wall of the grid, which I need to correct the boundary conditions for. The error is being introduced when I call correctBoundaryConditions() on the newFlux variable... As far as I can tell, the processor boundary conditions are overwriting the current value of newFlux with whatever was stored during the previous time step.

adona058 September 27, 2007 16:09

This raises another issue for me, though: how do I ensure that newFlux is equal to zero where a zero-flux boundary condition exists without overwriting the processor boundaries with the previous values? The velocity uses the correctBoundaryConditions() call in the pressure-velocity coupling loop without a similar problem; any idea why it is appearing with this variable?
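
One possible approach (a sketch only, not a verified fix) would be to zero the flux explicitly on the non-coupled patches instead of calling correctBoundaryConditions(), so that processor patches are left untouched:

// Sketch: zero newFlux on ordinary (e.g. wall) patches only, skipping
// coupled patches such as processor boundaries. This is an assumption about
// how the zero-flux condition could be enforced, not a confirmed solution.
// (Recent OpenFOAM versions would need boundaryFieldRef() for write access.)
forAll(newFlux.boundaryField(), patchI)
{
    if (!newFlux.boundaryField()[patchI].coupled())
    {
        newFlux.boundaryField()[patchI] == 0.0;
    }
}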

