
Bug in fvc discretization toolbox during parallel run


September 27, 2007, 13:59   #1
Adam Donaldson (adona058)
Description:
I have run into a problem during the parallelization of a solver which, unless I am mistaken, may be a bug in some of the discretization schemes / processor communications.

In the solver, I am performing the following higher-order derivative operations:

// surface-normal gradient of the Laplacian of gamma, scaled by the face area magnitudes
newFlux = fvc::snGrad(fvc::laplacian(gamma)) * mesh.magSf();

// explicit update of gamma using fvc::div(newFlux, gamma), with gamma interpolated by the vanLeer scheme
gammaNew = gammaOld - fvc::div(newFlux, gamma);
gammaNew.correctBoundaryConditions();

I am using a Gauss cubic corrected scheme for the laplacian, corrected for the surface normal gradient, and a vanLeer scheme for the div calculation.
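For reference, the relevant fvSchemes entries look roughly as follows. This is only a sketch: the operand names follow the expressions above, and the exact keyword spellings may differ slightly in 1.4.

divSchemes
{
    div(newFlux,gamma)      Gauss vanLeer;
}

laplacianSchemes
{
    laplacian(gamma)        Gauss cubic corrected;
}

snGradSchemes
{
    default                 corrected;
}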

When run on a single processor there is no problem (i.e. the solution approaches the expected one; see the attached profiles). When run on 4 processors, the solution changes depending on how the grid is decomposed. I have shown results for the grid split 4 times in the x-direction, and for the grid split in the x- and y-directions. The result for a single processor is overlaid with the grid split locations of the other simulations for comparison.
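For clarity, the two decompositions correspond to decomposeParDict settings roughly like the following. This is a sketch only, assuming the simple decomposition method; the delta value is just the usual default.

numberOfSubdomains  4;

method              simple;

simpleCoeffs
{
    n       (4 1 1);    // case 1: grid split 4 times in the x-direction
    delta   0.001;
}

// case 2: grid split once in x and once in y
// n        (2 2 1);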

It appears as if either the divergence term or the snGrad of laplacian(gamma) is not being properly evaluated across the processor boundaries. I have traced every other calculation related to this portion of the code, and as far as I can tell the error is being introduced through these expressions. Note that when run in 3D, variations in the value of gamma appeared to propagate outwards from the locations where processor boundaries were present between the decomposed sections of the grid. The grid in all cases is orthogonal, consisting of standard hex cells with equal cell widths along each face.

The scope of the simulations I am looking to perform is too large to run in a reasonable time on a single processor. If anyone knows how I can resolve this issue, your input would be appreciated.


Solver/Application:
Custom Solver, Bug related to fvc:: operations in parallel

Source file:
N/A

Testcase:
N/A

Platform:
linux cluster.

Version:
1.4

September 27, 2007, 14:04   #2
Adam Donaldson (adona058)
Single Processor Solution: [attached image]

Split 4 times Horizontally: [attached image]

Split once in each direction: [attached image]

September 27, 2007, 14:06   #3
Adam Donaldson (adona058)
As it may be difficult to note the differences by viewing them like this, the difference becomes relatively easy to see if you download them and overlay them (or view them sequentially).

September 27, 2007, 14:19   #4
Hrvoje Jasak (hjasak)
What happens if you use Gauss linear corrected instead?

Hrv

September 27, 2007, 15:03   #5
Adam Donaldson (adona058)
I applied the Gauss linear corrected scheme to the laplacian of gamma, and there was no visible change from the results shown above.

I have had similar results using the MUSCL, limitedVanLeer and limitedMUSCL schemes for the convection term.

I have only tried corrected for the snGrad term.

September 27, 2007, 15:48   #6
Adam Donaldson (adona058)
I have found a solution, although it seems rather odd. The newFlux variable I have been using was either defined locally (i.e. surfaceScalarField newFlux = expression) or defined in the createField.H file with an IOobject configuration including IOobject::NO_READ and IOobject::NO_WRITE (which I assumed would be fine, as I have no interest in writing these variables to the time folders). However, when IOobject::AUTO_WRITE is used, this issue is no longer present...

Is that because the AUTO_WRITE property needs to be set in order for the processor boundaries to be taken into account when subsequently working with the variable...?
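For context, the createField.H declaration I am referring to looks roughly like this. This is a sketch only; the initialiser shown here is illustrative, not copied from my code.

surfaceScalarField newFlux
(
    IOobject
    (
        "newFlux",
        runTime.timeName(),
        mesh,
        IOobject::NO_READ,
        IOobject::NO_WRITE      // the setting I have been using; AUTO_WRITE is the alternative mentioned above
    ),
    fvc::snGrad(fvc::laplacian(gamma))*mesh.magSf()   // illustrative initialiser
);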

September 27, 2007, 16:00   #7
Adam Donaldson (adona058)
Scratch that; it was the other modification I had made.

I have a zero-flux condition on the wall of the grid, which I need to correct the boundary conditions for. The error is being introduced when I call correctBoundaryConditions() for the newFlux variable... As far as I can tell, the processor boundary conditions are over-writing the current value of newFlux with whatever was stored during the previous time step.

September 27, 2007, 16:09   #8
Adam Donaldson (adona058)
This raises another issue for me, though: how do I ensure that newFlux is equal to zero where a zero-flux boundary condition exists, without over-writing the processor boundaries with the previous values? The velocity uses the correctBoundaryConditions() call in the pressure-velocity coupling loop without a similar problem; any idea why it is appearing with this variable?
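One thing I am considering trying, as a sketch only (not tested, and assuming wallFvPatch is the right patch type to check for and that boundaryField() gives non-const access in 1.4): instead of calling correctBoundaryConditions() on newFlux, zero the flux explicitly on the wall patches and leave the processor patches alone.

#include "wallFvPatch.H"    // at the top of the solver, for the isA check below

forAll(newFlux.boundaryField(), patchI)
{
    // only touch physical wall patches; skip processor patches so their
    // coupled values are not overwritten
    if (isA<wallFvPatch>(mesh.boundary()[patchI]))
    {
        newFlux.boundaryField()[patchI] = 0.0;
    }
}

That way the zero-flux wall condition would be enforced directly, while the processor patch values produced by the flux evaluation stay untouched. At least that is the idea.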
