Puzzling behavior of BC/Wall Function Code in Parallel |
January 9, 2017, 14:59 |
Puzzling behavior of BC/Wall Function Code in Parallel
|
#1 |
New Member
Nate
Join Date: Oct 2013
Location: Amherst, MA
Posts: 13
Rep Power: 12 |
Hi All,
I'm programming a custom BC based on some of the wall function implementations in foam-extend-3.2. In serial, the code behaves nicely, but in parallel something weird is happening. Here's a simplification of my code: Code:
void tppsiLowReRoughWallTPFvPatchVectorField::updateCoeffs()
{
    if (updated())
    {
        return;
    }

    // Get patch index
    const label patchI = patch().index();

    // Look up fields from the object registry
    const volScalarField& nutw =
        db().lookupObject<volScalarField>("nut");

    const fvPatchVectorField& Uw =
        lookupPatchField<volVectorField, vector>("U");

    const scalarField magGradUw = mag(Uw.snGrad());

    Info<< "nut size: " << nutw.boundaryField()[patchI].size() << endl;
    Info<< "Uw size: " << Uw.size() << endl;
    Info<< "Face Cells size: " << patch().faceCells().size() << endl;

    vectorField& tppsiw = *this;

    // Loop over the faces of this patch
    forAll(nutw.boundaryField()[patchI], faceI)
    {
        /* Calculate and set tppsiw */
    }

    fixedValueFvPatchVectorField::updateCoeffs();
}
In serial, when I check the sizes of nut, Uw, and faceCells on the patch, I get the correct number of elements for the 3 patches on which this BC is set: Code:
nut size: 336
Uw size: 336
Face Cells size: 336
nut size: 688
Uw size: 688
Face Cells size: 688
nut size: 768
Uw size: 768
Face Cells size: 768
In parallel, however, all three sizes come back as zero: Code:
nut size: 0
Uw size: 0
Face Cells size: 0
nut size: 0
Uw size: 0
Face Cells size: 0
nut size: 0
Uw size: 0
Face Cells size: 0
Is there something I'm missing about the parallel implementation of BCs and/or lookups from the registry? Any tips on where to start digging to troubleshoot this? Thanks for your help! |
|
January 9, 2017, 22:50 |
More info...
|
#2 |
New Member
Nate
Join Date: Oct 2013
Location: Amherst, MA
Posts: 13
Rep Power: 12 |
So it seems that if I reduce the number of processors from 16 to 4 to 2, I get more and more faces included in the "size()" call. Does this mean that when running with 16 processors, only one processor is being used for the lookup (i.e. one that contains zero faces of the patch)?
This made me think to try "globalFaceZones" in decomposeParDict, so I created a faceZone from the faces on my 3 patches. This had no effect on the behaviour. |
|
January 10, 2017, 04:17 |
|
#3 |
Senior Member
Anton Kidess
Join Date: May 2009
Location: Germany
Posts: 1,377
Rep Power: 29 |
Of course every processor will see a different size after decomposition (depending on how much of the boundary is included in its subdomain). I'm guessing you are only seeing the master processor's size information.
__________________
*On twitter @akidTwit *Spend as much time formulating your questions as you expect people to spend on their answer. |
|
January 10, 2017, 10:52 |
|
#4 |
New Member
Nate
Join Date: Oct 2013
Location: Amherst, MA
Posts: 13
Rep Power: 12 |
Yes, my understanding (after testing this and seeing your response) is that the "size()" function runs on every processor, but that the "Info" statement only ever outputs information from the master processor. So clearly this is not a good way to debug in parallel.
I found a thread where someone had a similar issue: Info statements made it look like a field had zero size. The answer is to use "Pout" instead of "Info" and look at the output of all the processors. Here's that thread: .boundary() in parallel Hopefully this helps someone in the future! |