Home > Forums > Software User Forums > OpenFOAM > OpenFOAM Programming & Development

OF-2.2.x: Can't access cellZones in parallel run

February 13, 2014, 13:52   #1
A_Pete (Member; Join Date: Jul 2011; Posts: 54)
Hi,

I was writing a small functionObject to monitor p and U values in selected cellSets. The sets are created with topoSet and then converted to cellZones with setsToZones. Everything runs fine in a serial simulation, but not when I run it in parallel using mpirun: the cellZones then appear to be empty.

What I did before, when I had to handle cellZones, was to execute topoSet and setsToZones explicitly in every processor directory, which seemed to solve the problem. This time it did not.
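For reference, a minimal system/topoSetDict of the kind used in this workflow might look like the sketch below; the set name and the boxToCell source are purely illustrative, not taken from the original case:

Code:
// system/topoSetDict -- illustrative sketch for OF-2.2.x
actions
(
    {
        name    monitorCells;   // name of the cellSet to create
        type    cellSet;
        action  new;
        source  boxToCell;      // select all cells inside a bounding box
        sourceInfo
        {
            box (0 0 0) (0.1 0.1 0.1);
        }
    }
);
Running topoSet creates the cellSet, and setsToZones then converts it into a cellZone of the same name.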

My guess is that only one processor is looped over in my functionObject, and that the processors don't communicate in this cellZone loop the way they would in a loop over a volume field.


Code:
const fvMesh& mesh(refCast<const fvMesh>(obr_));
const volScalarField& p(mesh.lookupObject<volScalarField>("p"));
const volVectorField& U(mesh.lookupObject<volVectorField>("U"));
const cellZoneMesh& cellZones(mesh.cellZones());

forAll(cellZones, zoneID)
{
    const word& zoneName(cellZones.names()[zoneID]);
    const labelUList& cellZone(cellZones[zoneID]);

    // Accumulate sums over the cells of this zone
    scalar pSumme(0);
    scalar UXSumme(0);
    scalar UYSumme(0);
    scalar UZSumme(0);
    scalar count(0);

    forAll(cellZone, i)
    {
        pSumme  += p[cellZone[i]];
        UXSumme += U[cellZone[i]].x();
        UYSumme += U[cellZone[i]].y();
        UZSumme += U[cellZone[i]].z();
        count++;
    }

    // VSMALL guards against division by zero for empty zones
    scalar pAvg  = pSumme/(count + VSMALL);
    scalar UXAvg = UXSumme/(count + VSMALL);
    scalar UYAvg = UYSumme/(count + VSMALL);
    scalar UZAvg = UZSumme/(count + VSMALL);

    Info<< nl << zoneName << ": " << pAvg << tab
        << UXAvg << tab << UYAvg << tab
        << UZAvg << nl << endl;
}
The problem is that the cellZone the parallel run gives me output for appears to be empty; an "Info<< cellZone.size()" confirmed that. So how can I access all the cellZones when running in parallel?

The method I chose isn't the most elegant one, but I don't know of an easier way to do the averaging, and I'm not sure it even matters as long as the cellZones can't be found.

Anyone having an idea?

February 14, 2014, 04:24   #2
Problem solved
A_Pete (Member; Join Date: Jul 2011; Posts: 54)
Found a solution in an old thread in the forums:

Use the reduce() function to ensure communication between the processors. In my case this has to be done on the per-processor sums before averaging. An example is shown below:

Code:
reduce(pSumme, sumOp<scalar>());
reduce(count, sumOp<scalar>());

scalar pAvg = pSumme/(count+VSMALL);
That gave me the right values for an averaged p over all processors.

Last edited by A_Pete; February 17, 2014 at 01:37. Reason: Mistake in example

January 4, 2017, 03:05   #3
yuhan1991 / Yu Han (New Member; Join Date: Nov 2014; Posts: 3)
Hi Pete,
Your solution really helped me a lot. Thank you very much!
