
Harmonic mean of a volScalarField in parallel

#1 | akimbrell (Austin Kimbrell) | October 27, 2011, 03:28
Harmonic mean of a volScalarField in parallel

Hello all,

I need to multiply a quantity by a harmonic average of a volScalarField in my calculation. The quantity being averaged is defined at cell centers. I have come up with the following algorithm which seems to work properly in serial:
Code:
// dSMALL is assumed to be a small dimensionedScalar with the same dimensions
// as test (defined elsewhere), added to guard against division by zero
volScalarField Mult = test;
volScalarField reciprocal = 1/(test + dSMALL);

forAll(mesh.C(), celli)
{
    scalar sum = SMALL;
    const labelList& neighbors = mesh.cellCells()[celli];

    forAll(neighbors, n)
    {
        sum += reciprocal[neighbors[n]];
    }
    sum += reciprocal[celli];

    // harmonic mean of the cell and its neighbors
    Mult[celli] *= (neighbors.size() + 1)/sum;
}
How would I go about ensuring that this works properly in parallel? I have read a bit about the reduce function, but I don't really understand how it works.

Also, is there a better way to perform this calculation? If someone has a better approach, would they mind sharing it?

And before someone mentions it: I believe the built-in 'harmonic' interpolation does not apply to my case. From my understanding it performs a harmonic interpolation between cell P and each neighbor separately, whereas I need the harmonic mean of the center cell and all of its neighbors.
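To be explicit, for a cell P with N neighbors the quantity I am after is the plain harmonic mean (written as a formula rather than code, using phi for the field being averaged, i.e. test in the snippet above):
Code:
H_P = (N + 1) / ( 1/phi_P + 1/phi_1 + ... + 1/phi_N )
which is what the loop above computes, with SMALL added to the denominator to guard against division by zero.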

Thanks!

#2 | akidess (Anton Kidess) | October 27, 2011, 04:37

You only need a reduce if you want a global quantity, e.g. the global sum or the global average. In that case, reduce collapses, for example, all the local per-processor maxima into a single global maximum. Since you are only summing up locally over cells, you always stay within the same processor domain.
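As an illustration only (a minimal sketch reusing the test and mesh objects from your snippet), a global mean with reduce would look something like this:
Code:
scalar localSum = 0.0;
forAll(test, celli)
{
    localSum += test[celli];          // sum over the cells of this processor only
}
label nCells = mesh.nCells();         // cell count of this processor only

reduce(localSum, sumOp<scalar>());    // combine the per-processor sums into one global sum
reduce(nCells, sumOp<label>());       // same for the cell counts

Info<< "Global mean = " << localSum/nCells << endl;
But as said, that is not what you need here.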

What might be a problem, though, is that cells next to a processor boundary will, I believe, have fewer neighbors, because a processor boundary is a patch, not a layer of ghost cells. Reduce won't help you there; instead you'll have to use the gather and scatter functions of Pstream. Unfortunately I don't know any more than that.
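For reference, the usual gather/scatter pattern looks roughly like the sketch below (someLocalValue is just a placeholder for whatever you compute on each processor); I have not used it for this kind of cell-level problem, though:
Code:
// one slot per processor; each processor fills in its own contribution
scalarList procVals(Pstream::nProcs(), 0.0);
scalar someLocalValue = 0.0;                       // placeholder for a locally computed value
procVals[Pstream::myProcNo()] = someLocalValue;

Pstream::gatherList(procVals);     // the master collects the entry from every processor
Pstream::scatterList(procVals);    // the complete list is sent back to every processor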

#3 | akimbrell (Austin Kimbrell) | October 27, 2011, 12:31

OK, thank you, now I understand what the reduce operation is meant for. It is clearly not what I need, since I am performing a local operation on each cell.

I'll look into Pstream and see what I can find; if anyone else has experience with this sort of thing, please share.

#4 | cliffoi (Ivor Clifford) | October 28, 2011, 18:54

I think you might be missing something here. Since you are looping over cell neighbours, you need to specifically include those neighbours that lie on the opposite side of coupled patches, i.e. on other processors. You can forget about gather, scatter, etc., since the processor and other coupled boundaries handle all the communication for you; all you need to do is access the neighbouring cell values using patchNeighbourField().
Something along these lines might do the trick...

Code:
volScalarField Mult = test;
volScalarField reciprocal = 1/(test+SMALL);

scalarField sum(mesh.nCells(), 0.0);
scalarField count(mesh.nCells(), 0);   // scalarField so that count/sum below is well defined

forAll(mesh.C(), celli)
{
    sum[celli] = SMALL;
    count[celli] = 0;

    const labelList& neighbors = mesh.cellCells()[celli];
    forAll(neighbors, n)
    {
        label cell = neighbors[n];
        sum[celli] += reciprocal[cell];
    }
    sum[celli] += reciprocal[celli];
    count[celli] += neighbors.size()+1;
}

//- Now we include the boundaries
forAll(mesh.boundaryMesh(), patchI)
{
    const fvPatchField& pf = reciprocal.boundaryField()[patchI];
    const unallocLabelList& faceCells = pf.patch().faceCells();

    //- Coupled boundaries (processor, cyclic, etc)
    if (pf.coupled())
    {
        scalarField neighborReciprocal = pf.patchNeighbourField();

        forAll(faceCells, faceI)
        {
            sum[faceCells[faceI]] += neighborReciprocal[faceI];
            count[faceCells[faceI]]++;
        }
    }
    //- Standard boundaries - if you want to include them
    else
    {
        forAll(faceCells, faceI)
        {
            sum[faceCells[faceI]] += pf[faceI];
            count[faceCells[faceI]]++;
        }
    }
}


Mult.internalField() *= count/sum;

#5 | akimbrell (Austin Kimbrell) | October 30, 2011, 11:48

Ivor,

Thank you for the additional code; it makes perfect sense from an application standpoint. However, I had difficulties compiling it. Compilation stops at this line:
Code:
 const fvPatchField& pf = reciprocal.boundaryField()[patchI];
Any suggestions? The error states that the template is not fully specified and that additional template arguments are required. If we can get this line working, I believe the rest should fall into place.

Edit: I think I figured it out. You have to use fvPatchScalarField, for example, rather than fvPatchField directly; it depends on the type of reciprocal. I will post the code once it is working and debugged.
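In other words (my working assumption, since reciprocal here is a volScalarField), the offending line becomes:
Code:
// fvPatchScalarField is a typedef for fvPatchField<scalar>
const fvPatchScalarField& pf = reciprocal.boundaryField()[patchI];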

Last edited by akimbrell; October 30, 2011 at 14:02.

#6 | cliffoi (Ivor Clifford) | October 31, 2011, 11:36

Sorry, yes, it should be fvPatchScalarField. There will probably be some other errors as you go... I didn't debug the code, I just sent it as a suggested approach.

#7 | akimbrell (Austin Kimbrell) | November 1, 2011, 00:50

Well, I have finished debugging. Testing checks out on a 1D array: it returns the expected answer in parallel as well as in serial. The one necessary line that was missing is a correctBoundaryConditions() call on the reciprocal field; otherwise the processor patches do not return the correct values for the neighbors and the calculation goes wrong. The following is written using the U velocity field, hence the dimensions, but it can easily be adapted for any harmonic mean:

Code:
volScalarField Result
(
    IOobject
    (
        "Result",
        runTime.timeName(),
        mesh,
        IOobject::NO_READ,
        IOobject::AUTO_WRITE
    ),
    mesh,
    dimensionedScalar("zero", dimless, 0.0)
);

// small dimensioned value to guard against division by zero
dimensionedScalar dSMALL("dSMALL", dimVelocity, SMALL);
volScalarField reciprocal = 1/(mag(U) + dSMALL);

// make sure coupled (processor) patches hold up-to-date neighbor values
reciprocal.correctBoundaryConditions();

scalarField sum(mesh.nCells(), SMALL);
scalarField count(mesh.nCells(), 0);

// loop over all cells on this processor
forAll(mesh.C(), celli)
{
    const labelList& neighbors = mesh.cellCells()[celli];

    forAll(neighbors, n)
    {
        sum[celli] += reciprocal[neighbors[n]];
    }
    sum[celli] += reciprocal[celli];
    count[celli] += neighbors.size() + 1;
}

// loop over all boundaries to pick up neighbors across processor patches
forAll(mesh.boundaryMesh(), patchi)
{
    const fvPatchScalarField& pf = reciprocal.boundaryField()[patchi];
    const unallocLabelList& faceCells = pf.patch().faceCells();

    // coupled boundaries (processor, cyclic, etc.)
    if (pf.coupled())
    {
        scalarField neighborReciprocal = pf.patchNeighbourField();

        forAll(faceCells, facei)
        {
            sum[faceCells[facei]] += neighborReciprocal[facei];
            count[faceCells[facei]]++;
        }
    }
}

Result.internalField() = count/sum;
Result.dimensions().reset(dimensionSet(0, 1, -1, 0, 0, 0, 0));
Result.write();
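Since the whole point was to multiply another quantity by this harmonic mean, the last step is then just an ordinary field operation; for example (other is a made-up name for whichever volScalarField you want to scale):
Code:
volScalarField scaled = other*Result;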
Thanks a bunch for the help!

Tags
harmonic mean, not implemented, parallel

