CFD Online Discussion Forums

CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
-   OpenFOAM Programming & Development (http://www.cfd-online.com/Forums/openfoam-programming-development/)
-   -   Face weighted Velocity average at a boundary patch (http://www.cfd-online.com/Forums/openfoam-programming-development/108271-face-weighted-velocity-average-boundary-patch.html)

luhawk October 18, 2012 05:33

Face weighted Velocity average at a boundary patch
 
Hi everybody,

I am currently trying to implement the calculation of a cell-face-weighted velocity average in the flow direction, as a dimensionedScalar. The faces used for weighting shall be those on a defined boundary patch.
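In plain numbers, what I am after is the area-weighted average sum(|Sf_f|*(d & U_f)) / sum(|Sf_f|) over the faces of the patch. A small self-contained sketch of just that arithmetic (plain C++, not OpenFOAM; all names are made up):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <vector>

// Area-weighted average of the velocity component along a flow direction d
// over a set of patch faces: sum(|Sf|*(d & U)) / sum(|Sf|).
double faceWeightedAverage
(
    const std::vector<double>& magSf,             // face areas |Sf|
    const std::vector<std::array<double, 3>>& U,  // face velocities
    const std::array<double, 3>& d                // unit flow direction
)
{
    double num = 0.0;
    double den = 0.0;
    for (std::size_t i = 0; i < magSf.size(); ++i)
    {
        // d & U: velocity component in the flow direction on face i
        const double Ud = d[0]*U[i][0] + d[1]*U[i][1] + d[2]*U[i][2];
        num += magSf[i]*Ud;
        den += magSf[i];
    }
    return num/den;
}
```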

I tried:
Code:

dimensionedScalar weightedVelocity = (flowDirection & U)().weightedAverage(mesh.boundary()[patchID].Sf());
but the compiler output (as I interpret it) says that weightedAverage will only accept a volume mesh field, not a plain field:

Code:

error: no matching function for call to ‘Foam::GeometricField<double, Foam::fvPatchField, Foam::volMesh>::weightedAverage(const Foam::Field<Foam::Vector<double> >&)’
/opt/OpenFOAM/OpenFOAM-2.1.0/src/OpenFOAM/lnInclude/DimensionedField.C:381: note: candidates are: Foam::dimensioned<Type> Foam::DimensionedField<Type, GeoMesh>::weightedAverage(const Foam::DimensionedField<double, GeoMesh>&) const [with Type = double, GeoMesh = Foam::volMesh]
/opt/OpenFOAM/OpenFOAM-2.1.0/src/OpenFOAM/lnInclude/DimensionedField.C:399: note:                Foam::dimensioned<Type> Foam::DimensionedField<Type, GeoMesh>::weightedAverage(const Foam::tmp<Foam::DimensionedField<double, GeoMesh> >&) const [with Type = double, GeoMesh = Foam::volMesh]

so I tried using fvc::average as follows:

Code:

dimensionedScalar weightedVelocity = average(flowDirection & U.boundaryField()[patchID]);
It compiles, but when I run the code I get this dimension error:

Code:

--> FOAM FATAL ERROR:
LHS and RHS of - have different dimensions
    dimensions : [0 1 -1 0 0 0 0] - [0 0 0 0 0 0 0]

where weightedVelocity is the RHS. So my question is: why do I lose the dimensions when averaging, or are there any other ideas for getting the weighted average?

Thanks for your answers,

Cheers Lu

wyldckat October 19, 2012 15:44

Greetings Lukas,

I did a quick search on this subject and found this thread: http://www.cfd-online.com/Forums/ope...onditions.html

From it you'll see that they mention patchAverage. The source code for this application is located here: "applications/utilities/postProcessing/patch/patchAverage/patchAverage.C"
If I'm not mistaken, this code uses an area-weighted average on patches.

If you need to integrate, it looks like patchAverage has a sibling named patchIntegrate ;)

Best regards,
Bruno

luhawk October 25, 2012 04:06

Hi Bruno,

thanks for your answer; I also stumbled upon this thread during my search for a solution. The problem is that I want to integrate the calculation into my solver and use the average value as a corrector, and as far as I can see this is just for post-processing. But I'll have a closer look at patchAverage, maybe it'll help. I'll post the solution when I find it.

Cheers Luk

luhawk October 25, 2012 11:20

Hello again Bruno,
hello everybody,

I worked something out now, but only for serial computation. In parallel it does not work, because the area of the patch over which I want to average is not calculated properly.

My code for that is:
Code:

                label PatchID = mesh.boundaryMesh().findPatchID(patchName);

                scalar area = gSum(mesh.magSf().boundaryField()[PatchID]);

                reduce(area, sumOp<scalar>() );

                Info<< "area: "<< area<< endl;

As I said, it works neatly for serial computation and the prompt returns the right area (some hundred square millimeters). For parallel computation the prompt gives 0, which then leads to a crash of my computation since I divide by "area" later. Is the reduce command not the right thing to do? FYI, the patch on which I do the computation is a cyclic BC, if that has anything to do with it.

I appreciate your help,

Luk

/edit:
I added an output to see what the areas of the patch on the different processors are. It gives:
Code:

[3] area_proc: 0
[4] area_proc: 0
[5] area_proc: 0
[6] area_proc: 0
[7] area_proc: 0
[8] area_proc: 0
[9] area_proc: 0
[10] area_proc: 0
[11] area_proc: 0
[12] area_proc: 0
[13] area_proc: 0
[14] area_proc: 0
[15] area_proc: 0
[0] area_proc: 0
area: 0
[1] area_proc: 0
[2] area_proc: 0

Since I used simple decomposition (among others), this is definitely not true for processor 0, since the patch is located only there. Any clues as to why this does not work? Thanks!
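For reference, this is what I expect a gSum-style global sum to do conceptually: each processor sums the areas of its local patch faces, and the partial sums are reduced, so the total should not depend on the decomposition. A serial plain-C++ emulation of that idea (made-up names, no MPI, not the OpenFOAM API):

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Serial emulation of a gSum-style global sum: every "processor" sums the
// areas of the patch faces it owns, then the partial sums are reduced into
// one total. The total must equal the serial sum for any decomposition.
double globalAreaSum(const std::vector<std::vector<double>>& areasPerProc)
{
    double total = 0.0;
    for (const std::vector<double>& local : areasPerProc)
    {
        // local sum on this "processor", then the reduce step
        total += std::accumulate(local.begin(), local.end(), 0.0);
    }
    return total;
}
```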

wyldckat October 26, 2012 18:05

Hi Lukas,

That's strange, the code seems good enough to work in parallel as well.

Have you tried running patchAverage with your case, both in serial and parallel? Because if patchAverage is not working correctly, then the problem could be in OpenFOAM itself!

Best regards,
Bruno

luhawk October 27, 2012 08:52

Hi Bruno,

first of all thanks for concerning yourself with my stuff.

I tried to run patchAverage on my case in parallel. Since I want to run it on the x-component of the velocity in my solver, I first did the foamCalc operation:

Code:

lukas@cluster:/scratch/luk/testcase$ mpirun -np 16 foamCalc components U -parallel

Selecting calcType components
.
.
.
Selecting calcType components
.
.
.
Pstream initialized with:
    floatTransfer    : 0
    nProcsSimpleSum  : 0
    commsType        : nonBlocking
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

Time = 0
    Reading U
    Calculating Ux
    Calculating Uy
    Calculating Uz

Time = 0.5
    Reading U
    Calculating Ux
    Calculating Uy
    Calculating Uz

Time = 1
    Reading U
    Calculating Ux
    Calculating Uy
    Calculating Uz

.
.
.

Time = 15
    Reading U
    Calculating Ux
    Calculating Uy
    Calculating Uz

End

Finalising parallel run

So this seems to have worked: the Ux files are filled with values. Then I ran the patchAverage tool on the Ux field, which yields the following output:

Code:

lukas@cluster:/scratch/luk/testcase$ mpirun -np 16 patchAverage Ux INLET -parallel

.
.
.

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

Time = 0
    Reading volScalarField Ux
    Average of Ux over patch INLET[5] = 0

Time = 0.5
    Reading volScalarField Ux
    Average of Ux over patch INLET[5] = 0

Time = 1
    Reading volScalarField Ux
    Average of Ux over patch INLET[5] = 0

Time = 1.5
    Reading volScalarField Ux
    Average of Ux over patch INLET[5] = 0
.
.
.
Time = 14.5
    Reading volScalarField Ux
    Average of Ux over patch INLET[5] = 0

Time = 15
    Reading volScalarField Ux
    Average of Ux over patch INLET[5] = 0

End

So it seems like the patchAverage program itself is not parallelized correctly. To exclude foamCalc as an error source (which is rather unlikely anyway, since the Ux files in the processor time directories are filled with values), I also ran patchAverage on the pressure. This also gives an average of 0, which confirms this conclusion.

Up to now I'm a bit clueless about the reason, but I'll try patchAverage on the pitzDaily tutorial later in parallel to verify that the problem is not in my case.

So long, best regards,

Lukas

wyldckat October 27, 2012 12:39

Hi Lukas,

Which OpenFOAM version are you using?
I've used both OpenFOAM 2.1.1 and 2.1.x, and did a quick test with the tutorial "multiphase/interFoam/laminar/damBreak" in both serial and parallel with these commands:
Code:

blockMesh
cp 0/alpha1.org 0/alpha1
setFields

#for parallel mode
decomposePar
foamJob -p -s interFoam
foamJob -s -p patchAverage alpha1 lowerWall

#for serial mode
interFoam
patchAverage alpha1 lowerWall

And both worked without any problems!

I haven't tried on a cyclic patch yet... I'll have to find a tutorial first... I'll edit this post if and when I find one and test it.


edit: I've tested with the tutorial "incompressible/pimpleFoam/TJunctionFan", by modifying the last line in Allrun from this:
Code:

runApplication $application
To this:
Code:

runApplication decomposePar
runParallel $application 4
mv log.$application log.$application.parallel

runApplication $application

Copied the decomposition dictionary from the other tutorial:
Code:

cp $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/system/decompo* system/
Then ran:
Code:

./Allrun
and then patchAverage in both modes:
Code:

patchAverage p fan_half0
# visually check the last couple of values then run in parallel
foamJob -p -s patchAverage p fan_half0

The values were a bit different, but not null! And "fan_half0" is a cyclic patch!

edit2: The other corresponding cyclic patch "fan_half1" gave veeeery different values between serial and parallel! For comparison, at time = 1.5s and the "p" field:
  • fan_half1 parallel: 30.055
  • fan_half1 serial: 25.9596
  • fan_half0 serial: -73.0923
  • fan_half0 parallel: -69.0298
Since it's a motor fan, it makes sense that one side has negative pressure and the other side positive. Nonetheless, there is a substantial difference in values between serial and parallel.
I'm going to test with scotch decomposition instead of simple, because the cyclic patch was split between 2 processor sub-domains...

edit3: With scotch decomposition the patch wasn't split:
  • fan_half1 parallel: 26.1313
  • fan_half1 serial: 25.9596
  • fan_half0 serial: -73.0923
  • fan_half0 parallel: -72.9261
OK, this is a lot better. It does seem that patchAverage can't handle patches that cross processor boundaries very well.


Best regards,
Bruno

luhawk November 7, 2012 10:13

Hi Bruno,
hi all.

first of all, thank you Bruno for your help; your last sentence hinted at the actual bug, which is in both my code and the patchAverage tool. I used OpenFOAM 2.1.0 for all the previous and following stuff.

I ran the tutorials you gave and got the same results, but what enlightened me was the channel395 tutorial, which works with channelFoam. When a cyclic case is decomposed and a processor boundary falls on the cyclic patch, the patch is reassigned from e.g. "INLET" to "processorXtoprocessorYthroughINLET". The "INLET" patch, however, will still be present in the files. This means that

Code:

label PatchID = mesh.boundaryMesh().findPatchID(patchName);
where patchName is "INLET" still returns an ID, but this ID no longer corresponds to the patch "processorXtoprocessorYthroughINLET". If parts of the "INLET" patch are not covered by an inter-processor boundary, patchAverage will still give results, but wrong ones. If all of the "INLET" patch lies on inter-processor boundaries, the area of "INLET" will be zero, since all of the inlet is covered by one or several "processorXtoprocessorYthroughINLET" processorCyclic patches, and patchAverage will simply return 0.

Well, that is that. Since I still want my solver to work, I now need the patch IDs of all patches whose names contain the string stored in patchName. Is there something like a wildcard in OpenFOAM, like findPatchID(".*patchName") (I've seen that in the BCs of the motorBike tutorial)? (This does not work, I already tried it :D). Since the type of a single patch ID is label, I tried to extract the patch IDs of all patches whose names contain the string stored in patchName as a labelList, as follows:

Code:

        labelList UbarPatches;

        scalar count = 0;
        UbarPatches[count] = -1;

        const fvPatchList& patches = mesh.boundary();

        forAll(patches, patchI)
        {
                if (patches[patchI].name() == ".*patchName")
                {
                        UbarPatches[count] = patchI;
                        count += 1;
                }
        }

This compiles but gives a SegFault when run. Does anybody have an idea how to extract a list of the IDs of the patches named "somethingsomethingpatchName"? That would be very helpful.
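Conceptually, the matching I want is just a substring test plus a growable list of indices. A plain-C++ sketch of that logic (not the OpenFOAM API; names are made up):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Collect the indices of all patches whose name contains the given string,
// e.g. both "INLET" and "processor0toprocessor1throughINLET".
std::vector<int> findPatchesContaining
(
    const std::vector<std::string>& patchNames,
    const std::string& patchName
)
{
    std::vector<int> ids;
    for (int i = 0; i < static_cast<int>(patchNames.size()); ++i)
    {
        if (patchNames[i].find(patchName) != std::string::npos)
        {
            // grow the list instead of indexing past its end
            ids.push_back(i);
        }
    }
    return ids;
}
```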

Thanks in advance, your help is appreciated!

Best regards, Luk

wyldckat November 9, 2012 17:56

Hi Lukas,

There is a somewhat easier way to take care of this. Using as a reference the previous example tutorial:
  1. Edit the file "system/decomposeParDict".
  2. Add this line:
    Code:

    preservePatches (fan_half0 fan_half1);
  3. Now decompose the case and run the solver!
That line in the dictionary does what it says: it keeps each listed patch within a single processor!


Best regards,
Bruno

luhawk November 11, 2012 08:38

Works perfectly fine. Thank you very much!

sam1364 May 13, 2013 21:53

Hey guys,

I am trying to calculate the patch-average value of phi (or mass flux) when running in parallel. I have written the code below, which works well for serial computation. Is there any suggestion on how to make it work for parallel computation?

word inletPatchName = "inflow_cyclic";

//access boundary elements
const fvPatchList& patches = mesh.boundary();

//loop over boundaries
forAll(patches, patchI)
{
    const fvPatch& cPatch = patches[patchI];

    //check boundary name
    if (cPatch.name() == inletPatchName)
    {
        //loop over all faces of the selected boundary
        forAll(cPatch, faceI)
        {
            //calculate boundary face mass flux
            dimensionedScalar cFaceFlux =
                phi.boundaryField()[patchI][faceI]*magUbar;

            //sum face mass fluxes
            massFluxInlet += mag(cFaceFlux);
        }
    }
}

luhawk May 14, 2013 05:25

Hi pooyan,

Which decomposition method do you use? Have you used the preservePatches option for your cyclic patches?

Best regards,

Lukas

sam1364 May 14, 2013 09:07

I have used the simple method, and have included preservePatches (cyclic boundaries) in the decomposeParDict. Even when I use the simple method with (4 1 1), I expect that the decomposition is made in the x direction, so only one processor is working on my inflow_cyclic and therefore it should be able to calculate the right flux. Any suggestion?

Thanks

luhawk May 22, 2013 10:16

Hi pooyan,

sorry for replying late, but I was out of the office for a few days. Actually, I have no idea why it works in serial but not in parallel. But I think I have a general problem understanding your calculation. You want to calculate the "average value of the flux over your inlet patch", is this correct?

Your computation:

Quote:

Originally Posted by sam1364 (Post 427293)

//calculate boundary face mass flux
dimensionedScalar cFaceFlux
= phi.boundaryField()[patchI][faceI]*magUbar;
//sum face mass fluxes
massFluxInlet += mag(cFaceFlux);

First of all, why do you multiply by magUbar? phi is already a flux with [m^3/s], isn't it? If you multiply it by a velocity (magUbar), you get something with [m^4/s^2], which is not a flux.
You could average over the number of faces considered, but in my opinion you should calculate a flux average weighted by the area of the considered faces. Something like
Code:

scalar area = gSum(mesh.magSf().boundaryField()[patchI]); // total area of the boundary patch
dimensionedScalar faceWeightedFlux
(
      "faceWeightedFlux",
      dimensionSet(0, 3, -1, 0, 0, 0, 0),
      gSum(mesh.magSf().boundaryField()[patchI]*phi.boundaryField()[patchI])/area
);

COULD do the job (no guarantee, I did not try it). If I have misunderstood your issue, please excuse me.
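Numerically, the weighting above boils down to sum(|Sf_i|*phi_i) / sum(|Sf_i|). A plain-C++ sketch of just that arithmetic (made-up names, not OpenFOAM; again no guarantee it is the physically right weighting):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Area-weighted average of a per-face quantity phi over a patch:
// sum(|Sf|*phi) / sum(|Sf|).
double areaWeightedFlux
(
    const std::vector<double>& magSf,  // face areas |Sf|
    const std::vector<double>& phi     // per-face flux values
)
{
    double area = 0.0;
    double weighted = 0.0;
    for (std::size_t i = 0; i < magSf.size(); ++i)
    {
        area     += magSf[i];
        weighted += magSf[i]*phi[i];
    }
    return weighted/area;
}
```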

Regards Luk

