CFD Online Discussion Forums

CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   Main CFD Forum (https://www.cfd-online.com/Forums/main/)
-   -   Processor boundary error surface tension force (https://www.cfd-online.com/Forums/main/172717-processor-boundary-error-surface-tension-force.html)

jameswilson620 June 3, 2016 12:42

Processor boundary error surface tension force
 
5 Attachment(s)
All,

I have modified interFoam to include a different form of the CSF (continuum surface force) surface tension term. Instead of a force proportional to the interface curvature, I am simply adding a constant field whose sign may be positive or negative, reversing the direction of the force.

In an attempt to isolate the problem, I removed features from my solver one by one, only to find that the problem lies solely in the surface tension force field.

In pEqn.H from interFoam 2.3.0, the CSF is represented by:

Code:

   
    surfaceScalarField phig
    (
        (
            fvc::interpolate(interface.sigmaK())*fvc::snGrad(alpha1)
          - ghf*fvc::snGrad(rho)
        )*rAUf*mesh.magSf()
    );

where the first term, fvc::interpolate(interface.sigmaK())*fvc::snGrad(alpha1), is the surface tension force.

Here is what I have used instead in my modified interFoam:

Code:

    surfaceScalarField phig
    (
        (
            fvc::interpolate(surfaceTensionForce)*fvc::snGrad(alpha1)
          - ghf*fvc::snGrad(rho)
        )*rAUf*mesh.magSf()
    );

I have defined surfaceTensionForce in createFields, and it is updated in the main .C file as a series of constants that results in a uniform field:

Code:

                       
    surfaceTensionForce = interface.sigmaK();

    forAll(surfaceTensionForce, celli)
    {
        // Overwrite the curvature-based value with a uniform capillary
        // term inside the porous region: -phi*4*sigma*cos(theta)/d
        if ((porosity[celli] != 1) && capillaryPressurePorous)
        {
            surfaceTensionForce[celli] =
                -porosity[celli]*4*interface.sigma().value()
               *Foam::cos(contactAnglePorous*2*3.14159/360)
               /dPores.value();
        }
    }

As you will see in images 4 and 5, this operation is carried out successfully.

So here's the situation:

In serial mode, this implementation works flawlessly. In parallel, the capillary force at the liquid/vapor interface appears to be too high at the processor boundary, causing the liquid/vapor interface to regress back to the processor boundary. I have tried the scotch and simple decomposition methods with 3-6 processors. There is no problem in regions using the traditional interface.sigmaK() field, only in the modified, uniform cells along processor boundaries.

I have attached a few images:

1. Initial starting point for the simulation. By design, the liquid/vapor interface is set ON TOP/ACROSS the processor boundary so that the very next write interval will reveal whether or not any solver remedies/modifications have worked.

2. The next write interval in SERIAL. Gravitational and capillary forces pull the liquid/vapor interface down as expected.

3. Same as 2., but in PARALLEL. Note that the processor boundary is horizontal and aligns with the flat spots on the liquid/vapor interface. This behaviour is incorrect and clearly inconsistent with the volScalarField surfaceTensionForce that I have defined.

4. The surfaceTensionForce field in SERIAL. Note that interface.sigmaK() is active in the upper portion of the domain, while the modified region (uniform value of -398) is active in the lower region.

5. Same as 4., but in PARALLEL. Note the uniform, properly defined surfaceTensionForce field in the lower region where the interface gets, for lack of a better word, wonky.

The new surfaceTensionForce field is simply a modified version of the original; i.e., the field is a placeholder for whatever I want to put in it, so long as the units are consistent. I can't figure out what is happening leading up to the interpolation in pEqn.H (which I will investigate next): the volScalarField looks correct, so I can't explain why modifying it causes the solver to fail at the processor boundaries in the uniform region.

Any help or suggestions are welcome.

Thank you, James W.

jameswilson620 June 3, 2016 12:50

2 Attachment(s)
Here's the solver and test case; again, OpenFOAM 2.3.0.

1. Compile (the binary goes in FOAM_USER_APPBIN).

2. Open the test case and clean it:

Code:

sh cleanCase.sh

3. Run in serial:

Code:

interFoamTestCapillary

or in parallel:

Code:

mpirun -np 3 interFoamTestCapillary -parallel

jameswilson620 June 19, 2016 13:35

Bump

Anyone have any idea?

I've even modified interfaceProperties.C and .H to add the necessary fields and constants, mimicking the manner in which sigmaK is defined, and still fvc::interpolate() produces an inconsistent value at the processor boundaries. In other words, when I call fvc::interpolate() on a field with a uniform value of 300, I get a value of 150 at the processor boundaries.

jameswilson620 June 20, 2016 19:39

Hey, James, James here. I found a solution to our problem.

It turns out that after modifying the field surfaceTensionForce EXPLICITLY, you need to throw in a surfaceTensionForce.correctBoundaryConditions() call. We've since modified our code to look like this:

Code:


    forAll(porosity_, celli)
    {
        if ((porosity_[celli] != 1) && (capillaryPressurePorous_ == 1))
        {
            // 2004-Resi-numerical simulation of drop impact...
            K_[celli] = -porosity_[celli]*4/dPores_.value()
                       *cos(contactAnglePorous_.value()*convertToRad);
        }
    }

    // Refresh patch (including processor) values after the explicit update
    K_.correctBoundaryConditions();
    sigmaK_ = K_*sigma_;
    sigmaKf_ = fvc::interpolate(sigmaK_);

note the K_.correctBoundaryConditions();

Now the interesting thing is that this member function, correctBoundaryConditions(), is (must be) active at processor boundaries. So the question now is: why does modifying K_ explicitly, cell by cell, break fvc::interpolate(interface.sigmaK()) in pEqn.H in parallel but not in serial? In other words, what does correctBoundaryConditions() do at the processor boundaries?

My theory is that the definition of K_:

Code:

K_ = -fvc::div(nHatf_);
utilizes whatever libraries/utilities are necessary to carry out this operation (fvc::div()) as efficiently and accurately in parallel as in serial. The processor boundaries are then updated properly, without requiring the user to do so manually with .correctBoundaryConditions(). But when we manually modify K_ cell by cell, the processor boundaries have NOT been updated as they are in:

Code:

K_ = -fvc::div(nHatf_);
and therefore require the use of K_.correctBoundaryConditions().

What do we think?

tom_flint2012 October 30, 2017 17:31

parallel surface tension
 
Hi James,

Thank you so much for your posts on this topic. I had the exact same problem with a custom multiphaseEulerFoam solver: a custom surface tension force was working perfectly in serial but crashing the solver in parallel. I had been pulling my hair out for days before I found your post. I saw that you didn't get much kudos for your efforts, so I just wanted to say thank you. I'm guessing the appropriate libraries to evaluate the surface tension forces across the processor boundaries and correct them are implicitly called in the default case. There is even an explicit correction for the contact angle in the eulerFoam case, so I should probably have picked up on that being the problem far earlier. Anyway, thank you, and it's nice to hear of other people's multiphase work.

All the best,

Tom

