CFD Online Discussion Forums


akimbrell November 4, 2011 20:42

strange processor boundary behavior with linearUpwindV
 
1 Attachment(s)
Code used is a recent version of OpenFOAM-1.6-ext.

I observed the following behavior when running a test case in parallel - see attached image. The case is a point vortex with P = 0 boundaries and the U boundaries set to zeroGradient. I am running the standard icoFoam solver on 3 processors. For the discretization schemes I am using the default Gauss linear everywhere except for div(phi,U); for that one I use Gauss linearUpwindV Gauss linear. My max Courant number is ~0.3 and the grid resolution is reasonable.

The quantity shown is vorticity magnitude - you can see that when the colour-scale range is reduced, data is not being shared properly across the processor boundaries; they effectively show up in the solution when they should be invisible. I have run this case previously in serial and have never seen anything like this. Furthermore, most other schemes for the convection term do not show this behavior. I have also observed something similar using SFCDV, but it dissipates shortly after initialization.

Is this a known problem with the linearUpwindV scheme? I have also tried pure linearUpwind and did not see this problem at all.
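For reference, the convection setup described above corresponds to a divSchemes block along these lines (a minimal sketch covering only the div entries; all other schemes are left at the Gauss linear defaults mentioned):
Code:

divSchemes
{
    default         Gauss linear;
    div(phi,U)      Gauss linearUpwindV Gauss linear;
}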

alberto November 5, 2011 13:45

This is typically due to a missing update of the BCs. If you can reproduce it in 2.0.x, please report it as a bug at http://www.openfoam.com/mantisbt/main_page.php .

david February 11, 2013 15:56

http://www.openfoam.org/mantisbt/view.php?id=676

https://github.com/OpenFOAM/OpenFOAM...01782e30bd4291

Regards
David

s.m November 6, 2013 04:53

Hi
would you please tell me how linearUpwind can be defined in OpenFOAM-1.6-ext?
E.g. linearUpwind in OpenFOAM 2.2.0 is defined in this way:
div(phi,U) Gauss linearUpwindV grad(U);

I want to know how I should define it in OpenFOAM-1.6-ext.

Thank you very much :)

wyldckat November 9, 2013 14:16

Greetings s.m,

You can find tutorials that use this scheme by running:
Code:

grep -R "linearUpwindV" $FOAM_TUTORIALS/
Best regards,
Bruno

ma-tri-x May 18, 2014 20:04

compressibleInterFoam
 
Hi !

I have worked for several months on the simulation of cavitation bubbles. I started with the compressibleInterFoam solver and modified it, but the problem also remains with the original one(s) (versions 2.1.1, 2.2.1, 2.3.0):

When at least one of the processor patches crosses the interface, the velocity field is not computed correctly. You observe numerical fragments parallel to the patch, like in the following picture:
https://www.dropbox.com/s/a4cr4qpyoqjhue2/bug.png

The only version I have found that handles the decomposition correctly is the openfoam-extend version.

Has anyone observed this? Is this the same error as discussed here?

Furthermore, I discovered that even without decomposing, a 2D axisymmetric case overestimates the velocity in the interface cells directly at the axis. So if you place a bubble that will collapse onto the axis, it unphysically gets thinner at the axis. This also seems to be no problem in the extend version 3.0 (with exactly the same input files).

Regards,
Max

david May 20, 2014 06:04

Hi Max

Do you see the numerical fragments only with linearUpwindV or also with other schemes? I think it would be good to report this bug. Do you have a simple case that demonstrates your problems?

Regards
David

ma-tri-x May 20, 2014 12:13

I don't use linearUpwind
 
1 Attachment(s)
Hi David!

I don't use linearUpwind. My fvSchemes looks like this:
Code:

ddtSchemes
{
    default        Euler;
}

gradSchemes
{
    default        Gauss linear;
}

divSchemes
{
    default             Gauss vanLeer;
    div(phirb,alpha)    Gauss interfaceCompression 1;
}

laplacianSchemes
{
    default             Gauss linear corrected;
}

interpolationSchemes
{
    default             linear;
}

snGradSchemes
{
    default             none;
//  default             Gauss skewCorrected linear;
    snGrad(pd)          limited 0.5;
    snGrad(rho)         limited 0.5;
    snGrad(alpha1)      limited 0.5;
    snGrad(p_rgh)       limited 0.8;
    snGrad(p)           limited 0.8;
}

fluxRequired
{
    default             none;
    p_rgh;
    p;
    pcorr;
    alpha1;
}

A simple example which is reproducible... erm... I don't know, and I'm currently writing my thesis, so I have little time to prepare one... :(

In August I will be able to prepare one...

I think you will immediately see the symmetry-axis effect when you set up a bubble with very low pressure on the axis of an axisymmetric mesh. In the very first timestep, where the bubble wall starts to accelerate, you find that the two cells where the interface hits the axis accelerate faster. It is hidden in the following timesteps because I think you cannot set a threshold for the U-field magnitude in ParaView (the two cells are only slightly faster, and the range of the U magnitude over the mesh is much larger). Unfortunately I cannot show you this, because I didn't save a screenshot. But I can show you another freaky example:

Once I decomposed with scotch, and at some point some processor boundaries were parallel to the bubble wall. When the bubble passed the processor patch, droplets were formed. Maybe I can put the screenshot here. Ah yes, here's an attachment. The left picture is at t=8.18393e-5, the right at t=8.33299e-5. In between, the bubble collapsed further and the interface (= "bubble wall") passed the scotch processor patch.

ma-tri-x October 17, 2015 15:57

Solved
 
Quote:

Originally Posted by akimbrell (Post 330781)

Hi!

After years of running various versions of OpenFOAM in parallel, I finally found the bug: have a look at your controlDict. Do you use adjustTimeStep? Then look into the several processorN directories of your run and compare the times. Are they the same? Probably NOT. This is because there are as many CoNums (Courant numbers) as there are processors. You have to explicitly take the worst (maximum) CoNum over ALL processors, so that every processor ends up with the same, smallest admissible time step. So go into the sources of your solver
Code:

cd $WM_PROJECT_DIR/applications/solvers/path-to-solver
and edit the file
Code:

CourantNo.H
from
Code:

    CoNum = max(SfUfbyDelta/mesh.magSf()).value()*deltaT;
to
Code:

    CoNum = max(SfUfbyDelta/mesh.magSf()).value()*deltaT;
    // take the worst Courant number over all processors, so every
    // processor computes the same (smallest admissible) time step
    reduce(CoNum, maxOp<scalar>());

If you are also using the acousticCoNum, you have to do the same for it.
Now run
Code:

./Allwmake
and you'll be fine forever. I think.
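For completeness, the controlDict entries this is about are the adjustable time-step ones; a minimal sketch with purely illustrative values:
Code:

adjustTimeStep  yes;
maxCo           0.3;
maxDeltaT       1e-05;

With these set, every processor recomputes deltaT from its own CoNum, which is why the reduce above is needed to keep them in sync.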

wyldckat October 17, 2015 16:17

Hi ma-tri-x,

Many thanks for sharing this solution!
I've taken a look at how the latest OpenFOAM versions deal with this: they calculate the Courant number differently and keep the value consistent across all processors.
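For reference, the calculation in recent versions is along these lines (a sketch based on the standard CourantNo.H used by the incompressible solvers; details vary between solvers):
Code:

scalarField sumPhi
(
    fvc::surfaceSum(mag(phi))().internalField()
);

// gMax performs the reduction over all processors, so every
// processor ends up with the same global Courant number
CoNum = 0.5*gMax(sumPhi/mesh.V().field())*runTime.deltaTValue();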

I've checked, and this issue currently only occurs in foam-extend (and their old OpenFOAM 1.6-ext). Please report it at their bug tracker here: http://sourceforge.net/p/openfoam-ex...extendrelease/

Best regards,
Bruno

Edit: Since 7 days have passed, I've reported this here, so that it wouldn't be lost: https://sourceforge.net/p/openfoam-e...ndrelease/295/

anandsudhi November 3, 2015 00:44

Hi

It would be a huge help if you could explain the extra modification made to linearUpwindV to account for the direction of the field. I understand there is a new variable 'maxCorr' in the code, but I am not able to understand its implementation. What is the purpose of this modification?

Also, does this modification make linearUpwindV bounded without the need to specify limited gradients?

Any comments will be very much appreciated.

Thank you

Anand

ma-tri-x November 4, 2015 10:20

Quote:

Edit: Since 7 days have passed, I've reported this here, so that it wouldn't be lost: https://sourceforge.net/p/openfoam-e...ndrelease/295/
Hi Bruno!
Sorry, I didn't receive an email that you had replied to my post. Thanks a lot for reporting this "bug"! Strangely enough, I only observed this in some special cases. In other cases the time steps were synchronized, but it may have occurred that some cores just waited for the others to complete intermediate time steps, and afterwards these intermediate ones were deleted again. It seems that deep inside the source code something is not working properly (because the same version of OpenMPI sometimes works and sometimes shows this bug).

@Anand: I don't have a clue about the upwind thing. You would have to ask one of the previous posters. I suggested earlier that it doesn't have to do with linearUpwind, since I didn't use that scheme and got the bug as well... at least it looked similar.

Kind regards,
Max

CarlesCQL April 12, 2016 05:49

3 Attachment(s)
Hi everyone!

I'm having a similar problem with the processor boundaries while trying to simulate a Trombe wall with chtMultiRegionFoam on 6 processors. The pictures show the velocity in the air region (inside the building and surrounding the Trombe wall). The first one is the initial condition I used, mapped from a chtMultiRegionSimpleFoam parallel run. In the second one, the strange behaviour at the processor boundaries shows up.

It seemed that this was a problem with the 1.6-ext version, but I'm using 2.4 and have not seen the inconsistency in the time directories. The multiRegionHeater tutorial runs OK in parallel.

In my case this problem only occurs in the transient run and in the fluid region.

Any idea? Thanks!

