CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (https://www.cfd-online.com/Forums/openfoam-solving/)
-   -   Compressor Simulation using rhoPimpleDyMFoam (https://www.cfd-online.com/Forums/openfoam-solving/143340-compressor-simulation-using-rhopimpledymfoam.html)

RodriguezFatz November 11, 2014 04:39

Hey jetfire, maybe we can speed up this a little. If you want that,
1) post some log output (one time step is enough)
2) how do you decompose?
3) post your current solver settings (fvSolution)

Tobi November 11, 2014 04:58

Quote:

Originally Posted by Jetfire (Post 518251)
Hi,
Sorry for the late reply, I was not at the workstation.

For the rhoMin/rhoMax, with reference to the air properties at atmospheric pressure in the link http://www.engineeringtoolbox.com/ai...ies-d_156.html,
I think my rhoMin/rhoMax should not exceed the limits 0.524/2.793. But looking at the output there are time steps deviating from this, for example:
Code:

CALCULATED density
rhoEqn max/min : 2.15688 0.182153 . .
GAMG:  Solving for p, Initial residual = 1.49854e-09, Final residual = 1.49854e-09, No Iterations 0 
ALL CELLS WHICH ARE NOT WITHIN THE DENSITY INTERVAL THAT YOU SET ARE BOUNDED AND THEN:
rho max/min : 2 0.5

Why is this happening? Any idea?

Please notice that at 300,000 rpm your pressure is not 1 bar ...
The density changes due to pressure too! Can you check which pressure you will have in between your blades? I don't know which model you are using, but the ideal gas law is:

p \cdot V = m \cdot R_s \cdot T

\rho = \frac{m}{V} = \frac{p}{R_s \cdot T}

which means the density is proportional to

\rho \sim \frac{p}{T}
  • p constant and T increases --> rho decreases (like your table), and vice versa
  • p increases and T constant --> rho increases, and vice versa
  • p and T both change --> it depends on the ratio


As I expect, your rotor produces a strong partial vacuum -> very small densities!
That is the reason (in my opinion) why your density is in that range. Therefore you should decrease your rhoMin to 0.1 or so.
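As a rough plausibility check (my own numbers, assuming dry air with R_s ≈ 287 J/(kg K)): at p = 0.5 bar and T = 300 K the ideal gas law gives

\rho = \frac{p}{R_s \cdot T} = \frac{50000}{287 \cdot 300} \approx 0.58\ \mathrm{kg/m^3}

so densities well below the 1-bar table values are plausible once the rotor pulls a partial vacuum.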

vasava November 11, 2014 05:02

@Tobi: Thanks for the clarification.

Jetfire November 11, 2014 05:24

@Tobi,


Thanks for the tips to speed up my simulation; I will try implementing them and let you know the results.

Can you message me your email ID? I will send my output log file to you. I cannot post it here as it exceeds the maximum file size.

Tobi November 11, 2014 05:29

Quote:

Originally Posted by RodriguezFatz (Post 518485)
Hey jetfire, maybe we can speed up this a little. If you want that,
1) post some log output (one time step is enough)
2) how do you decompose?
3) post your current solver settings (fvSolution)

It is sufficient if you post only the last time step with all PIMPLE loops, as Philipp said. As you see, we both had the same thoughts :)

Jetfire November 11, 2014 05:36

1 Attachment(s)
Quote:

Originally Posted by RodriguezFatz (Post 518485)
Hey jetfire, maybe we can speed up this a little. If you want that,
1) post some log output (one time step is enough)
2) how do you decompose?
3) post your current solver settings (fvSolution)


1. Please find the output for 2 time steps in the attachments.

2. I am using hierarchical decomposition with 8 cores.
Code:

numberOfSubdomains 8;

method          hierarchical;

hierarchicalCoeffs
{
    n              (2 2 2);
    delta          0.001;
    order          xyz;
}

3. fvSolution
Code:

/*--------------------------------*- C++ -*----------------------------------*\
| =========                |                                                |
| \\      /  F ield        | OpenFOAM: The Open Source CFD Toolbox          |
|  \\    /  O peration    | Version:  2.3.0                                |
|  \\  /    A nd          | Web:      www.OpenFOAM.org                      |
|    \\/    M anipulation  |                                                |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version    2.0;
    format      ascii;
    class      dictionary;
    location    "system";
    object      fvSolution;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

solvers
{
    p
    {
        solver          GAMG;
        smoother        GaussSeidel;
        cacheAgglomeration on;
        agglomerator    faceAreaPair;
        nCellsInCoarsestLevel 100;
        mergeLevels      1;
        tolerance      1e-06;
        relTol          0.01;
    }

    pFinal
    {
        $p;
        relTol          0;
    }

    pcorr
    {
        $p;
        tolerance      1e-2;
        relTol          0;
    }

    "(rho|U|h)"
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance      1e-06;
        relTol          0.1;
    }

    "(rho|U|h)Final"
    {
        $U;
        relTol          0;
    }

    "(k|epsilon|omega)"
    {
    solver        PBiCG;
    preconditioner    DILU;
    tolerance    1e-10;
    relTol        0.1;
    }

    "(k|epsilon|omega)Final"
    {
    $k;
    relTol        0;
    }

}

PIMPLE
{
    momentumPredictor  yes;
    transonic          no;
    nOuterCorrectors    100;
    nCorrectors        2;
    nNonOrthogonalCorrectors 1;
    turbOnFinalIterOnly        false;

    rhoMin          rhoMin [ 1 -3 0 0 0 ] 0.1;
    rhoMax          rhoMax [ 1 -3 0 0 0 ] 2.5;

    residualControl
    {
        "(U|k|omega)"
        {
            tolerance 1e-05;
            relTol          0;
                }

        p
                {
            tolerance 1e-04;
            relTol          0;
                }
    }
}

relaxationFactors
{
    fields
    {
    p    0.3;
    pFinal    1;
    }
    equations
    {
      "(U|h|k|epsilon|omega)"        0.4;
      "(U|h|k|epsilon|omega)Final"      1;
    }
}


// ************************************************************************* //


Tobi November 11, 2014 05:40

Quote:

Originally Posted by Jetfire (Post 518504)

2. I am using hierarchical decomposition with 8 cores.
Code:

numberOfSubdomains 8;

method          hierarchical;

hierarchicalCoeffs
{
    n              (2 2 2);
    delta          0.001;
    order          xyz;
}


That is not useful.
Please show us the output of your decomposition:
Code:

decomposePar > log
In your case, copy your project, decompose it again, store the output and post it here. After that you can remove the copied folder again.
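Something like this (just a sketch; I assume the case folder is called Trial_run2, adjust as needed):
Code:

cp -r Trial_run2 Trial_run2_copy
cd Trial_run2_copy
decomposePar > log.decomposePar 2>&1

Then post log.decomposePar here; afterwards you can delete the copied folder.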

fvSolution:
Code:

tolerance for U|h|rho -> 1e-9;
relTol 0.05;

nCorrectors = 1

residual control p-> 1e-5;
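Spelled out in fvSolution syntax, those suggestions would look roughly like this (only a sketch of the entries to change; everything else stays as in your file):
Code:

    "(rho|U|h)"
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-09;
        relTol          0.05;
    }

    // and in the PIMPLE sub-dictionary:
    nCorrectors     1;

    residualControl
    {
        p
        {
            tolerance   1e-05;
            relTol      0;
        }
    }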

What bothers me a bit right now, though, is your mass conservation:
Code:

time step continuity errors : sum local = 4.538409924e-06, global = -4.522927178e-06, cumulative = -0.002405985855
How many time steps have you calculated so far?
Please check your logfile with pyFoam (pyFoamPlotWatcher).
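For example (assuming PyFoam is installed and the solver output was redirected to a file called log):
Code:

pyFoamPlotWatcher.py log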

Additionally, you can check your mesh Courant number if you insert "checkMeshCourantNo true;" into the PIMPLE dictionary of your fvSolution. I think it is possible to increase your maxCo to 3.
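Roughly like this (a sketch, assuming adjustTimeStep is already enabled in your controlDict; maxCo 3 is just the value suggested above):
Code:

// system/controlDict
adjustTimeStep  yes;
maxCo           3;

// system/fvSolution, PIMPLE sub-dictionary
checkMeshCourantNo  true;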

Jetfire November 11, 2014 05:42

I have the output of decomposePar in my terminal; here it is:
Code:

/*---------------------------------------------------------------------------*\
| =========                |                                                |
| \\      /  F ield        | OpenFOAM: The Open Source CFD Toolbox          |
|  \\    /  O peration    | Version:  2.3.0                                |
|  \\  /    A nd          | Web:      www.OpenFOAM.org                      |
|    \\/    M anipulation  |                                                |
\*---------------------------------------------------------------------------*/
Build  : 2.3.0-f5222ca19ce6
Exec  : decomposePar
Date  : Nov 10 2014
Time  : 16:45:40
Host  : "EAT-Standalone"
PID    : 8606
Case  : /home/eatin/OpenFOAM/eatin-2.3.0/run/tutorials/TurboCharger/Trial_run2
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time



Decomposing mesh region0

Create mesh

Calculating distribution of cells
Selecting decompositionMethod hierarchical

Finished decomposition in 4.13 s

Calculating original mesh data

Distributing cells to processors

Distributing faces to processors

Distributing points to processors

Constructing processor meshes

Processor 0
    Number of cells = 828661
    Number of faces shared with processor 1 = 24114
    Number of faces shared with processor 2 = 10498
    Number of faces shared with processor 4 = 10485
    Number of processor patches = 3
    Number of processor faces = 45097
    Number of boundary faces = 67775

Processor 1
    Number of cells = 828661
    Number of faces shared with processor 0 = 24114
    Number of faces shared with processor 2 = 1298
    Number of faces shared with processor 3 = 6458
    Number of faces shared with processor 4 = 1879
    Number of faces shared with processor 5 = 6472
    Number of processor patches = 5
    Number of processor faces = 40221
    Number of boundary faces = 80623

Processor 2
    Number of cells = 828661
    Number of faces shared with processor 0 = 10498
    Number of faces shared with processor 1 = 1298
    Number of faces shared with processor 3 = 17656
    Number of faces shared with processor 4 = 4197
    Number of faces shared with processor 6 = 12647
    Number of faces shared with processor 7 = 715
    Number of processor patches = 6
    Number of processor faces = 47011
    Number of boundary faces = 62623

Processor 3
    Number of cells = 828661
    Number of faces shared with processor 1 = 6458
    Number of faces shared with processor 2 = 17656
    Number of faces shared with processor 7 = 6219
    Number of processor patches = 3
    Number of processor faces = 30333
    Number of boundary faces = 72641

Processor 4
    Number of cells = 828661
    Number of faces shared with processor 0 = 10485
    Number of faces shared with processor 1 = 1879
    Number of faces shared with processor 2 = 4197
    Number of faces shared with processor 5 = 15088
    Number of faces shared with processor 6 = 13676
    Number of processor patches = 5
    Number of processor faces = 45325
    Number of boundary faces = 81249

Processor 5
    Number of cells = 828661
    Number of faces shared with processor 1 = 6472
    Number of faces shared with processor 4 = 15088
    Number of faces shared with processor 6 = 2022
    Number of faces shared with processor 7 = 6375
    Number of processor patches = 4
    Number of processor faces = 29957
    Number of boundary faces = 78931

Processor 6
    Number of cells = 828661
    Number of faces shared with processor 2 = 12647
    Number of faces shared with processor 4 = 13676
    Number of faces shared with processor 5 = 2022
    Number of faces shared with processor 7 = 16593
    Number of processor patches = 4
    Number of processor faces = 44938
    Number of boundary faces = 70372

Processor 7
    Number of cells = 828661
    Number of faces shared with processor 2 = 715
    Number of faces shared with processor 3 = 6219
    Number of faces shared with processor 5 = 6375
    Number of faces shared with processor 6 = 16593
    Number of processor patches = 4
    Number of processor faces = 29902
    Number of boundary faces = 75444

Number of processor faces = 156392
Max number of cells = 828661 (0% above average 828661)
Max number of processor patches = 6 (41.17647059% above average 4.25)
Max number of faces between processors = 47011 (20.2388869% above average 39098)

Time = 0

Processor 0: field transfer
Processor 1: field transfer
Processor 2: field transfer
Processor 3: field transfer
Processor 4: field transfer
Processor 5: field transfer
Processor 6: field transfer
Processor 7: field transfer

End.


Tobi November 11, 2014 05:53

Hi,

Code:

Processor 1
    Number of cells = 828661
    Number of faces shared with processor 0 = 24114
    Number of faces shared with processor 2 = 1298
    Number of faces shared with processor 3 = 6458
    Number of faces shared with processor 4 = 1879
    Number of faces shared with processor 5 = 6472
    Number of processor patches = 5
    Number of processor faces = 40221
    Number of boundary faces = 80623

Processor 2
    Number of cells = 828661
    Number of faces shared with processor 0 = 10498
    Number of faces shared with processor 1 = 1298
    Number of faces shared with processor 3 = 17656
    Number of faces shared with processor 4 = 4197
    Number of faces shared with processor 6 = 12647
    Number of faces shared with processor 7 = 715

    Number of processor patches = 6
    Number of processor faces = 47011
    Number of boundary faces = 62623


Processor 7
    Number of cells = 828661
    Number of faces shared with processor 2 = 715
    Number of faces shared with processor 3 = 6219
    Number of faces shared with processor 5 = 6375
    Number of faces shared with processor 6 = 16593
    Number of processor patches = 4
    Number of processor faces = 29902
    Number of boundary faces = 75444

That is not well decomposed. As you can see, it is not really balanced. Another strategy would be much better: decompose your case only in the x direction (if x is the length of your pump).

Additionally, after decomposing, renumber the mesh (see the sketch below)!
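For example, splitting only along x with hierarchical (just a sketch with my assumed numbers; keep 8 subdomains if you stay on 8 cores):
Code:

numberOfSubdomains 8;

method          hierarchical;

hierarchicalCoeffs
{
    n               (8 1 1);
    delta           0.001;
    order           xyz;
}

and for the renumbering after decomposition:
Code:

mpirun -np 8 renumberMesh -overwrite -parallel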

Jetfire November 11, 2014 06:03

@Tobi

Here is my complete domain

http://www.cfd-online.com/Forums/ope...tml#post515511

Since there is a large number of elements in all three directions, I have used '(2 2 2)' to get 8 subdomains. A similar decomposition is used in the propeller tutorial, which resembles my case. Please see the domain and let me know whether I have to change the decomposition approach or the subdomains.

RodriguezFatz November 11, 2014 06:41

Jetfire, did you try some other decomposition method? I had a very simple pipe flow and thought it was a clever idea to use "simple" decomposition. It showed a low number of shared faces and all that, but for some reason it was slower than just using "scotch" without any additional settings. You can just try some different methods and write down the different execution times. For such long simulations it is worth experimenting a bit at the beginning.
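For reference, scotch needs nothing beyond this in decomposeParDict (a minimal example):
Code:

numberOfSubdomains 8;

method          scotch;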

Just a general question: why is every time step converged to insanity? I mean, do you really get different results for these kinds of problems with 30 PIMPLE loops compared to, let's say, a 3-times-smaller time step and the PISO solver (thus 1 outer iteration per time step)? Your Courant number is close to "1" anyway, so for the stability of PISO only a slightly smaller time step would be needed. All the LES guys use PISO... is this that much different from what you are doing?

Also: why does the first pressure corrector take 60 iterations? To me, this looks like something is going utterly wrong. I think each linear solve should not take more than a few iterations. Maybe this is again due to your solver, but can someone please elaborate on this?

RodriguezFatz November 11, 2014 06:48

Another point is: did you try different settings for the GAMG solver? I did this for a case of mine and found out that playing with "mergeLevels" decreased the simulation time (in my case "2" was the best). Also, changing the pre-, post- and finest-sweep settings changed a lot:
Code:

  "(p|pFinal)"
        {
                solver          GAMG;
                tolerance        1e-12;
                relTol          0.1;

                maxIter          100;

                smoother        GaussSeidel;

                nPreSweeps      1;
                nPostSweeps      1;
                nFinestSweeps    2;

                cacheAgglomeration true;

                nCellsInCoarsestLevel 400;
                agglomerator    faceAreaPair;
                mergeLevels      2;

        }

This was the best for me.

Tobi November 11, 2014 07:23

Quote:

Originally Posted by RodriguezFatz (Post 518520)
Just a general question: why is every time step converged to insanity? I mean, do you really get different results for these kinds of problems with 30 PIMPLE loops compared to, let's say, a 3-times-smaller time step and the PISO solver (thus 1 outer iteration per time step)? Your Courant number is close to "1" anyway, so for the stability of PISO only a slightly smaller time step would be needed. All the LES guys use PISO... is this that much different from what you are doing?

I did not get the point. Co can be increased to 2 during the simulation (if it is set in the controlDict); you can also set it to 3 or 4 to get bigger time steps (depending on your relaxation factors as well). But why should he use PISO? It would be much more expensive and maybe not stable.

Quote:

Also: why does the first pressure corrector take 60 iterations? To me, this looks like something is going utterly wrong. I think each linear solve should not take more than a few iterations. Maybe this is again due to your solver, but can someone please elaborate on this?
I don't know which time step iteration is used, but at the beginning of such cases it could be normal. Sometimes I also get 100 iterations for the pressure equation. It depends on the system, and I never had 300,000 rpm.

If you get 1000 iterations, something is wrong.
In the pressure calculation (in my experience) it can occur. But it could also be that the boundary conditions are wrong for this problem. However, I would first have to look at the calculation procedure in that solver, and there is no time for that now.

Quote:

Since there is a large number of elements in all three directions, I have used '(2 2 2)' to get 8 subdomains. A similar decomposition is used in the propeller tutorial, which resembles my case. Please see the domain and let me know whether I have to change the decomposition approach or the subdomains.
My hints for you:
  • move your mesh so that the long pipe is collinear with the z-axis (transformPoints)
  • then use manual decomposition
  • if it is too complex, try scotch and have a look at the resulting meshes
A sketch of the first two hints is given below.
@Philipp, here in our institute we get the best results with simple or hierarchical instead of scotch. The gain was roughly 13% because scotch decomposed our mesh in a very strange way.
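A sketch of the first two hints (my example values; I assume the pipe currently runs along x and should end up along z, and that you keep 8 cores):
Code:

transformPoints -rotate "((1 0 0) (0 0 1))"

and then, using the simple method as one way to split along a single axis, in decomposeParDict:
Code:

numberOfSubdomains 8;

method          simple;

simpleCoeffs
{
    n               (1 1 8);
    delta           0.001;
}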

RodriguezFatz November 11, 2014 07:29

Quote:

Originally Posted by Tobi (Post 518525)
I did not get the point. Co can be increased to 2 during the simulation (if it is set in the controlDict); you can also set it to 3 or 4 to get bigger time steps (depending on your relaxation factors as well). But why should he use PISO? It would be much more expensive and maybe not stable.

Ok, the point was: Even with Co=3 or 4 and like 25 outer iterations I don't see a reason to use PIMPLE. In such cases, one PIMPLE time step would be like 25 PISO time steps, right? So, if you can reduce the time step by a factor of - let's say 8 - to get a Co of 0.5 and use PISO, you still have less computational time to spend. And you have a much finer time resolution.

Or is this just a matter of initialization? Does the number of PIMPLE loops decrease drastically after a few time steps?

Tobi November 11, 2014 07:47

Quote:

Originally Posted by RodriguezFatz (Post 518527)
Ok, the point was: Even with Co=3 or 4 and like 25 outer iterations I don't see a reason to use PIMPLE. In such cases, one PIMPLE time step would be like 25 PISO time steps, right?

Is there a paper that says 25 PISO steps = 1 PIMPLE loop (with 25 iterations)? In my opinion it is not true, because you cannot compare the algorithms directly.

Quote:

So, if you can reduce the time step by a factor of - let's say 8 - to get a Co of 0.5 and use PISO, you still have less computational time to spend. And you have a much finer time resolution.
You are right, the time resolution would be better. But you have to check what is necessary in your case. Let's say you want to simulate 1 s, your mesh is big and the system is very stiff. Then you have to set Co to (let's say) 0.3 to be stable, and you end up with a dT of 1e-9, which is very bad in that case. Also, if there is one cell which makes trouble, this cell will reduce the whole time step, maybe to 1e-11, until the run crashes. With relaxation you can handle this better.

  • well, you are right, you are more accurate in time
  • but if some problem cells reduce the time step to 1e-11, that is bad
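To make the time-step argument concrete (hypothetical numbers, not taken from this case): with the usual estimate

\Delta t \approx Co \cdot \frac{\Delta x}{|U|}

a cell size of 3e-5 m, a local velocity of 300 m/s and Co = 0.3 give \Delta t \approx 3 \cdot 10^{-8} s; a single cell that is ten times smaller or sees ten times the velocity pulls the global time step down by another order of magnitude.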
Quote:

Or is this just a matter of initialization? Does the number of PIMPLE loops decrease drastically after a few time steps?
It is definitely also an initialization and boundary problem. If the boundaries are badly defined or wrong, every algorithm will blow up or crash. The outer iterations will decrease over the time steps. Sometimes I have around 80 loops at the beginning, and after a few time steps I get 8 - 10, while my time step is much bigger than with PISO. The computational cost decreases and the linear system is stable due to under-relaxation.

If there were no advantage of PIMPLE compared to PISO, nobody would use PIMPLE.

RodriguezFatz November 11, 2014 07:56

Quote:

Originally Posted by Tobi (Post 518525)
If there were no advantage of PIMPLE compared to PISO, nobody would use PIMPLE.

I got the feeling that he doesn't use the advantage of PIMPLE if he iterates more often than the number of PISO steps he would actually need if he reduced the time step. This is just an educated guess, I never tried it.

Tobi November 11, 2014 08:56

He can also try something like this:

  • nOuterCorrectors 3;
  • nCorrectors 2;

Then the first two outer iterations are relaxed and the last one runs without relaxation. But this could give stability problems, because it is then relatively similar to PISO, just with two relaxed iterations. I am not sure whether this case works without relaxation. He can also switch back to PISO, but then he should use maxCo 0.1 - 0.7, not 1.
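In fvSolution terms that would be roughly (a sketch; the relaxation factors are only an example, the point being that the Final entries are 1 so the last outer iteration runs unrelaxed):
Code:

PIMPLE
{
    nOuterCorrectors    3;
    nCorrectors         2;
    nNonOrthogonalCorrectors 1;
}

relaxationFactors
{
    fields
    {
        p               0.3;
        pFinal          1;
    }
    equations
    {
        "(U|h|k|epsilon|omega)"       0.4;
        "(U|h|k|epsilon|omega)Final"  1;
    }
}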

I can not test the case because I do not have it (:

Tobi November 11, 2014 10:26

Question: are you using the boundary conditions which you attached in this post: http://www.cfd-online.com/Forums/ope...tml#post516087

The reason I ask is the U file; if it is like that, you have an error which could be the reason for the pcorr iterations.


Quote:

Originally Posted by RodriguezFatz (Post 518530)
I got the feeling that he doesn't use the advantage of PIMPLE if he iterates more often than the number of PISO steps he would actually need if he reduced the time step. This is just an educated guess, I never tried it.

That could be possible. I will check whether I can find some papers about that.

Jetfire November 11, 2014 23:06

1 Attachment(s)
Hi,

Bad news: the simulation crashed after a few time steps, showing the same error as posted earlier. Check the output in the attachments.


  • Is this problem related to my thermophysicalProperties? Should I change my thermophysical settings?

Jetfire November 11, 2014 23:12

1 Attachment(s)
Quote:

Originally Posted by Tobi (Post 518564)
Question: are you using the boundary conditions which you attached in this post: http://www.cfd-online.com/Forums/ope...tml#post516087

The reason I ask is the U file; if it is like that, you have an error which could be the reason for the pcorr iterations.




That could be possible. I will check whether I can find some papers about that.

@Tobi, do not refer to those boundary conditions; those were from the annular thermal mixer tutorial. I am attaching my final 0 folder for your reference; do let me know if you need any other details.

