Help to reduce 3D computation time?

October 15, 2009, 12:38   #1
lth (Member), Madison, WI, USA
Dear Foam Community,

Can someone give some insight on this matter?

I am running a 3D simulation and the solver is taking forever to compute, so I tried switching to the GAMG solver, but the run times are not changing.

Do I need to fix my blockMeshDict when choosing GAMG as a solver/preconditioner?

I do not have a cluster and am working from a 4-processor workstation.

Sincerely, Lori

Here is my fvSolution:
____________________________________
solvers
{
    p GAMG
    {
        preconditioner        FDIC;
        mergeLevels           1;
        smoother              GaussSeidel;
        agglomerator          faceAreaPair;
        nCellsInCoarsestLevel 100;
        tolerance             1e-07;
        relTol                0;
    };

    U GAMG
    {
        preconditioner        DILU;
        mergeLevels           1;
        smoother              GaussSeidel;
        agglomerator          faceAreaPair;
        nCellsInCoarsestLevel 100;
        tolerance             1e-06;
        relTol                0;
    };

    taufirst PBiCG
    {
        preconditioner        DILU;
        tolerance             1e-06;
        relTol                0;
    };
}

PISO
{
    momentumPredictor        yes;
    nCorrectors              2;
    nNonOrthogonalCorrectors 1;
    pRefCell                 0;
    pRefValue                0;
}

relaxationFactors
{
    p        0.3;
    U        0.5;
    taufirst 0.3;
}
______________________________________________
fvSchemes
______________________________________________
ddtSchemes
{
    default CrankNicholson 1;
}

gradSchemes
{
    default leastSquares;
    grad(p) Gauss linear;
    grad(U) Gauss linear;
}

divSchemes
{
    default           none;
    div(phi,U)        Gauss upwind;
    div(phi,taufirst) Gauss upwind;
    div(tau)          Gauss linear;
}

laplacianSchemes
{
    default                   none;
    laplacian(etaPEff,U)      Gauss linear corrected;
    laplacian(etaPEff+etaS,U) Gauss linear corrected;
    laplacian((1|A(U)),p)     Gauss linear corrected;
}

interpolationSchemes
{
    default           linear;
    interpolate(HbyA) linear;
}

snGradSchemes
{
    default corrected;
}

fluxRequired
{
    default no;
    p;
}
_______________________________________________

October 16, 2009, 05:41   #2
Mads Reck (Senior Member), Copenhagen, Denmark
Hi.

"forever" is relative :-) CFD simulations can take from a few seconds to days, months and even years to complete, so we need some figures here to see if yours "forever" is too long :-)

Some input, and I don't know if it is any news or use to you: initial conditions and the mesh severely affect how numerical solutions behave. If you haven't tried it, and it makes sense in your case (it depends on what you are doing), you could initialize the domain with potentialFoam and also take a look at the mesh skewness.
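For example (just a sketch, run from the case directory; the exact potentialFoam options can differ between OpenFOAM versions):
____________________________________
# report mesh quality, including skewness and non-orthogonality
checkMesh

# solve a potential-flow problem and write the resulting velocity field
# (and, with -writep, a pressure field) into the start-time directory,
# to be used as an initial guess for the real solver
potentialFoam -writep
____________________________________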

It seems that you are doing a transient simulation, so maybe you could ease the simulation in by running with a reduced time step first and then increasing it, in case you have some violent startup effects in the flow.
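For instance, something like this in controlDict (hypothetical values, assuming a fixed time step):
____________________________________
// start with a small time step to get past the startup transients
deltaT          1e-05;

// later, stop the run, increase deltaT (e.g. to 1e-04) and restart
// from the latest written time directory
startFrom       latestTime;
____________________________________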

One could argue that your relaxation parameter for p is low, but for some simulations it's high. If you increase it, you might get quicker convergence - or divergence and numerical explosions (which I guess is the same thing :-))

The tolerance on p is also strict if you are "only" running an engineering problem and do not need scientific/academic precision.

All of these suggestions may be completely off for your case, but if you haven't dug into them yet, they might be worth a look.

/Mads
__________________
Online free airfoil-mesher for OpenFOAM here

October 16, 2009, 05:51   #3
Flavio Galeazzo (Member), Karlsruhe, Germany
Hello lth,

I agree with Mads that "forever" is very relative. What can help us estimate whether you are having problems with your setup, or whether the problem is simply too big for your computational resources, is the following:

- How big is your grid, and what type of elements are used
- The solver you are using
- The numerical scheme (you already gave us that)
- Your hardware configuration: type and number of CPUs, and amount of memory

Regards,

flga

October 16, 2009, 10:50   #4
lth (Member), Madison, WI, USA
Dear Mads and Flavio,

First off, thank you for taking the time to look at this with me.
"Forever" is long, sorry: in 3D it is taking about 3 to 6 weeks for a single run on a single processor to reach steady state (2D took about 10 minutes to 3 hours). I did not know about potentialFoam and will look into it, thanks, and I will dig into some of the other suggestions. I was hoping GAMG would work better based on the OpenFOAM User Guide suggestions, but wondered whether I was using the solver incorrectly.

1. I will attach the blockMesh file below; I use hex elements.
2. The physical solver is viscoelasticFluidFoam, written by Jovani Favero. The linear solvers are GAMG and PBiCG, with PISO for the pressure-velocity coupling. My first 3D runs used the linear solver settings attached below, which seem faster than GAMG?
3. The numerical schemes are in the fvSchemes file posted above.
4. Ubuntu 8.10 (Intrepid), 3.2 GiB of memory, Intel Core 2 Quad Q9550 @ 2.83 GHz (4 cores).

Let me know if I did not answer correctly.
Thank you, Lori



First 3D run solver settings (fvSolution):
__________________________________
solvers
{
    p PCG
    {
        preconditioner DIC;
        tolerance      1e-07;
        relTol         0;
    };

    U PBiCG
    {
        preconditioner DILU;
        tolerance      1e-06;
        relTol         0;
    };

    taufirst PBiCG
    {
        preconditioner DILU;
        tolerance      1e-06;
        relTol         0;
    };
}

PISO
{
    momentumPredictor        yes;
    nCorrectors              2;
    nNonOrthogonalCorrectors 1;
    pRefCell                 0;
    pRefValue                0;
}

relaxationFactors
{
    p        0.3;
    U        0.5;
    taufirst 0.3;
}
_________________________________________


3D blockMesh file:
________________________________________________________________

convertToMeters 0.0032;

vertices
(
(0 0 0) //0
(80 0 0) //1***
(0 1 0) //2
(80 1 0) //3***
(0 2.5 0)//4
(80 2.5 0)//5
(0 4 0) //6
(80 4 0) //7
(130 0 3.75)//8
(130 1 3.75) //9
(0 0 10) //10
(80 0 10)//11
(0 1 10) //12
(80 1 10)//13
(0 2.5 10)//14
(80 2.5 10)//15
(0 4 10) //16
(80 4 10) //17
(130 0 6.25) //18
(130 1 6.25) //19
(80 0 6.25) //20
(80 1 6.25) //21
(80 2.5 6.25)//22
(0 0 6.25) //23
(0 1 6.25) //24
(0 2.5 6.25) //25
(0 4 6.25) //26
(80 4 6.25) //27
//*** indent front face 09/20/09
(80 0 3.75) //28
(80 1 3.75) //29
(0 1 3.75) //30
(0 0 3.75) //31
(0 2.5 3.75) //32
(0 4 3.75) //33
(80 4 3.75) //34
(80 2.5 3.75) //35


);

blocks
(

hex (23 20 21 24 10 11 13 12) (150 30 30) simpleGrading (0.002 0.3 1)
hex (31 28 29 30 23 20 21 24) (150 30 20) simpleGrading (0.002 0.3 1)
hex (0 1 3 2 31 28 29 30) (150 30 30) simpleGrading (0.002 0.3 1)
hex (2 3 5 4 30 29 35 32) (150 30 30) simpleGrading (0.002 4 1)
hex (30 29 35 32 24 21 22 25) (150 30 20) simpleGrading (0.002 4 1)
hex (24 21 22 25 12 13 15 14) (150 30 30) simpleGrading (0.002 4 1)
hex (4 5 7 6 32 35 34 33) (150 30 30) simpleGrading (0.002 0.3 1)
hex (32 35 34 33 25 22 27 26) (150 30 20) simpleGrading (0.002 0.3 1)
hex (25 22 27 26 14 15 17 16) (150 30 30) simpleGrading (0.002 0.3 1)
hex (28 8 9 29 20 18 19 21) (90 30 20) simpleGrading (500 0.3 1)
);

edges
(
);

patches
(
patch inlet
(
(0 2 30 31) //0
(31 30 24 23)
(23 24 12 10) //2
(2 4 32 30)
(30 32 25 24) //4
(24 25 14 12)
(4 6 33 32) //6
(32 33 26 25)
(25 26 16 14) //8
)
wall fixedWalls
(
(6 33 34 7) //top of wall R upstream
(33 26 27 34) //top of wall M upstream
(26 16 17 27) //top of wall L upstream
(1 3 29 28) //contract R btm wall
(3 5 35 29) //contract R mid wall
(5 7 34 35) //contract R top wall
(29 35 22 21) //contract M mid wall
(35 34 27 22) //contract M top wall
(20 21 13 11) //contract L btm wall
(21 22 15 13) //contract L mid wall
(22 27 17 15) //contract L top wall
(29 21 19 9) //top of wall downstream

)
patch outlet
(
(8 9 19 18) //9
)
symmetryPlane simetry
(
(0 1 28 31) //btm sym R upstream
(31 28 20 23) //btm sym M upstream
(23 20 11 10) //btm sym L upstream

(28 8 18 20) //btm sym downstream
)
wall frontAndBack
(
(0 2 3 1) //btm frontface upstream
(2 4 5 3) //mid frontface upstream
(4 6 7 5) //top frontface upstream
(28 29 9 8) //btm frontface downstream
(10 12 13 11) //btm backface upstream
(12 14 15 13) //mid backface upstream
(14 16 17 15) //top backface upstream
(20 21 19 18) //btm backface downstream
)
);

mergePatchPairs
(
);


// ************************************************************************* //


October 19, 2009, 02:59   #5
Mads Reck (Senior Member), Copenhagen, Denmark
Hi Lori.

I am not familiar with the viscoelasticFluidFoam solver, but more than a month for a roughly 1 million cell case (if I am counting right) seems like a long time. Obviously, this also depends on the case you are trying to solve.

Actually, I would have guessed that a viscoelastic case exhibits quite rigid behaviour and converges quickly (?), but that does not seem to be the case here. I am sorry that I can't be of more help.

/Mads
__________________
Online free airfoil-mesher for OpenFOAM here

October 20, 2009, 03:10   #6
Matthew Philpott (New Member), Belgium
If you're running a solver over multiple CPUs, don't you have to decompose the mesh into as many pieces as there are CPUs and then solve each piece on its own CPU? I'm not sure if this applies to your situation or not. If you use the process manager in Ubuntu (System Monitor, the one that shows CPU usage), does it show that all CPUs are being used during solving?

Have a look in the manual at decomposePar and mpirun, under the heading "Running applications in parallel".
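Something like this (just a sketch for 4 cores, assuming the solver is called viscoelasticFluidFoam; check the user guide for the exact decomposeParDict keywords of your OpenFOAM version):
____________________________________
// system/decomposeParDict
numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n           (2 2 1);   // split the domain into 2 x 2 x 1 pieces
    delta       0.001;
}

distributed     no;
roots           ();
____________________________________
and then, from the case directory:
____________________________________
decomposePar
mpirun -np 4 viscoelasticFluidFoam -parallel > log &
reconstructPar    # merge the processor* directories back together when the run is done
____________________________________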
__________________
CAELinux 2009 + OF1.5
Ubuntu 9.04 x64 (jaunty jackalope) + OF1.6

October 21, 2009, 12:13   #7
lth (Member), Madison, WI, USA
Dear Mads,

These are dilute polymers, so they do not behave as rigid bodies. The constitutive equations are highly non-linear due to their convective stress terms. Thank you still for taking the time to look.

Dear Bigred,

Yes, I should look into running in parallel on this machine; to date I have been running up to 4 separate cases on these processors. Good point, though. I still believe that a multigrid method is the cheapest way to go in terms of computation time for any 3D viscoelastic case. I would like to be able to do both.

Thank you, Lori

October 23, 2009, 05:19   #8
Flavio Galeazzo (Member), Karlsruhe, Germany
Hello lth,

Sorry for the late reply. I hope I can still help.

You have a grid of over 1 million elements; it will take quite some time to converge on only one processor. As an example, my grids are larger (6 million, tetra), and running on 12 processors (3 x quad core) it takes 40 hours to converge using a solver based on simpleFoam.

You can take a look at the convergence behavior of the pressure correction; in my case that is what takes most of the time. I am using GAMG for the pressure, and more standard solvers for the other variables. My fvSolution looks like this:

solvers
{
    p GAMG
    {
        tolerance             1e-08;
        relTol                0.01;
        smoother              GaussSeidel;
        cacheAgglomeration    true;
        nCellsInCoarsestLevel 1200;
        agglomerator          faceAreaPair;
        mergeLevels           1;
    };

    U PBiCG
    {
        preconditioner DILU;
        tolerance      1e-07;
        relTol         0.1;
    };

    F PBiCG
    {
        preconditioner DILU;
        tolerance      1e-07;
        relTol         0.1;
    };

    k PBiCG
    {
        preconditioner DILU;
        tolerance      1e-07;
        relTol         0.1;
    };

    epsilon PBiCG
    {
        preconditioner DILU;
        tolerance      1e-07;
        relTol         0.1;
    };
}

SIMPLE
{
    nNonOrthogonalCorrectors 1;
}

relaxationFactors
{
    p       0.3;
    U       0.7;
    F       0.9;
    k       0.4;
    epsilon 0.4;
}

Regards,

flga

March 16, 2011, 09:06   #9
Daniel WEI (老魏) (Senior Member), Beijing, China
Hi flga,

Do you know the rule for setting nCellsInCoarsestLevel?

My case is an external incompressible flow using pisoFoam: a 5 million cell grid on 80 CPUs.
(I am taking your advice to aim for each time step running in about one second.)

Thanks
__________________
~
Daniel WEI
-------------
Boeing Research & Technology - China
Beijing, China

March 29, 2011, 04:05   #10
Flavio Galeazzo (Member), Karlsruhe, Germany
Hello lakeat,

I have learned (after posting my fvSolution file in this thread) that the number you specify in nCellsInCoarsestLevel is for each partition, and not for the whole domain as I had thought. So I am using nCellsInCoarsestLevel = 100 in my new simulations.

80 CPUs for 5 million nodes seems adequate to me.
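For reference, the corresponding GAMG entry would now look something like this (same settings as in my post above, only nCellsInCoarsestLevel changed):
____________________________________
p GAMG
{
    tolerance             1e-08;
    relTol                0.01;
    smoother              GaussSeidel;
    cacheAgglomeration    true;
    nCellsInCoarsestLevel 100;    // per partition, not for the whole domain
    agglomerator          faceAreaPair;
    mergeLevels           1;
};
____________________________________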

March 29, 2011, 09:20   #11
Daniel WEI (老魏) (Senior Member), Beijing, China
Quote (Originally Posted by flavio_galeazzo):
Hello lakeat,

I have learned (after posting my fvSolution file in this thread) that the number you specify in nCellsInCoarsestLevel is for each partition, and not for the whole domain as I had thought. So I am using nCellsInCoarsestLevel = 100 in my new simulations.

80 CPUs for 5 million nodes seems adequate to me.

Thanks, this is exactly what I am using
__________________
~
Daniel WEI
-------------
Boeing Research & Technology - China
Beijing, China
