
OpenFOAM version 1.6 details

August 23, 2009, 09:54   #21
Alberto Passalacqua (Senior Member)
Quote:
Originally Posted by sandy View Post
Hi Alberto, if I set pdRefCell and pdRefValue, is it still an ill-posed problem? I feel lost. I am currently using Neumann BCs for the pressure, namely zeroGradient, as it is called in the literature. Thank you very much.
That's what you do for example in the cavity tutorial in OpenFOAM. Setting a pressure reference allows you to find a unique solution for the pressure.
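For reference, a minimal sketch of how that reference is set in the PISO sub-dictionary of system/fvSolution (pRefCell/pRefValue, or pdRefCell/pdRefValue for solvers working with pd; the numbers below are just tutorial-style defaults, adapt them to your case):

Code:
PISO
{
    nCorrectors              2;
    nNonOrthogonalCorrectors 0;

    // Pressure reference: pin the pressure in one cell so the Poisson
    // equation with all-Neumann BCs has a unique solution.
    pRefCell                 0;    // reference cell index (example value)
    pRefValue                0;    // pressure value assigned in that cell (example value)
}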

Best,
__________________
Alberto Passalacqua

GeekoCFD - A free distribution based on openSUSE 64 bit with CFD tools, including OpenFOAM. Available in both physical and virtual formats (current status: http://albertopassalacqua.com/?p=1541)
OpenQBMM - An open-source implementation of quadrature-based moment methods.

To obtain more accurate answers, please specify the version of OpenFOAM you are using.

August 23, 2009, 10:15   #22
Sandy Lee (Senior Member)
Quote:
Originally Posted by lakeat View Post
Hi, Sandy, There are advisors and advisors ...
Yes, lakeat, Alberto is what I said he is ...

August 23, 2009, 10:30   #23
Alberto Passalacqua (Senior Member)
Quote:
Originally Posted by sandy View Post
Yes, lakeat, Alberto is what I said he is ...
I'm not an advisor; it is not an easy job if you want to do it right.
I was very, very lucky during my studies, and now in my post-doc, to have good advisors though!

August 24, 2009, 07:45   #24
Sandy Lee (Senior Member)
Quote:
Originally Posted by lakeat View Post

I am using pisoFoam to simulate a cylinder flow...
...
The cylinder case I am solving has about 860,000 cells; I found 32 processors or even 64 processors ...
Hi lakeat, why do you need so many cells to simulate a cylinder flow? I once read a paper that used just 50,000 cells to simulate a hydrofoil with Kunz's cavitation model. How could they manage that? What do you think?

August 24, 2009, 07:54   #25
Alberto Passalacqua (Senior Member)
Hi Sandy,

I'm not sure how Lakeat is trying to simulate the flow, but if the Reynolds number is high enough (and it does not need to be very high), and he is doing LES, that number of cells does not seem huge to me. Keep in mind that in LES you must resolve scales until well inside the inertial subrange of the spectrum.
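As a rough, back-of-envelope aside (these are the standard textbook scaling estimates, e.g. Chapman's as discussed in Pope, not numbers from this thread), the required cell count climbs quickly with Reynolds number:

$$N_{\mathrm{DNS}} \sim Re_L^{9/4}, \qquad N_{\mathrm{LES,\ wall\text{-}resolved}} \sim Re_L^{1.8}, \qquad N_{\mathrm{LES,\ outer\ layer\ only}} \sim Re_L^{0.4}$$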

Best,

August 24, 2009, 08:02   #26
Sandy Lee (Senior Member)
Wow, even in 2D it still needs such a huge grid? If that is so, I had better take a good rest rather than work so hard.

August 24, 2009, 08:08   #27
Daniel WEI (老魏) / lakeat (Senior Member)
Quote:
Originally Posted by alberto View Post
Hi Sandy,

I'm not sure how Lakeat is trying to simulate the flow, but if the Reynolds number is high enough (and it does not need to be very high), and he is doing LES, that number of cells does not seem huge to me. Keep in mind that in LES you must resolve scales until well inside the inertial subrange of the spectrum.

Best,
Yes, I am performing an LES simulation; this is what my thesis focuses on.
I have never done an extensive survey, but I am not sure how well RANS would do for a cylinder flow.

Will RANS give Cd and Cl accurately enough?

Apparently, though, I should not expect good results from a 2D simulation: the energy cascade is completely wrong in 2D.

The Re (3900) is not high, but it already costs me a lot. This is LES.

I am expecting a much larger mesh for my next case at Re = 140,000; the cell count would be about 7 million.

You know, someone told me that for the "Bird's Nest" stadium in Beijing someone used just 80,000 cells for the simulation, and I was shocked: how could he manage that? How could he capture the turbulence content?
__________________
~
Daniel WEI
-------------
Boeing Research & Technology - China
Beijing, China

August 24, 2009, 08:24   #28
Alberto Passalacqua (Senior Member)
Quote:
Originally Posted by lakeat View Post
Apparently, though, I should not expect good results from a 2D simulation: the energy cascade is completely wrong in 2D.
In 2D you should not even do LES, since eddies are definitely 3D structures.

Quote:
The Re (3900) is not high, but it already costs me a lot. This is LES.
Re = 3900 (referred to the mean velocity and the diameter I assume) is not high at all, and it should be possible to do that in 3D with a very high resolution. Is your domain a simple cylinder (pipe)? Can you describe it with periodic conditions?

Quote:
You know, someone told me that for the "Bird's Nest" stadium in Beijing someone used just 80,000 cells for the simulation, and I was shocked: how could he manage that? How could he capture the turbulence content?
Modelling it in some way, probably, as done in many cases for practical applications. Does he claim he is doing LES? You could check what kind of resolution he gets with the simple estimation formulas provided in turbulence books (Pope, for example).

Best,

August 24, 2009, 09:17   #29
Daniel WEI (老魏) / lakeat (Senior Member)
First, I want to say thank you, Alberto, and also thank you Sandy.

Okay, let me make myself clearer,

The cylinder case mesh is an O-type mesh with Nx*Ny*Nz = 165*165*32 at Re = 3900; a published LES benchmark for this case already exists and can easily be followed.

To achieve Re = 3900, I simply set the inlet velocity to 1 m/s and the cylinder diameter to 1 m. The flow region is a circle with a diameter of 15 m, which means that, measured from the cylinder centre, the distances to the inlet and the outlet are both 7.5 m.

The time step is set to 0.0025 s to keep the Courant number no larger than unity.
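As a quick sanity check on that choice (assuming the relevant convective scale is the 1 m/s inlet velocity), the Courant number links the time step to the smallest cell size:

$$Co = \frac{|U|\,\Delta t}{\Delta x} \le 1 \quad\Rightarrow\quad \Delta x \ge |U|\,\Delta t = 1\ \mathrm{m/s} \times 0.0025\ \mathrm{s} = 2.5\ \mathrm{mm}$$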


Of course, this is full LES, a wall-resolved LES: a completely 3D simulation of a 2D geometry, as in a wind-tunnel section model. Periodic B.C.s are applied on the front and back (spanwise) faces; I do not use a convective outlet for the moment.

Then I need to consider:
1. To achieve the highest efficiency, how many CPUs do I need for this case?
2. To achieve the highest efficiency and accuracy, which solver should I use for p and for U?

For the 1st one, your experience is that 8~16 processors are enough, right?
For the 2nd one, you recommend GAMG for p and smoothSolver for U, right? (A sketch of such an fvSolution follows below.)
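For what it's worth, here is a minimal sketch of the corresponding solvers sub-dictionary in system/fvSolution (GAMG for p, a Gauss-Seidel smoothSolver for U), in the style of the standard OpenFOAM 1.6 LES tutorials; the tolerances are only placeholders, not values taken from this thread:

Code:
solvers
{
    p
    {
        solver                GAMG;        // geometric-algebraic multigrid for the pressure equation
        tolerance             1e-06;
        relTol                0.05;
        smoother              GaussSeidel;
        nPreSweeps            0;
        nPostSweeps           2;
        cacheAgglomeration    on;
        agglomerator          faceAreaPair;
        nCellsInCoarsestLevel 10;
        mergeLevels           1;
    }

    U
    {
        solver          smoothSolver;      // smoothed iterative solver for the momentum equations
        smoother        GaussSeidel;
        tolerance       1e-07;
        relTol          0;
        nSweeps         1;
    }
}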

Here are some findings, some of which are giving me a headache now.

No. 1
My teacher strongly doubts my simulation results; he said the simulation time is too long and unacceptable.
Code:
Time = 397.925

Courant Number mean: 0.0201989 max: 0.821767
smoothSolver:  Solving for Ux, Initial residual = 0.000263646, Final residual = 9.52965e-08, No Iterations 2
smoothSolver:  Solving for Uy, Initial residual = 0.00125789, Final residual = 4.36729e-07, No Iterations 2
smoothSolver:  Solving for Uz, Initial residual = 0.00347622, Final residual = 1.45703e-06, No Iterations 2
GAMG:  Solving for p, Initial residual = 0.0120711, Final residual = 0.000402602, No Iterations 2
time step continuity errors : sum local = 2.96528e-09, global = 7.05016e-12, cumulative = -3.29373e-09
GAMG:  Solving for p, Initial residual = 0.000504671, Final residual = 4.92814e-05, No Iterations 3
time step continuity errors : sum local = 3.62961e-10, global = 3.00537e-12, cumulative = -3.29072e-09
ExecutionTime = 164607 s  ClockTime = 167103 s

Time = 397.927

Courant Number mean: 0.0202005 max: 0.821096
smoothSolver:  Solving for Ux, Initial residual = 0.000263663, Final residual = 9.53374e-08, No Iterations 2
smoothSolver:  Solving for Uy, Initial residual = 0.00125653, Final residual = 4.36351e-07, No Iterations 2
smoothSolver:  Solving for Uz, Initial residual = 0.00347678, Final residual = 1.45956e-06, No Iterations 2
GAMG:  Solving for p, Initial residual = 0.0120541, Final residual = 0.000401538, No Iterations 2
time step continuity errors : sum local = 2.95737e-09, global = 6.11715e-12, cumulative = -3.2846e-09
GAMG:  Solving for p, Initial residual = 0.000502906, Final residual = 4.82624e-05, No Iterations 3
time step continuity errors : sum local = 3.5545e-10, global = 2.07349e-12, cumulative = -3.28253e-09
ExecutionTime = 164610 s  ClockTime = 167106 s
So what do you think, is there something wrong? Why is my simulation taking that long?

No. 2
Based on your experience, can you judge whether these figures are correct? (I used pyFoam to produce them.)
I mean the time spent.
[Attached plots: Graph3-m.jpg, Graph4-m.jpg]
Note also the question in the picture: do you also run into unstable simulation time per step (i.e. ExecutionTime) because the flow is unsteady?


Waiting for you,

August 24, 2009, 10:23   #30
Alberto Passalacqua (Senior Member)
Quote:
Originally Posted by lakeat View Post
First, I want to say thank you, Alberto, and also thank you Sandy.

Okay, let me make myself clearer,

The cylinder case mesh is an O-type mesh with Nx*Ny*Nz = 165*165*32 at Re = 3900; a published LES benchmark for this case already exists and can easily be followed.

To achieve Re = 3900, I simply set the inlet velocity to 1 m/s and the cylinder diameter to 1 m. The flow region is a circle with a diameter of 15 m, which means that, measured from the cylinder centre, the distances to the inlet and the outlet are both 7.5 m.
If you want a constant mean velocity of 1 m/s in your pipe, this is not correct, since the dissipation will slow your flow down. Furthermore, how can you fix the velocity at the inlet if the domain is periodic? You should use something like channelFoam (OpenFOAM 1.6) or channelOodles (OpenFOAM 1.5), where the pressure gradient is adjusted at each time step so that the mean flow rate is kept constant.

In addition, what is your initial condition? If you use a uniform, unperturbed condition, it is going to take a long time to actually develop turbulence structures. Eugene de Villiers published a code to initialize a perturbed flow in a cylinder: search the forum for it.

Moreover, the length of the system has to be large enough that the solution does not feel the effect of the periodic conditions (I would say this was checked in the original work).

Quote:
Of course, this is full LES, a wall-resolved LES: a completely 3D simulation of a 2D geometry, as in a wind-tunnel section model. Periodic B.C.s are applied on the front and back (spanwise) faces; I do not use a convective outlet for the moment.
Again, I don't get it. A cylinder has one wall BC and two sections as boundary conditions. If you specify periodic conditions in the flow direction, you cannot have any other form of inlet/outlet.

Quote:
For the 1st one, your experience is that 8~16 processors are enough, right?
Well, with ~800,000 cells and a single-phase flow, yes, definitely.

Quote:
My teacher strongly doubts my simulation results; he said the simulation time is too long and unacceptable.
Usually, if you start from a good initial condition, with fully developed turbulence, you need at least 20 characteristic times to extract statistics.
In other words, you initialize a perturbed flow as said above, you run until the flow is completely developed (and this takes time, a lot!), and then you reset the averages and start averaging for a sufficient number of characteristic times to get your statistics.
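As a hedged sketch, the averaging itself can be handled with the fieldAverage function object in system/controlDict (the keywords below follow the 1.6-era tutorials; check them against your version, and remove the *Mean/*Prime2Mean fields from the time directories when you want to reset the averages):

Code:
functions
{
    fieldAverage1
    {
        type                fieldAverage;
        functionObjectLibs  ("libfieldFunctionObjects.so");
        enabled             true;
        outputControl       outputTime;

        fields
        (
            U
            {
                mean        on;   // <U>
                prime2Mean  on;   // <u'u'> (resolved stresses)
                base        time;
            }
            p
            {
                mean        on;
                prime2Mean  on;
                base        time;
            }
        );
    }
}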

What does your teacher take as a reference to say the computational time is unacceptable?

P.S. What are you using as pressure residual?
Best,
A.

August 24, 2009, 10:26   #31
Sandy Lee (Senior Member)
I don't think GAMG should be chosen in parallel, because it is so complex. The U equation does not take much of your CPU time, whichever solver is chosen. The key is always the p equation.

August 24, 2009, 13:02   #32
Alberto Passalacqua (Senior Member)
Quote:
Originally Posted by sandy View Post
I don't think GAMG should be chosen in parallel, because it is so complex.
Are you stating GAMG solvers are suitable only for serial calculations? I would not think so.

August 24, 2009, 22:00   #33
Simon Lapointe (Member)
Hi guys,

I feel there is some miscommunication between you. From what I understood of Lakeat's problem (correct me if I'm wrong), he is simulating the external flow around a cylinder, not an internal pipe flow. When he says he uses periodic conditions at front and back I think he means at both ends of the cylinder.

Concerning Lakeat's concerns,

1) Many factors can affect your parallel performance.
First, it depends on the computer you're using. A poor connection between the computing nodes can really slow down the calculation, causing a big difference between ExecutionTime and ClockTime.
Second, if the connection is good, I've found that the OpenFOAM compilation can greatly influence the computational time. Are you using a pre-compiled OpenFOAM ? If you do, I strongly suggest you compile OpenFOAM locally on the computer you're using for your parallel computations, preferably with the local compiler (such as an Intel compiler).
Concerning the decomposition, a "simple" decomposition can be used on simple geometries. If you aren't sure how to decompose it, you should use MeTis (a decomposeParDict sketch is given below, after point 2).
From my personal experience of unsteady simulations of external flows, I've seen good speedup (linear or better) with as few as 10-15k cells per processor. But that depends on the computer and the case.

2) I've used the GAMG solver for the pressure equation in many parallel cases and I think it performs well, generally faster than PCG. I don't agree with sandy. How does the fact that it is more complex make it slower? Did you compare it with other solvers in parallel cases?
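Going back to the decomposition point in 1), here is a sketch of the system/decomposeParDict variants being compared (the coefficients are only examples; in OpenFOAM 1.6 scotch can be selected the same way):

Code:
numberOfSubdomains 16;

// Automatic graph-based decomposition (minimises processor faces):
method          metis;          // or: scotch (new in OpenFOAM 1.6)

// Manual geometric decomposition, for comparison:
//method        simple;

simpleCoeffs
{
    n           (4 2 2);        // subdomains in x, y, z (example split)
    delta       0.001;
}

metisCoeffs
{
    processorWeights ( );       // empty = equal weights (optional)
}

distributed     no;
roots           ( );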

Hope this helps

August 24, 2009, 22:53   #34
Alberto Passalacqua (Senior Member)
Quote:
Originally Posted by Simon Lapointe View Post
Hi guys,

I feel there is some miscommunication between you. From what I understood of Lakeat's problem (correct me if I'm wrong), he is simulating the external flow around a cylinder, not an internal pipe flow. When he says he uses periodic conditions at front and back I think he means at both ends of the cylinder.
Oops. My fault. I should have thought of that when he described his system. Sorry about the confusion; please ignore my comments about the pipe flow simulation, the initialization of a perturbed flow and so on.

About the domain decomposition, in the case of the flow around a cylinder, I agree with Simon when he suggests using METIS, since it becomes more complicated to do by hand. I'm a bit skeptical about 10,000 cells/processor. In my experience a good trade-off is something between 40,000 and 80,000 cells per processor (meaning core), especially if you are not in a hurry to get the results and you share resources with others ;-) For Daniel's ~860,000-cell mesh that works out to roughly 11 to 22 cores.

Quote:
2) I've used the GAMG solver for the pressure equation in many parallel cases and I think it performs well, generally faster than PCG. I don't agree with sandy. How does the fact that it is more complex make it slower? Did you compare it with other solvers in parallel cases?
I agree on GAMG: also in my experience it is generally faster than PCG. The only side note I have is that sometimes (I'm thinking of some multi-phase cases) PCG is more stable, while with GAMG I had crashes in some runs. However, this should not be the case for a simple single-phase flow.

Best,

August 24, 2009, 22:54   #35
Daniel WEI (老魏) / lakeat (Senior Member)
Quote:
Originally Posted by Simon Lapointe View Post
Hi guys,

I feel there is some miscommunication between you. From what I understood of Lakeat's problem (correct me if I'm wrong), he is simulating the external flow around a cylinder, not an internal pipe flow. When he says he uses periodic conditions at front and back I think he means at both ends of the cylinder.
Yes, I'm very sorry for the miscommunication: my case is not an internal flow but an external flow past a circular cylinder. And thank you, Simon, for your patience in reading such a long post.

Quote:
Concerning Lakeat's concerns,

1) Many factors can affect your parallel performance.
First, it depends on the computer you're using. A poor connection between the computing nodes can really slow down the calculation, causing a big difference between ExecutionTime and ClockTime.
"big difference", no, I did not see big difference between the two.
You can see from my last post:

Quote:
Time = 397.927
..................................
ExecutionTime = 164610 s ClockTime = 167106 s
ClockTime-ExecutionTime=167106-164610=2496, and 2496/164610~1.516%.

And 1.516% is not that big, right?

Quote:
A poor connection between the computing nodes
Yes, but this is hardly my case: the supercomputer I am using has 32 processors per node, and even different nodes are connected by so-called InfiniBand. So I guess this won't be a problem for me, right? But it raises a question:

Will different MPI implementations, like OpenMPI, MPICH or MVAPICH, make a big difference in speed? Say, when I decompose the case across several nodes, versus using just one node.

And I remember Wikipedia says:
Quote:
In computing, wall clock time is the actual time taken by a computer to complete a task. It is the sum of three terms: CPU time, I/O time, and the communication channel delay (e.g. if data are scattered on multiple machines). In contrast to CPU time, which measures only the time during which the processor is actively working on a certain task, wall time measures the total time for the process to complete. The difference between the two consists of time that passes due to programmed delays or waiting for resources to become available.

Quote:
Second, if the connection is good, I've found that the OpenFOAM compilation can greatly influence the computational time. Are you using a pre-compiled OpenFOAM ? If you do, I strongly suggest you compile OpenFOAM locally on the computer you're using for your parallel computations, preferably with the local compiler (such as an Intel compiler).
Really? In fact, I have compared the two: one is the pre-compiled version (I guess it was compiled on openSUSE), the other is one I compiled myself (the supercomputer runs SUSE), and so far I have not seen the locally compiled one perform better than the pre-compiled one. I have also recompiled OpenMPI, so I am not sure now why I did not see any improvement over the pre-compiled OpenFOAM. Is it because my recompiled OpenMPI did not work very well? I have no idea now.

Quote:
Concerning the decomposition, a "simple" decomposition can be used on simple geometries. If you aren't so sure how to decompose it, you should use MeTis.
I hope someone will clear this up: from the official website (new features of version 1.6), it seems scotch is clearly superior to metis; please correct me if I am wrong. Second, for an O-type grid, unless you decompose it into 4 regions (2*2*1), I still cannot see why simple would work better than the metis or scotch methods. I once decomposed the mesh into 8, 16 and 32 parts, and here are the output log files.

simple one:
Quote:
Number of processor faces = 102046
Max number of processor patches = 6
Max number of faces between processors = 10888
metis one:
Quote:
Number of processor faces = 68674
Max number of processor patches = 7
Max number of faces between processors = 5904
You can see there is quite a difference.

Quote:
From my personal experience of unsteady simulations of external flows, I've seen good speedup (linear or better) with as less as 10-15k cells per processor. But that depends on the computer and the case.
Okay, this experience is very helpful, you know.
And for a first try with my cylinder case mesh, 860,000/15,000 ≈ 57 processors?! Is this what you meant?


Quote:
I've used the GAMG solver for the pressure equation in many parallel cases and I think it performs well, generally faster than PCG. I don't agree with sandy. How does the fact that it is more complex make it slower? Did you compare it with other solvers in parallel cases?

Hope this helps
Yes, your information is very very helpful.

PS: please call me Daniel...

Thank you all.

August 24, 2009, 23:14   #36
Alberto Passalacqua (Senior Member)
Hi,

if you do not use a hardware-optimized implementation of the parallel libraries (HP, Intel, ...), you won't notice much difference switching between OpenMPI (which comes from LAM) and MPICH generic libraries. Performance mainly depends on how you compile them and your code.

Quote:
Originally Posted by lakeat View Post
I hope someone will clear this up: from the official website (new features of version 1.6), it seems scotch is clearly superior to metis; please correct me if I am wrong. Second, for an O-type grid, unless you decompose it into 4 regions (2*2*1), I still cannot see why simple would work better than the metis or scotch methods. I once decomposed the mesh into 8, 16 and 32 parts, and here are the output log files.
A "simple" decomposition works OK if the geometry is not complicated and the mesh is quite uniform. In this way you can decompose the domain by hand in quite a precise way and control the decomposition easily. If the domain is more complicated and the mesh not simple either, an automatic decomposition saves you a lot of time and effort, and gives better results.

Quote:
Okay, this experience is very helpful, you know.
And for a first try with my cylinder case mesh, 860,000/15,000 ≈ 57 processors?! Is this what you meant?
Hehe, your sysadmin and colleagues will be happy when you run bigger cases.

Best,

August 24, 2009, 23:37   #37
Simon Lapointe (Member)
Quote:
Originally Posted by lakeat View Post

Yes, but this is hardly my case: the supercomputer I am using has 32 processors per node, and even different nodes are connected by so-called InfiniBand. So I guess this won't be a problem for me, right? But it raises a question:

Will different MPI implementations, like OpenMPI, MPICH or MVAPICH, make a big difference in speed? Say, when I decompose the case across several nodes, versus using just one node.
I did not mean to say a poor interconnection was your problem, but I mentioned it because it is a common cause of slow parallel computations. Since it seems your system has a good interconnect this is probably not the case here.

Concerning the MPI implementations I'm not so sure, since with OpenFOAM I've only used OpenMPI. The best way to answer your question about the number of nodes is to try different combinations of nodes and processors per node. The best ratio will depend on the connection speed between the nodes, the connection speed between the processors and the usage of the processors on the different nodes you're using.

Quote:
Originally Posted by lakeat View Post
Really? In fact, I have compared the two: one is the pre-compiled version (I guess it was compiled on openSUSE), the other is one I compiled myself (the supercomputer runs SUSE), and so far I have not seen the locally compiled one perform better than the pre-compiled one. I have also recompiled OpenMPI, so I am not sure now why I did not see any improvement over the pre-compiled OpenFOAM. Is it because my recompiled OpenMPI did not work very well? I have no idea now.
In my case, it made a huge difference (note I'm using 1.5-dev). With the pre-compiled version, there was a small speedup at 4 or 8 processors, but far from a linear speedup. When using more than 8 processors, it was slower and the ClockTime was much larger than the CPU time. After OpenFOAM was compiled on the supercomputer with the Intel compiler, the speedup was much better.

Which compiler did you use? Are there different compilers available? You could try using a different compiler.

Quote:
Originally Posted by lakeat View Post
I hope someone will clear this up: from the official website (new features of version 1.6), it seems scotch is clearly superior to metis; please correct me if I am wrong. Second, for an O-type grid, unless you decompose it into 4 regions (2*2*1), I still cannot see why simple would work better than the metis or scotch methods.
I've never used Scotch so I can't comment on its performance. I didn't say simple would be better than metis or scotch, just that it works well on simple meshes. I suggest you try both MeTis and Scotch and compare your results; others would be interested in such a comparison. Where did you see that Scotch was "obviously superior to metis"?

Quote:
Originally Posted by lakeat View Post
Okay, this experience is very helpful, you know.
And for a first try with my cylinder case mesh, 860,000/15,000 ≈ 57 processors?! Is this what you meant?
Yes, this is what I meant. I've split meshes of 750k to 1M cells across 48 or 64 processors and the speedup was very good. But this is very dependent on the computer used and the case, so I can't promise you anything... Also, as Alberto said, unless the usage of your system is low, it is much more reasonable to use a smaller number of processors (maybe 50k cells/processor). This way you'll still get decent speed, and you can keep the higher processor counts for cases that really need them.

August 25, 2009, 00:21   #38
Sandy Lee (Senior Member)
Quote:
Originally Posted by alberto View Post
Are you stating GAMG solvers are suitable only for serial calculations? I would not think so.
Hi Alberto, I really appreciate your patient and repeated guidance.

In addition, maybe I will try GAMG for the p equation. In fact, GAMG is solved with ICCG or Block-ICCG on the coarse grids (why not BiCG? Are all the matrices symmetric on the coarse grids? Who can explain it?), and maybe with Gauss-Seidel on the fine grid? So it should not be difficult to understand why this method may be faster than PCG.

Wow, really? Can GAMG only be used to solve a symmetric matrix, namely the p equation? I just discovered this!

If so, what kind of multigrid method can be used to solve the asymmetric matrices in OpenFOAM?

@Daniel: sorry, I actually know nothing about parallel computing.

August 25, 2009, 05:13   #39
Fabian Braennstroem (Senior Member)
Hi Daniel,

another way to accelerate the flow development is to map the fluctuations of a 'boxTurb'-generated velocity field onto your initial RANS velocities.

Fabian

August 25, 2009, 08:51   #40
Alberto Passalacqua (Senior Member)
Quote:
Originally Posted by sandy View Post
Hi Alberto, I really appreciate your patient and repeated guidance.

Wow, really? Can GAMG only be used to solve a symmetric matrix, namely the p equation? I just discovered this!
There is no such restriction. Take a look at this document, where in one example (page 15) GAMG is also used for U: http://www.tfd.chalmers.se/~hani/kur...report-fin.pdf
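For illustration, a hedged sketch of what a GAMG entry for U (an asymmetric matrix) can look like in system/fvSolution; the settings below are generic examples in the style of the 1.6 tutorials, not values taken from that report:

Code:
U
{
    solver                GAMG;
    smoother              GaussSeidel;   // smoother that also handles asymmetric matrices
    tolerance             1e-06;
    relTol                0;
    nPreSweeps            0;
    nPostSweeps           2;
    cacheAgglomeration    on;
    agglomerator          faceAreaPair;
    nCellsInCoarsestLevel 10;
    mergeLevels           1;
}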

Best,