CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (https://www.cfd-online.com/Forums/openfoam-solving/)
-   -   What kind of computer is needed to simulate with OpenFoam (https://www.cfd-online.com/Forums/openfoam-solving/59106-what-kind-computer-needed-simulate-openfoam.html)

anita February 20, 2008 06:45

Hi,

I am new to CFD. What kind of computer do I need to run OpenFOAM?

At the moment I am using a dual-core computer. I want to simulate a Kármán vortex street. I started in 2D with about 25,000 cells; it needed 18 hours to simulate 500 ms.
Later I want to simulate in 3D, so I need to do something to reduce the calculation time.

Is my mesh too fine, or is my computer too weak?

What is your experience? What do you use for your calculations?

lr103476 February 20, 2008 07:04

Welcome to CFD! First of all, doing CFD on unsteady cases is relatively expensive computationally, at least if you opt for accuracy.

18 hours is reasonable for such a simulation, and 25,000 cells is actually too small for a vortex street, at least if you take a sufficiently large domain. Your domain boundaries need to be at least 10-20 characteristic body lengths away from the body; otherwise the boundary conditions may influence the flow near the body. You have to investigate that, as it is specific to your problem.

If you are dealing with large meshes, which is the case in 3D, you will need to run OpenFOAM on a parallel Linux cluster, which is quite easy. Of course, you should have access to a cluster somewhere.
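
For reference, the usual OpenFOAM parallel workflow looks roughly like this (a minimal sketch; the exact mpirun syntax differs between OpenFOAM and MPI versions, and icoFoam is only a placeholder for whatever solver you use):

    // system/decomposeParDict: split the mesh into 4 subdomains
    numberOfSubdomains 4;
    method simple;          // simple geometric decomposition
    simpleCoeffs
    {
        n     (2 2 1);      // 2 x 2 x 1 split of the domain
        delta 0.001;
    }

Then, from the case directory:

    decomposePar                     # decompose mesh and fields
    mpirun -np 4 icoFoam -parallel   # run the solver on 4 processes
    reconstructPar                   # merge the results for post-processing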

In the CFD book by Perić and Ferziger you will find lots of useful information to get a good start with CFD.

Regards, Frank

connclark February 20, 2008 12:09

A phrase from the movie Mad Max says "Speed is just a matter of money." This holds true in CFD.

I am just a newbie in CFD, but this is what I have found out so far about hardware.

CPU:
Dual-core machines are better than single-core machines, and quad-core is better than dual-core, but you get diminishing returns: a dual-core machine may be only about 20% faster than a single-core machine, because the two cores fight for memory bandwidth. A dual-core CPU with a larger L2 cache will mitigate the bus contention a little, but don't expect more than 1-2 percent extra performance from it. A machine with two physical processors tends to do much better than a dual-core machine, because such systems usually have a NUMA memory architecture in which each processor has its own memory bus.
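
If you want to check what memory system a Linux box actually has, and stop a run from migrating between cores, something like the following should work (assuming the numactl and taskset utilities are installed; icoFoam is just a placeholder for your solver):

    numactl --hardware      # list the NUMA nodes and the memory attached to each
    taskset -c 0 icoFoam    # pin a serial run to core 0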


MEMORY:
Fast memory is important because CFD codes churn through large amounts of data at each time step, and the CPU spends a lot of time waiting for data from RAM. A lot of memory also helps when dealing with 3D. God help you if you have to resort to virtual memory.
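
A quick way to see whether a run has started hitting swap (at which point performance falls off a cliff), assuming a standard Linux install:

    free -m     # watch the swap "used" column while the solver runs
    vmstat 5    # nonzero si/so columns mean you are paging to disk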

CLUSTERING:
You're probably going to get more bang for your buck buying two or more slightly slower machines with fast network connections than one monstrously fast machine. You probably want one machine in the cluster to have more processing power and memory than the others, to do the visualization on.
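
With Open MPI, the two-machine version of this looks roughly as follows (a sketch; the hostnames are made up, the case must already be decomposed into 8 subdomains, and icoFoam again stands in for your solver):

    # machines.txt
    node1 slots=4
    node2 slots=4

    mpirun --hostfile machines.txt -np 8 icoFoam -parallel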

vtk_fan February 20, 2008 12:31

The above response is mostly correct in my experience. If one analyzes the assembly code of some CFD codes based on unstructured meshes, the load/store operations outnumber the computation instructions, assuming there are no transcendentals, which are very expensive to compute. However, some of the older codes built on Cartesian or structured meshes scale less badly, since memory accesses are much more likely to be found in cache than with unstructured codes.

Unstructured codes try to get around this with "renumbering", which is in reality a memory-bandwidth optimization for the coefficient matrix of the linear-system solver, but on average it can still be much less optimal than a structured mesh, where the numbering is implicit and usually leads to a banded coefficient matrix.
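
As an aside, OpenFOAM ships a utility that applies exactly this kind of bandwidth-reducing renumbering to an existing mesh (availability and flags depend on your version):

    renumberMesh -overwrite    # renumber cells to compress the matrix bandwidth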

Intel's upcoming 80-core chip works around this problem by giving each core a dedicated path to main memory, which leads to very little contention and hence good scaling.

I would recommend a rack system, which should be available from vendors like Aberdeen, providing a set of CPU boards each with its own processors and fans. This, plus a NAS, should provide better value for both the storage and computation needs.

caw February 20, 2008 12:51

I would like to throw in some benchmark results for an interFoam case (a small one at 50k cells, which makes it difficult for scale-up tests).

Machine 1:
2x Intel quad-core X5482, 3.2 GHz
16 GB RAM, FSB 1600

Machine 2:
2x Intel quad-core E5345, 2.33 GHz
16 GB RAM, FSB 1333

Machine 3 (an old one ;-) ):
2x AMD Opteron single-core, 2.4 GHz

Test case: transient simulation of five seconds of real time; parallel grid distribution with the simple method: (2 1 1), (2 2 1), (2 2 2).

Runtime results:

#cores   m1 (sec)   m2 (sec)   m3 (sec)
1        565        1056       1261
2        275        565        -
4        213        375        -
8        109        205        -


For machine 1 I did a comparison of the influence of the grid distribution on runtime, because the scale-up from 2 to 4 cores is quite bad. Here is a quick cross-check using METIS:
2 cores, simple: 275 sec
2 cores, metis:  275 sec
4 cores, simple: 213 sec !!!
4 cores, metis:  172 sec !!!
8 cores, simple: 109 sec
8 cores, metis:  110 sec
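
For anyone who wants to repeat this comparison: switching the decomposition is a one-line change in system/decomposeParDict (whether metis is available depends on how your OpenFOAM was built):

    numberOfSubdomains 4;
    method metis;       // instead of "method simple;" with simpleCoeffs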

Conclusions: the new FSB 1600 Intel quad-core is a very fast CPU, but scale-up is still the main issue. Nothing unexpected here.
But again, the test case used is very demanding due to the small mesh. I will run some more benchmarks with larger grids and post the results next week.

regards
christian

guillaume February 20, 2008 14:57

To dwell on this subject: I am interested in whether you know of any affordable compute-on-demand offers for small-to-medium simulations (~10-100 CPU hours).

I found http://www.tsunamictechnologies.com/services.htm at $0.77/core/hour and http://www.amazon.com/b/ref=sc_fe_l_2?node=201590011 at $0.10 per hour. The Amazon EC2 "compute units" are defined in a very fuzzy way, and are probably not too powerful.

Of course, if most of you are affiliated with universities, you may not be interested. But has anybody evaluated one of the above or other services?

Guillaume

anita February 21, 2008 02:27

Thank you for your help.

So it is not a good idea to use a coarser mesh to save time. I already tried that and got different results (the pressure and velocity are smaller).
But I want to investigate the pressure at the body.

Do you think the domain is large enough?
The characteristic body length is 7.5 mm. Between the inlet and the body there are 30 mm, and between the body and the outlet about 100 mm. Beside the body are symmetryPlanes; later I want to change these to walls and investigate the influence of different wall distances. At the moment the distance between the symmetryPlanes is 80 mm.
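
Expressed in body lengths (D = 7.5 mm), and assuming the body sits halfway between the symmetryPlanes, that is:

    upstream:   30 / 7.5  =  4 D
    downstream: 100 / 7.5 ≈ 13 D
    lateral:    40 / 7.5  ≈  5 D to each symmetryPlane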

Would using an LES solver reduce the time?


Anita

vtk_fan February 21, 2008 04:35

I would not subscribe to Amazon EC2. The problem is that each compute unit is actually a Xen virtual instance, and it is hard to get a dedicated slice of CPU time on the machine running that instance. Also, each instance is only about 1/6th as powerful as the latest 3.2 GHz Intel Core 2 CPU, so costs add up quickly on that misleading 10c/hour.

For parallel computing this becomes especially bad, since the CPU slices available to the Xen virtual instances are not synchronized in general, given that some other user may have another instance on the same physical machine. So communication stalls in unexpected ways, leading to very poor and unpredictable scaling.

Amazon EC2 is really meant for web services, like a load-balanced set of web servers that can be scaled up depending on the number of transactions requested. For compute-intensive tasks, you should ensure that the physical machines are dedicated to that task.

I have not evaluated Tsunamic, but you may want to enquire with them about the above issues before sinking in any money. I do know they have a minimum number of hours that must be purchased (400-500 hours).

connclark February 21, 2008 11:25

Anita,

Correct: a coarse mesh trades accuracy for a shorter run time. From what I have learned so far, using the right type of mesh and choosing a good mesh density is one of the most important steps towards getting good results.
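
One cheap sanity check is to run the same case on two or three successively finer meshes and see whether the quantity you care about (the pressure on the body) stops changing. Schematically (the case names are made up, and icoFoam is a placeholder for your solver):

    for case in mesh_coarse mesh_medium mesh_fine
    do
        ( cd "$case" && blockMesh && icoFoam > log 2>&1 )
    done
    # then compare the body pressure between the three runs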

I leave it to the more experienced members here to guide you on that and the domain size.

LES solvers are newer and are more compute- and resource-intensive, i.e. they take more time and use more RAM.

