CFD Online Discussion Forums > OpenFOAM > Network hardware used in clusters

kar May 25, 2008 08:32

Hello, I'm interested in what kind of networking people use for effective parallel CFD computing. When is Cat5 wiring plus switches sufficient, and when is something faster, like InfiniBand, necessary?


bastil May 25, 2008 10:23

It is strongly application- and code-dependent. As a rule of thumb:

For calculations on more than 16 CPUs, "normal" Gigabit Ethernet is mostly too slow. It also depends on the number of cores per node, etc.


kar May 25, 2008 15:41

You mean a >16-core machine? Otherwise it makes no sense to me: a switch should be able to handle all of its connections properly, shouldn't it?

bastil May 25, 2008 16:35

I do not understand what you mean. Of course a switch can handle it all. However, if you decompose a CFD case into more than about 16 parts, the communication overhead grows non-linearly, and for cases with that much communication the network will definitely be the first speed bottleneck. This is measured as speedup: running a case on one core gives a speedup of one; running it on e.g. 8 cores has a theoretical speedup of 8, but in practice you always get less. With Gigabit Ethernet you will generally not get much faster going from 16 to 32 or 64 cores (this is of course case- and architecture-dependent), whereas a faster interconnect (e.g. InfiniBand) will still give further speedup when you go from 16 to 32 parts. That is what I wanted to say.

All of this also depends on the number of cores per CPU and CPUs per node. The numbers above are for typical nodes with 2 CPUs and 2 cores per CPU. I do not know too much about nodes with more cores.
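
To make the flattening concrete, here is a minimal sketch of this kind of speedup model, assuming per-step time = compute/n_parts plus a fixed halo-exchange cost. All constants (compute time, halo size, latency, bandwidth, neighbour count) are illustrative placeholders, not measurements of any real cluster:

Code:

# Toy speedup model: serial compute split over n_parts, plus a
# per-step communication cost of latency + halo_size / bandwidth
# per neighbouring partition. All numbers below are made up for
# illustration only.

def speedup(n_parts, t_compute=10.0, halo_mb=2.0,
            latency_s=50e-6, bandwidth_mb_s=120.0, n_neighbours=6):
    """Estimated speedup for a case decomposed into n_parts partitions."""
    if n_parts == 1:
        t_comm = 0.0  # single partition: nothing to exchange
    else:
        t_comm = n_neighbours * (latency_s + halo_mb / bandwidth_mb_s)
    t_step = t_compute / n_parts + t_comm
    return t_compute / t_step

for n in (1, 8, 16, 32, 64):
    gige = speedup(n)                                       # GigE-like link
    ib = speedup(n, latency_s=2e-6, bandwidth_mb_s=1500.0)  # InfiniBand-like
    print(f"{n:3d} parts: GigE ~{gige:5.1f}x, InfiniBand ~{ib:5.1f}x")

Under these made-up numbers the GigE curve falls well behind between 16 and 64 parts while the faster link keeps scaling, which matches the rule of thumb above.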

kar May 26, 2008 04:18

So the story is about the per-timestep compute time compared with the time needed to exchange boundary values. A Gigabit network can have two speed problems: too little transfer speed (bandwidth) and too much latency. The finer you decompose the case, the shorter the compute time per timestep becomes, and speedup drops off if the network is slower than inter-core communication.
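
A back-of-envelope sketch of those two problems, modelling one boundary exchange as time = latency + message_bytes / bandwidth; the GigE and InfiniBand figures are rough era-typical ballparks, not vendor specifications:

Code:

# Compare one boundary-exchange transfer over two assumed links.
# Small messages are dominated by latency, large ones by bandwidth.

links = {
    "GigE":       {"latency_s": 50e-6, "bandwidth_B_s": 110e6},
    "InfiniBand": {"latency_s": 2e-6,  "bandwidth_B_s": 1.5e9},
}

for size in (1_000, 100_000, 10_000_000):   # bytes per boundary message
    for name, p in links.items():
        t_us = (p["latency_s"] + size / p["bandwidth_B_s"]) * 1e6
        print(f"{size:>10,d} B over {name:<10s}: ~{t_us:9.1f} us")

So a solver doing many small per-timestep exchanges hits the latency wall long before it saturates the wire, which is why InfiniBand's microsecond latency matters as much as its raw bandwidth.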

Just curious: how much do those InfiniBand NICs cost? And a ~30-port switch?

msrinath80 May 26, 2008 16:46

And please don't forget to factor in the memory bandwidth bottleneck when using multi-core CPUs. The more cores that share memory bandwidth, the worse the speedup (even if fast on-chip core interconnects are used).
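
As a hypothetical illustration of that effect, some quick arithmetic for a fully bandwidth-bound solver on a single node; both bandwidth figures are assumed values, not measurements:

Code:

# If a solver is memory-bandwidth bound, the cores on one node split
# the node's total bandwidth between them. Illustrative numbers for a
# ~2008-era two-socket node; adjust to your own hardware.

node_bandwidth_gb_s = 10.0  # total memory bandwidth of the node (assumed)
core_demand_gb_s = 4.0      # bandwidth one core could consume alone (assumed)

for cores in (1, 2, 4, 8):
    per_core = min(core_demand_gb_s, node_bandwidth_gb_s / cores)
    speedup = cores * per_core / core_demand_gb_s
    print(f"{cores} cores: ~{per_core:.2f} GB/s each -> ~{speedup:.2f}x speedup")

With these assumed numbers the node caps out at ~2.5x no matter how many cores you add, because the memory system, not the core count, sets the limit.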
