CFD clusters + bonded GigE?

#1 - September 17, 2006, 14:49 - Joe (Guest)
CFD clusters + bonded GigE?

Does anyone here have any experience with improving CFD cluster performance by linking multiple multi-core boxes with dual bonded GigE links, as opposed to the traditional single-GigE-link approach?

Theoretically, bonding two GigE links into a single logical link should ~double throughput if set up correctly ...

Myrinet would be nice, but it's expensive.


#2 - September 18, 2006, 04:05 - andy (Guest)
Re: CFD clusters + bonded GigE?

I briefly used channel bonding with fast ethernet and dual Pentium IIs or IIIs a few years ago on a small cluster without a switch. It worked fine.

I have not considered using it with our gigabit ethernet cluster because, for our application and our hardware, the performance gain versus the cost of adding non-blocking switches, cables and ethernet cards is worse than that of adding more nodes. Your situation may be different.

> Theoretically bonding two GigE links into a single logical link should
> ~double throughput if set up correctly ...

My experience with PC hardware is to purchase only what you have seen demonstrated to work. Far too much that should work properly doesn't. For a cluster, I have found that getting access to a couple of nodes to check out is usually not a problem.
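As a rough illustration of the sort of check I mean, here is a minimal point-to-point TCP throughput test (a sketch only; the port number and transfer size are arbitrary placeholders). Run it across the bonded pair before committing to a rack full of them, and compare against a single link:

```python
# Minimal point-to-point TCP throughput check (illustrative sketch only).
# Start "python nettest.py server" on one node, then
# "python nettest.py client <server-ip>" on the other.
import socket
import sys
import time

PORT = 5001            # arbitrary test port
CHUNK = 1 << 20        # 1 MiB per send/recv call
TOTAL = 1 << 30        # move 1 GiB in total

def server():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("", PORT))
    s.listen(1)
    conn, _addr = s.accept()
    received = 0
    while received < TOTAL:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    conn.close()

def client(host):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, PORT))
    payload = b"x" * CHUNK
    sent = 0
    start = time.time()
    while sent < TOTAL:
        s.sendall(payload)
        sent += len(payload)
    elapsed = time.time() - start
    s.close()
    # One GigE link tops out around 110-120 MB/s of payload; depending on the
    # bonding mode, the second link may only show up with parallel streams.
    print("%.1f MB/s" % (sent / elapsed / 1e6))

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```

If the bonded pair does not clearly beat a single link in a crude test like this, it is unlikely to do so inside a solver.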


#3 - September 18, 2006, 05:47 - Joe (Guest)
Re: CFD clusters + bonded GigE?

Much obliged.

I should have been a bit more specific about the CPU hardware: linking multiple dual-core boxes with single GigE links seems to scale well to 8-16 cores. However, I have heard some anecdotal views that linking multiple quad-core or octa-core boxes (i.e. two dual-core chips or two quad-core chips per box) through a single GigE link is problematic. This seems logical given the theoretical doubling/quadrupling of the required data throughput per box.
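To put crude numbers on that reasoning, here is a back-of-envelope sketch; the idea that each core wants a fixed slice of off-box bandwidth is purely an illustrative assumption:

```python
# Back-of-envelope: the share of a single GigE NIC available to each core
# shrinks as cores per box grow. Assumes (purely for illustration) that every
# core generates a similar off-box communication demand.
GIGE_MB_S = 125.0                        # rough payload ceiling of one GigE link

for cores_per_box in (2, 4, 8):
    share = GIGE_MB_S / cores_per_box    # MB/s of off-box bandwidth per core
    print("%d cores/box: ~%.0f MB/s of GigE per core" % (cores_per_box, share))
# prints roughly 62, 31 and 16 MB/s per core respectively; a second bonded
# link would put the quad-core case back where the dual-core case started.
```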

So I am interested in finding out whether anyone is running something akin to the last-mentioned setup, whether they have found a single GigE link to be a performance constraint, and whether they have investigated using dual bonded GigE to relieve that constraint.

#4 - September 18, 2006, 07:10 - TG (Guest)
Re: CFD clusters + bonded GigE?

While it's true that you can create additional bandwidth by bonding, you aren't going to change the latency. Both Myrinet and InfiniBand offer better bandwidth AND much better latency than GigE networks. Most codes are influenced somewhat by both bandwidth and latency. The other problem you may face is a limit in the ability of your NIC to feed more bandwidth: just because you bond more GigE lines together does not mean you can actually feed them at their maximum capacity through a single NIC.
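A crude way to see the bandwidth/latency split is the usual transfer-time model below; the latency and bandwidth figures are rough, assumed values for illustration, not measured or vendor-quoted numbers:

```python
# Crude model: time to move one message = latency + size / bandwidth.
# All latency/bandwidth figures below are rough assumptions for illustration.
interconnects = {
    # name: (latency [s], bandwidth [bytes/s])
    "GigE":       (50e-6, 125e6),
    "Myrinet":    (4e-6,  250e6),
    "InfiniBand": (5e-6,  900e6),
}

for size in (1024, 64 * 1024, 1024 * 1024):   # message sizes in bytes
    for name, (lat, bw) in interconnects.items():
        t = lat + size / bw
        print("%-10s %8d B: %8.1f us" % (name, size, t * 1e6))
    print()
# Small messages are dominated by latency, large ones by bandwidth; bonding
# adds bandwidth but leaves the latency term untouched.
```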

#5 - September 18, 2006, 07:46 - Joe (Guest)
Re: CFD clusters + bonded GigE?

I use CFX, which is less latency-sensitive.

The advent of PCIe-based NICs has also lessened the problem of actually attaining maximum throughput on a NIC, i.e. [RAM - PCIe - NIC] - GigE - [NIC - PCIe - RAM] is much better than the old shared-PCI-bus limitation.

PCIe gives a NIC dedicated bandwidth.
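Rough numbers behind that; these are the usual theoretical bus peaks, and achievable throughput is lower in practice:

```python
# Theoretical peak bus bandwidth versus what one GigE link needs.
pci_shared  = 133.0   # MB/s, 32-bit/33 MHz PCI, shared by all devices on the bus
pcie_x1_dir = 250.0   # MB/s per direction for one PCIe 1.x lane, per slot
gige_needed = 125.0   # MB/s to saturate one GigE link in one direction

print("Shared PCI headroom over one GigE link: %.1fx" % (pci_shared / gige_needed))
print("PCIe x1 headroom per direction:         %.1fx" % (pcie_x1_dir / gige_needed))
# A shared PCI bus is already marginal for one saturated GigE link and hopeless
# for a bonded pair; even a one-lane PCIe slot gives the NIC its own bandwidth.
```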

PS: The big picture behind my questions is that Intel is launching quad-core desktop (Kentsfield) and server (Clovertown) chips in the next two months, which will allow quad-core desktop and octa-core server boxes.

#6 - September 18, 2006, 09:26 - andy (Guest)
Re: CFD clusters + bonded GigE?

What matters most when putting together a cluster is the type of simulation being performed. An explicit time-stepping CFD code tends to want lots of cheap CPUs with cheap interconnects to be cost-effective; a heavily implicit, steady-state code tends to want a few fast CPUs with fast interconnects; and semi-implicit time-stepping codes sit somewhere in between.

The reasoning for the above follows from the relative performance/cost for CPUs and interconnects and what is limiting the performance of the simulation.

> I should have been a bit more specific about the CPU hardware: Linking
> multiple dual-core boxes with single GigE links seems to scale well to 8-16
> cores.

Performing what type of calculation?

For our purposes, a couple of years ago, cheap single-processor nodes with on-board gigabit ethernet and the fastest main memory buses were the most cost-effective, by a long way. This scales to about 64-128 nodes before the interconnect performance makes additional nodes not worthwhile. The only consideration here was performance for minimum cost, plus the acceptance that 64-128 nodes was an acceptable upper performance limit.
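A toy illustration of why the interconnect eventually caps the useful node count; the cubic partitions and mesh size below are purely illustrative assumptions, and this ignores the extra global communication an implicit solver adds:

```python
# Toy sketch: partition an N^3 mesh into equal cubes. Compute work per process
# scales with the cells in a partition (its volume); halo exchange scales with
# its faces (surface). The ratio climbs as partitions shrink.
N = 128                                  # cells per side of the global mesh

for nproc in (8, 16, 32, 64, 128):
    cells = N**3 / float(nproc)          # compute work per process
    side = cells ** (1.0 / 3.0)          # edge length of an ideal cubic partition
    halo = 6.0 * side**2                 # cells exchanged across its six faces
    print("%4d procs: comm/compute ratio ~ %.3f" % (nproc, halo / cells))
# The communication share keeps growing while the work per process shrinks,
# so at some point extra nodes stop paying for themselves.
```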

Solutions with 2/4/8 processors on a board tend to be expensive but have fast interprocessor communication, and they are good for implicit calculations that do not need more processors than this. Running lots of crossed cables between a pair of similar boxes may also work quite well, but you usually cannot scale such machines up because the interconnect performance will cripple you for simulations that need significant amounts of communication.


#7 - September 18, 2006, 09:48 - Joe (Guest)
Re: CFD clusters + bonded GigE?

"Performing what type of calculation?"

8-16 cores on CFX gives pretty linear scale-up with GigE interconnects. This ties in with what you are saying, given that CFX is an implicit code.

I am looking at a relatively small number of cores in total, e.g. 16-32. It's really down to whether connecting 3-4 octa-core server boxes or 6-8 quad-core desktop boxes is the way to go ...

#8 - September 18, 2006, 10:16 - Joe (Guest)
Re: CFD clusters + bonded GigE? - Correction

I should have been a bit more explicit ('scuse the pun). Basically I have three box choices:

16 dual-core boxes
8 quad-core boxes
4 octa-core boxes

And two interconnect choices:

Single GigE interconnects
Dual bonded GigE interconnects

The issue is trying to figure out which would be the best choice. Only one configuration's scaling performance can be directly extrapolated from existing common practice: 16 dual-core boxes + single GigE interconnects.

The probable scaling performance of the other configurations is a mystery (to me at least) ... I was hoping others with more experience could comment.
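For what it's worth, one thing that can be reasoned about without benchmarks is how much of the communication stays inside a box. The sketch below uses a 1-D chain of partitions with nearest-neighbour exchanges only, which is a deliberate simplification of a real CFX partitioning:

```python
# Toy sketch: 32 partitions in a 1-D chain, assigned contiguously to boxes.
# Counts how many neighbour exchanges cross a box boundary (and so use GigE)
# versus staying in shared memory inside a box.
TOTAL_CORES = 32

for boxes, cores_per_box in ((16, 2), (8, 4), (4, 8)):
    links = TOTAL_CORES - 1            # neighbour pairs in the chain
    inter_box = boxes - 1              # exchanges that cross a box boundary
    intra_box = links - inter_box      # exchanges that stay in shared memory
    print("%2d boxes x %d cores: %2d of %d links cross the network"
          % (boxes, cores_per_box, inter_box, links))
# 15, 7 and 3 of the 31 neighbour links cross the network, respectively.
# Fewer, fatter boxes keep more traffic in shared memory, but the traffic that
# does leave then shares one (or two bonded) GigE links among more cores.
```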


#9 - September 19, 2006, 07:58 - andy (Guest)
Re: CFD clusters + bonded GigE? - Correction

Well, I will have to pass, because my detailed hardware knowledge is about 2-3 years old. I will add that an octa-core box is likely to work well for jobs using 8 or fewer processors. If this is how the machine is mainly to be used, then it is worth considering.

A few years ago, when I last looked at and bought some cluster hardware, one benchmark involved a dual-processor machine from a possible supplier, and we used only one of the two processors on each dual node because using both was slower than using one. It was not an Opteron but some form of dual Xeon, if memory serves.

I can only repeat that benchmarking your code is a wise move. I presume you have talked to CFX, who will have loads of material on performance with different hardware.
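Running the real code is the definitive test, but even a crude MPI-level ping-pong between two of the candidate nodes will show what latency and bandwidth a single or bonded link actually delivers. A sketch below, assuming mpi4py and NumPy are available; the ping-pong benchmarks shipped with most MPI libraries do the same job:

```python
# Crude MPI ping-pong; run with:  mpiexec -n 2 python pingpong.py
# (one process per node, so the messages cross the network, not shared memory)
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
REPS = 100

for size in (8, 1024, 1024 * 1024):            # message sizes in bytes
    buf = np.zeros(size, dtype=np.uint8)
    comm.Barrier()
    start = MPI.Wtime()
    for _ in range(REPS):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        else:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    elapsed = MPI.Wtime() - start
    if rank == 0:
        one_way = elapsed / (2.0 * REPS)        # seconds per one-way message
        print("%8d B: %8.1f us  %7.1f MB/s"
              % (size, one_way * 1e6, size / one_way / 1e6))
```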
