New Dell M420 Cluster

October 1, 2013, 15:37   #1
rja
New Member
John Allan
Join Date: Oct 2013
Posts: 1
We have just added a 24-node cluster using Dell M420 servers. All the servers have two Xeon E5-2440 processors, 48 GB of RAM, and an SSD. They also have QDR InfiniBand NICs and are interconnected through two FDR InfiniBand switches. Does anyone out there have experience with a similar setup? We have been running some STAR-CCM+ benchmarks on this new cluster and are seeing the performance drop off quickly when running on 10 or more cores per node. I'm wondering whether this is just a limitation of these servers or whether we have something configured incorrectly. Any help or suggestions will be greatly appreciated.
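For reference, here is a rough sketch of how one might tabulate parallel efficiency from benchmark timings like these (the Python snippet and the timing numbers in it are purely illustrative placeholders, not our measured results):

Code:
# Parallel-efficiency sketch: plug in your own STAR-CCM+ benchmark timings.
# The timings below are hypothetical placeholders, not measurements.
timings = {          # cores per node -> wall-clock seconds for a fixed case
    4: 410.0,
    8: 215.0,
    10: 190.0,
    12: 185.0,
}

base_cores = min(timings)
base_time = timings[base_cores]

for cores, t in sorted(timings.items()):
    speedup = base_time / t
    ideal = cores / base_cores
    efficiency = speedup / ideal
    print(f"{cores:>2} cores/node: speedup {speedup:.2f}x "
          f"(ideal {ideal:.2f}x), efficiency {efficiency:.0%}")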

October 2, 2013, 15:06   #2
evcelica
Senior Member
Erik
Join Date: Feb 2011
Location: Earth (Land portion)
Posts: 1,167
You are most likely hitting memory bandwidth bottlenecks when going over 4 cores/processor or 8 cores/node.
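As a rough sanity check, assuming the E5-2440's three DDR3-1333 memory channels per socket (about 10.7 GB/s each at the theoretical peak; adjust if your DIMM configuration differs), the bandwidth available per core shrinks quickly as you light up more cores:

Code:
# Rough per-core memory bandwidth estimate for a dual E5-2440 node.
# Assumes 3 DDR3-1333 channels per socket (~10.7 GB/s each); check your DIMMs.
channels_per_socket = 3
channel_bw = 10.667                              # GB/s, DDR3-1333 peak
node_bw = 2 * channels_per_socket * channel_bw   # two sockets per M420

for cores in (4, 6, 8, 10, 12):
    print(f"{cores:>2} cores/node -> ~{node_bw / cores:4.1f} GB/s per core (peak)")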

October 16, 2013, 15:11   #3
rlc113
New Member
Lee
Join Date: Jun 2012
Posts: 4
Are you saying you get close to linear scaling when running up to 9 cores per node (216 cores total), and that it only starts to lose performance beyond that?

I am seeing non-linear behavior on a GigE interconnect when I add a 4th node (6-core i7) to my current setup. I'm trying to determine whether it is network latency or memory bandwidth.
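One way to separate the two is to measure the interconnect latency directly, outside the solver. A minimal ping-pong sketch with mpi4py (assuming mpi4py is installed; the OSU micro-benchmarks do the same thing more rigorously):

Code:
# Minimal MPI ping-pong latency test between rank 0 and rank 1.
# Run with e.g.:  mpirun -np 2 --host nodeA,nodeB python pingpong.py
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

reps = 10000
msg = bytearray(8)          # tiny message: measures latency, not bandwidth
buf = bytearray(8)

comm.Barrier()
start = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(msg, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(msg, dest=0)
elapsed = time.perf_counter() - start

if rank == 0:
    # One round trip is two messages, so half the round-trip time is the latency.
    print(f"Average one-way latency: {elapsed / reps / 2 * 1e6:.1f} microseconds")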

November 16, 2013, 11:27   #4
TMG
Member
Join Date: Mar 2009
Posts: 44
You can't expect to scale CFD runs to many nodes using GigE. It has neither the bandwidth nor the low latency needed to work well. We've seen it die at 3 nodes, but it depends on what's in the nodes. There are some new low-latency 10GigE interconnects that will get you quite a bit farther, but ultimately you would need to go to InfiniBand to scale to very large numbers of nodes. The cost of the low-latency 10GigE hardware is comparable to InfiniBand anyway, so I don't quite see the advantage.

As for using 10 cores per node: that won't work either. In that case you are hitting a memory bandwidth limitation. The latest Intel memory architectures have just about enough bandwidth to keep 8 cores busy. You can't keep adding cores without adding more memory bandwidth, and there is no way to increase memory bandwidth in the current generation of Intel CPUs.
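To see that plateau directly, you can measure sustained memory bandwidth while varying how many cores are active, e.g. with the STREAM benchmark. A very rough NumPy-based triad sketch (run several copies pinned to different cores with taskset or numactl and add up the reported figures; the total should flatten once the memory controllers saturate):

Code:
# Very rough STREAM-triad-style bandwidth estimate for a single process.
import numpy as np
import time

n = 20_000_000                  # ~160 MB per array, far larger than any cache
b = np.random.rand(n)
c = np.random.rand(n)
scalar = 3.0

reps = 10
start = time.perf_counter()
for _ in range(reps):
    a = b + scalar * c          # triad: a = b + s*c
elapsed = time.perf_counter() - start

# Counts only the logical triad traffic (read b, read c, write a);
# NumPy temporaries add extra traffic, so this underestimates the bus load.
bytes_moved = 3 * 8 * n * reps
print(f"Approximate triad bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")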


Tags
dell m420





