Home > Forums > Hardware

OpenFOAM machine recommendation

Old   May 9, 2014, 13:43
Default OpenFOAM machine recommendation
  #1
New Member
 
Calum Douglas
Join Date: Apr 2013
Location: Coventry, UK
Posts: 17
Hi,
I'm running OpenFOAM, mostly the pisoFoam solver, for car-body external aerodynamics.
Unstructured mesh, cell count around 10 million.

I want to spend under £3500 (about $5900 USD) on the whole thing. (I have looked over the various existing threads on this forum, but wasn't quite satisfied.)

4 Options:

1) Single quad-socket board with four 16-core Opteron 6274 CPUs, £3500
64 cores, 256 GB RAM, Supermicro H8QGi-F motherboard

http://www.pugetsystems.com/featured...orkstation-100

Downside = only one FPU shared per pair of cores (one per Bulldozer module)
Upside = all-in-one pre-built system, no messing around

2) Build a Helmer-style cluster with perhaps 10 boards for the £3500,
each with an 8-core AMD FX-8350 (one board + CPU + 8 GB RAM is £250)

Upside = can afford more cores
Downside = have to set up networking etc., and it would be Ethernet with a
Netgear 16-port GS116 switch.

3) Rent cluster time from Penguin / Amazon / Rescale.com,
who all claim to have OpenFOAM-enabled systems.

Upside = no hardware to buy
Downside = can be expensive to do all the setup runs when starting
new simulations, and the performance of these types of services is not quantified in publications.


4) Something I didn't consider, for under $5900 USD...


I'm well aware that Intel CPUs are in many ways the better option, but since I have no parallel licence cost issues and can run as many cores as I can buy, from a cost-per-FLOP perspective I'm struggling to see a winning combo from Intel. If I go Intel, I almost halve the number of CPU cores I can buy.

Current workstation spec:
Dual-socket Supermicro X7DWA-N motherboard
Two quad-core 3.33 GHz Xeon X5470s
Quadro FX 3700 graphics

Any advice much appreciated.

Regards

Calum Douglas
www.calumdouglas.ch
www.scorpion-dynamics.com
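A quick sanity check on the cost-per-core arithmetic behind options 1 and 2, using the rough prices quoted above. The ~£70 switch price is my own assumption, not a figure from the thread; this is illustrative only:

```python
# Rough cost-per-core comparison of the two build options (GBP).
# Prices are the approximate figures quoted in the post; the ~70 GBP
# GbE switch price is an assumption added for completeness.
options = {
    "quad-socket Opteron 6274": (3500, 64),            # one board, 64 cores
    "10x FX-8350 Helmer cluster": (10 * 250 + 70, 80), # 10 boards + GbE switch
}

cost_per_core = {name: price / cores for name, (price, cores) in options.items()}
for name, cpc in cost_per_core.items():
    print(f"{name}: {cpc:.2f} GBP/core")
```

On raw cost per core the FX-8350 cluster wins comfortably, which is why the question below about whether cheap Ethernet networking can keep those cores fed matters so much.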

Old   May 12, 2014, 05:31
Default
  #2
Senior Member
 
Charles
Join Date: Apr 2009
Posts: 183
I would suggest that you shouldn't get too hung up on the number of compute cores that the AMD processors offer. Rather, look at what you can get in terms of memory system performance. Some benchmarks published on this forum quantify the benefit of memory performance quite nicely. The quad-socket AMD solution is not a bad option (because you get 16 memory channels on a single board), but you won't benefit much from spending the money on the 16-core variants of the CPU. The downside of the quad-socket board option is that it limits you to 1600 MHz RAM, when it is possible to get significantly faster RAM on a single-socket board; but then you also need to take care of the networking, preferably InfiniBand.

Core i7 computers are favoured because you can get 4 memory channels per socket, which you can't get on other single-socket systems.
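To put numbers on the memory-channel argument: peak theoretical DDR bandwidth is channels × transfer rate × 8 bytes. A minimal sketch (the DDR3-1866 figure for the i7 is my assumption of a typical fast single-socket setup of the era, not from the post):

```python
def peak_bw_gbs(channels: int, mt_per_s: int) -> float:
    """Peak theoretical DDR bandwidth in GB/s: channels * MT/s * 8 bytes."""
    return channels * mt_per_s * 8 / 1000.0

quad_opteron = peak_bw_gbs(16, 1600)  # 4 sockets x 4 channels, DDR3-1600
core_i7 = peak_bw_gbs(4, 1866)        # 1 socket x 4 channels, DDR3-1866
print(f"quad Opteron: {quad_opteron:.1f} GB/s, Core i7: {core_i7:.1f} GB/s")
```

The aggregate quad-socket figure is much larger, but per core the 64-core Opteron box gets about 3.2 GB/s while a quad-core i7 gets about 15 GB/s, which is exactly why memory performance rather than core count tends to dominate for CFD.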

Old   May 12, 2014, 15:34
Default network protocol
  #3
New Member
 
Calum Douglas
Join Date: Apr 2013
Location: Coventry, UK
Posts: 17
Ah yes, I was afraid of the network speeds.

Anyone got an opinion on the new 10 Gb Ethernet or
Thunderbolt network protocols?

If I can't afford networking of sufficient speed, I will probably just go
for the 4-socket motherboard.

Thanks for the info

Calum

Old   May 12, 2014, 16:02
Default
  #4
Senior Member
 
Charles
Join Date: Apr 2009
Posts: 183
10 Gb Ethernet is a strange animal. The cards and switches are virtually as expensive as InfiniBand, but much slower, FDR IB being able to hit 56 Gb/s. What makes it even worse for 10 Gb Ethernet is that its latency is much higher than IB's. The cheap trick is to look for used IB cards, cables and a switch on eBay. The stuff is surprisingly cheap, but you will be on your own making it work.
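The latency point can be made concrete with a toy transfer-time model, time = latency + size/bandwidth. The latency and bandwidth figures below are ballpark assumptions (GbE ~50 µs, 10GbE ~15 µs, FDR IB ~1 µs), not measurements from this thread:

```python
def transfer_us(nbytes: int, latency_us: float, gbit_per_s: float) -> float:
    """Time in microseconds to move one message: latency + serialization time."""
    return latency_us + nbytes * 8 / (gbit_per_s * 1000.0)

msg = 64 * 1024  # a 64 KiB halo-exchange message, size chosen for illustration
gbe = transfer_us(msg, 50.0, 1.0)
ten_gbe = transfer_us(msg, 15.0, 10.0)
fdr_ib = transfer_us(msg, 1.0, 56.0)
print(f"GbE {gbe:.0f} us, 10GbE {ten_gbe:.0f} us, FDR IB {fdr_ib:.1f} us")
```

For the small messages typical of tightly coupled solvers the latency term dominates, so 10GbE's bandwidth advantage over GbE buys far less than its price suggests.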

Old   May 16, 2014, 11:52
Default
  #5
Senior Member
 
Daniele
Join Date: Oct 2010
Location: Italy
Posts: 987
Hi, I agree with Charles, you can get cheap IB cards and cables on eBay.
Recently I bought two refurbished Mellanox MHGS18-XTC cards (20 Gb/s) from the USA, plus a 5 m Mellanox cable (20 Gb/s) to connect them (MCC4L28-005).

Each Mellanox card cost $32.99
The cable cost $19

Shipping costs to Italy: $25.38 (IB cards) + $21.23 (cable)

VAT (yes... we have VAT here in Italy...): $11.66 (for the cards) + $0 (luckily the cable was very cheap, so no VAT was applied)

Conclusion: 2x 20 Gb/s Mellanox IB cards + 5 m cable for $143.25, which I think is very cheap.

Daniele

Last edited by ghost82; May 17, 2014 at 05:58.
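As a quick check that the itemized costs above really add up to the quoted total:

```python
# Itemized total for the eBay InfiniBand purchase described above (USD).
cards = 2 * 32.99        # two Mellanox MHGS18-XTC cards
cable = 19.00            # 5 m MCC4L28-005 cable
shipping = 25.38 + 21.23 # cards + cable shipped separately
vat = 11.66              # charged on the cards only
total = round(cards + cable + shipping + vat, 2)
print(total)  # 143.25, matching the figure quoted
```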

Old   May 17, 2014, 08:28
Default
  #6
Senior Member
 
Derek Mitchell
Join Date: Mar 2014
Location: UK, Reading
Posts: 120
My approach was to go even cheaper:

H8QME boards with 4x Opteron 8381 HE processors, giving only 16 cores per board.

Unfortunately this means building custom cases.

Board: 100
4 CPUs: 36
16 GB memory: 70
PSU: 30
Fans: 15
__________________
A CHEERING BAND OF FRIENDLY ELVES CARRY THE CONQUERING ADVENTURER OFF INTO THE SUNSET

Old   May 17, 2014, 14:37
Default
  #7
Senior Member
 
Charles
Join Date: Apr 2009
Posts: 183
Daniele, did you have any trouble getting the IB stuff to work, and what operating system are you running?

Quote:
Originally Posted by ghost82 View Post
Hi, I agree with Charles, you can get cheap IB cards and cables on eBay.
Recently I bought two refurbished Mellanox MHGS18-XTC cards (20 Gb/s) from the USA, plus a 5 m Mellanox cable (20 Gb/s) to connect them (MCC4L28-005).

Daniele

Old   May 17, 2014, 16:54
Default
  #8
Senior Member
 
Daniele
Join Date: Oct 2010
Location: Italy
Posts: 987
I'm running Windows 7 Professional 64-bit on 2 nodes.
No problems at all.
As suggested by Erik in another post, the only thing to do is install the correct drivers (I downloaded WinOF 2.1.2) and enable the Winsock Direct protocol during installation.

PS: I'm using Fluent, not OpenFOAM.

Help getting ANSYS to work with infiniband on Win7x64

Old   May 19, 2014, 18:17
Default for openfoam when to go IB
  #9
Senior Member
 
Derek Mitchell
Join Date: Mar 2014
Location: UK, Reading
Posts: 120
With 20 x 8381 (quad-core 2.5 GHz) processors spread across 7 boards, is it worthwhile going to IB rather than Gb Ethernet? How substantial is the performance difference?

Old   May 20, 2014, 03:32
Default
  #10
Senior Member
 
Charles
Join Date: Apr 2009
Posts: 183
You should absolutely go with InfiniBand. Don't even think of doing such a cluster with Gb Ethernet. Unfortunately I don't have the results with me here, but when testing a 6-node, dual-socket, hex-core Opteron system a few years ago, the results were startling. For nominal test cases (something like 5 million cells), when using anything more than about 20 cores on Gb Ethernet the performance was essentially flat, and even got worse with more cores. With QDR IB the improvement was more or less linear all the way up to 72 cores. You are looking at 80 cores for your system ... it definitely must be IB. To put it differently, you would probably do better with 40 cores and IB than with 80 on GbE.

Quote:
Originally Posted by derekm View Post
With 20 x 8381 (quad-core 2.5 GHz) processors spread across 7 boards, is it worthwhile going to IB rather than Gb Ethernet? How substantial is the performance difference?
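That flattening behaviour falls out of even a toy strong-scaling model: per-step time = compute/N + N × (messages × latency). With GbE-like latency the growing communication term overtakes the shrinking compute term at a few tens of cores, while IB-like latency pushes the crossover far higher. The constants below are purely illustrative, chosen to mimic the shape described, not fitted to any benchmark:

```python
def step_time_ms(cores: int, latency_ms: float,
                 compute_ms: float = 1000.0, msgs_per_core: int = 50) -> float:
    """Toy strong-scaling model: shrinking compute plus comm cost growing with N."""
    return compute_ms / cores + cores * msgs_per_core * latency_ms

# GbE-like per-message latency (~50 us = 0.05 ms): worse beyond ~20 cores.
gbe_20 = step_time_ms(20, 0.05)
gbe_40 = step_time_ms(40, 0.05)
# IB-like latency (~4 us): still improving at 72 cores.
ib_40 = step_time_ms(40, 0.004)
ib_72 = step_time_ms(72, 0.004)
print(f"GbE: {gbe_20:.0f} -> {gbe_40:.0f} ms; IB: {ib_40:.0f} -> {ib_72:.0f} ms")
```

The model reproduces the observation above: doubling GbE cores past the sweet spot makes a step slower, while the same doubling on low-latency IB still pays off.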

Old   May 20, 2014, 05:18
Default
  #11
Senior Member
 
Derek Mitchell
Join Date: Mar 2014
Location: UK, Reading
Posts: 120
As I thought: GbE with two nodes/boards is OK, three nodes/boards is pushing it.

That means I can get to 32/48 cores if I only use the quad-socket boards
with GbE, but the remaining cores will need IB.

More expense ... the IB cards are quite cheap on eBay, but the switches are still much, much bigger money than GbE.
