CFD Online Forums - Hardware
www.cfd-online.com

New CFD Workstation "Finalised" - Need Feedback



Old   February 29, 2016, 08:27
Default New CFD Workstation "Finalised" - Need Feedback
  #1
New Member
 
Andrew Norfolk
Join Date: Feb 2016
Posts: 25
Rep Power: 10
Andrew Norfolk is on a distinguished road
Hi,

I'm about to place an order for the parts needed to build a new CFD workstation.

Here is a previous thread describing the requirements.

http://www.cfd-online.com/Forums/har...rkstation.html

Parts List:

CPU: 2x Intel Xeon E5-2637 v3

http://www.lambda-tek.com/Intel-CM80...01~sh/B1935313

Motherboard: Supermicro X10DRL-i

http://www.lambda-tek.com/shop/?region=GB&searchString=evo+212&go=go

Memory: 2x Crucial 32GB 2133MHz DDR4 ECC Reg

http://www.ebuyer.com/655315-crucial...ct4k8g4rfs4213

CPU Heatsink: 2x Cooler Master Hyper 212 Evo

http://www.lambda-tek.com/shop/?regi...=evo+212&go=go

Storage: Samsung 512GB 850 PRO

http://www.lambda-tek.com/Samsung-MZ...BW~sh/B1915316

Power Supply: Corsair CS850M

http://www.lambda-tek.com/Corsair-CP...UK~sh/B1954079

PC Case: Fractal R5 Black

http://www.ebuyer.com/675950-fractal...d-ca-def-r5-bk

Total Price: £2987

I have a few questions regarding compatibility.

The motherboard is ATX form factor; however, it is a server board. Will it fit into a consumer ATX case like the Fractal R5? I've had a warning that it may be awkward, but I do not understand why that would be the case.

I am also concerned that the two CPU coolers I have chosen might overlap due to the proximity of the CPU sockets on the motherboard.

Finally, when it comes to RAM I'm totally lost. I want a total of 64 GB of ECC DDR4 at 2133 MHz, but do I do this as 4x16GB? 8x8GB? Do I use single-, dual-, or quad-rank modules?

Old   February 29, 2016, 11:52
Default
  #2
Super Moderator
 
 
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,399
Rep Power: 46
flotus1 has a spectacular aura about
Before going into detail: Did your requirements change or why did you switch to a dual-CPU setup?

Old   February 29, 2016, 12:46
Default
  #3
New Member
 
Andrew Norfolk
Join Date: Feb 2016
Posts: 25
Rep Power: 10
Andrew Norfolk is on a distinguished road
Hi Flotus,

No, the requirements did not change. I had a chat with the ANSYS tech support team. I suggested the 8-core 2667 v3, as that offered the faster RAM and larger cache you recommended. Apparently, because of the memory bandwidth limitations of CFX, you can actually get better performance running on dual quad-cores than on a single eight-core. This setup kept me nicely under a £3000 budget and I think it offers better performance. Do you think a single 8-core would be better?

Old   February 29, 2016, 13:43
Default
  #4
Super Moderator
 
 
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,399
Rep Power: 46
flotus1 has a spectacular aura about
That makes sense. You might need to check the processor affinity so your 4 threads are distributed correctly between the 2 processors.
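On Linux, you can inspect and pin affinity programmatically; here is a minimal sketch using Python's standard library (the Linux-only `os.sched_*` calls). The actual core numbering depends on your board and BIOS, so the IDs below are illustrative only:

```python
import os

# CPUs the current process is currently allowed to run on (Linux-only API)
allowed = sorted(os.sched_getaffinity(0))
print("allowed CPUs:", allowed)

# Pin this process to a single core -- here simply the first allowed one.
# On a dual-socket board you would instead pick cores split across both
# packages, e.g. {0, 1, 8, 9}; check /proc/cpuinfo or lstopo for the mapping.
target = {allowed[0]}
os.sched_setaffinity(0, target)
assert os.sched_getaffinity(0) == target
```

The same effect can be had from the command line with `taskset` or `numactl`, which is usually the more practical route for solver processes.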

The mainboard you picked is slightly wider than a standard ATX mainboard. It should still fit into most ATX cases; to be on the safe side, choose a case that can hold E-ATX or XL-ATX boards.
You might also consider a board with 16 DIMM slots so you can easily upgrade the RAM later if necessary.

The CPU coolers won't interfere. However, there are 2 components you should not cheap out on when building a workstation: the power supply and CPU cooling. My choice would be the BeQuiet Dark Power Pro 11 550W and CPU coolers from Noctua. Which one depends on whether your mainboard has square- or narrow-ILM sockets.

RAM: you need 8 DIMMs to fill all 4 memory channels of both CPUs. For 64 GB total, 8 GB DIMMs are your only option. Dual-rank DIMMs can offer slightly better performance than single-rank DIMMs due to rank interleaving.

Remember that you still need a graphics card.

Old   March 5, 2016, 09:06
Default
  #5
Senior Member
 
Robert
Join Date: Jun 2010
Posts: 117
Rep Power: 16
RobertB is on a distinguished road
We have a cluster built around this CPU running CentOS. We use CCM+. I have two suggestions:

1) Turn off hyperthreading.
2) Bind the processes to the CPUs (I assume CFX can do this, if it is not standard).

On our previous cluster, built around some 2.4 GHz Xeons from probably 2 generations earlier, CPU binding did not seem to matter. With these faster chips I was somewhat disappointed initially, as I had estimated they would be 2x faster per core. Running without binding they weren't, but they picked up about 20% speed (IIRC) when binding was turned on.

Old   May 31, 2016, 05:15
Default
  #6
New Member
 
Christian Elgård Clausen
Join Date: Dec 2012
Location: Copenhagen, Denmark
Posts: 7
Rep Power: 13
Elgaard is on a distinguished road
Hi,

We are looking at the same setup with dual E5-2637 for running simulations with ANSYS CFX.

But why not choose RAM with a frequency of 2400 MHz? Both the CPU and motherboard support this. Is it just a waste of RAM bandwidth due to the low core count?

We would also like to use the machine for solid simulations with ANSYS Mechanical. Other than the amount of RAM (choosing 128 GB instead), is there anything else we should be aware of? Is it worth going with an M.2 PCIe SSD instead?

Old   May 31, 2016, 07:01
Default
  #7
Senior Member
 
Robert
Join Date: Jun 2010
Posts: 117
Rep Power: 16
RobertB is on a distinguished road
The processor above is a v3, which uses DDR4-2133; the newer v4s use DDR4-2400.

In general you can't have too much bandwidth. You will still need 8 DIMMs to achieve full memory bandwidth.

Are you license-limited? If not, you can get more throughput with the 8- or 12-core CPUs.

One issue with SSDs is that they do wear out. If you are running reasonably big CFD models and saving them regularly, I would check the expected endurance.

40 GB a day is a lot of writing for an office desktop, but it can be less than 1 save for a large CFD model (probably bigger than you would run on a single workstation, but if you save a smaller one every hour of run time, it all adds up).
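For a back-of-the-envelope feel for those numbers: assuming a 150 TBW endurance rating (roughly what Samsung quoted for the 512 GB 850 PRO at the time; treat it as an assumption and check the actual datasheet), 40 GB/day works out to about a decade of writes:

```python
# Rough SSD endurance estimate -- illustrative numbers, not a guarantee.
tbw_rating = 150          # assumed endurance rating in terabytes written
daily_writes_gb = 40      # e.g. one save of a mid-size CFD model per day

days = tbw_rating * 1000 / daily_writes_gb   # 3750 days
years = days / 365
print(f"~{years:.1f} years to reach the rating at 40 GB/day")
# prints "~10.3 years to reach the rating at 40 GB/day"
```

Save a large model several times a day, though, and that margin shrinks quickly, which is the point being made above.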

Old   May 31, 2016, 08:58
Default
  #8
New Member
 
Christian Elgård Clausen
Join Date: Dec 2012
Location: Copenhagen, Denmark
Posts: 7
Rep Power: 13
Elgaard is on a distinguished road
Quote:
Originally Posted by RobertB View Post
The processor above is a v3, which uses DDR4-2133; the newer v4s use DDR4-2400.

In general you can't have too much bandwidth. You will still need 8 DIMMs to achieve full memory bandwidth.
Thanks!

Quote:
Originally Posted by RobertB View Post
Are you license limited, you can get more throughput with the 8 or 12 core CPUs.
We are limited to 1 HPC pack with 8 cores, unfortunately.

Quote:
Originally Posted by RobertB View Post
One issue with SSDs is that they do wear out. If you are running reasonably big CFD models and saving them regularly, I would check the expected endurance.

40 GB a day is a lot of writing for an office desktop, but it can be less than 1 save for a large CFD model (probably bigger than you would run on a single workstation, but if you save a smaller one every hour of run time, it all adds up).
Regarding storage, I was interested in knowing whether there are any gains in going with multiple M.2 PCIe NVMe drives in RAID 0, with the possibility of write speeds around 2500 MB/s compared to the 600 MB/s of a SATA III SSD.

Old   May 31, 2016, 11:00
Default
  #9
Senior Member
 
Robert
Join Date: Jun 2010
Posts: 117
Rep Power: 16
RobertB is on a distinguished road
On our cluster we have conventional disks in a RAID 6 setup. They write at ~600-700 MB/s.

We have big models (>40 GB) and it's certainly not bad doing a write. Even there it takes a minute or so, which in the scheme of things is not that long.

At a certain (speed) point I would go for capacity over speed.

As an aside, I would try locking the CPU affinity if CFX allows it; it is typically set in the mpirun commands. This helped significantly with CCM+ on the v3 version of these processors.

Old   June 13, 2016, 09:40
Default
  #10
New Member
 
Andrew Norfolk
Join Date: Feb 2016
Posts: 25
Rep Power: 10
Andrew Norfolk is on a distinguished road
Quote:
Originally Posted by RobertB View Post
On our cluster we have conventional disks in a RAID 6 setup. They write at ~600-700 MB/s.

We have big models (>40 GB) and it's certainly not bad doing a write. Even there it takes a minute or so, which in the scheme of things is not that long.

At a certain (speed) point I would go for capacity over speed.

As an aside, I would try locking the CPU affinity if CFX allows it; it is typically set in the mpirun commands. This helped significantly with CCM+ on the v3 version of these processors.
I can't find anything about setting the process affinity... When I load the solver I have the option to set the number of cores, but that's all. Could you give me a bit more detail about core binding?

Old   June 13, 2016, 11:13
Default
  #11
Senior Member
 
Robert
Join Date: Jun 2010
Posts: 117
Rep Power: 16
RobertB is on a distinguished road
Here is a link to the mpirun commands:

https://www.ibm.com/support/knowledg..._cpu_bind.html

With CCM+ you can either use the cpubind option, or specify extra mpirun commands to be used when the server process is started.
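As a rough sketch of what that looks like on the command line (flag spellings vary by MPI and solver version, so treat these as assumptions and check the linked documentation and your solver manual):

```shell
# Command sketches only -- verify exact flags against your installed versions.

# IBM Platform MPI (the page linked above documents the -cpu_bind option):
mpirun -cpu_bind -np 4 ./solver_binary

# STAR-CCM+ exposes the same idea through its own launcher option:
starccm+ -np 4 -cpubind mysim.sim
```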

I have not used CFX, so I don't know how they do it; you could ping support.

Old   June 15, 2016, 02:27
Default
  #12
Member
 
Kim Bindesbøll Andersen
Join Date: Oct 2010
Location: Aalborg, Denmark
Posts: 39
Rep Power: 15
bindesboll is on a distinguished road
If you are limited to 4 cores, I don't see the point of spending money on dual quad-core CPUs - you will use at most half of them.
Go for the i7-5820K with overclocking and matching RAM speed; that will give you max performance on 4 cores (and you still have 2 cores for other tasks while solving).

The system you describe would be suitable for a single ANSYS HPC Pack license (8 cores), which you should consider raising with your management, as it might be a more cost-efficient solution than 4 single HPC licenses.

You will indeed need 4 memory modules per CPU to utilize the 4 memory channels. I agree that dual-rank RAM is slightly better than single-rank.




