CFD Online Discussion Forums

CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   CFX (https://www.cfd-online.com/Forums/cfx/)
-   -   What Hardware for CFX (https://www.cfd-online.com/Forums/cfx/106750-what-hardware-cfx.html)

bookie56 September 7, 2012 03:57

What Hardware for CFX
 
Hi folks!
OK! The last time I was here was to help a customer with OpenFOAM...and I came to the right place:D

I still have the same customer, but these days he is tending to use ansys CFX more and more.....

I am completely in the dark when it comes to building a new computer for him that can cope with the kind of demands CFX makes.

At the moment he has the following:
Chassis: Ace Midtower Game Edge 990
Motherboard: Asus P6X58D-E
Processor: Intel Corei7 950 3.06GHz 8mb s-1366
Memory: 12GB Corsair XMS3 Intel i7 PC12800 - 1600MHz
PSU: Corsair GS800W
Graphics: Asus Radeon HD6870 1GB memory
Main Harddrive: Intel SSD 510 SATA/600
Storage drive: WD Green 1TB SATA/300 32mb

He mentioned to me that he wants to buy CFX to run on an 8-core processor...

Can someone give me an idea of what we are looking at to build the right computer?

I built the above for him, but he didn't really put any demands on me for better performance....

Looking at what these programs do etc - thinking the new computer should at least have ECC memory for reliability....

Any help or ideas gratefully received:)

bookie56

ghorrocks September 7, 2012 08:55

CFX has essentially the same demands for computing resources as OpenFOAM.

Most CFD cases need the fastest CPU you can buy, and about 2GB RAM per core. It is often better to run two CPUs (or even two independent computers) rather than a single CPU with the same number of cores. Hard drive speed does not matter much for CFD, so there is no need for an SSD.
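As a rough illustration of the rule of thumb above (the 2GB-per-core figure is the poster's heuristic, not an official ANSYS requirement), a quick sizing sketch:

```python
# RAM sizing sketch using the ~2GB-per-core rule of thumb from the post
# above (a heuristic, not an official ANSYS requirement).
def ram_needed_gb(cores, gb_per_core=2):
    """Rough RAM needed for a CFD run using `cores` parallel processes."""
    return cores * gb_per_core

print(ram_needed_gb(8))   # an 8-core run -> 16 GB
print(ram_needed_gb(12))  # a 12-core run -> 24 GB
```

By this rule the existing 12GB box is already tight for a 6-core run once the OS and other processes take their share.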

bookie56 September 7, 2012 09:35

Hi ghorrocks:)
Thanks for that...
Have been looking at this board from Asus
What do you think?

bookie56

evcelica September 7, 2012 13:32

You could save a lot of money by skipping the ECC; since CFX uses iterative solving, the need for ECC is debatable.

ghorrocks September 8, 2012 07:25

I agree with Erik. Spend the money on the CPU, as that is the key part of the system for CFD. Then put in enough memory and HD, and go cheap on everything else.

But remember that the cost of hardware in CFD is a tiny fraction of the cost of a commercial license. So if you are on a commercial license you might as well buy a top of the range machine as the software is the real cost and you have to maximise return on that investment.

bookie56 September 8, 2012 07:57

Hi guys!
Thanks for the info....
Thing is, most of the dual processor boards I have been looking at use ECC memory.....
But I thank you for your time...

As far as I know my customer has a commercial license...

bookie56

evcelica September 8, 2012 12:44

Quote:

Originally Posted by ghorrocks (Post 380846)
But remember that the cost of hardware in CFD is a tiny fraction of the cost of a commercial license. So if you are on a commercial license you might as well buy a top of the range machine as the software is the real cost and you have to maximise return on that investment.


SO TRUE! I wish my workplace would realize this, I run a $46K license on my $1,400 P.O.S. Dell!!! :rolleyes:

ghorrocks September 9, 2012 06:25

Well, you have a good case to go to management and ask for a major computer upgrade. The results your $46k license returns are being compromised, and runs take longer, all to save a few thousand bucks - a false economy. Slip in a few "return on investment"s and other management buzzwords and you should be right :)

evcelica September 10, 2012 22:09

Quote:

Originally Posted by ghorrocks (Post 380922)
Well, you have a good case to go to management and ask for a major computer upgrade. The results your $46k license returns are being compromised, and runs take longer, all to save a few thousand bucks - a false economy. Slip in a few "return on investment"s and other management buzzwords and you should be right :)

You are absolutely correct, but we are non-profit scientific research, so nobody cares about anything that makes logical sense. We have guys using 10+ year old computers because "it ain't broke yet", so all I would hear is no, because we have a tight budget right now, and which division or project pays for this or that; it's a whole different world.

bookie56 September 14, 2012 04:03

Hi again!

OK! My customer uses cfx5...but the equipment is old and slow....
What motherboard, processor, graphics card etc are we looking at to run this baby?

I have been reading something on the ANSYS site about i7 processors not being supported?....

Any help will be gratefully appreciated!

bookie56

Lance September 14, 2012 05:00

My laptop has an i7 M 620 processor and CFX runs fine on it.

sainath.s@tdps.co.in September 14, 2012 06:07

Your problem size, element count and the physics are going to matter the most.

Check out DELL T7500 series workstations.

ghorrocks September 14, 2012 07:56

Have a look at the spec.org website, especially the CPU2006 results page. For CFX single processor performance the SPECfp2006 benchmark is the one. For multi-processor performance you can get it from SPECfp2006rate, but you need to do a little number crunching.

In my experience these numbers are pretty good estimates of CFX performance.
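One common back-of-envelope version of that "number crunching" is to compare per-copy throughput from the rate result against the single-copy score; a sketch (all the scores below are made-up placeholders, not real SPEC results):

```python
# Back-of-envelope multi-process estimate from SPEC CPU2006 fp results:
# compare the all-cores "rate" throughput per copy against the single-copy
# score. All numbers below are made-up placeholders, not real SPEC scores.
def per_copy_throughput(specfp_rate, copies):
    return specfp_rate / copies

single_copy = 64.0      # hypothetical SPECfp2006 (one copy running)
rate_6_copies = 280.0   # hypothetical SPECfp2006_rate with 6 copies

# Fraction of single-core speed each core retains when all 6 are busy:
retained = per_copy_throughput(rate_6_copies, 6) / single_copy
print(round(retained, 2))  # 0.73
```

A low retained fraction like this is exactly the memory-bandwidth starvation effect discussed later in the thread.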

bookie56 September 14, 2012 10:41

Hi guys!
Thanks for the info!

bookie56

bookie56 September 19, 2012 05:41

Hi again!
Here is what would get my customer started, with a lot of potential for upgrades:

Motherboard: Asus Z9PE-D8 WS
http://www.asus.com/Motherboards/Intel_Socket_2011/Z9PED8_WS/

Chassis: ANTEC PERFORMANCE ONE P280 XL-ATX
http://www.antec.com/product.php?id=704504&fid=6&lan=us

Processor: 2x INTEL XEON E5-2620 6-CORE 2.0GHZ 15MB S-2011
http://ark.intel.com/products/64594/Intel-Xeon-Processor-E5-2620-15M-Cache-2_00-GHz-7_20-GTs-Intel-QPI

Ram: CORSAIR 32GB DDR3 VENGEANCE QUAD 1600MHZ CL10 (4X8GB)
http://www.corsair.com/en/memory-by-product-family/vengeance/vengeance-32gb-quad-channel-ddr3-memory-kit-cmz32gx3m4x1866c10.html

PSU: CORSAIR HX 1050W ATX12V 2.2 / EPS12V PSU
http://www.corsair.com/en/power-supply-units/hx-series-power-supply-units/hx-series-hx1050-power-supply-1050-watt-80-plus-gold-certified-modular-psu.html

Hard drives: INTEL 520 SERIES 2.5" 240GB SSD SATA/600 MLC 25NM BULK
http://ark.intel.com/products/66250/Intel-SSD-520-Series-240GB-2_5in-SATA-6Gbs-25nm-MLC
WESTERN DIGITAL VELOCIRAPTOR 3.5" 1TB 10K RPM SATA/600 64MB
http://www.wdc.com/en/products/products.aspx?id=20

Graphics: 2x ASUS GEFORCE GT 640 2GB DDR3 PCI-E VGA/DVI/HDMI
http://www.asus.com/Graphics_Cards/NVIDIA_Series/GT6402GD3/

Processor Cooling: CORSAIR H80 HYDRO CPU COOLER S-775/1155/1156/1366/2011/AM2+/AM3
http://www.corsair.com/cpu-cooling-kits/hydro-series-water-cooling-cpu-cooler/hydro-series-h80-high-performance-liquid-cpu-cooler.html


bookie56

ghorrocks September 19, 2012 06:15

The E5-2620 CPU scores 64 on the baseline CPU2006fp. It is also 6 cores, and CPUs with this many cores are generally pretty hopeless when running with 6 processes. So save some money and go for a faster CPU with fewer cores. When the computer shop tells you to buy a CPU with more cores for speed you should ignore them; they don't know what is good for CFX.

I would go for the E5-2643 or some other 4 core processor. It is probably cheaper and it will run faster (79 on the same benchmark).

bookie56 September 23, 2012 03:21

Hi ghorrocks!

I was actually thinking of the processor you named, but it is more expensive and not easy to get here....

I will have another look at that and let my customer know......

I was a bit worried about the 6core and you have confirmed my suspicions....

Thanks for your time!

bookie56

Big Len September 27, 2012 11:24

Hi guys,

Just thought I'd register to post, as this is a frustrating topic for many. At the minute, the best setup for CFX (not Fluent, not other CFD, not Mechanical) is two Xeon E5-2643 processors. I have gathered this information from personal research, ANSYS HPC seminars and also some industrial contacts. Here is why:



  • CFX is coupled; Fluent etc. (for the most part) is segregated. This method of solving the equations in unison is incredibly memory bandwidth intensive. Therefore memory bandwidth and frequency are important, and the E5 series cannot be touched on this in dual processor arrays.
  • The SPECfp benchmarks do not capture this. I have not looked in detail, but I suspect the 4 CFD problems used are segregated-type solutions.
  • Higher core-count CPUs (6 and 8 core from the E5 series) do not get any extra bandwidth per processor. This is also confirmed by ANSYS themselves: a 4 core processor will show higher per-core performance than a 6 core processor from the same family.
  • Did I mention they are less than half the price of the 'higher end' models :D So if you need more power, buy two machines; a gigabit interconnect will do fine for 2 machines. Quad socket motherboards are not yet refined enough for this purpose.


I'm pretty sure that if you are buying a single socket machine there are other, better processors; but if you only need a single socket machine, then you don't need the level of high performance where such in-depth processor selection matters!


Cheers,

Len

evcelica September 27, 2012 13:29

Thanks for the info Len, It is much appreciated.

I agree that the extra cores past 4 are somewhat worthless for CFX. Just for the record, I have a 3930K @ 4.4GHz with 2133MHz RAM, and during benchmarking of large models I saw no improvement at all going from 4 to 6 cores; it is obviously memory bandwidth limited. (I did see an improvement in Mechanical though.)

One question, why would the 4 core processor complete the job before a 6 core or 8 core, IF they were all running 4 cores AND IF they both had the same frequency? Shouldn't they be the same then?

I realize you get much better value going with the 4 core and a high frequency, and the other cores of a 6 or 8 core would be wasted if you only ran 4. But I don't understand the statement "a 4 core processor will complete a job before a 6 core processor from the same family", unless on a dual socket machine it does not split the processes evenly between the two processors, and using two 4 core processors forces it to do so?

Something else to think about if it applies to you:
The only reason I got a 6 core is because when using all 4 cores of a 4 core processor the computer is pretty much worthless for anything else. With the 6 core I can run on 4 and still actually use my computer for other stuff if I want. When I had a 4 core, I only ran on 3 for this very reason.

Big Len September 27, 2012 15:00

Misunderstood the question there for a minute

Um I need to look again at the info I have on my work computer and I'll post up tomorrow

ghorrocks September 27, 2012 18:32

Quote:

The SPEC fp benchmarks do not capture this
I never said it did. The CPU2006fp benchmark is a good proxy for single processor performance. You can extract an accurate benchmark for multi-processor performance from a combination of the CPU2006fp and CPU2006fp_rate benchmarks - but this is a bit more complex and I did not explain that in a simple forum posting.

Len - having said that, I agree with all your points. Another comment: two separate machines, each with a single CPU, are often faster than a single machine with the same two CPUs. Recent motherboards are much improved here, but the general trend is still there.

Which brings up the point of motherboard quality - a good motherboard with high bandwidth is important for multi-processor use. A few years back I had a machine which turned out to have a poor motherboard, and when it was swapped for a machine with the same CPU but a better motherboard, parallel operation ran twice as fast. I think motherboards are more reliable nowadays; most quality motherboards which support the top CPUs are pretty good.

Quote:

The only reason I got a 6 core is because when using all 4 cores of a 4 core processor the computer is pretty much worthless for anything else. With the 6 core I can run on 4 and still actually use my computer for other stuff if I want. When I had a 4 core, I only ran on 3 for this very reason.
Doing stuff on the remaining cores will still be pretty painfully slow, regardless of whether you have 0, 1 or 2 cores free on a 6 core machine. It is still best to avoid the 6 core machines, even if you want to do other stuff at the same time.

Big Len September 28, 2012 03:52

OK, first up, I did make an error in the original quote :o The information was based on per-core performance. Basically it was saying that if you turned off 2 cores of a six core machine, then for the same number of job partitions you will decrease the solution time. This is how most data is presented regarding CFX, as it is obviously the licensing costs that dominate. I would imagine this effect may also become more pronounced as you compare chips that are actually 4 core vs 6 core (not to mention that the only E5 6 core with the CFX-life-giving 8GT/s system bus costs 75% more).

I look at it this way (using dell prices)

For $6,400 I can have 16 cores at 2.9GHz with a total system bus of 16GT/s

or

For $6,600 I can have 16 cores at 3.3GHz with a total system bus of 32GT/s

One of these systems will blow the other out of the water ...
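Len's two quotes can be framed as bus bandwidth per dollar (the prices and GT/s totals are from the post above; the per-dollar metric is just one illustrative way to compare them):

```python
# Framing Len's two hypothetical Dell configs as bus bandwidth per dollar.
# Prices and GT/s totals are quoted from the post; the metric is illustrative.
configs = [
    ("16 cores @ 2.9GHz, 16GT/s total", 6400, 16),
    ("16 cores @ 3.3GHz, 32GT/s total", 6600, 32),
]
for name, price_usd, bus_gt_s in configs:
    print(name, "->", round(bus_gt_s / price_usd * 1000, 2), "GT/s per $1000")
```

The second configuration delivers roughly twice the bus bandwidth per dollar (about 4.85 vs 2.5 GT/s per $1000), which is the point Len is making.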

Big Len September 28, 2012 04:08

Quote:

Originally Posted by ghorrocks (Post 383979)
I never said it did. The CPU2006fp benchmark is a good proxy for single processor performance. You can extract an accurate benchmark for multi-processor performance from a combination of the CPU2006fp and CPU2006fp_rate benchmarks - but this is a bit more complex and I did not explain that in a simple forum posting.

Hi ghorrocks, I did not mean to sound confrontational in my post - I was merely being terse so as not to have my point lost in a sea of words.

ghorrocks September 28, 2012 07:30

No offense taken. It's all good.

It is important for opinions to be expressed clearly, and if something is wrong then say so. You have obviously done some work and research in this area and your opinion is a good contribution to the forum.

shreyasr October 9, 2012 05:02

Single/Dual Socket processors
 
Hi everybody!

This is a great discussion.
I had a few related questions..
Hope it's okay that I post them on this thread :

1. Why is a dual socket/processor array better than a single socket processor, with the same number of cores ?

2. Lets say you have a dual socket Xeon E5 processor, with a speed of 2.6 GHz and then a single socket E5, with a speed of 3.6GHz; both with the same DDR3, 1600MHz RAM. Which would you prefer, and which would be faster for CFX ?

3. How far is Cache memory important in CFX simulations ?

4. How exactly does Intel's Turbo boost help with CFX ? Does it mean that the processors will run at the max turbo-boosted speed throughout the run ?

Looking forward to your responses !

-shreyas

evcelica October 9, 2012 07:51

Dual socket would be better since each socket has its own memory channels, so you would have 8 memory channels instead of "only" 4 with a single socket. Memory bandwidth seems to be our bottleneck in CFX, so I would go for the dual socket.

I don't think cache would matter much in larger problems with high RAM usage. I don't know for sure though.

Intel's "turbo boost" just increases the CPU clock speed under load, depending on how many cores are being used and whether the temperature/power headroom allows it. It would probably be at max turbo boost with one core running, with the clock decreasing as more cores are used.
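The memory-channel argument above can be made concrete with peak theoretical DDR3 numbers (8 bytes per transfer per 64-bit channel; sustained bandwidth in practice is noticeably lower than this peak):

```python
# Peak theoretical DDR3 bandwidth: (million transfers/s) x 8 bytes, per channel.
# Real sustained bandwidth is noticeably lower than this theoretical peak.
def peak_bw_gb_s(mt_per_s, channels):
    return mt_per_s * 8 * channels / 1000  # MT/s -> GB/s (decimal)

print(peak_bw_gb_s(1600, 4))  # single socket, quad channel: 51.2 GB/s
print(peak_bw_gb_s(1600, 8))  # dual socket, 8 channels: 102.4 GB/s
```

Doubling the sockets doubles the channels and hence the peak bandwidth, which is why the dual socket wins for a bandwidth-bound solver like CFX.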

bookie56 October 9, 2012 15:12

Hi guys!
I am glad I started this thread....it has been a fountain of information regarding different aspects of running CFX...

Thank you to all that have posted here!!

Much appreciated!

bookie56

evcelica October 26, 2012 19:54

I posted this on the Hardware forum but I thought I would share here too:

Just thought I'd share the somewhat unexpected results of my 2 node "cluster". I'm using two identical 6-core i7-3930K computers overclocked to 4.4 GHz, each with 32GB of 2133MHz ram. They are connected using Intel gigabit and I'm using platform-MPI running ANSYS CFX v14.

Benchmark case has ~4 million nodes - steady state thermal with multiple domains.

When comparing:
1 computer running 4 cores to
2 computers running 4 cores each

The speedup works out to 2.22 times :)!
So much for linear scaling! Has anyone else seen this? It just seems a little odd to me, though I'm definitely happy about it!
This is something to consider if anyone has been thinking about adding a second node.

I'd also be happy to do a little benchmarking against some dual socket XEON-E5 machines to compare the old 1 vs. 2 node question. I can set my CPU and memory frequency to whatever to make the test more even.

Thinking about this more, perhaps a cluster of single socket nodes would scale better than dual sockets, since you would have twice as many interconnects, whereas dual sockets would be sharing one lane? Perhaps the E5-2643 is not the best choice then; instead maybe the i7-3820 would take its place, as it is almost $600 cheaper. Even my 6 cores are several hundred cheaper than the E5-2643.

EDIT:
After running it a few more times I realized that during my single node simulation I accidentally had the CPU downclocked to 3.8GHz instead of 4.4. So the 15.6% overclock gave me the extra 11% speed per node. Running it again with the same 4.4GHz clock speed on all nodes I got 99.5% efficient scaling. Sorry for the misinformation.
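The scaling figures quoted here follow from the standard speedup and efficiency definitions; a small sketch (the wall-clock times below are hypothetical, chosen only to reproduce the ~99.5% figure):

```python
# Standard parallel speedup / efficiency definitions behind the
# "99.5% efficient scaling" figure. Wall-clock times are hypothetical.
def speedup(t_one_node, t_n_nodes):
    return t_one_node / t_n_nodes

def efficiency(t_one_node, t_n_nodes, n_nodes):
    return speedup(t_one_node, t_n_nodes) / n_nodes

t1, t2 = 1000.0, 502.5   # hypothetical times, same clock on all nodes
print(round(efficiency(t1, t2, 2), 3))  # 0.995 -> 99.5% efficiency
```

Any efficiency above 1.0 (super-linear scaling) is a red flag that the baseline run was handicapped, which is exactly what happened here with the downclocked node.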

shreyasr October 27, 2012 02:18

Hi Eric,

That's an interesting observation. However, wouldn't one expect a ~2X performance increase in such a mini cluster setup, assuming both i7's have the same configuration?
Why do you find it odd?

I'd be very interested to know the benchmarking results with the Xeon E5's, especially since I am in the process of figuring out the optimum configuration to upgrade to in my office, with respect to CFX.

So far, in my benchmarking tests with our current computers :
Case :
Steady, Incompressible, subsonic flow
Geometry : complete hydraulic passages of a centrifugal pump, Frozen rotor config. ~2 Million cells.

I've found a 2X speedup with a dual socket (3.0GHz quad core), comparing with a single socket quad core (2.4GHz processor). They both have exactly the same RAM, ~533MHz, DDR2.

I've also found that a Westmere (Quad core 2.4GHz, dual socket config), with 1.3GHz DDR3 RAM completed the same simulation 3.5 hours earlier (46% speedup) , compared to my existing dual socket 3.0GHz quad core.

Based on the above observations, I'd be a little sceptical about parallel single socket configurations being able to beat the performance of dual socket configurations. Extending that further, I also think, when it comes to interconnects, it's probably the speed of the interconnects (Gig-eth/InfiniBand) which would make a noticeable difference rather than the number of interconnects. That's also what ANSYS swear by, though I understand it really depends on the application and the number of computers/cores being connected together.

Please feel free to correct me if I am wrong.

Came across this interesting document which is somewhat relevant (though it's old) : http://www.hpcadvisorycouncil.com/pdf/CFX_Analysis.pdf

Once again, looking forward to your benchmark study with the Xeon E5 2643's.

evcelica October 27, 2012 05:43

Thanks for sharing your benchmarking data.

I just found it odd since it's better than 2x faster; I was thinking "perfect" scaling would be 100% faster only, not 122%. Looking through some of the Fluent benchmarks I do see some rare cases where they get better than 100% scaling going to two nodes, but not often.

I was thinking for smaller clusters a few single socket i7s would have a higher performance/price ratio than dual socket XEONs.

If scaling to a large cluster, I really know nothing about clusters or interconnects or how they work, so maybe I shouldn't have said anything. I was just thinking each cpu would have its own interconnect instead of sharing one, I'm probably wrong though.

shreyasr October 27, 2012 07:15

Now that you've put it that way, it does seem strange and the difference seems high enough to warrant attention(?).
What do you think is contributing to the extra 22%?

If price is brought into the picture, from what I've read so far, I'd be inclined to agree with you regarding the higher performance/price of a mini cluster of 3rd generation i7's.
But, in such a scenario, I'd want a very reliable but relatively simple way of managing/administering it. I would really want it to be open source/free.

I would like to know :
1. Do you use cluster applications/job schedulers to manage this mini cluster ?
If yes, which one ?
If no , how are you distributing your simulation? Is it via specifying the nodes in the cfx config file ?

2. Which OS are you using on both these computers?

ghorrocks October 27, 2012 07:24

Super-linear speedup (ie parallel efficiency greater than 100%) generally means the benchmark did not run properly on the single node case. Usually this is because it was too large to fit fully into memory, so it had to swap/page some out to disk. The parallel partitions are smaller and do not require paging - so they run faster than the expected acceleration.

In your case, though, you have 32GB RAM and that should be big enough to fit this model. Memory fragmentation and other processes could be the reason.

shreyasr October 27, 2012 07:52

Hi Glenn,
If that were the case, does it also mean that Erik would probably get different speedup results on re-running the single node job ?

evcelica October 27, 2012 09:31

Quote:

Originally Posted by shreyasr (Post 388805)
I would like to know :
1. Do you use cluster applications/job schedulers to manage this mini cluster ?
If yes, which one ?
If no , how are you distributing your simulation? Is it via specifying the nodes in the cfx config file ?

2. Which OS are you using on both these computers?

I'm distributing it via specifying the nodes in the CFX config file.
I'm using Windows 7 x64 Ultimate.

I'm going to run the simulation again on each node separately, and make sure they are each the same speed.

ghorrocks October 28, 2012 06:01

No, I am not suggesting you are getting different speed for different nodes.

I am saying that because the job is large it has the potential to run slower than optimal for many reasons - memory being one, but there are others (eg disk IO). So when you use this slower-than-optimal run as a baseline for speedup factors, you get factors greater than 100%.

The benchmark simulation I use is "Benchmark.def", which can be found in the examples directory. It is quite a small simulation so it will definitely run properly on any reasonable CFD computer. But because it is small, it is not good at testing more than about 4 processes.

As I am sure you are aware, there is no such thing as a universal benchmark.

evcelica October 31, 2012 10:57

EDIT:
After running it a few more times I realized during my single node simulation I accidentally had the CPU downclocked to 3.8GHz instead of 4.4. So the 15.6% Overclock gave me the extra 11% speed per node. Running it again with the same 4.4GHz clock speed on all nodes I got 99.5% efficient scaling. Sorry for the misinformation.

ghorrocks October 31, 2012 19:26

Doing good benchmarks is not easy. There are lots of gotchas.

shreyasr January 9, 2013 02:12

Hi Erik,
I was wondering if you have had time to get further benchmarking tests done with the dual socket E5's?

evcelica January 9, 2013 10:49

Quote:

Originally Posted by shreyasr (Post 400868)
Hi Erik,
I was wondering if you have had time to get further benchmarking tests done with the dual socket E5's?

No problem!
Just to be clear, I have two i7 machines, not dual socket XEONs. But I'd be happy to do some benchmarking with the i7's; just send me the CFX file and tell me how you would like it run.

I'm guessing I could also estimate a dual XEON E5 machine's speed pretty well by downclocking my processors to whatever speed the XEONs run at, and lowering my memory frequencies and timings to match server memory, which would be 1600MHz @ 11-11-11-28. I'm sure it won't be perfect, but it should be quite close.

I can PM you my email if you're interested.

