What Hardware for CFX
Hi folks!
OK! The last time I was here was to get help for a customer with OpenFOAM... and I came to the right place :D I still have the same customer, but these days he is tending to use ANSYS CFX more and more. I am completely in the dark when it comes to building a new computer for him that can cope with the kind of demands CFX makes. At the moment he has the following:
Chassis: Ace Midtower Game Edge 990
Motherboard: Asus P6X58D-E
Processor: Intel Core i7 950 3.06GHz 8MB s-1366
Memory: 12GB Corsair XMS3 Intel i7 PC12800 - 1600MHz
PSU: Corsair GS800W
Graphics: Asus Radeon HD6870 1GB memory
Main hard drive: Intel SSD 510 SATA/600
Storage drive: WD Green 1TB SATA/300 32MB
He mentioned wanting to buy CFX to run on an 8-core processor... Can someone give me an idea of what we are looking at to build the right computer? I built the above for him, but he didn't really put any demands on me for better performance. Looking at what these programs do, I am thinking the new computer should at least have ECC memory for reliability. Any help or ideas gratefully received :) bookie56 |
CFX has essentially the same demands for computing resources as Openfoam.
Most CFD cases need the fastest CPU you can buy, and about 2GB of RAM per core. It is often better to run two CPUs (or even two independent computers) rather than a single CPU with the same total number of cores. Hard drive speed does not matter much for CFD, so there is no need for an SSD. |
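The 2GB-per-core rule of thumb above is easy to turn into a quick sizing check. A minimal sketch (the 2GB/core figure comes from the post; the 8-core count is just an example, and `min_ram_gb` is a made-up helper name):

```python
def min_ram_gb(n_cores: int, gb_per_core: float = 2.0) -> float:
    """Rule of thumb from the post above: about 2GB of RAM per solver core."""
    return n_cores * gb_per_core

# For the 8-core machine the customer is considering:
print(min_ram_gb(8))  # 16.0
```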
Hi ghorrocks:)
Thanks for that... I have been looking at this board from Asus. What do you think? bookie56 |
You could save a lot of money by skipping the ECC; since CFX uses an iterative solver, the need for ECC is debatable.
|
I agree with Erik. Spend the money on the CPU, as that is the key part of the system for CFD. Then put in enough memory and hard drive, and go cheap on everything else.
But remember that the cost of hardware in CFD is a tiny fraction of the cost of a commercial license. So if you are on a commercial license you might as well buy a top-of-the-range machine, as the software is the real cost and you have to maximise the return on that investment. |
Hi guys!
Thanks for the info... Thing is, most of the dual-processor boards I have been looking at require ECC memory... But I thank you for your time. As far as I know my customer has a commercial license... bookie56 |
Quote:
SO TRUE! I wish my workplace would realize this, I run a $46K license on my $1,400 P.O.S. Dell!!! :rolleyes: |
Well, you have a good case to go to management and ask for a major computer upgrade. The results your $46k license returns are being compromised, and jobs take longer, just to save a few thousand bucks: a false economy. Slip in a few mentions of "return on investment" and other management buzzwords and you should be right :)
|
Hi again!
OK! My customer uses CFX 5... but the equipment is old and slow. What motherboard, processor, graphics card etc. are we looking at to run this baby? I have been reading something at the ANSYS site about i7 processors not being supported?... Any help will be gratefully received! bookie56 |
My laptop has a M 620 i7 processor and CFX runs fine on it.
|
Your problem size, element count and the physics are going to matter the most.
Check out DELL T7500 series workstations. |
Have a look at the spec.org website, especially the CPU2006 results page. For CFX single-processor performance the SPECfp2006 benchmark is the one to look at. For multi-processor performance you can get an estimate from SPECfp_rate2006, but you need to do a little number crunching.
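That number crunching can be sketched in a few lines of Python. This is only a rough heuristic, not an official SPEC methodology: divide the rate result by the number of copies run (usually one per core) to get a per-core figure, then scale by the number of solver processes. All numbers below are hypothetical; real results are on spec.org.

```python
def per_core_throughput(specfp_rate: float, copies: int) -> float:
    """Estimate per-core throughput from a SPECfp_rate2006 result
    that was run with `copies` concurrent copies (usually one per core)."""
    return specfp_rate / copies

def parallel_estimate(specfp_rate: float, copies: int, n_processes: int) -> float:
    """Rough relative estimate of multi-process CFX performance:
    per-core throughput times the number of solver processes,
    assuming near-linear scaling up to the measured copy count."""
    return per_core_throughput(specfp_rate, copies) * min(n_processes, copies)

# Hypothetical example: compare two CPUs for a 4-process CFX run.
cpu_many_cores = parallel_estimate(specfp_rate=230.0, copies=12, n_processes=4)
cpu_few_cores = parallel_estimate(specfp_rate=180.0, copies=8, n_processes=4)
print(cpu_many_cores, cpu_few_cores)  # higher is better
```

Note that in this made-up example the CPU with fewer, faster cores wins for a 4-process run, which is consistent with the advice later in this thread.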
In my experience these numbers are pretty good estimates of CFX performance. |
Hi guys!
Thanks for the info! bookie56 |
The E5-2620 CPU scores 64 on the baseline CPU2006 fp benchmark. It also has 6 cores, and CPUs with this many cores are generally pretty hopeless when running with 6 processes. So save some money and go for a faster CPU with fewer cores. When the computer shop tells you to buy a CPU with more cores for speed you should ignore them; they don't know what is good for CFX.
I would go for the E5-2643 or some other 4-core processor. It is probably cheaper and it will run faster (79 on the same benchmark). |
Hi ghorrocks!
I was actually thinking of the processor you named, but it is more expensive and not easy to get here... I will have another look at it and let my customer know... I was a bit worried about the 6-core and you have confirmed my suspicions... Thanks for your time! bookie56 |
Hi guys,
Just thought I'd register to post, as this is a frustrating topic for many. At the moment, the best setup for CFX (not Fluent, not other CFD codes, not Mechanical) is two Xeon E5-2643 processors. I have gathered this information from personal research, ANSYS HPC seminars and also some industrial contacts. Here is why:
I'm pretty sure that if you are buying a single-socket machine there are other, better processors, but if you only need a single-socket machine then you don't need the level of performance where such in-depth processor selection matters! Cheers, Len |
Thanks for the info, Len. It is much appreciated.
I agree that the extra cores past 4 are somewhat worthless for CFX. Just for the record, I have a 3930K @ 4.4GHz with 2133MHz RAM, and during benchmarking of large models I saw no improvement at all going from 4 to 6 cores; it is obviously memory-bandwidth limited. (I did see an improvement in Mechanical though.)

One question: why would the 4-core processor complete the job before a 6-core or 8-core, IF they were all running 4 cores AND IF they all had the same frequency? Shouldn't they be the same then? I realize you get much better value going with the 4-core at a high frequency, and the extra cores of a 6- or 8-core would be wasted if you only ran 4. But I don't understand your statement "a 4 core processor will complete a job before a 6 core processor from the same family" - unless on a dual-socket machine it does not split the processes evenly between the two processors, and using two 4-core processors forces it to do so?

Something else to think about if it applies to you: the only reason I got a 6-core is that when using all 4 cores of a 4-core processor the computer is pretty much worthless for anything else. With the 6-core I can run on 4 cores and still actually use my computer for other stuff if I want. When I had a 4-core, I only ran on 3 for this very reason. |
Misunderstood the question there for a minute
Um, I need to look again at the info I have on my work computer and I'll post an update tomorrow |
Quote:
Len - having said that, I agree with all your points. Another comment: two separate machines, each with a single CPU, are often faster than a single machine with the same two CPUs in it. Recent motherboards are much improved here, but the general trend is still there. Which brings up the point of motherboard quality: a good motherboard with high bandwidth is important for multi-processor use. A few years back I had a machine which turned out to have a poor motherboard, and when swapped with a machine with the same CPU but a better motherboard, parallel operation ran twice as fast. I think motherboards are more reliable nowadays; most quality motherboards which support the top CPUs are pretty good.
|
OK, first up, I did make an error with the original quote :o The information was based on per-core performance. Basically it was saying that if you turn off 2 cores of a six-core machine, then with the same number of job partitions you will decrease the solution time. This is how most data is presented regarding CFX, as it is obviously the licensing costs that dominate. I would imagine this effect may also become more pronounced as you compare chips that are actually 4-core vs 6-core. (Not to mention that the only 6-core E5 with the CFX-life-giving 8GT/s system bus costs 75% more.)
I look at it this way (using Dell prices): for $6,400 I can have 16 cores at 2.9GHz with a total system bus of 16GT/s, or for $6,600 I can have 16 cores at 3.3GHz with a total system bus of 32GT/s. One of these systems will blow the other out of the water ... |
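That comparison can be put on a rough numerical footing. The figure of merit below is a made-up heuristic (there is no standard formula here): it just multiplies cores, clock and system-bus bandwidth and divides by price, since the thread's consensus is that CFX is bandwidth- and clock-limited. The dollar figures come from the post above.

```python
def merit(cores: int, ghz: float, gt_per_s: float, price: float) -> float:
    """Made-up price/performance heuristic for a bandwidth-limited CFX box:
    cores x clock x total system-bus bandwidth, per dollar."""
    return (cores * ghz * gt_per_s) / price

dual_2620_class = merit(16, 2.9, 16.0, 6400.0)  # the $6,400 build from the post
dual_2643_class = merit(16, 3.3, 32.0, 6600.0)  # the $6,600 build from the post
print(dual_2643_class > dual_2620_class)  # True
```

Under this (admittedly crude) metric the $200 premium buys roughly double the bandwidth at a higher clock, which is the "blow the other out of the water" point.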
No offense taken. It's all good.
It is important for opinions to be expressed clearly, and if something is wrong then say so. You have obviously done some work and research in this area and your opinion is a good contribution to the forum. |
Single/Dual Socket processors
Hi everybody!
This is a great discussion. I had a few related questions... hope it's okay that I post them on this thread:

1. Why is a dual-socket/processor setup better than a single-socket processor with the same number of cores?
2. Let's say you have a dual-socket Xeon E5 at 2.6GHz and a single-socket E5 at 3.6GHz, both with the same DDR3 1600MHz RAM. Which would you prefer, and which would be faster for CFX?
3. How important is cache memory in CFX simulations?
4. How exactly does Intel's Turbo Boost help with CFX? Does it mean that the processors will run at the max turbo-boosted speed throughout the run?

Looking forward to your responses! -shreyas |
Dual socket would be better since each socket has its own memory channels, so you would have 8 memory channels instead of "only" 4 with a single socket. Memory bandwidth seems to be our bottleneck in CFX, so I would go for the dual socket.
I don't think cache matters much in larger problems with high RAM usage, though I don't know for sure. Intel's Turbo Boost just increases the CPU clock speed under load, depending on how many cores are in use and whether the temperature/power headroom allows it. It would probably run at max turbo boost with one core active, with decreasing clocks as more cores are used. |
Hi guys!
I am glad I started this thread....it has been a fountain of information regarding different aspects of running CFX... Thank you to all that have posted here!! Much appreciated! bookie56 |
I posted this on the Hardware forum but I thought I would share here too:
Just thought I'd share the somewhat unexpected results of my 2-node "cluster". I'm using two identical 6-core i7-3930K computers overclocked to 4.4GHz, each with 32GB of 2133MHz RAM. They are connected using Intel gigabit Ethernet and I'm using Platform MPI running ANSYS CFX v14. The benchmark case has ~4 million nodes: steady-state thermal with multiple domains.

When comparing 1 computer running 4 cores to 2 computers running 4 cores each, my speedup shows to be 2.22 times :)! So much for linear scaling. Has anyone else seen this? It just seems a little odd to me, though I'm definitely happy about it! This is something to consider if anyone has been thinking about adding a second node.

I'd also be happy to do a little benchmarking against some dual-socket Xeon E5 machines to compare the old 1 vs. 2 node question. I can set my CPU and memory frequency to whatever is needed to make the test more even. Thinking about this more, perhaps a cluster of single-socket nodes would scale better than dual sockets, since you would have twice as many interconnects, where dual sockets would be sharing one lane? Perhaps the E5-2643 is not the best choice then; maybe the i7-3820 would take its place, as it is almost $600 cheaper. Even my 6-cores are several hundred cheaper than the E5-2643.

EDIT: After running it a few more times I realized that during my single-node simulation I accidentally had the CPU downclocked to 3.8GHz instead of 4.4. So the 15.6% overclock gave me the extra 11% speed per node. Running it again with the same 4.4GHz clock speed on all nodes I got 99.5% efficient scaling. Sorry for the misinformation. |
Hi Erik,
That's an interesting observation. However, wouldn't one expect a ~2X performance increase in such a mini-cluster setup, assuming both i7's have the same configuration? Why do you find it odd?

I'd be very interested to know the benchmarking results with the Xeon E5's, especially since I am in the process of figuring out the optimum configuration to upgrade to in my office with respect to CFX. So far, in my benchmarking tests with our current computers - Case: steady, incompressible, subsonic flow; Geometry: complete hydraulic passages of a centrifugal pump, frozen-rotor configuration, ~2 million cells - I've found a 2X speedup with a dual socket (3.0GHz quad core) compared with a single-socket quad core (2.4GHz processor). They both have exactly the same RAM, ~533MHz DDR2. I've also found that a Westmere (quad core 2.4GHz, dual-socket config) with 1.3GHz DDR3 RAM completed the same simulation 3.5 hours earlier (a 46% speedup) compared to my existing dual-socket 3.0GHz quad core.

Based on the above observations, I'd be a little sceptical about parallel single-socket configurations being able to beat the performance of dual-socket configurations. Extending that further, when it comes to interconnects, I also think it's probably the speed of the interconnect (gigabit Ethernet/InfiniBand) which would make a noticeable difference, rather than the number of interconnects. That's also what ANSYS swear by, though I understand it really depends on the application and the number of computers/cores being connected together. Please feel free to correct me if I am wrong. Came across this interesting document which is somewhat relevant (though it's old): http://www.hpcadvisorycouncil.com/pdf/CFX_Analysis.pdf

Once again, looking forward to your benchmark study with the Xeon E5-2643's. |
Thanks for sharing your benchmarking data.
I just found it odd since it's better than 2x faster; I was thinking "perfect" scaling would be only 100% faster, not 122%. Looking through some of the Fluent benchmarks I do see some rare cases where they get better than 100% scaling going to two nodes, but not often. I was thinking that for smaller clusters a few single-socket i7's would have a higher performance/price ratio than dual-socket Xeons. As for scaling to a large cluster, I really know nothing about clusters or interconnects or how they work, so maybe I shouldn't have said anything. I was just thinking each CPU would have its own interconnect instead of sharing one; I'm probably wrong though. |
Now that you've put it that way, it does seem strange, and the difference seems high enough to warrant attention.
What do you think is contributing to the extra 22%? If price is brought into the picture, from what I've read so far I'd be inclined to agree with you regarding the higher performance/price of a mini-cluster of 3rd-generation i7's. But in such a scenario I'm concerned about having a reliable but relatively simple way of managing/administering it, and I would really want it to be open source/free. I would like to know:

1. Do you use cluster applications/job schedulers to manage this mini-cluster? If yes, which one? If no, how are you distributing your simulation? Is it via specifying the nodes in the CFX config file?
2. Which OS are you using on both these computers? |
Super-linear speedup (i.e. scaling efficiency greater than 1) generally means the benchmark did not run properly in the single-node case. Usually this is because the job is too large to fit fully into memory, so it had to swap/page some of it out to disk. The parallel partitions are smaller and do not require paging, so they run faster than the expected acceleration.
But in your case you have 32GB RAM and that should be big enough to fit this model. But memory fragmentation and other processes could be the reason. |
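The arithmetic behind this diagnosis is simple: divide the measured speedup by the ideal speedup to get a scaling efficiency, and treat anything above 1.0 as a sign the baseline run was compromised. A sketch (the 2.22x figure comes from the posts above; the 100-unit baseline time is arbitrary, and `scaling_efficiency` is a made-up helper name):

```python
def scaling_efficiency(t_serial: float, t_parallel: float, n_factor: float) -> float:
    """Scaling efficiency: measured speedup divided by the ideal speedup.
    n_factor is the increase in resources (e.g. 2.0 when going 1 -> 2 nodes).
    Values above 1.0 are super-linear, which usually means the baseline
    run was slower than it should have been (paging, downclocking, etc.)."""
    speedup = t_serial / t_parallel
    return speedup / n_factor

# The 2.22x result from the thread, going from 1 node to 2 nodes:
eff = scaling_efficiency(t_serial=100.0, t_parallel=100.0 / 2.22, n_factor=2.0)
print(round(eff, 2))  # 1.11 -> 11% "better than perfect", a red flag
```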
Hi Glenn,
If that were the case, does it also mean that Erik would probably get different speedup results on re-running the single node job ? |
Quote:
I'm using Windows 7 x64 Ultimate. I'm going to run the simulation again on each node separately and make sure they are each the same speed. |
No, I am not suggesting you are getting different speeds on different nodes.
I am saying that because the job is large it has the potential to run slower than optimal for many reasons - memory being one, but there are others (e.g. disk IO). So when you use this slower-than-optimal run as a baseline for speedup factors, you get factors greater than 100%. The benchmark simulation I use is "Benchmark.def", which can be found in the examples directory. It is quite a small simulation, so it will definitely run properly on any reasonable CFD computer. But because it is small, it is not good at testing more than about 4 processes. As I am sure you are aware, there is no such thing as a universal benchmark. |
EDIT:
After running it a few more times I realized during my single node simulation I accidentally had the CPU downclocked to 3.8GHz instead of 4.4. So the 15.6% Overclock gave me the extra 11% speed per node. Running it again with the same 4.4GHz clock speed on all nodes I got 99.5% efficient scaling. Sorry for the misinformation. |
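The numbers in that EDIT can be checked directly. A sketch (all figures come from the posts above; the variable names are my own):

```python
clock_ratio = 4.4 / 3.8              # ~1.158: the accidental downclock on the baseline node
measured_speedup = 2.22              # 2 nodes vs. the downclocked 1-node baseline
true_speedup = 2 * 0.995             # the corrected re-run: 99.5% efficient 2-node scaling
per_node_gain = measured_speedup / true_speedup  # speed gained from the higher clock alone
print(round(per_node_gain, 3))       # 1.116, i.e. the "extra 11%" per node
```

Note that a ~15.8% clock increase bought only ~11% more speed, which is consistent with the earlier observation that CFX on these chips is memory-bandwidth limited rather than purely clock limited.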
Doing good benchmarks is not easy. There are lots of gotchas.
|
Hi Erik,
I was wondering if you had the time to get further benchmarking tests done with the dual-socket E5's? |
Quote:
Just to be clear, I have two i7 machines, not dual-socket Xeons. But I'd be happy to do some benchmarking with the i7's; just send me the CFX file and tell me how you would like it run. I'm guessing I could also estimate a dual Xeon E5 machine's speed pretty well by downclocking my processors to whatever speed the Xeons run at, and lowering my memory frequencies and timings to match server memory, which would be 1600MHz @ 11-11-11-28. I'm sure it won't be perfect, but it should be quite close. I can PM you my email if you're interested. |