Greetings to all!
In this post I'll try to answer the questions posed by acasas and by Chris Lee:
@acasas:
Quote:
Originally Posted by acasas
But yet, can you believe, the company kept claiming I was wrong? It is a very important server producer and workstation company from USA, but I won't tell their name. They claim that for their applications, big data storage, labs, etc, this is not an important issue. Are they right??
|
For the average range of applications, they are somewhat correct: the performance difference is in the range of 1-10%, depending on the application. The problem is that CFD requires a very optimized (or at least very good) system, not an
average system. In that case, memory access is critical, and 2 vs. 4 channels can mean something in the range of 10 to 30% in performance, depending on the case.
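As a rough illustration of why channel count matters, theoretical peak bandwidth scales linearly with the number of channels. This sketch assumes DDR4-2133 and the standard 64-bit (8-byte) channel width; the 10-30% application-level figure above is an estimate from experience, not something this calculation produces:

```python
# Rough sketch: theoretical peak memory bandwidth per socket.
# DDR4-2133 transfer rate and the 8-byte channel width are standard values;
# real sustained bandwidth is lower, but the 2x scaling between 2 and 4
# channels holds at the theoretical level.

def peak_bandwidth_gbs(channels, mt_per_s=2133, bus_bytes=8):
    """Theoretical peak bandwidth in GB/s: transfers/s * bytes/transfer * channels."""
    return channels * mt_per_s * 1e6 * bus_bytes / 1e9

print(peak_bandwidth_gbs(2))  # ~34.1 GB/s with 2 channels
print(peak_bandwidth_gbs(4))  # ~68.3 GB/s with 4 channels
```

How much of that doubling a solver actually sees depends on how memory-bound it is, which is why the realistic range quoted above is 10-30% rather than 100%.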
Quote:
Originally Posted by acasas
Any way, I just wanted to ask one 2 more things. This motherboard have 16 memory modules. Do I need to fill ALL of them (16) OR 8 (4 per each processor) will be enough? Of course I´ll populate them as in the motherboard specification, and yes, they are ECC DDR4.
|
Each processor/socket has 4 memory channels, which implies that a minimum of 4 modules/slots per socket should be occupied; above that, populate in multiples of 4. So on your dual-socket board, 8 modules (4 per CPU) are enough to use all channels.
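A minimal sketch of that population rule (4 channels per socket is correct for these Xeons; the 8-slots-per-socket layout is an assumption based on your 16-slot board):

```python
# Valid total DIMM counts for a dual-socket board with 4 memory channels
# per socket: each socket is populated in multiples of 4, symmetrically.

def valid_dimm_counts(sockets=2, channels_per_socket=4, slots_per_socket=8):
    """All totals that keep every socket populated in full channel multiples."""
    per_socket = range(channels_per_socket, slots_per_socket + 1, channels_per_socket)
    return [sockets * n for n in per_socket]

print(valid_dimm_counts())  # [8, 16]: 8 modules already drive all 8 channels
```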
Quote:
Originally Posted by acasas
The 2nd question is related with discs and storage disposal. I do have 3 SSD 250 GB each. One for the system and software and the other 2 in RAID 0 mode for the working and scratching folders. Is it a good configuration for best performance?
|
Seems OK. It depends on how frequently you need to write data to disk and how big your cases are for each time/iteration snapshot. It might make more sense to have smaller SSDs in RAID 0 and a 2-4 TB hard drive for off-loading data after the write to SSD is complete. But again, it strongly depends on your workflow and on file sizes and write frequency.
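To put numbers on "it depends", here is a back-of-the-envelope sketch; the snapshot size and throughput figures are placeholder assumptions, not measurements of your drives:

```python
# Estimate how long one time/iteration snapshot takes to write, to judge
# whether write time is negligible next to the compute time between snapshots.
# All throughput numbers below are assumed placeholder values.

def write_time_s(snapshot_gb, throughput_mb_s):
    """Seconds to write one snapshot at a given sustained throughput."""
    return snapshot_gb * 1024 / throughput_mb_s

print(write_time_s(10, 500))  # 10 GB snapshot, single SATA SSD: ~20.5 s
print(write_time_s(10, 950))  # same snapshot, 2x SSD in RAID 0: ~10.8 s
```

If the solver spends minutes computing between snapshots, halving 20 s of write time buys you little; if you write large snapshots every few seconds, the RAID 0 scratch area earns its keep.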
-----------------------
@Chris Lee:
Quote:
Originally Posted by Chris Lee
The system I am shopping has options to go up to 12 cores. There are, for example, options on the configuration which include
E5-1660 V2 , and
i7-4960X, and
E5-2680 V2 .
|
We might not yet have the technology to make a car compact itself into a briefcase, but at least computers are getting there.
Perhaps the cartoons "The Jetsons" were actually referring to teleworking...
Quote:
Originally Posted by Chris Lee
For RAM, being a laptop, this system uses DDR3, with the best option being 32 GB (4 x 8G) 204-pin "quad channel" memory.
|
Mmm... if a single case might take up to 30 GB, you might eventually see the need to go even further, to 64 GB of RAM... but I guess that if you ever need that, you will use a cluster or a server to do the mesh and calculations.
Quote:
Originally Posted by Chris Lee
Now I don't understand well the architecture of how the RAM channels and CPU communicate, but I think you need to have at least 4 DIMM slots filled to get 4-channel functionality out of the RAM.
|
Yes.
Quote:
Originally Posted by Chris Lee
The question is, am I not spending my $ efficiently if I go a number of cores greater than the number of channels in the memory? (If so, why would anyone ever go with more than 4 cores?)
|
I did some lengthy mathematics on this topic yesterday:
http://www.cfd-online.com/Forums/har...tml#post523825 - post #10
The essential concept is that more cores will each be running slower, but each will also be responsible for crunching a smaller share of the RAM. Then you have to take into account the total available memory bandwidth. Beyond that, it starts to depend on the complexity of your case... which is to say that in some unusual situations,
oversubscribing a 12-core machine with 18-36 processes might deliver results slightly faster, because of an alignment in memory accesses.
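The cores-vs-bandwidth trade-off can be sketched with a crude roofline-style model. All the numbers here are illustrative assumptions; real CFD scaling also depends on cache behaviour, interconnect, and the solver itself:

```python
# Crude model: per-iteration time is limited either by compute (which scales
# with core count) or by memory traffic (capped by the total bandwidth that
# all cores share). All parameter values are illustrative assumptions.

def iteration_time(cores, work_gflop=100.0, gflops_per_core=10.0,
                   traffic_gb=50.0, bandwidth_gbs=60.0):
    compute_s = work_gflop / (cores * gflops_per_core)
    memory_s = traffic_gb / bandwidth_gbs  # shared: does not improve with cores
    return max(compute_s, memory_s)

# With these numbers the run goes bandwidth-bound at high core counts,
# so the last cores added contribute almost nothing:
print([round(iteration_time(c), 3) for c in (1, 2, 4, 8, 12)])
```

This is why "more cores" and "more performance" stop being synonyms once the memory bus saturates, and why the total bandwidth figure matters as much as the core count.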
Quote:
Originally Posted by Chris Lee
I'm guessing that as long as you have 4 DIMM slots filled (for any of these single physical CPUs) there is no bottleneck being made as in the example above with two physical CPUs. Is that right?
|
The idea is that each socket should use 4 DIMMs for itself. In your case, you only have 1 socket.
Quote:
Originally Posted by Chris Lee
I was going to get a 10 core system (or 12 core, if I can find the budget for it) but I want to make sure I'm not throwing money away if I get more than 4 cores.
|
As I mentioned a bit above about the mathematics I did yesterday, it really depends. For example, if you search online for:
Code:
OpenFOAM xeon benchmark
I guess it's quicker to give the link I'm thinking of:
http://www.anandtech.com/show/8423/i...l-ep-cores-/19
There you might find that a system with 12 cores @ 2.5 GHz that costs roughly 1000 USD gives better bang for your buck than 8 cores @ 3.9 GHz that cost 2000 USD (I'm not sure of the exact values). The 8-core system gives the optimum combination of RAM bandwidth and core efficiency, but the 12-core system costs a lot less and consumes a lot less electrical power, while delivering about 76% of the 8-core system's compute performance.
In such a case, you might want to weigh in an additional and very important factor: how fast do you want your meshes to be generated, if they can only be generated in serial mode, not in parallel?
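Using the rough numbers above (which, as I said, are approximate), a quick price-performance comparison might look like this:

```python
# Bang-for-buck sketch using the approximate figures from the text:
# a 12-core @ 2.5 GHz system (~1000 USD) vs an 8-core @ 3.9 GHz system
# (~2000 USD). "rel_perf" is just the relative throughput quoted above.

systems = {
    "12c @ 2.5 GHz": {"price_usd": 1000, "rel_perf": 0.76},
    "8c @ 3.9 GHz":  {"price_usd": 2000, "rel_perf": 1.00},
}

for name, s in systems.items():
    perf_per_kusd = s["rel_perf"] / (s["price_usd"] / 1000)
    print(f"{name}: {perf_per_kusd:.2f} relative performance per 1000 USD")
```

On these assumed numbers the 12-core machine delivers roughly 50% more performance per dollar, even though the 8-core machine is faster in absolute terms.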
Quote:
Originally Posted by Chris Lee
Note, I'm assuming the E5-2680 v2 is a "single CPU" with 10 cores, and so I would still have 4 channels of RAM available to all 10 cores, or in terms similar to yours above, I would still have the full 59.7 GB/s max memory bandwidth.
|
Yes, and at 32 GB of total RAM, that would equate to 3.2 GB per core at roughly 5.97 GB/s access speed.
For comparison, the i7-4960X with 6 cores would be using 32 GB, with 5.33 GB per core at roughly 9.95 GB/s.
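The per-core figures above follow directly from dividing total RAM and total peak bandwidth by the core count (59.7 GB/s is the quad-channel DDR3-1866 peak, common to both CPUs):

```python
# Per-core share of RAM and of peak memory bandwidth for the two candidates.

def per_core_share(total_ram_gb, total_bw_gbs, cores):
    """(GB of RAM per core, GB/s of peak bandwidth per core)."""
    return total_ram_gb / cores, total_bw_gbs / cores

ram_10c, bw_10c = per_core_share(32, 59.7, 10)  # E5-2680 v2: 3.2 GB, ~5.97 GB/s
ram_6c, bw_6c = per_core_share(32, 59.7, 6)     # i7-4960X: ~5.33 GB, ~9.95 GB/s
print(ram_10c, bw_10c, ram_6c, bw_6c)
```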
Now that I look more closely at the 3 CPUs you proposed for comparison, the only major differences are:
- How much maximum RAM do you really want to use?
- Are you willing to pay the extra cost for ECC memory? It can give you greater peace of mind when running CFD cases, but it will make a bigger hole in your wallet as well.
For 32 GB of RAM, out of these 3, I would vote for the i7-4960X, which you could potentially overclock in situations where you need a little more performance and are willing to spend more on electricity to achieve it... although on a laptop this isn't easily achieved, and overclocking is a bit risky (namely, it takes some time to master). Either way, it gives you roughly the same performance as the other 2 CPUs and you save a lot of money. Just make sure you keep your workplace clean and, once a year, have your laptop's fans and heat sinks cleaned, to ensure that it's always being properly cooled.
Quote:
Originally Posted by Chris Lee
As a side question, with regard to the limiting factor in time to solution, I guess what I don't really know is how much time in the solution is spent with the cpu cores cranking away on the equations, vs updating the information in the RAM, . . . but I'll suppose for the time being that my CFD problem will be memory bandwidth limited. If you've got some rules of thumb on how to figure where the overall bottleneck is, i'd be most grateful.
|
I've already addressed this earlier in this post. Nonetheless, the primary rule of thumb is that it strongly depends on the kind of simulations you need to perform: some cases are easily parallelised, others aren't.
And don't forget the time it takes to generate the mesh when using a CPU that has more cores but a lower top speed when running on a single core.
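That serial-mesh caveat is essentially Amdahl's law; here is a minimal sketch (the serial fraction is purely illustrative):

```python
# Amdahl's law: if a fraction s of the total run (e.g. serial mesh generation)
# cannot be parallelised, the overall speedup on n cores is 1 / (s + (1-s)/n).

def amdahl_speedup(serial_fraction, cores):
    """Overall speedup for a run with the given non-parallelisable fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With 20% of wall time spent in serial meshing, 12 cores give only 3.75x
# overall, far from the ideal 12x:
print(round(amdahl_speedup(0.2, 12), 2))  # 3.75
```

So a CPU with many slower cores can lose on total turnaround time if the serial meshing step dominates, even when the solver itself scales well.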
-------------------
@acasas:
Quote:
Originally Posted by acasas
Hey Chris! You see? It was not bad hijacking your thread even by mistake. Now you can ask interesting things in mine and I dont mind ;-)
|
You might not mind, but others might, and probably will. It's considerably hard to talk/discuss two or more different topics in the same thread without losing track of whom the questions are being asked of and answered to. The only reason why I (as a moderator) haven't moved the posts in question is that it didn't seem to be a complete hijack and the details were still somewhat related.
Best regards,
Bruno