Possible benefits of interleaved memory for CFD
#1
New Member
Achilles Vassilicos
Join Date: Mar 2025
Posts: 2
Rep Power: 0
Hello everyone,
I am new to these forums, and seeking info on a hardware question. Are there any experiences/studies that indicate a speed-up benefit in using interleaved memory configurations in HPC compute nodes for CFD computations? Typical hardware of my compute nodes: CPU 2 x 8124M 3.0 GHZ, 6 memory channels per CPU, 12 memory channels total, each channel has 2 DIMM slots for a total of 24. There is a DIMM installed in each memory channel. Installing a 2nd DIMM per memory channel will result in interleaved memory. I am running a couple of different CFD codes including Overflow and Fun3D, and considering trying OpenFOAM. Thanks for any input. -Achilles |
|
|
|
|
|
|
|
|
#2
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,460
Rep Power: 50
Yes, rank interleaving used to be beneficial for peak memory bandwidth, and thus for CFD performance. Performance differences outside of the STREAM benchmark can be up to 10%.

Note that rank interleaving does not necessarily require two DIMMs per channel; a single dual-rank DIMM per channel is enough. Two ranks per channel are usually the sweet spot for overall performance. With too many ranks per channel, performance can get worse, either via poor sub-timings or outright lower transfer rates.

All of this applies to the pre-DDR5 era, for both AMD and Intel. AMD released some numbers for Epyc Genoa with DDR5, demonstrating very close to peak performance with only a single rank per channel. No idea whether this is exclusive to AMD's architecture, or whether the same applies to Intel.

For your CPUs, it is probably best to check which memory modules you have installed right now. Either read the label, or use "sudo dmidecode -t 17" on Linux.
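To illustrate the dmidecode route: a minimal sketch of filtering the relevant fields out of "dmidecode -t 17" output. The sample text below is a made-up excerpt (illustrative only, not from the poster's machine) so the filter can be shown without root access; on a live node you would pipe the real command's output through the same grep.

```shell
# On a live node: sudo dmidecode -t 17 | grep -E 'Size:|Locator:|Rank:'
# Hypothetical sample excerpt of dmidecode type-17 output:
sample='Memory Device
	Size: 16 GB
	Locator: DIMM_A1
	Type: DDR4
	Speed: 2666 MT/s
	Rank: 2
Memory Device
	Size: 16 GB
	Locator: DIMM_B1
	Type: DDR4
	Speed: 2666 MT/s
	Rank: 2'

# Keep only the size, slot locator, and rank count per DIMM
printf '%s\n' "$sample" | grep -E 'Size:|Locator:|Rank:'
```

A rank count of 2 in that output would mean dual-rank DIMMs, i.e. rank interleaving is already active with one DIMM per channel.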
|
|
|
|
|
|
|
|
#3
New Member
Achilles Vassilicos
Join Date: Mar 2025
Posts: 2
Rep Power: 0
I have a total of 10 compute nodes with the same 2-CPU configuration and DDR4-2666 RAM. All the RDIMMs/LRDIMMs are 2Rx4 or 2Rx8. So from what you are saying, populating the second DIMM slot in each channel would not bring any bandwidth benefit in this case, other than increasing total RAM capacity. Currently I am at about 5 GB per core and 3 cores per memory channel, which should be sufficient.
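As a quick sanity check of those per-core figures, here is a small arithmetic sketch. The 18-cores-per-CPU count matches the 8124M; the 16 GB DIMM size is an assumption chosen to reproduce the thread's ballpark numbers, not a stated fact.

```python
# Per-node memory arithmetic for the configuration discussed above.
# Assumptions (hypothetical): 16 GB per dual-rank DIMM, one DIMM per channel.
cpus = 2
cores_per_cpu = 18          # Xeon 8124M core count
channels = 12               # 6 memory channels per CPU
dimm_gb = 16                # assumed DIMM capacity

total_cores = cpus * cores_per_cpu      # 36 cores per node
total_ram_gb = channels * dimm_gb       # 192 GB per node

print(f"{total_ram_gb / total_cores:.1f} GB per core")   # ~5.3 GB/core
print(f"{total_cores // channels} cores per channel")    # 3 cores/channel
```

With those assumptions the numbers line up: about 5.3 GB per core and 3 cores sharing each memory channel, consistent with the figures quoted in the post.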
|
|
|
|
|
|
|
|