OpenFOAM benchmarks on various hardware
September 22, 2019, 17:26 | #1 | flotus1 (Super Moderator)
To be honest, I had to look up what a backport is first.
Too old for what, or how would I even begin to check whether the exact kernel of my distro is too old for my hardware? Officially, 4.12 is recent enough for Naples.
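Not how anyone in the thread did it, but a minimal sketch of one way to check from a shell: compare the running kernel (`uname -r`) against a minimum version using `sort -V`. The 4.12 minimum for Naples is taken from the post above.

```shell
# Sketch: is the running kernel at least some minimum version?
# 4.12 is the minimum mentioned above for EPYC Naples support.
required="4.12"
current="$(uname -r | cut -d- -f1)"   # e.g. "5.15.0" from "5.15.0-91-generic"

# sort -V orders version strings numerically; if the required version
# sorts first (or equal), the running kernel is recent enough.
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "kernel $current is recent enough (>= $required)"
else
    echo "kernel $current is older than $required"
fi
```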

September 29, 2019, 08:18 | #2 | Simbelmynė (Senior Member)
Quote:
Originally Posted by flotus1
To be honest, I had to look up what a backport is first.
Too old for what, or how would I even begin to check whether the exact kernel of my distro is too old for my hardware? Officially, 4.12 is recent enough for Naples.

There are a few improvements with each kernel version that may or may not have an impact on performance. Sometimes there are regressions as well.

Probably this is no big deal, but if you want to squeeze the last few percent out of your system, it may be worth checking out.

The easiest way to check is to simply install a newer kernel. OpenSUSE can probably do this through YaST, and if you break your system you should be able to boot back into the stable kernel.

September 29, 2019, 10:03 | #3 | flotus1 (Super Moderator)
I have done my fair share of experiments with installing newer kernel versions. My takeaway is that I won't do it on a system I intend to actually use. I will leave that to people who know what they are doing.

As a side note: I tried a few different compiler optimizations. There seem to be no significant gains here, barely above the margin of error.
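For anyone who wants to repeat such experiments: in a typical OpenFOAM source install, the gcc optimization flags live in wmake/rules/linux64Gcc/c++Opt (which contains `c++OPT = -O3` by default). A hypothetical tweak might look like the following; it is demonstrated on a stand-in temp file so nothing real is modified, and `-march=native` is an illustrative flag, not one flotus1 says he tested.

```shell
# Stand-in for $WM_PROJECT_DIR/wmake/rules/linux64Gcc/c++Opt, which by
# default contains "c++OPT = -O3". We edit a temp copy for illustration.
rules="$(mktemp)"
echo 'c++OPT = -O3' > "$rules"

# Append -march=native so gcc targets the build host's CPU; OpenFOAM
# would then need a full rebuild (./Allwmake) to pick this up.
sed -i 's/-O3/-O3 -march=native/' "$rules"
cat "$rules"
```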

October 23, 2019, 17:35 | #4 | ctd (New Member)
2X EPYC 7302, 16x16GB 2Rx8 DDR4-3200 ECC, OpenFOAM v5, Ubuntu 18.04.3

Code:
# cores   Wall time (s)
-----------------------
      1        723.64
      2        328.11
      4        164.21
      8         81.40
     12         55.20
     16         41.10
     20         37.53
     24         34.27
     28         29.99
     32         26.89
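The scaling in this table can be summarized as speedup and parallel efficiency; the 1-core and 32-core wall times below are copied from the post above.

```shell
# Speedup = T(1) / T(n); parallel efficiency = speedup / n.
awk 'BEGIN {
    t1 = 723.64; tn = 26.89; n = 32
    speedup = t1 / tn
    printf "speedup at %d cores: %.1fx, efficiency %.0f%%\n", n, speedup, speedup / n * 100
}'
# prints: speedup at 32 cores: 26.9x, efficiency 84%
```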

October 23, 2019, 17:39 | #5 | flotus1 (Super Moderator)
Hot damn!

October 24, 2019, 03:22 | #6 | Simbelmynė (Senior Member)
So it seems the architecture performs better than the increase in memory bandwidth alone would suggest.

EPYC 7301: 36.8 s @ 2666 MT/s
EPYC 7302: 26.9 s @ 3200 MT/s

That is impressive!

From Ryzen 3000 we have seen that memory timings (all of them) can play a huge role. Perhaps the XMP profiles and motherboard auto timings are better in tune with this release?

Edit: Can you also test this with OpenFOAM v7? Most of the benchmarks here are with v6 or v7.

October 24, 2019, 22:28 | #7 | ctd (New Member)
Sure, below are the results with v7.

2X EPYC 7302, 16x16GB 2Rx8 DDR4-3200 ECC, Ubuntu 18.04.3, OF v7

Code:
# cores   Wall time (s)
-----------------------
      1        711.73
      2        345.65
      4        164.97
      8         84.15
     12         55.90
     16         47.45
     20         38.14
     24         34.21
     28         30.51
     32         26.89

October 25, 2019, 22:01 | #8 | mh-cfd (New Member)
Quote:
Originally Posted by ctd
2X EPYC 7302, 16x16GB 2Rx8 DDR4-3200 ECC, OpenFOAM v5, Ubuntu 18.04.3
[...]
Wow, very impressive results! Could you please tell us which motherboard you used? It is quite hard to find one that supports Rome officially. I have read on Supermicro's site that some older motherboards do support Rome processors (with the full 3200 MT/s bandwidth), but the boards have to be "revision 2". I don't know what that means; maybe it is just an updated BIOS...

Regards

October 26, 2019, 02:22 | #9 | flotus1 (Super Moderator)
I cannot tell you where to buy compatible motherboards, since I had the same problem. But I can answer the rest of your question:

On numerous occasions, AMD reiterated their claim that all SP3 platforms would be able to get an upgrade from Naples to Rome. A promise they could not keep.
The alleged reason (German site): https://www.planet3dnow.de/cms/49742...en-bios-chips/
Most retail SP3 motherboards shipped with a 16MB ROM. The BIOS versions for Rome require 32MB ROMs. Hence many revision 1.x boards will never get support for Rome. There will not be a BIOS update; the hardware is incompatible.
Revision 2.x boards solve this, mainly with a bigger ROM chip. So it is a new hardware revision, not just a software update. Of course, these new revisions of older boards still lack support for some features of Epyc Rome, for example PCIe 4.0. When in the market for one of these boards, contact the retailer beforehand and make sure they ship rev. 2.x.
Entirely new retail boards with full feature support for Rome have been announced, but have not yet been spotted in the wild.
The lack of availability is a recurring theme with AMD Epyc, unfortunately.

November 4, 2019, 17:47 | #10 | flotus1 (Super Moderator)
Quote:
Originally Posted by ctd
2X EPYC 7302, 16x16GB 2Rx8 DDR4-3200 ECC, OpenFOAM v5, Ubuntu 18.04.3
[...]
Forgot to ask: did you run this in NUMA or UMA mode? AMD calls it NPS4 if I am not mistaken.
https://www.anandtech.com/show/14694...epyc-2nd-gen/6

November 6, 2019, 00:07 | #11 | ctd (New Member)
flotus1,
It was run in the default, "one NUMA domain per socket". I haven't had the opportunity yet to experiment with the options in:
https://developer.amd.com/wp-content...56745_0.75.pdf

I can try running the NPS4 setting if you're interested, but I may need some guidance on how to set it. I didn't initially see it in the bios, but could have missed it.
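Not BIOS guidance, but a quick sketch of how to see what the OS currently exposes: with the default NPS1, `lscpu` reports one NUMA node per socket (two total on a dual-socket machine); in NPS4 it would report eight. The `mpirun` line is a hypothetical illustration of core binding, with placeholder solver and core count, not a command from this thread.

```shell
# Show how many NUMA nodes the kernel currently exposes: a dual-socket
# NPS1 system reports 2, the same machine in NPS4 would report 8.
lscpu | grep -i '^NUMA node(s)' || echo "no NUMA information reported"

# Hypothetical NUMA-aware launch (Open MPI's --bind-to option), for
# illustration only; solver and core count are placeholders:
#   mpirun -np 32 --bind-to core simpleFoam -parallel
```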

mh-cfd,
The Motherboard is a SuperMicro H11DSi version 2.0. It was purchased from https://www.interpromicro.com/ based on a tip from the thread below:
https://forums.servethehome.com/inde...yc-rome.25430/

November 6, 2019, 12:58 | #12 | flotus1 (Super Moderator)
Due to an appalling lack of Epyc Rome equipment on my part, I cannot help you find that BIOS option. But I would not be surprised if Supermicro just left it out: "Screw that noise, more options would just confuse our customers."

It is partly out of curiosity, but I also think it should give you somewhat better performance with NUMA-aware software like OpenFOAM.

November 12, 2019, 07:33 | #13 | jakethejake (New Member)
2 x AMD EPYC 7371, 16 x 16GB DDR4 Dual-Rank, Supermicro h11dsi
Windows 10 Pro Vers. 1903 Build 18362.418 - WSL Ubuntu 18.04 LTS
OpenFOAM-6 (precompiled package from openfoam.org)

Code:
# cores   Wall time (s)
-----------------------
      1       1254.01
      2        447.25
      4        212.51
      6        139.17
      8        101.92
     12         88.24
     16         88.04
     20         83.50
     24         74.72
     28         70.44
     32         87.87
I expected it to be slower than a native Linux install, but it is also slower than comparable results I see around here.
Any ideas? Or is it just Windows 10?

Last edited by jakethejake; November 14, 2019 at 02:53. Reason: corrected build data

November 12, 2019, 11:05 | #14 | flotus1 (Super Moderator)
WSL = Windows Subsystem for Linux?

It might not be the best solution if you want near bare-metal performance. You might want to try a dockerized version of OpenFOAM, or a proper VM.

November 15, 2019, 18:03 | #15 | Simbelmynė (Senior Member)
My experience is that WSL is almost as fast as native Linux for this benchmark. Frequent writes to disk should be avoided, though.

Here is some (old) information; WSL has seen improvements since this post:

https://www.phoronix.com/scan.php?pa...900x-wsl&num=1
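One way to keep slow I/O out of the timing, assuming a standard OpenFOAM case layout, is to write results less often via system/controlDict. The values below are illustrative, not the benchmark case's actual settings.

```
// system/controlDict (excerpt; illustrative values)
writeControl    timeStep;   // trigger writes by time-step count
writeInterval   1000;       // write fields only every 1000 steps
purgeWrite      2;          // keep only the two most recent time directories
```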

November 23, 2019, 10:46 | #16 | SLC (Member)
Quote:
Originally Posted by ctd
[...]

Are you able to run Fluent benchmarks?

December 3, 2019, 09:13 | #17 | HBH_aero (New Member) | Epyc Rome Benchmark
Hi!

Is the below the only EPYC Rome benchmark available here?

I am looking to get a new Linux workstation and am currently considering the Epyc Rome CPUs.

By the way, the results below look really promising!

Quote:
Originally Posted by ctd
2X EPYC 7302, 16x16GB 2Rx8 DDR4-3200 ECC, OpenFOAM v5, Ubuntu 18.04.3
[...]

December 3, 2019, 13:58 | #18 | flotus1 (Super Moderator)
Yes, these are the only Epyc Rome results we have so far.
Still, I don't think you can go wrong with them. Especially for a general-purpose workstation, they are a huge improvement over 1st gen due to the less complicated NUMA topology.

December 3, 2019, 15:08 | #19 | SLC (Member)
I'm in contact with a compute service provider who has offered to benchmark some Fluent cases for me on a dual 7302 setup. Will post back if I get it done.

December 4, 2019, 02:16 | #20 | HBH_aero (New Member)
I agree, flotus1. Maybe the doubled L3 cache plays a role too?
I wonder whether they will manage to increase the clock frequency in the future.
