AMD Ryzen Threadripper 1920X vs. Intel Core i7 7820X

#21 — Simbelmynë (Senior Member), January 25, 2018, 02:54
Nice, so you managed about a 12% speedup compared to the stock frequency.

Did you OC the memory as well, or did you run it at 3200?

Also, did you delid the CPU, or was it not necessary?

Did you change the voltage?

A fair comparison would be to run the Threadripper at 3.9 GHz, since that is a realistic mark to achieve with OC (perhaps 4 GHz with some luck).

I have never ordered from Silicon Lottery, but they guarantee a certain OC potential for their offerings:

https://siliconlottery.com/collectio...intel-i7-7820x

Finally, since AMD is releasing Zen+ very soon, we might see a speedup of about 10% at stock operation in the AMD line before the summer.

Now, if only the DDR prices would go down....

#22 — The_Sle (New Member), January 25, 2018, 13:46
The memory is a HyperX 3000 MHz CL15 kit, with XMP enabled and then overclocked to 3200 MHz. It takes that speed with no problem, which is nice considering it's about 50 € cheaper than "true" 3200 MHz kits.

CPU delidding is not necessary (for the 8-core model at least) unless you are aiming for 5 GHz+. My Noctua D15 can't keep up past 4.7 GHz, but a proper AIO water cooler should be able to run 4.8, maybe 4.9 on a good chip. However, the stability demands CFD places on a system are much higher than those of pretty much any other use, which is why mileage varies greatly.

My chip runs stable at 4.5 GHz @ 1.15 V, or 4.7 GHz @ 1.25 V, at which point temperatures become the problem before I can even test system stability properly: I can't run simpleFoam at all because the CPU immediately shuts down due to thermals.

As for Zen+, the TR models of that architecture won't arrive for a while. And Intel will release the next generation of X299 CPUs late this year (or early next year, with the usual delays), keeping the competition hot. It's great that AMD can make proper CPUs again, since Intel can't milk its customers as badly now.

I'd say that right now the Skylake-X CPUs are excellent value for what they are, IF overclocked. TR is the better deal if you can find motherboards and memory at reasonable prices and don't want to mess with overclocking.

#23 — JBeilke (Joern Beilke, Senior Member, Dresden), January 26, 2018, 09:43
Thanks for your work.

So we get only about a 25% improvement on 6 cores compared to the Xeon E5-1650 v3, and for that we need to overclock the machine.

I still hope that someone from the Epyc faction might run this benchmark.

#24 — flotus1 (Alex, Super Moderator, Germany), January 26, 2018, 11:32
I hear you. What I have done so far is install the latest OpenFOAM Docker image and copy the motorbike files to a run directory. From here on I need specific instructions.

#25 — Simbelmynë (Senior Member), January 26, 2018, 15:37
I can try to explain (without actually having access to a Linux box to verify my memory). All the information is in the Allrun file. To run the test without meshing etc., you can simply comment out the solver line and then execute Allrun. This sets up the case for the simpleFoam solver.

In the system folder, edit the file that ends in ".6" and set the correct number of processes (the default is 6). You also need to decide how to partition the domain (with 32 cores you might go with 4 4 2 or 8 2 2). Save it, and also make a copy of the file without the ".6" ending (this is needed because simpleFoam looks for a file without the ".6" ending if you do not execute it from the Allrun script).
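For reference, a minimal sketch of what the edited decomposeParDict might contain for a 32-core run. This is a hypothetical fragment, not the exact tutorial file; the dictionary syntax (e.g. `simpleCoeffs` vs. a generic `coeffs` block) varies between OpenFOAM versions:

```
// system/decomposeParDict -- hypothetical sketch for 32 subdomains
numberOfSubdomains 32;

method          simple;

simpleCoeffs
{
    n           (4 4 2);    // 4 x 4 x 2 = 32, must match numberOfSubdomains
    delta       0.001;
}
```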

After that you can simply run

Code:
time mpirun -np 32 simpleFoam -parallel
Update:

OK so I logged into one of my machines to look at my setup.

1. Edit decomposeParDict.6 and change the number of subdomains. Make sure the product of the entries in "n" equals the number of subdomains.
2. Simply add "time" in front of the call to the solver. The call then becomes:
Code:
time runParallel $decompDict $(getApplication)
(the original method is perhaps better if you wish to test bind-to-core or bind-to-hwthread)
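If you do go back to plain mpirun for the binding tests, a small helper along these lines keeps the variants consistent. This is a hypothetical sketch (the helper name and the string assembly are mine); `--bind-to` is the standard Open MPI option:

```shell
# Hypothetical helper: assemble the mpirun line for a given binding policy,
# so the bind-to core / hwthread / none variants differ in one word only.
run_cmd() {
    np=$1
    policy=$2
    echo "mpirun --bind-to $policy -np $np simpleFoam -parallel"
}

run_cmd 32 core
run_cmd 32 hwthread
```

Prefix the emitted command with `time` (or wrap it in `eval`) to reproduce the timing runs.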

#26 — flotus1 (Alex, Super Moderator, Germany), January 26, 2018, 17:10
OK thanks, maybe I got it right.
With 32 threads, simpleFoam takes 74 s.
With bind-to core: 71 s.
With bind-to hwthread: 71 s.

A large portion of the time seems to be I/O. Load balancing might be far from optimal, and I have no idea whether the case is large enough to scale to 32 cores.

#27 — JBeilke (Joern Beilke, Senior Member, Dresden), January 27, 2018, 07:37
Dear Alex,

thanks so much for this. Could you please also run the 6-core variant? I'm still looking for a machine that might be suitable for my ccm+ simulations :-)

Best regards,
Jörn

#28 — flotus1 (Alex, Super Moderator, Germany), January 27, 2018, 09:18
All right, a fresh start with 6 threads:

no core binding: 153 s
binding to the first 6 cores of one CPU: 196 s
distributing across one CPU (2+2+1+1 threads per NUMA node): 166 s
distributing across both CPUs (1 thread per NUMA node, two nodes left idle): 159 s

Run times may vary by up to 10 s between repeated runs. A SATA SSD is used to store the data.

With no core binding, the case ran on cores 1, 12, 16, 21, 24, 26.
The NUMA nodes span cores 1-4, 5-8, 9-12, 13-16, 17-20, 21-24, 25-28, 29-32, if we number the first core 1 instead of 0.

So it may not come as a surprise that Epyc is not the best choice for low-core-count CFD simulations.

Edit: forgot to mention that I changed one line in controlDict ("startFrom startTime;") so that all re-runs perform the same iterations. I hope this was appropriate.

#29 — JBeilke (Joern Beilke, Senior Member, Dresden), January 28, 2018, 13:22
Many thanks Alex,

the 6-core variant is roughly in line with the other results posted across various threads: nearly the same speed as the Threadripper.

That seems OK, since you are using neither the fastest RAM nor the fastest CPU.

But your result with 32 cores (71-74 s) is a bit disappointing. I would expect something around 40 seconds (30 s for linear scaling).

So we might have a little problem there. Either the case does not scale very well (decomposition method or computational overhead), or the machine is already saturated at a lower core count than 32.

To check this you can try two tests:

  1. run several instances of the 6-core variant at the same time, or
  2. check the speedup at 12 / 16 / 24 / ... cores and see where the efficiency drops.
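The speedup check in the second option boils down to a one-liner. A sketch (the function name is mine), shown with the numbers already posted in this thread (153 s on 6 cores, 71 s on 32):

```shell
# Sketch: speedup and parallel efficiency relative to a baseline run.
# Arguments: baseline cores, baseline time [s], cores, time [s].
efficiency() {
    awk -v bc="$1" -v bt="$2" -v c="$3" -v t="$4" \
        'BEGIN { s = bt / t; printf "%.2f %.2f\n", s, s / (c / bc) }'
}

# 6-core run: 153 s, 32-core run: 71 s (values from this thread)
efficiency 6 153 32 71   # prints "2.15 0.40": 2.15x speedup, 40% efficiency
```

An efficiency of roughly 0.40 at 32 cores is indeed far from linear scaling.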

Best regards,
Jörn

#30 — flotus1 (Alex, Super Moderator, Germany), January 28, 2018, 14:13
I am pretty sure the poor scaling here is particular to this example. As stated earlier, there is quite some I/O overhead, load balancing might be far from ideal due to my non-existent OpenFOAM experience, and the case itself might be too small.
Running several instances seems to be the easiest way for me to get around all of these issues. I will give it a shot... maybe tomorrow. Is it OK if I use 4 threads per instance and then run 1-8 instances?
Edit: to be honest, though, I would expect pretty much linear scaling with this method unless the I/O saturates my SATA SSD at some point.

#31 — JBeilke (Joern Beilke, Senior Member, Dresden), January 28, 2018, 14:54
Alex, there is no need for urgent action :-)

I would stay with 6 cores and go up to a maximum of 5 instances. We know from the theory of discrete-event simulation that whenever we try to utilize a finite resource at 100%, the queue in front of it grows towards infinity. At something like 80% utilization we get the best throughput.
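The 80% rule of thumb can be illustrated with the simplest queueing model: for an M/M/1 queue, the mean number of jobs in the system is L = rho / (1 - rho), which blows up as utilization rho approaches 1. A quick sketch (the function name is mine):

```shell
# Sketch: mean number of jobs in an M/M/1 queue, L = rho / (1 - rho).
# Illustrates why pushing utilization towards 100% makes queues explode.
mm1_jobs() {
    awk -v r="$1" 'BEGIN { printf "%.1f\n", r / (1 - r) }'
}

mm1_jobs 0.80   # prints 4.0  -- comfortable backlog at 80% utilization
mm1_jobs 0.95   # prints 19.0 -- nearly 5x worse at 95%
```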

Unfortunately, many managers have never heard of this basic principle of resource planning and always try to run their workers and machines at 100%.
