Home > Forums > Software User Forums > ANSYS > CFX

Using a Core i7 CPU for parallel solving

September 4, 2011, 05:57  #21
Max (murx), Member
Join Date: May 2011 | Location: old europe | Posts: 88
Of course, you are right. But a deviation of, let's say, 2 seconds from timing by hand, on measured times between 2 and 3 minutes, only amounts to an error of 1-2%.
So I guess this is not the reason for the poor performance improvement. Any other idea what the reason could be?
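As a sanity check on the arithmetic above (a sketch; the 2 s reading error and the 2-3 minute run times are the figures from this post):

```python
# Worst-case relative error of hand-timing a run: reading error / run time.
# The 2 s reading error and 2-3 minute run times are taken from the post above.
def relative_error(reading_error_s, run_time_s):
    return reading_error_s / run_time_s

for run_time in (120.0, 180.0):  # runs between 2 and 3 minutes
    print(f"{run_time:.0f} s run: about {relative_error(2.0, run_time):.1%} error")
```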

September 4, 2011, 20:11  #22
Glenn Horrocks (ghorrocks), Super Moderator
Join Date: Mar 2009 | Location: Sydney, Australia | Posts: 17,703
I do not trust your measurement yet. Please recalculate the speedup from the CFD Solver wall clock seconds reported immediately after the iterations complete.

September 5, 2011, 07:39  #23
Max (murx), Member
Join Date: May 2011 | Location: old europe | Posts: 88
Serial: CFD Solver wall clock seconds: 2.2300E+02

HP MPI Local Parallel, 4 Processes: CFD Solver wall clock seconds: 1.2000E+02 (factor 1.86)

HP MPI Local Parallel, 7 Processes: CFD Solver wall clock seconds: 1.1000E+02 (factor 2.03)

This time, I used a different case with a smaller mesh (~600,000 elements).
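The speedup factors quoted above follow directly from the wall clock values; plain arithmetic, with the numbers as posted:

```python
# Speedup = serial wall clock / parallel wall clock, numbers as posted above.
serial = 223.0                    # 2.2300E+02 s
parallel = {4: 120.0, 7: 110.0}   # processes -> wall clock seconds

for procs, t in sorted(parallel.items()):
    speedup = serial / t
    print(f"{procs} processes: speedup {speedup:.2f}, parallel efficiency {speedup / procs:.0%}")
```

Note that parallel efficiency (speedup divided by process count) falls well below 50% here, which is the real warning sign.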

September 5, 2011, 18:47  #24
Glenn Horrocks (ghorrocks), Super Moderator
Join Date: Mar 2009 | Location: Sydney, Australia | Posts: 17,703
I see. Forget about the 7-process run; you will always get terrible speedups with hyperthreading. A speedup of 1.86 on 4 processes is not very good either; you should be over 3 on a modern processor. Have you run the benchmark simulation? That is the reference I use to benchmark solver speed.

But I think you can be pretty sure something is wrong with your setup, and it is robbing you of multi-processor speed.
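One way to quantify how poor 1.86x on 4 cores is: Amdahl's law (my addition, not something Glenn cites here, and it ignores memory-bandwidth saturation, which often dominates on desktop CPUs):

```python
# Amdahl's law: S(n) = 1 / ((1 - f) + f / n), where f is the parallel fraction.
# Solving for f given a measured speedup S on n processes:
#   f = (n / (n - 1)) * (1 - 1 / S)
def parallel_fraction(speedup, n):
    return (n / (n - 1)) * (1.0 - 1.0 / speedup)

print(f"S = 1.86 on 4 cores implies f = {parallel_fraction(1.86, 4):.0%}")
print(f"S = 3.00 on 4 cores would need f = {parallel_fraction(3.0, 4):.0%}")
```

A CFD solver should be far more parallel than the ~62% this implies, which supports the suspicion that the setup (or the memory subsystem), not the solver, is the bottleneck.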

September 8, 2011, 03:43  #25
Max (murx), Member
Join Date: May 2011 | Location: old europe | Posts: 88
First of all, thanks for your help!

Here are the results for the benchmark run:
Serial: CFD Solver wall clock seconds: 3.0000E+01
HP MPI Local Parallel, 4 processes: CFD Solver wall clock seconds: 1.6000E+01 (factor 1.88)

I checked the memory usage during my last runs and it was never fully used.
I also tested another machine, a Core i5-2500 (the one I usually use is a Core i7-2600). The speedup factor on that machine was only 1.9 as well.

I don't know much about CPUs, but maybe there is some kind of automatic down-clocking when several cores are used ...

September 8, 2011, 07:51  #26
Glenn Horrocks (ghorrocks), Super Moderator
Join Date: Mar 2009 | Location: Sydney, Australia | Posts: 17,703
If the benchmark problem runs at a similar speed then you definitely have a problem with your setup; it is not the simulation.

Recent Intel CPUs run at a higher clock speed when only a single core is loaded (Turbo Boost), and this could explain it. To check, run the case with 1, 2, 3 and 4 processes. If the 1-process result is clearly out of line with the others, this is probably the explanation.

The benchmark in 30 s serial and 16 s on 4 processes is very fast; it must be quite a new machine.

So I would recommend trying the other multi-processor start methods, such as MPI, HP MPI, PVM, Intel, etc. You may get better speedups from one of them.
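The 1, 2, 3, 4 process check can be tabulated like this (a sketch; the timings below are hypothetical placeholders, not measurements from this thread):

```python
# Hypothetical placeholder timings (seconds) for 1..4 processes --
# substitute the "CFD Solver wall clock seconds" from your own runs.
times = {1: 223.0, 2: 125.0, 3: 95.0, 4: 120.0}

base = times[1]
for n in sorted(times):
    speedup = base / times[n]
    print(f"{n} process(es): {times[n]:6.1f} s, speedup {speedup:.2f}, efficiency {speedup / n:.0%}")
```

If efficiency drops sharply between 1 and 2 processes but is flat afterwards, the single-process baseline was probably inflated by the higher single-core clock speed rather than by a genuine parallel problem.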

June 25, 2012, 08:55  #27
OJ (oj.bulmer), Senior Member
Join Date: Apr 2012 | Location: United Kingdom | Posts: 473
I have an i7-2860QM (4 cores) and 8 GB RAM.
The popular opinion in this thread is that 4 cores give optimum performance.

The question is: would keeping a simulation running for hours be harmful to the life of the system? I can feel the system heating up when I see all 4 processors loaded at ~100%.

June 25, 2012, 09:01  #28
OJ (oj.bulmer), Senior Member
Join Date: Apr 2012 | Location: United Kingdom | Posts: 473
Quote:
Originally Posted by ghorrocks
... So I would recommend trying other multi processor implementations such as MPI, HPMPI, PVM, Intel etc. You may be able to get better speedups from them.

Glenn, given my processor, which of these (MPI, HP MPI, PVM, Intel) would you recommend for best performance?

Thanks
OJ

June 25, 2012, 19:01  #29
Glenn Horrocks (ghorrocks), Super Moderator
Join Date: Mar 2009 | Location: Sydney, Australia | Posts: 17,703
When CPUs run hard they run hot. Well-designed systems can handle the load and will still function fine; poorly designed systems will overheat and cause problems. I think the i7 chips sense their own temperature and, if they get too hot, slow themselves down to prevent overheating. This saves the CPU but means you are running at reduced speed.

As for which multi-processor implementation is best, the simple answer is to benchmark them all on your system.
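For that benchmarking, the number to compare can be scraped from each run's output file; a minimal sketch, assuming the "CFD Solver wall clock seconds" line appears in the .out file in the same form quoted earlier in this thread:

```python
import re

# Matches a line such as "CFD Solver wall clock seconds: 2.2300E+02".
WALL_CLOCK = re.compile(r"CFD Solver wall clock seconds:\s*([0-9.Ee+-]+)")

def solver_wall_clock(out_text):
    """Return the solver wall clock time in seconds, or None if the line is absent."""
    m = WALL_CLOCK.search(out_text)
    return float(m.group(1)) if m else None

# Usage (hypothetical file names): read each run's .out file and compare.
# for name in ("run_hpmpi_4.out", "run_pvm_4.out"):
#     with open(name) as f:
#         print(name, solver_wall_clock(f.read()))
```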
