Hi. I've just recently got access to quite a big cluster containing a total of about 1440 cores. But I've noticed that more cores do not always increase the calculation speed. As an example, I have a grid containing 106250 cells. I've tried three different runs: one on 100 cores, one on 20 cores, and one on another workstation with just 1 core. The estimated calculation times are:
1 core: 163 hours
20 cores: 10 hours
100 cores: 18 hours
I've also noticed that there seems to be an acceleration in simulation speed from the start, i.e. an estimate made just when the calculations have begun will be higher than an estimate made after an hour.
The simulations are set to simulate 500 seconds, and they have so far reached (they were not started at the same time):
1 core: 450 secs simulated
20 cores: 19 secs simulated
100 cores: 23 secs simulated
So my conclusion is that, since 20 cores seems to be faster than both 100 cores and 1 core, there must be an optimum number of cores to use.
Is there any literature available on this subject, or do any of you have a rule of thumb for how many cores should be utilized?
Hi Mark! There seem to be several things at work here. Amongst these is Amdahl's law: http://en.wikipedia.org/wiki/Amdahl%27s_law
If you assume that the parallel part of the computation is approximately proportional to the volume of each CPU's subdomain, and the serial (communication) part proportional to the processor boundaries, then it is easy to see that the parallel/serial ratio gets ever more unfavourable at higher processor counts. That would explain why the computation times actually get worse, instead of approaching a fixed value as Amdahl's law alone would predict.
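As a toy illustration (the constants here are invented for the example, not measured on any cluster), pure Amdahl's law only makes speedup saturate, but adding a communication term that grows with core count is enough to produce a genuine optimum like the one in these runs:

```python
def amdahl_speedup(p, n):
    """Ideal Amdahl speedup for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

def toy_time(n, cells=106250, t_comm=265.625):
    """Toy per-run cost: compute work shrinks as cells/n, while
    communication overhead grows with n. t_comm is an invented
    constant chosen purely to make the optimum land near 20."""
    return cells / n + t_comm * n

# With pure Amdahl, more cores never hurt -- speedup just saturates:
print(amdahl_speedup(0.95, 100))   # ~16.8, far below the ideal 100

# With a growing communication term, there is an optimum core count:
best = min(range(1, 101), key=toy_time)
print(best)                        # -> 20
```

The point is qualitative: once per-core work gets small, the overhead term dominates and adding cores makes the run slower, not just less efficient.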
The optimum in your case is definitely N < 20, but it depends on your problem. Depending on whom you ask, the rule of thumb is that you should have at least 50k (or 10k) cells per processor to get reasonable speedups.
Are they all single-cores, dual-cores, or quad-cores, and what make are they? This could be due to a memory bandwidth limitation. You have a grid with roughly 100k cells, so using 100 cores means each core gets only about 1000 cells. I'm sure I read that OpenFOAM shows no benefit from parallel processing when the cell count per node is under 10k. I have a quad-core at home, and I found that for a 100k mesh, running on 2 cores was actually slightly faster than running on all 4.
Thank you guys for the quick answers.
The simulation on 100 cores is still running and has simulated 204 seconds in 18.5 hours. The simulation on 20 cores finished (500 s) in 16.85 hours.
I'm an intern in the company that bought the cluster, so I really don't know that much about it. All I know is that it has approx. 1440 cores and that each node has 4 cores, so approx. 360 nodes.
As I read it, I should aim for 10k-50k cells/core (this must depend on flow complexity), then start a couple of simulations using e.g. 10k, 30k and 50k cells/core, make an estimate after a while, and kill the two slowest.
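For what it's worth, the core counts those targets imply for this 106250-cell grid can be worked out directly (a quick back-of-the-envelope sketch):

```python
cells = 106250  # grid size from the first post

for cells_per_core in (10_000, 30_000, 50_000):
    # Ceiling division: round up so no core exceeds the target load.
    cores = -(-cells // cells_per_core)
    print(f"{cells_per_core:>6} cells/core -> {cores} cores")
# ->  10000 cells/core -> 11 cores
# ->  30000 cells/core -> 4 cores
# ->  50000 cells/core -> 3 cores
```

All three trial runs would fit comfortably on a handful of the cluster's 4-core nodes, consistent with the advice that the optimum here is well below 20 cores.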
Sounds good, Mark. Let us know how you get on.