CFD Online Discussion Forums

CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
-   CD-adapco (http://www.cfd-online.com/Forums/cd-adapco/)
-   -   Increase speed of parallel computation (http://www.cfd-online.com/Forums/cd-adapco/82571-increase-speed-parallel-computation.html)

Purushothama November 29, 2010 21:16

Increase speed of parallel computation
 
Dear StarCD colleagues,

I am using STAR-CD 3.26 for fuel cell simulation. We have a cluster facility with a 48GB server and 8 nodes with 8GB of memory each. When I run a simulation of a large-scale geometry with about 4 million cells on 32 processors, it takes about 4 minutes per iteration. This is too slow. I issued the command,

star -dp -mpi=hp -decompmeth=s node0,4 node1,4 etc..

I have also tried with 8 processors, but the time taken per iteration is almost the same. I use ufiles (user subroutines) and the CG solver.

Does anyone know how I can reduce the simulation time? I have tried the AMG method, but found that it does not work well with my ufiles.

TMG November 30, 2010 15:03

First of all, you might try running a version of STAR that's not 10 years out of date. Second, it's often the case (though not always) that your own user subroutines are at fault: if they don't scale, the solution certainly won't. Finally, you are forcing the decomposition yourself by using sets. If that is not an optimal decomposition, you will not be able to scale to higher numbers of domains.
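As a back-of-the-envelope illustration (not STAR-CD-specific), Amdahl's law shows why a serial fraction in the per-iteration work, such as non-parallelized user-subroutine code, caps the speedup no matter how many processors you add. The serial fractions below are made-up values for illustration only.

```python
# Amdahl's law: speedup on n processors when a fraction s of the
# per-iteration work does not parallelize (e.g. serial ufile code).
def amdahl_speedup(n_procs, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

for s in (0.05, 0.20):
    for n in (8, 32):
        print(f"serial fraction {s:.0%}, {n} procs: "
              f"speedup {amdahl_speedup(n, s):.2f}x")
```

With a 20% serial fraction, going from 8 to 32 processors only raises the speedup from about 3.3x to 4.4x, which matches the symptom of 8 and 32 processors taking nearly the same wall time per iteration.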

Pauli November 30, 2010 15:51

"48GB server and 8 nodes of 8GB"?? My math says 8 nodes x 8GB = 64GB.

Are you running on all cores of all nodes? If so, that could also be contributing to the lack of scalability. Whether it does or not depends on the problem and the cluster configuration.

Have you tried leaving one core "free" on each node? That, along with using Metis for decomposition, might help performance, unless you have a good reason to decompose by sets.
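To see why a hand-made set decomposition can hurt, one simple metric is the load-imbalance ratio: the largest domain relative to the average, since the slowest domain sets the pace of every iteration. The partition sizes below are invented for illustration; a graph partitioner like Metis aims to balance cell counts while also minimizing the faces cut between domains.

```python
# Load imbalance of a decomposition: max domain size / average domain size.
# 1.0 is perfectly balanced; larger values mean idle processors each iteration.
def imbalance(part_sizes):
    avg = sum(part_sizes) / len(part_sizes)
    return max(part_sizes) / avg

manual_sets = [1_500_000, 900_000, 800_000, 800_000]        # hypothetical hand-made sets
graph_based = [1_000_000, 1_000_000, 1_000_000, 1_000_000]  # Metis-like balanced split

print(f"manual sets: {imbalance(manual_sets):.2f}")   # 1.50: one domain 50% oversized
print(f"graph based: {imbalance(graph_based):.2f}")   # 1.00: balanced
```

An imbalance of 1.5 means roughly a third of the parallel capacity is wasted waiting on the biggest domain, independent of any communication cost.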
