October 29, 2009, 06:17
Load balancing pre- and during computation
#1
Senior Member
Anonymous
Join Date: Mar 2009
Posts: 110
Rep Power: 17
Hi,
I'm currently using decomposePar to decompose the mesh into n parts for a parallel run. The partitioning itself looks very efficient, with an almost equal number of cells on every node. However, when I actually run the solver (in this case just potentialFoam), memory usage builds up on the first processor, then the second, then the third and fourth in turn, with memory on the other processors being released each time. Is there a reason for this, or have I missed a step?
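For reference, a minimal system/decomposeParDict along these lines should give the even cell split described above. This is an illustrative sketch, not the poster's actual file: the four subdomains, the simple geometric method, and the (2 2 1) split are assumptions to adapt to your own case.

FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains 4;

method          simple;    // geometric split; hierarchical or metis are alternatives

simpleCoeffs
{
    n           (2 2 1);   // 2 x 2 x 1 split of the domain in x, y, z
    delta       0.001;     // cell skew factor (the usual default)
}

After running decomposePar, the case would then be launched with something like: mpirun -np 4 potentialFoam -parallel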
October 29, 2009, 06:23
#2
Senior Member
Anonymous
Join Date: Mar 2009
Posts: 110
Rep Power: 17
Also, how much memory does it require per million cells in double precision, and does that scale linearly with cell count?
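As a rough back-of-the-envelope estimate (my own reasoning, not an official figure): a double-precision scalar field costs 8 bytes per cell, so one scalar field on a one-million-cell mesh is about 8 MB and a vector field about 24 MB. The matrix coefficients and mesh connectivity scale with the face count, which also grows roughly linearly with cell count, so to first order total memory should scale linearly as well.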
|