CFD Online Discussion Forums

CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
-   SU2 (http://www.cfd-online.com/Forums/su2/)
-   -   Domain Decomposition Out-of-Memory (http://www.cfd-online.com/Forums/su2/114463-domain-decomposition-out-memory.html)

grjmpower March 11, 2013 14:33

Domain Decomposition Out-of-Memory
 
2 Attachment(s)
Hi,

I am running on a cluster with 12 cores/node and 16 GB per node. When running the domain decomposition on a reasonably large problem (21 million cells), I get an OOM (Out of Memory) message from the node. This seems to be happening during the write operation (see attached output file).

When I reduce to 3.1 million cells, then the domain decomposition code runs and SU2_CFD starts.

Could SU2_DDC be using more than 16 GB with the 21 million cell grid? Why should the error occur during the write?

Thanks,

Greg
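
For scale, a rough back-of-envelope estimate suggests a serial partitioner holding the whole mesh plus per-partition copies could indeed exceed 16 GB at 21M cells. All byte counts and overhead factors below are illustrative assumptions, not measured SU2 figures:

```python
# Hedged estimate: can partitioning a 21M-cell grid exceed 16 GB?
# bytes_per_cell_mesh and duplication_factor are assumptions for
# illustration only, not numbers taken from SU2 itself.

n_cells = 21_000_000
bytes_per_cell_mesh = 500   # assumed: connectivity + coordinates + metadata
duplication_factor = 2.0    # assumed overhead: halo cells + write buffers

base_gb = n_cells * bytes_per_cell_mesh / 1e9
peak_gb = base_gb * (1 + duplication_factor)

print(f"base mesh ~{base_gb:.1f} GB, peak during write ~{peak_gb:.1f} GB")
```

Under these assumptions the peak lands around 30 GB, roughly double the 16 GB available on one node, which is consistent with the partitioning succeeding at 3.1 million cells but failing at 21 million.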

fpalacios March 12, 2013 15:43

Quote:

Originally Posted by grjmpower (Post 413189)
Hi,

I am running on a cluster with 12 cores/node and 16 GB per node. When running the domain decomposition on a reasonably large problem (21 million cells), I get an OOM (Out of Memory) message from the node. This seems to be happening during the write operation (see attached output file).

When I reduce to 3.1 million cells, then the domain decomposition code runs and SU2_CFD starts.

Could SU2_DDC be using more than 16 GB with the 21 million cell grid? Why should the error occur during the write?

Thanks,

Greg

Hi Greg,
We are aware of the SU2_DDC and I/O limitations for grids of this size (21M cells). The next monthly release, SU2 2.0.2 (next Tuesday), will include substantial improvements: binary output (Tecplot, CGNS) without Python scripts for the merging process, and a new SU2_DDC. So please stay tuned.

Best,
Francisco

grjmpower March 29, 2013 07:52

Francisco,

I am running v2.0.2 now and still run into the out-of-memory problem. The grid is over 25 million elements. The code stops while writing the grid partitions, as with the earlier version. A 3 million element grid does partition correctly, but has problems writing the flow files (see separate post).

The computer I am using has 16GB per node.

Unfortunately, this is a show stopper for using SU2 on real problems.

Greg

aniketaranake April 11, 2013 13:24

Quote:

Originally Posted by grjmpower (Post 413189)
Hi,

I am running on a cluster with 12 cores/node and 16 GB per node. When running the domain decomposition on a reasonably large problem (21 million cells), I get an OOM (Out of Memory) message from the node. This seems to be happening during the write operation (see attached output file).

When I reduce to 3.1 million cells, then the domain decomposition code runs and SU2_CFD starts.

Could SU2_DDC be using more than 16 GB with the 21 million cell grid? Why should the error occur during the write?

Thanks,

Greg

Hi Greg,

Thank you for your post. We are aware of this memory leak, and it will be fixed in the next monthly release. Look for it at the beginning of next week.

Cheers,
Aniket
