Domain Decomposition Out-of-Memory
I am running on a cluster with 12 cores/node and 16 GB per node. When running the domain decomposition on a reasonably large problem (21 million cells), I get an OOM (out of memory) message from the node. This seems to be happening during the write operation (see attached output file).
When I reduce the grid to 3.1 million cells, the domain decomposition runs and SU2_CFD starts.
Could SU2_DDC be using more than 16 GB with the 21 million cell grid? Why should the error occur during the write?
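As a rough sanity check on the first question, here is a back-of-envelope estimate (the per-cell byte count and the write-time doubling are my own assumptions, not figures from the SU2 source):

Code:
# Rough estimate of SU2_DDC memory use for the 21M-cell grid.
# BYTES_PER_CELL is an assumed figure: point coordinates, connectivity,
# adjacency, and partition bookkeeping with doubles and 64-bit indices
# can easily add up to a few hundred bytes per cell.
N_CELLS = 21e6
BYTES_PER_CELL = 500

base = N_CELLS * BYTES_PER_CELL
# If the writer buffers the partitions (plus halo layers) on top of the
# global grid before writing, peak usage can approach twice the base.
peak = 2 * base

print(f"base grid footprint : {base / 2**30:.1f} GiB")   # ~9.8 GiB
print(f"possible write peak : {peak / 2**30:.1f} GiB")   # ~19.6 GiB

Under those assumptions, the write step alone could push a single node past 16 GB, which would be consistent with the failure occurring during the write.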
We are aware of the SU2_DDC and I/O limitations for a 21M-cell grid, and the next monthly version of SU2, 2.0.2 (next Tuesday), will include substantial improvements: binary outputs (Tecplot, CGNS) without Python scripts for the merging process, and a new SU2_DDC. So please stay tuned.
I am running v2.0.2 now and still run into the out-of-memory problem. The grid has over 25 million elements. The code stops while writing the grid partitions, just as with the earlier version. A 3-million-element grid does partition correctly, but has problems writing the flow files (see separate post).
The computer I am using has 16 GB per node.
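For anyone who wants to confirm that it is the write step exhausting the node, a minimal watcher sketch (assumes a Linux node with /proc/meminfo; run it in a second shell while SU2_DDC works, and note the timestamp at which available memory collapses):

Code:
# Poll the node's available memory every few seconds and log it.
import time

def available_kib():
    """Read MemFree + Cached from /proc/meminfo (values are in kiB)."""
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            fields[key] = int(value.split()[0])
    return fields["MemFree"] + fields.get("Cached", 0)

while True:
    print(f"{time.strftime('%H:%M:%S')}  available: "
          f"{available_kib() / 2**20:.2f} GiB")
    time.sleep(5)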
Unfortunately, this is a show-stopper for using SU2 for real problems.
Thank you for your post. We are aware of this memory leak, and it will be fixed in the next monthly release. Look for it at the beginning of next week.