SU2 High Memory Footprint - CGNS Reader
Guys,
I'm observing a very high per-process memory footprint when running SU2_CFD.

nElements: 85 million
nNodes: 20 million
Case: NASA CRM model (CGNS mesh)
nProcs: 24
Memory per process: ~10 GB

It looks like the CGNS reader loads the entire mesh onto every partition? Also, I cannot use more than 32 processors for this mesh size; the run exits while reading a particular section. Any comments?

Cheers,
Dominic
Hi Dominic,
I am also facing the same issue. I attempted a solution on a 40-million-cell grid, and the memory taken per node is very high. I was also checking whether the CGNS grid could be written in SU2 format, to see if the situation improves. Earlier versions had an option to convert and write the grid via CGNSTOSU2; in later versions the write option has been removed, i.e., the grid is converted but can no longer be written out. I hope this issue will be resolved in the next release. Best.
Hi,
I checked the same problem with a commercial code; its memory requirement is similar to SU2's. Still, the memory taken by SU2 is on the higher side. I was going through another thread on the SU2 forum, and others have had a similar experience. The developers also pointed out that this issue is being continuously improved.

Regards,
Amit
Quote:
If you are interested in converting the grid, you can use SU2_MSH instead of CGNSTOSU2. To work properly, it has to be run in serial. Regards.
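For reference, a minimal sketch of that conversion workflow, assuming a standard SU2 configuration file (the file names here are hypothetical; `MESH_FORMAT`, `MESH_FILENAME`, and `MESH_OUT_FILENAME` are the usual SU2 config options, but check your version's config template):

```
% convert.cfg -- hypothetical config for the SU2_MSH conversion step
MESH_FORMAT= CGNS
MESH_FILENAME= crm_mesh.cgns
MESH_OUT_FILENAME= crm_mesh.su2
```

Then run SU2_MSH in serial (not under mpirun), e.g. `SU2_MSH convert.cfg`, and point the subsequent SU2_CFD run at the generated .su2 file.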
Jiba and Amit,
Thanks for your replies. I was able to use SU2_MSH to convert the CGNS mesh to the SU2 format. I can currently run on 1536 cores with the following memory footprint:

Partitioning stage (ParMETIS): 1.5 GB per process
Communication partitioning: 2.5 GB per process
Running stage: 1.6 GB per process
Time per step: 1.54 s (on 768 processes it is 2.5 s)

However, OpenFOAM has a relatively lower memory footprint for the exact same problem: about 3 GB per process during runtime on 24 processes. But from the point of view of robustness/stability, SU2 is definitely better.

Cheers,
Dominic
Dominic, all,
I am glad to hear that you were able to get it working. Indeed, we have just recently put in even more fixes to the memory usage (especially during partitioning), so you might try the current 'develop' branch on GitHub with your large cases. Please keep us up to date on your findings so that we can keep improving the code. Take care, Tom