Using reconstructParMesh on a cluster, bottleneck and out of memory
Hello all,
I think there is something I don't understand about reconstructParMesh. I have been trying to reconstruct large meshes that I produce on a somewhat antiquated cluster. Even on the old hardware, snappyHexMesh can build a large mesh in less than a day. However, I am running into a major bottleneck when I try to reconstruct it: reconstructParMesh runs for over a day and then crashes because it is out of memory, and the largest-memory node I have access to has 256 GB. Also, every time I run reconstructParMesh, I get this:
Thanks! Abe
Yes, exactly. It is not an ideal option, but it might be worth it. My idea was that maybe there is a way to transfer the decomposed mesh that snappyHexMesh -parallel writes in the processor* folders into the corresponding "constant" folders of a decomposed simpleFoam (or other solver) case. This would of course limit me to running the solver on the same number of processes as snappyHexMesh. I am not sure it would even work, or whether there is an easy way to transfer everything without writing a script.
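For what it's worth, the transfer itself is just a per-processor copy of constant/polyMesh, so a small script should do it. Below is a minimal sketch of that idea; the case paths (meshCase, solverCase) are placeholders, the toy directory setup at the top only stands in for a real snappyHexMesh -parallel output, and it assumes both cases were decomposed onto the same number of processors:

```shell
#!/bin/sh
set -e

# --- demo scaffolding: a stand-in for a real parallel meshing case ---
# (in practice MESH_CASE/SOLVER_CASE would be your existing case dirs)
MESH_CASE=$(mktemp -d)/meshCase
SOLVER_CASE=$(mktemp -d)/solverCase
for p in 0 1; do
    mkdir -p "$MESH_CASE/processor$p/constant/polyMesh"
    echo "dummy" > "$MESH_CASE/processor$p/constant/polyMesh/points"
    mkdir -p "$SOLVER_CASE/processor$p"
done

# --- the actual transfer: copy each per-processor mesh across ---
for procDir in "$MESH_CASE"/processor*; do
    proc=$(basename "$procDir")
    mkdir -p "$SOLVER_CASE/$proc/constant"
    # overwrite the solver case's per-processor mesh with the snappy one
    cp -r "$procDir/constant/polyMesh" "$SOLVER_CASE/$proc/constant/"
done

# count how many per-processor meshes ended up in the solver case
count=0
for d in "$SOLVER_CASE"/processor*/constant/polyMesh; do
    count=$((count+1))
done
echo "copied $count per-processor meshes"
```

Note this only moves the mesh; the initial fields in each processorN/0 folder would still have to be decomposed to match the same layout, and the processor count of the solver run is then locked to the meshing run, as mentioned above.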
I just figured I was doing something wrong, since I am able to produce much larger meshes than I can reconstruct. reconstructParMesh seems to hold the entire mesh in memory before writing it to disk, which gets really slow (or crashes) once I exceed the RAM limit of the largest-memory node I have access to. I am not very strong in computer science, so I would be curious to learn why reconstructParMesh has to run in serial, or why it can't write parts of the mesh to disk while running to free up some RAM. I could be interpreting all of that wrong, and it is also possible that there are tricks to reconstructing large meshes that I am not aware of... Any insight is helpful! Thanks-