CFD Online Discussion Forums

CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Meshing & Mesh Conversion (https://www.cfd-online.com/Forums/openfoam-meshing/)
-   -   [Other] Using reconstructParMesh on a cluster, bottle neck and out of memory (https://www.cfd-online.com/Forums/openfoam-meshing/187364-using-reconstructparmesh-cluster-bottle-neck-out-memory.html)

KTG May 4, 2017 20:35

Using reconstructParMesh on a cluster, bottleneck and out of memory
 
Hello all,

I think there is something I don't understand about reconstructParMesh. I have been generating large meshes on a somewhat antiquated cluster. Even with the old hardware, snappyHexMesh can produce a large mesh in less than a day. However, I hit a major bottleneck when I try to reconstruct it: reconstructParMesh runs for over a day and then crashes because it runs out of memory, and the largest-memory node I have access to has 256 GB. Also, every time I run reconstructParMesh, I get this:

Quote:

This is an experimental tool which tries to merge individual processor
meshes back into one master mesh. Use it if the original master mesh has
been deleted or if the processor meshes have been modified (topology change).
This tool will write the resulting mesh to a new time step and construct
xxxxProcAddressing files in the processor meshes so reconstructPar can be
used to regenerate the fields on the master mesh.

Not well tested & use at your own risk!
Is there an alternative way to put meshes back together that I don't know about? Or some kind of workaround for reconstructing incrementally so that I don't exceed the RAM limit? Or is there a way to run solvers on the decomposed mesh cases?

Thanks!

Abe

akidess May 5, 2017 02:37

Quote:

Originally Posted by KTG (Post 647704)
Or is there a way to run solvers on the decomposed mesh cases?

You mean like running the solvers in parallel? :confused:
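The fully decomposed workflow looks roughly like this (the OpenFOAM utility names are real; the core count of 8 and the choice of simpleFoam are placeholders, and reconstructPar at the end only merges the solution fields, not the whole mesh):

```shell
# Sketch of a fully parallel chain: mesh, solve, and only merge fields
# at the end. Run from inside the case directory.
decomposePar                                     # split the case per system/decomposeParDict
mpirun -np 8 snappyHexMesh -overwrite -parallel  # mesh in parallel; -overwrite writes into processor*/constant
mpirun -np 8 simpleFoam -parallel                # solve directly on the decomposed mesh
reconstructPar -latestTime                       # merge fields for post-processing
```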

KTG May 5, 2017 12:48

Yes, exactly. It is not a good option, but might be worth it. My idea was that maybe there was a way to transfer the decomposed mesh output of snappyHexMesh -parallel from the processor* folders into the "constant" folders in a decomposed simpleFoam (or other solver) case. This would of course limit me to running the same number of processes for the solver as for snappyHexMesh. I am not sure that it would even work, or if there is an easy way to transfer everything without writing a script.
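To make the idea concrete, here is a rough shell sketch of the copy I have in mind (the case names, processor count, and the time directory "2" are all made up for illustration; the fixture section just stands in for a real snappyHexMesh result):

```shell
# --- demo fixture: stands in for a real parallel snappyHexMesh case ---
MESH_CASE=$(mktemp -d)/meshCase   # case where snappyHexMesh -parallel ran
RUN_CASE=$(mktemp -d)/runCase     # decomposed solver case to receive the mesh
for i in 0 1; do
    mkdir -p "$MESH_CASE/processor$i/2/polyMesh"
    touch "$MESH_CASE/processor$i/2/polyMesh/points"
done

# --- the actual transfer: copy each processor's latest mesh into the
# --- matching processor's constant/ directory of the solver case
for proc in "$MESH_CASE"/processor*; do
    p=$(basename "$proc")                        # processor0, processor1, ...
    mkdir -p "$RUN_CASE/$p/constant"
    cp -r "$proc/2/polyMesh" "$RUN_CASE/$p/constant/"
done
```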

I just figured I was doing something wrong, since I can produce much larger meshes than I can reconstruct. reconstructParMesh seems to hold the entire mesh in memory before writing it to disk, which gets really slow (or crashes) once I exceed the RAM limit on the largest-memory node I have access to. I am not very strong in computer science, so I would be curious to learn why reconstructParMesh has to run in serial, and why it can't write parts of the mesh to disk while running to free up some RAM. I could be interpreting all of that wrong, and it is also possible that there are tricks to reconstructing large meshes that I am not aware of... Any insight is helpful!

Thanks-

