CFD Online (www.cfd-online.com)
Forums > OpenFOAM > OpenFOAM Meshing & Mesh Conversion

[Other] Using reconstructParMesh on a cluster: bottleneck and out of memory

May 4, 2017, 20:35   #1
Using reconstructParMesh on a cluster: bottleneck and out of memory
KTG (Abe)
Senior Member
Join Date: May 2016
Posts: 119
Hello all,

I think there is something I don't understand about reconstructParMesh. I have been trying to reconstruct large meshes that I am producing on a somewhat antiquated cluster. Even with the old hardware, I can produce a large mesh with snappyHexMesh in less than a day. However, reconstruction is a major bottleneck: reconstructParMesh runs for over a day and then crashes because it runs out of memory, and the largest-memory node I have access to has 256 GB. Also, every time I run reconstructParMesh, I get this warning:

Quote:
This is an experimental tool which tries to merge individual processor
meshes back into one master mesh. Use it if the original master mesh has
been deleted or if the processor meshes have been modified (topology change).
This tool will write the resulting mesh to a new time step and construct
xxxxProcAddressing files in the processor meshes so reconstructPar can be
used to regenerate the fields on the master mesh.

Not well tested & use at your own risk!
Is there an alternative way to put meshes back together that I don't know about? Or some kind of workaround for reconstructing incrementally so that I don't exceed the RAM limit? Or is there a way to run solvers on the decomposed mesh cases?

Thanks!

Abe

May 5, 2017, 02:37   #2
akidess (Anton Kidess)
Senior Member
Join Date: May 2009
Location: Germany
Posts: 1,377
Quote:
Originally Posted by KTG View Post
Or is there a way to run solvers on the decomposed mesh cases?
You mean like running the solvers in parallel?
__________________
*On twitter @akidTwit
*Spend as much time formulating your questions as you expect people to spend on their answer.

May 5, 2017, 12:48   #3
KTG (Abe)
Senior Member
Join Date: May 2016
Posts: 119
Yes, exactly. It is not a good option, but might be worth it. My idea was that maybe there was a way to transfer the decomposed mesh output of snappyHexMesh -parallel from the processor* folders into the "constant" folders in a decomposed simpleFoam (or other solver) case. This would of course limit me to running the same number of processes for the solver as for snappyHexMesh. I am not sure that it would even work, or if there is an easy way to transfer everything without writing a script.
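A minimal sketch of the transfer step described above, so it would not need a hand-rolled script each time. It assumes snappyHexMesh -parallel wrote the final mesh into a time directory (here "2"; check your own case, since the directory name depends on your meshing steps) inside each processor* folder, and that the solver will later be launched with the same number of ranks. Note that running snappyHexMesh with the -overwrite flag writes the mesh straight into constant/polyMesh and makes this shuffle unnecessary.

```shell
# Sketch: promote the mesh that snappyHexMesh -parallel wrote into each
# processor*/<time>/polyMesh to processor*/constant/polyMesh, so a solver
# can run on the decomposed case without reconstructing the mesh first.
# The time directory name "2" is an assumption -- adjust for your case.
set -e

# Demo setup: a mock decomposed case standing in for real snappy output.
case_dir=$(mktemp -d)
for i in 0 1; do
    mkdir -p "$case_dir/processor$i/2/polyMesh"
    touch "$case_dir/processor$i/2/polyMesh/points"
done

# The actual promotion step:
for proc in "$case_dir"/processor*; do
    if [ -d "$proc/2/polyMesh" ]; then
        rm -rf "$proc/constant/polyMesh"
        mkdir -p "$proc/constant"
        mv "$proc/2/polyMesh" "$proc/constant/polyMesh"
    fi
done

ls "$case_dir/processor0/constant/polyMesh"   # -> points

# Afterwards, run the solver in parallel on the decomposed case with the
# same decomposition, e.g.:
#   mpirun -np 2 simpleFoam -parallel
```

This only moves directories, so it is fast and needs no extra RAM; the constraint is that the solver run must use the same number of processes (and the same decomposeParDict) as the meshing run.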

I just figured I was doing something wrong, since I am able to produce much larger meshes than I can reconstruct. reconstructParMesh seems to hold the entire mesh in memory before writing it to disk, which gets really slow (or crashes) once it exceeds the RAM on the largest-memory node I have access to. I am not very strong in computer science, so I would be curious to learn why reconstructParMesh has to run in serial, and why it can't write parts of the mesh to disk while running to free up some RAM. I could be interpreting that all wrong, and it is also possible that there are some tricks to reconstructing large meshes that I am not aware of... Any insight is helpful!

Thanks-
