blockMesh breaks down when handling huge cell counts
Hello Foamers~!
Here I met a serious problem: when I want to generate a mesh with about 1e8 cells, blockMesh just breaks down. Here is my mesh dict: Code:
vertices
"Breaks down" is a very unspecific description. I'd blame your memory.
Just a complete shot in the dark: Did you compile OpenFOAM with 32 or 64 bit label size?
see e.g. http://openfoamwiki.net/index.php/Label and https://github.com/OpenFOAM/OpenFOAM...etc/bashrc#L81
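For reference, a minimal sketch of switching to 64-bit labels when building OpenFOAM from source; the checkout path is a placeholder, and `WM_LABEL_SIZE` is the switch documented in the `etc/bashrc` linked above:

```shell
# Build OpenFOAM with 64-bit labels (sketch; the source path is a placeholder).
# WM_LABEL_SIZE=64 makes 'label' a 64-bit integer, lifting the ~2^31 limit
# on addressable mesh entities (at the cost of more memory per label).
cd ~/OpenFOAM/OpenFOAM-dev        # your source checkout
export WM_LABEL_SIZE=64           # or edit the default in etc/bashrc
source etc/bashrc
./Allwmake -j
```

Note that 64-bit labels do not reduce memory use; they only remove the integer-overflow limit. A 1e8-cell mesh can still exhaust RAM.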
Quote:
I now describe my situation in more detail: as you predicted, the memory ran out quickly and the hard disk was busy. Then the program did not respond any more and I had to shut it down forcibly. I think blockMesh generates all of the mesh information in memory and does not write to disk until the computation is finished, so memory becomes the bottleneck of my system. Here is my idea about how to solve it:
1. generate a coarse mesh
2. decompose the mesh into different sub-meshes
3. refine every sub-mesh in parallel
Would you please give me some advice about how to make this work? Thanks very much!
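For what it's worth, here is a back-of-envelope estimate of why 1e8 cells exhausts RAM. The per-entity sizes are rough assumptions (all-hex mesh, roughly one point per cell, 64-bit labels, double-precision points), not measured values, and blockMesh's working overhead comes on top:

```shell
# Rough lower bound on mesh storage for n hex cells (assumed sizes):
#   points: ~n points * 3 doubles * 8 B
#   cells:  n cells * 8 point labels * 8 B
#   faces:  ~3 internal faces/cell * (4 point labels + owner + neighbour) * 8 B
awk -v n=100000000 'BEGIN {
  points = n * 3 * 8
  cells  = n * 8 * 8
  faces  = n * 3 * (4*8 + 2*8)
  printf "rough lower bound: %.0f GiB\n", (points + cells + faces) / 1024^3
}'
```

That is on the order of 20 GiB before any overhead, which matches the observed behaviour of RAM filling up and the machine swapping to disk.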
Quote:
I work with Ubuntu 16.04 and I installed OpenFOAM with the apt command, so I think my label size is the default value :( Is there a way to install OpenFOAM with a 64-bit label size via apt? Thank you very much!
Yes, you can go that way. Some ideas here:
http://www.cfd-online.com/Forums/ope...eneration.html
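To see which label size an already-installed version uses (assuming an OpenFOAM environment has been sourced), the build string encodes it:

```shell
# Int32 vs Int64 in the build name tells you the label size.
echo "$WM_OPTIONS"      # e.g. linux64GccDPInt32Opt -> 32-bit labels
echo "$WM_LABEL_SIZE"   # prints 32 or 64 on versions that define it
```

The output is environment-dependent, so treat this as an inspection aid rather than a guaranteed result.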
Quote:
I have tried the 3-step meshing:
1. blockMesh a coarse mesh and topoSet a cellSet
2. decomposePar to make it ready for parallel processing
3. mpirun -np 8 refineMesh -overwrite -parallel
This works fine until the memory bottleneck is hit :( Now I have the idea of refining one subMesh at a time, but still in parallel. I guess that if the meshing domain could be limited to one subMesh while still exploiting all the CPU cores, then the memory needed would be affordable for me and the performance would be OK.
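The three steps above, written out as a command sequence (the utility names are standard OpenFOAM tools; the core count and the dictionary contents they read, `topoSetDict` and `system/decomposeParDict`, are placeholders for this case):

```shell
# 1. coarse background mesh + selection of cells to refine
blockMesh
topoSet                                   # builds the cellSet named in topoSetDict
# 2. split into subdomains (numberOfSubdomains 8 in system/decomposeParDict)
decomposePar
# 3. refine in parallel, overwriting the decomposed mesh in place
mpirun -np 8 refineMesh -overwrite -parallel
# optionally rebuild a single mesh afterwards (itself memory-hungry)
reconstructParMesh -constant
```

Note that the reconstruction step brings the whole refined mesh back into one process's memory, which is exactly the bottleneck described above.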
If you run on a single node, running in parallel will of course not gain you anything in terms of memory. If you don't have more than a single node, you are out of luck, I think. Yes, you can mesh subdomains separately and then stitch, but you'll probably still run out of memory once you stitch and get the large mesh, or finally when you attempt to solve.
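The stitch route mentioned here would look roughly like this; mergeMeshes and stitchMesh are standard OpenFOAM utilities, but the case names and patch names below are placeholders, and the merged mesh must still fit in one node's memory:

```shell
# Merge a separately meshed sub-case into the master case, then stitch
# the coincident boundary patches into internal faces (names are placeholders).
mergeMeshes masterCase subCase
cd masterCase
stitchMesh -overwrite couplePatchA couplePatchB
```

The argument conventions of mergeMeshes vary between OpenFOAM versions, so check `mergeMeshes -help` for the one you have installed.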