blockMesh for bigger mesh
Dear all,
I am trying to generate a blockMesh with 6.75 million cells and have some trouble: Code:
new cannot satisfy memory request. |
Hi Gijs,
I cannot say anything about blockMesh, but I've observed a similar barrier with snappyHexMesh. On a machine with 12 GB of RAM, I was able to obtain a mesh of approximately 3.3 million cells. Maybe there is no alternative but to use more memory? I'd be glad if anybody has another suggestion. Jens |
Hi,
You could split up your geometry into smaller parts and then mesh each part, then merge and stitch the part meshes together with the mergeMeshes and stitchMesh utilities. This is not a very nice way of doing it, but if you can't figure anything else out then it might be an option. Philip |
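A sketch of Philip's suggestion, assuming an OpenFOAM environment is sourced. The case names are hypothetical, and the exact mergeMeshes argument list varies between OpenFOAM versions, so check `mergeMeshes -help` before running:

```shell
# Mesh each part separately, then combine them (hypothetical case names).
mergeMeshes masterCase partCase      # add partCase's mesh into masterCase
cd masterCase
stitchMesh masterPatch slavePatch    # join the two coincident patches into internal faces
checkMesh                            # verify the combined mesh is valid
```

The two patches passed to stitchMesh must be geometrically coincident, which is why this approach works best for meshes that were split along a clean planar interface.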
Hi guys,
@Jens: That's a long time ago, how's life? Hope you're doing well! Hmm, it seems that brute force (or actually memory) is the only way ... I'll let you know if I find something out. @Philip: Thanks for the suggestion, good idea. Would a renumberMesh be helpful after I merge and stitch? |
Gijs,
I'm not very familiar with renumberMesh but it seems to give you a more efficient mesh, so yes it probably would be a good idea after merging and stitching. I'll keep that in mind the next time I use merge and stitch! Philip |
Hi Philip,
I used renumberMesh after converting Fluent meshes to OpenFOAM. After the conversion the cellID numbering is not the most efficient for use in OF. renumberMesh renumbers the cellIDs for more efficient calculation, it can speed up the calculation time actually. However, I heard it doesn't work with MRF meshes. Maybe there are some tricks for that ... |
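The conversion-then-renumber workflow described above can be sketched as follows (assuming a sourced OpenFOAM environment; flags may differ slightly between versions):

```shell
fluentMeshToFoam mesh.msh    # convert the Fluent/Gambit .msh file to OpenFOAM format
checkMesh                    # sanity check the converted mesh
renumberMesh -overwrite      # renumber cellIDs to reduce matrix bandwidth
```

The speed-up comes from reducing the bandwidth of the coefficient matrix: cells that are neighbours in the mesh get nearby cellIDs, which improves cache behaviour during the linear solves.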
I will keep it in mind when I convert meshes from Gambit. Philip |
<offtopic>
So renumberMesh can help when using Fluent meshes? Does this work for ICEM grids exported in the Fluent format as well? I've had a lot of trouble with that in the past. @gijs: Life is great so far :) </offtopic> |
Hi Jens,
Good to hear you are well :). I have seen this effect with Gambit meshes (the .msh extension) converted to OF. I suppose that other meshing tools also do not necessarily generate the cellID ordering that is optimal for OF to calculate with. Some codes order cells by "x-y-z", but OF "stacks" cells on a pile based on cellID. Of course, OF also uses "x-y-z", but perhaps in a slightly different way. I am not sure, but my guess is that renumberMesh may also be effective for meshes generated by other tools, e.g. ICEM. |
Thanks Gijs, I'll give it a try on the next ICEM mesh I need for my simulations.
|
One way to make bigger blockMeshes is to make a smallish blockMesh, decompose it and then use snappyHexMesh with volume refinement to refine it into a bigger mesh.
The memory requirements of snappy are large because it always stores 2 meshes (new and old) and each processor in a parallel run contains the entire set of surface meshes used to define the geometry. It could be better though. |
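Eugene's workflow above can be sketched as a command sequence (assuming a standard OpenFOAM case with a `system/decomposeParDict` set up for, say, 8 processors; the processor count and flags are illustrative):

```shell
blockMesh                       # coarse background mesh, small enough for one process
decomposePar                    # split the case across processors
mpirun -np 8 snappyHexMesh -parallel -overwrite   # refine in parallel
reconstructParMesh -constant    # optional: reassemble a single mesh afterwards
```

The volume refinement itself is configured in `system/snappyHexMeshDict` under `castellatedMeshControls` (refinement regions and levels), so each processor only ever holds its own slice of the refined mesh.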
Ah great, thanks Eugene, I will give it a try.
We do have a larger machine available, with multiple cores on a single node. Does anyone know whether blockMesh is multithreaded, so that it could use the full amount of memory on the node? Thanks in advance! |
blockMesh is single core only. Just about all multi-socket nodes these days have a shared memory architecture though, so each core can use all the memory on the node if required.
|
Thanks, Eugene, I will have a try :)
|
I just started using OpenFOAM to evaluate it for multi-core/many-core machines. I need to create a mesh with a large number of cells (~2 million).
So I was looking at the nonNewtonianIcoFoam tutorial. I refined the mesh using the refineMesh utility. The current mesh structure is as follows:

Mesh stats
    points:           14884
    internal points:  0
    faces:            29042
    internal faces:   14182
    cells:            7200
    boundary patches: 6
    point zones:      0
    face zones:       0
    cell zones:       0

Overall number of cells of each type:
    hexahedra:  7180
    prisms:     0
    wedges:     0
    pyramids:   0
    tet wedges: 0
    tetrahedra: 0
    polyhedra:  20

I would like to increase the number of cells and faces in this mesh to a large value. How can it be done using blockMesh? Please suggest the required changes. It would be great if I could get some help. Thanks!! |
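For reference, in blockMesh the cell count is controlled directly in blockMeshDict: each hex block entry carries a triple giving the number of cells in each direction, and the total is their product. A minimal sketch of such a fragment (the vertex numbering and counts are hypothetical, not taken from the tutorial):

```
// Fragment of a blockMeshDict (constant/polyMesh/ or system/, depending on version).
// The triple (120 150 100) sets the cell divisions per direction:
// this single block alone yields 120 x 150 x 100 = 1.8 million cells.
blocks
(
    hex (0 1 2 3 4 5 6 7) (120 150 100) simpleGrading (1 1 1)
);
```

Doubling each of the three numbers multiplies the total cell count by 8, which is why memory requirements grow so quickly when refining uniformly.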
Hello everybody,
it has been a long time since anyone wrote in this thread. Today, four years later, I have the same kind of problem. I would like to build a ~18 Mcells blockMesh. Do I just need to request more memory on my cluster? I am asking for 8000 MB and it is still too little... Any idea / suggestion / comment ? |
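A rough sanity check on the memory request, by linearly scaling the one data point this thread provides (Jens fit ~3.3 million cells in ~12 GB, albeit with snappyHexMesh, whose per-cell footprint differs from blockMesh's). The scaling function and its numbers are a back-of-envelope illustration, not a measured blockMesh requirement:

```python
# Back-of-envelope memory estimate by linear scaling from a known data point.
# Reference point from this thread: ~3.3M cells in ~12 GB (snappyHexMesh);
# blockMesh's actual per-cell footprint differs, so treat this only as a rough lower bound check.

def estimate_memory_mb(target_cells, ref_cells=3.3e6, ref_memory_mb=12000):
    """Linearly scale a reference (cells, memory) point to a target cell count."""
    return target_cells / ref_cells * ref_memory_mb

# An 18M-cell mesh under this scaling needs far more than the 8000 MB requested:
print(f"~{estimate_memory_mb(18e6):.0f} MB")
```

Under this crude scaling an 18 Mcell mesh would want on the order of 65 GB, so an 8000 MB allocation being "still too few" is entirely consistent with the earlier posts.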