blockMesh - parallel mesh generation
I have just a short question.
Is a parallel mesh generation with blockMesh possible?
Thanks and best regards!
I'm not sure what you mean by parallel mesh generation. But if you want to define a rectangular domain with a mesh (consisting of hexahedral cells, which can be graded uniformly or non-uniformly): this is possible.
I normally define my (3D) domains with blockMesh and insert other geometries with snappyHexMesh. But I'm still a beginner in OpenFOAM, so this may not be the best way to prepare a geometry.
I think he meant if he can run blockMesh in parallel, and I think the answer is no.
hi felix and akidess
akidess is right.
Until now I have used...
mpiexec -n 12 snappyHexMesh -parallel
... and it worked fine.
Thanks akidess for your quick answer.
Now that we're in 2013 :cool: , do you know if there's a way to run blockMesh in parallel?
I am trying to run a 3D case on a cluster, with 28 blocks and a total of 67x10^6 cells... Hence, blockMesh is taking forever.
Do you have any suggestions? :confused:
No seriously: blockMesh doesn't work in parallel, and it would be hard to write such a thing (think about it: you're asking the decomposition algorithm to decompose the mesh before the actual cells are known).
Anyway: one way would be the following (I'm only sketching this. You have a cluster for a 67m cell mesh so I assume you either have the time to set this up yourself or the money to buy support to do this for you):
- set up blockMesh cases for single blocks (or small manageable groups). Set up the future processor boundaries as patches (but give them type patch for now)
- run blockMesh on each of these blocks
- copy the constant/polyMesh into the appropriate processorX/constant/polyMesh directory
- either edit by hand or have a script edit the polyMesh/boundary files to change the type of those patches from patch to processor
Now you have a mesh that is decomposed for 28 processors (assuming each block gets its own processor). If that is not the number of processors you want to run the calculation on, or if the mesh is unbalanced because the blocks do not have similar cell counts, then you can use the redistributePar utility to get an evenly distributed mesh on any number of processors you like.
All these steps can be scripted (even decomposing the big blockMeshDict into 28 dicts ... but it won't be pretty)
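A minimal sketch of the copy-and-edit steps above, run against a toy boundary file rather than a real case (the patch name toProc1, the demo directory layout, and the processor numbers are all assumptions). Note that a real processor patch also needs myProcNo and neighbProcNo entries, which the sed command inserts here by hand:

```shell
#!/bin/sh
# Sketch only: mimic copying a block mesh into processor0 and flipping
# its inter-block patch from "patch" to "processor".
set -e
mkdir -p demo/processor0/constant/polyMesh

# toy polyMesh/boundary with one future processor patch, "toProc1"
cat > demo/processor0/constant/polyMesh/boundary <<'EOF'
(
    toProc1
    {
        type            patch;
        nFaces          100;
        startFace       2300;
    }
)
EOF

# flip toProc1 to type processor and add the two entries a processor
# patch needs (assumed here: processor 0 talks to processor 1)
sed -i '/toProc1/,/}/ s/type  *patch;/type            processor;\
        myProcNo        0;\
        neighbProcNo    1;/' demo/processor0/constant/polyMesh/boundary

cat demo/processor0/constant/polyMesh/boundary
```

In a real script you would loop this over all processorX directories and only touch the patches that actually sit between blocks, then finish with something like mpirun -np N redistributePar -parallel to rebalance.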
Although this is a year later, I think a nice solution would also be to:
1. make a coarse mesh with blockMesh
2. decomposePar it
3. run topoSet in parallel to add sets to constant/polyMesh; in system/topoSetDict, create a cellSet via a large box that includes all cells to be refined
4. run refineHexMesh in parallel as many times as you desire
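For the topoSet step, a hedged example of what system/topoSetDict might look like (the set name refineSet and the box bounds are assumptions; adjust them to your geometry):

```
actions
(
    {
        name    refineSet;      // cellSet later passed to refineHexMesh
        type    cellSet;
        action  new;
        source  boxToCell;
        sourceInfo
        {
            box (0 0 0) (1 1 1);    // large box enclosing the cells to refine
        }
    }
);
```

Then something like mpirun -np 4 topoSet -parallel followed by mpirun -np 4 refineHexMesh refineSet -overwrite -parallel should refine the decomposed mesh in place (processor count and flags are assumptions based on the usual utility conventions; check refineHexMesh -help on your version).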