Home > Forums > OpenFOAM Mesh Utilities

SnappyHexMesh in Parallel


Old   September 16, 2008, 19:21
  #1
Senior Member
 
BastiL
Join Date: Mar 2009
Posts: 462
Hi all,

I started my first tries with snappyHexMesh in parallel instead of serial. Everything works quite well so far. However, I am wondering about the decomposition strategy. The way it is intended to work seems to be:
1. Build the underlying hex mesh (blockMesh)
2. Decompose the hex mesh into n parts (decomposePar)
3. Run snappyHexMesh in parallel with n processes
4. Get the final snapped mesh distributed into n parts

So I guess you should use the same number of partitions for meshing as you intend to use for the calculation. If you do so, there is no need for redecomposition.
If you want a single mesh, you may run reconstructParMesh. This worked for me.
If you run the calculation on m processes with m < n, redistributeParMesh may do the job, but I have not tried it.
More interesting to me: I want to run the calculation on m cores with m > n. redistributeParMesh does not seem to be able to handle redistribution to a larger number of domains, or is it?
Another question: due to lack of memory I do not want to start all parallel processes at once but, e.g., 2 out of 4 and the other two after the first two finish. Of course during meshing there is then no opportunity for dynamic load balancing, but it would save RAM. Is this possible?
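The four steps above can be sketched as a shell session. This is a dry-run sketch: the run wrapper only echoes each command, so nothing is executed; remove the wrapper to run for real (which requires the OpenFOAM utilities in your PATH). The process count and the -mergeTol value are illustrative assumptions, not values from this thread.

```shell
# Dry-run sketch of the parallel meshing workflow described above.
# 'run' only echoes each command; remove it to actually execute
# (requires the OpenFOAM utilities in your PATH).
run() { echo "$@"; }

run blockMesh                                   # 1. underlying hex mesh
run decomposePar                                # 2. split into n parts
run mpirun -np 4 snappyHexMesh -parallel        # 3. mesh with n processes
run reconstructParMesh -mergeTol 1e-06          # 4. optional: one merged mesh
```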

Regards

Old   September 17, 2008, 04:54
  #2
Super Moderator
 
Mattijs Janssens
Join Date: Mar 2009
Posts: 1,416
redistributeParMesh should be able to handle m > n. Just start it with the larger of the two numbers. I don't know whether it reads a mesh from the time directories or always from constant, so you might have to move your polyMesh into constant beforehand.

snappyHexMesh needs parallel communication in all phases, not just in the load-balancing phase, so no, you cannot run the processes in sequence.

Old   September 17, 2008, 05:57
  #3
Senior Member
 
BastiL
Join Date: Mar 2009
Posts: 462
Mattijs,

thanks for this answer. So far I have only got it working for m = n. I will run some more tests for m > n and m < n this afternoon and get back to you afterwards.

Old   September 17, 2008, 12:05
  #4
Senior Member
 
BastiL
Join Date: Mar 2009
Posts: 462
redistributeParMesh seems to read from the time directories. However, it does not work that way. I got it to redistribute a mesh from 2 to 3 parts using redistributeParMesh. It is a little tricky and only worked with hierarchical or metis, not with parmetis. Using metis I get the warning:

You have selected decomposition method decompositionMethod which does
not synchronise the decomposition across processor patches.

I do not understand the meaning and consequences of that.

Old   September 18, 2008, 09:29
  #5
Senior Member
 
BastiL
Join Date: Mar 2009
Posts: 462
I managed to run the snappyHexMesh processes one after another manually. This also seems to work, with some tricks. This leads to two questions for me:

- Why not replace decomposePar with redistributeParMesh? It also works for 1-to-many parts and runs in parallel, which helps save memory...
- Why not implement an option to run snappyHexMesh in parallel with the individual parts processed sequentially instead of all at once? This would also save memory. Clusters are rarely used for meshing, because snappyHexMesh is the first tool I know of that is able to use them.

Regards

Old   September 18, 2008, 10:30
  #6
Senior Member
 
Eugene de Villiers
Join Date: Mar 2009
Posts: 725
You must be some kind of miracle worker, Basti, because there is a lot of communication required between processors in snappyHexMesh. Running the processors separately in sequence will not produce anything remotely resembling what you would get if you ran it in parallel.

If you are going to solve a case on a cluster, why not mesh it there as well? Saves you a whole lot of trouble.

Old   September 18, 2008, 11:52
  #7
Senior Member
 
BastiL
Join Date: Mar 2009
Posts: 462
Eugene,

two reasons why that is currently a problem for me:
- Our current environment was not designed for this, because current commercial meshers are not really parallel. Meshing with snappyHexMesh takes more RAM than solving, and I am running out of memory.
- On the other hand, I have meshing nodes with relatively large shared memory.

However, this is a limitation of the current hardware and may change in the future.
What I did to get it working is quite simple:
1. Run blockMesh for the underlying hex mesh
2. Distribute the hex mesh
3. Run snappyHexMesh on each of the underlying parts individually instead of in one run. (Of course you can run each part in parallel once again...) This is quite similar to what the old "proAM" can do. You have to change the "processor" patches to type "patch" for this to work.
4. Assemble the mesh. I sometimes had trouble with stitchMesh for that.

Maybe our next-generation cluster will solve all this. Regards.

Old   September 18, 2008, 12:31
  #8
Senior Member
 
Eugene de Villiers
Join Date: Mar 2009
Posts: 725
Well, it sure is an interesting approach, but I repeat: the meshes you generate like this will be substantially different from (and probably poorer in quality than) ones generated with all components online. Specifically, you will get weird jumps where the processor domains meet.

If you cannot mesh the entire thing due to insufficient memory, then I guess this is the only way to go.

Old   February 12, 2009, 05:27
  #9
Senior Member
 
Wolfgang Heydlauff
Join Date: Mar 2009
Location: Germany
Posts: 136
Hi,

thanks for your advice. Let me sum up the procedure:

- run "blockMesh" as usual
- the decomposition method in decomposeParDict must be hierarchical
- run "decomposePar"
- run "foamJob -p -s snappyHexMesh"
- afterwards run "reconstructParMesh -mergeTol 1e-06 -latestTime" (or -time 1; -time 2; ...)

Works perfectly. ("Yes, it can!") ;-)
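A minimal system/decomposeParDict for the hierarchical decomposition mentioned above might look like the following sketch. The subdomain counts are illustrative assumptions; adjust n so its product equals numberOfSubdomains.

```
// Sketch of system/decomposeParDict for the procedure above
// (subdomain counts are illustrative assumptions).
numberOfSubdomains  4;

method              hierarchical;

hierarchicalCoeffs
{
    n               (2 2 1);   // subdomains in x, y, z; product must equal 4
    delta           0.001;
    order           xyz;
}
```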

Old   June 25, 2009, 21:45
  #10
Senior Member
 
Louis Gagnon
Join Date: Mar 2009
Location: Québec, QC, Canada
Posts: 178
Hi,

I am having trouble running snappyHexMesh in parallel. I use hierarchical decomposition and can run snappyHexMesh in parallel as long as I skip the "snap" phase, which I absolutely need! Running the snap phase causes an immediate error.

My case runs fine on a single processor; however, when I run it in parallel (mpirun or foamJob -p), I get

Smoothing patch points ...
Smoothing iteration 0
Found 0 non-mainfold point(s).
[louis-dell:32518] *** An error occurred in MPI_Recv
[louis-dell:32518] *** on communicator MPI_COMM_WORLD
[louis-dell:32518] *** MPI_ERR_TRUNCATE: message truncated
[louis-dell:32518] *** MPI_ERRORS_ARE_FATAL (goodbye)


and changing MPI_BUFFER_SIZE does not solve the problem. It either changes the error message to a segmentation fault or a "cannot satisfy memory request" error. Even on a 40,000-cell mesh!!

Thanks for any hints on solving this!

-Louis

Old   August 4, 2009, 01:47
  #11
Member
 
Cem Albukrek
Join Date: Mar 2009
Posts: 50
How do you assign fields to the decomposed meshes that you generate with snappyHexMesh in parallel? Can it be done directly on the decomposed mesh, or does the mesh need to be reconstructed and then decomposed again?

Old   August 28, 2009, 14:40
  #12
Senior Member
 
Louis Gagnon
Join Date: Mar 2009
Location: Québec, QC, Canada
Posts: 178
What do you mean by fields?

Old   August 28, 2009, 14:52
  #13
Member
 
Cem Albukrek
Join Date: Mar 2009
Posts: 50
I was trying to find a way to assign the specified U, p, k, epsilon, nut, etc. flow-variable (field) boundary conditions to the decomposed mesh directly. The incentive was to be able to process large cases on a 32-bit parallel machine, as I thought the reconstructed mesh would violate the 32-bit memory limit, which is around 2.5 GB.

It turns out the serial processes for reconstruction, flow-field assignment and re-decomposition do not consume the memory for the whole mesh. So I do not need a solution to this issue at this point, although one would improve the overall process by avoiding the unnecessary mesh reconstruction and re-decomposition steps.

Last edited by albcem; August 28, 2009 at 15:28.

Old   August 28, 2009, 17:18
  #14
Senior Member
 
Louis Gagnon
Join Date: Mar 2009
Location: Québec, QC, Canada
Posts: 178
Well, maybe I don't understand properly, but as far as I know you can set the field conditions in the "0" folder. As for patch names, you have to define them prior to decomposition.

Best of luck,

-Louis

Old   September 9, 2009, 07:18
  #15
Member
 
Andrew King
Join Date: Mar 2009
Location: Perth, Western Australia, Australia
Posts: 81
Hi Louis,

Unfortunately snappyHexMesh adds new patches, but decomposePar doesn't pass on any field BCs for patches that don't exist at decomposition time, i.e. even if you've defined the BCs for the new patch in 0 before decomposition, it won't copy them to the processor directories. You can do it manually, but for large numbers of processors that is not practical.

However, I think there may be a workaround. If you create an empty patch in constant/polyMesh/boundary with the same name as the patch(es) that snappy adds, the field will be decomposed and all is fine.

To create the empty patch, open constant/polyMesh/boundary and find the last patch (which should look something like this):

Code:
    last_patch
    {
        type             wall;
        nFaces          1000;
        startFace       22000;
    }
To add a new patch, set startFace to the sum of nFaces and startFace from the last patch, and nFaces to 0, i.e.

Code:
    new_empty_patch
    {
        type             wall;
        nFaces          0;
        startFace       23000;
    }
The BC information for new_empty_patch should now be passed on when decomposePar runs (in theory).
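As a sanity check on the arithmetic above (22000 + 1000 = 23000), here is a small hypothetical shell helper for computing the new patch's startFace; the function name is made up for illustration.

```shell
# Hypothetical helper: compute startFace for a new empty patch appended
# after the last patch (startFace_new = startFace_last + nFaces_last).
new_start_face() { echo $(( $1 + $2 )); }

new_start_face 22000 1000    # prints 23000, matching the example above
```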

I'm about to test this, so I'll let you know if it works.

Cheers,
Andrew
__________________
Dr Andrew King
Fluid Dynamics Research Group
Curtin University

Old   September 23, 2009, 08:43
  #16
Senior Member
 
Louis Gagnon
Join Date: Mar 2009
Location: Québec, QC, Canada
Posts: 178
Dear Andrew,

did your approach work?

Best regards,

-Louis

Old   September 23, 2009, 10:15
  #17
Member
 
Andrew King
Join Date: Mar 2009
Location: Perth, Western Australia, Australia
Posts: 81
Hi Louis,

It worked in some ways: the empty patch worked for the decomposition, but the mesh was not in a state to run anything (missing cellProcAddressing files). I had to use reconstructParMesh followed by decomposePar again.

This approach seemed to work without running out of memory.

Cheers,
Andrew

Old   October 16, 2009, 06:38
Default triSurface directories
  #18
New Member
 
Simon Rees
Join Date: Mar 2009
Posts: 12
I have got some way with running snappyHexMesh in parallel (a great utility) but have a question. It seems that decomposePar does what you would normally expect for running a solver, but snappyHexMesh will not run in parallel (I am using v1.6 and doing 'foamJob -p -s snappyHexMesh -overwrite') unless I manually copy the constant/triSurface directory into processor?/constant/. This is true where I have used an STL file, but also in the iglooWithFridges tutorial, where there is only an edge file in the triSurface directory. Is this the expected behaviour? Is there something I am missing that would avoid this?
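The manual copying step described above can be scripted. This is a sketch assuming the usual processor* directory layout from the thread; the function name is made up for illustration.

```shell
# Copy constant/triSurface into every processor directory before running
# snappyHexMesh in parallel (workaround for the behaviour described above).
copy_trisurface() {
  for d in processor*; do
    [ -d "$d" ] || continue            # skip if no processor dirs exist yet
    mkdir -p "$d/constant"
    cp -r constant/triSurface "$d/constant/"
  done
}

copy_trisurface
```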

Thanks,
Simon

Old   October 16, 2009, 11:00
  #19
Senior Member
 
Louis Gagnon
Join Date: Mar 2009
Location: Québec, QC, Canada
Posts: 178
Hi Simon,

I don't know the answer to your question, but I'm happy to hear that snappy works in parallel with 1.6! I can't wait to try it.

-Louis

Old   April 1, 2010, 16:07
Default parallel meshing on n processors to parallel solution on n processors?
  #20
bjr
Member
 
Ben Racine
Join Date: Mar 2009
Location: Seattle, WA, USA
Posts: 62
I know the answer is probably here, but I think this needs to be discussed explicitly: has anyone gotten parallel snappyHexMesh feeding a parallel solver (in my case simpleFoam) to work? I understand that one must move the polyMesh folder from the latest snappyHexMesh iteration folder (2 or 3 or ?), but I get the following error messages after attempting to run...

host2:/data/offToCluster # mpirun --mca btl openib,sm,self -np 10 -machinefile ~/machinelist.txt simpleFoam -parallel > log.simpleFoam &
[1] 7652
host2:/data/offToCluster #
host2:/data/offToCluster # [1]
[1]
[1] keyword OBJECT_patch0 is undefined in dictionary "/data/offToCluster/processor1/0/p::boundaryField"
[1]
[1] file: /data/offToCluster/processor1/0/p::boundaryField from line 26 to line 63.
[1]
[1] From function dictionary::subDict(const word& keyword) const
[1] in file db/dictionary/dictionary.C at line 449.
[1]


I thought that the workflow was: basic input deck > blockMesh > decomposePar (n processes) > copy STL files to all processor folders > snappyHexMesh (in parallel on n processes) > simpleFoam (in parallel on n processes).

My workaround of using "reconstructParMesh -mergeTol 1e-06 -time 2" does work, but that limits me to the RAM of a single node, because reconstructParMesh doesn't run in parallel.

Do I need to copy boundary conditions around in some way?

Last edited by bjr; April 1, 2010 at 16:47.
