|
[snappyHexMesh] mpirun with SnappyHexMesh in cluster |
|
March 29, 2018, 22:01 |
mpirun with SnappyHexMesh in cluster
|
#1 |
New Member
subhasis
Join Date: Feb 2017
Posts: 4
Rep Power: 9 |
Hi,
I am running snappyHexMesh on a cluster using the following command: mpirun --np 10 snappyHexMesh. In some scenarios snappyHexMesh fails (this is a normal use case). When snappyHexMesh fails, in some cases control does not return gracefully; it hangs in the terminal. When snappyHexMesh succeeds, control returns properly. Can anyone help me with this? Thanks, B. |
|
March 30, 2018, 10:20 |
|
#2 |
Senior Member
Taher Chegini
Join Date: Nov 2014
Location: Houston, Texas
Posts: 125
Rep Power: 12 |
When you want to run it in parallel you should add the -parallel flag; you may also add the -overwrite flag so it doesn't create the mesh in a new time step. So it should be:
Code:
mpirun --np 10 snappyHexMesh -parallel -overwrite
Alternatively, you can use foamJob:
Code:
foamJob -w -a -p snappyHexMesh -overwrite >&/dev/null
where -a appends to the log file instead of overwriting it, and -p runs the application in parallel on the processor directories. |
|
March 31, 2018, 23:23 |
|
#3 |
New Member
subhasis
Join Date: Feb 2017
Posts: 4
Rep Power: 9 |
Hi,
Thank you for the reply. When I try to use foamJob -parallel snappyHexMesh, it gives me an error like: "Case is not currently decomposed. system/decomposeParDict exists. Try decomposing with foamJob decomposePar". Thanks, B. |
|
April 1, 2018, 03:51 |
|
#4 |
Senior Member
Join Date: Aug 2013
Posts: 407
Rep Power: 15 |
Hi,
If you want to run anything in parallel, the case must first be split (decomposed) according to the number of processors you want to run the parallel application on. So before you run snappyHexMesh in parallel with 10 processors, you need to have decomposed the case such that there are 10 processor folders, each of which contains a region that will be acted on by one processor. This is typically done after running blockMesh and creating the original mesh.
To perform this splitting, run decomposePar. Once you have the processor* directories, you can run snappyHexMesh or any other application in parallel.
Hope this helps.
Cheers,
Antimony |
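For reference, decomposePar reads system/decomposeParDict. A minimal sketch of such a dictionary for 10 processors might look like the following (the choice of the scotch method is just one common option, not something prescribed in this thread):
Code:
```
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      decomposeParDict;
}

// Must match the -np value you pass to mpirun
numberOfSubdomains  10;

// scotch balances the mesh automatically and needs no coefficients;
// simple/hierarchical are alternatives that require extra entries
method          scotch;
```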
|
April 1, 2018, 11:26 |
|
#5 |
Senior Member
Taher Chegini
Join Date: Nov 2014
Location: Houston, Texas
Posts: 125
Rep Power: 12 |
Here are the steps that you need to perform for running snappyHexMesh in parallel:
Code:
# Clean the case
foamJob -w -a foamCleanTutorials >&/dev/null

# Generate background mesh
foamJob -w -a blockMesh >&/dev/null

# Extract surface features
foamJob -w -a surfaceFeatureExtract >&/dev/null

# Domain decomposition
foamJob -w -a decomposePar >&/dev/null

# Generate 3D mesh in parallel
foamJob -w -a -p snappyHexMesh -overwrite >&/dev/null

# Reconstruct the generated mesh in the constant directory
foamJob -w -a reconstructParMesh -constant >&/dev/null

# Check mesh quality; look for non-orthogonality and skewness
foamJob -w -a checkMesh >&/dev/null |
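If snappyHexMesh sometimes hangs after a failure, as described in the first post, wrapping the command with coreutils timeout at least gets your terminal back. Here is a minimal sketch of such a wrapper; the 1800-second limit and the function name are just illustrative assumptions, and the mpirun line at the bottom requires a sourced OpenFOAM environment:

```shell
# Default time limit in seconds (assumed value; tune for your mesh size)
TIMEOUT_SECS=${TIMEOUT_SECS:-1800}

# run_with_timeout CMD ARGS...: run a command, but kill it once the limit
# is reached so a hung parallel run does not block the terminal forever.
# coreutils timeout exits with status 124 when it kills the command.
run_with_timeout() {
    timeout "$TIMEOUT_SECS" "$@"
    status=$?
    if [ "$status" -eq 124 ]; then
        echo "command timed out after ${TIMEOUT_SECS}s: $*" >&2
    fi
    return "$status"
}

# Example usage (requires OpenFOAM and a decomposed case):
# run_with_timeout mpirun --np 10 snappyHexMesh -parallel -overwrite
```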
|