Home > Forums > Software User Forums > OpenFOAM > OpenFOAM Meshing & Mesh Conversion

[snappyHexMesh] Running snappyHexMesh in parallel creates new time directories


Old   October 4, 2018, 14:22
Default Running snappyHexMesh in parallel creates new time directories
  #1
Member
 
Hüseyin Can Önel
Join Date: Sep 2018
Location: Ankara, Turkey
Posts: 33
I want to run snappyHexMesh in parallel and then run pimpleFoam in parallel as well. I'm using the following script for mesh preparation:

Code:
blockMesh > log.blockMesh
decomposePar > log.decomposePar.1
mpirun -np $nProc snappyHexMesh -latestTime -parallel > log.snappyHexMesh
reconstructParMesh -latestTime > log.reconstructParMesh.1
renumberMesh -latestTime > log.renumberMesh
rm -rf processor*
topoSet > log.topoSet
and then the following for solving:

Code:
decomposePar > log.decomposePar.2
ls -d processor* | xargs -I {} rm -rf ./{}/0
ls -d processor* | xargs -I {} cp -r 0.org ./{}/0
mpirun -np $nProc pimpleFoam -parallel > log.pimpleFoam 
reconstructPar > log.reconstructPar.2
However, snappyHexMesh creates new time directories, which I don't want; I want the run to start from 0. How can I do that? Also, is there an "optimized" way to do this that avoids the intermediate decomposition and reconstruction steps?
Thanks.

Old   October 4, 2018, 17:59
Default
  #2
Member
 
Luis Eduardo
Join Date: Jan 2011
Posts: 85
Try using "mpirun -np $nProc snappyHexMesh -latestTime -parallel -overwrite > log.snappyHexMesh". I use the "-overwrite" option and I don't get new time directories.

What I use to run my cases is an Allrun file:

Code:
#!/bin/sh
cd ${0%/*} || exit 1    # Run from this directory

# Source tutorial run functions
. $WM_PROJECT_DIR/bin/tools/RunFunctions

# Make dummy 0 directory
mkdir 0

runApplication blockMesh
cp system/decomposeParDict.hierarchical system/decomposeParDict
runApplication decomposePar

runParallel snappyHexMesh -overwrite

find . -type f -iname "*level*" -exec rm {} \;

ls -d processor* | xargs -I {} cp -r 0.org ./{}/0

runParallel topoSet
runParallel `getApplication`

runApplication reconstructParMesh -constant
runApplication reconstructPar

cp -a 0.org/. 0/
I just adapted the Allrun file from one of the tutorials, so there may be other ways to do it.

Old   October 5, 2018, 07:01
Default
  #3
Member
 
Hüseyin Can Önel
Join Date: Sep 2018
Location: Ankara, Turkey
Posts: 33
Hi Luis,
I do not have the runParallel and runApplication commands; I guess they come with another application you have. Also, qsub and PBS scripts do not seem to accept them. Is there a way to do it with mpirun?

Old   October 5, 2018, 17:03
Default
  #4
Member
 
Luis Eduardo
Join Date: Jan 2011
Posts: 85
Hi,

These commands become available once you source the run functions with ". $WM_PROJECT_DIR/bin/tools/RunFunctions", but you can probably use mpirun directly as well (I have never used it that way, so I can't give you more information about it, sorry!).

When I run "runApplication blockMesh" I get the same result as your "blockMesh > log.blockMesh" command, so you can probably keep that part as it is.

I guess you could replace "runParallel" with "mpirun -np $nProc ... -parallel". For example, "runParallel topoSet" should give the same result as "mpirun -np $nProc topoSet -parallel > log.topoSet".

Also, I don't need to reconstruct my case before running the solver, because topoSet can be run in parallel. It seems to be the same situation as yours: you only reconstruct in order to run topoSet, and then decompose again, correct?
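For reference, a fully mpirun-based version of your two scripts, with the intermediate reconstruction removed, might look something like the sketch below. This is untested; it assumes nProc matches numberOfSubdomains in system/decomposeParDict and that your initial fields live in 0.org, so adapt it to your case:

```shell
#!/bin/sh
# Sketch of a fully parallel workflow without intermediate reconstruction.
# Assumes nProc matches numberOfSubdomains in system/decomposeParDict
# and that the initial fields are kept in 0.org (untested).
nProc=4

blockMesh > log.blockMesh 2>&1
decomposePar > log.decomposePar 2>&1

# -overwrite keeps snappyHexMesh from writing new time directories
mpirun -np $nProc snappyHexMesh -overwrite -parallel > log.snappyHexMesh 2>&1

# copy the initial fields into each processor directory
for p in processor*; do
    rm -rf "$p/0"
    cp -r 0.org "$p/0"
done

mpirun -np $nProc topoSet -parallel > log.topoSet 2>&1
mpirun -np $nProc pimpleFoam -parallel > log.pimpleFoam 2>&1

# reconstruct the mesh and the results once, at the end
reconstructParMesh -constant > log.reconstructParMesh 2>&1
reconstructPar > log.reconstructPar 2>&1
```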

Best Regards,
Luis

Old   October 5, 2018, 21:40
Default
  #5
Member
 
Hüseyin Can Önel
Join Date: Sep 2018
Location: Ankara, Turkey
Posts: 33
Hi,
Thanks to you, sourcing that file allowed me to use the runApplication and runParallel commands in PBS job submissions! My goal was to first generate the mesh in parallel, error-free, and then continue solving without intermediate reconstruction steps. I'm still not sure how to do it with plain mpirun commands, but the runParallel function has taken care of everything smoothly! I'll post the final script when I get to my PC.
