CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
-   OpenFOAM Meshing & Mesh Conversion (http://www.cfd-online.com/Forums/openfoam-meshing/)
-   -   Some questions on blockMesh, decomposePar and renumberMesh (http://www.cfd-online.com/Forums/openfoam-meshing/96540-some-questions-blockmesh-decomposepar-renumbermesh.html)

Leech January 25, 2012 09:09

Some questions on blockMesh, decomposePar and renumberMesh
 
Hi everyone,

I am currently working on a really big case. So I tried renumberMesh on the damBreak tutorial; it reported that the bandwidth was reduced, and indeed the processing time was slightly lower (for a fairly coarse mesh).
So I am about to include renumberMesh in my big case. The case is decomposed for 4 processors.
Question 1: Do I have to run renumberMesh before or after decomposePar?
To show what I am doing:
Quote:

#!/bin/sh
cd ${0%/*} || exit 1 # run from this directory

# Source tutorial run functions
. $WM_PROJECT_DIR/bin/tools/RunFunctions

runApplication blockMesh
runApplication topoSet
runApplication subsetMesh -overwrite c0 -patch floatingObject
cp -r 0.org 0 > /dev/null 2>&1
runApplication setFields
runApplication decomposePar
runApplication renumberMesh
runApplication foamJob -screen -parallel interDyMFoam
runApplication reconstructPar

# ----------------------------------------------------------------- end-of-file
Question 2: When I tried renumberMesh on the damBreak case (without decomposing), I had to change the controlDict so that interFoam starts from 0.001 instead of 0 (the renumbered mesh is written one time step later). For my big case I changed the controlDict as well, but interDyMFoam always said "writing mesh for time 0". So I tried renumberMesh -overwrite (to overwrite the old mesh); renumberMesh then says "writing mesh to constant", yet interDyMFoam still starts from 0.
So what do I have to do to decompose my case and renumber the mesh so that interDyMFoam actually uses it?

Question 3: renumberMesh and subsetMesh both offer a -parallel option in their help output. But when I try to run either of them with the -parallel flag, I get the error "FATAL ERROR: attempt to run parallel on 1 processor". What does that mean, and what do I have to do to run these commands in parallel?


Thanks a lot!
Greetings
Leech


PS: I tried running renumberMesh before decomposing. This seemed to work better, since decomposePar said "writing mesh for time 0.01". But then interDyMFoam complains: "number of points in mesh differs from ... in /processor0/constant/polyMesh". Any idea?

wyldckat January 25, 2012 17:12

Hi Pierre,

OK, in a nutshell:
  1. Code:

    renumberMesh -overwrite
    This is the quickest option, since this way you don't need to make any changes to the time management. The mesh present in "constant/polyMesh" gets renumbered into a more diagonal form for the "A.x=b" linear system.
    If you don't use the overwrite option, then two things must be done:
    • You must copy the other fields ("U", "p" and so on) from the folder "0" to the new folder, which in your case is "0.001".
    • You must also edit "system/controlDict" and set "startFrom" to "latestTime": http://www.openfoam.org/docs/user/controlDict.php
  2. You should run renumberMesh in parallel (after decomposePar), so that each processor renumbers its own sub-mesh and the processors are more likely to stay in sync:
    Code:

    foamJob -s -p renumberMesh -overwrite
    or
    Code:

    mpirun -n 4 renumberMesh -overwrite -parallel
    or any other similar way, depending on the parallel launching system you are using.
  3. Between running renumberMesh before and after decomposePar, only running it after has a significant effect, simply because the decomposition method does not take into account the location of the cells in the matrix; it only cares about their position in XYZ space. Nonetheless, with the more complex decomposition methods it might prove slightly more efficient (i.e. faster) to decompose a nice and neat matrix ;)
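Putting points 1 and 2 together, the Allrun script from your first post could be adjusted along these lines (a sketch only: it assumes the usual RunFunctions are sourced and the 4 sub-domains you mentioned; in this era of OpenFOAM, runParallel takes the process count as its second argument):

```shell
#!/bin/sh
cd ${0%/*} || exit 1 # run from this directory

# Source tutorial run functions
. $WM_PROJECT_DIR/bin/tools/RunFunctions

runApplication blockMesh
runApplication topoSet
runApplication subsetMesh -overwrite c0 -patch floatingObject
cp -r 0.org 0 > /dev/null 2>&1
runApplication setFields
runApplication decomposePar

# Renumber each processor's sub-mesh in place; -overwrite means no
# extra time folder is written, so system/controlDict stays untouched
runParallel renumberMesh 4 -overwrite

runApplication foamJob -screen -parallel interDyMFoam
runApplication reconstructPar
```

This keeps the serial steps unchanged and only moves the renumbering to after the decomposition.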
I hope that answers your questions!

Best regards,
Bruno

Leech January 26, 2012 02:29

Hi Bruno,

thank you very much! I will try this out this evening.
I hadn't thought of running renumberMesh with foamJob..

How do you know this stuff? I googled a lot and had trouble finding anything about these more specialized OpenFOAM tools. When I run renumberMesh -doc, the help says I can find the documentation there, but I just get something like "no documentation online"..

Thank you!
Greets
Leech

wyldckat January 28, 2012 02:22

Hi Leech,

I found this out when I was trying to improve the performance of running a case in parallel on a single machine. I went looking in these folders:
  • applications/utilities/parallelProcessing
  • applications/utilities/mesh
That is where I tripped over renumberMesh. I tested running it before and after decomposition to compare which gave the better results, and that's when I concluded that it is better to renumber after decomposing.

edit: running renumberMesh in parallel mode felt natural, since I wanted to affect the decomposed mesh, not the base mesh. Additionally, the "-doc" option only works if you build the Doxygen documentation as well: Building Code Documentation on OpenFOAM 2.0.0/x

As for foamJob, I've been using it for so long now that I can't remember how I found out about it.

Either way, I've been dealing and messing with the OpenFOAM source code for quite some time now, including being responsible for one of the various unofficial ports for Windows, namely blueCFD.
Additionally, I've been taking notes on how to run OpenFOAM in parallel, so when I get some time (or if someone else wants to get started on it before me) I can write about it on the openfoamwiki.net: Advanced tips for working with the OpenFOAM shell environment

In conclusion: by keeping up to date with what is written here on the forum (and helping where I can), monitoring the bug tracker, going through the code and tinkering with things, I've continued to gather information and compile it into my blog and/or openfoamwiki.net!

Best regards,
Bruno

aqua March 8, 2012 09:17

single and parallel run
 
Dear Bruno,

I am facing another problem... and thank you in advance!

I have a case with two cubes in two blocks; one block rotates with the cube inside it, to simulate two cubes (cars) passing by each other. When I run the case on a single processor with icoDyMFoam, it works fine for me; please find the animation at this link: http://www.youtube.com/watch?v=G0tIz...M5Gu5PwqPzpj8=

When I tried to run in parallel, decomposePar was fine. Then running the solver gave an error like this:

Code:

Create time

Create dynamic mesh for time = 0

Selecting dynamicFvMesh turboFvMesh
Initializing the GGI interpolator between master/shadow patches: iminy/imaxy
Initializing the GGI interpolator between master/shadow patches: ominy/omaxy
Turbomachine Mixer mesh:
    origin: (0 0 0)
    axis  : (0 0 1)
Reading transportProperties

Reading field p

Reading field U

Reading/calculating face flux field phi

Reading field rAU if present


Starting time loop

Volume: new = 50.3983 old = 50.3983 change = 0 ratio = 0
Courant Number mean: 0 max: 0 velocity magnitude: 0
deltaT = 0.000119048
Time = 0.000119048

Moving Cell Zone Name: cellRegion0 rpm: 5
Moving Face Zone Name: interfacei_faces rpm: 5
Moving Face Zone Name: imaxy_faces rpm: 5
Moving Face Zone Name: iminy_faces rpm: 5
volume continuity errors : volume = 50.3983, max error = 3.53317e-08, sum local = 2.04425e-14, global = 1.38975e-17
BiCGStab:  Solving for Ux, Initial residual = 1, Final residual = 9.6183e-09, No Iterations 5
BiCGStab:  Solving for Uy, Initial residual = 1, Final residual = 3.3054e-09, No Iterations 4
BiCGStab:  Solving for Uz, Initial residual = 1, Final residual = 1.16427e-08, No Iterations 5
DICPCG:  Solving for p, Initial residual = 1, Final residual = 24807.5, No Iterations 1000
time step continuity errors : sum local = 1.52389e-05, global = 1.30326e-06, cumulative = 1.30326e-06
DICPCG:  Solving for p, Initial residual = 0.818325, Final residual = 68.676, No Iterations 1000
time step continuity errors : sum local = 0.00155194, global = 0.00053231, cumulative = 0.000533613
DICPCG:  Solving for p, Initial residual = 0.906319, Final residual = 1.58151e+07, No Iterations 1000
time step continuity errors : sum local = 27767.9, global = -9667.13, cumulative = -9667.13
DICPCG:  Solving for p, Initial residual = 0.925246, Final residual = 114.069, No Iterations 1000
time step continuity errors : sum local = 3.43791e+06, global = -2.00058e+06, cumulative = -2.01024e+06
ExecutionTime = 83.4 s  ClockTime = 84 s

Volume: new = 50.3983 old = 50.3983 change = 1.13687e-13 ratio = -2.22045e-15
Courant Number mean: 4.54082e+06 max: 1.61654e+10 velocity magnitude: 5.94116e+11
[bluebear3:24854] *** Process received signal ***
[bluebear3:24854] Signal: Floating point exception (8)
[bluebear3:24854] Signal code:  (-6)
[bluebear3:24854] Failing at address: 0x1f5500006116
[bluebear3:24853] *** Process received signal ***
[bluebear3:24853] Signal: Floating point exception (8)
[bluebear3:24853] Signal code:  (-6)
[bluebear3:24853] Failing at address: 0x1f5500006115
[bluebear3:24853] [ 0] /lib64/libc.so.6 [0x36192302f0]
[bluebear3:24853] [ 1] /lib64/libc.so.6(gsignal+0x35) [0x3619230285]
[bluebear3:24853] [ 2] /lib64/libc.so.6 [0x36192302f0]
[bluebear3:24853] [ 3] /bb/civ/liuyu/OpenFOAM/OpenFOAM-1.6-ext/lib/linux64Gcc44DPOpt/libOpenFOAM.so(_ZN4Foam4Time12adjustDeltaTEv+0x5d) [0x2ac85a0a7dcd]
[bluebear3:24853] [ 4] icoDyMFoam [0x416ddb]
[bluebear3:24853] [ 5] /lib64/libc.so.6(__libc_start_main+0xf4) [0x361921d9b4]
[bluebear3:24853] [ 6] icoDyMFoam(_ZNK4Foam11regIOobject11writeObjectENS_8IOstream12streamFormatENS1_13versionNumberENS1_15compressionTypeE+0xc1) [0x414279]
[bluebear3:24853] *** End of error message ***
[bluebear3:24854] [ 0] /lib64/libc.so.6 [0x36192302f0]
[bluebear3:24854] [ 1] /lib64/libc.so.6(gsignal+0x35) [0x3619230285]
[bluebear3:24854] [ 2] /lib64/libc.so.6 [0x36192302f0]
[bluebear3:24854] [ 3] /bb/civ/liuyu/OpenFOAM/OpenFOAM-1.6-ext/lib/linux64Gcc44DPOpt/libOpenFOAM.so(_ZN4Foam4Time12adjustDeltaTEv+0x5d) [0x2afb243e4dcd]
[bluebear3:24854] [ 4] icoDyMFoam [0x416ddb]
[bluebear3:24854] [ 5] /lib64/libc.so.6(__libc_start_main+0xf4) [0x361921d9b4]
[bluebear3:24854] [ 6] icoDyMFoam(_ZNK4Foam11regIOobject11writeObjectENS_8IOstream12streamFormatENS1_13versionNumberENS1_15compressionTypeE+0xc1) [0x414279]
[bluebear3:24854] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 24854 on node bluebear3 exited on signal 8 (Floating point exception).


Could you please help with this? I don't know why...

Thank you so much!

Aqua

wyldckat March 8, 2012 17:41

Hi Aqua,

Sorry, but I've never used GGI, so I don't know where to begin with it in parallel :(
All I can do is suggest that you study the case you used for the base mesh.

Good luck!
Bruno

s_shimz April 6, 2012 13:54

Hi Aqua,
I seem to have the same problem as you. You are using the 1.6-ext version of OpenFOAM with the GGI patch, but that version of renumberMesh does not seem to run in parallel mode...

GerhardHolzinger March 5, 2013 12:11

renumberMesh and runtime
 
Hi,

I tried out the renumberMesh utility on a case with 197945 cells. I decomposed the domain into 2 sub-domains (98894 + 99051) and ran the parallel case once without renumbering and once with renumbering.

renumberMesh displayed this:

Code:

Band before renumbering: 2018
...
Band after renumbering: 1534

However, the renumbered case is significantly slower than the non-renumbered case.

Non-renumbered:

Code:

ExecutionTime = 8280.43 s  ClockTime = 8296 s
...
Time = 7.3



renumbered:

Code:

ExecutionTime = 8905.01 s  ClockTime = 8920 s
...
Time = 7.29


What could be the reason for this?

wyldckat March 5, 2013 16:06

Hi Gerhard,

There are a few details that come into play:
  1. Did you renumber before decomposition?
  2. Or did you renumber after decomposition and run renumberMesh with the parallel option?
  3. If you did #1, try #2 as well. Or vice-versa.
  4. How exactly was the mesh generated?
  5. How structured is the mesh, or is it completely unstructured?
  6. Which decomposition method did you use?
    1. Did you confirm how many faces are shared between sub-domains in each case? The more there are, the more likely it is to take longer.
  7. Last but not least: the renumbered mesh might be worse for your case, due to the memory access order that each sub-domain needs.
    I once had a test case on a 6-core processor that performed better when I over-scheduled it with 16 sub-domains instead of using only 6!
Best regards,
Bruno

GerhardHolzinger March 6, 2013 05:03

I tried #2 and ran

Code:

mpirun -n 2 renumberMesh -overwrite -parallel
The mesh is generated by blockMesh and is structured. Maybe there is little or no room for improvement with structured meshes?

I used the decomposition method scotch without any further parameters.

The number of shared faces is 3948. If I renumber after decomposing, this number will not change, will it?
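For reference, "scotch without any further parameters" means my system/decomposeParDict is essentially just this minimal fragment (FoamFile header omitted; scotch needs no method-specific coefficients):

```
numberOfSubdomains  2;

method              scotch;
```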

wyldckat March 6, 2013 05:16

Quote:

Originally Posted by GerhardHolzinger (Post 411893)
The number of shared faces is 3948. If i renumber after decomposing, this number will not change, will it?

It might be different if you renumber before decomposing.
So please do try renumbering before decomposing as well, because as I said before, it may vary depending on the case.

