
[mesh manipulation] Some questions on blockMesh, decomposePar and renumberMesh



Old   January 25, 2012, 09:09
Default Some questions on blockMesh, decomposePar and renumberMesh
  #1
Member
 
Pierre
Join Date: Sep 2010
Posts: 57
Hi everyone,

I am currently working on a really big case, so I first tried renumberMesh on the damBreak tutorial: it reported that the bandwidth was reduced, and the processing time was indeed slightly lower (for a fairly coarse mesh).
So I am about to include renumberMesh in my big case. This case is decomposed for 4 processors.
Question 1: Do I have to run renumberMesh before or after decomposePar?
To show what I am doing:
Quote:
#!/bin/sh
cd ${0%/*} || exit 1 # run from this directory

# Source tutorial run functions
. $WM_PROJECT_DIR/bin/tools/RunFunctions

runApplication blockMesh
runApplication topoSet
runApplication subsetMesh -overwrite c0 -patch floatingObject
cp -r 0.org 0 > /dev/null 2>&1
runApplication setFields
runApplication decomposePar
runApplication renumberMesh
runApplication foamJob -screen -parallel interDyMFoam
runApplication reconstructPar

# ----------------------------------------------------------------- end-of-file
Question 2: When I tried renumberMesh for the damBreak case (without decomposing), I had to change the controlDict so that interFoam starts from 0.001 instead of 0 (the renumbered mesh is written one time step later). For my big case I changed the controlDict as well, but interDyMFoam always said "writing mesh for time 0". So I tried renumberMesh -overwrite (to overwrite the old mesh); renumberMesh then says "writing mesh to constant", but interDyMFoam still starts from 0.
So what do I have to do to decompose my case and renumber the mesh so that interDyMFoam actually uses the renumbered mesh?

Question 3: renumberMesh and also subsetMesh offer a -parallel option in their help output. But when I try to run either of them with the -parallel flag, both give the error "FATAL ERROR: attempt to run parallel on 1 processor". What does that mean, and what do I have to do to run these commands in parallel?


Thanks a lot!
Greetings
Leech


PS: I tried to run renumberMesh before decomposing. This seemed to work better, as decomposePar said "writing mesh for time 0.01". But then interDyMFoam complains that the "number of points in mesh differs from ... in /processor0/constant/polyMesh". Any idea?
Leech is offline   Reply With Quote

Old   January 25, 2012, 17:12
Default
  #2
Retired Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,975
Blog Entries: 45
Hi Pierre,

OK, in a nutshell:
  1. Code:
    renumberMesh -overwrite
    This is the quickest option, since this way you don't need to make any changes to the time management. The mesh present in "constant/polyMesh" gets renumbered into a more diagonal form for the "A.x=B" system.
    If you don't use the overwrite option, then two things must be done:
    • You must also copy the other fields ("U", "p" and so on) from the folder "0" to the new folder, which in your case is "0.001".
    • You must also edit "system/controlDict" and state that "startFrom" is "latestTime": http://www.openfoam.org/docs/user/controlDict.php
  2. You should run renumberMesh in parallel (after decomposePar), so that the decomposed sub-meshes themselves get renumbered and the processors are more likely to work in sync (a complete workflow sketch follows this list):
    Code:
    foamJob -s -p renumberMesh -overwrite
    or
    Code:
    mpirun -n 4 renumberMesh -overwrite -parallel
    or any other similar way, depending on the parallel launching system you are using.
  3. Running renumberMesh both before and after decomposePar only has a significant effect when run after decomposition, simply because the decomposition method does not take the location of the cells in the matrix into account; it only cares about their position in XYZ space. Nonetheless, for the more complex decomposition methods it might prove slightly more efficient (i.e. faster) to decompose an already nice and neat matrix.
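To make this concrete against the Allrun script quoted in the first post, here is a minimal sketch of the same workflow with renumberMesh moved to after decomposePar and run in parallel (only an illustration; the meshing and setFields steps are kept exactly as Pierre has them):
Code:
runApplication blockMesh
runApplication topoSet
runApplication subsetMesh -overwrite c0 -patch floatingObject
cp -r 0.org 0 > /dev/null 2>&1
runApplication setFields
runApplication decomposePar
# renumber each processor's mesh in place; -overwrite avoids writing a new time directory
foamJob -s -p renumberMesh -overwrite
runApplication foamJob -screen -parallel interDyMFoam
runApplication reconstructPar
If -overwrite is dropped, the renumbered mesh goes into a new time directory instead, and the copying of the "0" fields plus the "startFrom latestTime" change from item 1 become necessary.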
I hope that answers your questions!

Best regards,
Bruno
__________________
wyldckat is offline   Reply With Quote

Old   January 26, 2012, 02:29
Default
  #3
Member
 
Pierre
Join Date: Sep 2010
Posts: 57
Hi Bruno,

thank you very much! I will try this out this evening.
I hadn't thought of running renumberMesh with foamJob.

How do you know this stuff? I googled a lot and had trouble finding anything about these more specialised OpenFOAM tools. When I run renumberMesh -doc, the help says that documentation can be found there, but I just get something like "no documentation online".

Thank you!
Greets
Leech
Leech is offline   Reply With Quote

Old   January 28, 2012, 02:22
Default
  #4
Retired Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,975
Blog Entries: 45
Hi Leech,

I found this out when I was trying to improve performance when running a case in parallel on a single machine. I went looking in these folders:
  • applications/utilities/parallelProcessing
  • applications/utilities/mesh
That is where I stumbled upon renumberMesh. I tested running renumberMesh before and after decomposition to compare which gave the better results, and that's when I concluded that it is better to renumber after decomposing.

edit: running renumberMesh in parallel mode felt natural, since I wanted to affect the decomposed mesh, not the base mesh. Additionally, the "-doc" option only works if you build the Doxygen code documentation as well: Building Code Documentation on OpenFOAM 2.0.0/x

As for foamJob, I've been using it for so long now that I can't remember how I found out about it.

Either way, I've been dealing and tinkering with the OpenFOAM source code for quite some time now, including being responsible for one of the various unofficial ports to Windows, namely blueCFD.
Additionally, I've been taking notes on how to run OpenFOAM in parallel, so when I get some time (or if someone else wants to get started on it before me) I can write about it on the openfoamwiki.net: Advanced tips for working with the OpenFOAM shell environment

In conclusion: by keeping up to date with what is written here on the forum (and helping where I can), monitoring the bug tracker, going through the code and tinkering with things, I've continued to gather information and compile it into my blog and/or openfoamwiki.net!

Best regards,
Bruno
__________________

Last edited by wyldckat; January 28, 2012 at 02:25. Reason: see "edit:"
wyldckat is offline   Reply With Quote

Old   March 8, 2012, 09:17
Default single and parallel run
  #5
Member
 
Aqua
Join Date: Oct 2011
Posts: 96
Dear Bruno,

I am facing another problem.... and thank you in advance!

I have a case with two cubes in two blocks, one block rotating with the cube inside it, to simulate two cubes (cars) passing each other. I ran the case on a single processor with icoDyMFoam and it works fine for me; please find the animation at this link: http://www.youtube.com/watch?v=G0tIz...M5Gu5PwqPzpj8=

When I tried to run it in parallel, decomposePar was fine, but running the solver then gave an error like this:

Code:
Create time

Create dynamic mesh for time = 0

Selecting dynamicFvMesh turboFvMesh
Initializing the GGI interpolator between master/shadow patches: iminy/imaxy
Initializing the GGI interpolator between master/shadow patches: ominy/omaxy
Turbomachine Mixer mesh:
    origin: (0 0 0)
    axis  : (0 0 1)
Reading transportProperties

Reading field p

Reading field U

Reading/calculating face flux field phi

Reading field rAU if present


Starting time loop

Volume: new = 50.3983 old = 50.3983 change = 0 ratio = 0
Courant Number mean: 0 max: 0 velocity magnitude: 0
deltaT = 0.000119048
Time = 0.000119048

Moving Cell Zone Name: cellRegion0 rpm: 5
Moving Face Zone Name: interfacei_faces rpm: 5
Moving Face Zone Name: imaxy_faces rpm: 5
Moving Face Zone Name: iminy_faces rpm: 5
volume continuity errors : volume = 50.3983, max error = 3.53317e-08, sum local = 2.04425e-14, global = 1.38975e-17
BiCGStab:  Solving for Ux, Initial residual = 1, Final residual = 9.6183e-09, No Iterations 5
BiCGStab:  Solving for Uy, Initial residual = 1, Final residual = 3.3054e-09, No Iterations 4
BiCGStab:  Solving for Uz, Initial residual = 1, Final residual = 1.16427e-08, No Iterations 5
DICPCG:  Solving for p, Initial residual = 1, Final residual = 24807.5, No Iterations 1000
time step continuity errors : sum local = 1.52389e-05, global = 1.30326e-06, cumulative = 1.30326e-06
DICPCG:  Solving for p, Initial residual = 0.818325, Final residual = 68.676, No Iterations 1000
time step continuity errors : sum local = 0.00155194, global = 0.00053231, cumulative = 0.000533613
DICPCG:  Solving for p, Initial residual = 0.906319, Final residual = 1.58151e+07, No Iterations 1000
time step continuity errors : sum local = 27767.9, global = -9667.13, cumulative = -9667.13
DICPCG:  Solving for p, Initial residual = 0.925246, Final residual = 114.069, No Iterations 1000
time step continuity errors : sum local = 3.43791e+06, global = -2.00058e+06, cumulative = -2.01024e+06
ExecutionTime = 83.4 s  ClockTime = 84 s

Volume: new = 50.3983 old = 50.3983 change = 1.13687e-13 ratio = -2.22045e-15
Courant Number mean: 4.54082e+06 max: 1.61654e+10 velocity magnitude: 5.94116e+11
[bluebear3:24854] *** Process received signal ***
[bluebear3:24854] Signal: Floating point exception (8)
[bluebear3:24854] Signal code:  (-6)
[bluebear3:24854] Failing at address: 0x1f5500006116
[bluebear3:24853] *** Process received signal ***
[bluebear3:24853] Signal: Floating point exception (8)
[bluebear3:24853] Signal code:  (-6)
[bluebear3:24853] Failing at address: 0x1f5500006115
[bluebear3:24853] [ 0] /lib64/libc.so.6 [0x36192302f0]
[bluebear3:24853] [ 1] /lib64/libc.so.6(gsignal+0x35) [0x3619230285]
[bluebear3:24853] [ 2] /lib64/libc.so.6 [0x36192302f0]
[bluebear3:24853] [ 3] /bb/civ/liuyu/OpenFOAM/OpenFOAM-1.6-ext/lib/linux64Gcc44DPOpt/libOpenFOAM.so(_ZN4Foam4Time12adjustDeltaTEv+0x5d) [0x2ac85a0a7dcd]
[bluebear3:24853] [ 4] icoDyMFoam [0x416ddb]
[bluebear3:24853] [ 5] /lib64/libc.so.6(__libc_start_main+0xf4) [0x361921d9b4]
[bluebear3:24853] [ 6] icoDyMFoam(_ZNK4Foam11regIOobject11writeObjectENS_8IOstream12streamFormatENS1_13versionNumberENS1_15compressionTypeE+0xc1) [0x414279]
[bluebear3:24853] *** End of error message ***
[bluebear3:24854] [ 0] /lib64/libc.so.6 [0x36192302f0]
[bluebear3:24854] [ 1] /lib64/libc.so.6(gsignal+0x35) [0x3619230285]
[bluebear3:24854] [ 2] /lib64/libc.so.6 [0x36192302f0]
[bluebear3:24854] [ 3] /bb/civ/liuyu/OpenFOAM/OpenFOAM-1.6-ext/lib/linux64Gcc44DPOpt/libOpenFOAM.so(_ZN4Foam4Time12adjustDeltaTEv+0x5d) [0x2afb243e4dcd]
[bluebear3:24854] [ 4] icoDyMFoam [0x416ddb]
[bluebear3:24854] [ 5] /lib64/libc.so.6(__libc_start_main+0xf4) [0x361921d9b4]
[bluebear3:24854] [ 6] icoDyMFoam(_ZNK4Foam11regIOobject11writeObjectENS_8IOstream12streamFormatENS1_13versionNumberENS1_15compressionTypeE+0xc1) [0x414279]
[bluebear3:24854] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 24854 on node bluebear3 exited on signal 8 (Floating point exception).

Could you please help with this? I don't know why this happens...

Thank you so much!

Aqua
aqua is offline   Reply With Quote

Old   March 8, 2012, 17:41
Default
  #6
Retired Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,975
Blog Entries: 45
Hi Aqua,

Sorry, but I've never used GGI, so I don't know where to begin with it in parallel.
All I can do is suggest that you study the case you used for the base mesh.

Good luck!
Bruno
__________________
wyldckat is offline   Reply With Quote

Old   April 6, 2012, 13:54
Default
  #7
New Member
 
s_shimz
Join Date: Jul 2011
Posts: 8
Hi Aqua,
I seem to have the same problem as you. You use the 1.6-ext version of OpenFOAM with the GGI patch, but that version of renumberMesh does not seem to run in parallel mode...

Last edited by s_shimz; April 6, 2012 at 14:25. Reason: Wrong quote.
s_shimz is offline   Reply With Quote

Old   March 5, 2013, 12:11
Default renumberMesh and runtime
  #8
Senior Member
 
Gerhard Holzinger
Join Date: Feb 2012
Location: Austria
Posts: 339
Hi,

I tried out the renumberMesh utility on a case with 197945 cells. I decomposed the domain into 2 sub-domains (98894 + 99051). I ran the parallel case once without renumbering and once with renumbering.

renumberMesh displayed this

Code:
Band before renumbering: 2018
...
Band after renumbering: 1534
However, the renumbered case is significantly slower than the non-renumbered case.

Non-renumbered:

Code:
ExecutionTime = 8280.43 s  ClockTime = 8296 s
...
Time = 7.3


renumbered:

Code:
ExecutionTime = 8905.01 s  ClockTime = 8920 s
...
Time = 7.29

What could be the reason for this?
GerhardHolzinger is offline   Reply With Quote

Old   March 5, 2013, 16:06
Default
  #9
Retired Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,975
Blog Entries: 45
Hi Gerhard,

There are a few details that come into play:
  1. Did you renumber before decomposition?
  2. Or did you renumber after decomposition and execute renumberMesh with the parallel option?
  3. If you did #1, try #2 as well. Or vice-versa.
  4. How exactly was the mesh generated?
  5. How structured is the mesh, or is it completely unstructured?
  6. Which decomposition method did you use?
    1. Did you confirm how many faces are shared between sub-domains in each case? The more there are, the more likely it is to take longer (a sketch of one way to check follows this list).
  7. Last but not least: the renumbered mesh might be worse for your case, due to the memory access order that each sub-domain needs.
    I've had a test case where, on a 6-core processor, performance was greater when I over-scheduled it with 16 sub-domains instead of using only 6 sub-domains!
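Regarding point 6.1: decomposePar prints a per-processor summary that includes the shared face counts, so one quick way to check is to keep its log and grep it; a minimal sketch, assuming the output was saved to log.decomposePar (e.g. by runApplication):
Code:
# decomposePar reports lines such as "Number of faces shared with processor N = ..."
grep "faces shared with processor" log.decomposePar
grep "Number of processor faces" log.decomposePar
Comparing these counts between decompositions gives an idea of how much inter-processor communication each one implies.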
Best regards,
Bruno
__________________
wyldckat is offline   Reply With Quote

Old   March 6, 2013, 05:03
Default
  #10
Senior Member
 
Gerhard Holzinger
Join Date: Feb 2012
Location: Austria
Posts: 339
I tried #2 and ran

Code:
 mpirun -n 2 renumberMesh -overwrite -parallel
The mesh is generated by blockMesh and is structured. Maybe there is little or no room for improvement when using structured meshes?

I used the decomposition method scotch without any further parameters.

The number of shared faces is 3948. If I renumber after decomposing, this number will not change, will it?
GerhardHolzinger is offline   Reply With Quote

Old   March 6, 2013, 05:16
Default
  #11
Retired Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,975
Blog Entries: 45
Quote:
Originally Posted by GerhardHolzinger View Post
The number of shared faces is 3948. If i renumber after decomposing, this number will not change, will it?
It might be different if you renumber before decomposing.
And please try renumbering before decomposing, because as I said before, it may vary depending on the case.
__________________
wyldckat is offline   Reply With Quote

Old   May 1, 2015, 14:03
Default renumberMesh after refineMesh
  #12
New Member
 
AI
Join Date: Jun 2014
Posts: 17
Hello all,

I have a quick question,

I build my grid using blockMesh, then run snappyHexMesh -overwrite to add boundary layers, and then refineMesh -overwrite to refine a local area. If I then run renumberMesh -overwrite, I get the following error:

--> FOAM FATAL IO ERROR:
size 7680 is not equal to the given value of 122880

7680 is the size (number of faces) of one of the patches before refineMesh, and 122880 is its size after refineMesh.

Any idea how to either get renumberMesh to read the size correctly, or how to update the case files so that they reflect the updated size?

Thank you all.

Last edited by sherif35; May 1, 2015 at 15:35.
sherif35 is offline   Reply With Quote

Old   May 1, 2015, 16:24
Default
  #13
New Member
 
AI
Join Date: Jun 2014
Posts: 17
I was able to work around it by doing the refineMesh step before the snappyHexMesh step, because the mesh sizes in the snappy-generated files in the 0 directory do not get updated when executing refineMesh -overwrite.

By switching the order, snappy generates those files for the final mesh.

However, I would still like to know if there is a way to update all of the 0 directory (including the snappy-generated files). Thank you.
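One possible approach, sketched here only as an illustration (and along the lines of the setFields suggestion in the next post): keep a pristine copy of the initial fields in a backup directory, hypothetically named 0.orig, remove 0 after all the mesh utilities have run, and only then restore the fields and run setFields, so that everything in 0 is written against the final mesh:
Code:
runApplication blockMesh
runApplication snappyHexMesh -overwrite
runApplication refineMesh -overwrite
# recreate the initial fields against the final, refined mesh
rm -rf 0
cp -r 0.orig 0
runApplication setFields
runApplication renumberMesh -overwrite
This assumes the fields in 0.orig use uniform values (rather than per-face lists sized for an older mesh), so they stay valid whatever the final mesh looks like.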

Ahmed Ibrahim
sherif35 is offline   Reply With Quote

Old   April 1, 2019, 11:41
Default
  #14
New Member
 
Arne
Join Date: Dec 2018
Posts: 19
Hi all


First off, I think you can reset the 0 folder with setFields. I think this will also work if you have changed the mesh, since setFields works with real distances rather than with a number of cells.

Secondly, I am wondering whether there is any benefit in applying renumberMesh directly after blockMesh. I am working on a case which does not (yet) run in parallel and has no further modifications to the mesh.


I was going to try this out myself and will post the answer once I am done, but in case anyone already knows the answer, feel free to respond.



Best regards
Arne
arsimons is offline   Reply With Quote

Old   April 2, 2019, 07:32
Default
  #15
New Member
 
Arne
Join Date: Dec 2018
Posts: 19

Hi all


I tried using renumberMesh immediately after blockMesh, and it turns out that this does have an influence. I am not sure why this is not done automatically whenever blockMesh is used.

I saw that the bandwidth does not change, but the envelope decreased by 11%. I am just not sure whether this will have a noticeable influence on the simulation time afterwards.


Best regards
Arne
arsimons is offline   Reply With Quote

Old   April 2, 2019, 20:39
Default
  #16
Retired Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,975
Blog Entries: 45
Quick answer: Depends on how the blocks were assembled with blockMesh.
  • If you only have 2-3 blocks, then it is probable that they are already well sorted, or at least well sorted within those blocks.
  • If you have dozens of blocks, created in nearly random order, I expect that renumberMesh will considerably improve the mesh organization in memory.
Nonetheless, this is only what I expect; I'm not entirely certain how much of an improvement I've gained from this with a mesh created by blockMesh, since I rarely create multi-block meshes with it.
__________________
wyldckat is offline   Reply With Quote

Old   April 3, 2019, 03:45
Default
  #17
New Member
 
Arne
Join Date: Dec 2018
Posts: 19

I also work with only one block. I just gave it a try on one of the tutorial cases (pisoFoam -> RAS -> cavity), where the mesh consists of a single block. Without renumberMesh the simulation took 8.98 seconds, while with renumberMesh it took only 8.86 s. That corresponds to roughly a 1.5% decrease in time for a single-block mesh of 20x20 cells.

Furthermore, the only change I've noticed so far is in the 'time step continuity errors'.
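For anyone repeating this comparison, one simple way is to run the case twice and compare the last ExecutionTime line of each solver log; a minimal sketch, assuming the two runs were logged to the hypothetical files log.pisoFoam.plain and log.pisoFoam.renumbered:
Code:
# last reported runtime of the run without and with renumberMesh
grep "ExecutionTime" log.pisoFoam.plain | tail -n 1
grep "ExecutionTime" log.pisoFoam.renumbered | tail -n 1
As noted in the next post, differences of this size are within the run-to-run noise, so several repetitions are needed before drawing conclusions.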


Best regards
Arne
arsimons is offline   Reply With Quote

Old   April 3, 2019, 07:55
Default
  #18
Retired Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,975
Blog Entries: 45
Quick answer: that's within the margin of error due to computational randomness. Examples:
  • CPUs don't always run at exactly the same frequency every time.
  • The way the matrices are solved can slightly affect the end results.
wyldckat is offline   Reply With Quote

Old   April 5, 2019, 05:25
Default
  #19
New Member
 
Arne
Join Date: Dec 2018
Posts: 19

In other words, there is certainly a difference (the matrices are solved in a different way), but this difference does not necessarily mean that the simulation will run faster (because the difference is within the margin of error)?


Thanks for your help. I think I will be using renumberMesh from now on. It might not do any good, but it certainly will not do any harm.
arsimons is offline   Reply With Quote

Old   April 5, 2019, 19:21
Default
  #20
Retired Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,975
Blog Entries: 45
Quote:
Originally Posted by arsimons View Post
It might not do any good, but it certainly will not do any harm.
Quick note: except when it does... Always look at how much the matrix bandwidth changes; don't assume it always works perfectly.
I've seen it get worse when running it first in serial and then again in parallel after decomposing: the renumbering in serial was already good, and running it again in parallel actually disturbed the bandwidth... although not by much, and it was not clear how much running it in parallel as well impacted performance.

Oh, and don't forget the "-overwrite" option... otherwise you risk having an extra, unnoticed time directory that corrupts your results...
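Following up on "always look at how much the matrix bandwidth changes": renumberMesh prints the band before and after renumbering (as in the output quoted earlier in this thread), so a quick check is to keep its log and grep those lines; a minimal sketch, assuming the log is named log.renumberMesh (e.g. as written by runApplication):
Code:
runApplication renumberMesh -overwrite
# renumberMesh reports "Band before renumbering: ..." and "Band after renumbering: ..."
grep -i "band" log.renumberMesh
If the band after renumbering is not clearly smaller, the renumbering is unlikely to help.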
wyldckat is offline   Reply With Quote
