September 8, 2014, 12:24
Decomposing meshes
#1
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 51
Hi all,
I have a few questions about the decomposition process. The first question is about the decomposition output and some general points:
Code:
Processor 0
    Number of cells = 79703
    Number of faces shared with processor 1 = 2347
    Number of faces shared with processor 3 = 1252
    Number of faces shared with processor 6 = 1983
    Number of faces shared with processor 7 = 4
    Number of faces shared with processor 9 = 6
    Number of processor patches = 5
    Number of processor faces = 5592
    Number of boundary faces = 13097
Processor 1
    Number of cells = 30927
    Number of faces shared with processor 0 = 2347
    Number of faces shared with processor 2 = 2691
    Number of faces shared with processor 3 = 5
    Number of faces shared with processor 4 = 605
    Number of faces shared with processor 5 = 1
    Number of faces shared with processor 6 = 6
    Number of faces shared with processor 7 = 1223
    Number of faces shared with processor 8 = 35
    Number of faces shared with processor 9 = 1
    Number of processor patches = 9
    Number of processor faces = 6914
    Number of boundary faces = 6206
Processor 2
    Number of cells = 98548
    Number of faces shared with processor 1 = 2691
    Number of faces shared with processor 4 = 6
    Number of faces shared with processor 5 = 1538
    Number of faces shared with processor 8 = 2181
    Number of faces shared with processor 11 = 1
    Number of processor patches = 5
    Number of processor faces = 6417
    Number of boundary faces = 17893
Processor 3
    Number of cells = 72934
    Number of faces shared with processor 0 = 1252
    Number of faces shared with processor 1 = 5
    Number of faces shared with processor 4 = 2949
    Number of faces shared with processor 6 = 1
    Number of faces shared with processor 9 = 1460
    Number of processor patches = 5
    Number of processor faces = 5667
    Number of boundary faces = 18218
Processor 4
    Number of cells = 45303
    Number of faces shared with processor 1 = 605
    Number of faces shared with processor 2 = 6
    Number of faces shared with processor 3 = 2949
    Number of faces shared with processor 5 = 3615
    Number of faces shared with processor 9 = 28
    Number of faces shared with processor 10 = 1574
    Number of faces shared with processor 11 = 24
    Number of processor patches = 7
    Number of processor faces = 8801
    Number of boundary faces = 8444
Processor 5
    Number of cells = 66574
    Number of faces shared with processor 1 = 1
    Number of faces shared with processor 2 = 1538
    Number of faces shared with processor 4 = 3615
    Number of faces shared with processor 10 = 5
    Number of faces shared with processor 11 = 1430
    Number of processor patches = 5
    Number of processor faces = 6589
    Number of boundary faces = 14245
Processor 6
    Number of cells = 38329
    Number of faces shared with processor 0 = 1983
    Number of faces shared with processor 1 = 6
    Number of faces shared with processor 3 = 1
    Number of faces shared with processor 7 = 1036
    Number of faces shared with processor 9 = 700
    Number of faces shared with processor 12 = 2172
    Number of faces shared with processor 13 = 2
    Number of faces shared with processor 15 = 8
    Number of processor patches = 8
    Number of processor faces = 5908
    Number of boundary faces = 6084
Processor 7
    Number of cells = 96970
    Number of faces shared with processor 0 = 4
    Number of faces shared with processor 1 = 1223
    Number of faces shared with processor 6 = 1036
    Number of faces shared with processor 8 = 1488
    Number of faces shared with processor 9 = 4
    Number of faces shared with processor 10 = 593
    Number of faces shared with processor 11 = 4
    Number of faces shared with processor 12 = 2
    Number of faces shared with processor 13 = 248
    Number of faces shared with processor 14 = 2
    Number of faces shared with processor 16 = 2
    Number of processor patches = 11
    Number of processor faces = 4606
    Number of boundary faces = 20630
Processor 8
    Number of cells = 45017
    Number of faces shared with processor 1 = 35
    Number of faces shared with processor 2 = 2181
    Number of faces shared with processor 7 = 1488
    Number of faces shared with processor 11 = 593
    Number of faces shared with processor 14 = 1197
    Number of faces shared with processor 17 = 2
    Number of processor patches = 6
    Number of processor faces = 5496
    Number of boundary faces = 8169
Processor 9
    Number of cells = 33585
    Number of faces shared with processor 0 = 6
    Number of faces shared with processor 1 = 1
    Number of faces shared with processor 3 = 1460
    Number of faces shared with processor 4 = 28
    Number of faces shared with processor 6 = 700
    Number of faces shared with processor 7 = 4
    Number of faces shared with processor 10 = 910
    Number of faces shared with processor 15 = 1436
    Number of faces shared with processor 16 = 5
    Number of processor patches = 9
    Number of processor faces = 4550
    Number of boundary faces = 7381
Processor 10
    Number of cells = 144720
    Number of faces shared with processor 4 = 1574
    Number of faces shared with processor 5 = 5
    Number of faces shared with processor 7 = 593
    Number of faces shared with processor 9 = 910
    Number of faces shared with processor 11 = 1497
    Number of faces shared with processor 15 = 2
    Number of faces shared with processor 16 = 575
    Number of faces shared with processor 17 = 4
    Number of processor patches = 8
    Number of processor faces = 5160
    Number of boundary faces = 29770
Processor 11
    Number of cells = 35368
    Number of faces shared with processor 2 = 1
    Number of faces shared with processor 4 = 24
    Number of faces shared with processor 5 = 1430
    Number of faces shared with processor 7 = 4
    Number of faces shared with processor 8 = 593
    Number of faces shared with processor 10 = 1497
    Number of faces shared with processor 16 = 6
    Number of faces shared with processor 17 = 1100
    Number of processor patches = 8
    Number of processor faces = 4655
    Number of boundary faces = 8624
Processor 12
    Number of cells = 97632
    Number of faces shared with processor 6 = 2172
    Number of faces shared with processor 7 = 2
    Number of faces shared with processor 13 = 3143
    Number of faces shared with processor 15 = 1410
    Number of processor patches = 4
    Number of processor faces = 6727
    Number of boundary faces = 20346
Processor 13
    Number of cells = 31151
    Number of faces shared with processor 6 = 2
    Number of faces shared with processor 7 = 248
    Number of faces shared with processor 12 = 3143
    Number of faces shared with processor 14 = 1952
    Number of faces shared with processor 15 = 7
    Number of faces shared with processor 16 = 682
    Number of faces shared with processor 17 = 3
    Number of processor patches = 7
    Number of processor faces = 6037
    Number of boundary faces = 6840
Processor 14
    Number of cells = 72706
    Number of faces shared with processor 7 = 2
    Number of faces shared with processor 8 = 1197
    Number of faces shared with processor 13 = 1952
    Number of faces shared with processor 17 = 1281
    Number of processor patches = 4
    Number of processor faces = 4432
    Number of boundary faces = 12889
Processor 15
    Number of cells = 71806
    Number of faces shared with processor 6 = 8
    Number of faces shared with processor 9 = 1436
    Number of faces shared with processor 10 = 2
    Number of faces shared with processor 12 = 1410
    Number of faces shared with processor 13 = 7
    Number of faces shared with processor 16 = 3801
    Number of processor patches = 6
    Number of processor faces = 6664
    Number of boundary faces = 17329
Processor 16
    Number of cells = 44918
    Number of faces shared with processor 7 = 2
    Number of faces shared with processor 9 = 5
    Number of faces shared with processor 10 = 575
    Number of faces shared with processor 11 = 6
    Number of faces shared with processor 13 = 682
    Number of faces shared with processor 15 = 3801
    Number of faces shared with processor 17 = 3951
    Number of processor patches = 7
    Number of processor faces = 9022
    Number of boundary faces = 8759
Processor 17
    Number of cells = 75775
    Number of faces shared with processor 8 = 2
    Number of faces shared with processor 10 = 4
    Number of faces shared with processor 11 = 1100
    Number of faces shared with processor 13 = 3
    Number of faces shared with processor 14 = 1281
    Number of faces shared with processor 16 = 3951
    Number of processor patches = 6
    Number of processor faces = 6341
    Number of boundary faces = 15767

Number of processor faces = 54789
Max number of cells = 144720 (120.392% above average 65664.8)
Max number of processor patches = 11 (65% above average 6.66667)
Max number of faces between processors = 9022 (48.2013% above average 6087.67)

Time = 0

The second question is about the simple and hierarchical methods.
If you split like (3 1 2) in xyz, you first split x (into 3), then y (1), then z (2). In the end you also get 6 domains, which should (in my imagination) be the same as with the simple method, or not? A minimal dictionary sketch for this example follows below.
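For reference, a minimal system/decomposeParDict sketch for this (3 1 2) example; the entries follow the standard tutorials, and the values are illustrative only:
Code:
// system/decomposeParDict -- minimal sketch for the (3 1 2) example
numberOfSubdomains  6;

method              hierarchical;   // or: simple

simpleCoeffs
{
    n               (3 1 2);        // 3*1*2 = 6 subdomains
    delta           0.001;
}

hierarchicalCoeffs
{
    n               (3 1 2);
    delta           0.001;
    order           xyz;            // split x first, then y, then z
}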
The third question is about scotch and metis. And finally, one more question.
Thanks in advance and for reading the topic. Any hints and experiences are welcome.
__________________
Keep foaming, Tobias Holzmann

September 9, 2014, 05:30
#2
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 51
Hi all,
today I ran some tests with my colleague on the hierarchical decomposition method. Question two no longer needs an explanation; I had misunderstood something. For all who are interested, here is the answer:
__________________
Keep foaming, Tobias Holzmann

July 13, 2017, 02:37
#3
Member
Sebastian Trunk
Join Date: Mar 2015
Location: Erlangen, Germany
Posts: 60
Rep Power: 11
Hey Tobi,
could you please answer your questions from above, if you know the answers by now? Thanks and best wishes, Sebastian

July 13, 2017, 04:59
#4
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 51
Hello Sebastian,
I don't have all the answers, but I can give you more information.
Answer to question 1:
Information about question 3:
__________________
Keep foaming, Tobias Holzmann

July 13, 2017, 05:03
#5
Member
Sebastian Trunk
Join Date: Mar 2015
Location: Erlangen, Germany
Posts: 60
Rep Power: 11
"Again what learned", as Lothar Matthäus would say!
Thank you very much for your quick answer...

July 30, 2018, 10:45
#6
Senior Member
Gerhard Holzinger
Join Date: Feb 2012
Location: Austria
Posts: 341
Rep Power: 28
Let me share a recent observation of mine.
I simulated axisymmetric gas flow in a long pipe. As the domain is quite large, the simulation was run using several parallel processes. The scotch decomposition created a processor boundary which zig-zags through the pipe. The part of the processor boundary that is parallel to the flow seems to create some disturbance in the pressure field. Luckily, the disturbance does not blow up the simulation, yet it is quite interesting. The attached images show the pressure field; the involved subdomains are shown as white wireframes.

July 30, 2018, 15:56
#7
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 51
There is also a thread about a dam-break case in the German OpenFOAM forum. A guy decomposed the domain and got different results for different decomposition methods. That becomes clear once one thinks about the fluxes which have to be shared at the processor boundaries.
Nice to get the proof again. Thank you, Gerhard.
__________________
Keep foaming, Tobias Holzmann

August 1, 2018, 04:46
#8
Senior Member
Gerhard Holzinger
Join Date: Feb 2012
Location: Austria
Posts: 341
Rep Power: 28
The images I posted are from a case which I am not able to share. Unfortunately, I am not able to reproduce the issue using a simpler, distributable geometry.
A minimal working example (MWE) of the behaviour I observed in my case would be quite interesting, since it is quite odd behaviour.

August 1, 2018, 12:56
#9
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 51
The influence of the decomposition should be stronger for free convection (where the flow can go anywhere). For forced convection, however, where the fluid has a fixed direction, the decomposition should not influence the fields too much.
__________________
Keep foaming, Tobias Holzmann

August 3, 2018, 15:18
#10
Senior Member
Michael Alletto
Join Date: Jun 2018
Location: Bremen
Posts: 616
Rep Power: 16
November 17, 2019, 23:20
Decomposing Mesh for a multiregion domain
#11
New Member
Muhammad Omer Mughal
Join Date: Jul 2010
Location: Singapore
Posts: 22
Rep Power: 16
Dear Tobi and all
I am performing a heat transfer simulation in which I have four regions. When I use the scotch decomposition method for the regions, the mesh is decomposed correctly; however, it does not move forward while performing faceAgglomeration. When I use the simple method with the following coefficients for the two larger regions, while using the scotch method for the other two regions, I get a singular-matrix error.
Code:
numberOfSubdomains 144;
method simple;

simpleCoeffs
{
    n       (16 9 1);   // total must match numberOfSubdomains
    delta   0.001;
}
When I try using the simple method for all regions, with the above coefficients for the larger regions and the following coefficients for the two smaller regions,
Code:
numberOfSubdomains 144;
method simple;

simpleCoeffs
{
    n       (12 12 1);  // total must match numberOfSubdomains
    delta   0.001;
}
I get the following warning, and I also find 0 cells in some of the processors during decomposition:
Code:
FOAM Warning :
    From function Foam::polyMesh::polyMesh(const Foam::IOobject&)
    in file meshes/polyMesh/polyMesh.C at line 330
    no points in mesh
When I try running the solver, it terminates with the following message:
Code:
[57] --> FOAM FATAL ERROR:
[57] (5 20) not found in table.  Valid entries: 847 ( (98 103) (88 104) ....................... ...........................
[57]     From function T& Foam::HashTable<T, Key, Hash>::operator[](const Key&)
[57]     in file OpenFOAM-6/src/OpenFOAM/lnInclude/HashTableI.H at line 117.
Can someone kindly help me to fix this issue?
Last edited by Muhammad Omer Mughal; November 17, 2019 at 23:30. Reason: missed some information
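One thing that may be worth checking - a hedged sketch, not a verified fix: whether decomposePar honours a region-local system/<regionName>/decomposeParDict depends on the OpenFOAM flavour and version - is giving each region its own decomposition settings, e.g. scotch for the small regions so that no processor ends up with zero cells, and then decomposing with decomposePar -allRegions (or decomposePar -region <name> per region):
Code:
// system/<regionName>/decomposeParDict -- per-region sketch (illustrative)
numberOfSubdomains  144;    // must be identical for all regions

method              scotch; // graph-based; avoids the empty processor
                            // domains the simple method can produce in
                            // small or thin regions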
January 23, 2020, 03:19
#12
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,426
Rep Power: 49
I am a bit confused about the results shared here.
In my opinion, a domain decomposition/parallelization strategy should be designed in such a way that it has no influence on the results whatsoever. That would be my first and most important item on a list of prerequisites for any parallelization. Is this a bug in OpenFOAM, or do other CFD packages just do a better job of hiding the influence of domain decomposition?

January 23, 2020, 11:27
#13
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 51
Dear Flotus,
well, as I have no idea how ANSYS and other programs solve the problem, I can only make a rough statement about how I think FOAM does it. Maybe I am wrong here, as I don't know these things in detail and never investigated them. Setting parallelization aside, what we actually do is divide the physical domain into closed single domains that - together - form the whole geometry again. Now the problem is as follows:
Let's imagine two cells that share a face. In the single-core case it is easy to calculate the face value based on the two cells. Now assume that the mesh is split between these cells. As each processor domain is separate, the aforementioned cells no longer know anything about each other; they are only connected via the processor faces. Therefore, we calculate the value at the processor face using only one cell (as we don't know the neighbour cell; it lives in another processor domain). This information is sent to the other mesh, which is solved by the other processor, and this is repeated until we reach the convergence criterion. So it is actually as follows:
Thus, for directed flows this is not a big deal, but for free convection it will influence your solution. Of course, if your decomposition strategy is not well chosen and you get processor boundary faces at - let's say - really bad locations, this will also influence forced-flow cases. Nevertheless, I can personally say that I see decomposition influences, probably for free-convection cases using scotch, as it decomposes your mesh arbitrarily. The error one introduces here depends on the number of decomposed regions and on the way one decomposes (e.g., arbitrarily (scotch) or aligned to the axes (simple, hierarchical)). I hope it became a bit clearer. Whether other software uses other strategies, I have no idea. PS: I might look into the processor boundary condition to check how it actually works.
__________________
Keep foaming, Tobias Holzmann
Last edited by Tobi; January 23, 2020 at 12:49.

January 23, 2020, 12:41
#14
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,426
Rep Power: 49
What you describe sounds like ordinary domain decomposition.
If a cell on a domain boundary needs information from an adjacent cell which resides in a different domain, then the parallelization needs to provide this information, e.g. via MPI transfers. That is what is usually done when parallelizing a code using domain decomposition. A very intuitive way to achieve this is to add the adjacent cells from the neighbour domain to the original domain, sometimes referred to as "ghost cells". They don't update their own values; they just provide values for updating the regular cells of each domain. I thought this was the standard way of handling domain decomposition, which avoids reverting to lower-order methods.
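To make the ghost-cell idea concrete, here is a toy sketch in plain standalone C++ (not OpenFOAM code; the 1D explicit diffusion setup, cell counts, and coefficients are invented for illustration). Each of the two subdomains carries one ghost cell that is filled with the neighbouring subdomain's adjacent cell value before every update, which reproduces the serial face stencil:
Code:
#include <array>
#include <cstdio>

int main()
{
    constexpr double dx = 0.1, dt = 1e-4, alpha = 1.0;

    // Indices 0 and 6 are ghost cells; 1..5 are the real cells of each subdomain.
    std::array<double, 7> left{}, right{};
    for (int i = 1; i <= 5; ++i) { left[i] = 1.0; right[i] = 0.0; }  // step IC

    // Explicit 1D diffusion update on the real cells of one subdomain.
    auto update = [&](std::array<double, 7>& T)
    {
        std::array<double, 7> Tn = T;
        for (int i = 1; i <= 5; ++i)
            Tn[i] = T[i] + alpha*dt/(dx*dx)*(T[i+1] - 2.0*T[i] + T[i-1]);
        T = Tn;
    };

    for (int step = 0; step < 1000; ++step)
    {
        // The "MPI exchange": each ghost cell receives the neighbour's value,
        // so the flux across the subdomain boundary uses both real cells.
        left[6]  = right[1];
        right[0] = left[5];
        left[0]  = left[1];    // zero-gradient outer boundaries
        right[6] = right[5];

        update(left);
        update(right);
    }

    for (int i = 1; i <= 5; ++i) std::printf("%.4f ", left[i]);
    for (int i = 1; i <= 5; ++i) std::printf("%.4f ", right[i]);
    std::printf("\n");
    return 0;
}
For an explicit update like this, the two-subdomain run reproduces the serial 10-cell run exactly. With implicit iterative solvers the decomposition can still affect the preconditioner and hence the iteration path, which is closer to the differences discussed in this thread.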
January 23, 2020, 12:48
#15
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 51
Hi Alex,
well, I have to say, I don't know whether FOAM does it like that. Here one would have to look into the processor boundary condition in order to make a clear statement. I can't do that at the moment, and I have added a hint to my previous post.
__________________
Keep foaming, Tobias Holzmann

January 23, 2020, 15:07
#16
Senior Member
Michael Alletto
Join Date: Jun 2018
Location: Bremen
Posts: 616
Rep Power: 16
These slides provide some explanation of how parallelization is done in OF: https://www.google.com/url?sa=t&sour...a1AouhvqjcH3Ih

January 23, 2020, 15:25
#17
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Bad Wörishofen
Posts: 2,711
Blog Entries: 6
Rep Power: 51
Thanks for the link.
Summary: I was wrong. We do share the neighbour cell values, if I got the presentation right.
__________________
Keep foaming, Tobias Holzmann

January 24, 2020, 05:42
#18
Senior Member
Michael Alletto
Join Date: Jun 2018
Location: Bremen
Posts: 616
Rep Power: 16
I understood the presentation to mean that the processor patch is treated as a boundary condition.
If we look at the source code (https://www.openfoam.com/documentati...8C_source.html), we find an evaluate() function. This function is called to set the boundary conditions for the fields which are solved by the fvMatrix class; it is actually called by the function correctBoundaryConditions(). For a deeper explanation see this thread: updateCoeffs() and evaluate(). The correctBoundaryConditions() function is directly called by the fvMatrix solver when solving the matrix (see e.g. https://www.openfoam.com/documentati...ve_8C_source.c), so I guess that, depending on the operator (div or laplacian), the processor patch is responsible for evaluating the fluxes on the patch.

January 19, 2021, 21:51
#19
New Member
victor
Join Date: Nov 2015
Location: pku,china
Posts: 5
Rep Power: 10
Dear Foamers,
I am currently working on the wide-gap Taylor-Couette flow (eta = 0.5) at a Reynolds number of 475, and the number of vortices varies with the number of processors and the time step. In the work of Razzak, 6 vortices were found at a Reynolds number of 475 (https://doi.org/10.1063/1.5125640). However, in my study the number of vortices is 6 when using 280 processors, 8 when using 240 processors, and 10 when using 360 processors. The OpenFOAM version is OpenFOAM 5. The decomposition method used here is scotch; similar results were observed with the simple and hierarchical methods. So my question is whether the decomposition methods in OF are suited to such a low Reynolds number. Have you ever met such an issue, where the flow structure varies with the number of processors and the time step? Maybe in turbulent flow the numerical dissipation induced by the parallel decomposition would be less significant. Thanks in advance.
Different flow structures obtained due to the number of processors and time-steps

January 22, 2021, 05:13
#20
Senior Member
It is well established that the accuracy of the domain decomposition preconditioner decreases as the number of subdomains increases (see e.g. [1], [2]).
I am unaware of how this affects the number of vortices; I can imagine, however, that there is some link. Do you monitor the residuals in your simulations? Are you able to enforce the same accuracy at each step of the segregated solver, independent of the number of processors?
[1]: https://books.google.nl/books?id=dxw...sition&f=false
[2]: ddm.org
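As a sketch of that last point: in system/fvSolution one can force every solve to converge to the same absolute residual independent of the decomposition, by setting an absolute tolerance and disabling the relative one. The solver and preconditioner choices below are placeholders; with more subdomains the solver may simply need more iterations to reach the same target:
Code:
// system/fvSolution -- sketch: decomposition-independent stopping criterion
solvers
{
    p
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-08;  // absolute residual target
        relTol          0;      // no early exit on relative reduction
    }
}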