# Decomposing meshes


September 8, 2014, 12:24
Decomposing meshes
#1
Super Moderator

Tobias Holzmann
Join Date: Oct 2010
Location: Tussenhausen
Posts: 2,708
Blog Entries: 6
Rep Power: 51

Hi all,

I have a few questions about the decomposition process.

The first question is about the decomposition output and general behaviour. As far as I understand, the main goal is to reduce the number of faces between the processors as much as possible, while at the same time not ending up with 1000 faces shared with processor 1 but only 2 faces shared with processor 2 (seen from processor 3). Is that correct? If so, the following decomposition should be very bad and slow down the run considerably, due to the poor face distribution (note the many processor patches with only a handful of shared faces):

Code:
```
Processor 0
    Number of cells = 79703
    Number of faces shared with processor 1 = 2347
    Number of faces shared with processor 3 = 1252
    Number of faces shared with processor 6 = 1983
    Number of faces shared with processor 7 = 4
    Number of faces shared with processor 9 = 6
    Number of processor patches = 5
    Number of processor faces = 5592
    Number of boundary faces = 13097

Processor 1
    Number of cells = 30927
    Number of faces shared with processor 0 = 2347
    Number of faces shared with processor 2 = 2691
    Number of faces shared with processor 3 = 5
    Number of faces shared with processor 4 = 605
    Number of faces shared with processor 5 = 1
    Number of faces shared with processor 6 = 6
    Number of faces shared with processor 7 = 1223
    Number of faces shared with processor 8 = 35
    Number of faces shared with processor 9 = 1
    Number of processor patches = 9
    Number of processor faces = 6914
    Number of boundary faces = 6206

Processor 2
    Number of cells = 98548
    Number of faces shared with processor 1 = 2691
    Number of faces shared with processor 4 = 6
    Number of faces shared with processor 5 = 1538
    Number of faces shared with processor 8 = 2181
    Number of faces shared with processor 11 = 1
    Number of processor patches = 5
    Number of processor faces = 6417
    Number of boundary faces = 17893

Processor 3
    Number of cells = 72934
    Number of faces shared with processor 0 = 1252
    Number of faces shared with processor 1 = 5
    Number of faces shared with processor 4 = 2949
    Number of faces shared with processor 6 = 1
    Number of faces shared with processor 9 = 1460
    Number of processor patches = 5
    Number of processor faces = 5667
    Number of boundary faces = 18218

Processor 4
    Number of cells = 45303
    Number of faces shared with processor 1 = 605
    Number of faces shared with processor 2 = 6
    Number of faces shared with processor 3 = 2949
    Number of faces shared with processor 5 = 3615
    Number of faces shared with processor 9 = 28
    Number of faces shared with processor 10 = 1574
    Number of faces shared with processor 11 = 24
    Number of processor patches = 7
    Number of processor faces = 8801
    Number of boundary faces = 8444

Processor 5
    Number of cells = 66574
    Number of faces shared with processor 1 = 1
    Number of faces shared with processor 2 = 1538
    Number of faces shared with processor 4 = 3615
    Number of faces shared with processor 10 = 5
    Number of faces shared with processor 11 = 1430
    Number of processor patches = 5
    Number of processor faces = 6589
    Number of boundary faces = 14245

Processor 6
    Number of cells = 38329
    Number of faces shared with processor 0 = 1983
    Number of faces shared with processor 1 = 6
    Number of faces shared with processor 3 = 1
    Number of faces shared with processor 7 = 1036
    Number of faces shared with processor 9 = 700
    Number of faces shared with processor 12 = 2172
    Number of faces shared with processor 13 = 2
    Number of faces shared with processor 15 = 8
    Number of processor patches = 8
    Number of processor faces = 5908
    Number of boundary faces = 6084

Processor 7
    Number of cells = 96970
    Number of faces shared with processor 0 = 4
    Number of faces shared with processor 1 = 1223
    Number of faces shared with processor 6 = 1036
    Number of faces shared with processor 8 = 1488
    Number of faces shared with processor 9 = 4
    Number of faces shared with processor 10 = 593
    Number of faces shared with processor 11 = 4
    Number of faces shared with processor 12 = 2
    Number of faces shared with processor 13 = 248
    Number of faces shared with processor 14 = 2
    Number of faces shared with processor 16 = 2
    Number of processor patches = 11
    Number of processor faces = 4606
    Number of boundary faces = 20630

Processor 8
    Number of cells = 45017
    Number of faces shared with processor 1 = 35
    Number of faces shared with processor 2 = 2181
    Number of faces shared with processor 7 = 1488
    Number of faces shared with processor 11 = 593
    Number of faces shared with processor 14 = 1197
    Number of faces shared with processor 17 = 2
    Number of processor patches = 6
    Number of processor faces = 5496
    Number of boundary faces = 8169

Processor 9
    Number of cells = 33585
    Number of faces shared with processor 0 = 6
    Number of faces shared with processor 1 = 1
    Number of faces shared with processor 3 = 1460
    Number of faces shared with processor 4 = 28
    Number of faces shared with processor 6 = 700
    Number of faces shared with processor 7 = 4
    Number of faces shared with processor 10 = 910
    Number of faces shared with processor 15 = 1436
    Number of faces shared with processor 16 = 5
    Number of processor patches = 9
    Number of processor faces = 4550
    Number of boundary faces = 7381

Processor 10
    Number of cells = 144720
    Number of faces shared with processor 4 = 1574
    Number of faces shared with processor 5 = 5
    Number of faces shared with processor 7 = 593
    Number of faces shared with processor 9 = 910
    Number of faces shared with processor 11 = 1497
    Number of faces shared with processor 15 = 2
    Number of faces shared with processor 16 = 575
    Number of faces shared with processor 17 = 4
    Number of processor patches = 8
    Number of processor faces = 5160
    Number of boundary faces = 29770

Processor 11
    Number of cells = 35368
    Number of faces shared with processor 2 = 1
    Number of faces shared with processor 4 = 24
    Number of faces shared with processor 5 = 1430
    Number of faces shared with processor 7 = 4
    Number of faces shared with processor 8 = 593
    Number of faces shared with processor 10 = 1497
    Number of faces shared with processor 16 = 6
    Number of faces shared with processor 17 = 1100
    Number of processor patches = 8
    Number of processor faces = 4655
    Number of boundary faces = 8624

Processor 12
    Number of cells = 97632
    Number of faces shared with processor 6 = 2172
    Number of faces shared with processor 7 = 2
    Number of faces shared with processor 13 = 3143
    Number of faces shared with processor 15 = 1410
    Number of processor patches = 4
    Number of processor faces = 6727
    Number of boundary faces = 20346

Processor 13
    Number of cells = 31151
    Number of faces shared with processor 6 = 2
    Number of faces shared with processor 7 = 248
    Number of faces shared with processor 12 = 3143
    Number of faces shared with processor 14 = 1952
    Number of faces shared with processor 15 = 7
    Number of faces shared with processor 16 = 682
    Number of faces shared with processor 17 = 3
    Number of processor patches = 7
    Number of processor faces = 6037
    Number of boundary faces = 6840

Processor 14
    Number of cells = 72706
    Number of faces shared with processor 7 = 2
    Number of faces shared with processor 8 = 1197
    Number of faces shared with processor 13 = 1952
    Number of faces shared with processor 17 = 1281
    Number of processor patches = 4
    Number of processor faces = 4432
    Number of boundary faces = 12889

Processor 15
    Number of cells = 71806
    Number of faces shared with processor 6 = 8
    Number of faces shared with processor 9 = 1436
    Number of faces shared with processor 10 = 2
    Number of faces shared with processor 12 = 1410
    Number of faces shared with processor 13 = 7
    Number of faces shared with processor 16 = 3801
    Number of processor patches = 6
    Number of processor faces = 6664
    Number of boundary faces = 17329

Processor 16
    Number of cells = 44918
    Number of faces shared with processor 7 = 2
    Number of faces shared with processor 9 = 5
    Number of faces shared with processor 10 = 575
    Number of faces shared with processor 11 = 6
    Number of faces shared with processor 13 = 682
    Number of faces shared with processor 15 = 3801
    Number of faces shared with processor 17 = 3951
    Number of processor patches = 7
    Number of processor faces = 9022
    Number of boundary faces = 8759

Processor 17
    Number of cells = 75775
    Number of faces shared with processor 8 = 2
    Number of faces shared with processor 10 = 4
    Number of faces shared with processor 11 = 1100
    Number of faces shared with processor 13 = 3
    Number of faces shared with processor 14 = 1281
    Number of faces shared with processor 16 = 3951
    Number of processor patches = 6
    Number of processor faces = 6341
    Number of boundary faces = 15767

Number of processor faces = 54789
Max number of cells = 144720 (120.392% above average 65664.8)
Max number of processor patches = 11 (65% above average 6.66667)
Max number of faces between processors = 9022 (48.2013% above average 6087.67)

Time = 0
```

The second question is about simple and hierarchical. It is known that hierarchical is just an extended simple algorithm in which you can specify which axis is decomposed first (e.g. zxy). But what advantage does that give you? If you decompose (3 1 2) in the order zxy, you split first z (into 2), then x (into 3) and then y (1 = no split); in the end you have 6 domains. If you split (3 1 2) in the order xyz, you split first x (into 3), then y (1) and then z (into 2); in the end you also get 6 domains, which should (in my imagination) be the same as above, or not? (A decomposeParDict sketch for this setup is at the end of this post.)

The third question is about scotch and metis. In the forum you find a few threads about that topic. Metis and scotch are decomposition algorithms that attempt to minimise the processor boundary faces between the cores by using special algorithms, aren't they? Therefore you should be able to get a better (more balanced) mesh by using these methods instead of simple or hierarchical. Since at the moment the metis library is not available (unless you compile it yourself), you can use the scotch method instead. In the file scotchDecomp.C there are some hints about how the algorithm works, but unfortunately I don't understand them. Is there a paper or other literature about this method?

Finally, some more questions. The more cores you have, the more neighbour faces you get. So if you get an output like the one above, could it be better to reduce the number of cores to get a better decomposition, and perhaps even to speed up the simulation thanks to better CPU communication? How can you check whether your mesh is decomposed well: only with the decomposePar output? And what can I derive from the summary lines at the end of the code block above?

Thanks in advance and for reading the topic. Any hints and experiences are welcome.
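For question two, this is the kind of setup I mean; a minimal decomposeParDict sketch with illustrative values (the entries are the standard keywords, but the numbers are not from the 18-processor case above):

Code:
```
numberOfSubdomains 6;

method          hierarchical;

hierarchicalCoeffs
{
    n           (3 1 2);    // subdomains in x, y, z; product must equal numberOfSubdomains
    delta       0.001;      // cell skew factor
    order       zxy;        // split along z first, then x, then y
}
```

Changing order from zxy to xyz keeps the same (3 1 2) counts but changes which direction is cut first.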
makaveli_lcf, keepfit and Minghao_Li like this. __________________ Keep foaming, Tobias Holzmann

September 9, 2014, 05:30
#2
Super Moderator

Tobias Holzmann
Join Date: Oct 2010
Location: Tussenhausen
Posts: 2,708
Blog Entries: 6
Rep Power: 51

Hi all,

today I made some tests with my colleague using the decomposition method hierarchical. Question two no longer needs an answer; I had misunderstood something. For all who are interested, here is the explanation: hierarchical and simple split your mesh into regions with the same number of cells. I thought the splitting was done at the geometric centre, i.e. based on the length of the geometry: for a pipe with a length of 2 m, I expected a splitting of (2 1 1) (xyz) to cut the mesh in the middle, at 1 m. That is correct if you have the same number of cells on each side. But if you have, e.g., a refinement region on one side, the mesh will not be split in the middle, because the cell counts differ; it is split such that both processors get the same number of cells (see the small sketch below).

vigneshTG, vivek05, Owais Shabbir and 2 others like this.
__________________
Keep foaming, Tobias Holzmann
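A small sketch of that balancing logic, with made-up slice counts (plain C++ for illustration only; the numbers and the one-dimensional slice layout are invented, not taken from an actual case): a 2 m pipe refined on one side is cut where the cell counts balance, not at the 1 m mark.

Code:
```
// Hypothetical sketch: why a (2 1 1) split is not at the geometric middle.
// Given cell counts per x-slice (more cells where the mesh is refined),
// find the cut that balances cell counts, not length.
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    // 10 slices of a 2 m pipe; the first half is refined (more cells).
    std::vector<int> cellsPerSlice = {40, 40, 40, 40, 40, 10, 10, 10, 10, 10};
    const int total = std::accumulate(cellsPerSlice.begin(), cellsPerSlice.end(), 0);

    // Walk along the pipe until half of the cells are on the left side.
    int sum = 0, cut = 0;
    while (sum < total / 2) sum += cellsPerSlice[cut++];

    std::cout << "Cut after slice " << cut << " of 10 => x = "
              << cut * 0.2 << " m (not 1.0 m)\n";
    return 0;
}
```

Running it prints a cut at x = 0.8 m, well away from the geometric middle.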

July 13, 2017, 02:37
#3
Member

Sebastian Trunk
Join Date: Mar 2015
Location: Erlangen, Germany
Posts: 60
Rep Power: 11

Hey Tobi,

could you please answer your questions from above, if you know the answers by now?

Thanks and best wishes
Sebastian

July 13, 2017, 05:03
#5
Member

Sebastian Trunk
Join Date: Mar 2015
Location: Erlangen, Germany
Posts: 60
Rep Power: 11

"Again what learned", as Lothar Matthäus would say! Thank you very much for your quick answer...

July 30, 2018, 10:45
#6
Senior Member

Gerhard Holzinger
Join Date: Feb 2012
Location: Austria
Posts: 339
Rep Power: 28
Let me share a recent observation of mine.

I simulated axi-symmetric gas flow in a long pipe. As the domain is quite large, the simulation was run using several parallel processes. The scotch decomposition created a processor boundary which zig-zags through the pipe.

The part of the processor boundary which is parallel to the flow seems to create some disturbance in the pressure field. Luckily, the disturbance does not blow up the simulation, yet it's quite interesting.

The attached images show the pressure field. The involved subdomains are shown as white wireframes.
Attached Images:
pressureField.jpg
pressureField_wSubdomain01.jpg
pressureField_wSubdomain02.jpg

July 30, 2018, 15:56
#7
Super Moderator

Tobias Holzmann
Join Date: Oct 2010
Location: Tussenhausen
Posts: 2,708
Blog Entries: 6
Rep Power: 51

There is also a dam-break topic in the German OpenFOAM forum. A guy decomposed the domain and got different results for different decomposition methods. That is clear if one thinks about the fluxes which have to be shared at the processor boundaries. Nice to get the proof again. Thank you, Gerhard.
__________________
Keep foaming, Tobias Holzmann

August 1, 2018, 04:46
#8
Senior Member

Gerhard Holzinger
Join Date: Feb 2012
Location: Austria
Posts: 339
Rep Power: 28

The images I posted are from a case which I am not able to share. Unfortunately, I am not able to reproduce the issue using a simpler, distributable geometry. A minimal working example (MWE) for the behaviour I observed in my case would be quite interesting, since it is rather odd behaviour.

August 1, 2018, 12:56
#9
Super Moderator

Tobias Holzmann
Join Date: Oct 2010
Location: Tussenhausen
Posts: 2,708
Blog Entries: 6
Rep Power: 51

The influence of the decomposition should be stronger for free convection (where the flow can go anywhere). For forced convection, where the fluid has a fixed direction, the decomposition should not influence the fields too much.

saidc. likes this.
__________________
Keep foaming, Tobias Holzmann

August 3, 2018, 15:18
#10
Senior Member

Michael Alletto
Join Date: Jun 2018
Location: Bremen
Posts: 615
Rep Power: 15
Quote:
Originally Posted by GerhardHolzinger
Let me share a recent observation of mine. I simulated axi-symmetric gas flow in a long pipe. As the domain is quite large, the simulation was run using several parallel processes. The scotch decomposition created a processor boundary which zig-zags through the pipe. The part of the processor boundary which is parallel to the flow seems to create some disturbance in the pressure field. Luckily, the disturbance does not blow up the simulation, yet it's quite interesting. The attached images show the pressure field. The involved subdomains are shown as white wireframes.

It seems something like the checkerboard effect that appears when pressure and velocity are decoupled.

November 17, 2019, 23:20
Decomposing Mesh for a multiregion domain
#11
New Member

Muhammad Omer Mughal
Join Date: Jul 2010
Location: Singapore
Posts: 22
Rep Power: 15

Dear Tobi and all,

I am performing a heat transfer simulation in which I have four regions. When I use the scotch method of decomposition for the regions, the mesh is decomposed correctly; however, it does not move forward while performing faceAgglomeration. When I use the simple method with the following coefficients for two of the larger regions, while using the scotch method for the other two regions, I get a singular matrix error.

Code:
```
numberOfSubdomains 144;
method simple;

simpleCoeffs
{
    n       (16 9 1);   // total must match numberOfSubdomains
    delta   0.001;
}
```

When I try using the simple method for all regions, with the coefficients above for the larger regions and the following coefficients for the two smaller regions,

Code:
```
numberOfSubdomains 144;
method simple;

simpleCoeffs
{
    n       (12 12 1);  // total must match numberOfSubdomains
    delta   0.001;
}
```

I get the following warning, and I also find 0 cells in some of the processors after decomposition:

Code:
```
FOAM Warning :
    From function Foam::polyMesh::polyMesh(const Foam::IOobject&)
    in file meshes/polyMesh/polyMesh.C at line 330
    no points in mesh
```

When I try running the solver, it terminates with the following message:

Code:
```
[57] --> FOAM FATAL ERROR:
[57] (5 20) not found in table.  Valid entries:
847
(
(98 103)
(88 104)
...
)
[57]     From function T& Foam::HashTable::operator[](const Key&) [with T = double; Key = Foam::edge; Hash = Foam::Hash]
[57]     in file OpenFOAM-6/src/OpenFOAM/lnInclude/HashTableI.H at line 117.
```

Can someone kindly help me to fix this issue?

Last edited by Muhammad Omer Mughal; November 17, 2019 at 23:30. Reason: missed some information

January 23, 2020, 03:19
#12
Super Moderator

Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,398
Rep Power: 46

I am a bit confused about the results shared here. In my opinion, a domain decomposition/parallelization strategy should be designed in a way that it has no influence on the results whatsoever. That would be my first and most important item on a list of prerequisites for any parallelization. Is this a bug in OpenFOAM, or do other CFD packages just do a better job of hiding the influence of domain decomposition?

January 23, 2020, 12:41
#14
Super Moderator

Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,398
Rep Power: 46

What you describe sounds like ordinary domain decomposition. If a cell on a domain boundary needs information from an adjacent cell which resides in a different domain, then the parallelization needs to provide this information, e.g. via MPI transfers. That is what is usually done when parallelizing a code using domain decomposition. A very intuitive way to achieve this is to add the adjacent cells from the neighbour domain to the original domain; these are sometimes referred to as "ghost cells". They don't update their own values, they just provide values for updating the regular cells of each domain (see the sketch below). I thought this was the standard way of handling domain decomposition, which avoids reverting back to lower-order methods.
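To make that concrete, here is a minimal, generic halo-exchange sketch in MPI (my own illustration, not OpenFOAM code; the layout of one ghost cell per side is just an assumption of the example): each rank owns a strip of cells and swaps its outermost interior values with its neighbours, so the ghost cells always mirror the neighbour's boundary cells.

Code:
```
// Minimal 1D halo exchange: each rank owns n interior cells plus one
// ghost cell at each end (u[0] and u[n+1]).
#include <mpi.h>
#include <vector>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 4;                     // interior cells per rank
    std::vector<double> u(n + 2, rank);  // filled with the rank id for demonstration

    // Physical boundaries have no neighbour; MPI_PROC_NULL turns the
    // corresponding transfer into a no-op.
    const int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    const int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    // Send the first interior cell to the left neighbour; receive the
    // right neighbour's first interior cell into the right ghost cell.
    MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, left,  0,
                 &u[n + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // Send the last interior cell to the right neighbour; receive the
    // left neighbour's last interior cell into the left ghost cell.
    MPI_Sendrecv(&u[n], 1, MPI_DOUBLE, right, 1,
                 &u[0], 1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // The ghost cells now hold the neighbours' values, so gradients and
    // fluxes at the domain boundary can be computed locally.
    MPI_Finalize();
    return 0;
}
```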

January 23, 2020, 12:48
#15
Super Moderator

Tobias Holzmann
Join Date: Oct 2010
Location: Tussenhausen
Posts: 2,708
Blog Entries: 6
Rep Power: 51

Hi Alex, well, I have to say I don't know if FOAM is doing it like that. One would have to investigate the processor boundary condition in order to make a clear statement. I can't do that, and I added a hint to my previous post.
__________________
Keep foaming, Tobias Holzmann

January 23, 2020, 15:07
#16
Senior Member

Michael Alletto
Join Date: Jun 2018
Location: Bremen
Posts: 615
Rep Power: 15

These slides provide some explanation of how parallelization is done in OpenFOAM: https://www.google.com/url?sa=t&sour...a1AouhvqjcH3Ih

January 23, 2020, 15:25
#17
Super Moderator

Tobias Holzmann
Join Date: Oct 2010
Location: Tussenhausen
Posts: 2,708
Blog Entries: 6
Rep Power: 51

Thanks for the link. Summary: I was wrong. We share the values of the neighbour cells, if I got the presentation right.
__________________
Keep foaming, Tobias Holzmann

January 24, 2020, 05:42
#18
Senior Member

Michael Alletto
Join Date: Jun 2018
Location: Bremen
Posts: 615
Rep Power: 15

I understood the presentation to say that the processor patch is treated as a boundary condition. If we look at the source code (https://www.openfoam.com/documentati...8C_source.html), we find an evaluate() function. This function is called to set the boundary conditions for the fields which are solved by the fvMatrix class; actually, it is called by the function correctBoundaryConditions(). For a deeper explanation see this thread: updateCoeffs() and evaluate(). The correctBoundaryConditions() function is called directly by the fvMatrix solver when solving the matrix (see e.g. https://www.openfoam.com/documentati...ve_8C_source.c), so I guess that, depending on the operator (div or laplacian), the processor patch is responsible for evaluating the fluxes on the patch (a schematic sketch is below).
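A bare-bones schematic of that call chain as I understand it (invented class and function names, simplified structure; not the real OpenFOAM sources): the field asks each coupled patch to first send and then receive, so a processor patch can behave like an internal face during the solve.

Code:
```
// Schematic only: the logic of correctBoundaryConditions() for coupled
// (processor) patches, with all OpenFOAM details stripped away.
#include <vector>

struct ProcessorPatchField
{
    std::vector<double> patchInternal;    // our cells adjacent to the patch
    std::vector<double> neighbourValues;  // filled from the other rank

    // Post the send of our patch-adjacent cell values (non-blocking).
    void initEvaluate() { /* send patchInternal to the neighbour rank */ }

    // Receive the neighbour's values; face values on the patch are then
    // built from both sides, like on an internal face.
    void evaluate()     { /* receive into neighbourValues and apply */ }
};

struct Field
{
    std::vector<ProcessorPatchField> procPatches;

    // Called around the matrix solve: all sends are posted first so the
    // exchanges of the different patches can overlap.
    void correctBoundaryConditions()
    {
        for (auto& p : procPatches) p.initEvaluate();
        for (auto& p : procPatches) p.evaluate();
    }
};
```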

January 19, 2021, 21:51
#19
New Member

victor
Join Date: Nov 2015
Location: pku, china
Posts: 5
Rep Power: 10

Dear Foamers,

I am currently working on wide-gap Taylor-Couette flow (eta = 0.5) at a Reynolds number of 475, and the number of vortices varies with the number of processors and the time step. In the work of Razzak, six vortices were found at a Reynolds number of 475 (https://doi.org/10.1063/1.5125640). In my study, however, I get 6 vortices with 280 processors, 8 vortices with 240 processors and 10 vortices with 360 processors. The OpenFOAM version is openfoam5. The decomposition method used here is scotch; similar results were observed with the simple and hierarchical methods. So my question is whether the decomposition methods in OF are suited to such a low Reynolds number. Have you ever met such an issue, where the flow structure varies with the number of processors and the time step? Maybe in turbulent flow the numerical dissipation induced by the parallel decomposition is less significant. Thanks in advance.

Attached: different flow structures obtained for different numbers of processors and time steps.

January 22, 2021, 05:13
#20
Senior Member

Domenico Lahaye
Join Date: Dec 2013
Posts: 722
Blog Entries: 1
Rep Power: 17

It is well established that the accuracy of the domain decomposition preconditioner decreases as the number of subdomains increases (see e.g. [1], [2]). I am unaware of how this affects the number of vortices; I can imagine, however, that there is some link. Do you monitor the residuals in your simulations? Are you able to enforce the same accuracy at each step of the segregated solver, independent of the number of processors?

[1]: https://books.google.nl/books?id=dxw...sition&f=false
[2]: ddm.org
