
Domain decomposition & processor output


Old   November 4, 2011, 09:42
Question Domain decomposition & processor output
  #1
New Member
 
Join Date: Jun 2010
Location: Germany
Posts: 13
Rep Power: 15
marango is on a distinguished road
Hello everybody,

I have a problem understanding how OpenFOAM works. I'd like to find out how the domain decomposition is performed when running in parallel.

The main focus is on the processor boundary condition. My guess is that some kind of Schur complement method is used for this BC, but I'm not certain.

So can anyone tell me more about how this decomposition is done and how the values at the processor BC are calculated?

Since I can't really find an answer in the source code, I'm trying to look at the matrices. So I took a look at fvMatrix.C and added a line like

Info << *this;

but unfortunately ONLY the output of the master processor is shown. How can I get the output of the other processors?

Thanks in advance :-)

Old   November 4, 2011, 11:03
Default
  #2
Senior Member
 
Pablo Higuera
Join Date: Jan 2011
Location: Auckland
Posts: 627
Rep Power: 19
Phicau is on a distinguished road
Hi

Use cout instead of Info.

Regards

Old   November 7, 2011, 03:22
Default
  #3
New Member
 
Join Date: Jun 2010
Location: Germany
Posts: 13
Rep Power: 15
marango is on a distinguished road
Thanks, Phicau, for your reply.

I've already tried "cout" and even "cerr", but then I get an error like

error: no match for ‘operator<<’ in ‘std::operator<< [with _Traits = std::char_traits<char>](((std::basic_ostream<char>&)((std::basic_ostream<char>*)std::operator<< [with _Traits = std::char_traits<char>](((std::basic_ostream<char>&)(&std::cout))

So unfortunately it doesn't work. :-( Do you have any other hint on how I can get the matrix coefficients via cout/cerr?

Old   November 7, 2011, 03:40
Default
  #4
Senior Member
 
akidess's Avatar
 
Anton Kidess
Join Date: May 2009
Location: Germany
Posts: 1,377
Rep Power: 29
akidess will become famous soon enough
I think "Pout" will give you what you are looking for.
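
A minimal sketch of the difference, assuming the print statement sits inside an fvMatrix member function as in the original post:

Code:
// Info writes only on the master rank, so in parallel you see one matrix.
// Pout writes on every rank and prefixes each line with its processor number.
Info << "matrix on master: " << nl << *this << endl;
Pout << "matrix on this processor: " << nl << *this << endl;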

Old   November 7, 2011, 06:05
Default
  #5
New Member
 
Join Date: Jun 2010
Location: Germany
Posts: 13
Rep Power: 15
marango is on a distinguished road
Thanks, akidess. That was a good hint. Now I can see what every processor does.

However, I'm a bit confused about the matrices. I'm trying to solve the homogeneous Poisson equation. The domain is a simple cube with an edge length of 2 m. This cube is split into 8 cells of size 1x1x1 m.

When I run on a single processor the matrix is

A =
 6 -2 -2  0 -2  0  0  0
-2  6  0 -2  0 -2  0  0
-2  0  6 -2  0  0 -2  0
 0 -2 -2  6  0  0  0 -2
-2  0  0  0  6 -2 -2  0
 0 -2  0  0 -2  6  0 -2
 0  0 -2  0 -2  0  6 -2
 0  0  0 -2  0 -2 -2  6

and running it on two processors (domain split in the x-direction) I receive on each processor

B =
 4 -2 -2  0
-2  4  0 -2
-2  0  4 -2
 0 -2 -2  4
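
(An aside on where coefficients of this shape usually come from; a sketch assuming the standard Gauss/linear discretisation of the Laplacian on an orthogonal mesh, and taking the face coefficient of 2 as given from the output above. Each internal face f between owner cell P and neighbour cell N contributes

|a_N| = Gamma_f * |S_f| / |d_PN|,   a_P = - sum_N a_N + boundary diagonal contributions.

With three internal neighbours per cell and no boundary contribution to the diagonal this gives rows with diagonal 3 * 2 = 6 and three off-diagonal entries of -2, matching the 8x8 matrix above. In the per-processor print the diagonal is 4 = 2 * 2, i.e. only the two neighbours that live on the same processor appear there.)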

So my question is again: how is this decomposition done, and how are the values on the processor boundaries calculated?

Old   November 8, 2011, 06:06
Default
  #6
Senior Member
 
Eugene de Villiers
Join Date: Mar 2009
Posts: 725
Rep Power: 21
eugene is on a distinguished road
What do you mean by "how is the decomposition done"? Are you asking how the partitioning is specified, or something else?

Communication is via interfaces. Check fvMatrixSolve.C from line 111. The update functions come from OpenFOAM --> lduMatrixUpdateMatrixInterfaces.C --> lduInterfaceField --> it gets complicated (processorFvPatch eventually).

Hope this helps.

Old   November 8, 2011, 08:00
Default
  #7
New Member
 
Join Date: Jun 2010
Location: Germany
Posts: 13
Rep Power: 15
marango is on a distinguished road
Thanks for your reply, eugene. I will take a look at fvMatrixSolve.C.


Okay, I'll try to explain in detail what I want to know.


We'd like to solve the pde

- laplacian(u) = 0.

We have the cube as our domain. For the parallel computation we split this domain into two subdomains (one for each processor) and solve the pde on each subdomain, right? After this split we have a new "boundary" (the processor boundary), which was an internal face before. Using two processors, the equations

B_1 u_1 = b_1 and B_2 u_2 = b_2

are solved (on the respective subdomain) with e.g. PCG, and the solution u is reconstructed for the whole domain. Due to our chosen (sub)domains the matrices are the same, i.e. B_1 = B_2 = B.


If we solved the pde on a single processor only, the fvMatrix would be the matrix A (posted above), but running in parallel yields matrix B.


So my questions are:

1. Is the processor BC in fact a fixedValue BC, and if so, how are the values calculated?

2. How do I get from matrix A to matrix B, or are they independent of each other?


Possible methods to perform this split into two subproblems are Schur complement methods, as described here:

http://en.wikipedia.org/wiki/Schur_complement_method

But I don't know if these are applied in OF.

I hope it's now clear what I want to know.

Old   November 9, 2011, 07:01
Default
  #8
Senior Member
 
Eugene de Villiers
Join Date: Mar 2009
Posts: 725
Rep Power: 21
eugene is on a distinguished road
I read the first paragraph of the Schur page - no, this is in no way similar to FOAM, other than that neither uses overlapping elements.

The processor BC is not the same as a fixed value. The processor boundary provides a communications framework for the "halo" cells and then directly modifies the solution matrix during each inner loop of the solver to account for their influence. It does this by calculating a composite coefficient for each processor neighbour and then adding the product of the coefficient and the value of the neighbour field from the previous inner iteration. As such it should be identical to a normal solver sweep, except that the localised matrix preconditioning will be different on the sub-domain.

After decomposition, FOAM solves two separate matrices. There is no addressing into the neighbouring vector space. All communication is via the shared faces (there are some exceptions, but they are not relevant to 2nd order systems).

So the processor boundary is not in fact a boundary in the normal sense; it is more like a matrix manipulator. It does, however, supply surface-normal gradients for Laplacians. See gaussLaplacianScheme.C and processorFvPatchField.C.

If you want to go into more detail, you will need to dig further into the code.
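
To make this concrete with the 8-cell example from earlier in the thread (a sketch of the algebra only, read off the printed matrices, not a description of the exact code path): ordering the cells processor by processor, the single-domain matrix has the block form

A = [ B + 2I   -2I
       -2I    B + 2I ]

so the block row belonging to processor 1 reads (B + 2I) u_1 - 2 u_2 = b_1. The printed local matrix is B; the extra diagonal contribution from the processor faces and the product of the -2 coupling coefficients with the latest values of u_2 received from the other processor are supplied through the processor interface during the solution rather than stored in the local matrix, which is the "composite coefficient times neighbour field" mechanism described above.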

Old   November 10, 2011, 04:18
Default
  #9
New Member
 
Join Date: Jun 2010
Location: Germany
Posts: 13
Rep Power: 15
marango is on a distinguished road
Thanks, eugene, for your explanation. But I didn't quite understand what you mean in your second paragraph, since I'm probably not familiar enough with the code.


So, could you please explain it to me once again (and step by step)?

Old   November 10, 2011, 07:13
Default
  #10
Senior Member
 
Eugene de Villiers
Join Date: Mar 2009
Posts: 725
Rep Power: 21
eugene is on a distinguished road
I will try, but I didn't write this stuff and it's like a zillion interconnected files and classes.

1. When you construct a new equation/matrix, all the fvc operations have components which automatically deal with processor boundaries in the correct way, i.e. passing the field values from the cell on the other side to calculate face values and face gradients.
2. The matrix is constructed for only the cells in the current processor. Cells in adjacent processors do not have entries in the matrix. Instead they are accounted for by "interfaces".
3. Cells on the other side of the interfaces have exactly the same effect on the matrix entries as a physical cell, but they do not have a matrix entry themselves.
4. Interfaces are updated (and update the matrix components) during the inner loop of the solvers (the exact granularity with respect to smoothing and preconditioning depends on the individual solver). Check for example the usage of "updateMatrixInterfaces" in GaussSeidelSmoother.C
5. The interface achieves this by providing the correct coefficients and current value of the cell on the other side of the interface via MPI message passing. All the solvers have direct access to the interfaces so can in theory get hold of any information on the other side any time they need it. This all descends into lduMatrixATmul and other bits of the solver guts where my expertise stops.

I suggest you start from something like PBiCG.C, which is one of the simplest solvers. You can then go on to lduMatrixATmul (Amul & Tmul) and follow the interface chain from there to gain a better understanding of the fine-grained workings (a toy sketch of steps 2-5 follows below). I have no idea how the linear solver internals are supposed to work, so I can't help you here.
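
To illustrate steps 2-5 with something runnable outside OpenFOAM, here is a toy sketch (plain C++, no MPI; the Interface struct and all names are made up for illustration and are not OpenFOAM classes). Each rank stores only its own rows, and the interface folds coefficient * neighbour-value products into the matrix-vector product on top of the purely local part, which is roughly the role updateMatrixInterfaces plays inside Amul with halo data that has arrived over MPI.

Code:
#include <cstddef>
#include <iostream>
#include <vector>

// One processor's view of the 8-cell Poisson example from this thread:
// 4 local cells plus an "interface" that knows the coupling coefficient and
// the latest value of the cell on the other processor (halo data; in a real
// run this would arrive via MPI).
struct Interface
{
    std::vector<int>    localCells;  // local cells that touch the processor patch
    std::vector<double> coeffs;      // coupling coefficient per interface face
    std::vector<double> nbrValues;   // latest neighbour values from the other rank
};

// y = (local matrix)*x plus the interface contribution, i.e. the
// coefficient*neighbourValue products that stand in for the matrix columns
// belonging to cells owned by the other processor.
std::vector<double> Amul
(
    const std::vector<std::vector<double>>& localA,
    const Interface& iface,
    const std::vector<double>& x
)
{
    std::vector<double> y(x.size(), 0.0);

    for (std::size_t i = 0; i < localA.size(); ++i)
        for (std::size_t j = 0; j < localA[i].size(); ++j)
            y[i] += localA[i][j]*x[j];

    // the "updateMatrixInterfaces" step of the toy model
    for (std::size_t f = 0; f < iface.localCells.size(); ++f)
        y[iface.localCells[f]] += iface.coeffs[f]*iface.nbrValues[f];

    return y;
}

int main()
{
    // Local rows of A for processor 1. The processor-face contribution is
    // already folded into the diagonal here (6 rather than the printed 4);
    // only the coupling to the other processor's cells goes via the interface.
    std::vector<std::vector<double>> localA =
    {
        { 6, -2, -2,  0},
        {-2,  6,  0, -2},
        {-2,  0,  6, -2},
        { 0, -2, -2,  6}
    };

    // Each local cell has exactly one neighbour across the processor patch,
    // coupled with coefficient -2; pretend the neighbour field is all ones.
    Interface iface{{0, 1, 2, 3}, {-2, -2, -2, -2}, {1.0, 1.0, 1.0, 1.0}};

    const std::vector<double> x = {1.0, 1.0, 1.0, 1.0};

    for (double yi : Amul(localA, iface, x))
    {
        std::cout << yi << '\n';  // all zeros: a constant field has zero Laplacian
    }
}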
