How to handle fields in parallel computation

May 6, 2011, 16:23   #1
Kai (kaifu), Member

I have some problems creating fields in parallel OpenFOAM. For example, if I create an fvPatch in createField.H, it seems that I can only create the patch on part of the mesh. The code looks like this:
Code:
    //find the patch:
    label patchWall=mesh.boundaryMesh().findPatchID("walls");
    const fvPatch &thePatchItselfWall=mesh.boundary()[patchWall];
  
    //loop over the patch:
    forAll(thePatchItselfWall,faceI)
    {
        label cellI=thePatchItselfWall.faceCells()[faceI];
        Info<<"label="<<cellI<<endl;        
        SfPerVol[cellI]=mag(mesh.Sf().boundaryField()[patchWall][faceI])/mesh.V()[cellI];
    }
The serial calculation shows:
Quote:
label=19
label=39
label=59
...
label=2999
But the parallel one shows:
Quote:
label=19
label=39
label=59
...
label=739
which is about 1/4 of the labels I get from the serial run (there are 4 procs).

Obviously I can only access part of the mesh. So how can I create the fvPatch on the other 3 procs? And if that is done, how can I check that I have accessed the other parts of the mesh? Does the output "Info<<cellI<<endl;" work for that? Thanks.

// Kai

May 7, 2011, 13:11   #2
David Gaden (marupio), Senior Member

That's the way parallel works. Each instance uses only 1/4 of the mesh, and solves everything as if that is the entire mesh, except at the interface between the meshes. Information passes between the interfaces using PStream.
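
For reference, here is a minimal sketch (not from the original post) of combining information across the processor pieces. It assumes it sits inside a solver that includes fvCFD.H (so mesh, reduce and gSum are available) and reuses the "walls" patch name from the first post:

Code:
    // Each processor sees only the cells and patch faces of its own sub-mesh.
    // Global quantities therefore have to be reduced across processors.
    label patchWall = mesh.boundaryMesh().findPatchID("walls");
    const fvPatch& wallPatch = mesh.boundary()[patchWall];

    // Local number of wall faces on this processor
    label nWallFaces = wallPatch.size();

    // Sum the local counts over all processors (uses Pstream underneath)
    reduce(nWallFaces, sumOp<label>());

    // gSum performs the same kind of global reduction for fields
    scalar totalWallArea = gSum(mag(wallPatch.Sf()));

    Info<< "Global number of wall faces: " << nWallFaces << nl
        << "Global wall area: " << totalWallArea << endl;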

August 18, 2011, 15:30   #3
Kim Yusik (impecca), Member

I am raising the same question about field handling in a parallel computation. In my case, I haven't faced any problems with patch handling in parallel, so I do not know why Kai has that problem. What I have done is pretty much the same:

Code:
    label inletPatchID = mesh.boundaryMesh().findPatchID("inlet");
    const fvPatchVectorField& refField = U.boundaryField()[inletPatchID];

    forAll(refField, celli)   // loops over the faces of the inlet patch
    {
        // any command
    }

celli ran over the same number of faces as the 'inlet' patch has, in both single and parallel computations. But when I try to manipulate the field within the domain (not a patch on the boundary), I notice that parallel and single computations show a different number of cells in the field. The code I use:


Code:
    volVectorField U_ = U;

    label i = 0;
    forAll(U_, celli)   // loops over the cells of the local internal field
    {
        i++;
    }
    Info<< "i = " << i << nl;


With a single processor, 'i' shows the same number of cells as the domain contains:
i = 336000

With 4 processors, 'i' shows a smaller number than it should:
i = 84332
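
For reference, a minimal sketch (not from the original post) of how the per-processor count relates to the global one; it assumes it runs inside the solver, where U and the mesh are available:

Code:
    // U.size() is the number of cells on this processor only.
    // returnReduce sums the local counts over all processors.
    label nLocalCells  = U.size();
    label nGlobalCells = returnReduce(nLocalCells, sumOp<label>());

    Pout<< "Cells on this processor: " << nLocalCells << endl;
    Info<< "Cells in the whole domain: " << nGlobalCells << endl;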

To sum up, could anyone give a hint on field manipulation with a parallel computation?
Thanks in advance.

Yusik

August 21, 2011, 04:14   #4
Alberto Passalacqua (alberto), Senior Member

It really depends on the manipulation you are trying to do. In general, as shown in many examples:

1. Try to avoid loops. In many cases you can avoid them by re-thinking the operations you have to do. For example, if you have to define a field using a piece-wise function, you can use the "pos" and "neg" functions instead of if and forAll (a fuller sketch follows after point 2).

Example:

v = a if x < b
v = c if x > b

can be coded as

volScalarField v = neg(x - b)*a + pos(x-b)*c;

since

pos(x) = 1 if x is positive, and 0 if it's negative
neg(x) = 1 if x is negative, and 0 if it's positive

2. If you loop over cells and modify the cell values of a field directly, remember that you most likely have to correct the BCs for that field after the manipulation.
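
As a fuller illustration of point 1, here is a minimal sketch (not part of the original post) that builds such a piece-wise field from the cell-centre x-coordinate; it assumes it sits inside a solver where mesh is available, and x0, vLow and vHigh are made-up example values:

Code:
    // Piece-wise field without an explicit loop:
    // v = vLow where x < x0, v = vHigh where x > x0.
    const volScalarField x(mesh.C().component(vector::X));

    const dimensionedScalar x0("x0", dimLength, 0.19);   // made-up threshold
    const scalar vLow  = 1.0;                            // made-up value
    const scalar vHigh = 2.0;                            // made-up value

    volScalarField v = neg(x - x0)*vLow + pos(x - x0)*vHigh;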

Best,
__________________
Alberto Passalacqua

GeekoCFD - A free distribution based on openSUSE 64 bit with CFD tools, including OpenFOAM. Available in both physical and virtual formats (current status: http://albertopassalacqua.com/?p=1541)
OpenQBMM - An open-source implementation of quadrature-based moment methods.

To obtain more accurate answers, please specify the version of OpenFOAM you are using.

August 21, 2011, 09:21   #5
Kim Yusik (impecca), Member

Thanks Alberto, I've managed to manipulate the field. The code I used (a bit simplified):

Code:
    forAll(cells, celli)
    {
        if (mag(0.19 - xc[celli]) < 0.02)
        {
            // indices into the N x M virtual grid of fluctuations
            label jj = label(yc[celli]/dy);
            label kk = label(zc[celli]/dz);

            U_[celli] =
                Um_[jj][kk]*vector(1, 0, 0)
              + vector
                (
                    u2_[jj][kk],
                    v2_[jj][kk],
                    w2_[jj][kk]
                );
        }
    }

This is based on pisoFoam; the application is a channel flow.
The intention was to impose artificial fluctuations (u2_[jj][kk], v2_[jj][kk], w2_[jj][kk]) at a certain downstream distance from the inlet (here x = 0.19). xc[celli], yc[celli] and zc[celli] are pre-defined cell-centre positions in the domain. The mean velocity (Um_[jj][kk]) and the artificial fluctuations are generated on an N by M virtual grid, where 0 < jj < N and 0 < kk < M. dy and dz are the uniform grid spacings of the virtual grid.

The previous problem was that I had pre-defined the plane where the fluctuations are imposed (saving the cell IDs on that specific plane) and tried to call these saved cell IDs at each time step. I suspect that these pre-saved cell IDs are not in the same order between single and parallel computations, which is why it didn't work out with the parallel computations.
I am not sure whether I have made myself clear, but thank you very much for your great comments.

Yusik

August 30, 2011, 09:01   #6
Kim Yusik (impecca), Member

Hi, Alberto.

Could you tell me more about the 'correcting boundary conditions' part of your reply? Can you give me a simple example of the required BC corrections following a field manipulation?
In my case, I change cell-centre values on one plane in the domain. I do not see a reason for correcting the BCs in this case. Please correct me if I am wrong.

Regards
Yusik

August 30, 2011, 10:52   #7
Alberto Passalacqua (alberto), Senior Member

The reason to update boundary conditions is that the domain decomposition in OpenFOAM considers the interface between different processors as a boundary condition (the processor patch).

If you directly access a field (for example in a forAll loop) to change its values, the processor patches won't be updated automatically. As a consequence, you may want to call correctBoundaryConditions() on that field, as in the sketch below.
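
A minimal sketch of that pattern (not from the original post), assuming a volScalarField named T created in createFields.H; the field name and the bound are made-up examples:

Code:
    const scalar someLowerBound = 0.0;   // made-up example value

    // Direct manipulation of the internal (cell-centre) values only:
    forAll(T, celli)
    {
        T[celli] = max(T[celli], someLowerBound);
    }

    // Re-evaluate the boundary field, including the processor patches,
    // so that neighbouring processors see the modified cell values.
    T.correctBoundaryConditions();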

Best,

March 20, 2013, 14:34   #8
mikeP, Member

What if a volScalarField is scaled with the cmptMultiply function? Is correctBoundaryConditions() still necessary?

Also, does the cmptMultiply function update both internalField() and boundaryField()?

March 20, 2013, 15:47   #9
Niklas Nordin (niklas), Super Moderator

Just a note...

If you are running a parallel case and using the Info stream to output information, you need to remember that it only writes on the master processor.

Instead of Info, try using Sout if you want the information printed on every processor.
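
For illustration, a small sketch (not from the original post) of the different streams; Pout, which prefixes each line with the processor number, is another per-processor option, and the snippet assumes it runs inside a solver where mesh is available:

Code:
    // Info writes only on the master processor.
    Info<< "Printed once, by the master processor" << endl;

    // Sout writes on every processor, without a prefix.
    Sout<< "Printed by every processor" << endl;

    // Pout writes on every processor and prefixes each line with the processor number.
    Pout<< "Local cell count: " << mesh.nCells() << endl;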

June 26, 2013, 12:29   #10
Roberto Ribeiro (rribeiro), New Member

Quote:
Originally Posted by marupio View Post
That's the way parallel works. Each instance uses only 1/4 of the mesh, and solves everything as if that is the entire mesh, except at the interface between the meshes. Information passes between the interfaces using PStream.
marupio,

could you explain this in a little more detail? For instance, what is the impact of parallel execution on the chosen equation solver?

Thanks

June 26, 2013, 12:37   #11
David Gaden (marupio), Senior Member

chegdan describes the dissection of the matrix fairly well here:

http://www.cfd-online.com/Forums/ope...tml#post316988
__________________
~~~
Follow me on twitter @DavidGaden
