CFD Online (www.cfd-online.com) — Home > Forums > SU2

avoid double counting for convective flux


March 7, 2017, 08:25
avoid double counting for convective flux
  #1
Member
 
Paul Zhang
Join Date: Feb 2011
Posts: 41
I have a question about the flux computation that arises from mesh partitioning. Say two nodes i and j are connected by an edge, and that edge is cut by the partition boundary between two processors, resulting in two ghost nodes i' and j'.

i---j -> i---j'(for processor 1) and i'----j (for processor 2)

The convective flux associated with the original edge is now evaluated on both sides as follows:

///////////////////////////////////////////////////////////////////////
/*--- Compute the residual ---*/

numerics->ComputeResidual(Res_Conv, Jacobian_i, Jacobian_j, config);

/*--- Update residual value ---*/

LinSysRes.AddBlock(iPoint, Res_Conv);
LinSysRes.SubtractBlock(jPoint, Res_Conv);
///////////////////////////////////////////////////////////////////////

How is double counting avoided when these values are loaded into the linear system?

March 11, 2017, 04:50
  #2
hlk
Senior Member
 
Heather Kline
Join Date: Jun 2013
Posts: 267
Quote: Originally Posted by paulzhang View Post
Thanks for your question.
I'm not 100% sure this will answer it, but it may help if I describe a little of what happens when the code is run in parallel.
Each partition has access to the nodes it 'owns' as well as 'ghost' nodes at the partition interfaces, which are updated each time the processor communicates with the other processors. During the computation, each processor only has access to the nodes stored locally. This means that each processor computes the flux on a cut edge using the ghost node at the interface, so the neighboring processor is computing that same flux at the same time.

This is a cost of parallelism: there will always be some inefficiency introduced at the interfaces between partitions. An alternative would be to have one processor calculate the flux and then communicate it to the others. However, since communication is usually more time-consuming than computation (and since each processor would have to sit idle waiting for another one to compute the flux it needs in order to proceed), it is often more efficient to accept some repeated calculations.

March 11, 2017, 22:53
  #3
Member
 
Paul Zhang
Join Date: Feb 2011
Posts: 41
Hi Heather,

Thank you very much for taking the time to clarify the question in detail. That makes a lot of sense to me.

Besides the efficiency issue, I am not quite sure how the linear system is constructed in SU2. Are we solving one huge global system Ax = b in which all the nodes of the original mesh are coupled and solved at the same time (so there is only ONE linear system, whether we run in serial or parallel)? Or do we solve a small local Ax = b for each partition on each processor? In the second case we would take advantage of parallelism by solving reduced-size systems, but a node adjacent to the halo might lose part of the Jacobian information contributed by the update of its real neighbor, which is solved on another processor.

I guess it is the second approach that we use in SU2. Is that right?

Again, your help is highly appreciated.
Paul




Quote: Originally Posted by hlk View Post
