March 7, 2017, 07:25
avoid double counting for convective flux
#1
Member
Paul Zhang
Join Date: Feb 2011
Posts: 44
Rep Power: 15
I have a question about the flux computation as a result of mesh partitioning. Say two nodes i and j are connected by an edge, and the edge is cut between two processors, resulting in two ghost nodes i' and j':

i---j  ->  i---j' (for processor 1) and i'---j (for processor 2)

The convective flux associated with the original edge is now evaluated on both sides as follows:

/*--- Compute the residual ---*/
numerics->ComputeResidual(Res_Conv, Jacobian_i, Jacobian_j, config);

/*--- Update residual value ---*/
LinSysRes.AddBlock(iPoint, Res_Conv);
LinSysRes.SubtractBlock(jPoint, Res_Conv);

How do I avoid double counting when loading these values into the linear system?
March 11, 2017, 03:50
#2
Senior Member
Heather Kline
Join Date: Jun 2013
Posts: 309
Rep Power: 13
I'm not 100% sure this will answer it, but maybe it will help if I describe a little of what happens when the code is run in parallel.

Each partition has access to the nodes that it 'owns' as well as 'ghost' nodes at the interfaces, which are updated each time the processor communicates with the other processors. During the computation, it only has access to the nodes stored on that processor. This means that each processor will compute the flux using the ghost nodes at the interface, so other processors are also computing that same flux at the same time.

This is a cost of parallelism: there will always be some inefficiency introduced at the interfaces between partitions. An alternative would be to have one processor calculate the flux and then communicate it to the others. However, since communication is usually more time-consuming than computation (and since each processor would have to sit around waiting for another one to compute the flux it needs to proceed), it is often more efficient to accept some repeated calculations.
March 11, 2017, 21:53
#3
Member
Paul Zhang
Join Date: Feb 2011
Posts: 44
Rep Power: 15
Hi Heather,

Thank you very much for taking the time to clarify the question in detail. That makes a lot of sense to me.

Besides the efficiency issue, I was not quite sure how the linear system is constructed in SU2. Are we solving one huge global linear system Ax = b for all of the nodes indexed in the original mesh, so that everything is coupled and solved at the same time (i.e. there is only ONE linear system, whether in serial or parallel)? Or do we solve small local systems Ax = b, one per partition on each processor? In the second case we take advantage of parallelism by solving reduced-size linear systems, but a node adjacent to the halo may lose part of the Jacobian information contributed by the update of its real "neighbor", which is solved on another processor. I guess it is the second approach that SU2 uses. Is that right?

Again, your help is highly appreciated.

Paul