September 20, 2007, 15:57
Join Date: Mar 2009
Location: Ottawa, Ontario, Canada
Posts: 37
I have run into a problem in a number of my custom solvers that has been a minor inconvenience at my current grid size, but will become a major issue at my final target simulation size.
As part of my solver, I calculate a number of temporary variables and take the gradient of those variables during each time step. When run on a single machine, everything works great. However, when run in parallel (be it on 2 processors or 16), the solution becomes unrealistic or diverges. From looking at the data, it appears that the processors are not communicating the values of these temporary variables to one another for the gradient/divergence/laplacian operations near the edge of each processor's segment of the grid.
When a case is decomposed for parallel running, is it only those fields defined using the IOobject structure (as is commonly done in the createFields.H file) that have their values communicated across processor boundaries?
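For context, this is the kind of construction the question refers to, sketched in OpenFOAM style (not compilable on its own; it assumes a solver scope where `mesh`, `runTime`, and `U` exist, and the names `tmpVar` and `gradTmp` are illustrative, not from the post). Declaring the temporary as a registered volScalarField gives it processor-patch boundary fields, and refreshing those before the gradient is one common precaution:

```cpp
// Sketch only: a temporary field built as a registered GeometricField,
// rather than a bare Field/tmp with no processor patches.
volScalarField tmpVar
(
    IOobject
    (
        "tmpVar",
        runTime.timeName(),
        mesh,
        IOobject::NO_READ,
        IOobject::NO_WRITE
    ),
    0.5*magSqr(U)     // illustrative expression from registered fields
);

// Ensure processor patches hold up-to-date neighbour values before any
// grad/div/laplacian that reaches across the decomposition boundary:
tmpVar.correctBoundaryConditions();

volVectorField gradTmp(fvc::grad(tmpVar));
```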
If someone has an alternative explanation of what might be causing this issue, or of what precautions I need to take to avoid it, I would greatly appreciate hearing from you.