I have run into a problem in a number of my custom solvers that has been a minor inconvenience at my current grid size, but will become a major issue at my final target simulation size.
As part of my solver, I calculate a number of temporary variables and take the gradient of those variables during each time step. When run on a single machine, everything works correctly. However, when run in parallel (whether on 2 processors or 16), the solution becomes unrealistic and diverges. From inspecting the data, it appears that the processors are not communicating the values of these temporary variables to one another for the gradient/divergence/Laplacian operations near the edges of each processor's segment of the grid.
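For reference, a minimal sketch of the kind of temporary-field computation I mean is below. The field name `tmpVar` and the expression are illustrative, not my actual solver code; the point is that the field is constructed as a full registered GeometricField (so it carries processor boundary patches) and that its boundary values are refreshed before differentiating:

```cpp
// Illustrative only: a temporary field built as a full volScalarField,
// so it carries the mesh's processor patches for parallel runs.
volScalarField tmpVar
(
    IOobject
    (
        "tmpVar",
        runTime.timeName(),
        mesh,
        IOobject::NO_READ,
        IOobject::NO_WRITE
    ),
    0.5*magSqr(U)   // example expression; inherits boundary info from U
);

// If internal values are later overwritten cell-by-cell, the processor
// patch (halo) values must be refreshed before taking derivatives:
tmpVar.correctBoundaryConditions();

// Gradient taken each time step; this is where the divergence appears
// near processor boundaries in my runs.
volVectorField gradTmp(fvc::grad(tmpVar));
```

My suspicion is that temporary fields built without this full construction (e.g. as bare scalar fields without boundary patches) are exactly the ones that lose inter-processor communication.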
When running in parallel, is it only those fields defined through the IOobject mechanism (as is commonly done in createFields.H) that retain their ability to communicate values between processors?
If anyone has an alternative explanation of what might be causing this issue, or knows what precautions I need to take to avoid it, I would greatly appreciate hearing from you.