|
December 10, 2020, 13:50
Looping on Boundary faces all processors
|
#1
Member
Join Date: Aug 2017
Posts: 32
Rep Power: 8
Howdy y'all, I'm having some issues with looping over boundary faces in a simple test case using 32 processors.
I have a volScalarField called Test_Q. When I try to modify the values on every cell (not the boundary yet) in the field, I use the following code snippet. I call this code inside if(runTime.outputTime()), and I only run for 100 timesteps with I/O every 10 timesteps, to keep the test case simple. Code:
forAll(Test_Q, celli)
{
    Info<< "CELLID IS " << celli << " CELL VALUE WAS " << Test_Q[celli] << "\n" << endl;
    Test_Q[celli] += .001;
    Info<< "CELLID IS " << celli << " CELL VALUE IS " << Test_Q[celli] << "\n" << endl;
}

Using ParaView to inspect the result I can confirm it is working as intended (the log file writes also show this). When I try to execute a similar operation on a boundary, constrained to the boundary patch named "obstacle", I use the following code: Code:
label patchID = mesh.boundaryMesh().findPatchID("obstacle");

forAll(Test_Q.boundaryField()[patchID], facei)
{
    Info<< "FaceID IS " << facei << " Boundary Value Was " << Test_Q.boundaryField()[patchID][facei] << "\n" << endl;
    Test_Q.boundaryFieldRef()[patchID][facei] += .001;
    Info<< "FaceID IS " << facei << " Boundary Value Is " << Test_Q.boundaryField()[patchID][facei] << "\n" << endl;
}

My initial runs are decomposed onto 32 cores, and the decomposition from ParaView is shown in the picture 32_decomposed.jpg. The boundary loop never appears to get called, as it is likely only looping on proc0, which has no "obstacle" patch faces on it. When I run serially the code works and the target boundary increases in value by .001 per I/O step (see Serial_functioning.PNG). Can anybody tell me what I am doing wrong? The modifications are made in the solver, which is recompiled each time. My thought was that running the solver in parallel would just "make forAll function in parallel". Naïve, right?

EDIT** I should note that I did use the search function and found the thread "Looping over a volScalarField in a parallel run". I am more or less looking for an explanation / discussion of the use of reduce, mesh.C(), and the OpenFOAM parallel paradigm.

Last edited by siefer92; December 10, 2020 at 13:59. Reason: clarification
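As a side note, here is a minimal diagnostic sketch (reusing the mesh and patch name from the snippet above, not code taken from the thread) that reports how many "obstacle" faces each processor actually owns; returnReduce sums the per-processor counts into a global total. Code:

// Diagnostic sketch: how are the "obstacle" faces distributed over the processors?
// A processor that owns none of them simply skips any forAll loop over that patch.
label patchID = mesh.boundaryMesh().findPatchID("obstacle");
label nLocalFaces = (patchID >= 0 ? mesh.boundaryMesh()[patchID].size() : 0);

// Pout writes from every processor, each line prefixed with its rank
Pout<< "obstacle faces on this processor: " << nLocalFaces << endl;

// Sum over all processors; Info prints only on the master
label nGlobalFaces = returnReduce(nLocalFaces, sumOp<label>());
Info<< "obstacle faces over all processors: " << nGlobalFaces << endl;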
|
December 11, 2020, 10:09 |
|
#2
Senior Member
Mark Olesen
Join Date: Mar 2009
Location: https://olesenm.github.io/
Posts: 1,679
Rep Power: 40
It would be helpful if you explained what you want to accomplish. Are you trying to bump the level of the entire field, but without a += operation? Do you really want to change the values on all boundaries (cyclic, processor, slip, etc.)? Do you need to change each face and cell individually, or can you work with field values?
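As an illustration of that last point (a sketch only, reusing the Test_Q field and patch name from post #1, not code from this reply), the whole patch can be changed with a single field operation instead of a per-face loop: Code:

// Field-level sketch: bump every face value on the "obstacle" patch at once
label patchID = mesh.boundaryMesh().findPatchID("obstacle");
Test_Q.boundaryFieldRef()[patchID] += 0.001;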
|
|
December 11, 2020, 13:56 |
|
#3
Member
Join Date: Aug 2017
Posts: 32
Rep Power: 8
So I am integrating OpenFOAM's topology modification capabilities into a solver that I am assembling.
The volScalarField Test_Q is a fill-in for what will eventually be erosion values predicted by a model. The Test_Q field increases by .001 (think of this as a prescribed erosion rate for now) at each I/O step, for debugging purposes, so I can distinguish the I/O steps in ParaView. I only run 100 timesteps with I/O every 10 timesteps for a quick turnaround. I should note that I have code that functions as intended, but I'd like to truly understand my code. Code:
forAll(Test_Q.boundaryField()[patchID], facei)
{
    Info<< "FaceID IS " << facei << " Boundary Value Was " << Test_Q.boundaryField()[patchID][facei] << "\n" << endl;
    Test_Q.boundaryFieldRef()[patchID][facei] += .001;
}

// Here the erosion is mapped onto the surface normal vector components (need to divide erosion by patch area)
forAll(mesh.C().boundaryField()[patchID], facei)
{
    Test_Q_Vect.boundaryFieldRef()[patchID][facei].component(0) =
        Test_Q.boundaryFieldRef()[patchID][facei]
       *(mesh.Sf().boundaryField()[patchID][facei].component(0)/mesh.magSf().boundaryField()[patchID][facei]);

    Test_Q_Vect.boundaryFieldRef()[patchID][facei].component(1) =
        Test_Q.boundaryFieldRef()[patchID][facei]
       *(mesh.Sf().boundaryField()[patchID][facei].component(1)/mesh.magSf().boundaryField()[patchID][facei]);
}

pointInterpolation.interpolate(Test_Q_Vect, pointDU);

const vectorField& pointDUI = pointDU.internalField();

vectorField newPoints = mesh.points();

forAll(pointDUI, pointI)
{
    newPoints[pointI] += pointDUI[pointI];
}

In the code snippet I provided there are 3 loops.
My confusion comes with loops 1 & 2, in identifying what would be considered "best practice". What is the difference between looping with: Code:
forAll(Test_Q.boundaryField()[patchID], facei)

vs.

forAll(mesh.C().boundaryField()[patchID], facei)

My problem is that I don't wholly understand the most effective way of coding within OpenFOAM, and I'd like for this to change.
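For what it's worth, a short sketch of why the two loop headers behave the same (the forAll expansion is taken from OpenFOAM's UList.H; the field-level rewrite of loops 1 and 2 below is only an illustration reusing the names from the snippet above, not the original solver code): Code:

// forAll(list, i) is just a macro over the list's size, roughly:
//     for (label i = 0; i < (list).size(); ++i)
// Test_Q.boundaryField()[patchID] and mesh.C().boundaryField()[patchID] both
// have one entry per face of the patch on this processor, so the two headers
// visit exactly the same facei range; only the list whose size() is queried differs.

// Field-level rewrite of loops 1 and 2 (sketch only; note that it also fills
// the z-component, which the original per-face loop left untouched):
Test_Q.boundaryFieldRef()[patchID] += 0.001;

// Unit normals on the patch
const vectorField nHat
(
    mesh.Sf().boundaryField()[patchID]/mesh.magSf().boundaryField()[patchID]
);

const vectorField patchErosion(Test_Q.boundaryField()[patchID]*nHat);
Test_Q_Vect.boundaryFieldRef()[patchID] = patchErosion;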
|
October 1, 2021, 15:30 |
|
#4
Member
Federico Zabaleta
Join Date: May 2016
Posts: 47
Rep Power: 9
The problem is not the loop; it's the "Info<<".
"Info<<" only shows the output from processor0 (the master). If you want to see the output from every processor, you should use "Pout<<".
|
Tags |
boundary face loop, forall parallel |