April 2, 2012, 10:19
for loop, not working in parallel

#1
Member
Hello everyone,
Maybe this is not a new question, but for some reason this piece of code does not work in parallel, only in serial. Any advice? Code:
for (label faceI = 0; faceI < mesh.nInternalFaces(); faceI++) // internal faces
{
    if (mag(M_R[faceI]) >= scalar(1.0)) // right face values
    {
        M_4_neg[faceI] = M_1_neg[faceI]; // right side face function
        P_5_neg[faceI] = M_1_neg[faceI]/M_R[faceI];
    }

    if (mag(M_L[faceI]) >= scalar(1.0)) // left face values
    {
        M_4_pos[faceI] = M_1_pos[faceI]; // left side face functions
        P_5_pos[faceI] = M_1_pos[faceI]/M_L[faceI];
    }
}
April 3, 2012, 05:20

#2
Member
This is Bernhard's answer to my email:
> Anyway: one of your problems might be that faces that were internal in
> the serial case are now boundary faces (on a processor patch). From
> what I see in your code you don't treat them at all. You'll have to do
> that treatment separately for these patches.

Thanks Bernhard
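To see what Bernhard means, here is a minimal diagnostic sketch (not from the thread; it assumes the usual solver includes plus processorPolyPatch.H) that prints, on each processor, which patches are processor patches and how many faces they carry, i.e. the faces that were internal faces before decomposePar: Code:
forAll(mesh.boundaryMesh(), patchI)
{
    const polyPatch& pp = mesh.boundaryMesh()[patchI];

    // processor patches only exist on a decomposed mesh; their faces
    // were internal faces of the serial mesh
    if (isA<processorPolyPatch>(pp))
    {
        Pout<< "Processor patch " << pp.name()
            << " with " << pp.size() << " faces" << endl;
    }
}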
April 3, 2012, 08:32

#3
Member
So I have added the boundaryField, but it is still not working.
Any ideas? Code:
for (label faceI = 0; faceI < mesh.nInternalFaces(); faceI++) // internal faces
{
    if (mag(M_R[faceI]) >= scalar(1.0)) // right face values (for ox)
    {
        M_4_neg[faceI] = M_1_neg[faceI]; // right side face function
        P_5_neg[faceI] = M_1_neg[faceI]/M_R[faceI];
    }

    if (mag(M_L[faceI]) >= scalar(1.0)) // left face values (for ox)
    {
        M_4_pos[faceI] = M_1_pos[faceI]; // left side face functions
        P_5_pos[faceI] = M_1_pos[faceI]/M_L[faceI];
    }
}

forAll(mesh.boundaryMesh(), patchi)
{
    if (mag(M_R[patchi]) >= scalar(1.0)) // right face values (for ox)
    {
        M_4_neg[patchi] = M_1_neg[patchi]; // right side face function
        P_5_neg[patchi] = M_1_neg[patchi]/M_R[patchi];
    }

    if (mag(M_L[patchi]) >= scalar(1.0)) // left face values (for ox)
    {
        M_4_pos[patchi] = M_1_pos[patchi]; // left side face functions
        P_5_pos[patchi] = M_1_pos[patchi]/M_L[patchi];
    }
}
April 3, 2012, 14:42

#4
Assistant Moderator
Bernhard Gschaider
Join Date: Mar 2009
Posts: 4,225
a) you'll have to loop over the individual faces in each patch (just like you did for the internal field)
b) you'll want to be sure that you do this only on processor patches. Use "isA" to find out.

Have a look at $FOAM_SRC/finiteVolume/lnInclude/adjustPhi.C for inspiration.
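A rough sketch of what that could look like for the M_R side of the fields (my interpretation, not Bernhard's code; it assumes M_R, M_1_neg, M_4_neg and P_5_neg are surfaceScalarFields and that processorFvPatch.H is included; on recent OpenFOAM versions the writable patch fields come from boundaryFieldRef() rather than the non-const boundaryField()): Code:
forAll(mesh.boundary(), patchI)
{
    // b) restrict the treatment to processor patches
    if (isA<processorFvPatch>(mesh.boundary()[patchI]))
    {
        const fvsPatchScalarField& MRp = M_R.boundaryField()[patchI];
        const fvsPatchScalarField& M1negp = M_1_neg.boundaryField()[patchI];

        fvsPatchScalarField& M4negp = M_4_neg.boundaryField()[patchI];
        fvsPatchScalarField& P5negp = P_5_neg.boundaryField()[patchI];

        // a) loop over the individual faces of this patch
        forAll(MRp, faceI)
        {
            if (mag(MRp[faceI]) >= scalar(1.0))
            {
                M4negp[faceI] = M1negp[faceI];
                P5negp[faceI] = M1negp[faceI]/MRp[faceI];
            }
        }

        // ... and the same treatment for M_L, M_1_pos, M_4_pos, P_5_pos
    }
}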
April 3, 2012, 17:45

#5
Member
I was doing this the wrong way, I see that now. But I can use a different and ingenious approach (without loops) for this particular case (see below).
Once more, thanks Bernhard, you were very helpful, and I think I will need to use your suggestion for another loop problem. Following the solution of Alberto Passalacqua, the best way in this case is to forget about the loops and treat everything as a surface scalar.
It was necessary to use Foam::pos(magMR - 1.0), don't know why. Many thanks Alberto and Bernhard, you guys are the best. Carlos
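For reference, a sketch of how that loop-free version could look (my reading of the approach, since Alberto's original post is not quoted above; it assumes all the M_* and P_* variables are dimensionless surfaceScalarFields, and that pos(x) returns 1 for x >= 0 as in the OpenFOAM versions of that time, where recent versions would use pos0): Code:
// per-face switches, evaluated on internal and processor patch faces alike,
// so no special treatment is needed for parallel runs
const surfaceScalarField magMR("magMR", mag(M_R));
const surfaceScalarField magML("magML", mag(M_L));

M_4_neg = pos(magMR - 1.0)*M_1_neg + neg(magMR - 1.0)*M_4_neg;
P_5_neg = pos(magMR - 1.0)*M_1_neg/M_R + neg(magMR - 1.0)*P_5_neg;

M_4_pos = pos(magML - 1.0)*M_1_pos + neg(magML - 1.0)*M_4_pos;
P_5_pos = pos(magML - 1.0)*M_1_pos/M_L + neg(magML - 1.0)*P_5_pos;

// note: unlike the if-guarded loop, the divisions by M_R and M_L are now
// evaluated on every face, so those fields must be kept away from zero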