www.cfd-online.com
Home > Forums > Software User Forums > OpenFOAM > OpenFOAM Programming & Development

Strange behaviour with a custom utility while running in parallel


April 30, 2021, 10:43 | #1
Strange behaviour with a custom utility while running in parallel
SHUBHAM9595 (Member, MNM; Join Date: Aug 2017; Posts: 61)
Dear Foamers,


I'm trying to implement a pre-processing utility (customField), to be run in parallel, that initializes a volVectorField (H) within a 3D domain. The initialization involves integrating a piecewise function of the axial coordinate of the geometry, which uses the Bessel functions (Foam::j0 and Foam::j1). Sample code is shown below:

Code:
/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     |
    \\  /    A nd           | Copyright (C) 2021 OpenFOAM Foundation
     \\/     M anipulation  |
-------------------------------------------------------------------------------
License
    This file is part of OpenFOAM.

    OpenFOAM is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
    ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
    FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
    for more details.

    You should have received a copy of the GNU General Public License
    along with OpenFOAM.  If not, see <http://www.gnu.org/licenses/>.

Application
    customField

Description

\*---------------------------------------------------------------------------*/

#include "fvCFD.H"

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

int main(int argc, char *argv[])
{
    #include "setRootCase.H"
    #include "createTime.H"
    #include "createMesh.H"
    #include "readSolenoidDict.H"
    #include "createHField.H"

    // * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

    Info<< "\nStarting time loop\n" << endl;

    while (runTime.run())
    {
        runTime++;

        Info<< "Time = " << runTime.timeName() << nl << endl;

        const vectorField& CC = mesh.C();
        const scalarField x1 = CC.component(vector::X);  // x coordinates of the cell centres
        const scalarField y1 = CC.component(vector::Y);  // y coordinates of the cell centres
        const scalarField z1 = CC.component(vector::Z);  // z coordinates of the cell centres

        scalarField Hx(mesh.nCells(), 0.);  // x component of H in each cell
        scalarField Hy(mesh.nCells(), 0.);  // y component of H in each cell
        scalarField Hz(mesh.nCells(), 0.);  // z component of H in each cell

        scalarField GG1(mesh.nCells(), 0.);  // GG1 = Lcend - z in each cell
        scalarField GG2(mesh.nCells(), 0.);  // GG2 = Lcstart - z in each cell
        scalarField rho(mesh.nCells(), 0.);  // cylindrical radial coordinate r

        scalarField unity(mesh.nCells(), 1.0);  // dummy field for ease of computation over all cells
        scalarField Rcoil = 0.5*Ds.value()*unity;
        scalarField SHD = (Rcoil*Nt.value()*Ic.value())/(2*Ls.value());

        forAll(H, celli)
        {
            rho[celli] = Foam::sqrt(x1[celli]*x1[celli] + y1[celli]*y1[celli]);
            GG1[celli] = Lcend.value() - z1[celli];
            GG2[celli] = Lcstart.value() - z1[celli];

            if (z1[celli] <= Lcstart.value())
            {
                // z coordinate smaller than the LEFT bound: axial H
                for (int no = 1; no <= Naxout; no++)
                {
                    Hz[celli] +=
                        -SHD[celli]*Foam::j0(no*rho[celli])*Foam::j1(Rcoil[celli]*no)
                       *(Foam::exp(-no*GG1[celli]) - Foam::exp(-no*GG2[celli]));
                }
            }
            else if (z1[celli] > Lcend.value())
            {
                // z coordinate greater than the RIGHT bound: axial H
                for (int no = 1; no <= Naxout; no++)
                {
                    Hz[celli] +=
                        SHD[celli]*Foam::j0(no*rho[celli])*Foam::j1(Rcoil[celli]*no)
                       *(Foam::exp(no*GG1[celli]) - Foam::exp(no*GG2[celli]));
                }
            }
            else
            {
                // z coordinate between BOTH bounds: axial H
                for (int ni = 1; ni <= Naxin; ni++)
                {
                    Hz[celli] +=
                        -SHD[celli]*Foam::j0(ni*rho[celli])*Foam::j1(Rcoil[celli]*ni)
                       *(Foam::exp(-ni*GG1[celli]) + Foam::exp(ni*GG2[celli]) - 2.0);
                }
            }

            H[celli] = vector(Hx[celli], Hy[celli], Hz[celli]);
        }

        runTime.write();

        Info<< nl << "ExecutionTime = " << runTime.elapsedCpuTime() << " s"
            << "  ClockTime = " << runTime.elapsedClockTime() << " s"
            << nl << endl;
    }

    Info<< "End\n" << endl;

    return 0;
}


// ************************************************************************* //
The utility works perfectly in serial.

However, to reduce the time required for the integration, I'm trying to run it using mpirun.


When the number of integration terms is small (around 100), the utility works correctly and writes the generated field into the corresponding processor directories, from which I can correctly reconstruct the field for the whole domain.

But for higher values of Naxin and Naxout (around 10,000), "reconstructPar" throws a "No times selected" error.

The error is expected, since for higher Naxin and Naxout the utility writes no output in the corresponding processor directories.


It would be really helpful if someone could comment on what is going wrong here.



P.S.
Naxin and Naxout are scalar inputs provided through an IOdictionary, whereas Nt, Ic, Ds, Ls, Lcstart, and Lcend are provided as dimensionedScalars.
I'm using OpenFOAM 5.0
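For context, inputs of that sort are typically read from a dictionary along these lines. This is purely a hypothetical sketch (the actual readSolenoidDict.H and dictionary are not shown in the post; every name, dimension set, and value below is an assumption):

```
// Hypothetical constant/solenoidDict -- not the poster's actual file
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      solenoidDict;
}

Naxin       100;    // series terms between the coil bounds (plain scalar)
Naxout      100;    // series terms outside the coil bounds (plain scalar)

// dimensionedScalar entries, dimensions as [kg m s K kmol A cd]
Nt          Nt      [0 0 0 0 0 0 0] 500;    // number of turns
Ic          Ic      [0 0 0 0 0 1 0] 10;     // coil current
Ds          Ds      [0 1 0 0 0 0 0] 0.2;    // solenoid diameter
Ls          Ls      [0 1 0 0 0 0 0] 1.0;    // solenoid length
Lcstart     Lcstart [0 1 0 0 0 0 0] 0.25;   // axial start of the coil
Lcend       Lcend   [0 1 0 0 0 0 0] 0.75;   // axial end of the coil
```

In OpenFOAM 5.x such an entry can be read with, e.g., dimensionedScalar Ds(solenoidDict.lookup("Ds")); inside readSolenoidDict.H.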

May 3, 2021, 02:29 | #2
olesen (Senior Member, Mark Olesen; Join Date: Mar 2009; Location: https://olesenm.github.io/; Posts: 1,482)
Quote:
Originally Posted by SHUBHAM9595 View Post
and sample code is shown below

Code:
/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     |
    \\  /    A nd           | Copyright (C) 2021 OpenFOAM Foundation
     \\/     M anipulation  |
-------------------------------------------------------------------------------
License
    This file is part of OpenFOAM.
Since the code header states that it was written by or belongs to the OpenFOAM Foundation Ltd., why don't you ask them about it?

Tags
custom utility, openfoam, parallel
