Home > Forums > OpenFOAM Programming & Development

modified code crashes in parallel

September 28, 2011, 12:06 | #1
modified code crashes in parallel
Aram Amouzandeh (mabinty) | Senior Member | Join Date: Mar 2009 | Location: Vienna, Austria | Posts: 184
Dear all!

I modified a solver (chtMRF) and it runs without problems on a single CPU. As soon as I run it in parallel (here on 4 CPUs), it crashes in the first time step (error message below).

What I did is introduce a source term in the energy equation of region1 (last line):

Code:
    tmp<fvScalarMatrix> TEqn
    (
        fvm::ddt(rho*cp, T)
      - fvm::laplacian(K, T)
      - fvc::div(fvc::interpolate(Qr)*mesh.magSf())
    );
where Qr is created as a volScalarField(IOobject()) in "region1" and initialized to zero. To update the boundary field of Qr from "region0" when a certain condition holds, I added the following before the energy equation is solved:

Code:
if(condition) 
{
    Qr.boundaryField()[patchID] == QrRegion0.boundaryField()[patchID];
}
The solver crashes the moment the energy equation of "region1" (the modified one) is solved. I therefore assume that Qr is not "fully" updated across all CPUs. I am not sure if that is the problem, but, e.g., for the sum of a scalar value there is the "reduce" function, which collects the value from all CPUs. Does somebody have an idea how I could avoid this problem?

I greatly appreciate your comments!!

Cheers,
Aram

#################################################

Quote:
Solving for solid region region1
[0] #0 Foam::error::printStack(Foam::Ostream&) in "/opt/openfoam171/lib/linux64GccDPOpt/libOpenFOAM.so"
[0] #1 Foam::sigFpe::sigFpeHandler(int) in "/opt/openfoam171/lib/linux64GccDPOpt/libOpenFOAM.so"
[0] #2 in "/lib/libc.so.6"
[0] #3 Foam::PCG::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const in "/opt/openfoam171/lib/linux64GccDPOpt/libOpenFOAM.so"
[0] #4 Foam::fvMatrix<double>::solve(Foam::dictionary const&) in "/opt/openfoam171/lib/linux64GccDPOpt/libfiniteVolume.so"
[0] #5
[0] in "/home/aa/OpenFOAM/aa-1.7.1/applications/bin/linux64GccDPOpt/yazdRadBC"
[0] #6 __libc_start_main in "/lib/libc.so.6"
[0] #7
[0] in "/home/aa/OpenFOAM/aa-1.7.1/applications/bin/linux64GccDPOpt/yazdRadBC"
[lws16:14262] *** Process received signal ***
[lws16:14262] Signal: Floating point exception (8)
[lws16:14262] Signal code: (-6)
[lws16:14262] Failing at address: 0x3e8000037b6
[lws16:14262] [ 0] /lib/libc.so.6(+0x33af0) [0x7fbe6783caf0]
[lws16:14262] [ 1] /lib/libc.so.6(gsignal+0x35) [0x7fbe6783ca75]
[lws16:14262] [ 2] /lib/libc.so.6(+0x33af0) [0x7fbe6783caf0]
[lws16:14262] [ 3] /opt/openfoam171/lib/linux64GccDPOpt/libOpenFOAM.so(_ZNK4Foam3PCG5solveERNS_5FieldIdEERKS2_h+0xe75) [0x7fbe687027d5]
[lws16:14262] [ 4] /opt/openfoam171/lib/linux64GccDPOpt/libfiniteVolume.so(_ZN4Foam8fvMatrixIdE5solveERKNS_10dictionaryE+0x14b) [0x7fbe6a8ebcfb]
[lws16:14262] [ 5] yazdRadBC() [0x4597f7]
[lws16:14262] [ 6] /lib/libc.so.6(__libc_start_main+0xfd) [0x7fbe67827c4d]
[lws16:14262] [ 7] yazdRadBC() [0x425549]
[lws16:14262] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 14262 on node lws16 exited on signal 8 (Floating point exception).
--------------------------------------------------------------------------

October 12, 2011, 09:47 | #2
Aram Amouzandeh (mabinty)
Dear all!

I found the solution to the problem mentioned above. I took the BC solidWallMixedTemperatureCoupled to see how boundary patches are handled in parallel, and voilà:

Code:
    // Patch-to-patch mapping between the two regions
    const directMappedPatchBase& mpp = refCast<const directMappedPatchBase>
    (
        T.boundaryField()[patchID].patch().patch()
    );

    // Force recalculation of mapping and schedule
    const mapDistribute& distMap = mpp.map();

    Qr.boundaryField()[patchID] == QrRegion0.boundaryField()[patchID];

    // Swap to obtain full local values of neighbour Qr
    mapDistribute::distribute
    (
        Pstream::defaultCommsType,
        distMap.schedule(),
        distMap.constructSize(),
        distMap.subMap(),           // what to send
        distMap.constructMap(),     // what to receive
        Qr.boundaryField()[patchID]
    );
I compared simulations on a single CPU with ones on 4 CPUs, and the results match well.

Cheers,
Aram

January 11, 2013, 06:15 | #3
Nicklas Linder (nlinder) | Member | Join Date: Jul 2012 | Location: Germany | Posts: 33
Hi Aram,

it has been a while, but it looks as if I am facing the same problem right now (see .boundary() in parallel).

I do not really get what you changed to make it work. Could you explain that in a little more detail?

Thanks in advance.
Nicklas


