Boundary condition with mpirun
Hello everyone!
I just implemented Lund's recycled boundary condition (thank you Perry Johnson :D) in OpenFOAM, and it seems to work just fine on a single processor. However, when I run the simulation on several processors, I get a pretty uggly error (sigSegv, sigFpe, etc...). Is there anything I should be doing to parallelize my code? Basically, the algorithms starts with void Foam::scaledMappedVelocityFixedValueFvPatchField:: updateCoeffs() { if (updated()) { return; } // Since we're inside initEvaluate/evaluate there might be processor // comms underway. Change the tag we use. int oldTag = UPstream::msgType(); UPstream::msgType() = oldTag+1; // Get the mappedPatchBase const mappedPatchBase& mpp = refCast<const mappedPatchBase> ( scaledMappedVelocityFixedValueFvPatchField::patch( ).patch() ); const fvMesh& nbrMesh = refCast<const fvMesh>(mpp.sampleMesh()); const word& fieldName = dimensionedInternalField().name(); const volVectorField& nbrField = nbrMesh.lookupObject<volVectorField>(fieldName); after that, I extract a plane from the domain, scale it, and eventually return the inlet as follows: // return the velocity values to inlet condition operator==(scaledU); // Restore tag UPstream::msgType() = oldTag; fixedValueFvPatchVectorField::updateCoeffs(); } If someone had an idea, it would be really great! I don't really feel like running my LES on a single processor. :D Thanks! Joachim |