Problems running a customized solver in parallel
Hi All
I have been trying to run a newly developed solver in parallel, and it consistently fails with the same message indicating that one of the processes has broken down. I have checked my MPI installation, which is fine and runs other standard tutorials in parallel. The decomposeParDict file also looks fine, and the geometry is very simple (a square); I have also tried different decomposition schemes for splitting the geometry. Can somebody clarify where these errors actually come from? Note: I am running on a normal desktop (4 processes). Thanks in advance, Rohith Code:
Greetings Rohith,
I've moved your post from the other thread: http://www.cfd-online.com/Forums/ope...ntroldict.html - because it wasn't a similar problem :( The error you're getting means that something went wrong on the receiving end in one of the processes, because the message was truncated. That usually means either there wasn't enough memory available for the data transfer to be performed safely, or there was possibly an error in the network connection. Without more information about the customizations you've made, it's almost impossible to diagnose the problem. All I can say is that at work we had a similar problem some time ago: we weren't using "const &" references to keep a local handle on scalar fields; instead, we kept calling the method that calculated and returned the whole field, which was... well... bad programming, since it recalculated the whole field over the whole mesh just to give us one result for a single cell :rolleyes: Suggestions:
Best regards, Bruno
Same problem in a new solver based on simpleIbFoam
Hi,
I have a similar problem in a solver developed from "simpleIbFoam" in foam-extend-4.0. It happens only when the following term is added to UEqn.H: Code:
-fvc::div(mu*dev2GradUTranspose) Code:
volTensorField dev2GradUTranspose = dev2(fvc::grad(U)().T()); Code:
tmp<fvVectorMatrix> UEqn Code:
[b-cn0105:506766] *** An error occurred in MPI_Recv I've been stuck on this for a long time. Please see whether someone can help!! Thanks in advance. Thamali
Quick answer: my guess is that you should not evaluate "dev2" separately from the rest of the equation. In other words, "dev2GradUTranspose" should not be stored as its own field; you should instead code the term directly, like this:
Code:
tmp<fvVectorMatrix> UEqn