I'm performing simulations with the coodles solver, running the application both in serial and in parallel (on a single machine with several cores).
I've observed that the solution obtained with the parallel run (mpirun) is slightly different from the one obtained running the application on a single processor (I'm observing the time history of pressure, temperature, etc. at some points). Obviously the two simulations start from identical conditions; the initial conditions for the parallel case were obtained by decomposing the serial solution at a certain time instant. Has anyone observed the same behaviour?
There are several possible explanations. If floatTransfer is on, you may sacrifice some accuracy due to round-off. Preconditioning of matrices does not always parallelize well; the IC preconditioner, for instance, relies on visiting matrix elements in an order that cannot be replicated in a parallel run. I'm not sure whether it makes sense for your case, but you could try diagonal preconditioning and see what happens: it does not have the same parallelization problem as IC.
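Both effects above come down to floating-point arithmetic not being associative: change the order of the operations (as a domain decomposition does) or drop precision in transit (as floatTransfer does) and the last bits of the result change. A minimal sketch in Python (not OpenFOAM code; the data here is just random numbers standing in for matrix/field values):

```python
# Illustration of why serial and parallel runs differ in the last digits.
import random
import struct

random.seed(0)
values = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

# Summing the same numbers in two different orders gives slightly
# different results, because floating-point addition is not associative.
serial_sum = sum(values)            # one traversal order (the "serial" run)
shuffled = values[:]
random.shuffle(shuffled)
parallel_like_sum = sum(shuffled)   # a different order (a "decomposed" run)
print(serial_sum - parallel_like_sum)   # tiny, but generally non-zero

# Round-tripping a double through single precision mimics the extra
# round-off a floatTransfer-style single-precision exchange introduces.
single = struct.unpack('f', struct.pack('f', serial_sum))[0]
print(serial_sum - single)
```

Neither difference indicates a bug; both are bounded by machine precision per operation, which is consistent with the observation that the serial and parallel pressure/temperature histories agree to several significant digits.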
Thank you for your answer, Martin. I think all the causes you proposed could apply to my case.
I've also observed that over long times the differences tend to disappear; the main source of the differences is probably the fieldDecomposition.