Inconsistent behaviour of gMax/gMin for parallel runs
This seems to be the case for almost any OF-Version I know of:
Assume that you have a scalarField foo whose values are in the range [7,13]. In anticipation of later parallel runs you fetch the minimum of that field using gMin. The correct result of gMin(foo) is of course 7.

Now assume that the case runs in parallel. The data in foo is the same, but now it is distributed, and on some processors the local foo has length 0. On these processors the local min is computed using (I quote) Code:

template<class Type>

What I would propose is to add a member huge to pTraits and change the default of the template to Code:

return pTraits<Type>::huge;

The only problem that I see with this solution is that there might be code somewhere that relies on min(foo)==0 for foo.size()==0. |
I had second thoughts about a detail here:
Hi Bernhard,
this is fixed in 1.6, which indeed uses min, zero, one and max from pTraits. Thanks, Mattijs |
Quote:
Just one question: will this fix ever be coming to 1.5.x? Or let me rephrase the question in more general terms: I noticed that a git repository for 1.6.x is already available. Does this mean that 1.5.x is "closed" (no more fixes will be posted there), or will there still be fixes there for some time? Either way is fine with me; it'd just be good to know what your policy is. Bernhard |
We will concentrate on 1.6.x but if there are urgent problems with 1.5.x we will apply bug fixes as required.
The change to gMax and gMin required changes in several places in the code, and given that the problem does not manifest itself often, we are not planning to make this change in 1.5.x unless there is big demand to do so. Henry |
And BTW: Thanks for the new release. |