CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (http://www.cfd-online.com/Forums/openfoam-solving/)
-   -   is double precision default of OF? (http://www.cfd-online.com/Forums/openfoam-solving/114760-double-precision-default.html)

 immortality March 17, 2013 09:17

Is double precision the default in OpenFOAM?

Thanks.

 haakon March 17, 2013 18:23

Yes, double precision is the default. You can change this at compile time by setting WM_PRECISION_OPTION=SP instead of WM_PRECISION_OPTION=DP in etc/bashrc before you compile. That said, it is not something I would recommend.
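A sketch of what that change looks like in practice (the paths assume a typical source install of OpenFOAM; treat the exact file locations as version-dependent):

```shell
# In the OpenFOAM environment file (e.g. $WM_PROJECT_DIR/etc/bashrc),
# change the precision option from double (DP) to single (SP):
#   export WM_PRECISION_OPTION=SP     # default is DP
# Then re-source the environment and rebuild:
source $WM_PROJECT_DIR/etc/bashrc
./Allwmake    # run from $WM_PROJECT_DIR to recompile the libraries
```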

 alberto March 18, 2013 02:13

One comment: the calculation is done in double precision by default, but results are typically saved to disk with a precision of only 6 significant digits. Use the binary format to preserve the full precision.
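For reference, this is controlled per case in system/controlDict (a sketch; the exact defaults may vary between OpenFOAM versions):

```
// system/controlDict (excerpt) -- option 1: binary output
writeFormat     binary;     // preserves the full double precision

// option 2: keep ASCII output but raise the digit count
// writeFormat     ascii;
// writePrecision  12;      // default is 6 significant digits
```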

Best,

 immortality March 18, 2013 05:44

Thank you, dear Haakon and Alberto.
Double precision is about 14 digits after the decimal point, isn't it?
When tolerances are set to, for example, 1e-20, does that have any effect on the accuracy of the calculations, since it is below the DP limit?
Thanks again.

 wyldckat March 18, 2013 18:56

Greetings to all!

Quote:
 Originally Posted by immortality (Post 414630) Thank you, dear Haakon and Alberto. Double precision is about 14 digits after the decimal point, isn't it? When tolerances are set to, for example, 1e-20, does that have any effect on the accuracy of the calculations, since it is below the DP limit? Thanks again.
It depends. Some extensive description is given here: http://en.wikipedia.org/wiki/Floatin...uracy_problems

In essence:
• If you add 1e-20 to 1.0, you still get 1.0.
• If you multiply 1.0 by 1e-20, you get 1e-20.
Usually you shouldn't have any problems, since double precision works between roughly 1e-308 and 1e+308. Problems only occur if your simulation domain has very small values in one corner, e.g. 1e-20, and very large values in the opposite corner, e.g. 1e20.
If I'm not mistaken, this is where the continuity indicators help figure out how good or bad the solution is... but that's all I can remember :(
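The two bullet points above are easy to check directly (a minimal Python sketch; any language with IEEE 754 doubles behaves the same way):

```python
import sys

# Adding a tiny number to 1.0 is absorbed: the result is exactly 1.0,
# because 1e-20 is far below the double-precision epsilon (~2.2e-16).
print(1.0 + 1e-20 == 1.0)    # True

# Multiplication has no such absorption: the exponents simply add.
print(1.0 * 1e-20 == 1e-20)  # True

# The representable range is huge, roughly 1e-308 to 1e+308:
print(sys.float_info.max)    # 1.7976931348623157e+308
print(sys.float_info.min)    # 2.2250738585072014e-308
```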

Best regards,
Bruno

 immortality March 21, 2013 13:30

Thanks, but then I wonder what its relation is to the machine zero, which is about 1e-17 and which the continuity residuals must reach. Is that a limit, even though DP can work with numbers much smaller than 1e-17?

 immortality March 21, 2013 13:36

I was curious about DP, but I didn't find it in the etc/bashrc file.

 wyldckat March 24, 2013 13:22

Quote:
 Originally Posted by immortality (Post 415546) Thanks, but then I wonder what its relation is to the machine zero, which is about 1e-17 and which the continuity residuals must reach. Is that a limit, even though DP can work with numbers much smaller than 1e-17?
Ah, that's probably due to EPS - http://en.wikipedia.org/wiki/Machine_epsilon - on the first table, you'll find this line:
Quote:
 binary64 (double precision, C++ double): base 2, 53 significand bits (one bit is implicit), machine epsilon 2^-53 = pow(2, -53) ≈ 1.11e-16
1.11e-16 is the eps for double precision.

Quote:
 Originally Posted by immortality (Post 415549) I was curious about DP, but I didn't find it in the etc/bashrc file.
:confused: https://github.com/OpenFOAM/OpenFOAM...etc/bashrc#L75 - line 75!?

edit: "eps" is actually the name of the command on MATLAB that gives us this machine precision... it looks like it's not an acronym but it actually stands for "epsilon" :rolleyes:

 haakon March 24, 2013 13:49

Quote:
 Originally Posted by immortality (Post 415546) thanks but I wonder then whats its relation to zero of machine thats about 1e-17 that must reached by continuity residuals?Is it a limit while DP can work with numbers very smaller than 1e-17?
Be aware that you can work with numbers much smaller than eps, as the maximum and minimum base-10 exponents are +308 and -308 respectively (double prec.). That means it is possible to represent numbers down to 2.2250738585072014e-308 (equal to 2^-1022) in double precision. It is also perfectly possible to do arithmetic with small numbers: for example, 1e-30 + 1e-30 equals (correctly) 2e-30. The trouble arises when the numbers differ greatly in magnitude. If you for example try to calculate the sum 1 + 1e-16, the result is still exactly 1.
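These examples can be verified directly (a minimal Python sketch):

```python
# Arithmetic between numbers of similar (tiny) magnitude is fine:
print(1e-30 + 1e-30 == 2e-30)   # True

# But a number below the rounding epsilon vanishes next to 1.0:
print(1.0 + 1e-16 == 1.0)       # True: 1e-16 is below 2**-53 relative to 1.0

# Well below the normal range (~2.2e-308) values become subnormal and
# eventually underflow to zero:
print(1e-320 > 0.0)             # True (still representable, subnormal)
print(1e-330)                   # 0.0
```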

 immortality March 29, 2013 02:04

Thank you both so much for the explanations.
But I still have a doubt about the relation between the machine eps (around 1e-16) and the DP range (which goes down to about 1e-308), or SP.

 immortality March 29, 2013 17:39

Haakon and Bruno, please complete your useful explanations by giving a comparison between the DP bound and the machine epsilon.
Why is the machine epsilon restricted to about 1e-17 when calculations can be done down to 1e-308 according to DP accuracy?

 wyldckat March 29, 2013 19:58

Quote:
 Originally Posted by immortality (Post 417250) Haakon and Bruno, please complete your useful explanations by giving a comparison between the DP bound and the machine epsilon. Why is the machine epsilon restricted to about 1e-17 when calculations can be done down to 1e-308 according to DP accuracy?
:confused: First I've got to ask the following questions, because I'm curious:
1. Is your internet access restricted in any way, in the sense that it does not allow you to access Wikipedia? Because your question is answered on the pages we've posted.
2. You've mentioned in the past that you're doing a master's degree and that you're using CFD for the thesis. Did you have a course on Computational Mathematics or something similar? Because this topic should have been addressed in that course...

Anyway, to try to answer your question, I'll go through a short bullet-point summary:
• Double precision usually follows the IEEE 754 standard, where 64 bits (8 bytes) pack numbers in the following format:
• 1 bit for the sign of the overall value.
• 11 bits for the exponent, which allows for values from roughly 1e-308 to 1e+308.
• 52 bits for the fraction, which allows for roughly 15 to 17 significant decimal digits.
• As for machine epsilon, I'll quote from the Wikipedia article:
Quote:
 machine epsilon is the maximum relative error of the chosen rounding procedure.
This equates to the error associated with changing only the last bit of the 52-bit fraction.
I hope this answers your question... because right now I'm almost falling asleep :(... it's been a long week...
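The 1 + 11 + 52 bit split can be inspected directly (a minimal Python sketch using the standard struct module; the helper name is just for illustration):

```python
import struct

def double_bits(x):
    """Return the (sign, exponent, fraction) fields of an IEEE 754 double."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63                  # 1 bit
    exponent = (bits >> 52) & 0x7FF    # 11 bits, biased by 1023
    fraction = bits & ((1 << 52) - 1)  # 52 bits
    return sign, exponent, fraction

# 1.0 is stored as sign=0, biased exponent=1023 (i.e. 2**0), fraction=0:
print(double_bits(1.0))    # (0, 1023, 0)

# -2.0: sign=1, biased exponent bumped to 1024 (i.e. 2**1):
print(double_bits(-2.0))   # (1, 1024, 0)
```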

 immortality March 30, 2013 05:20

Thank you, Bruno.
I grasped it.
There isn't any such comparison described in the wiki. And no, I don't remember such topics from the courses. The professor assumed the subject was too simple to need describing, so the friends I asked weren't any more certain than me. Although we had courses like numerical analysis in the BSc, they are too far back to remember!
Have a nice day and a good sleep.

 haakon March 30, 2013 08:49

I started writing a short explanation yesterday, so I thought I might post it even though this thread could otherwise be laid to rest:

The magic lies in the definition of epsilon, which according to the Wikipedia article "gives an upper bound on the relative error due to rounding in floating point arithmetic" (my emphasis).

To use my previous example, if you add two numbers of different magnitude, you may get round-off errors. This also goes for other operations, such as multiplication and division. For the sake of simplicity, assume that one of the numbers is one (exactly).

First, let the other number also be one (exactly). In double precision that is:
Code:

`1+1=2.00000000000000000000`
And the result is represented accurately, i.e. no errors.

Then let us assume that we are to do a division, 1/10, which we know (from exact arithmetic) is exactly 0.1. In double precision that is actually not the case:
Code:

`1/10 = 0.10000000000000000555`
As we can see, we have got an error. In this case the absolute error is 5.55e-18. If we normalize with respect to the correct answer (0.1) we get the relative error, which is 5.55e-17.

We might as well do this with some other numbers:
Code:

`1e-67/1e-66 = 0.09999999999999999167`
The absolute error in this case: 8.32e-18. The relative error (with respect to the correct answer): 8.32e-17

Code:

`1e-100+2e-100 = 3.00000000000000005998e-100`
The absolute error? 5.998e-117. The relative error? 5.998e-17.

Do you get the concept? The crux is that we are dealing with relative errors due to round-off. The last example is a good one, because we are dealing with numbers that are small (on the order of 10^-100), and thus get small absolute errors. The key in the definition of epsilon is that it (I repeat myself) "gives an upper bound on the relative error due to rounding in floating point arithmetic". In all of our examples the relative errors are smaller than epsilon, and that is what it is all about.

Disclaimer: I have not considered that numbers like 1e-100 are not representable exactly in double precision, and hence the error calculated here is the result of first converting the exact 1e-100 to a binary number (which is 1.00000000000000001999e-100) and then doing the same with the other numbers before actually performing the operation. Nevertheless, it illustrates the concept fairly well without adding too much complexity. I'm just a silly hydrodynamics engineer, so if some computer geeks want to blow my head off and come up with some exact calculations and considerations, feel free to do that.
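The 1/10 example above can be reproduced exactly with Python's Fraction type (a minimal sketch; the helper name is just for illustration, and Fraction(0.1) recovers the exact binary value actually stored, which sidesteps the input-conversion caveat in the disclaimer):

```python
from fractions import Fraction

def rel_error(computed, exact):
    """Exact relative error of a float against the true rational answer."""
    return abs(Fraction(computed) - exact) / exact

# The rounding bound for double precision, 2**-53 ~ 1.11e-16:
eps = Fraction(2) ** -53

# 1/10 in doubles is not exactly 0.1; the relative error is ~5.55e-17:
err = rel_error(1 / 10, Fraction(1, 10))
print(float(err))   # ~5.55e-17
print(err <= eps)   # True: within the machine-epsilon bound
```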

 immortality March 30, 2013 09:20

Thank you so much, dear Haakon, for the precise write-up. That illuminated the concept further. This topic stays alive for reference and further thoughts. I hope it is useful to anyone who has questions related to machine epsilon and double precision.

 All times are GMT -4. The time now is 14:23.