Is double precision the default in OF?
Can anyone answer me?
Thanks.
Yes, by default it is double precision. But you can change this at compile time by setting WM_PRECISION_OPTION=SP instead of WM_PRECISION_OPTION=DP in etc/bashrc before you compile. Anyway, that is not something I would recommend.

One comment: the calculation is done in double precision by default, but results are saved to disk typically with a precision of 6 digits. Use the binary format to preserve precision.
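Not part of the original post, but a quick sketch in plain Python (whose floats are IEEE 754 doubles, the same format as OpenFOAM's DP) shows how much a 6-digit ASCII write throws away:

```python
# Sketch (plain Python, not OpenFOAM): round-tripping a double through
# 6 significant decimal digits, as a 6-digit ASCII write would do.
import math

x = math.pi                   # full double-precision value
ascii_6 = float(f"{x:.6g}")   # what survives a 6-significant-digit write
print(x)                      # 3.141592653589793
print(ascii_6)                # 3.14159
print(abs(x - ascii_6))       # ~2.65e-06: far larger than machine epsilon (~2.2e-16)
```

A binary write stores the full 8-byte double, so no digits are lost.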
Best, 
Thank you, dear Hacon and Alberto.
Double precision gives roughly 15-16 significant digits, doesn't it? When tolerances are set to, for example, 1e-20, does that have any effect on the accuracy of the calculations, since it is below the DP limit? Thanks again.
Greetings to all!
In essence:
If I'm not mistaken, this is where the continuity indicators help figure out how good or bad the solution is... but that's all I can remember :( Best regards, Bruno 
Thanks, but then I wonder what its relation is to the machine zero, about 1e-17, that must be reached by the continuity residuals. Is it a limit, even though DP can work with numbers much smaller than 1e-17?

I was curious about DP but didn't find it in the etc/bashrc file.

Quick answers:
edit: "eps" is actually the name of the command in MATLAB that gives us this machine precision... it looks like it's not an acronym; it actually stands for "epsilon" :rolleyes:
See this page for more information: http://en.wikipedia.org/wiki/Double...gpoint_format 
Thank you both so much for the explanations.
But a doubt still remains about the relation between machine epsilon (around 1e-16) and DP (which can represent values down to about 1e-308) or SP.
hakon and bruno, please complete your useful explanations by giving a comparison between the DP bound and machine epsilon.
Why is machine epsilon restricted to about 1e-16 when calculations can go down to 1e-308 according to DP accuracy?
Anyway, to try to answer your question, I'll go through a somewhat summarized bullet-point presentation:

Thank you, Bruno.
I grasped it. There isn't any comparison description in the wiki. No, I don't remember such topics from my courses; the professor assumed the subject was too simple to bother describing, so the friends I asked weren't any more certain than me. Although we had courses like numerical analysis in the BSc, they're too far back to remember! Have a nice day and sleep well.
I started writing a short explanation yesterday, so I thought I might post it even though this thread should be laid to rest:
The magic lies in the definition of epsilon, which according to the Wikipedia article "gives an upper bound on the relative error due to rounding in floating point arithmetic" (my emphasis). To use my previous example, if you add two numbers of different magnitude, you might get roundoff errors. The same goes for other operations, such as multiplication and division. For the sake of simplicity, assume that one of the numbers is exactly one. First, let the other number also be exactly one. In double precision that is:
Code:
1+1 = 2.00000000000000000000
Then let us do a division, 1/10, which we know (from exact arithmetic) is exactly 0.1. In double precision that is actually not the case:
Code:
1/10 = 0.10000000000000000555
We might as well do this with some other numbers:
Code:
1e66/1e67 = 0.09999999999999999167
Let's do some additions:
Code:
1e100+2e100 = 3.00000000000000005998e100
Do you get the concept? The crux is that we are dealing with relative errors due to roundoff. The last example is a good one: we are dealing with large numbers (order of magnitude 10^100), and thus get large absolute errors, yet the relative error is still tiny. The key in the definition of epsilon is that it (I repeat myself) "gives an upper bound on the relative error due to rounding in floating point arithmetic". In all of our examples the relative errors are smaller than epsilon, and that is what it is all about.

Disclaimer: I have not considered that numbers like 1e100 cannot be represented exactly in double precision, so the error calculated here is the result of first converting the exact 1e100 to a binary number (which is 1.00000000000000001999e100) and then doing the same with the other numbers before actually performing the operation. Nevertheless, it illustrates the concept fairly well without adding too much complexity. I'm just a silly hydrodynamics engineer, so if some computer geeks want to blow my head off and come up with exact calculations and considerations, feel free to do that.
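Not part of the original post, but the roundoff examples above are easy to reproduce in Python, whose floats are IEEE 754 doubles:

```python
# Sketch: reproducing the roundoff examples in Python (IEEE 754 doubles).
import sys

eps = sys.float_info.epsilon   # ~2.22e-16: upper bound on the relative rounding error
print(f"{1/10:.20f}")          # 0.10000000000000000555 (0.1 is not exact in binary)

# A classic case: 0.1 + 0.2 is not exactly 0.3, but the relative error
# stays below machine epsilon.
rel_err = abs((0.1 + 0.2) - 0.3) / 0.3
print(rel_err <= eps)          # True

# Note that epsilon (relative precision) is independent of the exponent
# range: the smallest normal double is ~2.2e-308.
print(sys.float_info.min)
```

This also illustrates the earlier question in the thread: epsilon bounds how many significant digits survive an operation, while the 1e-308 bound only limits how small a magnitude can be represented at all.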
Thank you so much, dear hakon, for the precise write-up; that illuminated the concept further. This topic stays alive for reference and further thoughts. I hope it is useful to anyone with questions about machine epsilon and double precision.
