Any questions about Runge-Kutta methods
Does anyone have any questions about Runge-Kutta methods?
My motivation for posing this question is to see if I can find an employer who might be interested in hiring a fluid dynamicist/ numerical analyst. So, the deal is that I'll try to help you if you try to help me. OK? 
Re: Any questions about Runge-Kutta methods
Yes, I have one:
Why do structural analysis people use their own set of numerical methods, like Newmark-Beta, Wilson-Theta, Collocation, Houbolt, Park, etc.? Why don't they use simple ODE integration methods, like Runge-Kutta-Fehlberg? 
Re: Any questions about Runge-Kutta methods
Well, as I am not a structural analysis person, I can't really say much about this. Atmospheric science folks have a bunch of their own schemes also. What you tend to see is the mathematics community doing a serious analysis and physical scientists using what they understand. The net result is an enormous chasm between them. In my opinion, the fluid dynamics community is far better at spatial discretizations than temporal ones.
Runge-Kutta methods come in many flavors: implicit, explicit, additive, and partitioned. One may then apply these to ordinary differential equations, differential algebraic equations, delay differential equations, etcetera. In fluid dynamics, we are generally concerned with ordinary differential equations and differential algebraic equations. As to your comment about RKF methods, much has happened since Fehlberg made those schemes in the late 1960s. Here are just a few things. It is generally accepted that the propagating solution should be the higher-order solution rather than the lower one as Fehlberg did. Next, people have devoted quite a bit of effort to minimizing the leading-order truncation errors of new explicit RK pairs. One more thing is that error controllers have been systematically studied, and PI controllers are much better than what Fehlberg used. If I could grab the ear of every numerical fluid dynamicist, I would tell them to measure their temporal error and then adjust the time step to control it. Maybe 1% of people do that. Most people just close their eyes and put the gas pedal on the floor. The integration practices of most codes are astonishingly primitive. 
Re: Any questions about Runge-Kutta methods
Well I have one question:
For the integration of a Discontinuous Galerkin spatial discretization, is it necessary to have limiters when using explicit RK time integration? The model equations are the Linearized Euler equations, but the coefficients of the mean flow, around which the linearization was done, can vary spatially. Your comments are most welcome. Yves 
Re: Any questions about Runge-Kutta methods
Well, for one, structural equations of motion are second-order equations... so you would have to put them in first-order form to use Runge-Kutta methods... which would entail increasing the storage of the arrays used in the computation by a factor of two.

Re: Any questions about Runge-Kutta methods
Second-order ODEs may be integrated by either recasting them as first-order ODEs or simply using explicit Runge-Kutta-Nyström (RKN) methods. RKN methods are designed, from the start, for second-order ODEs. Here are some references:
http://scholar.google.com/scholar?q=explicit+rungeKutta+nystrom&ie=UTF8&oe=UTF8&hl=en (Sorry about the link - you'll have to manually enter it) 
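To make the recasting option concrete, here is a minimal sketch of my own (not from the references above). It rewrites a second-order ODE as a first-order system, which doubles the state storage as noted earlier, and integrates with the classical 4th-order ERK; the harmonic oscillator u'' = -u is just a hypothetical test problem with exact solution u = cos(t).

```python
import numpy as np

def rhs(t, y):
    # y = [u, u'] : the first-order recast of u'' = -u (doubles the storage)
    u, v = y
    return np.array([v, -u])

def rk4_step(f, t, y, dt):
    # One step of the classical 4-stage, 4th-order explicit Runge-Kutta method
    k1 = f(t, y)
    k2 = f(t + dt/2, y + dt/2 * k1)
    k3 = f(t + dt/2, y + dt/2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

t, dt = 0.0, 0.01
y = np.array([1.0, 0.0])        # u(0) = 1, u'(0) = 0
for _ in range(100):            # integrate to t = 1
    y = rk4_step(rhs, t, y, dt)
    t += dt
print(y[0])                     # close to cos(1) ~ 0.5403
```

An RKN method would instead work directly with u and u', with tailored coefficients; the recasting above is the generic route.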
Re: Any questions about Runge-Kutta methods
Yves,
I have not used DG methods, but where are you using the limiter: in the spatial or the temporal discretization? From what I am aware of, many folks using DG methods like to use the 3-stage, 3rd-order ERK by Shu and Osher (JCP 1988). The currently trendy name for methods of similar design is SSP (Strong Stability Preserving) methods. They do not have a limiter, but they do possess a very specific and unusual stability property. Associated with this stability are the concepts of contractivity and monotonicity. Is this what you are asking about? I can go on about this, but I'm worried that I'm not addressing your question. 
Re: Any questions about Runge-Kutta methods
Thanks for the link. I was only attempting to explain why I thought Newmark and the like were used in structural dynamics... not that there is any justification behind it.

Re: Any questions about Runge-Kutta methods
Can anyone comment on the conservative properties of an IRK or an IRK-Fehlberg, with local extrapolation and step size control?
Do such schemes conserve momenta or energies or any other integrals of motion? Structural guys seem to use their own schemes because they conserve the right quantity and are dissipative for higher frequencies. See this paper: http://www2.mae.okstate.edu/Faculty/roy/hht4.pdf In addition to the reasons I mentioned, could someone tell me why structural people prefer Newmark-type schemes instead of commonly used numerical methods that may be more efficient, of a higher order, better for stiff problems, etc.? 
Re: Any questions about Runge-Kutta methods
First, the abbreviation IRK means Implicit Runge-Kutta. Fehlberg worked principally on explicit Runge-Kutta methods (ERK) and a form of multiderivative Runge-Kutta methods. I don't recall ever seeing him deal with straightforward IRKs.
Anyway, when one has a linear problem, one may speak about dispersion and dissipation orders of accuracy. With these methods, you design in the formal order of accuracy, and the method will, hopefully, attain this order on any nonlinear problem. In addition, you design in higher-order accuracy for linear problems. These methods are sometimes referred to as reduced phase-error methods. You'll see these methods in acoustics. One can also do this with IRKs. These concepts are not exactly ones of conservation.

Strictly speaking, local extrapolation (what Fehlberg did not do) and a good step size controller simply keep the local temporal error at some user-specified value. The hope is that this will simultaneously also control the global integration error. Hence, your conservation is done to some order of accuracy. By the way, Tony Jameson's famous ERK is 4th-order on linear problems but only 2nd-order on nonlinear ones. Numerical methods can be strange.

Applications people sometimes need a method but don't understand what the relevant mathematicians have written. Their recourse has often been to simply try to invent methods by themselves. These methods are often unique to various small communities. The theory of phase error in Runge-Kutta methods has been, in large part, described by Peter J. van der Houwen. He's got a bunch of stuff in SIAM J. Numer. Anal. Otherwise, look for people who cite his work and offer useful methods. http://scholar.google.com/scholar?q=houwen+runge+kutta+phase&ie=UTF8&oe=UTF8&hl=en You should NOT assume that Newmark methods are the optimal methods for these types of problems. I don't mean to sound like an arrogant SOB, but there's definitely room for improvement here. 
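To show what such a linear dispersion/dissipation analysis looks like in practice, here is a small sketch of my own (using the classical 4th-order ERK as the example, which is not a reduced phase-error method). For the scalar test problem u' = i*omega*u, one step multiplies u by the stability polynomial R(z) at z = i*omega*dt; the deviation of |R| from 1 measures dissipation, and the deviation of arg(R) from omega*dt measures dispersion (phase error).

```python
import cmath

def R_rk4(z):
    # Stability polynomial of the classical 4th-order ERK: truncated exponential
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

theta = 0.1                            # omega*dt, a small step
Rz = R_rk4(1j * theta)
amp_err = abs(Rz) - 1.0                # dissipation error (slightly negative: damping)
phase_err = cmath.phase(Rz) - theta    # dispersion (phase) error
print(amp_err, phase_err)              # both tiny for small theta
```

A reduced phase-error scheme is designed so that phase_err vanishes to higher order in theta than the formal (nonlinear) order of the method would imply.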
Re: Any questions about Runge-Kutta methods
I was indeed referring to, as it is called in the DG community, the TVB (Total Variation Bounded) RK-DG method of Cockburn and Shu. See Mathematics of Computation and JCP, 1989-1990. They use it for nonlinear hyperbolic equations.
Is such a TVB/TVD property required for those equations? And what about linear equations such as the LEE with constant mean flow? Do the requirements change for the LEE with spatially varying mean flow? The equations are linear in the variables, but the coefficients are not constant. 
Re: Any questions about Runge-Kutta methods
Let me refer to these methods, as a group, as SSP/TVD ERK methods. They arose independently with Shu and Osher (1988) and Kraaijevanger (1991) but were looked at in slightly different ways. Shu and Osher were concerned with the property of monotonicity while Kraaijevanger was looking at contractivity. Since then, a cottage industry has sprouted up to make lots of these methods. What has been lacking is a clear demonstration of where these methods offer an advantage over traditional ERK methods. A true test of the methods would be selecting an SSP/TVD ERK method and an ordinary ERK which possess nearly identical leading-order truncation errors and linear stability domains, and testing both of them on a battery of test problems while simultaneously using an error controller to maintain a fixed local error. Unfortunately, that might jeopardize one's ability to continue publishing papers on these methods. My personal opinion (and experience) is that this property offers little to no true benefit. It is 95% hype. Two of the people who publish this stuff knowingly left Kraaijevanger's work unmentioned and republished his methods. It was the cheapest thing I've ever watched in my career. Kraaijevanger's paper is really amazing. By the way, Kraaijevanger's advisor (Spijker) and another person are addressing the topic in a unifying way now.
http://www.ams.org/journalgetitem?p...71804016643 It is 2005 now and the fluid dynamics community MUST start controlling their temporal error with error controllers. This stuff has been around for decades but nobody uses it. To agonize over SSP/TVD methods but fail to use an error controller is like building an F1 race car but never putting air in the tires.

Another thing about SSP/TVD ERK methods. One can devise countless measures of contractivity, monotonicity, or stability. Go look at all of the stability measures for Implicit Runge-Kutta (IRK) methods. There are maybe 20. Does this mean that your method needs to possess all 20 of them? The trick is to understand what is needed rather than making a method that satisfies the most demanding stability property.

Imagine an ODE with one variable. Integrate that equation from some initial time, Ti, to a final time, Tf. Select an initial condition and plot the solution as time proceeds. Now redo the calculation by slightly perturbing the initial condition. You should have two very similar solution curves. Imagine you have the exact solution. If the exact solution curves approach each other as time proceeds, the equation is called dissipative. We would like the numerical solution to do the same; otherwise any error we commit during the integration will jump us onto an adjacent and diverging solution trajectory. If the distance between the two numerical trajectories decreases as we integrate, the numerical method is called contractive.

There are still two issues to be resolved. Is the equation linear or nonlinear, and what norm shall we use to measure the distance between the two solution trajectories? Traditional linear stability is for linear problems when the distance is measured with an L2 norm. For implicit methods, algebraic stability is for nonlinear problems when the distance is measured with an L2 norm. SSP/TVD ERK methods are demanding contractivity on nonlinear problems in an L-infinity norm. 
If one asks for this property in IRKs, the maximum order of accuracy possible is one if stability is requested over the entire complex left half-plane. This should tell you that, since methods of lesser stability work very well, maybe the criterion is too severe. For stiff problems, L-stability (a linear stability criterion) is more useful than algebraic stability (a nonlinear stability criterion). My advice is to look for an ERK method with low leading-order error that accepts an error controller well. That is more important than whether the method is SSP/TVD. By the way, here is Kraaijevanger's method WITH an embedded method. I'm not saying this is the best, but it is partly what you want and partly what you need!!

a(2,1) = 0.39175222657188905833d0
a(3,1) = 0.21766909626116921036d0
a(3,2) = 0.36841059305037202075d0
a(4,1) = 0.08269208665781075441d0
a(4,2) = 0.13995850219189573938d0
a(4,3) = 0.25189177427169263984d0
a(5,1) = 0.06796628363711496324d0
a(5,2) = 0.11503469850463199467d0
a(5,3) = 0.20703489859738471851d0
a(5,4) = 0.54497475022851992204d0
b(1) = 0.14681187608478644956d0
b(2) = 0.24848290944497614757d0
b(3) = 0.10425883033198029567d0
b(4) = 0.27443890090134945681d0
b(5) = 0.22600748323690765039d0
bh(1) = 3298630537163.d0/18795049059328.d0
bh(2) = 1771072646821.d0/14103719399933.d0
bh(3) = 25277699738943.d0/87361585864100.d0
bh(4) = 1305111367099.d0/5907422403405.d0
bh(5) = 3614287973383.d0/19159025146192.d0 
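As a quick sanity check on a tableau like the one above, one can numerically verify a few of the necessary quadrature-type order conditions (sum(b) = 1, b.c = 1/2, b.c^2 = 1/3, and sum(bh) = 1 for the embedded weights, with c taken as the row sums of A). This is only a spot check of my own, not a proof of order; the Python array layout is mine.

```python
import numpy as np

# Lower-triangular A of the 5-stage explicit method quoted above (0-based rows)
A = np.zeros((5, 5))
A[1, 0] = 0.39175222657188905833
A[2, :2] = [0.21766909626116921036, 0.36841059305037202075]
A[3, :3] = [0.08269208665781075441, 0.13995850219189573938, 0.25189177427169263984]
A[4, :4] = [0.06796628363711496324, 0.11503469850463199467,
            0.20703489859738471851, 0.54497475022851992204]
b = np.array([0.14681187608478644956, 0.24848290944497614757,
              0.10425883033198029567, 0.27443890090134945681,
              0.22600748323690765039])
bh = np.array([3298630537163/18795049059328, 1771072646821/14103719399933,
               25277699738943/87361585864100, 1305111367099/5907422403405,
               3614287973383/19159025146192])
c = A.sum(axis=1)   # abscissae: row sums of A for a consistent method

# Residuals of a few necessary order conditions; all should be near zero
print(abs(b.sum() - 1), abs(b @ c - 1/2), abs(b @ c**2 - 1/3), abs(bh.sum() - 1))
```

Transcribing a 20-digit tableau by hand is error-prone, so a check like this is worth the ten lines before trusting any results.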
Re: Any questions about Runge-Kutta methods
Hi,
I would like to know why the third-order Runge-Kutta method is TVD. What does this mean in terms of the stages used in the Runge-Kutta method? 
Re: Any questions about Runge-Kutta methods
Let's begin this with my personal opinion that TVD explicit Runge-Kutta methods have taken on a life of their own, independent of their true merit. It is frustrating for me to hear everyone talking about them based principally on the idea that the phrase TVD is intuitively appealing. To use TVD ERK methods and not use an error controller is completely misguided. OK, that was my speech.
The term TVD refers to the property of monotonicity. Closely related to this, but not identical, is the concept of contractivity. Stability is closely related to contractivity but not as demanding. If you would like serious details on the derivation and proof of monotonicity or contractivity of ERK methods, then I defer to the papers by Luca Ferracina and Marc Spijker: http://www.ams.org/journalgetitem?p...71804016643 http://epubs.siam.org/sambin/dbq/article/41558 and the classic paper on contractivity is by Hans Kraaijevanger: http://portal.acm.org/citation.cfm?i...N=16360205#CIT

That is part of my answer to your question, "why is the third-order Runge-Kutta method TVD?" The 3-stage, 3rd-order ERK method is a two-free-parameter family of methods. The method that Shu and Osher present selects c2 = 1, c3 = 1/2. That means that the solution vector at the first nontrivial stage occurs at tn + c2*dt while the next stage occurs at tn + c3*dt. In terms of contractivity, the radius of contractivity for Fehlberg's method (1969) (the one Shu and Osher present) applied to a nonlinear problem is 1.000 using an L-infinity norm. That is a very severe requirement for a method. Not all 3-stage, 3rd-order ERK methods have a contractivity radius of 1.000. Some have 0.000. Maximizing the contractivity radius for 3-stage, 3rd-order ERK methods leads one to Fehlberg's method. Kraaijevanger lists other maximally contractive ERK methods. Maximizing the TVD radius gives identical methods to those where the contractivity radius is maximized.

Kraaijevanger lists the requirements for contractivity (nonlinear problems using an L-infinity norm). I'm not going to try to lay it all out here because it is not simple, but part of it is. A Runge-Kutta method is defined by its coefficients: A, b, and c. For this severe contractivity requirement, b > 0, A >= 0, and the Runge-Kutta K(Z) function must be absolutely monotonic on the negative real axis over [-R,0], where R is the radius of contractivity. 
Now, pick your favorite method with b > 0, A >= 0. Next, you need to define several new scalars, vectors, and matrices in terms of a variable xi. You set xi = 0 and see if all of your scalars, vectors, and matrices stay nonnegative. Increase xi until one of them goes negative and, at the point this happens, record xi. This value is the contractivity radius for nonlinear problems using an L-infinity norm. 
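That search can be automated. Here is a sketch of my own that bisects on the radius, using the standard characterization of the radius of absolute monotonicity from Kraaijevanger's and Ferracina/Spijker's work: with K the (s+1)x(s+1) matrix [[A, 0], [b^T, 0]], a value r is feasible iff I + r*K is nonsingular, K(I + r*K)^{-1} >= 0 componentwise, and r*K(I + r*K)^{-1}*e <= e. This is a numerical sketch, not a substitute for the papers.

```python
import numpy as np

def contractivity_radius(A, b, r_max=50.0, iters=60):
    """Bisect for the radius of absolute monotonicity R(A, b)."""
    s = len(b)
    K = np.zeros((s + 1, s + 1))
    K[:s, :s] = A
    K[s, :s] = b
    e = np.ones(s + 1)

    def feasible(r):
        try:
            # P = K (I + rK)^{-1}, computed via a transposed linear solve
            P = np.linalg.solve((np.eye(s + 1) + r * K).T, K.T).T
        except np.linalg.LinAlgError:
            return False
        tol = 1e-12
        return P.min() >= -tol and np.all(r * (P @ e) <= e + tol)

    lo, hi = 0.0, r_max
    if not feasible(lo):
        return 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

# The Shu-Osher 3-stage, 3rd-order method discussed above
A3 = np.array([[0., 0., 0.], [1., 0., 0.], [0.25, 0.25, 0.]])
b3 = np.array([1/6, 1/6, 2/3])
R = contractivity_radius(A3, b3)
print(R)   # close to 1.0, matching the radius quoted above
```

The bisection works because the feasible set is an interval [0, R]; the printed value for the Shu-Osher method agrees with the 1.000 quoted in the post.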
Re: Any questions about Runge-Kutta methods
O.K. Do you have any reference about time error control? Just at a basic level.

Re: Any questions about Runge-Kutta methods
I'm afraid the references I might give would bewilder most readers. So, I'll try to say it here.
A Runge-Kutta method samples the space between two steps by creating intermediate solutions between the current step and the future step. Let's say we want to integrate dU/dt = F(U) for one step beginning at step n. Our first intermediate U-vector occurs at tn + c1*dt and is given by

U^{1} = U^{(n)} + dt*[ a11*F^{1} + a12*F^{2} + a13*F^{3} + ... ]

Then the next intermediate U-vector, U^{2}, occurring at tn + c2*dt, is

U^{2} = U^{(n)} + dt*[ a21*F^{1} + a22*F^{2} + a23*F^{3} + ... ]

One does this s times (s = number of stages), or

U^{i} = U^{(n)} + sum_{j=1}^{s}[ dt*aij*F^{j} ]

Now that you have all of the intermediate stages, you take a linear combination of the intermediate function values to get U^{(n+1)}. We use b's for the weights used to get the step U-vector but a's to get the stage U-vectors:

U^{(n+1)} = U^{(n)} + sum_{i=1}^{s}[ dt*bi*F^{i} ]

OK, once we've finished the step, we create a second solution that I'll call Uhat. The main solution is usually order p but Uhat (the embedded method) is order (p-1). Hence we might have

U^{(n+1)} = U^{(n)} + sum_{i=1}^{s}[ dt*bi*F^{i} ]
Uh^{(n+1)} = U^{(n)} + sum_{i=1}^{s}[ dt*bhi*F^{i} ]

where U^{(n+1)} might be 4th-order and Uh^{(n+1)} is 3rd-order. Create an error estimate by computing

[ U^{(n+1)} - Uh^{(n+1)} ] / [ U^{(n+1)} ] = delta^{(n+1)}

Actually, one needs to design this error estimate to avoid dividing by zero. OK, now we have an error estimate. The step size is now selected using a controller and a desired error, which I'll call eps. One selects the new step size to get us to step n+1, in the most general case, as

dt^{(n+1)} = safe_fac * dt^{(n)}
           * ( eps/delta^{(n+1)} )^alpha
           * ( delta^{(n)}/eps )^beta
           * ( eps/delta^{(n-1)} )^gamma
           * ( delta^{(n)}/delta^{(n-1)} )^a
           * ( delta^{(n-1)}/delta^{(n-2)} )^b

If you are using an explicit Runge-Kutta method, use a PI or PID controller (a = b = 0). Start with

alpha = 0.7/p_hat, beta = 0.4/p_hat, gamma = 0

or

alpha = 0.6/p_hat, beta = 0.2/p_hat, gamma = 0

where p_hat is the order of the embedded method (3 in the case I have used). 
The only published PID controller I've seen had

alpha = 0.49/p_hat, beta = 0.34/p_hat, gamma = 0.10/p_hat

Let me know if you want to use implicit RK methods because they use very different controller coefficients. Take a look at this paper: http://ntrs.nasa.gov/archive/nasa/ca...1999099362.pdf The man who has done the most on error control for RK methods is Gustaf Soderlind. Read his (and his students') papers. http://www.maths.lth.se/na/staff/gustaf/ http://scholar.google.com/scholar?q=rungeKutta+soderlind&ie=UTF8&oe=UTF8&hl=en Also, http://www.unige.ch/~hairer/books.html

So, here's the method that I gave before:

a(2,1) = 0.39175222657188905833d0
a(3,1) = 0.21766909626116921036d0
a(3,2) = 0.36841059305037202075d0
a(4,1) = 0.08269208665781075441d0
a(4,2) = 0.13995850219189573938d0
a(4,3) = 0.25189177427169263984d0
a(5,1) = 0.06796628363711496324d0
a(5,2) = 0.11503469850463199467d0
a(5,3) = 0.20703489859738471851d0
a(5,4) = 0.54497475022851992204d0
b(1) = 0.14681187608478644956d0
b(2) = 0.24848290944497614757d0
b(3) = 0.10425883033198029567d0
b(4) = 0.27443890090134945681d0
b(5) = 0.22600748323690765039d0
bh(1) = 3298630537163.d0/18795049059328.d0
bh(2) = 1771072646821.d0/14103719399933.d0
bh(3) = 25277699738943.d0/87361585864100.d0
bh(4) = 1305111367099.d0/5907422403405.d0
bh(5) = 3614287973383.d0/19159025146192.d0

I hope that wasn't too much ... 
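Putting the pieces together, here is a minimal sketch of my own of an error-controlled integration loop with a PI controller using the alpha = 0.7/p_hat, beta = 0.4/p_hat coefficients above. For brevity I substitute the well-known Bogacki-Shampine 3(2) embedded pair (p_hat = 2) rather than the 5-stage tableau quoted in the post; any embedded ERK pair slots into the same loop.

```python
import math
import numpy as np

# Bogacki-Shampine 3(2) embedded pair (my substitution, not the tableau above)
A = np.array([[0,   0,   0,   0],
              [1/2, 0,   0,   0],
              [0,   3/4, 0,   0],
              [2/9, 1/3, 4/9, 0]])
b  = np.array([2/9, 1/3, 4/9, 0])     # 3rd-order weights (propagated: local extrapolation)
bh = np.array([7/24, 1/4, 1/3, 1/8])  # 2nd-order embedded weights
c  = A.sum(axis=1)
p_hat = 2                             # order of the embedded solution

def erk_step(f, t, u, dt):
    """One embedded ERK step: higher-order solution plus relative error estimate."""
    k = [None] * 4
    for i in range(4):
        ui = u + dt * sum(A[i, j] * k[j] for j in range(i)) if i else u
        k[i] = f(t + c[i] * dt, ui)
    k = np.array(k)
    u_new = u + dt * (b @ k)
    delta = np.linalg.norm(dt * ((b - bh) @ k)) / max(np.linalg.norm(u_new), 1e-30)
    return u_new, delta

def integrate(f, t0, tf, u0, eps=1e-6):
    """PI step-size control with alpha = 0.7/p_hat, beta = 0.4/p_hat."""
    alpha, beta = 0.7 / p_hat, 0.4 / p_hat
    t, u = t0, np.asarray(u0, float)
    dt, d_old = 1e-2 * (tf - t0), eps
    while tf - t > 1e-12:
        dt = min(dt, tf - t)
        u_try, d = erk_step(f, t, u, dt)
        d = max(d, 1e-14)              # guard against a zero error estimate
        if d <= eps:                   # accept the step
            t, u, d_old = t + dt, u_try, d
        # PI update of the step size (applied on accept and on reject)
        dt *= 0.9 * (eps / d)**alpha * (d_old / eps)**beta
    return u

u = integrate(lambda t, u: -u, 0.0, 1.0, np.array([1.0]), eps=1e-6)
print(abs(u[0] - math.exp(-1.0)))      # small global error on u' = -u
```

The test problem u' = -u and the safety factor 0.9 are my choices; the point is simply that the whole controller is a dozen lines once an embedded pair is available.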
Re: Any questions about Runge-Kutta methods
Hi, I wrote a 4th-order Runge-Kutta code as a VBA Excel macro, and it does not fit my experimental data at long residence times. I would like to make a test for it and see the calculation parameters before using the main function, but I don't know how to do that. I will be very glad if someone can help me. Thank you very much in advance.

Re: Any questions about Runge-Kutta methods
I don't understand the question. You said the (explicit?) Runge-Kutta (ERK) method does not fit your data at long times. What does that mean? Is the ERK failing? If so, what happens? What is it in the method that you would like to see fixed?

Re: Any questions about Runge-Kutta methods
Thank you very much!
The references are very good. Do you know if some similar analysis has been carried out for Adams-Bashforth schemes? 
Re: Any questions about Runge-Kutta methods
Temporal error estimation, and the controllers that adjust the step size, are much more difficult for multistep methods than for multistage methods. The best reference for this is a thesis from Sweden. Download the thesis by Anders Sjo here:
http://www.maths.lth.se/na/thesis.html I think you will conclude that doing error control with an explicit Runge-Kutta method is much easier than with an explicit multistep method. The essential problem with error control for multistep methods is their history: they retain information from many old, and sometimes unequally spaced, steps. 