Explicit vs Implicit timestepping
I wonder which is better, explicit or implicit time stepping. Perhaps it depends on the purpose: steady problems, unsteady problems, stiff equations, parallel computations? Does anyone have ideas, opinions, or references? Thanks.
Amdi 
oh, long discussion... explicit vs implicit, which is better?
Totally depends on what you want to compute... some random thoughts:

Implicit:
+ steady state
+ if you are not interested in the fastest wave speeds of your problem, or if the dominating physical effect takes place on large timescales
+ incompressible flow
- without a good preconditioner, no gain in speed compared to explicit... preconditioning is an art in itself, there are many, many combinations of implicit solution methods and preconditioners... kind of a black art to me... seems to be a lot of experience and guesswork
- memory requirement can seriously limit your computation
- (my opinion:) less intuitive, harder to program and debug
- harder if not impossible to parallelize well (and get good scaling!)

Explicit:
- sucks at steady state
- computation can be bogged down by the fastest wave speed
+ very easy to set up
+ often (e.g. turbulence) the fastest wave speeds are of importance, so the smallest timescales must be resolved anyway
+ parallelization very easy, almost perfect scaling possible

My conclusion: totally depends on what you want to do!! 
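The stability point above can be illustrated on the classic stiff model problem du/dt = -lam*u. A minimal sketch (illustrative, not from the thread; lam and dt are made-up numbers): forward Euler is stable only for dt < 2/lam, while backward Euler is stable for any dt.

```python
# Stiff model problem du/dt = -lam * u, u(0) = 1, exact solution decays to 0.
lam = 1000.0
dt = 0.01            # well above the explicit stability limit 2/lam = 0.002
steps = 50

u_exp = 1.0
u_imp = 1.0
for _ in range(steps):
    u_exp = u_exp + dt * (-lam * u_exp)   # forward (explicit) Euler
    u_imp = u_imp / (1.0 + lam * dt)      # backward (implicit) Euler

# u_exp is amplified by |1 - lam*dt| = 9 each step and blows up;
# u_imp is damped by 1/(1 + lam*dt) = 1/11 each step and decays.
```

This is exactly the trade-off cfdnewbie describes: the implicit update stays stable at a time step far larger than the fastest timescale allows explicitly, at the cost of a solve (here trivial, because the problem is scalar and linear).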
Dear cfdnewbie
Thanks for your insightful comments! Just one more question. Why is it difficult to parallelize implicit schemes and get good scaling? Each linear solve iteration is like an explicit scheme which scales perfectly, no? Amdi 
Implicit schemes always require the setup of the system matrix in each step (at least for a nonlinear problem), so you have to build and manipulate the matrix in memory... this matrix can become huge: it is N x N, where N is the total number of degrees of freedom, so stored densely it scales as N^2, and N itself grows quickly with the dimension of your problem (2D, 3D)...
So to solve with this matrix, you need to a) have a huge amount of memory, b) distribute this matrix somehow over your procs, and c) communicate a lot between procs (to compute the solution). This communication is what generally slows down your code and makes good scaling difficult. I'm no expert on implicit methods, though; maybe there are clever tricks to circumvent this and make the scaling acceptable. I prefer explicit methods, because the problems I am interested in need them anyway.. 
It's interesting that it takes a lot more effort to solve a linear system in parallel.
Thanks! Amdi 
Also keep in mind that implicit schemes have a much higher memory requirement due to the matrix... I know that all people doing implicit stuff always complain that the memory requirement is what kills them when they are trying to get their code to perform on HPC machines...
Again, I am no expert on implicit schemes, I prefer explicit ones, so maybe somebody with more experience can give you another opinion! 
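To put rough numbers on the memory point (a back-of-the-envelope sketch; the grid size is illustrative, not from the thread): even a modest 3D grid makes dense storage of the system matrix hopeless, which is why sparse formats that store only the nonzeros are essential.

```python
# Illustrative memory estimate for one scalar equation on a 100^3 grid.
N = 100**3                   # 1e6 unknowns

dense_bytes  = N * N * 8     # full N x N matrix of 8-byte doubles
sparse_bytes = N * 7 * 8     # ~7 nonzeros per row for a 3D 7-point stencil

# dense:  8e12 bytes, i.e. ~8 TB  -> cannot even be stored
# sparse: 5.6e7 bytes, i.e. ~56 MB -> easily fits in memory
```

The matrix itself is sparse because each cell only couples to its few neighbors; the cost that remains is distributing those rows over processors and communicating during the solve, as described above.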
Hi Amdi/cfdnewbie,
I agree with most comments from cfdnewbie but would like to add a few more. There is more than one way to do implicit timestepping:
- Iterative solvers (conjugate gradient and the like) can be used to keep memory down, and they scale reasonably well in parallel; they are also quite good for sparse matrices (which most CFD solvers require).
- SIMPLE/PISO-like algorithms solve the implicit equations in a segregated manner (i.e., solve the u-velocity, v-velocity, and w-velocity as distinct equations). This approach again limits memory use, as each system is NxN, where N is the number of cells, in place of (nN)x(nN), where n is the number of equations.
- Compressible flows can be treated implicitly, as long as the flow physics you are interested in allows for implicit treatment (e.g., the interaction of a pressure wave with a turbulent field would likely require an explicit algorithm).
cfdnewbie would need to refine his comments on preconditioning, as it can refer either to preconditioning of the equations to reduce the stiffness of the system to solve (physics-based, such as the introduction of an artificial compressibility) or to preconditioning of the linear system itself (incomplete LU decomposition as a preconditioner for conjugate gradient, for example). It is a very wide question and subject. And you need to keep in mind that an appropriate solution for the automotive industry may not be appropriate for the aerospace industry: get the right tool for the job. Kind regards, Julien 
Hi Julien,
thank you very much for your insightful comments!
Another question: Is there a clever way to use the sparsity of the matrix to your advantage? I know that people store just the nonzero matrix entries plus their matrix indices, but this mapping is pretty time-consuming, right?
However, one should not forget the mixed approach: treating the terms in your equation either explicitly or implicitly according to their physics... RK-IMEX and such... that seems to be pretty interesting, but I haven't looked into it yet.
It's not a question of which one is better in general. Each has its uses and disadvantages, but it's good to be aware of them.... Cheers (explicitly ;) ) cfdnewbie 
I will do my best to answer your question.
There are clever ways to store a sparse matrix, and even ways not to store the matrix at all (if you are willing to pay the price of not using a preconditioner). An example: if you know that your matrix is tridiagonal, you can just store 3 arrays (one for the diagonal elements and one each for the upper and lower diagonals). It would still require more memory than an explicit scheme.
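The tridiagonal case above can be made concrete with the classic Thomas algorithm: store only three arrays and solve A x = d in O(N) operations, never forming the full N x N matrix. A minimal sketch (the example coefficients come from a 1D implicit heat-equation step, chosen here for illustration):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system A x = d.
    a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = (c[i] / m) if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: one backward-Euler step of 1D diffusion gives rows [-r, 1+2r, -r].
r = 0.5
n = 5
a = [0.0] + [-r] * (n - 1)      # sub-diagonal
b = [1.0 + 2.0 * r] * n         # main diagonal
c = [-r] * (n - 1) + [0.0]      # super-diagonal
x = thomas(a, b, c, [1.0] * n)
```

Storage is 3N numbers instead of N^2, and the solve is a single O(N) sweep, which is why banded or sparse structure is worth exploiting whenever the discretization provides it.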
I am afraid that I am more from industry than academia: I did my PhD 10 years ago and moved away from research, so I am not too good on papers. But I can still do some mean coding and have a decent memory.
Another note (from industry): a large amount of time is spent on understanding the problem, pre-processing, post-processing, and reporting. Sometimes it can actually exceed the solver time (I am not doing LES...). Good luck with it. Kind regards, Julien 
You could also try to use a specialized version of LAPACK for solving your huge system. You will also need to be able to reorder your mesh numbering (for unstructured grids) to limit the bandwidth of your matrix; using a structured grid gives you a well-arranged banded matrix from the start.
An alternative to a pure implicit solver is the pseudo-implicit (dual time stepping) method of A. Jameson, in which you start with an implicit temporal discretization but, at each time step, you actually use an explicit solver on a slightly modified version of your discretized equations. This is the approach used in commercial solvers like Fluent, for example. If you need an exact reference, I will find the title of the article in which Jameson presents his method. 
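The dual time stepping idea can be sketched on a scalar model problem (a toy illustration with made-up parameters, not Jameson's actual formulation): the backward-Euler update is found by driving an unsteady residual to zero with cheap explicit pseudo-time iterations.

```python
# Dual time stepping sketch for du/dt = f(u) = -lam*u.
# Backward Euler wants u_new with (u_new - u_old)/dt = f(u_new);
# instead of a linear solve, march v <- v - dtau * R(v) in pseudo-time,
# where R(v) = (v - u_old)/dt - f(v), until R ~ 0.
lam = 50.0
def f(u):
    return -lam * u

dt = 0.1                              # physical step, above explicit limit 2/lam
dtau = 0.5 * dt / (1.0 + lam * dt)    # pseudo-time step chosen for contraction
u = 1.0
for _ in range(10):                   # physical time steps
    v = u
    for _ in range(200):              # explicit pseudo-time iterations
        R = (v - u) / dt - f(v)
        v -= dtau * R
    u = v
# u now matches the backward-Euler solution u0 / (1 + lam*dt)**10.
```

Each inner iteration is explicit and local (so it parallelizes like an explicit scheme), and in practice the inner loop is accelerated with multigrid and local time steps rather than run to this many iterations.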
thanks Julien for your insightful comments!

cfdnewbie, Julien, RanaP,
Thank you all for the very insightful comments. I think I have a better understanding of time-stepping schemes now. Amdi 