Good Starting Guesses for Faster Iterative Solution of Unsteady Simulations
Hello,
I am writing an unsteady solver for the incompressible Navier-Stokes equations using the exact fractional step method (EFS) by J. B. Perot. I am currently using Conjugate Gradients (CG) to solve the linear system at each time step. I would like to discuss how the starting guess for the linear solve could be chosen for efficiency: can we somehow use information from previous time steps to formulate a starting guess for the current time step that leads to faster convergence (fewer required iterations)? Thanks for your time and input.

P.S. In case you were curious, I am using CG because in my implementation the entries of the system matrix are determined by the grid and constant throughout the whole simulation, which allows for a matrix-free implementation. I think CG also lends itself to an effective GPU implementation. Comments on these thoughts are welcome as well!

Note that I am trying to make an unsteady (time-accurate) solver for high-Reynolds-number flow, so small time steps (compared to implicit methods) will probably be desired. So, more expensive solution methods that allow for larger time steps may not necessarily be more desirable (depending on the trade-off, of course).
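To illustrate the matrix-free point: the matrix entries never need to be stored, because the action of the operator can be rebuilt from the grid stencil inside every CG iteration. A stripped-down sketch for a 5-point Poisson stencil with homogeneous Dirichlet boundaries (illustrative names and setup, not my actual solver):

```python
import numpy as np

def apply_A(p, h):
    """Matrix-free action of -laplacian on a uniform grid (spacing h).

    The matrix is never stored: its action is rebuilt from the 5-point
    stencil on the fly, which is what makes the CG loop matrix-free.
    Boundary values are held at zero (homogeneous Dirichlet)."""
    u = np.zeros_like(p)
    u[1:-1, 1:-1] = (4.0 * p[1:-1, 1:-1]
                     - p[2:, 1:-1] - p[:-2, 1:-1]
                     - p[1:-1, 2:] - p[1:-1, :-2]) / h**2
    return u

def cg(b, x0, h, tol=1e-10, maxiter=1000):
    """Plain conjugate gradients driven by the stencil above.

    x0 is the starting guess; tol is relative to the norm of b."""
    x = x0.copy()
    r = b - apply_A(x, h)
    d = r.copy()
    rs = np.sum(r * r)
    bnorm = np.sqrt(np.sum(b * b))
    for _ in range(maxiter):
        Ad = apply_A(d, h)
        alpha = rs / np.sum(d * Ad)
        x += alpha * d
        r -= alpha * Ad
        rs_new = np.sum(r * r)
        if np.sqrt(rs_new) <= tol * bnorm:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return x
```

Since apply_A is just a stencil sweep, the same structure should carry over to a GPU kernel.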
For an iterative algorithm in unsteady solvers it is quite common to use the solution of the previous time step as the starting value.

Yes, I forgot to mention that I am already doing that, but I don't see any difference compared to starting with a zero vector for a lid-driven cavity flow. I was wondering if there are any more sophisticated techniques.
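For concreteness, the kind of "more sophisticated" thing I had in mind: since the solution evolves smoothly in time, one could linearly extrapolate from the last two solutions rather than just reusing the last one. A sketch (illustrative names; it assumes a constant time step so the extrapolation weights stay fixed):

```python
import numpy as np

def initial_guess(history):
    """Build a starting guess from stored previous time-step solutions.

    history is a list of past solution vectors, newest last. With two or
    more snapshots, linear extrapolation in time (x ~ 2*x_n - x_{n-1})
    is often a better predictor than reusing x_n unchanged; with one
    snapshot, fall back to it; with none, signal a zero start.
    Assumes a constant time step, so the weights 2 and -1 are fixed."""
    if len(history) >= 2:
        return 2.0 * history[-1] - history[-2]
    if len(history) == 1:
        return history[-1].copy()
    return None  # caller falls back to a zero vector
```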

Quote:
The CG method is much less sensitive to the starting solution than methods like Jacobi, S.O.R., ...; see the relevant paragraph in the book of Peric & Ferziger :)
Are you using a preconditioner? That will improve your performance much more than the initial guess. I am using a multigrid preconditioner, and a good initial guess helps a little, but it is nothing compared to the effect of the preconditioner.

Can you tell me how it affects the memory requirements and regularity of the system matrix?
Even if a really successful preconditioner reduces the iteration count by a factor of, let's say, 2, it may not be faster in a GPU implementation due to memory bandwidth, coalescing, latency, etc., which have order-of-magnitude effects on speed.
A multigrid preconditioner will reduce the number of iterations by at least an order of magnitude. I am not familiar with the exact fractional step method, but I am using a fractional step method, and it involves the solution of an elliptic equation for the pressure, which is the time-limiting step. I run everything on the CPU, but multigrid methods can be ported to GPUs; they are highly parallel algorithms, so I don't think that will be an issue. The extra memory required would likely be close to half of what you are using for the CG part of the solution. You could try a simple version of multigrid, like a two-level method, to get an idea of how it will work without investing too much time in it.
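To make the two-level suggestion concrete, one cycle is: pre-smooth on the fine grid, restrict the residual, solve the coarse problem, interpolate the correction back, post-smooth. A rough 1D illustration (simplified throughout; all names are illustrative):

```python
import numpy as np

def apply_A(v, h):
    """Action of -u'' (second-order FD, Dirichlet BCs) on interior points."""
    Av = 2.0 * v
    Av[:-1] -= v[1:]
    Av[1:] -= v[:-1]
    return Av / h**2

def jacobi(v, b, h, sweeps):
    """Weighted (omega = 2/3) Jacobi smoothing sweeps."""
    for _ in range(sweeps):
        v = v + (2.0 / 3.0) * (h**2 / 2.0) * (b - apply_A(v, h))
    return v

def two_grid(b, h, n_cycles=20, nu=3):
    """Two-level correction scheme for -u'' = b on a uniform 1D grid.

    b holds the interior points only; their count must be odd so the
    coarse grid (half the points, spacing 2h) lines up."""
    n = b.size
    nc = (n - 1) // 2
    # coarse-grid operator, assembled once (small, so a dense solve is fine)
    Ac = ((np.diag(np.full(nc, 2.0)) - np.diag(np.ones(nc - 1), 1)
           - np.diag(np.ones(nc - 1), -1)) / (2.0 * h) ** 2)
    x = np.zeros(n)
    for _ in range(n_cycles):
        x = jacobi(x, b, h, nu)                   # pre-smooth
        r = b - apply_A(x, h)
        rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])  # full weighting
        ec = np.linalg.solve(Ac, rc)              # exact coarse solve
        e = np.zeros(n)                           # linear interpolation back
        e[1::2] = ec
        ecp = np.concatenate(([0.0], ec, [0.0]))
        e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])
        x = jacobi(x + e, b, h, nu)               # post-smooth
    return x
```

In a real multigrid code the np.linalg.solve call is replaced by a recursive call on the coarse grid.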

I thought MG would be too much of a time investment, but given your suggestion and your explanation of the magnitude of the rewards, I will put it on my list of things to implement. I have read that MG-preconditioned CG works better than plain MG; what do you think?
What kind of matrix storage do you use (CSR, etc.)? Is the matrix still symmetric? 
I would try using MG-CG over a plain MG method. If you have discontinuous coefficients or sharp forcing terms, then plain MG can sometimes fail to converge.
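The nice thing is that MG-CG only needs the multigrid solver wrapped as an operator z = M^{-1} r. Schematically, the preconditioned CG loop looks like this (a generic sketch; apply_A and precond are placeholders you supply, e.g. precond = one V-cycle):

```python
import numpy as np

def pcg(apply_A, precond, b, x0, tol=1e-8, maxiter=200):
    """Preconditioned CG; precond(r) applies M^{-1}, e.g. one MG V-cycle.

    For the CG theory to hold, both the operator and the preconditioner
    must be symmetric positive definite, so the V-cycle should use
    symmetric smoothing (same sweeps before and after the coarse-grid
    correction)."""
    x = x0.copy()
    r = b - apply_A(x)
    z = precond(r)
    d = z.copy()
    rz = np.dot(r, z)
    bnorm = np.linalg.norm(b)
    for _ in range(maxiter):
        Ad = apply_A(d)
        alpha = rz / np.dot(d, Ad)
        x += alpha * d
        r -= alpha * Ad
        if np.linalg.norm(r) < tol * bnorm:
            break
        z = precond(r)
        rz_new = np.dot(r, z)
        d = z + (rz_new / rz) * d
        rz = rz_new
    return x
```

Dropping in precond = identity recovers plain CG, which makes it easy to compare iteration counts with and without the preconditioner.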
I'm not sure if the resulting matrix is still symmetric; however, I know people have used MG as a CG preconditioner. I use it with BiCGstab, so I don't have to worry about symmetry or positive definiteness. As for the storage scheme, I'm using a regular Cartesian grid, so I can get away with storing everything in 2D arrays. Mudpack is an open-source multigrid package, but it is only for CPUs: http://www2.cisl.ucar.edu/resources/legacy/mudpack A quick search turned up this open-source code for MG on a GPU (CUDA): https://developer.nvidia.com/cusp

I meant that the most time-consuming part of the problem is the Poisson equation for the pressure. Because multigrid methods were designed for elliptic problems, it is often worth the effort to apply a multigrid preconditioner to the pressure equation: it will speed it up so much, and the pressure equation typically takes up most of the computation time.

Just to say something about my experience... if the elliptic equation for the pressure is the standard one coming from the second-order FD discretization of the Poisson equation, try the simple SOR method with the correct relaxation value for the grid. If the system is of O(10^6) unknowns, SOR can be the fastest method in terms of CPU time and can be parallelized simply (for example with a b/w procedure). Furthermore, SOR does not require storing all the matrix coefficients.
I tried multigrid (with Jacobi iteration), CG, and GMRES, and I always found that one work unit of SOR is so cheap that SOR stays convenient even when it needs ten times more iterations than the others... Of course, for systems with a huge number of equations, the situation becomes different.
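For the record, for the standard 5-point stencil on a uniform grid the "correct relaxation value" is known in closed form: omega = 2/(1 + sin(pi*h)). A plain lexicographic sketch (illustrative only; a b/w ordering would make the inner loops parallel):

```python
import numpy as np

def sor_poisson(b, h, n_sweeps=300):
    """SOR for the 2D 5-point Poisson problem -lap(p) = b, Dirichlet BCs.

    Only the right-hand side and the solution array are stored; the
    matrix coefficients are implicit in the stencil, which is the low
    memory footprint mentioned above. omega is the classical optimal
    relaxation value for this stencil on a uniform grid of spacing h."""
    omega = 2.0 / (1.0 + np.sin(np.pi * h))
    p = np.zeros_like(b)
    n = b.shape[0]
    for _ in range(n_sweeps):
        for i in range(1, n - 1):       # lexicographic sweep; a red/black
            for j in range(1, n - 1):   # (b/w) ordering parallelizes this
                gs = 0.25 * (p[i + 1, j] + p[i - 1, j] +
                             p[i, j + 1] + p[i, j - 1] + h * h * b[i, j])
                p[i, j] += omega * (gs - p[i, j])
    return p
```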
