local timestep for agglomeration multigrid method
Dear friends,
When I try to use the agglomeration multigrid method to solve the Euler equations on an unstructured grid, I find it difficult to determine the local timestep on the coarse grids. Could you give me some advice? Thanks.
Re: local timestep for agglomeration multigrid method
Hello,
you can do it in exactly the same way as on the finest grid, i.e., for a given control volume dt = vol/(sum over the faces of (|u.n| + a) ds), where the symbols have their usual meanings. All the quantities are available on the coarse grids too: the volume is simply the agglomerated volume, and the face area vector ds is also available. Maybe you need to be more specific in your question as to where you encounter problems.

I programmed the agglomeration multigrid method and used the above definition; there were absolutely no problems with instability in time. For Euler flows, agglomeration multigrid is quite straightforward and works extremely well. I got convergence rates down to 0.5 for first-order schemes and 0.75 for second-order schemes (first order on the coarse grid levels) with an optimised 4-stage Runge-Kutta scheme. For viscous flows, however, you have to be more clever...

Greetings & good luck,
Andreas
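As a concrete illustration, the timestep estimate above can be sketched as follows. The face-tuple layout and all names here are my own assumptions for a 2-D case, not code from the post; on an agglomerated coarse level the same routine applies with the agglomerated volume and the retained outer faces.

```python
def local_timestep(vol, faces, cfl):
    """Local timestep for one control volume from the inviscid
    spectral radius: dt = CFL * vol / sum((|u.n| + a) * dS).

    faces: list of (nx, ny, dS, u, v, a) tuples giving the unit
    face normal, face area, face velocity components, and sound
    speed.  This data layout is illustrative only.
    """
    lam = 0.0
    for nx, ny, dS, u, v, a in faces:
        # Inviscid spectral radius contribution of this face.
        lam += (abs(u * nx + v * ny) + a) * dS
    return cfl * vol / lam
```

For example, a unit square at rest (u = v = 0) with sound speed a = 1 and four unit faces gives lam = 4, so dt = CFL/4.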
Re: local timestep for agglomeration multigrid method
Dear Dr. Haselbacher,

Thank you for your help! In fact, the local timestep I am using is just as you say. The residual stays unchanged (about 1.0e-1) after several iterations, and of course the results are not right. The method I am using is just like that of Mavriplis (cell-centred symmetric finite-volume spatial discretisation with an explicit multistage procedure). On a single fine grid, a residual of 1.0e-6 is reached after 1000 steps. The CFL can reach 10 on the finest grid, 2 on the second and third grids, and 1 on the coarsest grid (for single-grid computation). I think maybe you use the same method as I do. Could you give me more information about it? Thanks in advance.

Re: local timestep for agglomeration multigrid method
Hello,
I am happy to give you more information, but I don't really know what else to tell you. The routines which computed the time step, fluxes, residual, boundary conditions and so on were the same on _all_ grid levels; I simply had a wrapper around these routines which drove the multigrid process. The only things you also need to add are the computation of the forcing function and the restriction and prolongation operators. For inviscid flows, there really is nothing more to it. The numerical method I used was a finite-volume method based on dual control volumes, with Green-Gauss or least-squares reconstruction and Roe's flux-difference splitting. I don't think that the discretisation should make that much of a difference.

There are a few things you can check:

1. Sum the control-volume areas on all grid levels. Are the sums constant?
2. Accumulate the control-volume face-area vectors. Do they add to zero for all volumes on all grid levels?
3. Try running the solver in single-grid mode on the various coarse levels. This is easily done by restricting the solution down to a given coarse level, ignoring the forcing function, and just running the code to convergence.

I also found it helpful to monitor the residual, correction, and forcing function at a given vertex on the fine grid and the coarse grid. You can see quite easily when something's wrong with your multigrid method if you look at these quantities. I realise that this is maybe not as specific as you wanted, but I hope it helps all the same.

Greetings & good luck,
Andreas
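Checks 1 and 2 from the post can be sketched like this for a 2-D agglomeration hierarchy. The level/face data layout is an assumption of mine, not anything from the thread.

```python
def check_agglomeration(levels):
    """Geometric consistency checks for an agglomeration hierarchy.

    levels: list of grid levels, each a dict with
      'volumes': list of control-volume areas, and
      'faces':   list of (owner, neighbour, sx, sy) entries, where
                 (sx, sy) is the face-area vector pointing from
                 owner to neighbour; boundary faces use neighbour = -1.
    This data layout is illustrative only.
    """
    # Check 1: the total area must be identical on every level.
    totals = [sum(lev['volumes']) for lev in levels]
    area_ok = all(abs(t - totals[0]) < 1e-10 * abs(totals[0])
                  for t in totals)

    # Check 2: the face-area vectors of each control volume must
    # sum to zero (closed volumes), on every level.
    closed_ok = True
    for lev in levels:
        acc = [[0.0, 0.0] for _ in lev['volumes']]
        for owner, neigh, sx, sy in lev['faces']:
            acc[owner][0] += sx
            acc[owner][1] += sy
            if neigh >= 0:  # interior face: opposite sign for neighbour
                acc[neigh][0] -= sx
                acc[neigh][1] -= sy
        closed_ok = closed_ok and all(abs(ax) < 1e-10 and abs(ay) < 1e-10
                                      for ax, ay in acc)
    return area_ok, closed_ok
```

If either flag comes back false, the agglomeration itself is broken and no timestep choice will make the coarse-grid solver behave.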