Multigrid on FDM
Hi.
I hope someone can help with my master's thesis. I am using an explicit, 2nd-order, 3D finite difference algorithm with central differences, and we've chosen the Turkel & Vatsa scalar artificial dissipation model. It is applied to rocket-like configurations. The boundary conditions are: far field, exit, centerline averaging, and solid wall. At the wall I am currently extrapolating (zero order) from the adjacent line. I already know that this simple wall condition produces a contact surface (Chakravarthy says so), and this impairs the convergence very much.
So I tried to implement a geometric multigrid process in the code. I start from scratch on the finest grid, sweep down to the coarsest, and then back up, as usual with an FAS method. I used bilinear interpolation for prolongation and tested both direct injection and full-weighted restriction. However, I cannot get good convergence and I don't know why. With the full-weighted restriction the code even blows up! I wonder if it could be the contact surface, the proximity between the "normal" shock wave and the nose of the rocket, or the boundary conditions. Does anyone have any idea what is going wrong? I would appreciate your help very much. Best regards, Biga
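For concreteness, here is a minimal 1D sketch of the transfer operators I mean (the real code is 3D, so these are illustrative only; all names are made up, not from my solver):

```python
import numpy as np

def restrict_injection(fine):
    """Direct injection: each coarse point takes the coincident fine value."""
    return fine[::2].copy()

def restrict_full_weighting(fine):
    """1D full weighting: [1/4, 1/2, 1/4] stencil at interior coarse points,
    injection at the two boundary points."""
    coarse = fine[::2].copy()
    coarse[1:-1] = (0.25 * fine[1:-3:2]
                    + 0.50 * fine[2:-2:2]
                    + 0.25 * fine[3:-1:2])
    return coarse

def prolong_linear(coarse):
    """1D linear interpolation back to the fine grid (the 1D analogue of
    the bilinear prolongation)."""
    fine = np.zeros(2 * len(coarse) - 1)
    fine[::2] = coarse                          # coincident points
    fine[1::2] = 0.5 * (coarse[:-1] + coarse[1:])  # midpoints
    return fine
```

Both restrictions reproduce a linear profile exactly; they differ in how they treat high-frequency content, which is where my injection/full-weighting results diverge.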
Re: Multigrid on FDM
Hello, do you have a converged solution of your problem (see below)? I mean: what happens if you start the multigrid calculation with the converged solution as the initial guess? It shouldn't change anything, and you should get the same solution back. If not, you have a problem: your code is not consistent!
Do you start from the finest grid? What happens if you use only one grid (even though the code will then run much longer)? Do you get a converged solution? Jannis
Re: Multigrid on FDM
Thanx for your reply.
I've already performed both of those tests. The code performs well with a single grid, and I do get a converged solution when I start from the finest grid. I also used a converged solution (actually, not fully converged) as the initial guess, and there was still no convergence: the residual jumped two orders, decreased one order, and then stalled. That test was useful, but I still cannot tell whether it is the multigrid implementation or the multigrid algorithm itself that is wrong, you see? It seems to me that the shock or the boundary conditions could be spoiling the convergence. I'm applying the same artificial dissipation model and the same boundary conditions on all grids. Thanx again. Biga
Re: Multigrid on FDM
Thanx for your help.
I've checked that thread and found that the "symptoms of the disease" are indeed quite similar. However, I'm working with a compressible code. Is that a harder case for the multigrid? I'm applying the same artificial dissipation model and the same boundary conditions on all grids. Biga
Re: Multigrid on FDM
Multigrid in a straightforward form is excellent for strongly elliptic equations such as the Poisson equation. It relies on the coarse-grid modes of the solution having something useful to contribute to the fine grid, and on the smoother rapidly removing local, unresolvable modes.
The equations governing compressible flow are not well suited to straightforward multigrid. There is not much useful information in the coarse-grid modes of a contact surface or a shock wave! A simple residual-averaging-based smoother is not going to rapidly remove the short-wavelength modes across a shock. For the multigrid to work at all (in terms of improving the computational time) it needs to be made more sophisticated: FAS, adapted grid coarsening, an adapted smoother, use of the anisotropy in the coefficients, monitoring/checking/scaling of the residuals and corrections (particularly the latter), adaptive cycling, pre- and post-smoothing, etc. Even then you are not going to see computational time increase linearly with the number of grid points. Multigrid can be made to work to some extent, but the cost in effort is high and the benefits are not large. Unless multigrid is the subject of your MSc, I would stick with the base solver and wait a bit longer for the solutions. If it is the basis of your thesis, I would suggest finding a paper or two on multigrid + shock waves and following the authors' suggestions. To echo the earlier reply and the advice in the other thread: the first test is always to see whether corrections are generated for a converged solution.
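That first test can be demonstrated on a 1D Poisson model problem (purely illustrative, not your compressible solver; all names here are made up): when the fine-grid solution is fully converged, the FAS coarse-grid problem should return the restricted solution itself, so the correction comes out at machine zero.

```python
import numpy as np

def poisson_matrix(n, h):
    """Tridiagonal matrix of -u'' on n interior points, Dirichlet BCs."""
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def laplacian(u, h):
    """-u'' applied to a full grid vector; boundary entries returned as 0."""
    r = np.zeros_like(u)
    r[1:-1] = (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def restrict_fw(v):
    """1D full-weighting restriction (injection at the boundaries)."""
    c = v[::2].copy()
    c[1:-1] = 0.25 * v[1:-3:2] + 0.5 * v[2:-2:2] + 0.25 * v[3:-1:2]
    return c

nf, hf = 17, 1.0 / 16.0
hc = 2.0 * hf
x = np.linspace(0.0, 1.0, nf)
f = np.sin(np.pi * x)                    # some source term

# "Converged" fine-grid solution of A_f u = f
u = np.zeros(nf)
u[1:-1] = np.linalg.solve(poisson_matrix(nf - 2, hf), f[1:-1])

# FAS coarse-grid problem: A_c v = A_c(I u) + I(f - A_f u)
uc = restrict_fw(u)
rhs = laplacian(uc, hc) + restrict_fw(f - laplacian(u, hf))
vc = np.zeros_like(uc)
vc[1:-1] = np.linalg.solve(poisson_matrix(len(uc) - 2, hc), rhs[1:-1])

correction = vc - uc                     # ~machine zero for a converged state
```

If your code produces a nonzero correction here, the FAS implementation (not the physics) is at fault.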
Re: Multigrid on FDM
Thanx, Andy.
Actually, the multigrid is not part of my MSc. However, it seemed an attractive convergence accelerator for my explicit method; I'm already using local CFL-based time stepping. So you believe it would not be worthwhile to put so much effort into a multigrid process for this compressible solver? Do you suggest any other accelerator? I ask because the wall conditions, done by extrapolation of conserved quantities from the interior, are generating a contact discontinuity near the wall, and this slows the convergence considerably. Chakravarthy says that more careful wall conditions could be used, but they are numerically very expensive; thus even commercial codes exhibit this contact surface. However, they converge faster than I can. Is that because my code is explicit while the majority of commercial codes are implicit? Biga
Re: Multigrid on FDM
I do not know the details of your scheme but a few points (envision a certain amount of hand waving):
For compressible flows, simple explicit schemes tend to be limited to a CFL number around 1.0 (assuming you are not hitting the diffusion-number limit by fully resolving boundary layers or something similar). More sophisticated explicit schemes push this up to 5.0 or so. Simple implicit sequential schemes tend to be limited to around 10.0 because the solution variables are uncoupled. To get big time steps you need to couple the variables. The multigrid algorithms in commercial codes tend to couple the variables, and it is probably this, as much as the implicitness, that brings about the acceleration. However, that is not a statement I make with a high degree of confidence. Can anyone confirm the performance of uncoupled compressible-flow multigrid?
A full Newton scheme can easily converge in fewer than 10 iterations for Euler flows (although including transport-equation turbulence models like k-epsilon is almost certain to knacker this). So if your problem is 2D, you have plenty of memory, you have no turbulence model or only a very simple one, and you really want convergence in a few iterations, that may be an option.
I cannot make any firm recommendations without knowing what is currently causing your stability limit. Finding that tends to require time and patience, tracing through the calculation procedure step by step in the vicinity of the most unstable grid point (we are usually talking about days). It is not unusual for small, detailed changes to make substantial improvements: for example, replacing multiplying-then-averaging a nonlinear term with averaging-then-multiplying.
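The CFL numbers above translate into a local time step through the convective spectral radius. A minimal 1D sketch (the function name and signature are made up for illustration):

```python
import numpy as np

def local_time_step(u, c, dx, cfl=1.0):
    """Largest stable explicit step from the convective spectral radius |u| + c.
    u: local velocity, c: local speed of sound, dx: local cell size."""
    return cfl * dx / (np.abs(u) + c)
```

With local time stepping, each point marches at its own maximum stable step, which is itself a common convergence accelerator for steady explicit solvers.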
Re: Multigrid on FDM
Thanx for your advice. I'll try to work around those problems.
Guess I'll implement implicit residual smoothing!
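For reference, a minimal sketch of what I have in mind, Jameson-style implicit residual smoothing along one grid line: solve (1 - eps*delta^2) Rbar = R with the Thomas algorithm. The coefficient value and the unsmoothed-boundary closure are just placeholder choices, not a recommendation:

```python
import numpy as np

def smooth_residual(R, eps=0.6):
    """Implicit residual smoothing along one line: (1 - eps*delta^2) Rbar = R,
    solved by the Thomas algorithm; boundary points are left unsmoothed."""
    n = len(R)
    a = np.full(n, -eps)              # sub-diagonal
    b = np.full(n, 1.0 + 2.0 * eps)   # main diagonal
    c = np.full(n, -eps)              # super-diagonal
    b[0] = b[-1] = 1.0                # identity rows at the boundaries
    c[0] = a[-1] = 0.0

    # Forward elimination
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], R[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (R[i] - a[i] * dp[i - 1]) / m

    # Back substitution
    Rbar = np.zeros(n)
    Rbar[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        Rbar[i] = dp[i] - cp[i] * Rbar[i + 1]
    return Rbar
```

A constant residual passes through unchanged (the row sums are 1), while an isolated spike is spread and damped, which is what raises the stable CFL number.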
Re: Multigrid on FDM
Hi,
Since I'm working with an FDM, I store the Jacobian-scaled properties. In the multigrid, when I transfer the properties, I unscale (is that the right word?) them by the current grid's Jacobian, then transfer, then scale again by the new grid's Jacobian. Do I have to do the same with the residual? Thanx.
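To make sure I'm describing the procedure clearly, here is a sketch of the transfer I mean, assuming the stored quantity is the physical value divided by the Jacobian (the multiply/divide direction depends on your convention; all names are illustrative):

```python
import numpy as np

def transfer_scaled(Q_fine, J_fine, J_coarse, restrict):
    """Move a Jacobian-scaled quantity Q = q / J between grids:
    recover the physical value, transfer it, rescale on the target grid."""
    q_physical = Q_fine * J_fine           # unscale on the source grid
    q_transferred = restrict(q_physical)   # any grid-to-grid operator
    return q_transferred / J_coarse        # rescale on the target grid
```

For example, with direct injection as the transfer operator the rescaled result on the coarse grid still represents the same physical values at the coincident points.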