Error decreases in one norm and increases in another one
Hi everyone!
Has anyone experienced a case where the error decreases in one norm and grows in another during a convergence study? In particular, the error goes down in the L_1 norm while it goes up in both L_2 and L_infty.
Quote:

It is odd that the 1-norm and 2-norm behave differently. If any norm were to behave differently, I would have expected it to be the max-norm.

I think I have found the reason. Briefly, I am simulating the unsteady flow of 2D Taylor-Green vortices, so the spatial and temporal errors are mixed. For a fixed grid resolution I performed a time-step convergence study and observed the error increasing in the L_2 and L_infty norms, while in the L_1 norm it decreased at almost the expected order.
After the grid was refined, the error dropped in both the L_1 and L_2 norms, but not in L_infty. After one more grid refinement, all three norms showed the error dropping. So the spatial error was probably dominating the temporal error.
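One way to see how the norms can move in different directions is that L_1 averages the error over the domain while L_infty only sees its largest value, so a localized error that grows under refinement can raise L_infty while L_1 still falls. A minimal sketch (the error fields and all amplitudes here are hypothetical, purely to illustrate the mechanism, not taken from the actual simulation):

```python
import numpy as np

def error_norms(err, h):
    """Discrete L1, L2 and L_inf norms of a 2-D error field
    on a uniform grid with spacing h (cell area h*h)."""
    L1 = np.sum(np.abs(err)) * h * h
    L2 = np.sqrt(np.sum(err**2) * h * h)
    Linf = np.max(np.abs(err))
    return L1, L2, Linf

# Hypothetical error fields on two refinement levels: a smooth
# component that halves, plus a single-point spike that grows.
n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x)
smooth = np.sin(np.pi * X) * np.sin(np.pi * Y)
spike = np.zeros((n, n))
spike[n // 2, n // 2] = 1.0

err_coarse = 1.0e-2 * smooth + 2.0e-3 * spike
err_fine = 0.5e-2 * smooth + 9.0e-3 * spike  # smooth part halves, spike grows

for name, f in (("coarse", err_coarse), ("fine", err_fine)):
    L1, L2, Linf = error_norms(f, h)
    print(f"{name}: L1={L1:.3e}  L2={L2:.3e}  Linf={Linf:.3e}")
```

Running this, L_1 and L_2 decrease between the two levels while L_infty increases, because the spike carries almost no weight in the averaged norms but sets the max-norm outright.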
Quote:
Conversely, an accuracy study done with dt/h = constant does not produce such a problem.
I know a little about the second approach of simultaneous grid and time-step refinement. Can you suggest any literature on it? Thanks in advance.
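The simultaneous-refinement idea quoted above can be sketched with a simple mixed-error model. The orders here (2nd in space, 1st in time) and the constants are assumptions for illustration only; with dt/h held constant the total error converges cleanly at the lower of the two orders, whereas refining dt alone stalls at the spatial-error floor:

```python
import numpy as np

# Hypothetical mixed error model: E = C_x * h**2 + C_t * dt
# (2nd-order space, 1st-order time -- assumed orders for illustration).
C_x, C_t = 1.0, 1.0

def total_error(h, dt):
    return C_x * h**2 + C_t * dt

# Refining dt alone on a fixed grid: the error stalls at C_x * h**2.
h_fixed = 1.0 / 32
errs_dt = [total_error(h_fixed, dt) for dt in (1e-2, 5e-3, 2.5e-3)]

# Refining h and dt together with dt/h = 0.5: clean convergence
# at the lower (temporal) order.
errs_joint = [total_error(h, 0.5 * h) for h in (1 / 16, 1 / 32, 1 / 64)]
orders = [np.log2(errs_joint[i] / errs_joint[i + 1]) for i in range(2)]
print("observed orders:", orders)
```

The observed orders from the joint refinement approach 1, the temporal order, which is what a dt/h = constant study is expected to recover.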