|
September 9, 2005, 09:24 |
Gauss-reduction with back-substitution
|
#1 |
Guest
Posts: n/a
|
I am working with a CFD-FEM research code for my PhD project & am looking into ways to speed up the matrix-inversion operation.
The current method is Gauss reduction with back-substitution - apparently chosen in order to maintain solution accuracy. I find it to be both slow & memory-intensive. Does anyone have suggestions on how to perform this operation in a faster, more efficient manner - without undue loss of accuracy? Thanks... diaw... |
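For concreteness, the method under discussion can be sketched as follows - a minimal Python illustration of forward elimination followed by back-substitution (dense storage, no pivoting; illustrative only, not the poster's actual Fortran code):

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with back-substitution.
    A is an n x n list of lists, b a list of length n; both are
    modified in place. Assumes nonzero pivots (no pivoting here)."""
    n = len(b)
    # Forward elimination: reduce A to upper-triangular form.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # elimination multiplier
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back-substitution: solve the triangular system bottom-up.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

The triply nested elimination loop is where both the O(n^3) operation count and the memory traffic come from, which is consistent with the slowness reported above.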
|
September 9, 2005, 10:22 |
Re: Gauss-reduction with back-substitution
|
#2 |
Guest
Posts: n/a
|
Now here's a thing I've just discovered...
Using the "-mfpmath=sse" switch in g95 on a Pentium IV CPU speeds up the Gaussian-reduction routine by a factor of around 20! What on earth is this SSE function in the Pentium IV? diaw... |
|
September 9, 2005, 11:11 |
Re: Gauss-reduction with back-substitution
|
#3 |
Guest
Posts: n/a
|
It's a form of vectorization. SSE stands for Streaming SIMD Extensions, where SIMD means Single Instruction, Multiple Data. This basically speeds up code where the data is changing while the actual operations being performed on the data remain the same. |
|
|
September 9, 2005, 15:43 |
Re: Gauss-reduction with back-substitution
|
#4 |
Guest
Posts: n/a
|
Thanks Tom...
>>>It's a form of vectorization. SSE stands for Streaming SIMD Extensions, where SIMD means Single Instruction, Multiple Data. This basically speeds up code where the data is changing while the actual operations being performed on the data remain the same. Is there any way in which we can use/enhance this feature in algorithm construction? diaw... |
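One way to see how the SIMD idea maps onto this algorithm: the row update inside elimination applies the identical multiply-subtract to every element of a row, which is exactly the "same instruction, multiple data" pattern. A rough illustration (in Python/NumPy rather than Fortran - NumPy's whole-array operations dispatch to compiled loops, which is loosely analogous to what -mfpmath=sse enables the compiler to do with Fortran array code; the matrix values here are arbitrary):

```python
import numpy as np

# Small dense system for illustration only:
A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])
b = np.array([7.0, 10.0, 10.0])

# Scalar form of the elimination row update (one element at a time):
#   for j in range(k, n): A[i, j] -= m * A[k, j]
# Whole-row form below: the same multiply-subtract on every element
# at once - the data-parallel pattern SIMD hardware accelerates.
k, i = 0, 1
m = A[i, k] / A[k, k]
A[i, k:] -= m * A[k, k:]
b[i] -= m * b[k]
```

The general lesson for algorithm construction is to keep inner loops simple, branch-free, and operating on contiguous data, so the compiler can recognize them as vectorizable.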
|
September 9, 2005, 17:09 |
Re: Gauss-reduction with back-substitution
|
#5 |
Guest
Posts: n/a
|
Inversion is a pretty standard problem, and I think there are some quite mature algorithms out there. Maybe you can get some ideas on faster algorithms by looking at LAPACK or equivalent libraries.
I was under the impression that Gauss reduction is not only slow but also not very accurate. Analytically you will get an exact solution, but numerical round-off errors will spoil the result. This is especially true with a large number of operations on very large matrices. The low efficiency of the Gauss method comes from the large number of floating-point operations, doesn't it? That could also have a negative effect on numerical accuracy, unless there is some precision control. So please correct me on this if I am mistaken: even for accuracy, the Gauss method is not optimal...? |
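For reference, the usual "precision control" added to Gaussian elimination is partial pivoting: before eliminating column k, swap in the row with the largest entry in that column, which keeps every multiplier at most 1 in magnitude and limits round-off growth. A minimal Python sketch (illustrative, not from the thread):

```python
def eliminate_with_pivoting(A, b):
    """Forward elimination with partial pivoting. A is an n x n list
    of lists, b a list of length n; both are modified in place and
    returned with A in upper-triangular form."""
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: pick the row with the largest |A[i][k]|.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # Eliminate column k below the pivot (multipliers now <= 1).
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    return A, b
```

With pivoting, Gaussian elimination is in practice a numerically reliable direct method; this is essentially what LAPACK's LU-factorization routines do.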
|
September 9, 2005, 18:18 |
Re: Gauss-reduction with back-substitution
|
#6 |
Guest
Posts: n/a
|
With Gauss reduction you can achieve higher speed and faster convergence using multigrid techniques.
-Harish |
|
September 9, 2005, 22:29 |
Re: Gauss-reduction with back-substitution
|
#7 |
Guest
Posts: n/a
|
As I understand things currently - with Gauss reduction & then back-substitution, there are fewer operations than for, say, Gauss-Jordan. This is said to be the reason for its use, since fewer operations lead to less numerical round-off error.
diaw... |
|
September 10, 2005, 11:57 |
Re: Gauss-reduction with back-substitution
|
#8 |
Guest
Posts: n/a
|
Hmmm,
I am not sure I understood what your issue is, but if I have understood it, I would suggest you use Gauss-Seidel with an over-relaxation parameter (if the problem is linear) or an under-relaxation parameter (if it is nonlinear). If you do that, your initial solution will be extremely important for the convergence speed. Let me know if that helped. |
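For reference, a minimal Python sketch of the suggested Gauss-Seidel with a relaxation parameter (successive over-relaxation, SOR); the starting guess x is where the initial solution enters. Illustrative only - convergence is guaranteed for, e.g., symmetric positive-definite matrices with 0 < omega < 2:

```python
def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Gauss-Seidel with relaxation factor omega (SOR).
    omega > 1 over-relaxes (can accelerate linear problems);
    omega < 1 under-relaxes (often needed with nonlinear updates).
    Requires nonzero diagonal entries."""
    n = len(b)
    x = [0.0] * n          # initial guess: strongly affects iteration count
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            # Gauss-Seidel sweep: uses already-updated x values.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
            max_change = max(max_change, abs(x_new - x[i]))
            x[i] = x_new
        if max_change < tol:   # stop when the sweep barely changes x
            break
    return x
```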
|
September 11, 2005, 02:24 |
Re: Gauss-reduction with back-substitution
|
#9 |
Guest
Posts: n/a
|
Hi Sylvan,
Thanks for your comments regarding Gauss-reduction with back-substitution. Sylvain wrote: :>I am not sure I understood what your issue is, but if I have understood it, I would suggest you use Gauss-Seidel with an over-relaxation parameter (if the problem is linear) or an under-relaxation parameter (if it is nonlinear). If you do that, your initial solution will be extremely important for the convergence speed. ----------- My lecturer argues that the Gauss-reduction with back-substitution method introduces minimal (if any) computational error when compared with other reduction techniques. Judging from the large time required to 'invert' large systems, I will certainly experiment with alternative methods - Gauss-Seidel is on the list. There is scope to use under-relaxation on the full outer loop in converging the Newton-Raphson scheme, but I guess you were referring to it in the context of an iterative matrix inversion. Does anyone know what amount of difference is typically acceptable between 'exact matrix inversion' methods & 'iterative inversion' methods? Thanks... diaw... |
|
September 12, 2005, 04:45 |
Re: Gauss-reduction with back-substitution
|
#10 |
Guest
Posts: n/a
|
I would just suggest writing clean, well-structured code and letting the compiler handle the optimization.
That said, if the g95 compiler gives vectorization information (I've never used g95, but the Intel compiler does this to some extent) then you can use this to help vectorize your code; e.g. look for statements which prevent loops from vectorizing and see if you can find ways of making them vectorize. Whether your code actually runs any faster after this is another issue altogether. |
|
September 12, 2005, 05:50 |
Re: Gauss-reduction with back-substitution
|
#11 |
Guest
Posts: n/a
|
Hi Tom,
Thanks for the advice... I'll set g95 to provide verbose output & perhaps look into a few other compilers. I have recently downloaded the Intel compiler for Linux - I'll give that a whirl... Would something like 'lint', or a code profiler, be able to provide some additional information? Cheers, diaw... ---------------------------- Tom wrote: :>>I would just suggest writing clean, well-structured code and letting the compiler handle the optimization. That said, if the g95 compiler gives vectorization information (I've never used g95, but the Intel compiler does this to some extent) then you can use this to help vectorize your code; e.g. look for statements which prevent loops from vectorizing and see if you can find ways of making them vectorize. Whether your code actually runs any faster after this is another issue altogether. |
|
September 12, 2005, 18:13 |
Re: Gauss-reduction with back-substitution
|
#12 |
Guest
Posts: n/a
|
>Does anyone know what amount of difference is typically acceptable between 'exact matrix inversion' methods & 'iterative inversion' methods?
That's difficult to answer in general... but it's an important point. It depends on a number of things. If the inverted matrix represents your final result, then you should be clear on the required accuracy. However, if you are using some kind of iterative method (maybe for a nonlinear problem) and have to perform many matrix inversions in the process, the accuracy of the inverse will have an effect on the stability and convergence of your method. If the problem is nonlinear (which means you need iterations anyway) it is usually not wise to use "exact inversion". It's simply not necessary and too expensive (unless your matrices are very small). An iterative solver will give you more control over the amount of work spent on each matrix inversion, and you will usually end up with a far more efficient method. |
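The "control over the amount of work" point can be made concrete: an iterative solver exposes a tolerance, so an outer nonlinear loop can request only as much inner accuracy as it actually needs. A hedged Python sketch using Jacobi iteration (chosen only for brevity; the tolerance values are arbitrary):

```python
def jacobi(A, b, tol):
    """Jacobi iteration on A x = b; returns (x, iterations taken).
    Loosening tol directly reduces the work per 'inversion'."""
    n = len(b)
    x = [0.0] * n
    for it in range(1, 100_000):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                 / A[i][i] for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new, it
        x = x_new
    return x, it

A = [[4.0, 1.0], [1.0, 3.0]]
b = [5.0, 4.0]
_, cheap = jacobi(A, b, tol=1e-2)    # loose inner solve: few sweeps
_, costly = jacobi(A, b, tol=1e-12)  # near-exact inner solve: more sweeps
```

Inside a Newton-Raphson outer loop, the loose setting is often enough, since the outer iteration corrects the remaining inner error.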
|
September 13, 2005, 09:01 |
Re: Gauss-reduction with back-substitution
|
#13 |
Guest
Posts: n/a
|
Hi Mani,
Thanks for your input... very helpful indeed. Why have an exact inverse when you are going to iterate, rebuild the system matrices (nonlinear CFD) using the deviations, re-invert, iterate, etc.? The iteration scheme can then settle the issue in the long run. Thanks - that is wise advice. I have noted with this particular scheme that it really only needs a few iterations (say at most 15) to settle many viscous, incompressible fluid problems - some in as few as 5 steps. Does anyone have any advice to offer on fast, sparse matrix inversion schemes? Thanks, diaw... -------------------------------- Mani wrote: :>>That's difficult to answer in general... but it's an important point. It depends on a number of things. If the inverted matrix represents your final result, then you should be clear on the required accuracy. However, if you are using some kind of iterative method (maybe for a nonlinear problem) and have to perform many matrix inversions in the process, the accuracy of the inverse will have an effect on the stability and convergence of your method. If the problem is nonlinear (which means you need iterations anyway) it is usually not wise to use "exact inversion". It's simply not necessary and too expensive (unless your matrices are very small). An iterative solver will give you more control over the amount of work spent on each matrix inversion, and you will usually end up with a far more efficient method. |
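On fast sparse schemes, the usual first step is a compressed storage format so the solver touches only the nonzeros. A minimal Python sketch of a CSR (compressed sparse row) matrix-vector product - the kernel that sparse iterative solvers such as Gauss-Seidel or conjugate gradient are built on (illustrative, not from the thread):

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """Compute y = A @ x for a matrix A stored in CSR form:
    values  - nonzero entries, listed row by row
    col_idx - column index of each nonzero
    row_ptr - start index of each row in values (length n + 1)"""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        # Only the nonzeros of row i are visited.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y
```

For an FEM matrix with a handful of nonzeros per row, this turns each sweep from O(n^2) into O(nonzeros), which is where most of the speed of sparse iterative methods comes from.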
|