# Tridiagonal matrix algorithm - TDMA (Thomas algorithm)


## Introduction

The tridiagonal matrix algorithm (TDMA), also known as the Thomas algorithm, is a simplified form of Gaussian elimination that can be used to solve tridiagonal systems of equations. A tridiagonal system may be written as

$a_i x_{i - 1} + b_i x_i + c_i x_{i + 1} = d_i,$

where $a_1 = 0$ and $c_n = 0$. In matrix form, this system is written as

$\left[ \begin{matrix} {b_1} & {c_1} & { } & { } & { 0 } \\ {a_2} & {b_2} & {c_2} & { } & { } \\ { } & {a_3} & {b_3} & \cdot & { } \\ { } & { } & \cdot & \cdot & {c_{n-1}}\\ { 0 } & { } & { } & {a_n} & {b_n}\\ \end{matrix} \right] \left[ \begin{matrix} {x_1 } \\ {x_2 } \\ \cdot \\ \cdot \\ {x_n } \\ \end{matrix} \right] = \left[ \begin{matrix} {d_1 } \\ {d_2 } \\ \cdot \\ \cdot \\ {d_n } \\ \end{matrix} \right].$

For such systems, the solution can be obtained in $O(n)$ operations instead of the $O(n^3)$ required by general Gaussian elimination. A first sweep eliminates the $a_i$'s, and an (abbreviated) backward substitution then produces the solution. Such matrices commonly arise from the discretization of 1D problems (e.g. the 1D Poisson problem).
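As a concrete sketch of how such a system arises (the function name and conventions here are illustrative assumptions, not part of the original article), a second-order central-difference discretization of the 1D Poisson problem $u''(x) = f(x)$ on $(0,1)$ with $u(0) = u(1) = 0$ gives $a_i = c_i = 1$, $b_i = -2$, and $d_i = h^2 f(x_i)$:

```python
def poisson_1d_system(f, n):
    """Tridiagonal coefficients for u''(x) = f(x) on (0, 1), u(0) = u(1) = 0.

    Returns (a, b, c, d) for the n interior grid points; a[0] and c[-1]
    are unused, matching the convention a_1 = c_n = 0.
    """
    h = 1.0 / (n + 1)                               # uniform grid spacing
    a = [1.0] * n                                   # sub-diagonal
    b = [-2.0] * n                                  # main diagonal
    c = [1.0] * n                                   # super-diagonal
    d = [h * h * f((i + 1) * h) for i in range(n)]  # scaled right-hand side
    return a, b, c, d
```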

## Algorithm

The following algorithm performs the TDMA, overwriting the original arrays. In some situations this is not desirable, so some prefer to copy the original arrays beforehand.

Forward elimination phase

for k = 2 to n do
$m = {{a_k } \over {b_{k - 1} }}$
$b_k = b_k - mc_{k - 1}$
$d_k = d_k - md_{k - 1}$
end loop (k)

Backward substitution phase

$x_n = {{d_n } \over {b_n }}$
for k = n-1 down to 1 do
$x_k = {{d_k - c_k x_{k + 1} } \over {b_k }}$
end loop (k)
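The pseudocode above can be sketched in Python as follows (a minimal illustration; the function name `thomas` and the array conventions are assumptions, and this version copies the arrays rather than overwriting them, as the remark above suggests):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n) by the Thomas algorithm.

    a, b, c are the sub-, main-, and super-diagonals (a[0] and c[-1]
    are unused, matching a_1 = c_n = 0); d is the right-hand side.
    """
    n = len(d)
    b, d = b[:], d[:]          # copy so the caller's arrays stay intact
    # Forward elimination: remove the sub-diagonal entries a_k.
    for k in range(1, n):
        m = a[k] / b[k - 1]
        b[k] -= m * c[k - 1]
        d[k] -= m * d[k - 1]
    # Abbreviated backward substitution.
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for k in range(n - 2, -1, -1):
        x[k] = (d[k] - c[k] * x[k + 1]) / b[k]
    return x
```

Note that, like the pseudocode, this sketch performs no pivoting, so it is only safe for systems where the pivots cannot vanish (e.g. diagonally dominant matrices, which the common 1D discretizations produce).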

## Variants

In some situations, particularly those involving periodic boundary conditions, a slightly perturbed form of the tridiagonal system may need to be solved:

$a_1 x_{n} + b_1 x_1 + c_1 x_2 = d_1,$
$a_i x_{i - 1} + b_i x_i + c_i x_{i + 1} = d_i,\, i = 2,\ldots,n-1$
$a_n x_{n-1} + b_n x_n + c_n x_1 = d_n.$

In matrix form, this is

$\left[ \begin{matrix} {b_1} & {c_1} & { } & { } & {a_1} \\ {a_2} & {b_2} & {c_2} & { } & { } \\ { } & {a_3} & {b_3} & \cdot & { } \\ { } & { } & \cdot & \cdot & {c_{n-1}}\\ {c_n} & { } & { } & {a_n} & {b_n}\\ \end{matrix} \right] \left[ \begin{matrix} {x_1 } \\ {x_2 } \\ \cdot \\ \cdot \\ {x_n } \\ \end{matrix} \right] = \left[ \begin{matrix} {d_1 } \\ {d_2 } \\ \cdot \\ \cdot \\ {d_n } \\ \end{matrix} \right].$

In this case, we can make use of the Sherman-Morrison formula to avoid the additional operations of Gaussian elimination and still use the Thomas algorithm. We will now solve

$(A + uv^T)x = d$

where

$u^T = [\gamma\ 0\ 0\ \cdots\ 0\ c_n],\ v^T = [1\ 0\ 0\ \cdots\ 0\ a_1/\gamma],$

where $\gamma = -b_1$ is a convenient choice (any nonzero $\gamma$ works, but this one keeps the first pivot of $A$ away from zero). Here $A$ is a slightly different tridiagonal matrix from the one above: the corner entries $a_1$ and $c_n$ are removed, $b_1$ is replaced by $b_1 - \gamma$, and $b_n$ by $b_n - c_n a_1/\gamma$. The solution to the perturbed system is then obtained by solving the two tridiagonal systems

$Ay=d,\ Aq=u$

with the Thomas algorithm and computing $x$ as

$x = y - \frac{v^T y}{1 + v^T q}\, q.$
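A hedged sketch of this periodic solver in Python (the function names are illustrative assumptions; a plain Thomas routine is repeated here so the snippet is self-contained, and the common choice $\gamma = -b_1$ is used):

```python
def thomas(a, b, c, d):
    """Plain TDMA; copies its inputs instead of overwriting them."""
    n = len(d)
    b, d = b[:], d[:]
    for k in range(1, n):
        m = a[k] / b[k - 1]
        b[k] -= m * c[k - 1]
        d[k] -= m * d[k - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for k in range(n - 2, -1, -1):
        x[k] = (d[k] - c[k] * x[k + 1]) / b[k]
    return x

def periodic_thomas(a, b, c, d):
    """Cyclic tridiagonal solve via the Sherman-Morrison formula."""
    n = len(d)
    gamma = -b[0]              # any nonzero gamma works; -b_1 keeps pivots safe
    bb = b[:]                  # perturbed main diagonal of A
    bb[0] = b[0] - gamma
    bb[-1] = b[-1] - c[-1] * a[0] / gamma
    u = [0.0] * n
    u[0], u[-1] = gamma, c[-1]
    y = thomas(a, bb, c, d)    # A y = d
    q = thomas(a, bb, c, u)    # A q = u
    # v^T z = z_1 + (a_1 / gamma) z_n
    vy = y[0] + a[0] / gamma * y[-1]
    vq = q[0] + a[0] / gamma * q[-1]
    return [y[k] - vy / (1.0 + vq) * q[k] for k in range(n)]
```

The cost is two tridiagonal sweeps plus $O(n)$ extra work, so the overall operation count remains $O(n)$.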

In other situations, the system of equations may be block tridiagonal, with smaller submatrices arranged as the individual elements in the matrix system above. Simplified forms of Gaussian elimination have been developed for these situations.
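As an illustrative sketch of such a block elimination (the 2×2 block size and all names here are assumptions, not from the article), the scalar divisions of the Thomas algorithm become small matrix inversions:

```python
def inv2(M):
    """Inverse of a 2x2 matrix given as nested lists."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

def mul2(M, N):
    """2x2 matrix product."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mulv2(M, r):
    """2x2 matrix times a length-2 vector."""
    return [sum(M[i][k] * r[k] for k in range(2)) for i in range(2)]

def block_thomas(A, B, C, D):
    """Block-TDMA for 2x2 blocks.

    A, B, C hold the sub-, main-, and super-diagonal blocks (A[0] and
    C[-1] are unused); D is the right-hand side in length-2 pieces.
    """
    n = len(D)
    B = [[row[:] for row in Bk] for Bk in B]   # copy before modifying
    D = [Dk[:] for Dk in D]
    for k in range(1, n):                      # forward elimination
        m = mul2(A[k], inv2(B[k - 1]))
        S = mul2(m, C[k - 1])
        B[k] = [[B[k][i][j] - S[i][j] for j in range(2)] for i in range(2)]
        t = mulv2(m, D[k - 1])
        D[k] = [D[k][i] - t[i] for i in range(2)]
    X = [None] * n                             # backward substitution
    X[-1] = mulv2(inv2(B[-1]), D[-1])
    for k in range(n - 2, -1, -1):
        r = mulv2(C[k], X[k + 1])
        X[k] = mulv2(inv2(B[k]), [D[k][i] - r[i] for i in range(2)])
    return X
```

In practice the explicit inverses would be replaced by factorizations of the diagonal blocks, but the sweep structure is the same as in the scalar algorithm.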

