Is there any C++ library that can be used for matrix calculation?
I am looking for a freeware C/C++ library that can be used for matrix calculations.
More precisely, to calculate the inverse of a matrix whose size could be 40,000 x 40,000 entries, for example.
Do you have any information about this?
Thanks in advance.
I'm guessing that you are referring to a sparse matrix? If the matrix is dense, you are looking at 40,000 * 40,000 * 8 bytes, or roughly 13 GB, of memory. Most machines today have around 2 GB, so just storing the matrix, if dense, would take at minimum 7 machines in parallel.
I'd have to know more about the matrix structure, but if you want an off-the-shelf tool for parallel matrix operations, I would suggest PETSc. I have this capability in my own CFD solver, but I can tell you that you do not want to attempt to code it yourself unless you have significant experience in MPI and parallel computing. Depending on how you have the matrix stored, you can just swap the indexes for a transpose, in essence virtually transposing the matrix for whatever operation you are attempting afterwards.
If you let me know why you need to do this I might be able to suggest further solutions.
Thanks for your quick response.
Yes, you are right, I am thinking of a sparse matrix.
As you correctly state, using a dense matrix is out of the scope of a normal personal computer.
I have coded my own implementation of a sparse matrix solver, but it is a bit slow.
I have taken a look at several books and references that explain how to implement this kind of solver, but they all start from the point where the matrix has already been initialized.
The problem arises when you have to build the matrix from scratch, adding, subtracting and so on the elements of the matrix, before calculating the inverse.
It is a bit of a mess.
I am using this implementation of the matrix solver in my own CFD solver based on the finite element method. It works, but quite slowly.
You commented that you are using PETSc. Can it be used with Visual C++?
I don't mind having to code it with MPI, if it is easy to implement.
Would you be so kind as to provide me with further details?
I would appreciate it.
Thank you very much for your help and support.
You mentioned you are calculating an inverse... this is not generally a good idea for the reasons you stated above. It is slow and expensive. Iterative methods (Gauss-Seidel or GMRES) are generally the norm in this field for this reason.
If you have your own implementation of a matrix solver I would suggest using compressed row storage to eliminate storing the zero blocks and increase your cache hit/miss ratio. It will be significantly faster. See my blog Ramblings of an Engineer for a description of this performance increase.
I personally do not use PETSc but my own personal code. It hides all of the MPI calls from the user, making it very user friendly for large linear systems. And yes, PETSc requires a C++ compiler, and you also have to install MPI on the systems you plan to use in parallel. There are several examples on their website. If you have a multi-core workstation you can just install it locally and it will utilize all the cores in your machine.
Try PyAMG; it is good, especially if you are using FEM.
Thanks for the information.
I have just taken a look at PyAMG. I searched on Google and it appears to be Python.
I was wondering how I can integrate this library into my own C++ code. Is this easy to carry out?
I have taken a look at your blog.
Very good job, and very instructive.
You have stated that solving a sparse matrix directly is slower than using iterative methods.
I am using Numerical Recipes in C++, by William H. Press et al., as a reference for coding the inverse of the matrix.
In the sparse matrix chapter (2.7), it uses a compressed row storage, meaning that zero entries are not stored in memory. Is this what you are referring to when talking about CRS?
Additionally, the algorithm uses a biconjugate gradient method to find the inverse of the matrix. But you stated that this strategy is not good, and that it is better to use the Gauss-Seidel method. How much faster is the Gauss-Seidel method compared with the biconjugate gradient method?
As far as I know, the biconjugate gradient method is an iterative method as well. But maybe I am wrong.
Any comment would be appreciated.
Thank you very much.
Thank you. I'm glad you found it useful. You may also want to watch the site CFDengineer.com as I am currently in the process of putting up all of my class notes and current development work there. It just takes some time to typeset everything for web consumption. Also, if you ask in the forums there I will likely be faster to respond.
Yes, CRS stands for compressed row storage. I don't have my copy of Numerical Recipes in front of me, so I cannot say whether the method they are using is more efficient. However, if you just want the solution of a system A * x = b, that is, the solution x = A^-1 * b, then inverting the matrix is an unnecessary step.
In general, if you are solving the system for more than one right-hand side b with the same A for each, the inverse is useful. If not, then the system should just be solved in place.
Biconjugate gradient is a Krylov subspace method (as is GMRES) and though I have not used it personally, I do use GMRES quite often. It is an iterative method indeed. However, you should be able to use the method directly to find x instead of finding the inverse first. This is obviously much less computationally intensive.
If the system you are solving is very stiff (poorly conditioned) I would suggest staying with a Krylov method. However, if the system is diagonally dominant with reasonable conditioning, then Gauss-Seidel iteration is cheaper both in terms of computation (most of the time) and memory (all of the time).
The front end of PyAMG is Python, but the AMG part is all C++. It supports the CSR format for matrices, so as long as you are able to provide the matrix in CSR format you should be able to use it.
I have never used it myself, but I read their code when I wrote my AMG solver, which is now part of inavier.
The performance of smoothed aggregation is very good, so if you use it you will definitely be happy.
There are other options too as others have suggested, you definitely can try them too.
Edited to add:
When I learned AMG, I wrote a small C++ package by following PyAMG. I actually sent it to Prof. Olsen (of PyAMG) so that he could put it online and people like you could use it in their C++ code. I can put it online myself for you (and others) to download and use in your projects. There are a few drawbacks in that library:
1. Construction of the AMG hierarchy is much slower than I would like. (Very recently, when I wrote an MPI version of it, this part improved a lot, but I cannot share that.)
2. The strength-of-connection part is disabled (though if I have time, I might add it back). This means it might be a bit slower to converge (but still faster than Krylov-type solvers such as BiCGStab etc.).
I will see when I can find that old code and upload it.
Thank you again for the information.
I would like to say that the system I am trying to solve is not diagonally dominant.
The matrices I need to invert are stiffness matrices and the like, obtained in the discretization process of the FEM.
They are symmetric, but they may well be non-banded.
I will try to investigate the GMRES method a bit more.
It sounds interesting.
By the way, do you know an efficient manner to compute the determinant of a sparse matrix?
I will take another look at the PyAMG documentation and at AMG in general. It is a lot of information to be aware of.
This is a never-ending story. :)
I will come back when I have got a grasp of all the information you have mentioned.
I'm not aware of a quick way to compute the determinant of a large sparse matrix. That's an expensive operation even for small matrices, and unusual in FEM or FV methods. Before you go down that road, ask yourself if you really need to; there may be other ways of gleaning similar information, depending on why you want to compute it.
I need the determinant of the matrix to calculate the Jacobian of the transformation (the shape-function transformation) needed in the integrals that build the various stiffness matrices used in FEM calculations.
Thinking about it now, since these matrices are 3x3 or 2x2, maybe I can use the direct formula for the calculation.