What are the data transfer patterns for parallelization of implicit solvers?
August 29, 2022, 19:41
#1
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8
Hello everyone,
Trying to understand parallelization of implicit solvers. Specifically, I'm interested to learn, which tasks we're parallelizing, and which data are transmitted between each processor in every iteration. In explicit solvers, we mainly need to transfer ghost cell data between each partition, I'm not sure what happens in implicit codes. Here's what I currently think might happen: In 2D, I can store the whole mesh in a single node, since memory requirement is less in 2D. So to keep my code simple, I'm thinking of only MPI parallelizing small sections of the code. For example, the Krylov solver (GMRES) will need to be parallelized, even though I currently don't know how. That is, we form the sparse system on MPI's root processor (rank=0), then distribute the iterative solver work to our cluster. I don't know if we can build the matrix in parallel. We probably can, but it will be complicated, so I will try to avoid it for now. Parallelizing the flux calculations would also be very good, but that might introduce lots of data transfer between processors. And, I don't know what's the correct and efficient way to do this. As I see currently, only the linear solver, seems easily parallelizeable to me (as there are many papers on the topic), and the flux calculation, seems inefficient to parallelize (maybe i'm prematurely optimizing). PS: I will probably need Async MPI data transfer, and some more advanced MPI data transfer/access routines. Which ones do you think, will be important? Thanks! |