
What are the data transfer patterns for parallelization of implicit solvers?

August 29, 2022, 19:41   #1
Senior Member

Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Hello everyone,

I'm trying to understand the parallelization of implicit solvers.

Specifically, I'm interested in learning which tasks we're parallelizing and which data are transmitted between processors in every iteration.

In explicit solvers, we mainly need to transfer ghost cell data between partitions, but I'm not sure what happens in implicit codes.
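
For reference, this is roughly the ghost-cell exchange pattern I have in mind for the explicit case (all names and the data layout are made up):

Code:

#include <mpi.h>
#include <cstddef>
#include <vector>

// Minimal ghost-cell (halo) exchange sketch. Hypothetical layout:
//   nbr[n]      = rank of the n-th neighbouring partition
//   send_idx[n] = local indices of the cells that neighbour needs
//   recv_off[n] = offset in ghost_val where that neighbour's values land
// Assumes the counts are symmetric (one layer of ghost cells on both sides).
void exchange_ghost_cells(const std::vector<int>&              nbr,
                          const std::vector<std::vector<int>>& send_idx,
                          const std::vector<int>&              recv_off,
                          const std::vector<double>&           cell_val,
                          std::vector<double>&                 ghost_val,
                          MPI_Comm                             comm)
{
    for (std::size_t n = 0; n < nbr.size(); ++n)
    {
        // Pack the values this neighbour needs.
        std::vector<double> send_buf(send_idx[n].size());
        for (std::size_t i = 0; i < send_buf.size(); ++i)
            send_buf[i] = cell_val[send_idx[n][i]];

        // Blocking exchange: send our boundary-cell values,
        // receive the neighbour's values into our ghost array.
        MPI_Sendrecv(send_buf.data(), (int)send_buf.size(), MPI_DOUBLE, nbr[n], 0,
                     &ghost_val[recv_off[n]], (int)send_buf.size(), MPI_DOUBLE, nbr[n], 0,
                     comm, MPI_STATUS_IGNORE);
    }
}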

Here's what I currently think might happen:

In 2D, I can store the whole mesh on a single node, since the memory requirement is low. So, to keep my code simple, I'm thinking of MPI-parallelizing only small sections of the code.

For example, the Krylov solver (GMRES) will need to be parallelized, even though I currently don't know how. That is, we form the sparse system on MPI's root processor (rank = 0), then distribute the iterative solver work to the cluster. I don't know if we can build the matrix in parallel. We probably can, but it would be complicated, so I will try to avoid it for now.
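
If the vectors do end up distributed across ranks, I'm guessing the reductions inside GMRES (dot products and norms) would look something like this minimal sketch, where each rank holds only its locally owned entries:

Code:

#include <mpi.h>
#include <cmath>
#include <cstddef>
#include <vector>

// Distributed dot product / norm: each rank holds only the entries of x and y
// that belong to its own cells; partial sums are combined with MPI_Allreduce.
double dot(const std::vector<double>& x, const std::vector<double>& y, MPI_Comm comm)
{
    double local = 0.0, global = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
        local += x[i] * y[i];
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, comm);
    return global;
}

double norm(const std::vector<double>& x, MPI_Comm comm)
{
    return std::sqrt(dot(x, x, comm));
}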

Parallelizing the flux calculations would also be very good, but that might introduce a lot of data transfer between processors, and I don't know the correct and efficient way to do it.
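
If the cells are distributed like in an explicit solver, my guess is that the flux loop itself needs no MPI calls once the ghost values are up to date, since ghost cells just look like extra cells. Something like this (hypothetical face-based layout with a placeholder flux):

Code:

#include <cstddef>
#include <vector>

// Hypothetical face-based flux loop: once ghost values are current, every face
// can be processed locally, with no communication inside the loop.
// Convention assumed here: the left cell of a face is always locally owned;
// the right cell may be a ghost (index >= number of owned cells).
void compute_fluxes(const std::vector<int>&    face_left,   // owned left cell of each face
                    const std::vector<int>&    face_right,  // right cell, possibly a ghost
                    const std::vector<double>& u,           // owned + ghost cell states
                    std::vector<double>&       residual)    // sized to owned cells only
{
    for (std::size_t f = 0; f < face_left.size(); ++f)
    {
        const int L = face_left[f];
        const int R = face_right[f];
        const double flux = 0.5 * (u[L] + u[R]);   // placeholder numerical flux

        residual[L] -= flux;
        if (R < (int)residual.size())              // only accumulate into owned cells
            residual[R] += flux;
    }
}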

As I currently see it, only the linear solver seems easily parallelizable (there are many papers on the topic), while the flux calculation seems inefficient to parallelize (maybe I'm prematurely optimizing).

PS: I will probably need asynchronous MPI data transfers and some more advanced MPI data transfer/access routines. Which ones do you think will be important?
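
My current guess is that MPI_Isend / MPI_Irecv / MPI_Waitall (to overlap the ghost exchange with interior work) and MPI_Allreduce (for the Krylov reductions) will matter the most. A rough sketch of the overlap pattern, with one contiguous buffer per neighbour and all names made up; please correct me if this is wrong:

Code:

#include <mpi.h>
#include <cstddef>
#include <vector>

// Non-blocking ghost exchange overlapped with interior work (sketch).
// Assumes one contiguous, pre-packed send/recv buffer per neighbour rank.
void exchange_and_compute(const std::vector<int>&                 nbr,
                          const std::vector<std::vector<double>>& send_buf,
                          std::vector<std::vector<double>>&       recv_buf,
                          MPI_Comm                                comm)
{
    std::vector<MPI_Request> req(2 * nbr.size());

    // 1. Post all receives and sends up front (returns immediately).
    for (std::size_t n = 0; n < nbr.size(); ++n)
    {
        MPI_Irecv(recv_buf[n].data(), (int)recv_buf[n].size(), MPI_DOUBLE,
                  nbr[n], 0, comm, &req[2 * n]);
        MPI_Isend(send_buf[n].data(), (int)send_buf[n].size(), MPI_DOUBLE,
                  nbr[n], 0, comm, &req[2 * n + 1]);
    }

    // 2. Do work that needs no ghost data while messages are in flight,
    //    e.g. fluxes on interior faces.
    // compute_interior_fluxes();   // placeholder for the interior flux loop

    // 3. Wait for all messages, then finish the faces that touch ghost cells.
    MPI_Waitall((int)req.size(), req.data(), MPI_STATUSES_IGNORE);
    // compute_interface_fluxes();  // placeholder for partition-boundary faces
}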

Thanks!