www.cfd-online.com > Forums > Main CFD Forum

Fortran Stream Access and MPI_IO

September 21, 2017, 08:49   #1
Paolo Lampitella (sbaffini), Senior Member
Join Date: Mar 2009; Location: Italy; Posts: 2,152
Dear all,

I will soon be rearranging most of the output of my Fortran-based CFD code and, as a consequence, I also want to evaluate every possible enhancement. Two of them, especially, have been on my list for a while:

- using stream access for all input/output, in order to eliminate the space wasted by record markers and simplify indexing within the files;
- MPI_IO, to avoid most of my I/O bottlenecks and complexities in parallel.

To test the ideas, I produced the simple program you can find attached (rename it to .f90), which contains most of the ingredients I will need.

What I learned from running the program under a few OS/compiler combinations (Intel on Ubuntu and Fedora, gfortran on Ubuntu and Windows) is that, in practice, MPI_IO in the native representation and Fortran 2003 stream access are indeed compatible (at least for integers).
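For readers who want to reproduce this without the attachment, the core of such a test can be sketched as below. This is a minimal, self-contained sketch, not the attached mpiio.f itself; the file name and values are made up, and error checking is omitted:

```fortran
! One rank writes integers with Fortran stream access, then reads the
! same bytes back through MPI_IO with the 'native' data representation.
! If the two mechanisms agree, the buffers match.
program stream_vs_mpiio
  use mpi
  implicit none
  integer :: ierr, fh
  integer :: wbuf(4), rbuf(4)
  integer(kind=MPI_OFFSET_KIND) :: disp

  call MPI_Init(ierr)
  wbuf = [10, 20, 30, 40]

  ! Fortran 2003 stream-access write: raw bytes, no record markers
  open(unit=10, file='test.bin', access='stream', form='unformatted', &
       status='replace')
  write(10) wbuf
  close(10)

  ! Read the same bytes back with MPI_IO; 'native' means no conversion
  call MPI_File_open(MPI_COMM_SELF, 'test.bin', MPI_MODE_RDONLY, &
                     MPI_INFO_NULL, fh, ierr)
  disp = 0
  call MPI_File_set_view(fh, disp, MPI_INTEGER, MPI_INTEGER, 'native', &
                         MPI_INFO_NULL, ierr)
  call MPI_File_read_at(fh, disp, rbuf, 4, MPI_INTEGER, &
                        MPI_STATUS_IGNORE, ierr)
  call MPI_File_close(fh, ierr)

  if (all(rbuf == wbuf)) print *, 'stream write / MPI_IO read: match'
  call MPI_Finalize(ierr)
end program stream_vs_mpiio
```

Note that 'native' makes no portability promises across architectures; the match above only says the two APIs agree on the same machine.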

I found this to be implicitly assumed in several Stack Overflow threads as well.

Now, my question is: do you know of any official source where this is stated, implicitly or explicitly?

It makes complete sense, but it would be unpleasant to discover only later that this is, in fact, not the case.

Thank you all
Attached Files
File Type: f mpiio.f (6.6 KB, 9 views)

September 21, 2017, 09:33   #2
Praveen. C (praveen), Super Moderator
Join Date: Mar 2009; Location: Bangalore; Posts: 342
Have you tried HDF5? Wouldn't that be a better option for saving solution files? We have implemented it in one of our unstructured FV codes. Each partition writes its own solution file; an XDMF file then describes how the files are to be interpreted, and it can be opened in VisIt. It is also possible for all partitions to write the solution to one file in parallel, but we have not yet figured out how to do this. The HDF5 API provides a way to do it, I think.
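For readers unfamiliar with the HDF5 Fortran API, the per-partition write described above might look roughly like the following. This is a hedged sketch, not Praveen's actual code: the file and dataset names are invented, the solution array is a placeholder, and error checks are omitted:

```fortran
! Each MPI rank writes its own solution file via the serial HDF5 API.
program hdf5_per_rank
  use mpi
  use hdf5
  implicit none
  integer :: ierr, rank
  integer(HID_T) :: file_id, space_id, dset_id
  integer(HSIZE_T) :: dims(1)
  real(8) :: sol(100)
  character(len=32) :: fname

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  sol = real(rank, 8)                        ! placeholder solution data
  write(fname, '(a,i0,a)') 'sol_', rank, '.h5'

  call h5open_f(ierr)
  call h5fcreate_f(trim(fname), H5F_ACC_TRUNC_F, file_id, ierr)
  dims = size(sol)
  call h5screate_simple_f(1, dims, space_id, ierr)
  call h5dcreate_f(file_id, 'solution', H5T_NATIVE_DOUBLE, space_id, &
                   dset_id, ierr)
  call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, sol, dims, ierr)
  call h5dclose_f(dset_id, ierr)
  call h5sclose_f(space_id, ierr)
  call h5fclose_f(file_id, ierr)
  call h5close_f(ierr)
  call MPI_Finalize(ierr)
end program hdf5_per_rank
```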

September 21, 2017, 11:23   #3
Paolo Lampitella (sbaffini), Senior Member
Dear Praveen,

thanks for the suggestion. I actually thought about this in the past, but came to the following conclusions:

- HDF5 does not perform as well as pure MPI_IO (according to some tests I found online at the time).

- HDF5 is probably overkill for what I want to achieve (which is well represented by the few lines in the attached code). This might also explain some of the results from the previous point.

- This is entirely a personal opinion, but I tend to avoid external libraries whenever I can, and I only introduce them if I also have an internal backup solution (however poorly it might perform). The only exception at the moment is, of course, MPI.

Given the previous points, I have not considered HDF5 an actual option for me. Regarding the last point especially, if I used HDF5 I would end up with two I/O implementations, which is something I want to avoid. If stream access and MPI_IO are actually compatible, I end up with only one.

Still, do you have any comparison between pure MPI_IO and HDF5? And how complex did the implementation turn out to be compared with your previous solution?

September 23, 2017, 03:09   #4
Praveen. C (praveen), Super Moderator
I have never used MPI_IO directly, but HDF5 uses it when writing in parallel, I think. I agree that HDF5 may be overkill in some cases. I like it because it saves compressed files that can be read on other systems, and I can view those files in VisIt via XDMF.
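For completeness, the single-file parallel mode mentioned here is enabled through a file-access property list that routes HDF5 through MPI-IO. A minimal sketch (names invented, error handling and the actual dataset write omitted) could be:

```fortran
! All ranks open one shared HDF5 file through the MPI-IO driver.
program hdf5_parallel_sketch
  use mpi
  use hdf5
  implicit none
  integer :: ierr, rank
  integer(HID_T) :: plist_id, file_id

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call h5open_f(ierr)

  ! File-access property list that makes HDF5 use MPI-IO underneath
  call h5pcreate_f(H5P_FILE_ACCESS_F, plist_id, ierr)
  call h5pset_fapl_mpio_f(plist_id, MPI_COMM_WORLD, MPI_INFO_NULL, ierr)
  call h5fcreate_f('sol.h5', H5F_ACC_TRUNC_F, file_id, ierr, &
                   access_prp=plist_id)

  ! ... create one dataset sized for the global field, then each rank
  ! selects its hyperslab and calls h5dwrite_f, optionally with a
  ! collective transfer property list (h5pset_dxpl_mpio_f) ...

  call h5fclose_f(file_id, ierr)
  call h5pclose_f(plist_id, ierr)
  call h5close_f(ierr)
  call MPI_Finalize(ierr)
end program hdf5_parallel_sketch
```

This requires an HDF5 build configured with parallel (MPI) support.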
