CFD Online Discussion Forums

Thread: Fortran Stream Access and MPI_IO (https://www.cfd-online.com/Forums/main/193285-fortran-stream-access-mpi_io.html)

sbaffini September 21, 2017 08:49

Fortran Stream Access and MPI_IO
 
1 Attachment(s)
Dear all,

I will soon be rearranging most of the output of my Fortran-based CFD code. As a consequence, I also want to evaluate possible enhancements. Two of them, in particular, have been on my list for a while:

- using STREAM access for all input/output, in order to eliminate wasted space and simplify indexing in the files (a minimal sketch of what I mean follows this list);
- MPI_IO, to avoid most of my IO bottlenecks and complexities in parallel.
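
For the first point, the kind of access I have in mind is, roughly, along these lines (just a sketch with placeholder names, not the attached program):

Code:
! Sketch only: unformatted STREAM access writes raw bytes with no record
! markers, and POS= addresses the file by 1-based byte offsets.
program stream_write_sketch
   implicit none
   integer :: iu, i, pos
   integer :: idata(4) = [10, 20, 30, 40]

   open(newunit=iu, file='test.bin', access='stream', form='unformatted', &
        status='replace', action='write')

   ! Each default integer starts at byte 1 + (i-1)*storage_size(idata)/8
   do i = 1, size(idata)
      pos = 1 + (i-1)*(storage_size(idata)/8)
      write(iu, pos=pos) idata(i)
   end do

   close(iu)
end program stream_write_sketch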

To test the ideas, I produced the simple program you can find attached (rename it as .f90), which contains most of the ingredients I will need.

What I learned from running the program under a few OS/compiler combinations (Intel on Ubuntu and Fedora, gfortran on Ubuntu and Windows) is that, in practice, MPI_IO in native representation and the STREAM access introduced in Fortran 2003 are indeed compatible (at least for integers).
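
In essence, the check is something like the following sketch (again placeholder names, not the attached program): the file written with stream access above is read back through MPI_IO, whose default data representation is "native". The only detail to mind is that MPI file offsets are 0-based byte counts, while the Fortran POS= specifier is 1-based.

Code:
! Sketch only: read back, via MPI_IO in the default "native" representation,
! the integers written by the stream-access program above (assumes no more
! ranks than values in the file).
program mpiio_read_sketch
   use mpi
   implicit none
   integer :: ierr, fh, rank, ival
   integer(kind=MPI_OFFSET_KIND) :: offset

   call MPI_Init(ierr)
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

   call MPI_File_open(MPI_COMM_WORLD, 'test.bin', MPI_MODE_RDONLY, &
                      MPI_INFO_NULL, fh, ierr)

   ! Rank r reads the (r+1)-th integer: 0-based byte offset = r*4
   offset = rank*(storage_size(ival)/8)
   call MPI_File_read_at(fh, offset, ival, 1, MPI_INTEGER, &
                         MPI_STATUS_IGNORE, ierr)
   print *, 'rank', rank, 'read', ival

   call MPI_File_close(fh, ierr)
   call MPI_Finalize(ierr)
end program mpiio_read_sketch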

I found this to be implicitly assumed in several Stack Overflow threads as well.

Now, my question is: do you know of any official source where this is implicitly/explicitly stated?

It totally makes sense, but it would be appalling to discover only later that this is not, in fact, the case.

Thank you all

praveen September 21, 2017 09:33

Have you tried using HDF5? Wouldn't this be a better option for saving solution files? We have implemented this in one of our unstructured FV codes. Each partition writes its own solution file, and an XDMF file then describes how the files are to be interpreted, so they can be opened in VisIt. It is possible for all partitions to write the solution to one file in parallel, but we have not yet figured out how to do this. The HDF5 API provides a way to do it, I think.
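
For reference, the per-partition writing is, in essence, along these lines with the HDF5 Fortran API (just a sketch with placeholder names, not taken from our code):

Code:
! Sketch only: each partition writes its own file with the serial HDF5 API;
! file and dataset names are placeholders.
program hdf5_partition_sketch
   use hdf5
   implicit none
   integer :: ierr
   integer(hid_t) :: file_id, dspace_id, dset_id
   integer(hsize_t) :: dims(1) = [100_hsize_t]
   real(8) :: solution(100)
   character(len=*), parameter :: fname = 'solution_rank0.h5'  ! one file per MPI rank

   solution = 1.0d0   ! would be the local solution of this partition

   call h5open_f(ierr)                                      ! initialize the Fortran interface
   call h5fcreate_f(fname, H5F_ACC_TRUNC_F, file_id, ierr)  ! new file for this partition
   call h5screate_simple_f(1, dims, dspace_id, ierr)        ! 1D dataspace of size 100
   call h5dcreate_f(file_id, 'solution', H5T_NATIVE_DOUBLE, dspace_id, dset_id, ierr)
   call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, solution, dims, ierr)
   call h5dclose_f(dset_id, ierr)
   call h5sclose_f(dspace_id, ierr)
   call h5fclose_f(file_id, ierr)
   call h5close_f(ierr)
end program hdf5_partition_sketch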

sbaffini September 21, 2017 11:23

Dear Praveen,

thanks for the suggestion. I actually thought about this in the past, but came to the following conclusions:

- Performance-wise, HDF5 is not as spectacular as pure MPI_IO (according to some tests I found online at the time).

- HDF5 is probably overkill for what I want to achieve (which is well represented by the few lines in the attached code). This might also explain some of the results mentioned in the previous point.

- This is purely a personal opinion, but I tend to avoid external libraries whenever I can, and I only introduce them if I also have an internal fallback solution (however poorly it might perform). The only exception to this, at the moment, is of course MPI.

For the above reasons, I have not considered HDF5 an actual option for me. Regarding the last point in particular, if I used HDF5 I would end up with two IO implementations, which is something I want to avoid. If, instead, MPI_IO really is compatible with STREAM access, I would end up with only one.

Still, do you have any comparison between pure MPI_IO and HDF5? How complex did the implementation turn out to be compared to your previous solution?

praveen September 23, 2017 03:09

I have never used MPI_IO directly, but HDF5 uses it when writing in parallel, I think. I agree that using HDF5 may be overkill in some cases. I like it because it saves compressed files that can be read on other systems, and I can view those files in VisIt via XDMF.

