File format - binary? (plot3d)

January 11, 2015, 14:54  |  #1
Joachim (Senior Member, Paris, France)
Hey everyone!

I have read online that it would be better to save the flow fields, etc., produced by my CFD code in binary format. I have spent some time trying to understand MPI-IO, but I am still having some issues.

If I take a simple plot3d grid provided by NASA (http://turbmodels.larc.nasa.gov/flatplate_grids.html), how can I translate it to binary format, for example (is that what they call "unformatted"?)? I have tried simply reading the grid with a Fortran code and exporting it with MPI-IO, but I don't know how to check whether I exported the right thing! How could I do that?

Thank you very much for your help!

Joachim

January 11, 2015, 17:28  |  #2
Tom-Robin Teschner (Senior Member, Cranfield, UK)
Yes, binary requires less storage than ASCII data, but as you pointed out it does not have the benefit that you can just look at the file. For that reason I really like the CGNS format: it writes everything in binary (mesh, flow solution, etc.), and there are plenty of tools out there (some very useful ones come packaged with the library when you download it) that let you browse a CGNS file in a file-explorer kind of style.

Anyway, can you post the code you use for reading the plot3D file(s)? I assume you are trying to read the structured mesh? The (probably) easiest way to check that what you are reading is correct is to store the coordinates in some simple mesh format. I would suggest the Tecplot ASCII format, readable by Tecplot, ParaView and probably other software, but I only know those two. It has a very simple header (specify what you are going to store, i.e. in your case x, y (and z for the 3D version), how many entries in x, y and z, and the mesh type) followed by the actual data.

Here is an example file that you could adapt to your case:

TITLE = "3D Channel Example"
Variables="X","Y","Z"
Zone I= 5, J= 3, K= 2, F=POINT
0.0 0.0 0.0
1.0 0.0 0.0
2.0 0.0 0.0
3.0 0.0 0.0
4.0 0.0 0.0
0.0 1.0 0.0
1.0 1.0 0.0
2.0 1.0 0.0
3.0 1.0 0.0
4.0 1.0 0.0
0.0 2.0 0.0
1.0 2.0 0.0
2.0 2.0 0.0
3.0 2.0 0.0
4.0 2.0 0.0
0.0 0.0 1.0
1.0 0.0 1.0
2.0 0.0 1.0
3.0 0.0 1.0
4.0 0.0 1.0
0.0 1.0 1.0
1.0 1.0 1.0
2.0 1.0 1.0
3.0 1.0 1.0
4.0 1.0 1.0
0.0 2.0 1.0
1.0 2.0 1.0
2.0 2.0 1.0
3.0 2.0 1.0
4.0 2.0 1.0



In your case, if you have a 2D mesh, just get rid of the "Z" variable and the K entry on the next line (and of course remove the third coordinate column).
I, J, K are the numbers of points in the x, y, z directions. In my simple example you can see that x ranges from 0 to 4 (I=5), y from 0 to 2 (J=3) and z from 0 to 1 (K=2), all incremented by 1. I hope this gives you an easy way to write your coordinates to a file and check whether what you have is correct. Leave F as POINT.
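In case it helps, here is a minimal Fortran sketch (the unit number and file name are just placeholders) that writes exactly the header and the point-ordered loop described above; swap the index-based coordinates for your plot3d x, y, z arrays.

Code:
program write_tecplot_example
  implicit none
  integer, parameter :: ni = 5, nj = 3, nk = 2
  integer :: i, j, k

  ! x, y, z are simply the node indices here; replace them with your plot3d coordinates
  open(unit=10, file='channel_example.dat', form='formatted', status='replace')
  write(10,'(a)') 'TITLE = "3D Channel Example"'
  write(10,'(a)') 'Variables="X","Y","Z"'
  write(10,'(a,i0,a,i0,a,i0,a)') 'Zone I= ', ni, ', J= ', nj, ', K= ', nk, ', F=POINT'
  do k = 1, nk          ! F=POINT ordering: i varies fastest, then j, then k
    do j = 1, nj
      do i = 1, ni
        write(10,'(3f6.1)') real(i-1), real(j-1), real(k-1)
      end do
    end do
  end do
  close(10)
end program write_tecplot_example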
Hope that helps!

January 13, 2015, 11:15  |  #3
Joachim (Senior Member, Paris, France)
Thank you for your answer! However, I am still having some issues with MPI-IO. I have read that all lines in the file are supposed to have the same length, so I don't know how to handle the first lines.

For example, in the plot3d format, I can write the x, y, z coordinates, but I don't know how to add the first two lines with the number of blocks and grid dimensions...

Would you know how to do that?

Thank you very much!

Joachim

January 13, 2015, 11:42  |  #4
Tom-Robin Teschner (Senior Member, Cranfield, UK)
I think I understand your problem now. Unfortunately I do not have too much experience with MPI-IO. However, what you could try is to first open the file with only one processor (that is controlled through the communicator you pass) and then have the rest of the processors append their data to the file.

So, in pseudocode, something like

if (myrank.eq.0) then ! master processor only
  call mpi_file_open(mpi_comm_self, ...) ! mpi_comm_self refers to only the calling processor
  call mpi_file_write(...)               ! write blocksize
  call mpi_file_write(...)               ! write dimX, dimY
  call mpi_file_close(...)
endif

call mpi_file_open(...) ! let each processor append their data
call mpi_file_set_view(...)
call mpi_file_write_all(...)


I am not exactly sure whether that works, but I hope you get the idea and can play around with it. Let me know if it works.
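For what it is worth, here is a minimal, self-contained sketch of that pattern in a slightly simpler variant (the file name, header values and per-rank data layout are made up for illustration): the file is opened once on mpi_comm_world, only rank 0 writes the header with mpi_file_write_at, and every rank then writes its own chunk behind the header through a file view whose displacement equals the header size in bytes.

Code:
program header_then_view
  use mpi
  implicit none
  integer, parameter :: nloc = 4                   ! points owned by each rank (made up)
  integer :: rank, nprocs, fh, ierr, i
  integer :: header(4)
  integer(kind=mpi_offset_kind) :: disp
  integer, dimension(mpi_status_size) :: mstat
  real(kind=8) :: xloc(nloc)

  call mpi_init(ierr)
  call mpi_comm_rank(mpi_comm_world, rank, ierr)
  call mpi_comm_size(mpi_comm_world, nprocs, ierr)

  xloc = (/ (real(rank*nloc + i, 8), i = 1, nloc) /)   ! each rank fills its own chunk

  call mpi_file_open(mpi_comm_world, 'demo.bin', mpi_mode_wronly + mpi_mode_create, &
                     mpi_info_null, fh, ierr)

  ! rank 0 alone writes the header: 1 "block" plus 3 dimensions = 4 integers = 16 bytes
  if (rank == 0) then
    header = (/ 1, nprocs*nloc, 1, 1 /)
    call mpi_file_write_at(fh, 0_mpi_offset_kind, header, 4, mpi_integer, mstat, ierr)
  end if

  ! all ranks then write their chunk behind the header (offsets in the view count doubles)
  disp = 16
  call mpi_file_set_view(fh, disp, mpi_double_precision, mpi_double_precision, &
                         'native', mpi_info_null, ierr)
  call mpi_file_write_at_all(fh, int(rank*nloc, mpi_offset_kind), xloc, nloc, &
                             mpi_double_precision, mstat, ierr)

  call mpi_file_close(fh, ierr)
  call mpi_finalize(ierr)
end program header_then_view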

January 13, 2015, 11:53  |  #5
Joachim (Senior Member, Paris, France)
Thank you very much once again!
Actually, I tried this:

call mpi_file_open (...)
disp = 0
call mpi_file_seek( mpi_fh, disp, mpi_seek_set, ierr )

call mpi_file_write( mpi_fh, 1, 1, mpi_integer, stat, ierr )
call mpi_file_write( mpi_fh, (/2, 1, 1 /), 3, mpi_integer, stat, ierr )

call mpi_file_write( mpi_fh, (/0.2d0, 0.8d0/), 2, mpi_double_precision, stat, ierr )
call mpi_file_write( mpi_fh, (/1.2d0, 1.8d0/), 2, mpi_double_precision, stat, ierr )
call mpi_file_write( mpi_fh, (/2.2d0, 2.8d0/), 2, mpi_double_precision, stat, ierr )
call mpi_file_close( mpi_fh, ierr )

so that all the data end up in one contiguous binary stream. I only considered two points in this case (in 3D), but the file was correctly imported by Tecplot. I am now going to do it with the entire grid and I will post the code here if it works.

January 13, 2015, 13:09  |  #6
Joachim (Senior Member, Paris, France)
OK! I got it to work. Since it was kind of a pain, let me write it down here for posterity.
The code reads a formatted plot3d grid (structured, single block), decomposes it into blocks, and writes everything in binary to a single file using MPI-IO. I hope it helps someone someday!

Code:
program convertToBinary

  use mpi
  implicit none

  ! filenames in/out
  character(40), parameter :: fileIn  = 'fplate_69x2x49.p3d'
  character(40), parameter :: fileOut = 'fplate_69x2x49.in'

  ! domain decomposition
  integer, parameter :: xprocs = 2
  integer, parameter :: yprocs = 1
  integer, parameter :: zprocs = 3

  ! local variables
  integer :: i, j, k, imax, jmax, kmax, stat, blocks, n
  integer :: is, ie, js, je, ks, ke
  integer :: loc_x, loc_y, loc_z, points_per_proc
  integer :: proc_num, numprocs, ierr
  integer :: mpi_fh, new_type
  integer, dimension(3) :: sizes, subsizes, starts
  integer(kind=mpi_offset_kind) :: offset
  integer, dimension(mpi_status_size) :: mstat
  real(kind=8), dimension(:,:,:), allocatable :: x, y, z
  real(kind=8), dimension(:,:,:), allocatable :: xbuf, ybuf, zbuf
  real(kind=8), dimension(:,:,:), allocatable :: buf1, buf2, buf3
  integer, dimension(:), allocatable :: b_is, b_ie, b_js, b_je, b_ks, b_ke

  ! initialize mpi
  call mpi_init(ierr)
  call mpi_comm_size(mpi_comm_world, numprocs, ierr)
  call mpi_comm_rank(mpi_comm_world, proc_num, ierr)

  ! check number of procs
  if (numprocs /= xprocs*yprocs*zprocs) then
    write(*,*) ' The number of processors does not match the domain decomposition! '
    stop
  endif 

  ! read grid dimensions and broadcast to other procs
  if (proc_num == 0) then
    open(unit = 1, file = fileIn, form = 'formatted', status = "old", iostat = stat)
    if (stat > 0) stop "*** Cannot open grid file ***"
    read(1,*) blocks
    read(1,*) imax, jmax, kmax
    close(1)
  endif
  call mpi_bcast(   imax, 1, mpi_integer, 0, mpi_comm_world, ierr )
  call mpi_bcast(   jmax, 1, mpi_integer, 0, mpi_comm_world, ierr )
  call mpi_bcast(   kmax, 1, mpi_integer, 0, mpi_comm_world, ierr )
  call mpi_bcast( blocks, 1, mpi_integer, 0, mpi_comm_world, ierr )

  ! decompose the grid into blocks attributed to each processors
  loc_z = proc_num/(xprocs*yprocs)
  loc_y = (proc_num-loc_z*(xprocs*yprocs))/xprocs
  loc_x = proc_num - loc_z*(xprocs*yprocs) - loc_y*xprocs

  points_per_proc = (imax+xprocs-1)/xprocs
  is = loc_x*points_per_proc + 1
  ie = min((loc_x + 1)*points_per_proc, imax)
  allocate(b_is(0:numprocs-1), b_ie(0:numprocs-1))
  b_is(proc_num) = is
  b_ie(proc_num) = ie

  points_per_proc = (jmax+yprocs-1)/yprocs
  js = loc_y*points_per_proc + 1
  je = min((loc_y + 1)*points_per_proc, jmax)
  allocate(b_js(0:numprocs-1), b_je(0:numprocs-1))
  b_js(proc_num) = js
  b_je(proc_num) = je

  points_per_proc = (kmax+zprocs-1)/zprocs
  ks = loc_z*points_per_proc + 1
  ke = min((loc_z + 1)*points_per_proc, kmax)
  allocate(b_ks(0:numprocs-1), b_ke(0:numprocs-1))
  b_ks(proc_num) = ks
  b_ke(proc_num) = ke

  ! broadcast the block indices to all processors
  do n = 0, numprocs-1
    call mpi_bcast(b_is(n), 1, mpi_integer, n, mpi_comm_world, ierr)
    call mpi_bcast(b_ie(n), 1, mpi_integer, n, mpi_comm_world, ierr)
    call mpi_bcast(b_js(n), 1, mpi_integer, n, mpi_comm_world, ierr)
    call mpi_bcast(b_je(n), 1, mpi_integer, n, mpi_comm_world, ierr)
    call mpi_bcast(b_ks(n), 1, mpi_integer, n, mpi_comm_world, ierr)
    call mpi_bcast(b_ke(n), 1, mpi_integer, n, mpi_comm_world, ierr)
  enddo

  ! allocate grid blocks
  allocate( x(is:ie,js:je,ks:ke), &
                 y(is:ie,js:je,ks:ke), &
                 z(is:ie,js:je,ks:ke)  )

  if (proc_num == 0) then

    ! allocate and initialize grid buffer
    allocate( xbuf(imax,jmax,kmax), &
                   ybuf(imax,jmax,kmax), &
                   zbuf(imax,jmax,kmax)  )

    open(unit = 1, file = fileIn, form = 'formatted', status = "old", iostat = stat)
    if (stat > 0) stop "*** Cannot open grid file ***"
    read(1,*)
    read(1,*)
    read(1,*) (((xbuf(i,j,k),i=1,imax),j=1,jmax), k=1,kmax), &
                   (((ybuf(i,j,k),i=1,imax),j=1,jmax), k=1,kmax), &
                   (((zbuf(i,j,k),i=1,imax),j=1,jmax), k=1,kmax)
    close(1)

    ! decompose grid in blocks and send to processors
    ! store for processor 0
    x = xbuf(is:ie,js:je,ks:ke)
    y = ybuf(is:ie,js:je,ks:ke)
    z = zbuf(is:ie,js:je,ks:ke)
    ! send to other processors
    do n = 1, numprocs-1
      allocate( buf1(b_is(n):b_ie(n),b_js(n):b_je(n),b_ks(n):b_ke(n)), &
                     buf2(b_is(n):b_ie(n),b_js(n):b_je(n),b_ks(n):b_ke(n)), &
                     buf3(b_is(n):b_ie(n),b_js(n):b_je(n),b_ks(n):b_ke(n))  )
      buf1 = xbuf(b_is(n):b_ie(n),b_js(n):b_je(n),b_ks(n):b_ke(n))
      buf2 = ybuf(b_is(n):b_ie(n),b_js(n):b_je(n),b_ks(n):b_ke(n))
      buf3 = zbuf(b_is(n):b_ie(n),b_js(n):b_je(n),b_ks(n):b_ke(n))

      call mpi_send( buf1, (b_ie(n)-b_is(n)+1)*(b_je(n)-b_js(n)+1)*(b_ke(n)-b_ks(n)+1), &
                     mpi_double_precision, n, 1, mpi_comm_world, ierr)
      call mpi_send( buf2, (b_ie(n)-b_is(n)+1)*(b_je(n)-b_js(n)+1)*(b_ke(n)-b_ks(n)+1), &
                     mpi_double_precision, n, 2, mpi_comm_world, ierr)
      call mpi_send( buf3, (b_ie(n)-b_is(n)+1)*(b_je(n)-b_js(n)+1)*(b_ke(n)-b_ks(n)+1), &
                     mpi_double_precision, n, 3, mpi_comm_world, ierr)
      deallocate(buf1, buf2, buf3)
    enddo

    ! deallocate grid buffer
    deallocate( xbuf, ybuf, zbuf )

  endif

  ! receive the grids from master proc
  if (proc_num /= 0) then
    call mpi_recv( x, (ie-is+1)*(je-js+1)*(ke-ks+1),         &
                   mpi_double_precision, 0, 1, mpi_comm_world, mstat, ierr)
    call mpi_recv( y, (je-js+1)*(ie-is+1)*(ke-ks+1),         &
                   mpi_double_precision, 0, 2, mpi_comm_world, mstat, ierr)
    call mpi_recv( z, (je-js+1)*(ie-is+1)*(ke-ks+1),         &
                   mpi_double_precision, 0, 3, mpi_comm_world, mstat, ierr)
  endif
  call mpi_barrier( mpi_comm_world, ierr )

  ! write file
  call mpi_file_open( mpi_comm_world, fileOut, mpi_mode_wronly + mpi_mode_create, &
                      mpi_info_null, mpi_fh, ierr )

  ! define new type
  sizes    = (/ imax, jmax, kmax /)
  subsizes = (/ ie-is+1, je-js+1, ke-ks+1 /)
  starts   = (/ is-1, js-1, ks-1 /)
  call mpi_type_create_subarray( 3, sizes, subsizes, starts, mpi_order_fortran, &
                                 mpi_double_precision, new_type, ierr )
  call mpi_type_commit( new_type, ierr )

  ! write grid dimensions (note: every rank writes the same four integers here;
  ! restricting this block to proc_num == 0 would avoid the redundant writes)
  offset = 0
  call mpi_file_seek( mpi_fh, offset, mpi_seek_set, ierr )
  call mpi_file_write( mpi_fh, blocks, 1, mpi_integer, mstat, ierr )
  call mpi_file_write( mpi_fh, (/imax, jmax, kmax/), 3, mpi_integer, mstat, ierr )

  ! write grid
  offset = 16
  call mpi_file_set_view( mpi_fh, offset, mpi_double_precision, new_type, "native", mpi_info_null, ierr )
  call mpi_file_write_all( mpi_fh, x, (ie-is+1)*(je-js+1)*(ke-ks+1), mpi_double_precision, mstat, ierr )
  call mpi_file_write_all( mpi_fh, y, (ie-is+1)*(je-js+1)*(ke-ks+1), mpi_double_precision, mstat, ierr )
  call mpi_file_write_all( mpi_fh, z, (ie-is+1)*(je-js+1)*(ke-ks+1), mpi_double_precision, mstat, ierr )

  ! finalize MPI
  call mpi_file_close( mpi_fh, ierr )
  call mpi_type_free( new_type, ierr )
  deallocate(x, y, z)
  deallocate(b_is, b_ie, b_js, b_je, b_ks, b_ke)
  call mpi_finalize(ierr)

endprogram convertToBinary
I am sorry, only the last few lines are actually interesting, but I needed the rest to decompose the domain into blocks...

January 14, 2015, 13:17  |  #7
Joachim (Senior Member, Paris, France)
I have been working on the solution file now, but I am still having an issue with the OVERFLOW plot3d format. The original plot3d format works just fine:

Code:
 READ(1) NGRID
 READ(1) JD,KD,LD
 READ(1) REFMACH,ALPHA,REY,TIME
 READ(1) ((((Q(J,K,L,N),J=1,JD),K=1,KD),L=1,LD),N=1,5)
with the following MPI-IO code:

Code:
  offset = 0
  call mpi_file_seek( mpi_fh, offset, mpi_seek_set, ierr )
  call mpi_file_write( mpi_fh, 1, 1, mpi_integer, mstat, ierr )
  call mpi_file_write( mpi_fh, (/JD, KD, LD/), 3, mpi_integer, mstat, ierr )
  call mpi_file_write( mpi_fh, (/REFMACH,ALPHA,REY,TIME/), 4, mpi_double_precision, mstat, ierr )

  offset = 48
  call mpi_file_set_view( mpi_fh, offset, mpi_double_precision, new_type, "native", mpi_info_null, ierr )
  do n = 1, 5
    call mpi_file_write_all( mpi_fh, q(js,ks,ls,n), &
             (je-js+1)*(ke-ks+1)*(le-ls+1), mpi_double_precision, mstat, ierr )
  enddo
but the OVERFLOW format (defined here http://overflow.larc.nasa.gov/files/...pendix_A.pdf):

Code:
 READ(1) NGRID
 READ(1) JD,KD,LD,NQ,NQC
 READ(1) REFMACH,ALPHA,REY,TIME,GAMINF,BETA,TINF,
 & IGAM,HTINF,HT1,HT2,(RGAS(I),I=1,MAX(2,NQC)),
 & FSMACH,TVREF,DTVREF
 READ(1) ((((Q(J,K,L,N),J=1,JD),K=1,KD),L=1,LD),N=1,NQ)
does not work with the following MPI-IO code:

Code:
  call mpi_file_write( mpi_fh, 1, 1, mpi_integer, mstat, ierr )
  call mpi_file_write( mpi_fh, (/jd, kd, ld, nq, nqc/), 5, mpi_integer, mstat, ierr )
  call mpi_file_write( mpi_fh, (/REFMACH,ALPHA,REY,TIME,GAMINF,BETA,TINF/), 7, mpi_double_precision, mstat, ierr )
  call mpi_file_write( mpi_fh, IGAM, 1, mpi_integer, mstat, ierr )
  call mpi_file_write( mpi_fh, (/HTINF,HT1,HT2,RGAS1,RGAS2,FSMACH,TVREF,DTVREF/), 8, mpi_double_precision, mstat, ierr )

  offset = 148
  call mpi_file_set_view...

(same after)
It is pretty weird, because I only changed the first lines, where the solution parameters are defined. After that, everything is the same. Also, I realized that Tecplot actually loads the solution without complaining if I set the offset to 144 instead of 148; however, the data are completely wrong (but at least it does not crash!).

When I say that the code works, I mean that the file can be opened with Tecplot (which recognizes the OVERFLOW format). I have also tried with a simple Fortran (non-MPI) code, and everything worked just fine with both formats.

What could be wrong in my code?
Thank you very much!

January 14, 2015, 13:22  |  #8
Tom-Robin Teschner (Senior Member, Cranfield, UK)
How do you calculate the offset?

January 14, 2015, 13:24  |  #9
Joachim (Senior Member, Paris, France)
For the plot3d format, I write 4 integers and 4 double precision numbers, so that makes 4*4 + 4*8 = 48 bytes.

For the OVERFLOW format, I write 7 integers and 15 double precision numbers, so that makes 7*4 + 15*8 = 148 bytes.

Does that sound right?
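Those byte counts assume 4-byte integers and 8-byte double precision reals. If you prefer not to hard-code 48 and 148, a small sketch like the following (names are purely illustrative) lets MPI report the type sizes and builds the displacements from them:

Code:
program header_bytes
  use mpi
  implicit none
  integer :: int_size, dbl_size, ierr
  integer(kind=mpi_offset_kind) :: offset_p3d, offset_ovfl

  call mpi_init(ierr)
  call mpi_type_size(mpi_integer, int_size, ierr)
  call mpi_type_size(mpi_double_precision, dbl_size, ierr)

  ! plot3d q-file header: ngrid + (jd, kd, ld) = 4 integers, (refmach, alpha, rey, time) = 4 doubles
  offset_p3d  = 4*int_size + 4*dbl_size     ! 48 with 4-byte integers and 8-byte reals
  ! overflow q-file header: 7 integers (ngrid, jd, kd, ld, nq, nqc, igam) and 15 doubles
  ! (counting max(2,nqc) = 2 rgas values)
  offset_ovfl = 7*int_size + 15*dbl_size    ! 148

  write(*,*) 'plot3d header bytes:  ', offset_p3d
  write(*,*) 'overflow header bytes:', offset_ovfl
  call mpi_finalize(ierr)
end program header_bytes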

January 14, 2015, 13:39  |  #10
Tom-Robin Teschner (Senior Member, Cranfield, UK)
Hm, I remember that in the code I was given, I had to change the offset value every time I called mpi_file_write. It is strange that this works in your first example; in my logic you always need to tell MPI-IO where you want to dump your data, hence the existence of the offset value.

Can you try the following: for each mpi_file_write, calculate the offset first, then make a call to mpi_file_set_view, and then do the mpi_file_write, so that you have something like

offset = 0 ! then 4, 8, 12, 16, 20 for the remaining integers, 24, 32, ..., 72 for the first seven doubles, 80 for IGAM, and 84, 92, ..., 140 for the last eight doubles
call mpi_file_set_view(offset)
call mpi_file_write(...)


I don't know exactly what mpi_file_seek does; maybe your structure works with that call, but the above should work just fine.

January 14, 2015, 15:06  |  #11
Joachim (Senior Member, Paris, France)
mpi_file_seek(offset) basically moves the file pointer so that the next access happens at byte position offset. I tried setting the offset before each write statement,

Code:
  offset = 0
  call mpi_file_seek( mpi_fh, offset, mpi_seek_set, ierr )
  call mpi_file_write( mpi_fh, 1, 1, mpi_integer, mstat, ierr )

  offset = 4
  call mpi_file_seek( mpi_fh, offset, mpi_seek_set, ierr )
  call mpi_file_write( mpi_fh, JD, 1, mpi_integer, mstat, ierr )
 
   ...
but I still get the same bug. What is interesting is that if I set the final offset (just before writing the actual data) to 144 instead of 148, Tecplot recognizes the OVERFLOW format and loads the file (incorrectly). When I look at the auxiliary data, I can see that the header has been read correctly (Re, M, AoA, etc.). However, since Tecplot does not display the final parameter DTVREF, there is no way of telling whether it has been cut off or not...

January 14, 2015, 15:14  |  #12
Tom-Robin Teschner (Senior Member, Cranfield, UK)
The only thing I can think of is that maybe the file types are wrong, i.e. the variables are declared as single precision but written as double precision. I don't expect that, but it has sometimes caused me trouble when writing binary files. Just for the sake of trying, could you use MPI_REAL as the type? To be honest, though, I am rather poking around in the dark here ...

January 14, 2015, 15:19  |  #13
Joachim (Senior Member, Paris, France)
I have actually tried with this non-MPI code:

Code:
  write(2) NGRID
  write(2) JD, KD, LD, NQ, NQC
  write(2) REFMACH,ALPHA,REY,TIME,GAMINF,BETA,TINF, &
           IGAM,HTINF,HT1,HT2,RGAS1,RGAS2, &
           FSMACH,TVREF,DTVREF
  write(2) ((((Q(J,K,L,N),J=1,JD),K=1,KD),L=1,LD),N=1,NQ)
which worked perfectly fine, no matter the precision (kind=4 or 8), as long as NGRID, JD, KD, LD, NQ, NQC and IGAM are integers. Therefore, I think the error can only come from a bug in my MPI-IO code...

I would like to check whether the grid dimensions and the entire header have been written properly, but I don't really know how to do that...

Thank you very much for your help!

January 14, 2015, 16:18  |  #14
Tom-Robin Teschner (Senior Member, Cranfield, UK)
If you just write a small sequential Fortran code that reads the file, or rather only the header, you should be able to check it, no?
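A minimal sketch of such a check, assuming the MPI-written file is a plain byte stream (no Fortran record markers), so it is opened with access='stream' instead of the usual sequential unformatted open; the file name is a placeholder and the variable list follows the OVERFLOW header order written above:

Code:
program check_overflow_header
  implicit none
  integer :: ngrid, jd, kd, ld, nq, nqc, igam, ios
  real(kind=8) :: refmach, alpha, rey, time, gaminf, beta, tinf
  real(kind=8) :: htinf, ht1, ht2, rgas1, rgas2, fsmach, tvref, dtvref

  ! stream access reads the bytes exactly as mpi_file_write laid them out
  open(unit=10, file='solution_mpi.bin', form='unformatted', access='stream', &
       status='old', iostat=ios)
  if (ios /= 0) stop '*** cannot open file ***'

  read(10) ngrid
  read(10) jd, kd, ld, nq, nqc
  read(10) refmach, alpha, rey, time, gaminf, beta, tinf
  read(10) igam
  read(10) htinf, ht1, ht2, rgas1, rgas2, fsmach, tvref, dtvref
  close(10)

  write(*,*) 'ngrid =', ngrid
  write(*,*) 'jd, kd, ld, nq, nqc =', jd, kd, ld, nq, nqc
  write(*,*) 'refmach, alpha, rey, time =', refmach, alpha, rey, time
  write(*,*) 'igam =', igam, '  dtvref =', dtvref
end program check_overflow_header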

January 14, 2015, 17:06  |  #15
Joachim (Senior Member, Paris, France)
Well, it is kind of weird. I used the following code:

Code:
!--------------------------------------------------------------------------------
!                   WRITE NON MPI DATA
!---------------------------------------------------------------------------------

  open(2, file=fileNotMPI, form='unformatted', convert='little_endian')
  write(2) NGRID
  close(2)

!--------------------------------------------------------------------------------
!                   WRITE MPI DATA
!---------------------------------------------------------------------------------

  call mpi_file_open( mpi_comm_world, fileMPI, mpi_mode_wronly + mpi_mode_create, &
                      mpi_info_null, mpi_fh, ierr )
  offset = 0
  call mpi_file_seek( mpi_fh, offset, mpi_seek_set, ierr )
  call mpi_file_write( mpi_fh, NGRID, 1, mpi_integer, mstat, ierr )
  call mpi_file_close(mpi_fh, ierr)

!--------------------------------------------------------------------------------
!                            READ DATA
!---------------------------------------------------------------------------------

  open(3, file=fileNotMPI, form='unformatted')
  read(3) NGRID1
  close(3)

  call mpi_file_open( mpi_comm_world, fileMPI, mpi_mode_wronly + mpi_mode_create, &
                      mpi_info_null, mpi_fh, ierr )
  offset = 0
  call mpi_file_seek( mpi_fh, offset, mpi_seek_set, ierr )
  call mpi_file_read( mpi_fh, NGRID2, 1, mpi_integer, mstat, ierr )
  call mpi_file_close( mpi_fh, ierr)

!--------------------------------------------------------------------------------
!                           CHECK RESULTS
!---------------------------------------------------------------------------------

  write(*,*)  NGRID1, NGRID2
However, even with this very simple code, the values returned are:

NGRID1 = 1 (ok) --- NGRID2 = 0 (not ok!)

January 14, 2015, 19:21  |  #16
Tom-Robin Teschner (Senior Member, Cranfield, UK)
Yep, I get the same. Also, if you look at the error code ierr of the mpi_file_read call, I get a very large integer value (335627796), though when inquiring about its meaning with mpi_error_string the program throws a SIGABRT at me.

Also, the file created with mpi_file_write is 17.2 GB (!) on my disk. When I swap the mpi_file_seek for mpi_file_set_view, the file gets the correct size of 4 bytes, though I get the same error and still can't read the file.

What I suspect is that MPI follows the C convention, i.e. it starts counting at 0, while Fortran starts at 1. So, for example, in the code that I have, the data are first read into a temporary array and the indexing is adjusted afterwards; see the code below.

REAL(MYTYPE) :: PHI(0:NCELL-1)

c--U1 Velocity

IOFFSET_FILE=0
CALL MPI_FILE_SET_VIEW(IFH,IOFFSET_FILE,MPI_MYTYPE,IFILETYPE,
#'NATIVE',MPI_INFO_NULL,IERR)

CALL MPI_FILE_READ_ALL(IFH,PHI,NCELL,MPI_MYTYPE,
#MPI_STATUS_IGNORE,IERR)

U1(1:NCELL)=PHI(0:NCELL-1)


I've tried to fiddle around with that but wasn't able to get it to work. The error code from mpi_file_read only seems to disappear when you set the count to zero, but then you would not be reading anything, which is not what you want. Anyway, maybe that helps you in some way.
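One more thing that may be worth ruling out in the small test above, although it is not verified here that this explains the NGRID2 = 0: the read-back in post #15 reopens the file with mpi_mode_wronly + mpi_mode_create, and calling mpi_file_read on a handle opened write-only is erroneous in MPI. A minimal variant of that snippet with a read-only open, reusing the same variable names, would be:

Code:
  ! reopen the MPI-written file read-only before calling mpi_file_read
  call mpi_file_open( mpi_comm_world, fileMPI, mpi_mode_rdonly, &
                      mpi_info_null, mpi_fh, ierr )
  offset = 0
  call mpi_file_seek( mpi_fh, offset, mpi_seek_set, ierr )
  call mpi_file_read( mpi_fh, NGRID2, 1, mpi_integer, mstat, ierr )
  call mpi_file_close( mpi_fh, ierr )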