
Writing parallel non-partitioned ordered data (TecIO).

#1 | November 13, 2017, 04:17
Writing parallel non-partitioned ordered data (TecIO).
Andrea (artu72), New Member | Join Date: Aug 2012 | Posts: 8
Hi, I am trying to write out a .szplt file using the TecIO-MPI library. I wonder whether it is possible to write non-partitioned ordered IJK data. I have written a simple Fortran 90 code to test MPI writing. It compiles flawlessly, but it does not work properly: it writes only the zone of the main-rank MPI process (rank 0 in the code). Can you suggest how to make it work? The TecIO manual and example codes are not helpful. Thank you in advance.


Code:
program test

use mpi

implicit none

include 'tecio.f90'

character(len=1) :: nullchr
character(len=20) :: dataname,vars,filename,scratchd,zonetitle
integer :: debug,npts,nelm,isdouble,b,i,j,k,lvb
integer :: ni,nj,nk
integer :: ierr,me,numproc
real, dimension(:,:,:), allocatable :: x,y,z,r
real, dimension(8) :: xoff,yoff,zoff
double precision :: soltime
integer :: mainrank, tecomm
integer :: visdouble, filetype
integer :: fileformat
integer :: zonetype,strandid,parentzn,isblock
integer :: icellmax,jcellmax,kcellmax,nfconns,fnmode,shrconn
integer :: totnumfcnodes, numconbcfaces, totnumbcconns
integer, dimension(:),pointer :: null
integer, dimension(4) :: valueloc
integer :: npartitions
integer, dimension(1) :: pntranks

data xoff/-1.0,0.0,-1.0,0.0,-1.0,0.0,-1.0,0.0/
data yoff/-1.0,-1.0,0.0,0.0,-1.0,-1.0,0.0,0.0/
data zoff/-1.0,-1.0,-1.0,-1.0,0.0,0.0,0.0,0.0/

call mpi_init(ierr)
call mpi_comm_rank(mpi_comm_world,me,ierr)
call mpi_comm_size(mpi_comm_world,numproc,ierr)

nullify(null)
nullchr = char(0)
!
!... open the file and write the tecplot datafile 
!... header information.
!
dataname='TGV'//nullchr
vars='x y z r'//nullchr
filename='flow'//nullchr
scratchd='./'//nullchr
debug   = 1 ! 0=do not write info, 1 = write
filetype = 0 ! 0=full (grid and variables)
fileformat = 1  ! 0 = plt, 1 = szplt
isdouble = 0 ! 0 = single, 1 = double

i = tecini142(dataname,vars,filename,scratchd,fileformat,filetype,debug,isdouble)
!
! initialize MPI
!
! tecomm is the communicator
mainrank = 0 ! is the rank of main process
i = tecmpiinit142(mpi_comm_world, mainrank)
!
!... write the zone header information.
!
ni    = 20
nj    = 20
nk    = 20
b = me + 1
zonetitle='Block '//char(48+b)//nullchr
zonetype = 0 ! ordered, it is OK here
soltime = 0.0 ! double precision, it is the time of flow (ttime)
strandid = 0 ! strand number, ok
parentzn = 0 ! ok as it is
isblock = 1 ! ok as it is
icellmax = 0 ! ok as it is
jcellmax = 0 ! ok as it is
kcellmax = 0 ! ok as it is
nfconns = 0 ! ok as it is
fnmode = 0 ! ok as it is
totnumfcnodes = 0 ! ok as it is
numconbcfaces = 0 ! ok as it is
totnumbcconns = 0 ! ok as it is
shrconn = 0 ! ok as it is
valueloc = 1 ! 0 = cell-centered, 1 = nodal (number of variables)
valueloc(4) = 0
i = teczne142(zonetitle, zonetype, ni, nj, nk, icellmax, jcellmax, kcellmax, &
              soltime, strandid, parentzn, isblock, nfconns, fnmode,         &
              totnumfcnodes, numconbcfaces, totnumbcconns,                   &
              null, valueloc, null, shrconn                                  )
pntranks = me ! process of the zone
i = tecznemap142(npartitions, pntranks)
!
!... write out the field data.
!
allocate(x(ni,nj,nk),y(ni,nj,nk),z(ni,nj,nk),r(ni-1,nj-1,nk-1))
b = me+1
do k=1,nk
  do j = 1,nj
    do i = 1,ni
      x(i,j,k) = (i-1.0)/(ni-1.0) + xoff(b)
      y(i,j,k) = (j-1.0)/(nj-1.0) + yoff(b)
      z(i,j,k) = (k-1.0)/(nk-1.0) + zoff(b)
    enddo
  enddo
enddo
do k=1,nk-1
  do j = 1,nj-1
    do i = 1,ni-1
      r(i,j,k) = b + (i*j*k-1.0)/((ni-1.0)*(nj-1.0)*(nk-1.0))
    enddo
  enddo
enddo
lvb = ni*nj*nk
i   = tecdat142(lvb,x,isdouble)
i   = tecdat142(lvb,y,isdouble)
i   = tecdat142(lvb,z,isdouble)
lvb = (ni-1)*(nj-1)*(nk-1)
i   = tecdat142(lvb,r,isdouble)
deallocate(x,y,z,r)
!
! end, close file and deallocate
!
i = tecend142()

call mpi_finalize(ierr)

stop
end

#2 | November 22, 2017, 13:08
Writing parallel non-partitioned ordered data (TecIO).
Dave Taflin (davetaflin), New Member | Join Date: Nov 2017 | Posts: 12
Andrea,

I'd make a small change to your code, but this also exposes a bug in TecIO-MPI, which I'll need to fix before any corrections to your code would really be helpful. I'll post again when the TecIO-MPI fix is available. Sorry about that.

Dave Taflin
Senior Software Development Engineer
Tecplot, Inc.

#3 | December 14, 2017, 17:24
Bug-fixed TecIO/TecIO-MPI
Dave Taflin (davetaflin), New Member | Join Date: Nov 2017 | Posts: 12
Andrea, please download the latest TecIO distribution, which fixes the bug you encountered, and also adds an example of outputting nonpartitioned zones to the ijkpartitioned example.

As for your code, bear in mind that the main output rank must call teczne142 and tecznemap142 for every zone being output, whether it is doing that output or not. So while your code should work for other ranks, the main rank should call teczne142 and tecznemap142 for all zones, but then call the data output routine tecdat142 only for the one zone it actually outputs.

Here is your code, modified to do this. Please feel free to reply or contact our technical support if you encounter any other difficulties, and thanks for using TecIO-MPI:

Code:
program test

use mpi

implicit none

include 'tecio.f90'

character(len=1) :: nullchr
character(len=20) :: dataname,vars,filename,scratchd,zonetitle
integer :: debug,npts,nelm,isdouble,b,i,j,k,lvb
integer :: ni,nj,nk
integer :: ierr,me,numproc
real, dimension(:,:,:), allocatable :: x,y,z,r
real, dimension(8) :: xoff,yoff,zoff
double precision :: soltime
integer :: mainrank, tecomm
integer :: visdouble, filetype
integer :: fileformat
integer :: zonetype,strandid,parentzn,isblock
integer :: icellmax,jcellmax,kcellmax,nfconns,fnmode,shrconn
integer :: totnumfcnodes, numconbcfaces, totnumbcconns
integer, dimension(:),pointer :: null
integer, dimension(4) :: valueloc
integer :: npartitions
integer, dimension(1) :: pntranks

data xoff/-1.0,0.0,-1.0,0.0,-1.0,0.0,-1.0,0.0/
data yoff/-1.0,-1.0,0.0,0.0,-1.0,-1.0,0.0,0.0/
data zoff/-1.0,-1.0,-1.0,-1.0,0.0,0.0,0.0,0.0/

call mpi_init(ierr)
call mpi_comm_rank(mpi_comm_world,me,ierr)
call mpi_comm_size(mpi_comm_world,numproc,ierr)

nullify(null)
nullchr = char(0)
!
!... open the file and write the tecplot datafile 
!... header information.
!
dataname='TGV'//nullchr
vars='x y z r'//nullchr
filename='flow'//nullchr
scratchd='./'//nullchr
debug   = 1 ! 0=do not write info, 1 = write
filetype = 0 ! 0=full (grid and variables)
fileformat = 1  ! 0 = plt, 1 = szplt
isdouble = 0 ! 0 = single, 1 = double

i = tecini142(dataname,vars,filename,scratchd,fileformat,filetype,debug,isdouble)
!
! initialize MPI
!
! tecomm is the communicator
mainrank = 0 ! is the rank of main process
i = tecmpiinit142(mpi_comm_world, mainrank)
!
!... write the zone header information.
!
ni    = 20
nj    = 20
nk    = 20
b = me + 1
zonetitle='Block '//char(48+b)//nullchr
zonetype = 0 ! ordered, it is OK here
soltime = 0.0 ! double precision, it is the time of flow (ttime)
strandid = 0 ! strand number, ok
parentzn = 0 ! ok as it is
isblock = 1 ! ok as it is
icellmax = 0 ! ok as it is
jcellmax = 0 ! ok as it is
kcellmax = 0 ! ok as it is
nfconns = 0 ! ok as it is
fnmode = 0 ! ok as it is
totnumfcnodes = 0 ! ok as it is
numconbcfaces = 0 ! ok as it is
totnumbcconns = 0 ! ok as it is
shrconn = 0 ! ok as it is
valueloc = 1 ! 0 = cell-centered, 1 = nodal (number of variables)
valueloc(4) = 0
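! Every rank loops over all zones: the main rank writes the zone header and
! rank map (teczne142/tecznemap142) for every zone, the other ranks only for
! their own zone, and only the owning rank writes the field data with tecdat142.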
do b = 0, numproc - 1
  if (b == me .or. me == mainrank) then
    i = teczne142(zonetitle, zonetype, ni, nj, nk, icellmax, jcellmax, kcellmax, &
                  soltime, strandid, parentzn, isblock, nfconns, fnmode,         &
                  totnumfcnodes, numconbcfaces, totnumbcconns,                   &
                  null, valueloc, null, shrconn                                  )
    npartitions = 1
    pntranks = b ! process of the zone
    i = tecznemap142(npartitions, pntranks)
    if (b == me) then
      !
      !... write out the field data.
      !
      allocate(x(ni,nj,nk),y(ni,nj,nk),z(ni,nj,nk),r(ni-1,nj-1,nk-1))
      do k=1,nk
        do j = 1,nj
          do i = 1,ni
            x(i,j,k) = (i-1.0)/(ni-1.0) + xoff(b + 1)
            y(i,j,k) = (j-1.0)/(nj-1.0) + yoff(b + 1)
            z(i,j,k) = (k-1.0)/(nk-1.0) + zoff(b + 1)
          enddo
        enddo
      enddo
      do k=1,nk-1
        do j = 1,nj-1
          do i = 1,ni-1
            r(i,j,k) = b + 1 + (i*j*k-1.0)/((ni-1.0)*(nj-1.0)*(nk-1.0))
          enddo
        enddo
      enddo
      lvb = ni*nj*nk
      i   = tecdat142(lvb,x,isdouble)
      i   = tecdat142(lvb,y,isdouble)
      i   = tecdat142(lvb,z,isdouble)
      lvb = (ni-1)*(nj-1)*(nk-1)
      i   = tecdat142(lvb,r,isdouble)
      deallocate(x,y,z,r)
    endif
  endif
enddo
!
! end, close file and deallocate
!
i = tecend142()

call mpi_finalize(ierr)

stop
end

#4 | December 20, 2017, 03:57
Andrea (artu72), New Member | Join Date: Aug 2012 | Posts: 8
Quote:
Originally Posted by davetaflin View Post
Andrea, please download the latest TecIO distribution, which fixes the bug you encountered, and also adds an example of outputting nonpartitioned zones to the ijkpartitioned example.
Thank you very much. I have downloaded and installed the latest TecIO distribution and compiled your code. It works flawlessly!

#5 | December 21, 2017, 05:30
Andrea (artu72), New Member | Join Date: Aug 2012 | Posts: 8
Quote:
Originally Posted by davetaflin View Post
As for your code, bear in mind that the main output rank must call teczne142 and tecznemap142 for every zone being output, whether it is doing that output or not. So while your code should work for other ranks, the main rank should call teczne142 and tecznemap142 for all zones, but then call the data output routine tecdat142 only for the one zone it actually outputs.
I have examined the code you posted in your reply. If I am not wrong, teczne142 and tecznemap142 are called both by the main rank and by the rank owning that zone, aren't they? I agree that tecdat142 is instead called only by the owning rank.

#6 | December 21, 2017, 16:54
Dave Taflin (davetaflin), New Member | Join Date: Nov 2017 | Posts: 12
Yes, I made that correction to the code that I posted.

#7 | January 4, 2018, 08:06
Andrea (artu72), New Member | Join Date: Aug 2012 | Posts: 8
Dave,

I would like to report a small bug in the current TecIO code that produces an invalid floating-point operation when tecdat142 is called, if the code is compiled with floating-point exception trapping enabled. The code works when trapping is disabled, but I would like to enable it in debug builds so I can trap other floating-point problems in my own code. Can you check it? Thank you in advance. Below you find the code; you can run it even on a single node to reproduce the problem. With gfortran, you can use the -ffpe-trap=invalid option to trigger the error.

Code:
program test

use mpi

implicit none

include 'tecio.f90'

character(len=1) :: nullchr
character(len=:), allocatable :: vars,dataname,filename,scratchd,zonetitle
integer :: debug,isdouble,b,i,j,k,lvb
integer, dimension(:), allocatable :: nb,ni,nj,nk,ib
integer :: ipc,lb,gnb,ierr,me,numproc
real, dimension(:,:,:), allocatable :: x,y,z,r
real, dimension(8) :: xoff,yoff,zoff
double precision :: soltime
integer :: mainrank
integer :: filetype
integer :: fileformat
integer :: zonetype,strandid,parentzn,isblock
integer :: icellmax,jcellmax,kcellmax,nfconns,fnmode,shrconn
integer :: totnumfcnodes, numconbcfaces, totnumbcconns
integer, dimension(4) :: null
integer, dimension(4) :: valueloc
integer :: npartitions
integer, dimension(1) :: pntranks

data xoff/-1.0,0.0,-1.0,0.0,-1.0,0.0,-1.0,0.0/
data yoff/-1.0,-1.0,0.0,0.0,-1.0,-1.0,0.0,0.0/
data zoff/-1.0,-1.0,-1.0,-1.0,0.0,0.0,0.0,0.0/

call mpi_init(ierr)
call mpi_comm_rank(mpi_comm_world,me,ierr)
call mpi_comm_size(mpi_comm_world,numproc,ierr)

gnb = 8

allocate(nb(numproc))
allocate(ib(numproc))
allocate(ni(gnb))
allocate(nj(gnb))
allocate(nk(gnb))

nb = gnb / numproc
ib(1) = 0
do i=1,mod(gnb,numproc)
  nb(i) = nb(i) + 1
enddo
do i=2,numproc
  ib(i) = ib(i-1) + nb(i-1)
enddo
do i=1,gnb
  ni(i) = 2*i
  nj(i) = 2*i
  nk(i) = 2*i
enddo

!if(me==mainrank)then
!  write(*,*) 'ib:',ib
!  write(*,*) 'nb:',nb
!endif

null = 0
nullchr = char(0)
!
!... open the file and write the tecplot datafile 
!... header information.
!
dataname='TGV'//nullchr
vars='x'//','//'y'//','//'z'
vars=vars//','//'Fluid Density'//nullchr
filename='flow'//nullchr
scratchd='./'//nullchr
debug   = 0 ! 0=do not write info, 1 = write
filetype = 0 ! 0=full (grid and variables)
fileformat = 1  ! 0 = plt, 1 = szplt
isdouble = 0 ! 0 = single, 1 = double

i = tecini142(dataname,vars,filename,scratchd,fileformat,filetype,debug,isdouble)
!
! initialize MPI
!
! tecomm is the communicator
mainrank = 0 ! is the rank of main process
i = tecmpiinit142(mpi_comm_world, mainrank)
!
!... write the zone header information
!
zonetype = 0 ! ordered, it is OK here
soltime = 0.0 ! double precision, it is the time of flow (ttime)
strandid = 0 ! strand number, ok
parentzn = 0 ! ok as it is
isblock = 1 ! ok as it is
icellmax = 0 ! ok as it is
jcellmax = 0 ! ok as it is
kcellmax = 0 ! ok as it is
nfconns = 0 ! ok as it is
fnmode = 0 ! ok as it is
totnumfcnodes = 0 ! ok as it is
numconbcfaces = 0 ! ok as it is
totnumbcconns = 0 ! ok as it is
shrconn = 0 ! ok as it is
valueloc = 1 ! 0 = cell-centered, 1 = nodal (number of variables)
valueloc(4) = 0
npartitions = 1
do lb=1,nb(me+1)
  b = lb + ib(me+1)
  zonetitle='Block '//char(48+b)//nullchr
  i = teczne142(zonetitle, zonetype, ni(b), nj(b), nk(b), icellmax, jcellmax, kcellmax, &
                soltime, strandid, parentzn, isblock, nfconns, fnmode,                  &
                totnumfcnodes, numconbcfaces, totnumbcconns,                            &
                null, valueloc, null, shrconn                                           )
  pntranks = me ! process of the zone
  i = tecznemap142(npartitions, pntranks)
!
! write out the field data.
!
  allocate(x(ni(b),nj(b),nk(b)),y(ni(b),nj(b),nk(b)),z(ni(b),nj(b),nk(b)),r(ni(b),nj(b),nk(b)))
  do k=1,nk(b)
    do j = 1,nj(b)
      do i = 1,ni(b)
        x(i,j,k) = (i-1.0)/(ni(b)-1.0) + xoff(b)
        y(i,j,k) = (j-1.0)/(nj(b)-1.0) + yoff(b)
        z(i,j,k) = (k-1.0)/(nk(b)-1.0) + zoff(b)
      enddo
    enddo
  enddo
  do k=1,nk(b)-1
    do j = 1,nj(b)-1
      do i = 1,ni(b)-1
        r(i,j,k) = b + (i*j*k-1.0)/((ni(b)-1.0)*(nj(b)-1.0)*(nk(b)-1.0))
!        r(i,j,k) = b + (i*j*k-1.0)/(ni(b)*nj(b)*nk(b))
      enddo
    enddo
  enddo
  lvb = ni(b)*nj(b)*nk(b)
  i   = tecdat142(lvb,x,isdouble)
  i   = tecdat142(lvb,y,isdouble)
  i   = tecdat142(lvb,z,isdouble)
  lvb = (ni(b)-1)*(nj(b)-1)*(nk(b)-1)
  i   = tecdat142(lvb,r(1:ni(b)-1,1:nj(b)-1,1:nk(b)-1),isdouble)
  deallocate(x,y,z,r)
enddo

if( me == mainrank .and. numproc>1) then
  do ipc = 2, numproc
    do lb = 1,nb(ipc)
      b = lb + ib(ipc)
      zonetitle='Block '//char(48+b)//nullchr ! this determine zone properties even in other ranks
      i = teczne142(zonetitle, zonetype, ni(b), nj(b), nk(b), icellmax, jcellmax, kcellmax, &
                    soltime, strandid, parentzn, isblock, nfconns, fnmode,         &
                    totnumfcnodes, numconbcfaces, totnumbcconns,                   &
                    null, valueloc, null, shrconn                                  )
      pntranks = ipc-1 ! process of the zone
      i = tecznemap142(npartitions, pntranks)
    enddo
  enddo
endif

!
! end, close file and deallocate
!
i = tecend142()

call mpi_finalize(ierr)

deallocate(nb,ib,ni,nj,nk)

stop

end

#8 | January 6, 2018, 01:37
Dave Taflin (davetaflin), New Member | Join Date: Nov 2017 | Posts: 12
Thanks, Andrea, I'll have a look.

Dave

#9 | March 28, 2018, 04:55
Andrea (artu72), New Member | Join Date: Aug 2012 | Posts: 8
Any news? Thanks...

#10 | April 2, 2018, 19:09
Dave Taflin (davetaflin), New Member | Join Date: Nov 2017 | Posts: 12
Thanks for the reminder. I've fixed the floating-point exception you're seeing, and that fix should be released with our next release of Tecplot 360, around the middle of 2018.

With an Intel compiler (not gfortran), I am also seeing a floating-point exception in this line:

i = tecdat142(lvb,r(1:ni(b)-1,1:nj(b)-1,1:nk(b)-1),isdouble)

Here, you are calling tecdat142 with a temporary array formed from a subset of the r array. By the way, this can be a performance penalty, since the temporary array must be allocated and the subset of values copied into it; I recommend avoiding passing array subsets like this if you can. Anyway, this floating-point exception is encountered in the middle of forming and filling the temporary array: neither in your code nor in ours, but in the compiler-generated code that allocates and fills the temporary array. I can't see why this is happening, but you can sidestep the issue by not creating the temporary array at all. Allocate r with dimensions (ni(b)-1,nj(b)-1,nk(b)-1) and just pass r to tecdat142:

i = tecdat142(lvb,r,isdouble)
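For reference, a minimal sketch of that suggested change, assuming the same block dimensions ni(b), nj(b), nk(b) and the other variables from the listing above:

Code:
! Sketch of the suggested fix: size r to the cell-centered extent up front,
! so the whole array can be passed to tecdat142 with no array-section temporary.
allocate(r(ni(b)-1, nj(b)-1, nk(b)-1))
do k = 1, nk(b)-1
  do j = 1, nj(b)-1
    do i = 1, ni(b)-1
      r(i,j,k) = b + (i*j*k - 1.0)/((ni(b)-1.0)*(nj(b)-1.0)*(nk(b)-1.0))
    enddo
  enddo
enddo
lvb = (ni(b)-1)*(nj(b)-1)*(nk(b)-1)
i   = tecdat142(lvb, r, isdouble)   ! pass the whole array, no temporary copy
deallocate(r)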

Dave

#11 | March 8, 2019, 12:33
Julio Mendez (juliom), Senior Member | Join Date: Apr 2009 | Location: Fairburn, GA, USA | Posts: 290
It seems this post does not get old at all. I am trying to add TecIO-MPI to my MPI code too, but I cannot see the bridge between my implementation and this routine I need to add. The snippet has few comments, which makes it difficult for me to follow the author through the process (that's OK). Therefore, I need extra help to digest this and couple it with my MPI CFD code, which is a 2D cell-vertex implementation.

What are the parameters shown in lines 28-30?

gnb on line 36? Is this the number of nodes on each MPI rank?

nb on line 44?
ib on line 45?

Where can I get more info about the parameters shown on lines 89-105?

Thanks

#12 | March 8, 2019, 13:01
Julio Mendez (juliom), Senior Member | Join Date: Apr 2009 | Location: Fairburn, GA, USA | Posts: 290
Could you please share more details about how you compiled it?

Great contribution, thanks!

#13 | March 11, 2019, 05:54
Andrea (artu72), New Member | Join Date: Aug 2012 | Posts: 8
Quote:
Originally Posted by juliom View Post
It seems this post does not get old at all. I am trying to implement TecIO-MPI to my MPI code too, but I can not see the bridge between my implementation and this subroutine I need to add. This snippet does not have comments so it makes difficult for me to follow the author through the process (That's ok). Therefore, I need extra help to digest this and couple it with my MPI CFD code which is a 2D cell-vertex implementation.

What are those parameters shown in line 28-30 ?

Hi Julio. Since my CFD code deals with structured grids, I store cell-node coordinates, while the variables are associated with cell centers. Moreover, I use domain decomposition, so I refer to groups of cells, called "blocks". In each block there are NI, NJ, NK cells along the computational coordinates I, J, K respectively.

In this code I have built a very simple case, since I am not interested in the precise topology of my CFD code; I just want to test the interaction with the TecIO library.

The coordinates are defined in lines 119-127, and lines 28-30 contain offsets such that the entire domain, divided into 8 blocks, spans the cube with edge 2 centered at (0,0,0).

Quote:
gnb on line 36 ? in this the number of nodes in each MPI rank?

nb on line 44 ?
ib on line 45?
gnb is the global number of blocks in the run (8 in this case).

nb is the LOCAL (rank-specific) number of blocks (8/np, where np is the number of MPI processes; obviously np must be an exact divisor of 8, i.e. 1, 2, 4, or 8, assuming an equal number of blocks on each rank). ib is the block offset for each rank.


Quote:
Where can I get more info about the parameters shown on line 89-105?
Thanks
You can find them in the Tecplot Data Format Guide PDF, available in the TecIO tarball and also on the Tecplot website.

#14 | March 11, 2019, 06:04
Andrea (artu72), New Member | Join Date: Aug 2012 | Posts: 8
Quote:
Originally Posted by juliom View Post
Could you please share more details about how you compiled it ?

Great contribution thanks!

Hi Julio, the compilation is simple; just add the right include path and libraries:


Code:
gfortran -o test test.f90 -I/opt/tecplot_2017_R2/360ex_2017r2/include/ -ltecio -L./ -lstdc++ -lpthread
Here the include directory (containing tecio.f90) points to the Tecplot installation directory, whereas the library is in the same directory as the source code (I compiled it myself; you can also try the default library provided in the bin/ directory of the Tecplot installation).

#15 | March 11, 2019, 14:08
Julio Mendez (juliom), Senior Member | Join Date: Apr 2009 | Location: Fairburn, GA, USA | Posts: 290
Thanks artu72.
Let me go through your comments and the additional questions that popped up while I was digesting your answers. Thanks in advance!

1.-
Quote:
gnb is the global number of blocks in the run (8 in this case),

nb is the LOCAL (rank-specific) number of blocks (8/np, where np is the number of MPI processes, obviously np must be a exact 8 divisor (1,2,4,8), assuming equal number of blocks on each rank. ib is the block offset for each rank.
Did you use three-dimensional ordered data (IJK-ordered), where each block has 8 nodes? If you had 8 blocks and 1 MPI process, then you end up with 216 nodes (NI*NJ*NK).

2.-
You said that np must be an exact 8 divisor, then why did you write this loop?
Code:
do i=1,mod(gnb,numproc)
  nb(i) = nb(i) + 1
enddo
The remainder will always be zero if np is an exact divisor of 8, so this loop body would never execute.

What is the purpose of the array nb()?

3.-
I saw you put this instruction inside a loop

Code:
do lb=1,nb(me+1)
  b = lb + ib(me+1)
  zonetitle='Block '//char(48+b)//nullchr
  i = teczne142(zonetitle, zonetype, ni(b), nj(b), nk(b), icellmax, jcellmax, kcellmax, &
                soltime, strandid, parentzn, isblock, nfconns, fnmode,                  &
                totnumfcnodes, numconbcfaces, totnumbcconns,                            &
                null, valueloc, null, shrconn                                           )
  pntranks = me ! process of the zone
  i = tecznemap142(npartitions, pntranks)
!
! write out the field data.
!
  allocate(x(ni(b),nj(b),nk(b)),y(ni(b),nj(b),nk(b)),z(ni(b),nj(b),nk(b)),r(ni(b),nj(b),nk(b)))

  !coordinates
  do k=1,nk(b)
    do j = 1,nj(b)
      do i = 1,ni(b)
        x(i,j,k) = (i-1.0)/(ni(b)-1.0) + xoff(b)
        y(i,j,k) = (j-1.0)/(nj(b)-1.0) + yoff(b)
        z(i,j,k) = (k-1.0)/(nk(b)-1.0) + zoff(b)
      enddo
    enddo
  enddo
  !!!!!
  do k=1,nk(b)-1
    do j = 1,nj(b)-1
      do i = 1,ni(b)-1
        r(i,j,k) = b + (i*j*k-1.0)/((ni(b)-1.0)*(nj(b)-1.0)*(nk(b)-1.0))
!        r(i,j,k) = b + (i*j*k-1.0)/(ni(b)*nj(b)*nk(b))
      enddo
    enddo
  enddo
  lvb = ni(b)*nj(b)*nk(b)
  i   = tecdat142(lvb,x,isdouble)
  i   = tecdat142(lvb,y,isdouble)
  i   = tecdat142(lvb,z,isdouble)
  lvb = (ni(b)-1)*(nj(b)-1)*(nk(b)-1)
  i   = tecdat142(lvb,r(1:ni(b)-1,1:nj(b)-1,1:nk(b)-1),isdouble)
  deallocate(x,y,z,r)
enddo
So assuming I am right, the main loop
Code:
do lb=1,nb(me+1)
will go from 1 to MyRank+1, so each rank will execute this loop a different number of times. Why do we do this? For example, if you have 8 MPI processes, then the last rank (MyRank = 7) will do the allocation, the coordinate calculations, and all the other tasks 8 times.
Why does each rank need to perform the loop instead of just executing the same instructions once, without the loop? Under that scenario, the instructions would be executed only once on each rank.

I read in the manual that for ordered data, ghost cells are not needed. Why did you use ib() if your data is structured?

4.-
The following block seems reasonable:

Code:
if( me == mainrank .and. numproc>1) then
  do ipc = 2, numproc
    do lb = 1,nb(ipc)
      b = lb + ib(ipc)
      zonetitle='Block '//char(48+b)//nullchr ! this determine zone properties even in other ranks
      i = teczne142(zonetitle, zonetype, ni(b), nj(b), nk(b), icellmax, jcellmax, kcellmax, &
                    soltime, strandid, parentzn, isblock, nfconns, fnmode,         &
                    totnumfcnodes, numconbcfaces, totnumbcconns,                   &
                    null, valueloc, null, shrconn                                  )
      pntranks = ipc-1 ! process of the zone
      i = tecznemap142(npartitions, pntranks)
    enddo
  enddo
endif
I think that the main rank needs to do some housekeeping and define the zones from each rank. Can you please briefly explain this?

5.-
As for the compilation, did you install Boost, as suggested in the manual for TecIO-MPI?
You provided the compilation instruction, but you used gfortran; should this be mpirun or mpiexec?

Thank you very much for your time and contribution.

Julio

#16 | March 13, 2019, 08:34
Andrea (artu72), New Member | Join Date: Aug 2012 | Posts: 8
Hi Julio.

I reply to your questions in the same order:

1.

Yes, I am using 3D IJK-ordered data. Each block has ni=nj=nk=2*i cells (lines 52-56), where i is the block number. So in this specific code sample we have 1296 cells, whatever the number of MPI processes.


2.

Sorry, I was wrong: np does not need to be an exact divisor of 8. These lines distribute the remaining blocks among the first available ranks. E.g., 8 blocks over 3 ranks: line 44 (gnb/numproc) gives each rank 2 blocks, and the remaining 2 blocks are then assigned to ranks 0 and 1, respectively. So
nb(1, i.e. rank 0) = 3
nb(2, i.e. rank 1) = 3
nb(3, i.e. rank 2) = 2.
Again, I stress that this is a purely fictitious sample grid, since here I only had to test the TecIO interface. nb therefore stores the number of blocks on each rank.

3.

To make things clear, let's suppose again that we have 8 blocks on 3 MPI processes. So rank 0 has blocks 1, 2, 3; rank 1 has blocks 4, 5, 6; rank 2 has blocks 7 and 8. The ib offsets are (lines 45, 49-51): rank 0: 0; rank 1: 3; rank 2: 6. Now the loop you are citing works in this way:
Rank 0: lb (the LOCAL, rank-specific block index) goes from 1 to 3 (see the nb evaluation in the first answer) and b (the GLOBAL block index across MPI processes) from 1 to 3.
Rank 1: lb goes from 1 to 3 and b from 4 to 6.
Rank 2: lb goes from 1 to 2 and b from 7 to 8.
In this way, each grid block is written exactly once across the whole MPI run.
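For illustration only, here is a small stand-alone sketch (my own condensation, not part of the original code) that reproduces the nb/ib bookkeeping above and prints which global blocks each rank would own:

Code:
program block_map
! Sketch of the block distribution used in the posted code: gnb global blocks
! split over numproc ranks; nb(rank+1) is the local count, ib(rank+1) the offset.
implicit none
integer :: gnb, numproc, me, i, lb, b
integer, allocatable :: nb(:), ib(:)

gnb = 8
numproc = 3
allocate(nb(numproc), ib(numproc))

nb = gnb / numproc                   ! base number of blocks per rank
do i = 1, mod(gnb, numproc)
  nb(i) = nb(i) + 1                  ! first ranks absorb the remainder
enddo
ib(1) = 0
do i = 2, numproc
  ib(i) = ib(i-1) + nb(i-1)          ! global block offset of each rank
enddo

do me = 0, numproc-1                 ! emulate the per-rank loop serially
  do lb = 1, nb(me+1)
    b = lb + ib(me+1)                ! global block index handled by rank me
    write(*,'(a,i0,a,i0)') 'rank ', me, ' owns block ', b
  enddo
enddo

end program block_map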

4.
Yes, that is simple. teczne142 needs to be invoked both by the main rank and by the rank the zone (block) belongs to. Obviously, if the zone is on the main rank (0 by default), the call is made only once. That is why the rank index in this code (ipc) starts from 2 (the rank is ipc-1, as you can see in the pntranks assignment).



I quote the Tecplot 360 data format manual http://download.tecplot.com/360/curr...rmat_guide.pdf (top of page 59):

"When using TecIO-MPI, [TECZNEMAP142] must be called immediately after calling TECZNE142, and must be called by the main process and by each process that will write a zone, to indicate which processes will output each partition of the zone. A zone may be written in non-partitioned fashion (that is, as in the standard TecIO library) by indicating that a single process will output it (npartitions of 1, and a one-element ptnranks array)."


5.


I have not installed Boost; it is suggested but not mandatory for building TecIO-MPI. And yes, I was wrong again: the compiler wrapper is mpif90, not gfortran.
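For completeness, a hedged example of the corrected compile and run commands, assuming the same paths as in my earlier post and that the MPI build of the library is named libteciompi (check the name produced by your own build, and adjust the paths to your installation):

Code:
mpif90 -o test test.f90 -I/opt/tecplot_2017_R2/360ex_2017r2/include/ -lteciompi -L./ -lstdc++ -lpthread
mpirun -np 8 ./test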


I hope that the situation is now clear.
