
SU2 v 7.0.2 not writing solution files

Old   March 5, 2020, 07:21
Default SU2 v 7.0.2 not writing solution files
  #1
New Member
 
Join Date: Feb 2018
Posts: 21
Rep Power: 6
CarlosLozano is on a distinguished road
Hi,
I've just switched to v7.0.2 and I am having trouble with the solution files. I am running the NACA 0012 laminar test case (with the .cfg file in
TestCases\cont_adj_navierstokes\naca0012_sub) on 1 processor with the MPI binary executable for Windows, and the code only writes the restart (.csv) and history (.dat) files, but not the solution files (flow and surface_flow). When I execute SU2_SOL it does nothing.
Any advice?
CarlosLozano is offline   Reply With Quote

Old   March 5, 2020, 08:39
Default
  #2
pcg
Senior Member
 
Pedro Gomes
Join Date: Dec 2017
Posts: 402
Rep Power: 10
pcg is on a distinguished road
The behaviour of the "OUTPUT_FILES" option is documented here: https://su2code.github.io/docs_v7/Custom-Output/
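For reference, a minimal sketch of that option in a v7 config file (the particular formats chosen here are illustrative, not taken from your test case):

```
% SU2 v7: SU2_CFD writes the visualization files directly, so SU2_SOL
% is no longer part of the normal workflow.
% RESTART produces the .csv restart; PARAVIEW / SURFACE_PARAVIEW produce
% the volume and surface solution files.
OUTPUT_FILES= (RESTART, PARAVIEW, SURFACE_PARAVIEW)
```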
pcg is offline   Reply With Quote

Old   March 5, 2020, 09:16
Default
  #3
New Member
 
Join Date: Feb 2018
Posts: 21
Rep Power: 6
CarlosLozano is on a distinguished road
Quote:
Originally Posted by pcg View Post
The behaviour of the "OUTPUT_FILES" option is documented here: https://su2code.github.io/docs_v7/Custom-Output/
Thanks! It seems this has changed considerably from previous versions.
CarlosLozano is offline   Reply With Quote

Old   March 5, 2020, 09:26
Default
  #4
pcg
Senior Member
 
Pedro Gomes
Join Date: Dec 2017
Posts: 402
Rep Power: 10
pcg is on a distinguished road
Yup, whenever the major version number increases, you can expect significant changes and loss of backwards compatibility.
pcg is offline   Reply With Quote

Old   March 6, 2020, 10:41
Default
  #5
New Member
 
cfdjetman
Join Date: Mar 2019
Posts: 25
Rep Power: 5
cfdjetman is on a distinguished road
Has anyone had problems writing the surface and flow files when they use more than 16 processors using SU2 v7.0.0?


When I use more than 16 processors, the surface and flow files contain incomplete data. I had the same issue when using the shape_optimization.py script; in that case the deformed mesh file is incomplete and therefore my optimization stops.
cfdjetman is offline   Reply With Quote

Old   March 11, 2020, 04:06
Default
  #6
pcg
Senior Member
 
Pedro Gomes
Join Date: Dec 2017
Posts: 402
Rep Power: 10
pcg is on a distinguished road
That issue rings a bell. Try 7.0.2; we have monthly releases now so that we can distribute small fixes as early as possible.
pcg is offline   Reply With Quote

Old   March 14, 2020, 03:24
Default
  #7
New Member
 
cfdjetman
Join Date: Mar 2019
Posts: 25
Rep Power: 5
cfdjetman is on a distinguished road
Unfortunately, I have the same problem after installing 7.0.2.
cfdjetman is offline   Reply With Quote

Old   March 14, 2020, 05:56
Default
  #8
pcg
Senior Member
 
Pedro Gomes
Join Date: Dec 2017
Posts: 402
Rep Power: 10
pcg is on a distinguished road
Can you give a concrete example? I have never had that problem before, and I just ran the Quick Start case on 64 cores and the files seem OK.
pcg is offline   Reply With Quote

Old   March 15, 2020, 15:18
Default
  #9
New Member
 
cfdjetman
Join Date: Mar 2019
Posts: 25
Rep Power: 5
cfdjetman is on a distinguished road
I use a cluster to run my simulations. Each node on the cluster has 16 processors. When I use more than 1 node, that’s when the problem begins.

So when I run an airfoil CFD case using more than 1 node, the surface file only writes out values from 0 to 0.3 of the chord, not all the way to a chord length of 1. This does not happen when I use 1 node. When I try to open the flow.dat file in Tecplot, it will not open because some data is missing.

When I run the airfoil optimization case, it runs the first design iteration, then deforms the mesh and writes the mesh_deform.su2 file. This file does not contain the NPOIN section of the mesh file, hence SU2 cannot run the CFD simulation for the next design iteration.
cfdjetman is offline   Reply With Quote

Old   March 16, 2020, 03:19
Default
  #10
pcg
Senior Member
 
Pedro Gomes
Join Date: Dec 2017
Posts: 402
Rep Power: 10
pcg is on a distinguished road
Is this only a problem when you use the Python scripts, or also if you launch SU2_CFD/SU2_DEF directly?
Conceptually there is no difference between what the code does when running on one or multiple nodes; MPI makes all of that transparent.
That being said, I have never tried the Python scripts in a multi-node environment (I always assumed they would not work, as they are not "MPI-ready").
pcg is offline   Reply With Quote

Old   March 16, 2020, 11:29
Default
  #11
New Member
 
cfdjetman
Join Date: Mar 2019
Posts: 25
Rep Power: 5
cfdjetman is on a distinguished road
I have the same problem when I run SU2_CFD/DEF using more than one node.
cfdjetman is offline   Reply With Quote

Old   April 1, 2020, 20:48
Default
  #12
New Member
 
cfdjetman
Join Date: Mar 2019
Posts: 25
Rep Power: 5
cfdjetman is on a distinguished road
Pedro Gomes,

Do you see any reason why this could be happening?
cfdjetman is offline   Reply With Quote

Old   April 3, 2020, 04:10
Default
  #13
pcg
Senior Member
 
Pedro Gomes
Join Date: Dec 2017
Posts: 402
Rep Power: 10
pcg is on a distinguished road
If no output format is working, I would guess you have an issue with the file system, or the way you prepare the working directory is not adequate (maybe some nodes do not have write permissions).
But I've never set up distributed systems... when I have issues I go to the people who maintain the system.

If that is not an option, and you are comfortable programming, edit SU2_CFD.cpp and add instructions to print the MPI rank to screen and to create a file named after the rank. On screen you should see the numbers 0 to n-1; if numbers repeat, you are launching two simulations. On disk you should see files 0 to n-1; if some are missing, they either failed to open (which you can detect when opening the file in C++) or got lost.

If some output formats work and others don't, please open an issue on GitHub.
pcg is offline   Reply With Quote

Old   April 18, 2020, 10:06
Default
  #14
Member
 
Zach Davis
Join Date: Jan 2010
Location: Huntsville, AL
Posts: 98
Rep Power: 14
RcktMan77 is on a distinguished road
I don't mean to hijack this thread. I have a similar issue, but it seems isolated to the Tecplot binary writer in SU2 v7.0.3. I can write the volume and surface solution files just fine with OUTPUT_FILES= ( RESTART, PARAVIEW, SURFACE_PARAVIEW ). However, if I use the Tecplot writer, OUTPUT_FILES= ( RESTART, TECPLOT, SURFACE_TECPLOT ), then SU2_CFD quits with a cryptic error: Error 137.

I have been able to run the 2D Quick Start example case with the Tecplot writer, and I do get a flow.szplt volume solution in that case. I'm not sure whether the parallel Tecplot writer is running out of memory in my much larger 3-D parallel case, or whether something else is going on. I would have expected it to error with a segmentation fault if memory were the issue, but I don't see that here.

If anyone else has experienced anything similar with the Tecplot binary writer in recent releases of SU2, with large grids run in parallel, I would be interested to know. For now, I'll continue with the ParaView file format until I have a system with much more memory to test on.
RcktMan77 is offline   Reply With Quote

Old   June 13, 2020, 03:41
Default
  #15
New Member
 
cfdjetman
Join Date: Mar 2019
Posts: 25
Rep Power: 5
cfdjetman is on a distinguished road
Hi Zach Davis,


Have you been able to fix your issue? I changed my OUTPUT_FILES to ( RESTART, PARAVIEW, SURFACE_PARAVIEW) to see if this would resolve my problem, but I have the same issue as before. I am using SU2 v7.0.2.
cfdjetman is offline   Reply With Quote

Old   June 13, 2020, 19:37
Default
  #16
New Member
 
Hernán David Cogollo
Join Date: Jun 2020
Location: Bogotá, Colombia
Posts: 6
Rep Power: 3
hdavidcogollo is on a distinguished road
Hi everyone.

I'm trying to run SU2 on Windows. The output files should be ".vtk", but the files I obtain are ".vtu". I don't know the reason for this, and I would like to know how to resolve it.
Attached Images
File Type: jpg Anotación 2020-06-13 183534.jpg (125.5 KB, 17 views)
hdavidcogollo is offline   Reply With Quote

Old   July 17, 2021, 01:27
Default
  #17
Senior Member
 
Arijit Saha
Join Date: Feb 2019
Location: Germany
Posts: 127
Rep Power: 5
ari003 is on a distinguished road
Quote:
Originally Posted by hdavidcogollo View Post
Hi everyone.

I'm trying to run SU2 on Windows. The output files should be ".vtk", but the files I obtain are ".vtu". I don't know the reason for this, and I would like to know how to resolve it.
Set this in your .cfg file: OUTPUT_FILES = ( RESTART, PARAVIEW_ASCII, SURFACE_PARAVIEW_ASCII )
ari003 is offline   Reply With Quote

Old   August 3, 2021, 11:01
Default
  #18
Senior Member
 
Pay D.
Join Date: Aug 2011
Posts: 163
Blog Entries: 1
Rep Power: 12
pdp.aero is on a distinguished road
Hi guys,


I have been experiencing some issues similar to those reported here.

I have 24 cores per node, and when I launched an optimization problem on more than one node (say 72 cores) I was getting errors. I also experienced what Zach Davis described with writing Tecplot binaries for a medium-size mesh, and had to switch to ASCII: OUTPUT_FILES = (RESTART_ASCII, TECPLOT_ASCII, SURFACE_TECPLOT_ASCII)

But now I am randomly getting this:


MPI_ABORT was invoked on rank 23 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[node229:197058] PMIX ERROR: UNREACHABLE in file server/pmix_server.c at line 2193
(message repeated 20 times)
[node229:197058] 71 more processes have sent help message help-mpi-btl-openib.txt / ib port not selected
[node229:197058] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[node229:197058] 71 more processes have sent help message help-mpi-btl-openib.txt / error in device init
[node229:197058] 71 more processes have sent help message help-mpi-api.txt / mpi-abort


This happens randomly during the optimization cycle. If I navigate to the same directory and launch the CFD job manually, instead of letting FADO do it, everything goes fine.

It seems to happen out of the blue. It occurred during the primal solution of the second design iteration; I turned off symlinks in FADO and it disappeared, then reappeared in the fifth design iteration, again during the primal solution. Any ideas?


Best,
Pay


p.s. this is SU2 v 7.1.1

Last edited by pdp.aero; August 3, 2021 at 11:13. Reason: version
pdp.aero is offline   Reply With Quote

Old   August 19, 2021, 09:43
Default
  #19
Senior Member
 
Pay D.
Join Date: Aug 2011
Posts: 163
Blog Entries: 1
Rep Power: 12
pdp.aero is on a distinguished road
Quote:
Originally Posted by pdp.aero View Post
I have been experiencing some issues similar to those reported here... But now I am randomly getting this: "MPI_ABORT was invoked on rank 23 in communicator MPI_COMM_WORLD with errorcode 1", followed by repeated PMIX UNREACHABLE errors (full log in the post above). This happens randomly during the optimization cycle... p.s. this is SU2 v7.1.1



Hi there,


I just want to point out that the above error had to do with MPI rather than SU2. It was solved by:


export OMPI_MCA_btl_openib_allow_ib=1
export OMPI_MCA_btl_openib_if_include="mlx5_0:1"
export OMPI_MCA_btl=self,tcp


Now everything is running fine, even with 300 cores.
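For anyone hitting the same thing: with Open MPI, each OMPI_MCA_<name> environment variable is equivalent to an --mca <name> <value> flag, so the same fix can be passed on the mpirun command line instead (core count and config name below are illustrative):

```
mpirun --mca btl self,tcp \
       --mca btl_openib_allow_ib 1 \
       --mca btl_openib_if_include "mlx5_0:1" \
       -n 300 SU2_CFD config.cfg
```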



Cheers,
Pay
pdp.aero is offline   Reply With Quote
