CFD Online Discussion Forums - SU2
Thread: merge_grid_su2.py overwrote original grid with zero size
https://www.cfd-online.com/Forums/su2/112481-merge_grid_su2-py-overwrote-original-grid-zero-size.html

jentink January 29, 2013 14:08

merge_grid_su2.py overwrote original grid with zero size
 
I was testing things in the euler/naca0012 directory.

I finally got a parallel run to work, and ended up with the partitioned grid files in the directory after the run.

I tried using merge_grid_su2.py to combine them; it seemed to run fine with no errors, but it wrote a zero-size file.

Anyone else experience this?

I've been having python issues, but I think most are figured out at this point.

economon January 29, 2013 15:37

Currently, when running a parallel solution, SU2_DDC partitions the original mesh and writes the requested number of sub-grids (gridname_1.su2, gridname_2.su2, etc.), but it also leaves the original mesh in the directory. While the solution files (.vtk or .plt) must be merged at the termination of the solver (this is handled automatically by the parallel_computation.py script), there is no need to merge the partitioned mesh files.
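
For reference, a typical parallel run of the NACA 0012 case with the supplied scripts looks something like the following (the config file name and the -f/-p options are just an example of how the script is usually called; check the script's help output for your version):

Code:
# partitions the mesh with SU2_DDC, runs SU2_CFD on 8 partitions,
# and merges the solution files at the end
parallel_computation.py -f inv_NACA0012.cfg -p 8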

However, if one is interested in performing the full design loop in parallel (flow solution->adjoint solution->gradient projection->optimizer->mesh deformation), then the meshes will be deformed in parallel between design cycles. These deformed partitions can then be merged at the end of the design process using the merge_grid_su2.py script, as they make up a grid that is indeed different from the original.
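
If you do go down that route, the merge step would typically come right after the parallel deformation, along the lines of the sketch below (treat the command-line flags as placeholders; the exact options may differ in your version of the scripts):

Code:
# deform the partitioned meshes in parallel
# (writes e.g. mesh_out_1.su2, mesh_out_2.su2, ...)
parallel_deformation.py -f config.cfg -p 8

# recombine the deformed partitions into a single mesh file
merge_grid_su2.py -f config.cfg -p 8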

jentink January 30, 2013 09:20

I was merely interested in using the merge capability to try things out with the 8 grids the partitioning process left.

I should be able to use merge_grid_su2.py to do this, right? I ran it with no errors, but I ended up with a merged grid of zero size.

jentink January 30, 2013 09:23

I haven't had much luck getting anything else working with the NACA 0012 test case except for the standard CFD run in parallel.

So I was just playing around and thought I'd try to merge the 8 grids I got when running the parallel case.

economon February 1, 2013 01:24

Hi Tom,

Just wanted to follow up on this. Sorry you're having trouble getting things working with the NACA 0012 case. Are there any other errors or specific problems you'd like to report?

As for the grid merging, you can indeed do this after a parallel run (although, as I mention above, it shouldn't be necessary, since the original mesh is not removed during partitioning).

The merge_grid_su2.py script is written to work inside of the parallel_deformation.py script, and it therefore assumes that the file names for the original mesh and the newly deformed partitions are different (e.g., mesh_NACA0012_inv.su2 might be the original, while mesh_out_1.su2, mesh_out_2.su2, etc. might be the output from SU2_MDC in parallel).

To merge after a parallel run, you could make a separate copy of the original mesh with a different name, run the parallel computation, and then change the MESH_FILENAME and MESH_OUT_FILENAME options to the name of the copied original mesh file and the root of the partitioned mesh files (without the "_*" suffix), respectively.

In short, merge_grid_su2.py expects the partitioned meshes to have a different root filename than the original mesh; if they are the same, the original mesh file will be overwritten with a zero-size file, as you noted.
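
To make that concrete, the sequence for the NACA 0012 case would look roughly like this (the config file name and the command-line flags are placeholders from memory; check the scripts themselves for the exact options):

Code:
# keep a copy of the original mesh under a different name
cp mesh_NACA0012_inv.su2 mesh_NACA0012_inv_orig.su2

# run the parallel computation as before; SU2_DDC writes the partitions
# mesh_NACA0012_inv_1.su2, mesh_NACA0012_inv_2.su2, ...
parallel_computation.py -f inv_NACA0012.cfg -p 8

# in the config file, point the merge at the copied original and at the
# root name of the partitioned files (no "_*" suffix):
#   MESH_FILENAME= mesh_NACA0012_inv_orig.su2
#   MESH_OUT_FILENAME= mesh_NACA0012_inv.su2
merge_grid_su2.py -f inv_NACA0012.cfg -p 8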

I hope this clears things up!

jentink February 1, 2013 10:08

Thanks. I'll try that.

I found out, though, that my 'parallel' runs were really just the same job repeated 8 times (8 = ncpu), and it took about twice as long as the serial run.

So now I need to figure out how to get the parallel job working properly on the cluster I'm using (mpich2_intel with a PBS queueing system).
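
For anyone with a similar setup, a bare-bones PBS script for this kind of run would look something like the following (the module line and resource requests are site-specific placeholders, and it is assumed that parallel_computation.py launches SU2_CFD through mpirun itself):

Code:
#!/bin/bash
#PBS -N naca0012_par
#PBS -l nodes=1:ppn=8
#PBS -l walltime=01:00:00
#PBS -j oe

cd $PBS_O_WORKDIR

# load the same MPI stack SU2 was compiled against (site-specific)
# module load mpich2-intel

# the wrapper script invokes mpirun on SU2_CFD itself, so it is called
# directly here rather than through mpiexec
parallel_computation.py -f inv_NACA0012.cfg -p 8

If all 8 ranks end up running the same serial job, that usually means SU2_CFD was built without MPI support, or that a different mpirun is being picked up at run time than the one it was built against.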

