
decomposing a grid in parallel


 
 
January 23, 2014, 08:09   #1
deeps
New Member
Join Date: Nov 2009
Posts: 19
Dear All,

If we want to run a simulation in parallel, we can do so in SU2 using the following:

parallel_computation.py -f file -p np
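
For concreteness, a call might look like the sketch below (the configuration file name and processor count are placeholders, not my actual case):

# sketch only: full parallel run through the Python wrapper
# -f : SU2 configuration (.cfg) file, -p : number of MPI processes
parallel_computation.py -f my_case.cfg -p 20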

How about only partitioning a grid file in parallel? Let's say I have a 21M grid and I want to decompose it into 40 parts using 20 processors. Because of the large file size, attempting this on a single processor does not run at all due to memory issues. I also tried:

mpirun -n 40 SU2_DDC file_name

but the partitioning is still done by a single processor (40 processes are shown as running, yet all of the memory is consumed by just one of them).
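
For completeness, my attempt looked roughly like the line below (the file name is a placeholder, and I am not certain whether SU2_DDC expects the configuration file or the mesh file itself as its argument):

# sketch only: attempted standalone decomposition into 40 parts;
# in practice only one of the 40 ranks appears to do any work or use memory
mpirun -n 40 SU2_DDC my_case.cfg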

Is there a way to partition a grid in parallel across n processors? This would help when dealing with large grids.

Thanks and regards,
Deepanshu.

 





