
Need a workaround for making streaklines on a large case

#1 | October 9, 2017, 03:52
KTG (Abe), Senior Member
Join Date: May 2016 | Posts: 119
Hi all,

I have a large decomposed case on a cluster, and a visualization node that can barely display it. When I try to produce streaklines in ParaView, it crashes with no error output. Does anyone know if there is a function object I can use to produce something like streaklines (not streamlines)? Alternatively, is there a way to export a small section of the case to a new case file so that I can post-process it without crashing ParaView?


All ideas welcome!

Thanks!

#2 | October 9, 2017, 04:34
C-L (Charlie Lloyd), Member
Join Date: Feb 2016 | Posts: 57
Hi Abe,

You can run ParaView in parallel and create the streaklines without the GUI using mpirun and pvbatch. You just need to generate a Python script! Not sure if you are familiar with it, but under Tools > Start Trace in ParaView you have the option to record Python commands for each of your post-processing steps. You can then run the resulting script in the terminal without the GUI.
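
For anyone searching later, a traced script for an OpenFOAM case tends to boil down to something like the sketch below. This is only a rough hand-written example, not an actual trace output: the file names and seed settings are placeholders, and the StreakLine call in particular is written from memory, so the property names may differ between ParaView versions. Take the real calls from your own trace.

# sketch_streaklines.py - rough outline of a traced pvbatch script
from paraview.simple import *

# read the (decomposed) OpenFOAM case
case = OpenFOAMReader(FileName='./case.foam')
case.CaseType = 'Decomposed Case'

# seed points for the streaklines (position and radius are placeholders)
seeds = PointSource(Center=[0.0, 0.0, 0.0], NumberOfPoints=50, Radius=0.01)

# temporal StreakLine filter (not StreamTracer) - check the exact
# property names against your own trace output
streaks = StreakLine(Input=case, SeedSource=seeds)

# advance through all time steps so the streaks build up
scene = GetAnimationScene()
scene.UpdateAnimationUsingDataTimeSteps()
scene.GoToLast()

# render off screen and save an image
view = GetActiveViewOrCreate('RenderView')
Show(streaks, view)
Render()
SaveScreenshot('streaklines.png', view)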

#3 | October 9, 2017, 17:43
KTG (Abe), Senior Member
Join Date: May 2016 | Posts: 119
Thanks for the response. I have actually tried the method you are suggesting, but am doing something wrong, because I get a core dump when I try to run the output of the trace. I tried it on a small serial case just to start.

Honestly, I am not really a power user when it comes to ParaView / pvbatch. Do you know where I can find a tutorial on doing this for OpenFOAM cases? Is it as simple as running

pvbatch trace_output.py ?

#4 | October 9, 2017, 19:40 | --mesa
KTG (Abe), Senior Member
Join Date: May 2016 | Posts: 119
Well, it turns out that the installation on the cluster I am using requires the --mesa option because of an OpenGL issue. Otherwise pvbatch is surprisingly easy to use; it just worked once I added MPI.
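
In case it helps anyone else, the invocation ended up being along these lines (the core count and script name are placeholders, and --mesa is specific to builds that ship a software-rendering backend):

mpirun -np 4 pvbatch --mesa trace_output.py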

Thanks for convincing me to re-visit the idea!

#5 | February 1, 2018, 08:15
JoaoDMiranda (João Duarte Miranda), New Member
Join Date: Jan 2012 | Posts: 13
Dear KTG,

I am having some trouble with parallel pvbatch as well. I have a decomposed case and the script seems to run fine; however, at the end my export only shows one of the partitions and not the whole case. Can you please let me know if you ran into any similar problems?

Thanks a lot.

Best wishes!

#6 | February 1, 2018, 08:22
C-L (Charlie Lloyd), Member
Join Date: Feb 2016 | Posts: 57
Joao,

Have you ensured that you have specified the case type as decomposed?

Something like:

case = OpenFOAMReader(FileName='./caseName.foam')
case.CaseType = 'Decomposed Case'

I generally generate the Python scripts using the ParaView tracing function and then edit the output script to make it more general for different cases.
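
As an illustration of that (not the actual script), one way to generalise a traced script is to take the case path from the command line, since pvbatch passes extra arguments through to sys.argv. The reader properties below are the ones a trace normally emits for the OpenFOAM reader, but they are worth double-checking against your own trace output:

# make_figure.py - run as:  pvbatch make_figure.py ./caseName.foam
import sys
from paraview.simple import *

foam_file = sys.argv[1] if len(sys.argv) > 1 else './case.foam'

case = OpenFOAMReader(FileName=foam_file)
case.CaseType = 'Decomposed Case'      # or 'Reconstructed Case'
case.MeshRegions = ['internalMesh']
case.CellArrays = ['U', 'p']           # only the fields you actually need

# ... paste the filter and export calls from your trace here ...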

#7 | February 1, 2018, 09:24
JoaoDMiranda (João Duarte Miranda), New Member
Join Date: Jan 2012 | Posts: 13
Dear Charlie,

First of all, thanks for your fast answer!
Indeed, I have specified the decomposed case and I can see the different processes running.

My problem is that the generated .x3d file only contains the data from one of the processors. The same script works fine when the case is run on a single processor without being decomposed.

I am running a case decomposed over 2 processors:

mpiexec -n 2 pvbatch --mpi --parallel --use-offscreen-rendering MakeFiles.py

If you have any other suggestions, they are most welcome.
Thanks once again.

Best wishes,

Joao

#8 | February 1, 2018, 23:18
KTG (Abe), Senior Member
Join Date: May 2016 | Posts: 119
Hi Joao,

I wish I had a good answer for you! I actually abandoned the figure I was working on because I could not get good results; some weird jumps kept happening that did not make sense. Honestly, I could never figure out what was going on under the hood with pvbatch. I got it working in serial but never managed to scale it up to the larger case I was working on. The questionable output I got using MPI took forever; it may be that I mistakenly ran a bunch of serial jobs through mpirun. I didn't have the same trouble you are having, though. I think I ended up with VTK files rather than .x3d, but I don't remember, sorry.

If you get it working, it would be cool to see some working pvbatch example code - let us know how it goes! If you have a small case file you want to post I can try running it.
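
For what it's worth, one way to end up with VTK files rather than .x3d is to write the filter output directly with SaveData instead of exporting the rendered scene. A rough sketch, where 'streaks' stands in for whatever filter's output you want to keep:

# write the filter output as a multiblock VTK file from the pvbatch script
SaveData('streaklines.vtm', proxy=streaks)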
