Need a workaround for making streaklines on a large case |
|
October 9, 2017, 03:52 |
Need a workaround for making streaklines on a large case
|
#1 |
Senior Member
Abe
Join Date: May 2016
Posts: 119
Rep Power: 9 |
Hi all,
I have a large decomposed case on a cluster, and a visualization node that can barely display it. When I try to produce streaklines in ParaView, it crashes with no error output. Does anyone know of a function object I can use to produce something like streaklines (not streamlines)? Alternatively, is there a way to export a small section of the case to a new case file so that I can post-process it without crashing ParaView? All ideas welcome! Thanks!
|
October 9, 2017, 04:34 |
|
#2 |
Member
Charlie Lloyd
Join Date: Feb 2016
Posts: 57
Rep Power: 10 |
Hi Abe,
You can run ParaView in parallel and create the streaklines without the GUI using mpirun and pvbatch; you just need to generate a Python script. In case you are not familiar with it: under Tools > Start Trace in ParaView there is an option to record Python commands for each of your post-processing steps. You can then run the resulting script from the terminal without the GUI.
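To give a flavour of what the trace produces, a bare-bones script looks roughly like the sketch below. The file names are placeholders, and the Slice at the end is just a stand-in so the skeleton runs end to end; your own trace will record the actual streakline filter calls for your ParaView version:

# minimal pvbatch sketch - placeholder names throughout, not a full streakline setup
from paraview.simple import *

# 'case.foam' is a dummy (empty) .foam file sitting in the case directory
reader = OpenFOAMReader(FileName='case.foam')
reader.MeshRegions = ['internalMesh']
reader.UpdatePipeline()

# step the pipeline to the last available time before filtering/exporting
times = reader.TimestepValues
if times:
    UpdatePipeline(time=times[-1], proxy=reader)

# ... the streakline filter your trace recorded would go here ...
# stand-in filter + export so the skeleton actually writes a file:
slice1 = Slice(Input=reader)
slice1.SliceType = 'Plane'
slice1.SliceType.Origin = [0.0, 0.0, 0.0]
slice1.SliceType.Normal = [0.0, 0.0, 1.0]
SaveData('slice_output.vtm', proxy=slice1)

Save it as, say, trace_output.py and run it with pvbatch trace_output.py (or through mpirun for the parallel case).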
|
October 9, 2017, 17:43 |
Re:
|
#3 |
Senior Member
Abe
Join Date: May 2016
Posts: 119
Rep Power: 9 |
Thanks for the response. I have actually tried the method you are suggesting, but I am doing something wrong, because I get a core dump when I try to run the output of the trace. I tried it on a small serial case just to start.
Honestly, I am not really a power user when it comes to ParaView / pvbatch. Do you know where I can find a tutorial on doing this for OpenFOAM cases? Is it as simple as running pvbatch trace_output.py ?
|
October 9, 2017, 19:40 |
--mesa
|
#4 |
Senior Member
Abe
Join Date: May 2016
Posts: 119
Rep Power: 9 |
Well, it turns out that the installation on the cluster I am using requires the --mesa option because of an OpenGL issue. Otherwise pvbatch is surprisingly easy to use; it just worked once I added MPI.
Thanks for convincing me to revisit the idea!
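For anyone who finds this later: on this particular installation the invocation ended up looking roughly like
mpirun -np 8 pvbatch --mesa trace_output.py
where the launcher, process count and script name are of course placeholders for whatever your cluster and your trace use.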
|
February 1, 2018, 08:15 |
|
#5 |
New Member
João Duarte Miranda
Join Date: Jan 2012
Posts: 13
Rep Power: 14 |
Dear KTG,
I am having some trouble with parallel pvbatch as well. I have a decomposed case and the script seems to run fine; however, at the end my export only shows one of the partitions and not the whole case. Can you please let me know if you had any similar problems? Thanks a lot. Best wishes!
|
February 1, 2018, 08:22 |
|
#6 |
Member
Charlie Lloyd
Join Date: Feb 2016
Posts: 57
Rep Power: 10 |
Joao,
Have you ensured that you have specified the case type as decomposed? Something like:
case = OpenFOAMReader(FileName='./caseName.foam')
case.CaseType = 'Decomposed Case'
I generally generate the Python scripts using the ParaView tracing function and then edit the output script to make it more general for different cases.
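Spelled out a little more, that reader setup looks roughly like the sketch below. The .foam file name is a placeholder and the print at the end is just a quick sanity check, not part of the workflow:

# sketch of the reader setup for a decomposed case (pvbatch / paraview.simple)
from paraview.simple import *

case = OpenFOAMReader(FileName='./caseName.foam')  # empty .foam file in the case root
case.CaseType = 'Decomposed Case'                  # read the processor*/ directories directly
case.MeshRegions = ['internalMesh']
case.UpdatePipeline()
print(case.TimestepValues)                         # check that all time steps are picked up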
|
February 1, 2018, 09:24 |
|
#7 |
New Member
João Duarte Miranda
Join Date: Jan 2012
Posts: 13
Rep Power: 14 |
Dear Charlie,
First of all, thanks for your fast answer! Indeed I have specified the decomposed case and I can see the different processes running. My problem is that the generated .x3d file only contains one of the processor partitions. The same code works fine on a single-processor case that has not been decomposed. I am running a case with 2 processors:
mpiexec -n 2 pvbatch --mpi --parallel --use-offscreen-rendering MakeFiles.py
If you have any other suggestions they are most welcome. Thanks once again. Best wishes, Joao
|
February 1, 2018, 23:18 |
|
#8 |
Senior Member
Abe
Join Date: May 2016
Posts: 119
Rep Power: 9 |
Hi Joao,
I wish I had a good answer for you! I actually abandoned the figure I was working on because I could not get good results - some weird jumps kept happening that did not make sense. Honestly, I never figured out what was going on under the hood with pvbatch; I got it working in serial but never managed to scale it up to the larger case I was working on. The questionable output I got using MPI took forever - it may be that I mistakenly ran a bunch of serial jobs through MPI. I didn't have the same trouble you are having. I think I ended up with VTK files rather than .x3d - I don't remember, sorry. If you get it working, it would be cool to see some working pvbatch example code - let us know how it goes! If you have a small case file you want to post, I can try running it.
|