January 18, 2013, 05:26
Paraview Free/Delete all memory
#1
New Member
Join Date: Feb 2012
Posts: 13
Rep Power: 14
Hello,
I am having difficulty removing all the data from ParaView. After loading several time steps of a .pvd file (multi-block data) using a Python script, the PC runs out of memory and ParaView crashes. After each time step I delete all the variables with "Delete(var)" and "del var", but the problem remains. The same happens if I do Edit/Delete All: according to the Memory Inspector, the memory is still used by the "client"... The only way I have found to free the memory is to kill the paraview process from a terminal, or to kill the client in the Memory Inspector; however, that also closes ParaView. Does anyone know how to free all the memory between the first time step and the second load? Maybe there is a way using server connections. My idea would be: 1) create a connection; 2) load the data and run a job; 3) close the server before reopening a new one to compute the second load... Do you have a solution, suggestions or other ideas? Have a good day, Eric
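A side note on the "Delete(var)" and "del var" point: in Python, del only unbinds the name; the object itself is freed only once no other references to it remain, and in ParaView the server-side object additionally needs its own Delete() call. Here is a minimal pure-Python sketch of the name-vs-object distinction (the Dataset class and pipeline list are hypothetical stand-ins, not ParaView objects):

```python
import gc
import weakref

class Dataset(object):
    """Hypothetical stand-in for a large object held by a pipeline."""
    pass

ds = Dataset()
pipeline = [ds]          # a second reference, like a pipeline holding the proxy
probe = weakref.ref(ds)  # lets us observe whether the object is still alive

del ds                   # removes the *name* only
gc.collect()
print(probe() is not None)  # True: the pipeline still references the object

pipeline.clear()         # drop the remaining reference (like calling Delete())
gc.collect()
print(probe() is None)   # True: now the object is actually gone
```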
January 19, 2013, 14:54
#2
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,975
Blog Entries: 45
Rep Power: 128
Greetings Eric,
I know that ParaView has been known to have memory-leak problems, but the last reports I've seen indicate that ParaView 3.98.0 no longer has this issue... it's discussed here: http://www.cfd-online.com/Forums/ope...ls-my-ram.html Best regards, Bruno
January 20, 2013, 07:34
#3
New Member
Join Date: Feb 2012
Posts: 13
Rep Power: 14
Dear Bruno,
Thank you very much for your comment. Indeed, I had read that too, and I am currently using that version of ParaView (3.98) because of this problem, and because it has additional export options in vector formats... Nevertheless, I still get memory leaks with this version, perhaps because I am controlling ParaView from Python scripts... In fact, I am using scripts because I could not find certain capabilities in the GUI, such as automatically saving contours to new VTK files using filters, or computing and saving the area of surfaces or volumes, but I know how to do that with scripts... Thanks again and best regards, Eric
September 12, 2015, 18:31
#4
New Member
Join Date: Jul 2013
Posts: 6
Rep Power: 12
Dear Foamers,
this discussion is a little old, but maybe somebody is still interested in the topic. I'm using ParaView 4.1.0 (the version that comes with OpenFOAM 2.4.0) and I'm trying to export some data in batch (from VTK files to CSV files) using pvpython. The script loops over some thousands of files in several directories, and after 6000-7000 files a memory-leak error occurs, which is quite annoying considering that I'm using a 32 GB workstation... In the script loop I delete all the objects generated within a single iteration to try to free memory, but this doesn't help. Has anybody found a solution to this memory-leak issue? Thanks and best regards, Cristian
September 13, 2015, 15:50
#5
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,975
Blog Entries: 45
Rep Power: 128
Quick answer: without example code that reproduces the error, it's pretty hard to diagnose the issue. Either way, have you seen this answer: http://stackoverflow.com/a/26514796 ?
September 13, 2015, 18:02
#6
New Member
Join Date: Jul 2013
Posts: 6
Rep Power: 12
Dear wyldckat,
thanks for your reply. I was already aware of the solution proposed in the link you posted; I actually found it on the main ParaView wiki: http://www.paraview.org/Wiki/ParaView/Python_Scripting I am not a Python expert, and maybe I made some mistakes with the Delete() and del calls. I report my script here (I hope I am doing it the right way): Code:
#Script used to export csv from vtk datafiles
import fnmatch
import os
import sys
import re
from paraview.simple import *

#servermanager.Connect()

# Directories where to run
#dirs = ["01mm", "02mm", "03mm", "04mm", "05mm"]
dirs = ["01mm"]

# VTK file names
rhoBasename = "rho_sampleSurf_"
rhoOutName = "rho_coord."

# Generate the list of files on which to operate
# and the corresponding output csv names
files = []
for dd in dirs:
    for ff in os.listdir("./"+dd):
        if fnmatch.fnmatch(ff, rhoBasename+"*"):
            nameRead = "./" + dd + "/" + ff
            numb = (re.findall("\d+", nameRead))[1]
            nameWrite = "./" + dd + "/" + rhoOutName + numb + ".csv"
            files.append([nameRead, nameWrite])
#print files
#sys.exit()

print "Start export"

zz = 1
for ff in files:
    if (zz%1000==0):
        print "written " + str(zz) + " csv files\n"
    # Open the correct file
    reader = OpenDataFile(ff[0])
    # Apply filters to it
    pd2cd = PointDatatoCellData(Input = reader)
    cc = CellCenters(Input = pd2cd)
    # Prepare the output name and write the file
    writer = CreateWriter(ff[1], cc)
    writer.WriteAllTimeSteps = 1  # NO NEED IF LOOP DONE EXPLICITLY
    writer.FieldAssociation = "Points"
    writer.UpdatePipeline()
    # Delete the generated objects
    del writer
    cc.Input = []
    Delete(cc)
    del cc
    pd2cd.Input = []
    Delete(pd2cd)
    del pd2cd
    Delete(reader)
    del reader
    zz += 1

print "Export completed"
Thanks for the help,
Best regards,
Cristian
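As an aside, the filename bookkeeping in the script can be checked in isolation. This sketch (the csv_name helper is hypothetical, but it follows the same rho_sampleSurf_<N>.vtk naming convention) shows what the re.findall call extracts: index [1] picks the second run of digits in the path, i.e. the timestep, because the directory name ("01mm") also contains digits:

```python
import re

def csv_name(vtk_path, out_base="rho_coord."):
    """Map './01mm/rho_sampleSurf_2576.vtk' to './01mm/rho_coord.2576.csv'.

    Index [1] picks the second run of digits: [0] is the '01' in the
    directory name, [1] is the timestep number in the file name.
    """
    numbers = re.findall(r"\d+", vtk_path)
    directory = vtk_path.rsplit("/", 1)[0]
    return directory + "/" + out_base + numbers[1] + ".csv"

print(csv_name("./01mm/rho_sampleSurf_2576.vtk"))
# ./01mm/rho_coord.2576.csv
```

Note that this indexing would break if the file name itself contained an extra number before the timestep, which is worth keeping in mind when reusing the pattern.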
September 15, 2015, 07:03
#7
Senior Member
Mikko
Join Date: Jul 2014
Location: The Hague, The Netherlands
Posts: 243
Rep Power: 12
Hi Cristian,
Probably a better way to loop over the files is to build the pipeline once and then change only the reader's and writer's file names on each iteration. Here is an example script: Code:
reader = OpenDataFile(files[0][0])
pd2cd = PointDatatoCellData(Input = reader)
cc = CellCenters(Input = pd2cd)
writer = CreateWriter(files[0][1], cc)
writer.FieldAssociation = "Points"

for ff in files:
    reader.FileNames = ff[0]
    writer.FileName = ff[1]
    writer.UpdatePipeline()
-Mikko
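To see why this build-once pattern avoids the churn of the original loop, here is a pure-Python sketch with toy Reader/Writer classes (hypothetical stand-ins, not the real ParaView proxies): the objects are created a single time, and inside the loop only their file-name attributes change, so nothing is allocated or destroyed per iteration.

```python
class Reader(object):
    """Toy stand-in for a file reader: just holds a file name."""
    def __init__(self, filename):
        self.FileNames = filename

class Writer(object):
    """Toy stand-in for a writer: records what it would have written."""
    def __init__(self, filename, inp):
        self.FileName = filename
        self.Input = inp
        self.written = []
    def UpdatePipeline(self):
        self.written.append((self.Input.FileNames, self.FileName))

files = [["a.vtk", "a.csv"], ["b.vtk", "b.csv"]]
reader = Reader(files[0][0])          # created once, outside the loop
writer = Writer(files[0][1], reader)  # created once, outside the loop

for src, dst in files:
    reader.FileNames = src  # only attributes change per iteration
    writer.FileName = dst
    writer.UpdatePipeline()

print(writer.written)  # [('a.vtk', 'a.csv'), ('b.vtk', 'b.csv')]
```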
September 15, 2015, 09:22
#8
New Member
Join Date: Jul 2013
Posts: 6
Rep Power: 12
Dear Mikko,
Thanks for your suggestion. Your snippet made my day! Now ParaView doesn't leak memory while the script is running! Maybe creating and deleting a reader and a writer on every loop iteration (as I used to do) left some garbage in ParaView's memory... I report the new version of the script here, since it may be useful to others with similar needs/issues: Code:
#Script used to export csv from rho datafiles
import fnmatch
import os
import sys
import re
from paraview.simple import *

# Directories where to run
dirs = ["01mm", "02mm", "03mm", "04mm", "05mm"]

# File rho names
rhoBasename = "rho_sampleSurf_"  #rho_sampleSurf_2576.vtk
rhoOutName = "rho_coord."  #rho_coord.550.csv

# Generate the list of files on which to operate
# and the corresponding output csv names
files = []
for dd in dirs:
    for ff in os.listdir("./"+dd):
        if fnmatch.fnmatch(ff, rhoBasename+"*"):
            nameRead = "./" + dd + "/" + ff
            numb = (re.findall("\d+", nameRead))[1]
            nameWrite = "./" + dd + "/" + rhoOutName + numb + ".csv"
            files.append([nameRead, nameWrite])

print "Start export"

# Generate the export pipeline once
reader = OpenDataFile(files[0][0])
pd2cd = PointDatatoCellData(Input = reader)
cc = CellCenters(Input = pd2cd)
writer = CreateWriter(files[0][1], cc)
writer.FieldAssociation = "Points"

# Processed files counter
zz = 1
for ff in files:
    # Print out the counter value from time to time
    if (zz%1000==0):
        print "written " + str(zz) + " csv files\n"
    # Update the pipeline and write the csv file
    reader.FileNames = ff[0]
    writer.FileName = ff[1]
    writer.UpdatePipeline()
    # Update counter
    zz += 1

print "Export completed"
Best regards,
Cristian
September 15, 2015, 10:05
#9
Senior Member
Mikko
Join Date: Jul 2014
Location: The Hague, The Netherlands
Posts: 243
Rep Power: 12
I'm glad that it worked.
I just wanted to add that I don't think ParaView itself is leaking, because I have run the same kind of loops over big datasets and many files. I think you were simply not deleting the writer. At the end of your loop you should delete all the sources, in the inverse order of their creation: Code:
# Delete the generated objects
Delete(writer)
Delete(cc)
Delete(pd2cd)
Delete(reader)
Alternatively, you can delete every registered source in one go: Code:
for key, value in GetSources().items():
    Delete(value)
    print 'Deleted:', key, value
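One detail worth noting about the GetSources() loop: if I recall correctly, GetSources() returns a fresh dictionary on each call, so deleting sources while looping over it is safe as written. If you adapt the pattern to a plain Python dict, though, deleting entries while iterating over the dict directly raises a RuntimeError in Python 3, so iterate over a snapshot of the items instead (the sources dict below is a hypothetical example):

```python
# Hypothetical registry of pipeline objects, keyed by name
sources = {"reader": object(), "pd2cd": object(),
           "cc": object(), "writer": object()}

# Iterate over a snapshot (list(...)) so that deleting entries from the
# underlying dict does not invalidate the iteration:
for key, value in list(sources.items()):
    del sources[key]

print(len(sources))  # 0
```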
September 15, 2015, 11:33
#10
New Member
Join Date: Jul 2013
Posts: 6
Rep Power: 12
Dear Mikko,
Thanks again for your reply. I actually tried several combinations of del / Delete(), deleting the pipeline objects in reverse order, but without success; in particular, ParaView complained about the "Delete(writer)" line... You can see an attempt at that in my second post, where I just do "del writer". Probably I made a mistake somewhere... I anyway prefer the solution you posted, since changing the reader's file names in a loop seems cleverer and more efficient than creating/deleting the reader every time. I actually wasn't aware of that possibility... good to know for my next scripts. Thanks and best regards, Cristian