CFD Online Discussion Forums

Hardware forum thread: Experiences with PCI-Express SSDs (https://www.cfd-online.com/Forums/hardware/126452-experiences-pci-express-ssds.html)

flotus1 November 18, 2013 06:50

Experiences with PCI-Express SSDs
 
Since the release of the Mushkin Scorpion Deluxe 1920 GB SSD at an affordable price, I have been wondering whether the benefits of this kind of storage device are as large as I would expect.

We are currently facing the following problems:
The results from our Lattice-Boltzmann solver are written in VTK file format for post-processing in ParaView. When we have Large Eddy results from simulations with around 100 million collision centers, it takes AGES to load a result in ParaView. Switching between the time steps of such a simulation means a coffee break.
I would expect the high sequential read performance of around 2000 MByte/s to bring a huge improvement in this situation. Am I right, or am I forgetting about another bottleneck?

Another kind of simulation we run is particle tracking, which creates a huge number (several 100k) of small files.
Again, I would expect the handling of these files to benefit from the high random read/write performance of the device compared to conventional hard disks. Am I right here?

Please share your experience and thoughts on this topic.

JBeilke November 19, 2013 17:09

Have you ever tried other post-processing software? ParaView might be good for academic purposes, but dealing with real-world transient cases was always a big pain when I tried it. Just try EnSight first before you start changing the hardware.

wyldckat November 20, 2013 17:31

Greetings to all!

@flotus1 - A few questions:
  1. "100 million collision centers" - how many mega/gigabytes does that equate to in VTK file sizes?
  2. What CPU does your machine have?
  3. How much of the CPU is being used by ParaView?
  4. Have you checked the throughput of the SSD while ParaView is reading the VTK files?
    • On Linux, you can use iotop (or the small script sketched after this list).
    • On Windows 7, the "Task Manager" has a button for the "Resource Monitor", if my memory doesn't fail me.
  5. Have you tried using ParaView's multi-core feature? I'll quote myself from another thread:
    Quote:

    Originally Posted by wyldckat (Post 451011)
    You can try turning on the multi-core feature that ParaView has got, if it was built with MPI capabilities, which is accessible at the Settings dialogue, namely the check box "Use Multi-Core": http://www.paraview.org/Wiki/ParaVie...Guide/Settings

  6. Are the VTK files in ASCII or binary format?
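
Regarding question 4: in case iotop gives you trouble, here is a minimal Python sketch (assuming the third-party psutil package is installed; pass ParaView's PID as the argument) that polls per-process disk reads once per second on Linux:
Code:

# usage: python read_monitor.py <pid-of-paraview>
import sys
import time

import psutil  # third-party: pip install psutil

proc = psutil.Process(int(sys.argv[1]))
last = proc.io_counters().read_bytes
while True:
    time.sleep(1.0)
    now = proc.io_counters().read_bytes
    print("read throughput: %.1f MByte/s" % ((now - last) / 1e6))
    last = now
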
Best regards,
Bruno

flotus1 November 21, 2013 10:30

My bad... only the model surface information is written in VTK format; the actual results are in EnSight Gold format (ASCII, apparently).

Nevertheless, here is some of the requested information. I tested this with a model of around 51 million collision centers.
For one scalar quantity, this equals around 1050 MByte of disk space per time step.
Thus, for the absolute minimum output of our solver (1 scalar and a 3-dimensional vector), we have 4200 MByte of disk space per time step.
Usually we have more result fields; this was just for testing purposes.
The geometry file (.geo) has a size of around 7100 MByte.
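(That is 1050 MByte for the scalar plus 3 × 1050 = 3150 MByte for the vector components. Per value it works out to roughly 1050e6 / 51e6 ≈ 21 bytes in ASCII, versus 4 bytes for a single-precision binary value, so a binary format should shrink these files considerably.)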

Loading these results into ParaView took around 4 minutes. Unfortunately, iotop complains about CONFIG_TASK_DELAY_ACCT not being enabled and displays 0 disk read for ParaView.
The CPU load on 1 core was between 80% and 100%.
Switching to another time step takes around 2 minutes, with the CPU load going up to 100% again.
EDIT: Finally got iotop to work.
With the conventional drive, the read speed starts off at around 90 MByte/s and decreases to 65 MByte/s over time.
With the SSD, the read speed is constantly around 90 MByte/s, with peaks of around 250 MByte/s in between.

Concerning the use of several cores: I wasn't able to activate this feature on this machine. However, I had already tried it before on a different machine, and it resulted in even longer loading times.
The machine has 2 Xeon E5-2687W CPUs and 128 GByte of memory... so no bottleneck there.
Just to make sure: right now I have a conventional drive installed.

evcelica November 21, 2013 13:21

If you have a conventional drive installed now, that is definitely your bottleneck. Even a single standard SSD would be MUCH faster. But that Scorpion is interesting; thanks for introducing it.

flotus1 November 22, 2013 03:16

Quote:

Originally Posted by JBeilke (Post 462609)
Have you ever tried other post-processing software? ParaView might be good for academic purposes, but dealing with real-world transient cases was always a big pain when I tried it. Just try EnSight first before you start changing the hardware.

Thanks for the suggestion, I will give it a try as soon as possible.
Nevertheless, there is an open-source/freeware preference at my institute that stands against it.

Quote:

Even a single standard SSD would be MUCH faster
In terms of sequential read/write speed and I/O performance, definitely.
My only doubt is that the CPU load is already at 100% when loading from the conventional drive.
I think I will try to get my hands on a normal SSD first to get a clearer picture.

Edit: I totally forgot that there is a small SSD in this machine for the OS.
It is a Micron RealSSD 256GB with a nominal sequential read speed of 415 MByte/s.
I copied the case there and loaded it with ParaView. The result was somewhat disappointing.
The loading time went down to around 80% of what I previously measured with the conventional drive.
So I guess the speed is not limited so much by the performance of the drive as by the CPU.
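(Rough numbers: 4200 MByte of results plus the 7100 MByte geometry file makes about 11300 MByte; reading that in roughly 4 minutes averages only ~47 MByte/s, well below what either drive can sustain sequentially, which fits a single core busy parsing ASCII rather than the drive being the limit.)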

flotus1 November 22, 2013 04:00

New post for clarity...
Now I have to find out two things: how to get PV to run on multiple cores on this machine, and why I previously saw lower performance running on multiple cores on a different machine.

Here is what I get when I activate the multi-core feature of PV:
The OS is OpenSuse 12.3 and PV is the latest version I could find (4.1 RC1, but the problem is the same with 3.98):
Code:

AutoMPI: SUCCESS: command is:
 "/usr/Paraview/ParaView-4.1.0-RC1-Linux-64bit/lib/paraview-4.1/mpiexec" "-np" "2" "/usr/Paraview/ParaView-4.1.0-RC1-Linux-64bit/lib/paraview-4.1/pvserver" "--server-port=40062"
AutoMPI: starting process server
-------------- server output --------------
ssh: Could not resolve hostname P-H-287-20LIX.site: Name or service not known
AutoMPI: server never started.
vtkProcessModuleAutoMPIInternals: Server never started.
Generic Warning: In /home/utkarsh/Dashboards/MyTests/NightlyMaster/ParaViewSuperbuild-Release/paraview/src/paraview/ParaViewCore/ServerManager/Core/vtkSMSession.cxx, line 315
Failed to automatically launch 'pvserver' for multi-core support. Defaulting to local session.

ParaView itself pops up the same warning that appears in the last two lines of the console output above.
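For reference: the "ssh: Could not resolve hostname" line means AutoMPI tried to ssh into this machine's own hostname (P-H-287-20LIX.site) and the name could not be resolved. A quick check with nothing but Python's standard library:
Code:

# check whether the local hostname resolves;
# AutoMPI's ssh step fails if it does not
import socket

name = socket.getfqdn()
print("hostname:", name)
try:
    print("resolves to:", socket.gethostbyname(name))
except socket.gaierror as err:
    print("does not resolve:", err)

If it does not resolve, an /etc/hosts entry mapping the hostname to 127.0.0.1, or starting pvserver manually with mpiexec and connecting via File -> Connect in ParaView, should work around AutoMPI.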

JBeilke November 22, 2013 13:42

Quote:

Originally Posted by flotus1 (Post 462949)
My bad... only the model surface information is written in VTK format; the actual results are in EnSight Gold format (ASCII, apparently).

It should read much faster when you use the binary EnSight Gold format, and the files will be much smaller. If you have EnSight, you might run a batch job after the calculation to do this conversion. The better solution would be to write the binary format from your solver directly.
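
If EnSight is not available for the conversion, something similar can be scripted with VTK's own Python bindings. This is a rough sketch only (the class names are real VTK classes, but the file names are placeholders and the pipeline may need adjusting, since vtkEnSightWriter expects unstructured-grid input):
Code:

# read an ASCII EnSight Gold case and rewrite it in binary
import vtk

reader = vtk.vtkGenericEnSightReader()
reader.SetCaseFileName("results_ascii.case")  # placeholder name
reader.Update()

# EnSight readers output a multi-block dataset; take the first block
grid = reader.GetOutput().GetBlock(0)

writer = vtk.vtkEnSightWriter()
writer.SetFileName("results_binary.case")  # placeholder name
writer.SetInputData(grid)
writer.Write()           # binary geometry/variable files
writer.WriteCaseFile(1)  # the .case index file (here: one time step)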

flotus1 November 22, 2013 14:46

That would be the elegant solution. Unfortunately, I don't have access to the source code of this particular solver.
I will give the person responsible (actually the same one who complained about the long loading times :rolleyes:) a heads-up.

flotus1 November 28, 2013 12:29

If anyone has an idea how I can get ParaView to run in parallel, I would really appreciate it.

wyldckat November 29, 2013 14:00

Hi Alex,

Please ask that question about running ParaView in parallel in the respective forum: http://www.cfd-online.com/Forums/paraview/ ;)
This way future forum members can more easily find the answer given there :)

And please provide as many details as you can, to make it easier to help you.

Best regards,
Bruno

evcelica December 2, 2013 23:26

I just ordered the 480GB version of the Mushkin scorpion deluxe PCIe SSD. I'll report back on how it affects my FEA analyses vs my 2x RAID 0 Samsung 840 Pro SSDs.

evcelica February 18, 2014 10:37

Quote:

Originally Posted by evcelica (Post 464505)
I just ordered the 480GB version of the Mushkin scorpion deluxe PCIe SSD. I'll report back on how it affects my FEA analyses vs my 2x RAID 0 Samsung 840 Pro SSDs.


Finally got the hard drive ("in stock", my A## ...).
Performance is roughly the same as two SATA SSDs in RAID 0. It may be very good for large sequential transfers, but it is nothing special at smaller I/O, which is what most workstations use.

The difference in an ANSYS Mechanical simulation was less than 1%, and it was reading and writing nearly the whole time: throughput hovered around 800 MByte/s and maxed out at 1 GByte/s during the analysis.

flotus1 February 18, 2014 11:04

Thanks for your feedback.
In principle it confirms what the benchmarks suggest: the performance for smaller blocks is similar to ordinary SSDs, while the sequential throughput is still quite high.
I guess in the end it is not really worth it unless you know for sure that sequential throughput is what you need.
The fact that they are still not available from German retailers has kept me from buying one until now.

Edit: May I ask what kind of simulation writes such a huge amount of data to the hard disk all the time?

evcelica February 18, 2014 13:08

Quote:

Originally Posted by flotus1 (Post 475518)
May I ask what kind of simulation writes such a huge amount of data to the hard disk all the time?

I'm not 100% sure; it was a mechanical benchmark (just a .dat file) someone in our analysis group gave me. It was a 10 MDOF problem that solved partly out of core, using ANSYS's "optimal out-of-core" memory mode.

