CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Post-Processing (https://www.cfd-online.com/Forums/openfoam-post-processing/)
    -   No times selected after successful parallel run (https://www.cfd-online.com/Forums/openfoam-post-processing/96150-no-times-selected-after-successful-parallel-run.html)

Myoldmopar January 13, 2012 14:34

No times selected after successful parallel run
 
Greetings,

I have spent some time familiarizing myself with OpenFOAM after a good amount of time with Fluent. I was able to decompose a domain for 32 processors and successfully load up all 8 "processors" on my local machine, plus 8 on each of 3 more machines connected via Ethernet.
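For readers following along, here is a minimal sketch of what that kind of setup usually involves; the decomposition method, coefficients, solver name, hostnames and paths below are placeholders rather than details taken from this case:

// system/decomposeParDict (sketch)
numberOfSubdomains 32;
method          simple;          // scotch or hierarchical would also work
simpleCoeffs
{
    n           (4 4 2);         // 4 x 4 x 2 = 32 subdomains
    delta       0.001;
}

# then decompose and launch across the four machines
# (hosts-file format depends on the MPI; for Open MPI, lines like "nodeB slots=8")
decomposePar
mpirun --hostfile machines -np 32 icoFoam -parallel > log 2>&1 &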

Everything looked great: no errors during the simulation. After it completed, I tried to run reconstructPar, only to be given this:

--> FOAM FATAL ERROR:
No times selected

From function reconstructPar
in file reconstructPar.C at line 139.

FOAM exiting

I have searched around, but most of the threads are about getting parallel runs working in the first place, and I couldn't find a solution to this.

Any insights?

Thanks!

Bernhard January 14, 2012 04:10

Hi Edwin,

You have to specify which time steps you want to reconstruct. You can either use -latestTime or specify -time as described here: http://www.openfoamwiki.net/index.ph...ection_Options
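For example (the time values here are just placeholders; the exact range syntax is on the wiki page linked above):

reconstructPar -latestTime        # reconstruct only the most recent time directory
reconstructPar -time 100          # reconstruct a single time
reconstructPar -time '50:200'     # reconstruct a range of times

These options come from the standard time selection shared by most OpenFOAM post-processing utilities.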

Myoldmopar January 16, 2012 11:26

Thanks for that!

So I think there is still a problem. As I mentioned, I am on one computer, let's say "A", and I have three slave machines, "B" through "D". The simulations run on all 8 cores of each of the four machines.

When I look at the output data from the runs, I notice each machine only has a fourth of the final output data, which is essentially what I expect. If I log into "B" and look through the directories, the processor0 folder has all the time-step output directories, while processor1 has only the 0 and constant subdirectories, and the same goes for processor2 and processor3. Then processor4 has all the data in it again. This pattern is mirrored across all four machines: each one, including the master "A", holds only its own data.

What I expected was that each machine would transfer its output back to the master machine, but I suspect I am approaching that a bit wrong. When I manually merge the folders back onto the master and run reconstructPar, it reconstructs things pretty well, except for a different problem which I'll post about separately.
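For reference, a minimal sketch of that kind of manual merge, run from the master (hostnames and the case path are placeholders, and this assumes rsync plus passwordless ssh between the machines):

# pull every processor* directory from the slave nodes onto the master;
# directories that only contain 0/ and constant/ merge harmlessly, and
# without --delete the master's existing time directories are kept
for host in B C D; do
    rsync -a $host:/home/user/run/myCase/processor* /home/user/run/myCase/
done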

So, do I need to set up an NFS share? I hadn't done so simply because I thought everything could be communicated well enough over ssh. I am not against setting one up; I just didn't want to waste any time.

Thanks so much!

vitors January 25, 2012 08:45

Quote:

Originally Posted by Myoldmopar (Post 339550)

I'm experiencing exactly the same problem... Waiting for replies.

Vitor

olivierG January 25, 2012 10:31

Hello,

The simplest way is to set up an NFS (or sshfs) file system.

But you can do without one. In that case, set this up in your decomposeParDict:
distributed yes;
roots
4                              // number of nodes (list size)
(
    "/path/for/machine1/case"
    "/path/for/machine2/case"
    ...
);
In this case the data stays local to each machine, so there is less I/O over the network.

NB: in fact, if you use ParaView for post-processing, you can load the data from each node without reconstructing, and post-process everything directly.
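For instance, once the processor directories are visible on one filesystem (after a manual merge or over NFS), a common way to open the decomposed case with ParaView's built-in OpenFOAM reader is via an empty stub file; the path is a placeholder and the reader's option names can vary between ParaView versions:

cd /path/to/case            # placeholder case directory
touch case.foam             # empty stub file so ParaView recognises the OpenFOAM case
paraview case.foam &
# in the reader settings, pick the decomposed (per-processor) data rather than
# the reconstructed case ("Case Type" in recent ParaView versions)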

regards,
olivier

