mpirun OpenFOAM output is buffered, only appears at the end
Hello,
Title says it all: when using mpirun to run any solver in parallel, the OpenFOAM output is buffered and only appears once the run is finished, which makes it hard to follow the run. I went through the mpirun manual and the forum, but so far I haven't found anything. Is this also happening to some of you? Any fix? Thanks, N.
I use something like the following to run the job and dump the output to a log file: Code:
mpirun -np (# of processors) (executable) -parallel > output.log &
Hello,
Yes, exactly the same: the job runs fine and the output is as expected, but it is only available at the end. Thanks, N
I am not sure what you are asking. If you want output to both the file and stdout, you could do the following: Code:
mpirun -np (# of processors) (executable) -parallel 2>&1 | tee output.log
I'll try to be a bit clearer:
when I run the mpirun command (with the required arguments), the processes start and run fine, but no standard output is generated until the process ends; only once it ends does the output appear. That's why I used the term "buffered": it seems mpirun buffers all standard output and only spits it out at the end.
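(A minimal sketch of one way to confirm that nothing is being written while the job runs, assuming the output.log redirect from the earlier reply; the file name is just a placeholder.) Code:
# re-check every 5 seconds whether the redirected log file is growing at all
watch -n 5 'ls -l output.log; tail -n 2 output.log'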
Greetings to all!
@newbie_cfd: You will have to provide more details about your working environment, because this is not something that commonly happens with mpirun itself. A few examples of what I mean by "work environment": which MPI toolbox and version you are using, where it is installed, and which Linux distribution you are running.
The problem here is that we cannot guess what you're actually using, because mpirun usually gives us the output while it's running, unless a job scheduler is used on a cluster or supercomputer; although for those the job scheduler usually isn't named "mpirun". Best regards, Bruno
Hello,
Thanks for helping me out with that one. I am running on a workstation with CentOS: Code:
mpirun --version
mpirun (Open MPI) 1.8.2
Code:
which mpirun
/usr/mpi/gcc/openmpi-1.8.2/bin/mpirun
Code:
cat /etc/centos-release
CentOS release 6.5 (Final)
Hope this helps, Thanks, N
Try the following (change the number of cores and solver name to your own):
Code:
mpirun -np 4 -output-filename file.log interFoam -parallel &
and then follow the output with: Code:
tail -f file.log.1.0
Another detail is that perhaps you shouldn't use all of the cores available on the workstation. For example, try a case with only 2 sub-domains, so that you're certain your machine still has enough cores to do other things. If this still has issues, then something else is getting in the way, possibly some configuration of the file system.
----------
Edit: Also, it seems that you're using a custom installation of Open-MPI, because the default is 1.8.1 on CentOS. Check the contents of the file "openmpi-mca-params.conf", which you can find with this command:
Code:
find /usr/mpi/gcc/openmpi-1.8.2/ -name "*mca-params.conf"
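(A quick way to check whether that parameter is set, as a sketch that assumes the file turns up under the same prefix reported by "which mpirun"; adjust to whatever path the find command actually returns.) Code:
# search any *mca-params.conf under the Open-MPI prefix for the event setting
grep -n "opal_event_include" $(find /usr/mpi/gcc/openmpi-1.8.2/ -name "*mca-params.conf")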
Ok, great. Solved!
Running with: Code:
-output-filename file.log
works. But the "mca-params.conf" file you mentioned was the culprit: from here it's mentioned that some configurations have "opal_event_include=poll" in their config files, which shouldn't be there. I had "opal_event_include=epoll", which I commented out. That fixed my issue and I now get the output updated as the simulation runs. Thanks Wyldckat!, N
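(For anyone landing on this thread later, a minimal sketch of that fix, assuming the file sits under the Open-MPI prefix shown earlier; use whatever path the find command above reports.) Code:
# path assumed from the earlier "which mpirun" output; adjust to your own installation
FILE=/usr/mpi/gcc/openmpi-1.8.2/etc/openmpi-mca-params.conf

# keep a backup, then comment out the offending line
cp "$FILE" "$FILE.bak"
sed -i 's/^opal_event_include=epoll/# opal_event_include=epoll/' "$FILE"
After that, re-running the mpirun command should produce a log that updates while the solver runs.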