I am a student, relatively new to OpenFOAM, writing my thesis, and for that I need to measure the load balancing across processors while running parallel solvers in OpenFOAM 2.2. My first idea was to use an MPI profiling tool to measure the time each separate process spends either calculating or communicating, but I cannot get any tool running.
Does anybody have some other idea on how to get some measurement of the load balancing on the processors while running solvers, or some idea on what combinations of mpi implementations and profiling tools might work?
So far I have tried: Valgrind, FPMPI, TAU, MPE, IPM and mpiP, with OpenMPI 1.6.3 and MPICH 1.1.1p1 and 18.104.22.168p1.
Valgrind does work, but the huge overhead it introduces on the calculations makes it simply infeasible.
My problem with FPMPI and TAU is that the MPI versions that come with OpenFOAM (OpenMPI and MPICH) disable the MPI profiling interface, and if I re-enable it, the configure scripts break, so I cannot use them.
MPE: I cannot link the libraries, because I get a linker error:
/usr/bin/ld: /usr/lib/gcc/x86_64-unknown-linux-gnu/4.8.1/../../../../lib/liblmpe.a(log_mpi_core.o): relocation R_X86_64_32S against `.bss' can not be used when making a shared object; recompile with -fPIC
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.8.1/../../../../lib/liblmpe.a: could not read symbols: Bad value
collect2: error: ld returned 1 exit status
Recompiling MPE and the Pstream library with -fPIC didn't solve the problem.
IPM, mpiP: the configure scripts break.
I have also tried using Google Performance Tools, but they generate segmentation faults at runtime.
My current idea is to use the Linux time command to get a measurement of the CPU time of each process, or to put some time measurements into the Pstream library, but I don't know how accurate these would be, so I would like to have a more precise method.
Any new ideas are much appreciated :)
Did you read this: https://www.hpc.ntnu.no/display/hpc/...filing+Solvers
Thank you for your answer!
Unfortunately, I don't even get that far with IPM: the configure script breaks, so I cannot compile and install it.
Here is my attempt (the script breaks at the same point without the extra options, too):
You need to look at the configure file. Apparently the code for MPI_STATUS_COUNT doesn't compile. I got it to work by replacing this bit of code
Now configure works, but make breaks. Let me know how it works for you.
I compiled IPM on blackbird, if you're interested. There's something strange about OpenMPI and IPM: IPM's code assumes that the MPI_Status structure contains completely different fields than the ones OpenMPI provides. I consulted mpi.h to see what it actually contains, and after a bit of "fun" I replaced
Let me know if this works at all. From the comments it seems that these are internal bits of OpenMPI that shouldn't be touched. It compiled successfully, but I have no idea whether it works as intended.
Thank you so much!
I have managed to compile and run a solver linked against IPM with these workarounds. I will now try to do some actual profiling and see if it works properly.
Thank you for the tips on compiling IPM. Sadly, even though IPM did compile, I couldn't use it. But since then I have figured out how to fix the profiling interface in the OpenMPI version that comes with OpenFOAM 2.2.
As it turns out, even though the OpenMPI website states that CUDA is only used if one explicitly enables it with the --with-cuda= option, it is in fact used to compile VampirTrace. With CUDA 5 the interface changed, so VampirTrace no longer compiles. The solution to enable the profiling interface was therefore to replace the:
Also, I had to remove the -j $WM_NCOMPPROCS
from line 109: make -j $WM_NCOMPPROCS && make install
After this, IPM still didn't work, but I managed to get TAU working with a reasonable amount of effort.
Could you please tell me the procedure you used to set up TAU or IPM on your system and to profile OpenFOAM with it?
I'm using OF 2.2.x on a CentOS 6.4 cluster...
Thanks in advance,
Getting error while installing IPM for profiling
I am getting the following error while building IPM:
rishikesh@rishikesh-desktop:~/ipm$ make install
cd src; make ipm
make: Entering directory `/home/rishikesh/ipm/src'
/home/rishikesh/ipm/bin/make_wrappers -funderscore_post ../ipm_key
cc -DIPM_DISABLE_EXECINFO -I/home/rishikesh/ipm/include -DWRAP_FORTRAN -I../include -o ../bin/ipm ipm.c
In file included from ipm.c:114:0:
ipm_init.c: In function ‘ipm_init’:
ipm_init.c:39:2: error: ‘region_wtime_init’ undeclared (first use in this function)
region_wtime_init = task.ipm_trc_time_init;
ipm_init.c:39:2: note: each undeclared identifier is reported only once for each function it appears in
make: *** [ipm] Error 1
make: Leaving directory `/home/rishikesh/ipm/src'
make: *** [ipm] Error 2
What is the cause of this error?
IPM (Integrated Performance Monitoring profiler) does not output any result file
I am trying to analyze the performance of a parallel program using IPM. I successfully installed IPM following the guidelines described at https://github.com/nerscadmin/ipm and followed the explanation at https://www.hpc.ntnu.no/display/hpc/...filing+Solvers to set up a custom solver.
The problem is that when I run the program, for example
the program just runs without producing any timing report. Am I doing something wrong, or how do I get a report printed?
Hi banji. To write the output to a file you have to run:
Thanks for your reply.
Actually, I know how to redirect messages to an output file. The problem is that I can't find any output related to IPM, i.e. how much time is spent in MPI_Wait, MPI_Recv, etc.
[ Moderator note: Moved from here: http://www.cfd-online.com/Forums/ope...s-cluster.html ]