March 3, 2011, 05:07
Problem running a parallel fluent job on local machine via mpd
#1
Member
Join Date: Feb 2011
Posts: 62
Rep Power: 16
Hello Everyone,
I am running into a problem I cannot identify while trying to start Fluent with the following command:
fluent -v3ddp -t4 &
and this is the error message I get:
Code:
Host spawning Node 0 on machine "<username>" (unix).
/home/Fluent.Inc/Fluent.Inc/fluent6.3.26/bin/fluent -r6.3.26 3ddp -node -t4 -pethernet -mpi=hp -mport 127.0.0.2:127.0.0.2:52584:0
Starting /home/Fluent.Inc/Fluent.Inc/fluent6.3.26/multiport/mpi/lnamd64/hp/bin/mpirun -np 4 /home/Fluent.Inc/Fluent.Inc/fluent6.3.26/lnamd64/3ddp_node/fluent_mpi.6.3.26 node -mpiw hp -pic ethernet -mport 127.0.0.2:127.0.0.2:52584:0
mpiexec_<username>: mpd_uncaught_except_tb handling:
<type 'exceptions.OSError'>: [Errno 2] No such file or directory
/usr/lib64/python2.6/os.py 368 _execvpe
func(file, *argrest)
/usr/lib64/python2.6/os.py 353 execvpe
_execvpe(file, args, env)
/home/MPICH2_ins_dir/mpich2-install/bin/mpdlib.py 1198 __init__
os.execvpe(mpdroot,[mpdroot,self.conFilename,str(self.sock.fileno())],{})
/home/Fluent.Inc/Fluent.Inc/fluent6.3.26/multiport/mpi/lnamd64/hp/bin/mpirun 225 mpiexec
conSock = MPDConClientSock(mpdroot=mpdroot,secretword=parmdb['MPD_SECRETWORD'])
/home/Fluent.Inc/Fluent.Inc/fluent6.3.26/multiport/mpi/lnamd64/hp/bin/mpirun 1446 <module>
mpiexec()
mpiexec_<username> (__init__ 1208): forked process failed; status=1
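From the traceback, the immediate failure is in mpdlib.py's call to os.execvpe on the mpdroot helper: it raises OSError [Errno 2], which is what Python reports when the executable it is asked to run does not exist or is not on the search path. As a sanity check (the binary name below is made up, not the real mpdroot path), this minimal sketch reproduces the same error class:

```python
import errno
import os

# Attempt to exec a binary that does not exist, mirroring how mpdlib.py
# calls os.execvpe on the mpdroot helper. "no-such-mpdroot" is a
# deliberately fake name used only to demonstrate the failure mode.
try:
    os.execvpe("no-such-mpdroot", ["no-such-mpdroot"], {})
except OSError as exc:
    # Same failure as in the traceback: [Errno 2] No such file or directory
    print(exc.errno == errno.ENOENT)
```

So the symptom suggests mpirun is resolving an mpd component to a path that does not exist on this machine, which is worth verifying before digging into Fluent itself.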
I would appreciate any help to overcome this problem. Thanks in advance.
Kind regards,
Ozer