CFD Online Discussion Forums

CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (https://www.cfd-online.com/Forums/openfoam-solving/)
-   -   libfiniteVolume.so: file not recognized (https://www.cfd-online.com/Forums/openfoam-solving/67995-libfinitevolume-so-file-not-recognized.html)

torvic September 2, 2009 20:05

libfiniteVolume.so: file not recognized
 
Hi

I don't know how to solve this error. I got it after running the buoyantPisoFoam tutorial.

.
.
.
g++ -m64 -Dlinux64 -DWM_DP -Wall -Wno-strict-aliasing -Wextra -Wno-unused-parameter -Wold-style-cast -Wnon-virtual-dtor -O3 -DNoRepository -ftemplate-depth-40 -I/home/foam15/OpenFOAM/OpenFOAM-1.6/src/finiteVolume/lnInclude -IlnInclude -I. -I/home/foam15/OpenFOAM/OpenFOAM-1.6/src/OpenFOAM/lnInclude -I/home/foam15/OpenFOAM/OpenFOAM-1.6/src/OSspecific/POSIX/lnInclude -fPIC Make/linux64GccDPOpt/setHotRoom.o -L/home/foam15/OpenFOAM/OpenFOAM-1.6/lib/linux64GccDPOpt \
-lfiniteVolume -lOpenFOAM -liberty -ldl -lm -o /home/foam15/OpenFOAM/foam15-1.6/applications/bin/linux64GccDPOpt/setHotRoom
/home/foam15/OpenFOAM/OpenFOAM-1.6/lib/linux64GccDPOpt/libfiniteVolume.so: file not recognized: File format not recognized
collect2: ld returned 1 exit status
make: *** [/home/foam15/OpenFOAM/foam15-1.6/applications/bin/linux64GccDPOpt/setHotRoom] Error 1

Nevertheless, the solver runs and produces results.

I also posted it in another thread regarding the installation of pv3foam http://www.cfd-online.com/Forums/ope...tml#post228221

I have SLED 10 SP1 (64-bit) with cmake and gcc 4.3.3 compiled myself, and I cannot use the gcc binary from ThirdParty.

However, on openSUSE 11 (64-bit) this does not happen, and there I do use the gcc binary from ThirdParty.

Any hint is appreciated.

thanks :)

Victor

lakeat September 2, 2009 23:17

Quote:

Originally Posted by torvic (Post 228333)
I don't know how to solve this error. I got it after running the buoyantPisoFoam tutorial. [...]
/home/foam15/OpenFOAM/OpenFOAM-1.6/lib/linux64GccDPOpt/libfiniteVolume.so: file not recognized: File format not recognized

I guess something is wrong with mixing 32-bit and 64-bit binaries.
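One quick way to check that guess is the `file` command: it reports whether a shared object is a 32-bit or 64-bit ELF, and whether it is a valid ELF file at all (a truncated or corrupted download also produces "file not recognized"). A minimal sketch, using the paths from the error message above:

```shell
# Inspect the library the linker rejected; a healthy 64-bit build prints
# something like "ELF 64-bit LSB shared object, x86-64".
file $HOME/OpenFOAM/OpenFOAM-1.6/lib/linux64GccDPOpt/libfiniteVolume.so

# Compare with the compiler's default target triplet:
g++ -dumpmachine

# And with an object that did compile fine, for reference:
file Make/linux64GccDPOpt/setHotRoom.o
```

If `file` reports a 32-bit object, or no ELF at all, the library was built (or downloaded) for the wrong architecture, which matches the "File format not recognized" linker error.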

torvic September 3, 2009 16:55

Thanks Lakeat

I only downloaded 64-bit binaries and used the same downloaded files on different computers, both 64-bit. But on SLED I cannot get a working OpenFOAM installation.
What else do you suggest? :)

best

Victor

elliot_hfx June 11, 2010 18:10

Hi Victor,

I also ran into this problem:
"/home/foam15/OpenFOAM/OpenFOAM-1.6/lib/linux64GccDPOpt/libfiniteVolume.so: file not recognized: File format not recognized
collect2: ld returned 1 exit status"
when I compiled groovyBC on the cluster. I also use the binary gcc, because my system gcc is an old version and I do not have root privileges to install a new one.
Could you tell me how you solved this problem? Thanks.

Elliot

lakeat June 11, 2010 18:25

You don't need root privileges to use your own gcc; just put your desired gcc version inside the ThirdParty directory and it will work. This is Linux; everything you need can easily be user-defined.

libfiniteVolume.so: file not recognized.
What's your system? Is it 32-bit or 64-bit? Are you installing the correct binary version? Or you can build it yourself; it's not that hard.

elliot_hfx June 11, 2010 18:50

Hi Daniel WEI,

Thanks for your reply.
My system is: SUSE Linux Enterprise Server 10 (x86_64)
VERSION = 10
PATCHLEVEL = 1
I think I installed the binary version correctly; I have run some tutorial cases successfully.
I tried to compile gcc in my user account, but it requires GMP 4.1+ and MPFR 2.3.0+. I need to install these two packages first, and I do not know what else I will need to install after that step.
On my own desktop and laptop I can install OpenFOAM correctly, but when I follow the same steps on the cluster at my university, it does not work. Thanks.

Elliot

lakeat June 11, 2010 19:03

Install GMP and MPFR first, since I am not sure whether your system's GMP and MPFR satisfy the requirements of the new gcc; try installing them into your ThirdParty directory. It's easy and only takes a few minutes.
After that, configure gcc using "--with-gmp=" and "--with-mpfr=", something like that; you will find it with Google. It's not difficult, but the gcc compilation is not short, so have some coffee and some sleep and come back to see if it's finished.

Since your system is SUSE, you are lucky; IMHO, SUSE is the best system for OpenFOAM.
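For reference, a user-space build along those lines might look like the sketch below. The version numbers and the install prefix are only examples, not from the thread; substitute whatever tarballs you actually download:

```shell
# Everything installs under a prefix in $HOME, so no root is needed.
PREFIX=$HOME/OpenFOAM/ThirdParty-1.6/gcc-local

# GMP first, then MPFR (which needs GMP), then gcc itself.
tar xjf gmp-4.2.4.tar.bz2
(cd gmp-4.2.4 && ./configure --prefix=$PREFIX && make && make install)

tar xjf mpfr-2.4.1.tar.bz2
(cd mpfr-2.4.1 && ./configure --prefix=$PREFIX --with-gmp=$PREFIX \
    && make && make install)

tar xjf gcc-4.3.3.tar.bz2
mkdir gcc-build && cd gcc-build
../gcc-4.3.3/configure --prefix=$PREFIX \
    --with-gmp=$PREFIX --with-mpfr=$PREFIX --enable-languages=c,c++
make && make install    # this is the long part
```

You may also need $PREFIX/lib on LD_LIBRARY_PATH when running the resulting compiler, so it can find the GMP/MPFR shared libraries at run time.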

elliot_hfx June 11, 2010 20:46

Thanks. I will try it.

Elliot

torvic June 11, 2010 21:05

Hello Elliot

Hope you're doing fine

Honestly due to lack of time I didn't try more things and kept on working with OF-1.5.

Best wishes in your project

Victor

elliot_hfx June 11, 2010 21:25

Hi Victor,

Thanks.

Elliot

elliot_hfx June 21, 2010 15:50

Hi Daniel WEI,

I have recompiled all the OpenFOAM libraries, gcc, and Open MPI on the supercomputer at my university. When I ran a damBreak case in parallel as a test, it failed with the following errors:

Quote:

WARNING: There are more than one active ports on host 'r2i0n15', but the
default subnet GID prefix was detected on more than one of these
ports. If these ports are connected to different physical IB
networks, this configuration will fail in Open MPI. This version of
Open MPI requires that every physically separate IB subnet that is
used between connected MPI processes must have different subnet ID
values.

Please see this FAQ entry for more details:

http://www.open-mpi.org/faq/?categor...ult-subnet-gid

NOTE: You can turn off this warning by setting the MCA parameter
btl_openib_warn_default_gid_prefix to 0.
--------------------------------------------------------------------------
[r2i0n15:08535] 7 more processes have sent help message help-mpi-btl-openib.txt / default subnet prefix
[r2i0n15:08535] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[[54778,1],0][../../../../../ompi/mca/btl/openib/btl_openib_component.c:2951:handle_wc] from r2i0n15 to: r1i0n4 error polling LP CQ with status RETRY EXCEEDED ERROR status number 12 for wr_id 8419328 opcode 0 vendor error 129 qp_idx 1
--------------------------------------------------------------------------
The InfiniBand retry count between two MPI processes has been
exceeded. "Retry count" is defined in the InfiniBand spec 1.2
(section 12.7.38):

The total number of times that the sender wishes the receiver to
retry timeout, packet sequence, etc. errors before posting a
completion error.

This error typically means that there is something awry within the
InfiniBand fabric itself. You should note the hosts on which this
error has occurred; it has been observed that rebooting or removing a
particular host from the job can sometimes resolve this issue.

Two MCA parameters can be used to control Open MPI's behavior with
respect to the retry count:

* btl_openib_ib_retry_count - The number of times the sender will
attempt to retry (defaulted to 7, the maximum value).
* btl_openib_ib_timeout - The local ACK timeout parameter (defaulted
to 10). The actual timeout value used is calculated as:

4.096 microseconds * (2^btl_openib_ib_timeout)

See the InfiniBand spec 1.2 (section 12.7.34) for more details.

Below is some information about the host that raised the error and the
peer to which it was connected:

Local host: r2i0n15
Local device: mthca0
Peer host: r1i0n4

You may need to consult with your system administrator to get this
problem fixed.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 8536 on
node r2i0n15 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
Did you meet similar problems when running OpenFOAM cases on the supercomputers at your school?

I have no idea how to fix this problem; do I need root privileges to solve it?

Thanks.


Elliot

lakeat June 21, 2010 16:06

Hi, you mean on your school's own supercomputer?

elliot_hfx June 21, 2010 17:29

Yes. On the supercomputer at our school I need to install OpenFOAM myself, and it only has MPICH, not Open MPI. I recompiled Open MPI and tried to run a test case, and got the errors above. Thanks

Elliot

lakeat June 23, 2010 19:47

Sorry, English users, I have to reply in Chinese temporarily; in translation:
  1. I have never seen a situation like yours, so I don't know how to solve it;
  2. Yours is a PBS system, and I don't know what your submission script looks like; first make sure the script itself is correct;
  3. Check whether a simple "serial" run and a "direct parallel" run go through; that will help you narrow down where the problem is.
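For point 3, the serial and direct-parallel checks can be run by hand on the login node, without PBS at all. A minimal sketch (the case path is the one Elliot posts later in the thread; the decomposition must match the -np you use):

```shell
# Direct-parallel check that involves no OpenFOAM code at all: if this
# already fails, the problem is the MPI/InfiniBand setup, not the solver.
mpirun -np 2 hostname

# Serial run of the failing case:
cd $HOME/OpenFOAM/fhuang-1.6/run/ras/damBreakFine
interFoam > log.serial 2>&1

# Direct parallel run, bypassing the scheduler (case decomposed for 8):
mpirun -np 8 interFoam -parallel > log.parallel 2>&1
```

If the direct parallel run works but the PBS job fails, the submission script or the scheduler environment is the prime suspect.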

panda60 June 23, 2010 23:14

Haha, speaking Chinese here really has a special flavor. ^_^ Spreading the Chinese language is everyone's duty.

lakeat June 24, 2010 11:21

Sorry, I thought you were her; my apologies.

Would you please show me your submission script? That would help with the examination.

Another possible solution:
If you could let me log into your system via VPN, I could have a look at it.

I would HIGHLY RECOMMEND that a Chinese-speaking sub-forum be opened, to help Chinese beginners.

For instance: Home > Forums > OpenFOAM > Chinese Users (for Dummies)

elliot_hfx June 24, 2010 12:22

The submitted script is
Quote:

#!/bin/csh
#PBS -j oe
#PBS -N interfoam
#PBS -l select=8:ncpus=1:mpiprocs=1
#PBS -l walltime=48:00:00


limit stacksize unlimited

setenv MACHTYPE x86-suse-linux
setenv TERM xterm
setenv MPI_BUFFER_MAX 2000000
setenv MPI_BUFS_PER_PROC 1024
setenv DAPL_MAX_CM_RESPONSE_TIME 22
setenv I_MPI_DEVICE rdssm
setenv I_MPI_DAPL_PROVIDER OpenIB-cma
setenv I_MPI_PIN_PROCS allcores
setenv I_MPI_PIN_MODE lib
setenv I_MPI_DEVICE_FALLBACK 0

echo "The nodefile is:"
cat $PBS_NODEFILE

setenv PROCS `cat $PBS_NODEFILE | wc -l`

setenv NODES `cat $PBS_NODEFILE | sort -u | wc -l`
echo "Running on $PROCS processors"

echo "Home directory: $HOME"
cd $HOME/OpenFOAM/fhuang-1.6/run/ras/
echo "Working in directory: $PWD"


# source openfoam
source $HOME/OpenFOAM/OpenFOAM-1.6/etc/cshrc
mpirun --hostfile $PBS_NODEFILE -np 8 interFoam -parallel -case damBreakFine >log
The error message is

Quote:

The nodefile is:
r2i0n15
r1i0n4
r1i0n4
r1i0n4
r1i0n4
r1i0n4
r1i0n4
r1i0n4
Running on 8 processors
Home directory: /home/fhuang
Working in directory: /home/fhuang/OpenFOAM/fhuang-1.6/run/ras
--------------------------------------------------------------------------
[... the same Open MPI warning about multiple active IB ports on 'r2i0n15' and the "RETRY EXCEEDED ERROR" between r2i0n15 and r1i0n4 as in my earlier post; mpirun again exited after rank 0 (PID 8797) terminated without calling "finalize" ...]
--------------------------------------------------------------------------
Thanks


Elliot

lakeat June 24, 2010 12:47

Well, just for reference, you need to ask those who are using the PBS system, e.g. your HPC support.

It looks strange here:
Quote:

The nodefile is:
r2i0n15
r1i0n4
r1i0n4
r1i0n4
r1i0n4
r1i0n4
r1i0n4
r1i0n4
You see, the other 7 processes are all on the same node; isn't that strange?



A while back, someone told me his PBS script was something like:
Quote:

#!/bin/sh
#PBS -l walltime=10:00:00
#PBS -l mem=300mb
#PBS -l ncpus=4

module load openfoam
module load mpi intel-suite
cd $WORK/OpenFoam/cavity
mpiexec icoFoam -parallel > out.dat
You see, both the nodefile and "-np *" are omitted, and it seems the PBS system will dynamically assign the nodefile and CPU count to your job.

So look for your school's HPC tutorial, and talk with them to see whether your script is correct.

Secondly, are you using the system's mpirun or your own OpenFOAM mpirun? I suggest using the system's, so load it first.

Just some thoughts
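If the site does expect an explicit "-np", one common pattern is to derive it from the nodefile instead of hard-coding 8, so the run always matches what PBS actually allocated. A sketch in the submission script's own csh syntax (the variable NP is hypothetical):

```shell
# Inside the csh submission script: count the nodefile lines and use
# that as the MPI process count.
set NP = `wc -l < $PBS_NODEFILE`
mpirun --hostfile $PBS_NODEFILE -np $NP interFoam -parallel -case damBreakFine > log
```

This only keeps the launch consistent with the allocation; it does not by itself fix the InfiniBand errors above.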

elliot_hfx June 24, 2010 13:02

Thanks for your suggestions.

I use my own OpenFOAM mpirun. The system only has Intel MPI. I tried to compile OpenFOAM with Intel MPI, but the error messages showed it was not consistent. I have already sent the script and log file to the school's HPC support and am waiting for their reply.

Elliot

lakeat June 24, 2010 13:23

Many say Intel MPI performs better than Open MPI, so don't give up; give it a try.

elliot_hfx June 24, 2010 13:31

OK, I will keep on working with it and make it work.

