CFD Online Discussion Forums > SU2 > Is running SU2 on an armv8/arm64 possible?
(https://www.cfd-online.com/Forums/su2/226618-running-su2-armv8-arm64-possible.html)

EternalSeekerX May 2, 2020 17:13

Is running SU2 on an armv8/arm64 possible?
 
Hi everyone,

I am curious to know whether anyone has built, compiled, and run SU2 on the ARM platform. Many other open-source packages have ARM forks now, and with Linux on an RPi 3/4, or Linux on Android via chroot/proot, it is possible to run arm64 Linux packages, so I thought I would ask.

Thanks

pcg May 3, 2020 10:33

Now that is an excellent question! The plain vanilla version (no MPI, no CGNS, no TecIO) should only need a C++11 compiler.
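
As a quick sanity check on such a device, it may help to confirm the reported architecture and that the default compiler accepts C++11; a minimal sketch assuming a typical GNU toolchain:

Code:

# architecture reported by the kernel (expect aarch64 for arm64)
uname -m
# verify the compiler accepts -std=c++11
echo 'int main() { auto x = 0; return x; }' | g++ -std=c++11 -x c++ - -o /dev/null && echo "C++11 OK"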

EternalSeekerX June 7, 2020 04:33

I compiled SU2 7.0 Blackbird, but it complains about MPI
 
Quote:

Originally Posted by pcg (Post 768471)
Now that is an excellent question! The plain vanilla version (no MPI, no CGNS, no TecIO) should only need a C++11 compiler.

Hello again,

So I compiled from source following the instructions on the SU2 website, but when I try to run the NACA 0012 test case, SU2_CFD exits and complains about MPI. Unfortunately, without root access on my Android device I cannot use OpenMPI. Is there a way to run the SU2_CFD command in serial for 7.0, or do I need to recompile with meson.py and a flag to disable MPI?

pcg June 7, 2020 12:57

Exactly what error do you get?

You can force compilation without MPI by passing -Dwith-mpi=disabled to meson.py.
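
For reference, the full configure-and-build sequence for such a serial build might look like the sketch below; the install prefix and the PATH export are just examples, while the meson.py/ninja workflow follows the usual SU2 build instructions:

Code:

# configure without MPI, installing under $HOME/SU2
./meson.py build -Dwith-mpi=disabled --prefix=$HOME/SU2
# compile and install with the bundled ninja
./ninja -C build install
# make SU2_CFD visible on the PATH
export PATH=$HOME/SU2/bin:$PATH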

EternalSeekerX June 7, 2020 15:12

The MPI error is that it cannot find a loopback device
 
Quote:

Originally Posted by pcg (Post 773716)
Exactly what error do you get?

You can force compilation without MPI by passing -Dwith-mpi=disabled to meson.py.

Hello again, so here is the error:

Code:

root@localhost:~/Desktop/QuickStart# SU2_CFD inv_NACA0012.cfg
[localhost:25079] opal_ifinit: ioctl(SIOCGIFHWADDR) failed with errno=13
[localhost:25080] opal_ifinit: ioctl(SIOCGIFHWADDR) failed with errno=13
[localhost:25080] pmix_ifinit: ioctl(SIOCGIFHWADDR) failed with errno=13
[localhost:25080] ptl_tcp: problems getting address for index 0 (kernel index -1)
[localhost:25080] oob_tcp: problems getting address for index 94832 (kernel index -1)
--------------------------------------------------------------------------
No network interfaces were found for out-of-band communications. We require
at least one available network for out-of-band messaging.
--------------------------------------------------------------------------
[localhost:25079] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ess_singleton_module.c at line 532
[localhost:25079] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a daemon on the local node in file ess_singleton_module.c at line 166
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_ess_init failed
  --> Returned value Unable to start a daemon on the local node (-127) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "Unable to start a daemon on the local node" (-127) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[localhost:25079] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
root@localhost:~/Desktop/QuickStart#


EternalSeekerX June 7, 2020 21:16

Built SU2 with MPI disabled and it worked!
 
Quote:

Originally Posted by pcg (Post 773716)
Exactly what error do you get?

You can force compilation without MPI by passing -Dwith-mpi=disabled to meson.py.

I was able to get a full build done in under 30 minutes now! I ran the quickstart case and it worked. I did notice that when I tried to change the tabular output from TECPLOT to PARAVIEW, 7.0 would not allow it. However, if I keep the config file unchanged it runs fine, and it outputs a VTU file I can open in ParaView anyway!

https://i.imgur.com/9vOL5Cb.jpg

The results look accurate to me, I think?
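
If the field being edited was the history table format: in 7.x the convergence-history table and the volume output are controlled by separate options, so a sketch along the lines below (option names taken from the standard SU2 7.x config template, not verified against this exact test case) keeps ParaView volume output while leaving the table as CSV:

Code:

% volume and surface solution files (ParaView .vtu)
OUTPUT_FILES= (RESTART, PARAVIEW, SURFACE_PARAVIEW)
% format of the convergence history table (CSV or TECPLOT)
TABULAR_FORMAT= CSV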

pcg June 8, 2020 05:35

At least the colors look reasonable.

The compressible solvers can be used in parallel without MPI by configuring with -Dwith-omp=true (it will use OpenMP instead).
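
A rough sketch of such an OpenMP build and a threaded run (the separate build_omp directory and the OMP_NUM_THREADS setting are only illustrative; -Dwith-omp=true is the flag from the post above):

Code:

# configure a hybrid/OpenMP build in a separate directory (no MPI)
./meson.py build_omp -Dwith-mpi=disabled -Dwith-omp=true --prefix=$HOME/SU2
./ninja -C build_omp install
# run the quickstart case with 4 OpenMP threads
export OMP_NUM_THREADS=4
SU2_CFD inv_NACA0012.cfg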

