CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (https://www.cfd-online.com/Forums/openfoam-solving/)
    -   OpenFOAM CPU Usage (https://www.cfd-online.com/Forums/openfoam-solving/115731-openfoam-cpu-usage.html)

musahossein April 5, 2013 10:36

OpenFOAM CPU Usage
 
1 Attachment(s)
Dear all:
I am running sloshingTank2D in interDyMFoam. The tank has 40,000 cells. I did this to see how the CPU is utilized by OpenFOAM. My computer's processor is an Intel i3 CPU M330 @ 2.13 GHz x4, i.e. 4 CPUs. I am running Ubuntu 12.10 and OpenFOAM 2.2.0. When I check processor performance, I see that one CPU is running at 99% while the others are below 10%. Is there any way to make OpenFOAM seek out and use all the CPUs? A screenshot of CPU usage while running OpenFOAM for this problem is attached.

Any advice would be greatly appreciated.

akidess April 5, 2013 10:40

Have you checked the user manual? http://www.openfoam.org/docs/user/ru...s-parallel.php

musahossein April 5, 2013 12:14

Quote:

Originally Posted by akidess (Post 418572)
Have you checked the user manual? http://www.openfoam.org/docs/user/ru...s-parallel.php

Thank you very much for your response. I looked into the decomposeParDict file and it seems that it is set up for parallel processing over several computers -- or does it not care, and simply look for the number of subdomains specified on the same computer before looking elsewhere?

erichu April 5, 2013 14:36

Hello,

Are you running foamJob or mpirun? If not, that explains why only one core is used.

Mojtaba.a April 5, 2013 14:39

Quote:

Originally Posted by musahossein (Post 418586)
Thank you very much for your response. I looked into the decomposeParDict file and it seems that it is set up for parallel processing over several computers -- or does it not care, and simply look for the number of subdomains specified on the same computer before looking elsewhere?

You can easily use it on a single computer with multiple processors.
Just set decomposeParDict correctly with respect to the number of processors you have and you are done; a minimal sketch is below.
The rest is in the user's manual.
Good luck
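
For reference, a minimal system/decomposeParDict sketch for 4 subdomains on one machine (the simple method and the coefficient values are my assumptions, not from this thread; any method works as long as its coeffs are consistent):

FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      decomposeParDict;
}

// total number of processor directories to create
numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n           (2 2 1);   // nx*ny*nz must equal numberOfSubdomains
    delta       0.001;
}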

musahossein April 5, 2013 17:07

I have 2 processors. Each processor has 2 CPUs. So should I set numberOfSubdomains to the number of processors (2) or to the number of CPUs (4)? Also, I have to run blockMesh and setFields before running MPI, correct? Thanks.

Mojtaba.a April 5, 2013 17:15

Quote:

Originally Posted by musahossein (Post 418655)
I have 2 processors. Each processor has 2 CPUs. So should I set numberOfSubdomains to the number of processors (2) or to the number of CPUs (4)? Also, I have to run blockMesh and setFields before running MPI, correct? Thanks.

If you are using GNOME, run

Quote:

gnome-system-monitor

Go to the Resources tab and see how many CPUs are listed there under CPU History. That's the number to put in decomposeParDict.

Not sure about setFields, but as for blockMesh, yes, you must run it before MPI.
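
A command-line alternative (my suggestion, not from the thread) if you just want the count of logical CPUs the OS sees:

nproc
# or
grep -c ^processor /proc/cpuinfo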

erichu April 5, 2013 18:02

I put a file named 'machines' in my case folder. In this file I then write my configuration, i.e.

Workstation cpu=2

where Workstation is the host name. Subdomains are based on cores, so in your case 4, I think.

My normal procedure is (shell sketch below):
0) have the machines file set up
1) blockMesh / ideasUnvToFoam
2) decomposePar
3) foamJob -s -p solverName
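
A shell sketch of that procedure for this thread's case (solver name taken from the thread; for a single workstation the machines file is not strictly needed):

cd $FOAM_RUN/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D
blockMesh                      # build the mesh (or ideasUnvToFoam for an imported mesh)
decomposePar                   # split the case into processor0..processorN-1 per decomposeParDict
foamJob -s -p interDyMFoam     # -s also prints to screen, -p runs in parallel via mpirun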

musahossein April 7, 2013 17:56

Running OpenFOAM over multiple CPUs on the same computer
 
Gentlemen:

Here is my attempt to run OpenFOAM over the 4 CPUs that my computer has. I noted that the first error message says it cannot open processor0. I thought OpenFOAM would automatically detect the number of CPUs -- was that assumption incorrect? Thanks for your help and advice.
__________________________________________________ _____________________
musa@ubuntu:~/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D$ mpirun -np 4 interDyMFoam -parallel >log &
[1] 4814
musa@ubuntu:~/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D$ [0]
[0]
[0] --> FOAM FATAL ERROR:
[0] interDyMFoam: cannot open case directory "/home/musa/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D/processor0"
[0]
[0]
FOAM parallel run exiting
[0]
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 4815 on
node ubuntu exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

musahossein April 7, 2013 23:03

Quote:

Originally Posted by Mojtaba.a (Post 418657)
If you are using GNOME, run gnome-system-monitor, go to the Resources tab and see how many CPUs are listed there under CPU History. That's the number to put in decomposeParDict.

Not sure about setFields, but as for blockMesh, yes, you must run it before MPI.

I am running Ubuntu 12.10. In the Dash there is a way to check system performance, and it shows 4 CPUs. But for some reason decomposePar is still finding errors. Any comments/suggestions will be appreciated. Thank you

__________________________________________________ _____________________
Decomposing mesh region0

Create mesh

Calculating distribution of cells
Selecting decompositionMethod hierarchical


--> FOAM FATAL ERROR:
Wrong number of processor divisions in geomDecomp:
Number of domains : 4
Wanted decomposition : (2 2 2)

From function geomDecomp::geomDecomp(const dictionary& decompositionDict)
in file geomDecomp/geomDecomp.C at line 50.

FOAM exiting

erichu April 8, 2013 03:01

Could you upload your decomposeParDict file? It might make it easier to find the source of the problem. Note that (2 2 2) is, to me, a decomposition for 8 processors, not for 4.
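
For 4 subdomains with the hierarchical method named in the error message, the coefficients have to multiply out to 4; a sketch (delta and order are the usual values, included as assumptions):

numberOfSubdomains 4;

method          hierarchical;

hierarchicalCoeffs
{
    n           (2 2 1);   // 2*2*1 = 4 = numberOfSubdomains
    delta       0.001;
    order       xyz;       // decompose first in x, then y, then z
}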

Mojtaba.a April 8, 2013 03:17

Quote:

Originally Posted by musahossein (Post 418962)
I am running Ubuntu 12.10. In the Dash there is a way to check system performance, and it shows 4 CPUs. But for some reason decomposePar is still finding errors. Any comments/suggestions will be appreciated. Thank you

__________________________________________________ _____________________
Decomposing mesh region0

Create mesh

Calculating distribution of cells
Selecting decompositionMethod hierarchical


--> FOAM FATAL ERROR:
Wrong number of processor divisions in geomDecomp:
Number of domains : 4
Wanted decomposition : (2 2 2)

From function geomDecomp::geomDecomp(const dictionary& decompositionDict)
in file geomDecomp/geomDecomp.C at line 50.

FOAM exiting

As Eric said, upload your decomposeParDict so we can see how you have configured it.
Also have a look at this:
http://www.cfd-online.com/Forums/ope...tml#post189895

musahossein April 9, 2013 15:19

Thanks for your help. decomposePar runs now; however, it fails near the end with the following message:

----- I have deleted the preceding output to keep this to the point -----------

Number of processor faces = 812
Max number of cells = 27540 (49.998% above average 18360.2)
Max number of processor patches = 2 (0% above average 2)
Max number of faces between processors = 408 (0.492611% above average 406)

Time = 0

--> FOAM FATAL IO ERROR:
Cannot find patchField entry for lowerWall

file: /home/musa/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D/0/alpha1.org-old.boundaryField from line 25 to line 33.

From function GeometricField<Type, PatchField, GeoMesh>::GeometricBoundaryField::readField(const DimensionedField<Type, GeoMesh>&, const dictionary&)
in file /home/opencfd/OpenFOAM/OpenFOAM-2.2.0/src/OpenFOAM/lnInclude/GeometricBoundaryField.C at line 154.

FOAM exiting
-----------------------------------------------------------------------------------------------------------------------
But my blockMeshDict file has the patches defined, as shown below:
vertices
(
(-0.05 -0.50 -0.35) // Vertex back lower left corner = 0
(-0.05 0.50 -0.35) // Vertex back lower right corner= 1
(-0.05 0.50 0.65) // Vertex back upper right corner= 2
(-0.05 -0.50 0.65) // Vertex back upper left corner = 3

(0.05 -0.50 -0.35) // Vertex front lower left corner = 4
(0.05 0.50 -0.35) // Vertex front lower right corner= 5
(0.05 0.50 0.65) // Vertex front upper right corner= 6
(0.05 -0.50 0.65) // Vertex front upper left corner = 7

);

blocks
(
// block0
hex (0 1 2 3 4 5 6 7)
(271 271 1)
simpleGrading (1 1 1)
);

//patches
boundary
(
lowerWall
{
type patch;
faces
(
(0 1 5 4)
);
}
rightWall
{
type patch;
faces
(
(1 2 6 5)
);
}
atmosphere
{
type patch;
faces
(
(2 3 7 6)
);
}
leftWall
{
type patch;
faces
(
(0 4 7 3)
);
}
frontAndBack
{
type empty;
faces
(
(4 5 6 7)
(0 3 2 1)
);
}
);

Any comments / suggestions would be appreciated. Thank you.

erichu April 9, 2013 16:06

I wonder if you are missing the patch name in one of the U, p, ... files? Upload the boundary files as well and we can check, in case you cannot find the source of the problem yourself.

musahossein April 9, 2013 19:24

Quote:

Originally Posted by erichu (Post 419495)
I wonder if you are missing the patch name in one of the U, p, ... files? Upload the boundary files as well and we can check, in case you cannot find the source of the problem yourself.

Actually, I figured out what the problem was. OpenFOAM reads all the files in the "0" folder, and I had kept both the original and the revised versions of the alpha1, p, and U files in that folder. It was trying to read all of them, even though the backups were named alpha1-old, p-old, U-old. Once I got rid of them, decomposePar ran without any problems. But when I do the MPI run, I get another error:


musa@ubuntu:~/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D$ mpirun -np 4 interDyMFoam -parallel >log &
[1] 2663
musa@ubuntu:~/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D$ [0]
[0]
[0] --> FOAM FATAL IO ERROR:
[0] cannot find file
[0]
[0] file: /home/musa/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D/processor0/0/alpha1 at line 0.
[0]
[0] From function regIOobject::readStream()
[0] in file db/regIOobject/regIOobjectRead.C at line 73.
[0]
FOAM parallel run exiting
[0]
[2]
[2]
[2] --> FOAM FATAL IO ERROR:
[2] cannot find file
[2]
[2] file: /home/musa/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D/processor2/0/alpha1 at line 0.
[2]
[3]
[3]
[3] --> FOAM FATAL IO ERROR:
[3] cannot find file
[3]
[3] file: /home/musa/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D/processor3/0/alpha1 at line 0.
[3]
[3] From function regIOobject::readStream()
[3] in file db/regIOobject/regIOobjectRead.C at line 73.
[3]
FOAM parallel run exiting
[3]
[2] From function regIOobject::readStream()
[2] in file db/regIOobject/regIOobjectRead.C at line 73.
[2]
FOAM parallel run exiting
[2]
[1]
[1]
[1] --> FOAM FATAL IO ERROR:
[1] cannot find file
[1]
[1] file: /home/musa/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D/processor1/0/alpha1 at line 0.
[1]
[1] From function regIOobject::readStream()
[1] in file db/regIOobject/regIOobjectRead.C at line 73.
[1]
FOAM parallel run exiting
[1]
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 3 with PID 2667 on
node ubuntu exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[ubuntu:02663] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
[ubuntu:02663] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help
musa@ubuntu:~/OpenFOAM/musa-2.2.0/run/tutorials/multiphase/interDyMFoam/ras/sloshingTank2D$

musahossein April 9, 2013 19:51

If there are 4 CPUs, do the MPI error messages mean that there must be an alpha1.org file for each processor?

JR22 April 9, 2013 21:05

i3 multicore performance
 
Does your computer have:
1. two separate i3 processors with two cores each, or
2. one dual-core i3 processor with hyperthreading (aka simultaneous multithreading)?

If you have case #1, you should see 8 logical CPUs; if you have case #2, you should see 4 logical CPUs. Only 50% of the logical CPUs can crunch data at full power. For some jobs, however, the other 50% could give you an edge. What I am saying is that your model might finish faster with the decomposition set to two CPUs rather than four. It might also explain your original observation of two CPUs working harder than the other two.

This is a good post that touches on the subject of hyperthreading:
http://www.cfd-online.com/Forums/ope...processor.html
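
A hedged way (not from the thread) to check physical cores vs. hyperthreads from the shell:

lscpu
# compare "Socket(s)", "Core(s) per socket" and "Thread(s) per core":
# physical cores = sockets * cores per socket; logical CPUs = physical cores * threads per core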

erichu April 10, 2013 01:06

Quote:

Originally Posted by musahossein (Post 419519)
If there are 4 CPUs, do the MPI error messages mean that there must be an alpha1.org file for each processor?



I have never used the solver you are using, but I cannot imagine that you need the .org files. If the setup has changed, you might also need to run decomposePar -force to update the processor folders.

In general, I would say that each decomposed processor needs a full set of boundary files.

I ran the interDyMFoam sloshingTank2D (ras) tutorial using ./Allrun, then aborted it when the solver started. After that, decomposePar and finally:
foamJob -s -p interDyMFoam

hakonbar April 10, 2013 03:52

Have you done the damBreak tutorial described in the user guide? It's a pretty good step-by-step walkthrough of setting up a parallel run. It's even multiphase, so it uses the variable alpha.

By the way, I think you forgot to rename the file called "alpha1.org" to "alpha1" before decomposing. The ".org" ending doesn't do anything; it is just there to keep a backup of the alpha1 file that is not modified by the setFields application. This is also described in the damBreak tutorial.
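
In shell terms, a minimal sketch of that step in the case directory (copying instead of renaming keeps the backup; -force is only needed if processor directories already exist):

cp 0/alpha1.org 0/alpha1    # restore the unmodified field
setFields                   # re-initialise alpha1
decomposePar -force         # redo the decomposition so processor*/0/alpha1 exists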

musahossein April 10, 2013 20:54

To all the forum members who put their time and effort into helping me out and responding to my posts -- a heartfelt thank you. Parallel processing finally works. The reasons it was not working are as follows:

1. decomposePar reads all the files in the "0" folder. So it not only read the alpha1, p and U files that I had modified, but also the original files, which I had saved with suffixes such as "old" or "original". The error came from decomposePar reading those original files after reading the files I had modified.

2. The ./Allclean script kept deleting the alpha1 and alpha1.org files with an "rm" command, so I commented out that line.

3. I didn't realize that in decomposeParDict you should keep only the method you want to use and comment out or delete the other options.

After I took care of these items, the parallel processing works very well and the system monitor shows all 4 CPUs running at 99%-100%.

erichu April 11, 2013 03:42

Nice to hear that it finally works.

However, just another piece of information: the other decomposition methods can be left in decomposeParDict. You just have to choose one and have it properly configured; the coeffs of the other methods will not be used and are simply ignored. At least that is how it works for me.

Another thing that might be worth spending time on is the number of CPUs and the decomposition direction.
2 CPUs might be faster than 4 in some cases due to slow communication between the cores, so try with 1, 2, 3 and 4 CPUs and compare the run times.

In some cases the direction also plays a part, i.e. 2 CPUs decomposed as (2 1 1) can be faster than (1 1 2), depending on your geometry; a sketch follows.
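
The two splits mentioned would look like this in decomposeParDict (simple method assumed for illustration):

numberOfSubdomains 2;

method          simple;

simpleCoeffs
{
    n           (2 1 1);   // split along x
    // n        (1 1 2);   // alternative: split along z
    delta       0.001;
}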

musahossein April 11, 2013 09:49

I started without invoking MPI. The tank has 73,441 cells, the time step is 0.02, and the total time is 12 (so I need 12/0.02 = 600 time steps to complete). When I ran this on my laptop without parallel processing, it took 16 hours to complete. When I used parallel processing on the 4 CPUs that my processor has, the computation time was cut down to about 4 hours. So the computation time seemed to scale roughly linearly. When I checked the performance of the CPUs, it showed them running at 99%-100%. I don't know how much time was spent fetching data, but the fact that the computation time was cut to about a quarter was a big leap.
I will try your suggestion about the decomposition. However, since my tank is two-dimensional and the mesh is square (271 cells in each direction), will changing the decomposition direction make a difference? I will try and see.

nanes April 12, 2013 10:33

Quote:

Originally Posted by musahossein (Post 418655)
I have 2 processors. Each processor has 2 CPUs. So should I set numberOfSubdomains to the number of processors (2) or to the number of CPUs (4)? Also, I have to run blockMesh and setFields before running MPI, correct? Thanks.

You do not have 2 processors; you have one processor with two cores, and each core can execute two threads.

http://ark.intel.com/products/47663/...cache-2_13-ghz

The information from gnome-system-monitor is not very accurate.

However, the total number of MPI processes is still 4.

The roughly linear speed-up is due to the small number of cells. When the number of cells becomes large (millions), the hyperthreading of your processor does not bring any benefit.

musahossein April 12, 2013 21:58

My processor is an Intel® Core™ i3-330M
(3M cache, 2.13 GHz x4), which makes me think that there are 4 CPUs. You may be right that GNOME is interpreting each thread as one CPU, hence telling me that there are 4 CPUs. I already realize that the laptop is inadequate for the task at hand, because with only 73,000 cells it takes 4 hours. So I will have to invest in a server, which I am in the process of doing.

musahossein July 17, 2013 22:58

Problems with decomposeParDict
 
Dear all:

I am running OpenFOAM/interDyMFoam/sloshingTank2D on a 12-core (dual hexa-core) machine. However, the parallel processing gives me the following error:

--> FOAM FATAL ERROR:
Wrong number of processor divisions in geomDecomp:
Number of domains : 12
Wanted decomposition : (4 2 2)

In decomposeParDict I have the method set to simple and the decomposition set to (4 2 2), as shown above. So why am I getting this error?

Any help will be greatly appreciated, Thanks!

akidess July 18, 2013 03:46

Because 4*2*2 = 16, not 12.

musahossein July 18, 2013 09:03

You are right. I looked into the source code for geomDecomp and noted that the number of processors must be equal to nx*ny*nz. The manual does not explain that; a sketch that works for 12 cores is below.

Thanks.
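
One of several splits that multiplies out to 12 (simple method assumed; any nx*ny*nz = 12 works):

numberOfSubdomains 12;

method          simple;

simpleCoeffs
{
    n           (4 3 1);   // 4*3*1 = 12 = numberOfSubdomains
    delta       0.001;
}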

