CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
-   OpenFOAM (http://www.cfd-online.com/Forums/openfoam/)
-   -   chtMultiRegionFoam: problem with the tutorial (http://www.cfd-online.com/Forums/openfoam/99532-chtmultiregionfoam-problem-tutorial.html)

samiam1000 April 5, 2012 12:12

chtMultiRegionFoam: problem with the tutorial
 
Dear All,

I am trying to learn how to use chtMultiRegionFoam, and I am starting with the tutorials.

The first tutorial I wanted to run is multiRegionHeater.

I enter the case directory and give the command:

Code:

./Allrun
I get this error:
Code:

lab@lab-laptop:~/Scrivania/multiRegionHeater$ ./Allrun
Running blockMesh on /home/lab/Scrivania/multiRegionHeater
Running topoSet on /home/lab/Scrivania/multiRegionHeater
Running splitMeshRegions on /home/lab/Scrivania/multiRegionHeater
Running chtMultiRegionFoam in parallel on /home/lab/Scrivania/multiRegionHeater using 2 processes


--> FOAM FATAL ERROR:
No times selected

    From function reconstructPar
    in file reconstructPar.C at line 139.

FOAM exiting



[... the same "No times selected" error is printed four more times, once for each remaining region, since reconstructPar is run separately per region ...]

creating files for paraview post-processing

created 'multiRegionHeater{bottomAir}.OpenFOAM'
created 'multiRegionHeater{topAir}.OpenFOAM'
created 'multiRegionHeater{heater}.OpenFOAM'
created 'multiRegionHeater{leftSolid}.OpenFOAM'
created 'multiRegionHeater{rightSolid}.OpenFOAM'

Do you know what's wrong and what I should do?

Also, where can I find an explanation of this solver? I guess it is a bit difficult to set everything up properly.

Thanks,

Samuele

wyldckat April 5, 2012 12:53

Greetings Samuele,

The Allrun script keeps a log of every application it executes. If you look into the "log.*" files, you should find the reason why things aren't working as expected.
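
For example, a quick way to scan them all (a minimal sketch; run it from the case directory):
Code:

grep -l "FOAM FATAL" log.*          # list the logs that contain a fatal error
tail -n 30 log.chtMultiRegionFoam   # inspect the end of the solver's log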

As for documentation, I'm not aware of any document online for the "chtMultiRegion*Foam" solvers, so I suggest that you search for one ;)
Failing that, start studying the files in the tutorial case, as well as the code for the solver itself.

Best regards,
Bruno

samiam1000 April 6, 2012 03:59

I looked at the various log files and noticed that there are problems in log.chtMultiRegionFoam and log.reconstructPar.

Here are the two files:
Code:

/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.1.0                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build  : 2.1.0-0bc225064152
Exec  : chtMultiRegionFoam -parallel
Date  : Apr 05 2012
Time  : 16:55:55
Host  : "lab-laptop"
PID    : 7962
[0] --------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------

[0]
[0] --> FOAM FATAL ERROR:
[0] "/home/lab/Scrivania/multiRegionHeater/system/decomposeParDict" specifies 4 processors but job was started with 2 processors.
[0]
FOAM parallel run exiting
[0]
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 7962 on
node lab-laptop exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

and
Code:

/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.1.0                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build  : 2.1.0-0bc225064152
Exec  : reconstructPar -region rightSolid
Date  : Apr 05 2012
Time  : 16:55:56
Host  : "lab-laptop"
PID    : 7968
Case  : /home/lab/Scrivania/multiRegionHeater
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Actually, I do have 2 processors, and I don't know why it doesn't work. How can I make it run on a single processor? What should I do? Could anyone help?

Thanks a lot,

Samuele

wyldckat April 6, 2012 15:48

Hi Samuele,

You didn't specify whether you had changed anything in the simulation case. Anyway, here are the steps to fix things:
  1. Run Allclean:
    Code:

    ./Allclean
  2. Edit the file Allrun and find the following line:
    Code:

    runParallel `getApplication` 4
    The last number is the number of parallel processes to use. Change this if you have to; I'll assume you want to use 2 processes.
  3. Edit the file "system/decomposeParDict" and find this line:
    Code:

    numberOfSubdomains  4;
    Change that 4 to 2 as well, or whichever number you want to use, and keep the "method" set to "scotch":
    Code:

    method          scotch;
  4. Run Allrun once again. (Steps 2 and 3 are sketched as shell commands below.)
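
For reference, here is a minimal sketch of steps 2 and 3 as sed one-liners; this assumes GNU sed, that you run them from the case directory, and that the files still contain the stock lines quoted above:
Code:

# step 2: use 2 parallel processes in the Allrun script
sed -i 's/runParallel `getApplication` 4/runParallel `getApplication` 2/' Allrun
# step 3: keep numberOfSubdomains consistent with that
sed -i 's/numberOfSubdomains  4;/numberOfSubdomains  2;/' system/decomposeParDict
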
Best regards,
Bruno

samiam1000 April 10, 2012 04:22

Hi Bruno and thanks for answering.

The steps you suggested make the tutorial work fine.

Thanks a lot,

Samuele

jam June 5, 2012 09:54

It does not work for me.

In the processor* directories, I don't have any time directories after the run, only 0/ and constant/:

processor0:
0 constant

processor1:
0 constant

processor2:
0 constant

processor3:
0 constant

All the time directories are in the base directory:

0 10 20 30 Allclean Allrun ....... constant makeCellSets.setSet processor0 processor1 processor2 processor3 README.txt system


This is with Ubuntu 10.04

Everything else is OK, and this was working with an older version.

Any suggestions?
Thanks

This is written in a log file generated with mpirunDebug:

*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)

wyldckat June 6, 2012 17:11

Greetings Alain,

Can you be a bit more specific?
  1. Are you 100% certain it's Ubuntu 10.04? Or is it 12.04?
  2. What OpenFOAM version are you talking about? Is it 2.1.1?
  3. Are you using the deb package version? Namely this one: http://www.openfoam.org/download/ubuntu.php ?
  4. Are you running the tutorial case "heatTransfer/chtMultiRegionFoam/multiRegionHeater"?
  5. What's the error message in file "log.chtMultiRegionFoam"?
Best regards,
Bruno

samiam1000 June 7, 2012 06:44

Dear Alain,

I suggest you try running the case on a single processor first.

Then you can try to parallelize it!
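
For example (a rough sketch, assuming the mesh has already been built and split into regions), you can run the solver serially and keep a log:
Code:

chtMultiRegionFoam > log.chtMultiRegionFoam 2>&1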

Samuele

jam June 7, 2012 19:00

@wyldckat

1. It is 10.04
2. 2.1.0
3. From the deb pkg
4. All parallel tutorials give the same messages

mpirun -np 4 xxxxxx runs as expected, but the work is not distributed

mpirun -np 4 xxxxxx -parallel gives the error messages


I went back to the previous version, 2.0.0, and everything is OK.

wyldckat June 8, 2012 17:38

Hi Alain,

OK, it would be really useful to see a complete log with the errors, to make it easier to diagnose the real problem. Please run mpirun in a way similar to this:
Code:

mpirun -n 4 interFoam -parallel > log.interFoam 2>&1
This way the errors are also sent to the main log file. Then search for and replace any sensitive data in the log.
Then compress the file:
Code:

tar -czf log.interFoam.tar.gz log.interFoam
And attach the compressed file "log.interFoam.tar.gz" to your next post.

Another thing you can check is whether there are any folders and/or files present in "~/.OpenFOAM/", which is where OpenFOAM looks for the user's global configuration files.
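
For example (nothing case-specific, just a quick look):
Code:

ls -A ~/.OpenFOAM/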

Best regards,
Bruno

jam June 9, 2012 17:10

I found the only method that works so far with my setup:

http://www.cfd-online.com/Forums/ope...12-04-lts.html

All the others (the 2.1.1 deb package, the 2.1.1 tgz source) do not compile or run as they should.

Only the 2.1.x from git works flawlessly.

Thanks for the suggestions anyway.

mukut October 16, 2013 21:50

In my case the tutorial works fine, but after modifying the geometry via topoSetDict and running the Allrun script, I found the following errors in the log files:

log.reconstructPar

Code:

/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.2.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build  : 2.2.1-57f3c3617a2d
Exec  : reconstructPar -allRegions
Date  : Oct 16 2013
Time  : 19:53:41
Host  : "mukut-Endeavor-MR3300"
PID    : 5013
Case  : /home/mukut/OpenFOAM/mukut-2.2.1/run/tutorials/heatTransfer/chtMultiRegionFoam/multiRegionHeater
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time



--> FOAM FATAL ERROR:
No times selected

    From function reconstructPar
    in file reconstructPar.C at line 178.

FOAM exiting

log.chtMultiRegionFoam

Code:

/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.2.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build  : 2.2.1-57f3c3617a2d

Exec  : chtMultiRegionFoam -parallel
Date  : Oct 16 2013
Time  : 16:15:49
Host  : "mukut-Endeavor-MR3300"
PID    : 4242
Case  : /home/mukut/OpenFOAM/mukut-2.2.1/run/tutorials/heatTransfer/chtMultiRegionFoam/multiRegionHeater
nProcs : 4
Slaves :
3
(
"mukut-Endeavor-MR3300.4243"
"mukut-Endeavor-MR3300.4244"
"mukut-Endeavor-MR3300.4245"
)

Pstream initialized with:
    floatTransfer      : 0
    nProcsSimpleSum    : 0
    commsType          : nonBlocking
    polling iterations : 0
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations


// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time


Create fluid mesh for region bottomAir for time = 0

Create fluid mesh for region topAir for time = 0

Create solid mesh for region heater for time = 0

[0]
[0]
[0] --> FOAM FATAL ERROR:
[0] Cannot find file "points" in directory "heater/polyMesh" in times 0 down to constant
[0]
[0]    From function Time::findInstance(const fileName&, const word&, const IOobject::readOption, const word&)
[0]    in file db/Time/findInstance.C at line 203.
[0]
FOAM parallel run exiting
[0]
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[... ranks [1], [2] and [3] report the same "Cannot find file "points"" error ...]
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 4242 on
node mukut-Endeavor-MR3300 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[mukut-Endeavor-MR3300:04241] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
[mukut-Endeavor-MR3300:04241] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

Besides, there are no time directories inside the processor0~processor3 directories.

Best regards,
Mukut

wyldckat October 17, 2013 15:38

Hi Mukut,

Not much information to work with. All I can guess is:
  1. There is nothing to reconstruct, which is why reconstructPar gave you that message.
  2. The solver is complaining about a missing mesh region. Either you:
    1. have not updated the file that defines which regions are solid and which are fluid (constant/regionProperties);
    2. or something went wrong when you ran decomposePar;
    3. or you did not split the mesh up into its dedicated regions.
The output of topoSet and the contents of "topoSetDict" would help in understanding things a bit better.
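
As a sketch (assuming the stock Allrun sequence for this tutorial), you can rerun the region-splitting steps by hand and keep the logs:
Code:

topoSet > log.topoSet 2>&1
splitMeshRegions -cellZones -overwrite > log.splitMeshRegions 2>&1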

Best regards,
Bruno

mukut October 17, 2013 21:06

Thank you Mr. Bruno,

I found some mistakes in the fvSchemes and fvSolution files of a region that I created when modifying the tutorial geometry, and I have corrected them. Now the simulation is running, but it is taking a long time. I have modified controlDict as follows, to complete the simulation in a shorter time...

Code:

/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.2.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version    2.0;
    format      ascii;
    class      dictionary;

    location    "system";
    object      controlDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //


libs
(
    "libcompressibleTurbulenceModel.so"
    "libcompressibleRASModels.so"
);

application    chtMultiRegionFoam;

startFrom      latestTime;

startTime      0.1;

stopAt          endTime;

endTime        0.2;

deltaT          0.1;

writeControl    adjustableRunTime;

writeInterval  0.1;

purgeWrite      0;

writeFormat    binary;

writePrecision  8;

writeCompression off;

timeFormat      general;

timePrecision  6;

runTimeModifiable yes;

maxCo          0.3;

// Maximum diffusion number
maxDi          10.0;

adjustTimeStep  yes;

// ************************************************************************* //

Almost 18 hours have gone by; the simulation time now shows 0.125983, and the end time is 0.2.

How can I reduce the simulation time?

Best regards,
mukut


wyldckat October 18, 2013 04:55

Quote:

Originally Posted by mukut (Post 457545)
How can I reduce the simulation time?

Quick answer: Sorry, not enough information to work with here :(
Knowing the characteristics of the mesh and the solver used, as well as the contents of the "fv*" files, and how exactly you are running the case, would help.

Ahmed Khattab October 24, 2013 04:54

Quote:

Originally Posted by mukut (Post 457545)
How can I reduce the simulation time?


I have a question here: your time step is 0.1 s, so how is your solver computing times with a different step?

One way of reducing the run time is to reduce the number of cells, especially along directions with little variation; you can then refine your mesh based on the results of the coarse mesh. You can find this method in the OpenFOAM user manual, in chapter two I think.

Good Luck,

mukut October 24, 2013 05:12

Thanks for the reply. I have changed to the steady-state solver chtMultiRegionSimpleFoam. Now it works with my modified geometry.

derekm March 27, 2014 11:01

Quote:

Originally Posted by wyldckat (Post 353484)
Hi Samuele,

You didn't specify whether you had changed anything in the simulation case. Anyway, here are the steps to fix things:
  1. Run Allclean:
    Code:

    ./Allclean
  2. Edit the file Allrun and find the following line:
    Code:

    runParallel `getApplication` 4
    The last number is the number of parallel processes to use. Change this if you have to; I'll assume you want to use 2 processes.
  3. Edit the file "system/decomposeParDict" and find this line:
    Code:

    numberOfSubdomains  4;
    Change that 4 to 2 as well, or whichever number you want to use, and keep the "method" set to "scotch":
    Code:

    method          scotch;
  4. Run Allrun once again.
Best regards,
Bruno

Alas, it's not quite that simple (at least under 2.3, with tutorials/heatTransfer/chtMultiRegionFoam/multiRegionHeater): you need to do this for the decomposeParDict of each region as well as for the top-level one, i.e. system/[region]/decomposeParDict and system/decomposeParDict.
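
For example, something along these lines should patch all of them at once (a sketch only, assuming GNU sed and the 2.3 tutorial layout):
Code:

for dict in system/decomposeParDict system/*/decomposeParDict
do
    sed -i 's/numberOfSubdomains.*/numberOfSubdomains  2;/' "$dict"
done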

