chtMultiRegionFoam: problem with the tutorial
Dear All,
I am trying to learn to use chtMultiRegionFoam and I am starting with the tutorials. The first tutorial I want to run is multiRegionHeater. I enter the case directory and give the command:

Code:
./Allrun

Code:
lab@lab-laptop:~/Scrivania/multiRegionHeater$ ./Allrun

Also, where can I find an explanation of this solver, since I guess it is a bit difficult to set everything up properly? Thanks, Samuele |
Greetings Samuele,
The Allrun script uses a method of keeping a log of every application that is executed. If you look into the files "log.*", you should find the reason why things aren't working as expected. As for documentation, I'm not familiar with any document online for the "chtMultiRegion*Foam" solvers, so I suggest that you search for it ;) Failing that, start studying the files that the tutorial case has, as well as looking at the code for the solver itself. Best regards, Bruno |
I looked at the different log files and I noticed that there are problems in log.chtMultiRegionFoam and in log.reconstructPar. These are the two files:

Code:
/*---------------------------------------------------------------------------*\

Code:
/*---------------------------------------------------------------------------*\

Thanks a lot, Samuele |
Hi Samuele,
You didn't specify if you had changed anything in the simulation case. Anyway, here are the steps to fix things:
Bruno |
Hi Bruno and thanks for answering.
The steps you suggested make the tutorial work fine. Thanks a lot, Samuele |
It does not work for me.
In the processor* directories, I don't have any time directories after the run, except 0/ and constant/:

Code:
processor0:  0  constant
processor1:  0  constant
processor2:  0  constant
processor3:  0  constant

All the time directories are in the base directory:

Code:
0  10  20  30  Allclean  Allrun  .......  constant  makeCellSets.setSet  processor0  processor1  processor2  processor3  README.txt  system

This is with Ubuntu 10.04. Everything else is OK, and this was working with the older version. Any suggestions? Thanks

This is written in a log file produced with mpirunDebug:

Code:
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
|
Greetings Alain,
Can you be a bit more specific?
Bruno |
Dear Alain,
I suggest you try running the case on a single processor first. Then you can try to parallelize it! Samuele |
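Samuele's suggestion as a command sketch. The solver name is taken from the tutorial and the processor count of 4 is only an example; the commands need a loaded OpenFOAM environment, so the sketch guards for that:

```shell
# Run the solver in serial first; only then decompose the case and try
# a parallel run. Guarded so the sketch is harmless on a machine
# without OpenFOAM in the PATH.
if command -v chtMultiRegionFoam >/dev/null 2>&1; then
    chtMultiRegionFoam > log.serial 2>&1        # serial sanity check
    decomposePar > log.decomposePar 2>&1        # split the case
    mpirun -np 4 chtMultiRegionFoam -parallel > log.parallel 2>&1
    result="parallel run attempted"
else
    result="OpenFOAM environment not loaded"
fi
echo "$result"
```

If the serial run already fails, the problem is in the case setup, not in MPI, which narrows the search considerably.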
@wyldckat
1. It is 10.04
2. 2.1.0
3. From the deb pkg
4. All parallel tutorials give the same messages

Code:
mpirun -np 4 xxxxxx

works as expected, but the work is not distributed.

Code:
mpirun -np 4 xxxxxx -parallel

gives the error messages. I went back to the previous version 2.0.0 and everything is OK. |
Hi Alain,
OK, it would be really useful to see a good log with errors, so it can be easier to diagnose the real error. Please run mpirun in a similar way to this:

Code:
mpirun -n 4 interFoam -parallel > log.interFoam 2>&1

Then compress the file:

Code:
tar -czf log.interFoam.tar.gz log.interFoam

Another thing you can look at is whether there is any folder and/or file present at "~/.OpenFOAM/", which is where OpenFOAM will look for global configuration files for the user. Best regards, Bruno |
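That second check can be done with a quick command; a sketch, where "~/.OpenFOAM" is the standard per-user configuration directory mentioned above:

```shell
# Report whether a per-user OpenFOAM configuration directory exists;
# stale files in it can interfere with a newly installed version.
if [ -d "$HOME/.OpenFOAM" ]; then
    cfg_state="present"
    ls -la "$HOME/.OpenFOAM"
else
    cfg_state="absent"
fi
echo "per-user OpenFOAM configuration: $cfg_state"
```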
I found the only method that works so far with my setup:
http://www.cfd-online.com/Forums/ope...12-04-lts.html All the others (2.1.1 deb pkg, 2.1.1 tgz source) are not compiling or running as they should. Only the 2.1.x from git works flawlessly. Thanks for the suggestions anyway. |
In my case the tutorial works fine, but after modifying the geometry with topoSetDict and running the Allrun script, I found the following errors in the log files, as shown below:

log.reconstructPar

Code:
/*---------------------------------------------------------------------------*\

Code:
/*---------------------------------------------------------------------------*\

Best regards, Mukut |
Hi Mukut,
Not much information to work with. All I can guess is:
Best regards, Bruno |
Thank you Mr. Bruno,
I found some mistakes in the fvSchemes and fvSolution files of a region that I created after modifying the tutorial geometry, and I have corrected those. Now the simulation is running, but it takes a long time. I have modified controlDict as follows to complete the simulation in a shorter time...

Code:
/*--------------------------------*- C++ -*----------------------------------*\

How can I reduce the simulation time? Best regards, mukut |
Knowing the characteristics of the mesh and the solver used, as well as the contents of the "fv*" files, and how exactly you are running the case, would help. |
I have a question here: your time step is 0.1 s, so why does your solver compute with a different step? One way to reduce the run time is to reduce the number of cells, especially along the axis with small variation; then you can refine your mesh based on the results of the coarse mesh. You can find this method in the OpenFOAM user manual, in chapter two I think. Good luck, |
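Besides coarsening the mesh, transient runs are usually shortened through the time controls in controlDict. A hedged sketch of the relevant entries; the values are illustrative examples, not taken from the tutorial:

```cpp
// Illustrative controlDict excerpt -- example values only.
endTime         60;      // stop the run earlier
deltaT          0.001;   // initial time step
adjustTimeStep  yes;     // let the solver enlarge the step when stable
maxCo           0.5;     // Courant number limit for the adjustment
maxDeltaT       0.1;     // upper bound on the adjusted step
```

With adjustTimeStep enabled, the solver grows the step as far as the Courant limit allows, which often cuts the wall-clock time substantially compared to a small fixed deltaT.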
Thanks for the reply. I have changed to the steady-state solver chtMultiRegionSimpleFoam. Now it works with my modified geometry. |
I hope you can help me. I am running a multi-region case, and I have read the instructions in the Allrun script of this tutorial. I decomposed my case and every processor has its respective part of each region, but when I launch the solver the following problem appears: Cannot find file "points" in directory "polyMesh" in times 23.6 down to constant. (23.6 is my starting time.) I have checked that every region in every processor has the respective constant folder with the respective polyMesh/points file. I executed this line:

Code:
mpirun -np 4 my_solver -parallel 1> runlog

Does a special statement exist for running multi-region cases? I hope I have been clear. Best regards, Miguel. |
Same Error
Cannot find file "points" in directory "polyMesh" in times 23.6 down to constant. (23.6 is my starting time).
I get the same error with decomposePar for a multi-region case. Did you find any solution to this problem? Thanks, Arpan |
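For reference, the multi-region tutorials decompose the background mesh and every region separately (this is what the multiRegionHeater Allrun script loops over). A guarded sketch using the tutorial's region names, which would need adapting to your own case:

```shell
# Decompose the background mesh and then each region in turn, so that
# every processor*/constant/<region>/polyMesh gets populated. Region
# names below are the multiRegionHeater tutorial's, not a general list.
if command -v decomposePar >/dev/null 2>&1; then
    decomposePar > log.decomposePar 2>&1
    for region in bottomAir topAir heater leftSolid rightSolid; do
        decomposePar -region $region > "log.decomposePar.$region" 2>&1
    done
    outcome="decomposed"
else
    outcome="OpenFOAM environment not loaded"
fi
echo "$outcome"
```

If a region is skipped during decomposition, the solver later fails with exactly this kind of "Cannot find file points" message for that region, so checking each log.decomposePar.* is a good first step.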