Home > Forums > OpenFOAM Meshing & Mesh Conversion

redistributePar does not interpolate properly on the processor boundaries

Old   May 28, 2013, 16:05
Default redistributePar does not interpolate properly on the processor boundaries
  #1
New Member
 
Matteo Cerminara
Join Date: Feb 2012
Posts: 14
Hello,

here my initial and boundary condition on the pressure:
Code:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.0.0                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       volScalarField;
    location    "0";
    object      p;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

dimensions      [1 -1 -2 0 0 0 0];

internalField   uniform 101325;

boundaryField
{
    inlet
    {
        type            fixedValue;
        value           uniform 101325;
    }
    wall
    {
        type            fixedValue;
        value           uniform 101325;
    }
    vertical
    {
        type            totalPressure;
        U               U;
        p0              uniform 101325;
        rho             rho;
        psi             none;
        gamma           1.4;
        value           uniform 101325;
    }

    top
    {
        type            totalPressure;
        U               U;
        p0              uniform 101325;
        rho             rho;
        psi             none;
        gamma           1.4;
        value           uniform 101325;
    }
}


// ************************************************************************* //
and the decomposeParDict

Code:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.0.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

numberOfSubdomains 2;


method          simple;
//method          scotch;

simpleCoeffs
{
    n        (1 1 2);
    delta           0.001;
}

hierarchicalCoeffs
{
    n               (3 2 1);
    delta           0.001;
    order           xyz;
}

manualCoeffs
{
    dataFile        "cellDecomposition";
}


// ************************************************************************* //
When I use the following bash commands (N = 2):
Code:
let "Nm1 = N - 1"
for i in $( seq 0 1 $Nm1 )
do
    mkdir processor$i
done
mkdir processor0/{0,constant}
cp -r constant/polyMesh processor0/constant/
cp 0/* processor0/0/

mpirun -np $N redistributePar -parallel -overwrite
I get a strange decomposed case, where the pressure value is not correct at the boundary between the two processors (it drops to zero).
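For reference, the same preparation steps can be written as one small script (a sketch, assuming bash; `mkdir -p` and the existence checks just make it safe to re-run, N=2 as in the post):

```shell
#!/usr/bin/env bash
# Sketch of the preparation step for redistributePar (assumes bash).
N=2
for i in $(seq 0 $((N - 1))); do
    mkdir -p "processor$i"
done

# only processor0 needs the undecomposed mesh and initial fields
mkdir -p processor0/0 processor0/constant
if [ -d constant/polyMesh ]; then
    cp -r constant/polyMesh processor0/constant/
fi
if [ -d 0 ]; then
    cp -r 0/. processor0/0/
fi

# afterwards: mpirun -np $N redistributePar -parallel -overwrite
```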


Does anyone understand, or can anyone guess, what's happening?

Thanks!

Matteo

PS: I don't want to use decomposePar because I need to do all the pre-processing in parallel (of course on a larger number of processors)

Old   May 28, 2013, 17:37
Default
  #2
Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 8,251
Blog Entries: 34
Greetings Matteo and welcome to the forum!

Well, even if you don't want to use decomposePar, you still have to use decomposePar. This is because redistributePar needs sub-domain data to be present, otherwise it will do some very crazy stuff... I even wonder why it didn't crash in the first place.

For pre-processing in parallel, it depends on what you really want to do. Many of OpenFOAM's pre-processing applications will work with the "-parallel" option, along with mpirun.

The only reason I can see for you to do this is if your mesh is too large to be decomposed on a single machine. Of course the question then is: how was the mesh generated in the first place?

Either way, if you can provide some more information about the workflow you need to achieve, then it'll be easier to give you some good directions on how to proceed.

Best regards,
Bruno

Old   May 28, 2013, 19:34
Default
  #3
New Member
 
Matteo Cerminara
Join Date: Feb 2012
Posts: 14
Greetings Bruno, and many thanks for the quick reply!!!

This forum is an irreplaceable resource!

About the first problem: I get the same result if I start by decomposing the case onto two processors and then redistributing it onto 4. Here is the output image:


I used the following commands:
Code:
decomposePar // on two processors

// modify the decomposeParDict dictionary

mpirun -np 4 redistributePar -parallel -overwrite  // on 4 processors
I found the possibility of using redistributePar directly described in its header:
Code:
/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     |
    \\  /    A nd           | Copyright (C) 2011 OpenFOAM Foundation
     \\/     M anipulation  |
-------------------------------------------------------------------------------
License
    This file is part of OpenFOAM.

    OpenFOAM is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
    ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
    FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
    for more details.

    You should have received a copy of the GNU General Public License
    along with OpenFOAM.  If not, see <http://www.gnu.org/licenses/>.

Application
    redistributePar

Description
    Redistributes existing decomposed mesh and fields according to the current
    settings in the decomposeParDict file.

    Must be run on maximum number of source and destination processors.
    Balances mesh and writes new mesh to new time directory.

    Can also work like decomposePar:
    \verbatim
        # Create empty processor directories (have to exist for argList)
        mkdir processor0
                ..
        mkdir processorN

        # Copy undecomposed polyMesh
        cp -r constant processor0

        # Distribute
        mpirun -np ddd redistributePar -parallel
    \endverbatim
\*---------------------------------------------------------------------------*/
Can using this procedure lead to problems?

The test case I am using here has a practically isotropic, orthogonal mesh with 36x36x72 cells.


Going forward, I will try to explain my workflow, even if it is a little complicated.
The issue comes up because I have as much RAM as I want for parallel applications, but not for serial ones.
In principle, I would like to take a coarse test case, and:
- redistribute the coarse mesh to a bigger number N of processors; I'm using
mpirun -np $N redistributePar -parallel -overwrite
- refine the coarse mesh; I'm using
mpirun -np $N refineMesh -parallel -overwrite
- map the non-uniform fields of the coarse mesh onto the finer one; I'm using
mapFields -consistent -parallelTarget -sourceTime 1e-08 .
because I use the non-decomposed case as the source

Executing these steps, I found some problems; this thread describes the first one. Then:
- I need to use mapFields because my pressure field is not uniform, both in the internalField and in the boundaryField entries. But it does not seem to act either on the p0 entry of the totalPressure boundary condition or on the inletValue entry of the inletOutlet condition for the temperature field (while it works on the value entries).
- I found a way to use mapFields with a decomposed source too (using the -parallelSource flag), but I was not able to find a way to run mapFields itself in parallel.

I thank you in advance for any hint or suggestion you could give me.

Best regards,
Matteo

Old   May 29, 2013, 17:01
Default
  #4
Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 8,251
Blog Entries: 34
Hi Matteo,

I had forgotten about that header... I saw it several months ago and it slipped my mind. Although I thought that decomposePar did some more magic, even when decomposing to a single processor folder...

OK, there are only two things I can think of right now:
  1. It looks like you might be using OpenFOAM 2.0.0 or 2.0.1, at least according to the beginning of the files you posted. So I'd advise you to upgrade to either OpenFOAM 2.1.x or 2.2.x.
  2. Can you create a small test case that anyone else on the forum can try, so they can help you? Something similar to the tutorial cases in OpenFOAM? Because setting up these cases takes time.
Best regards,
Bruno

Old   June 3, 2013, 11:34
Default Test case for refining and mapping in parallel -- with bug
  #5
New Member
 
Matteo Cerminara
Join Date: Feb 2012
Posts: 14
Hi Bruno,
in the end I found a bit of time to tidy up my test case and create one for the forum. Here it is:

turbulentInletCFD.zip

You can find inside a bash script doing the steps I would like to do.

I would be grateful to anyone who can help me solve the problem described in the posts above.

Matteo

Old   June 16, 2013, 15:59
Default
  #6
Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 8,251
Blog Entries: 34
Hi Matteo,

Sorry for taking so long to look into this but I finally figured out what's going on.
Actually, I detected a couple of bugs in redistributePar, thanks to your test case!

OK, let's look at one issue at a time:
  1. At first it looked like either the mapFields utility was not prepared for a situation where a specific entry holds a non-uniform list, or the boundary condition itself is not coded to handle this kind of behaviour.
  2. But then I tried to use the following in "decomposeParDict":
    Code:
    preservePatches (vertical top);
    This is for cyclic patches, because usually cyclics shouldn't be split up between processors...
    And I switched to scotch, to make it easier to obey this limitation.
  3. This led me to find that redistributePar does not respect this "preservePatches" entry. That is a bug, because the preservation of cyclics is not being honoured.
  4. Even worse, redistributePar is doing something very strange in the new processor folders:
    Code:
        vertical
        {
            type            totalPressure;
            rho             rho;
            psi             none;
            gamma           1.4;
            p0              nonuniform List<scalar> 
    2536
    (
    6.944773671333971e-310
    6.944773671333971e-310
    ...
    This is clearly garbage, because the other two processors have got this:
    Code:
        vertical
        {
            type            totalPressure;
            rho             rho;
            psi             none;
            gamma           1.4;
            p0              uniform 101325;
            value           uniform 101325;
        }
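For reference, the decomposition settings described above would sit in "decomposeParDict" roughly like this (a sketch; the patch names come from Matteo's case, and numberOfSubdomains is only illustrative):

```
numberOfSubdomains 4;

method          scotch;

// keep all faces of these patches together on a single processor
preservePatches (vertical top);
```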
Now, I would suggest that you report this on the bug tracker: http://www.openfoam.org/bugs/
If you do not want to, or can't report this for some reason (time?), please give me permission to report it for you.


As for a solution in the meantime, it's simple: rely on changeDictionary to restore things back to normal after redistributing the mesh+fields:
Code:
echo "redistributing..."
mpirun -np $N redistributePar -parallel -overwrite > logRed 2>&1

echo "restoring initial 0 fields to the new decomposition..."
mpirun -np $N changeDictionary -parallel > logChg 2>&1

echo "refining..."
mpirun -np $N refineMesh -parallel -overwrite > logRef 2>&1

echo "mapping..."
mapFields -consistent -parallelTarget -sourceTime 1e-08 . > logMap 2>&1
Keep in mind that changeDictionary needs the file "changeDictionaryDict". I used it before refining, so that the problem wouldn't grow out of proportion.
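A minimal "changeDictionaryDict" for restoring the affected patch might look like this (a sketch based on the pressure file from the first post; in OpenFOAM 2.x the replacements go inside a dictionaryReplacement block, and the "top" patch would need an analogous entry):

```
dictionaryReplacement
{
    p
    {
        boundaryField
        {
            vertical
            {
                type            totalPressure;
                U               U;
                rho             rho;
                psi             none;
                gamma           1.4;
                p0              uniform 101325;
                value           uniform 101325;
            }
        }
    }
}
```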

Attached is the fixed case.

Best regards,
Bruno
Attached Files
turbulentInletCFD_fixed.zip (34.8 KB)

Old   June 17, 2013, 09:09
Default
  #7
New Member
 
Matteo Cerminara
Join Date: Feb 2012
Posts: 14
Hi Bruno,
thanks for the fixed case!!! I will try it as soon as possible, and I will tell you how it works!

About the bug reporting: I have never tried to submit a bug to http://www.openfoam.org/bugs/ so, if it won't take you too long, please feel free to submit it. I will learn how to do it from your report!
Otherwise, I will try!

Best regards,
Matteo

Old   June 17, 2013, 18:46
Default
  #8
Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 8,251
Blog Entries: 34
Hi Matteo,

It's going to be a long week for me. I'll look into submitting it during the next weekend.

Best regards,
Bruno

Old   October 13, 2013, 16:06
Default
  #9
New Member
 
Dan Kokron
Join Date: Dec 2012
Posts: 23
Bruno,

Did these bugs get reported/resolved? I don't see anything related in Mantis.

Thanks
Dan

Old   October 14, 2013, 16:43
Default
  #10
Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 8,251
Blog Entries: 34
Hi Dan,

Unfortunately I haven't had the time yet to properly report this bug, especially because I haven't managed to reproduce it with a simpler test case. But it's still on my to-do list.

And I didn't want to provide this complicated test case, since it would make it harder for them to ascertain where the problem really is.

So Dan, if you have a simpler test case where this bug can be reproduced, feel free to report it!

Best regards,
Bruno

Old   February 16, 2014, 15:53
Default
  #11
Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 8,251
Blog Entries: 34
Greetings to all!

OK, I've done a really quick test with the original case that Matteo provided and I believe that this issue has been fixed in OpenFOAM 2.2.x, thanks to this bug report: http://www.openfoam.org/mantisbt/view.php?id=1130

If anyone can double check this, please let us know if this is truly fixed or not!

Best regards,
Bruno
