Home > Forums > Software User Forums > OpenFOAM > OpenFOAM Programming & Development

Keeping global LDU addressing using decomposePar?


December 6, 2018, 17:35
Keeping global LDU addressing using decomposePar?

klausb (Senior Member; Join Date: Mar 2009; Posts: 142)

I am working on a problem for which I need to maintain the global LDU addresses of the matrix coefficients in each decomposed part of the matrix.

Normally, OpenFOAM applies a local numbering on each processor core/MPI process when a case is decomposed with decomposePar.

Is it possible to avoid the local numbering in the first place, or can the global addresses be "re-" constructed afterwards?


December 30, 2018, 16:37

wyldckat (Bruno Santos; Super Moderator; Lisbon, Portugal; Join Date: Mar 2009; Posts: 10,616)
Greetings Klaus,

Sorry for the late response; right now I don't have a complete answer.
However, what comes to mind is that you should try to get into the NUMAP-FOAM courses if the opportunity arises, or the "CFD with open source software" courses at Chalmers.
Given that you've been looking into this topic for some time now, perhaps you could have gotten into one of those courses...

Moving on, let me compile the threads in which you've posted questions related to this one. I have not merged them into this thread, since each was more or less on its own correct topic. The threads are:
  1. How can I get the number of elements per row form the matrix?
  2. MPI barrier inside OpenFoam
  3. How to do communication across processors
  4. the global index for cells and facess in parallel computation - this one you sent me via PM.

You probably already know this, but just in case, a few rules of thumb in this kind of development:
  1. Use small test meshes, something with 10-20 cells.
  2. First use 1D and/or 2D meshes.
  3. Test if your code works in serial mode.

Now, the first problem is that working directly with the LDU matrix structure has a fairly big inconvenience: it is a decomposition in itself, which means that correlating the cells/faces back and forth with the matrix entries may not be easy to do. The fvMatrix class should pretty much contain the code for how it's done, but I've never studied it properly.

The second problem is preserving the order of the cells as they appear, or at least having a way to identify them. In principle, you can rely on the manual decomposition method or the simple decomposition method to force the mesh to be decomposed in the same order as the original cells. That only works well for structured 1D and possibly 2D meshes, though; 3D will be a pain. In other words, preserving the cell order across subdomains is not straightforward.

Now, what I might be able to provide are instructions for two out of the three missing pieces you are looking for:
  1. A way to have the global numbering of the cells.
  2. To get a copy of the matrices from the remote processes.
The 3rd missing piece will be how exactly you take these two major components and merge them together.

So, the first piece: having a global cell number. This is simple enough and I've roughly explained it at "the global index for cells and facess in parallel computation" - post #2 - namely, generate a field that stores the number of each original cell.
It's pretty much a matter of having a utility or function object that generates a field with the cell IDs, which the following code should do:
#include "argList.H"
#include "timeSelector.H"
#include "Time.H"
#include "fvMesh.H"
#include "volFields.H"

using namespace Foam;

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

int main(int argc, char *argv[])
{
    #include "addRegionOption.H"

    #include "setRootCase.H"
    #include "createTime.H"

    instantList timeDirs = timeSelector::select0(runTime, args);

    #include "createNamedMesh.H"

    forAll(timeDirs, timeI)
    {
        runTime.setTime(timeDirs[timeI], timeI);

        Info<< "Time = " << runTime.timeName() << endl;

        // Check for new mesh
        mesh.readUpdate();

        // Field that stores the ID of each cell in the original mesh order
        volScalarField cci
        (
            IOobject
            (
                "cellIDs",
                runTime.timeName(),
                mesh,
                IOobject::NO_READ,
                IOobject::AUTO_WRITE
            ),
            mesh,
            dimensionedScalar("cellID", dimless, 0.0)
        );

        forAll(cci, celli)
        {
            cci[celli] = scalar(celli);
        }

        cci.write();
    }

    Info<< "\nEnd\n" << endl;

    return 0;
}
This is mostly an example and I didn't even try compiling it. The initialization of the field isn't pretty, but it does the trick.

Then you need to modify the solver application you are using so that it loads this field. You can then access said field at runtime from anywhere in the code, if you follow the examples given here: "Accessing boundary patch dictionary within thermodynamic library" - either the lookup one or the dict one...

That said, the example given here: the global index for cells and facess in parallel computation - post #5 - already gives you both the IDs of the cells and how they are transferred back to the master process.

As for the second piece: transferring arrays to the master process... it's pretty much explained in "How to do communication across processors" - the problem is that the examples there are incomplete and only demonstrate the core transfer functionality.

My apologies, in order to give a better answer, I would need more time to construct the code myself and test it. And I've already spent ... I'm not sure, maybe nearly an hour gathering information and figuring things out...

With some luck, I'll be able to look into this sometime in the near future. In the meantime, if you can provide an example application with which we can do trial and error together, it would make it easier to help you here, instead of having me or anyone else write and test all of the example code. This is mostly because there is no clear-cut way to do what you want, and it requires trial and error...

Best regards,


