February 6, 2020, 08:47 |
What does reduce do in parallel computing?
#1
Member
David Andersson
Join Date: Oct 2019
Posts: 46
Rep Power: 6
Hi foamers,
I am adapting a code I wrote so that it works in parallel, and I have some basic problems. First of all, I don't understand what the reduce command does. Scatter and gather I think I understand more or less: scatter divides my information and spreads it over my procs, and gather collects the information and assembles it on my master proc, correct? But for reduce I have no clue what it is doing. Can someone help me here?
In my code I read the pressure values on each face of some specified patches, do some computations using those values, and then finally update a custom-defined scalarField. I was thinking that I should do the following: gather the pressure field, read the pressure values, perform my computations, update my custom field, and lastly scatter the custom field before the next time step is taken. Is this correct? I am really fumbling in the dark here, so any kind of clarification would be very much appreciated.
Thanks,
David
February 7, 2020, 13:46 |
#2
Senior Member
Herpes Free Engineer
Join Date: Sep 2019
Location: The Home Under The Ground with the Lost Boys
Posts: 932
Rep Power: 12
I think it is just the OpenMPI/MPI reduce command: reduce.
For example, let's assume you have a label (int):
Code:
label myBeautifulLabel = 0;
Each processor assigns its own value to it:
Code:
myBeautifulLabel = functionReturningProcessorID();
And then you sum the contributions over all processors:
Code:
reduce(myBeautifulLabel, sumOp<label>());
If you do coding, you may want to use/search Doxygen from time to time (or all the time). It provides good information on function signatures, where functions have been used, etc.
__________________
The OpenFOAM community is the biggest contributor to OpenFOAM: User guide/Wiki-1/Wiki-2/Code guide/Code Wiki/Journal Nilsson/Guerrero/Holzinger/Holzmann/Nagy/Santos/Nozaki/Jasak/Primer Governance Bugs/Features: OpenFOAM (ESI-OpenCFD-Trademark) Bugs/Features: FOAM-Extend (Wikki-FSB) Bugs: OpenFOAM.org How to create a MWE New: Forkable OpenFOAM mirror
May 11, 2022, 16:10 |
#3
New Member
Dimas
Join Date: Oct 2021
Posts: 4
Rep Power: 4
Hi! I was wondering: what happens if I don't use the reduce command?
I'm trying to print the value of U in a specific cell, so I do the following:
Code:
posCellId = mesh().findCell(mappingPos_);
uProbeVector = vector(1000, 1000, 1000);
uProbeVector = U[posCellId];
and then:
Code:
reduce(uProbeVector, minOp<vector>());
May 12, 2022, 12:38 |
#4
Senior Member
Mark Olesen
Join Date: Mar 2009
Location: https://olesenm.github.io/
Posts: 1,686
Rep Power: 40
You probably have a bigger problem. findCell() returns the index of the cell that encloses that particular position. In parallel this means that only one processor will deliver an index and all of the others will return -1. Don't use -1 to access any arrays, unless you like a SEGFAULT.
So which value would you use on the processors where the position is not found? Generally you would initialise with vector::uniform(GREAT) instead of your value of 1000. If you want to know your uProbeVector globally (i.e., not just on the proc where it is found), you need to transmit that information somehow. This is what the reduce is for. The name is slightly misleading, since it is actually an MPI_Allreduce: after the operation, all processors hold the same value. If that is not what you want, then don't do a reduce.
May 12, 2022, 12:46 |
#5
New Member
Dimas
Join Date: Oct 2021
Posts: 4
Rep Power: 4
Thanks a lot for the reply! It has become much clearer now.
Similar Threads
Thread | Thread Starter | Forum | Replies | Last Post |
Parallel computing on a personal workstation | choiyun | STAR-CCM+ | 6 | April 10, 2017 09:58 |
Parallel computing debate on transient simulations | JuPa | CFX | 3 | December 17, 2013 05:22 |
Error with parallel computing: MPI | therandomestname | FLUENT | 1 | June 28, 2012 04:12 |
Parallel Computing | peter | Main CFD Forum | 7 | May 15, 2006 09:53 |
Parallel Computing Classes at San Diego Supercomputer Center Jan. 20-22 | Amitava Majumdar | Main CFD Forum | 0 | January 5, 1999 12:00 |