
What does reduce do in parallel computing?


February 6, 2020, 08:47   #1

David Andersson (sippanspojk)
Member
Join Date: Oct 2019
Posts: 46
Rep Power: 6
Hi foamers,

I am adapting a code I wrote to work in parallel and I have some basic problems.

First of all, I don't understand what the reduce command does. Scatter and gather I think I understand more or less: scatter divides my information and spreads it over my processors, and gather collects the information and assembles it on my master processor, correct? But for reduce I have no clue what it is doing. Can someone help me here?



In my code I am reading the pressure values of each face of some specified patches, doing some computations using those values and then finally updating a custom defined scalarField.

I was then thinking that I should do the following: gather the pressure field, read the pressure values, perform my computations, update my custom field, and lastly scatter the custom field before the next time step is taken. Is this correct?
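Roughly, this is the shape I had in mind, using Pstream calls I have seen in other solvers. The names here (pPatch, myCustomField) are just placeholders from my own code, so please read this as a sketch of the idea rather than working code:

Code:
// collect the per-processor pieces of the patch pressure on the master
List<scalarField> gatheredP(Pstream::nProcs());
gatheredP[Pstream::myProcNo()] = pPatch;
Pstream::gatherList(gatheredP);

if (Pstream::master())
{
    // do my computations on the assembled data and update myCustomField here
}

// send the result back out to all processors before the next time step is taken
Pstream::scatterList(gatheredP);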


I am really fumbling in the dark here so any kind of clarification would be very much appreciated.

Thanks,
David

February 7, 2020, 13:46   #2

HPE
Senior Member
Herpes Free Engineer
Join Date: Sep 2019
Location: The Home Under The Ground with the Lost Boys
Posts: 932
Rep Power: 12
I think it is just the OpenMPI/MPI reduce operation, wrapped by OpenFOAM as reduce.

For example, let's assume you have a label (int):

Code:
label myBeautifulLabel = 0;
You then perform some operation on this label on each processor, for example:

Code:
myBeautifulLabel = functionReturningProcessorID();
And you finally want to obtain the total value of all `myBeautifulLabel`s across all processors:

Code:
reduce(myBeautifulLabel, sumOp<label>());
So the value of `myBeautifulLabel` becomes the sum of all `myBeautifulLabel`s across the processors, and this value is now known locally on every processor.
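For what it's worth, other binary operations follow the same pattern, and there is also a returnReduce() convenience form that returns the reduced value instead of modifying its argument:

Code:
// maximum over all processors instead of the sum
reduce(myBeautifulLabel, maxOp<label>());

// returnReduce() leaves its argument untouched and returns the reduced value
const label total = returnReduce(myBeautifulLabel, sumOp<label>());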

If you do coding, you may want to consult the Doxygen documentation from time to time (or all the time). It provides good information on function signatures, where they are used, and so on.

May 11, 2022, 16:10   #3

Dimas (dimasbarile)
New Member
Join Date: Oct 2021
Posts: 4
Rep Power: 4
Hi! I was wondering, what happens if I don't use the reduce command?

I'm trying to print the value of U in a specific cell. To do that I do the following:

Code:
label posCellId = mesh().findCell(mappingPos_);

vector uProbeVector = vector(1000, 1000, 1000);

uProbeVector = U[posCellId];
OpenFOAM does this on every processor. Then I use the reduce command:

Code:
reduce(uProbeVector, minOp<vector>());
My question is: if I don't use the reduce command and I ask OpenFOAM for the value of "uProbeVector", what value does OpenFOAM return? I did the test and the result was similar (but not equal) to the one with the reduce command, so I was wondering how it got to that value. Thanks!

May 12, 2022, 12:38   #4

Mark Olesen (olesen)
Senior Member
Join Date: Mar 2009
Location: https://olesenm.github.io/
Posts: 1,686
Rep Power: 40
You probably have a bigger problem. findCell() returns the index of the cell that encloses that particular position. In parallel this means that only one processor will deliver a valid index and all of the others will return -1. Don't use -1 to access any arrays, unless you like a SEGFAULT.


So which value would you use on the processors where the position is not found? Generally you would initialise with something like vector::uniform(GREAT) instead of your value of 1000. If you want to know your uProbeVector globally (i.e. not just on the processor where it is found), you need to transmit that information somehow, and this is what the reduce is about. The name is actually slightly misleading, since under the hood it is an MPI_Allreduce, so after this operation all processors will have the same value.
If that is not what you want, then don't do a reduce.
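As a minimal sketch of that pattern, reusing the names from your snippet (and assuming mappingPos_ lies inside exactly one processor's part of the mesh):

Code:
// only the processor whose mesh contains the point gets a valid cell index;
// everywhere else findCell() returns -1
const label posCellId = mesh().findCell(mappingPos_);

// initialise with a huge value so the component-wise minOp keeps the real sample
vector uProbeVector = vector::uniform(GREAT);

if (posCellId >= 0)
{
    uProbeVector = U[posCellId];
}

// effectively an MPI_Allreduce: afterwards every processor holds the sampled value
reduce(uProbeVector, minOp<vector>());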

May 12, 2022, 12:46   #5

Dimas (dimasbarile)
New Member
Join Date: Oct 2021
Posts: 4
Rep Power: 4
Thanks a lot for the reply! It has become much clearer now.
