June 8, 2015, 11:15 |
Problem with parallel run
#1
New Member
Join Date: Apr 2015
Posts: 16
Rep Power: 11
Hi,
Here is a copy of a run in progress. My problem is the value of my variable on slave processor 7: why is it different from the other slave processors? I don't understand. I used a User Fortran routine, but the problem does not seem to come from it. Thanks!
June 8, 2015, 12:24 |
#2
Senior Member
Join Date: Jun 2009
Posts: 1,804
Rep Power: 32
Your code (User Fortran) is writing some value of temperature from each partition, and you are asking why they differ. Is that value the temperature at, say, a node, a boundary, the global maximum, an average, etc.?
Describing your goal, how you are trying to achieve it, and the current symptoms would help others in the forum pitch in; otherwise, it is nearly impossible to contribute.
June 8, 2015, 15:10 |
#3
New Member
Join Date: Apr 2015
Posts: 16
Rep Power: 11
Yes, I realize that I have not been very explicit.
TEMP is not a temperature; it is a variable that I set. This variable depends on the mass flow rate, which is the inlet boundary condition of my problem. The goal of TEMP is to compute, at each iteration, a criterion that measures convergence. What I don't understand is the execution of the parallel run. In my other calculations the output structure was:
Slave 2: ... Slave 3: ... Slave 4: ... Slave 5: ... Slave 6: ... Slave 7: ... Slave 8: ...
Now it is:
Slave 2: ... Slave 3: ... Slave 4: ... Slave 5: ... Slave 6: ... Slave 7: ... Slave 8: ... Slave 7: ...
Why is there another execution of slave 7 at the end?
June 8, 2015, 15:40 |
#4
Senior Member
Join Date: Jun 2009
Posts: 1,804
Rep Power: 32
Let us review.
In any case, you seem to be assuming a specific parallel programming paradigm (say, one ordered call per XXX) when writing your custom code (User Fortran); however, the software is free to call your custom code "on demand", and it is up to you to handle such events. Imagine the software calls the custom code on the inlet for one sub-group of element faces at a time. For example, let's say you have 100 faces on the inlet, and the code breaks them down into 3 groups of 30 and 1 group of 10 for a serial run. Then the custom code for the inlet will be called 4 times. In a parallel run, the situation is more complex, because some partitions will have faces on the inlet and some will not. Have you accounted for such situations?
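The calling pattern described above can be sketched as follows. This is Python rather than User Fortran, and the function names are hypothetical; it only illustrates why a routine written assuming "one ordered call" breaks when the solver invokes it once per sub-group of faces.

```python
# Sketch of the "on demand" calling pattern: the solver may invoke the
# user routine once per sub-group of inlet faces, not once per partition.
# All names here are illustrative, not CFX API.

def solver_driver(user_routine, face_groups, state):
    # The solver decides how to split the inlet faces; the user routine
    # may be called any number of times, on any subset of faces.
    for group in face_groups:
        user_routine(group, state)

# WRONG: assumes exactly one call, so each call overwrites the previous.
def user_routine_overwrite(faces, state):
    state["total"] = sum(faces)

# RIGHT: accumulates, so any number of calls yields the correct result.
def user_routine_accumulate(faces, state):
    state["total"] = state.get("total", 0.0) + sum(faces)

# 100 "faces" (unit contributions) split into 3 groups of 30 and 1 group
# of 10, as in the example above: four calls in total.
faces = [1.0] * 100
groups = [faces[0:30], faces[30:60], faces[60:90], faces[90:100]]

bad, good = {}, {}
solver_driver(user_routine_overwrite, groups, bad)
solver_driver(user_routine_accumulate, groups, good)
print(bad["total"])   # 10.0  -- only the last group survives
print(good["total"])  # 100.0 -- correct total over all four calls
```

The same reasoning explains an extra "Slave 7" line: if that partition's inlet faces arrive in two sub-groups, a routine that prints on every call will print twice for that slave.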
June 10, 2015, 04:15 |
#5
New Member
Join Date: Apr 2015
Posts: 16
Rep Power: 11
Thanks for your reply, Opaque!