Home > Forums > Fluent UDF and Scheme Programming

fluent UDF parallel problem

January 8, 2017, 21:24   #1
New Member
Tan Peilai
Join Date: Jan 2017
Posts: 5
This is my UDF. I want to calculate the mass flow at the pressure outlet and return it to the mass flow inlet. It works well in serial, but when I use it in parallel the mass flow at the inlet is zero. (The model is transient; the mass flow is returned at the next time step.) I have read the UDF manual, searched for the same question, and modified the UDF for weeks; I also tried F_UDMI, but it still works incorrectly. Could someone help me? Thank you very much!

#include "udf.h"
#define P_outlet_ID 2

real flow = 0.;   /* global: written at the outlet, read at the inlet */

DEFINE_PROFILE(MP_mass_pri_l, t, i)
{
    face_t f;

    begin_f_loop(f, t)
    {
        F_PROFILE(f, t, i) = flow;   /* impose last measured outlet flow */
    }
    end_f_loop(f, t)
}

DEFINE_EXECUTE_AT_END(MP_measure_mass_flow)
{
    Domain *d;
    Thread *th_p;
    face_t f;

    d = Get_Domain(1);
    th_p = Lookup_Thread(d, P_outlet_ID);

    flow = 0.0;
    begin_f_loop(f, th_p)
    {
        flow += F_FLUX(f, th_p);   /* sum face fluxes at the outlet */
    }
    end_f_loop(f, th_p)
}

Last edited by tanpeilai; January 8, 2017 at 23:07.

January 8, 2017, 23:10   #2
New Member
Tan Peilai
Join Date: Jan 2017
Posts: 5
I used Message to print the variable's value: only the last node returns the right value, the others are zero, and the monitor of the mass flow inlet reads zero.

Old   January 9, 2017, 06:24
Default
  #3
Senior Member
 
Kevin
Join Date: Dec 2016
Posts: 138
Rep Power: 2
KevinZ09 is on a distinguished road
Parallel UDFs work quite differently from serial ones. The problem you're encountering probably comes from the outlet being on a different partition than the inlet. When you calculate the mass flow through the outlet, the partition that contains those faces updates its value of "flow". However, the partition responsible for the inlet still has "flow = 0.", so when the inlet mass flow is set, it remains zero. You're going to need a "node_to_host_real_1" call and a "host_to_node_real_1" call to make sure the updated value of flow is known by all the nodes.

For a similar reason F_UDMI doesn't work: the partition that contains the data is different from the partition that needs the data.

There's quite a good parallel UDF example in the UDF manual, in section 7.8.

See if it helps you. If not, let me know what you don't get or what's not working.

Old   January 10, 2017, 01:36
Default
  #4
New Member
 
Tan Peilai
Join Date: Jan 2017
Posts: 5
Blog Entries: 1
Rep Power: 2
tanpeilai is on a distinguished road
Thank you for your answer. I read the manual and modified my UDF, and I also found some macros that might help, but with no results. The UDFs I tried are as follows:
#include "udf.h"
#define P_outlet_ID 2

real flow;

DEFINE_PROFILE(mass_pri_l, t, i)
{
    face_t f;

    begin_f_loop(f, t)
    {
        F_PROFILE(f, t, i) = flow;
    }
    end_f_loop(f, t)
}

DEFINE_EXECUTE_AT_END(measure_mass_flow)
{
    Domain *d;
    Thread *th_p;
    face_t f;

    d = Get_Domain(1);
    th_p = Lookup_Thread(d, P_outlet_ID);

#if !RP_NODE
    flow = 0.0;

    begin_f_loop(f, th_p)
    {
        flow += F_FLUX(f, th_p);
    }
    end_f_loop(f, th_p)
#endif

    host_to_node_real_1(flow);
}

and another UDF using a global reduction, flow = PRF_GRSUM1(flow):

#include "udf.h"
#define P_outlet_ID 2

DEFINE_PROFILE(mass_pri_l, t, i)
{
    Domain *d;
    Thread *th_p;
    face_t f1, f2;
    real flow;

    d = Get_Domain(1);
    th_p = Lookup_Thread(d, P_outlet_ID);

    flow = 0.0;
    begin_f_loop(f1, th_p)
    {
        flow += F_FLUX(f1, th_p);
    }
    end_f_loop(f1, th_p)

    flow = PRF_GRSUM1(flow);

    begin_f_loop(f2, t)
    {
        F_PROFILE(f2, t, i) = flow;
    }
    end_f_loop(f2, t)
}

Thank you!

Old   January 10, 2017, 12:28
Default
  #5
Senior Member
 
Kevin
Join Date: Dec 2016
Posts: 138
Rep Power: 2
KevinZ09 is on a distinguished road
I'm short on time now; I'll probably have more time tomorrow. But my suggestion would be to use DEFINE_ADJUST instead of DEFINE_EXECUTE_AT_END: it updates the value at the start of the timestep/iteration, and the DEFINE_PROFILE then adjusts your boundary value accordingly. Then hook the DEFINE_ADJUST macro.

Old   January 11, 2017, 02:44
Default
  #6
New Member
 
Tan Peilai
Join Date: Jan 2017
Posts: 5
Blog Entries: 1
Rep Power: 2
tanpeilai is on a distinguished road
Thank you for your kindness; I am trying it now. I had used DEFINE_ADJUST before too, but I think I didn't use it correctly. If I get it working, I will tell you right away.

Old   January 11, 2017, 06:23
Default
  #7
Senior Member
 
Kevin
Join Date: Dec 2016
Posts: 138
Rep Power: 2
KevinZ09 is on a distinguished road
Here's a UDF I think should work, though I haven't tried it. Either way, give it a shot, or compare it with yours, and see if it works. If not, or yours doesn't, let me know.

Code:
#include "udf.h"
#define P_outlet_ID 2

  real flow;  /* defined outside because will be used in multiple DEFINE macros */

DEFINE_ADJUST(adjust, domain)
{

  /* "Parallelized" Sections */
  #if !RP_HOST  /* Compile this section for computing processes only (serial
         and node) since these variables are not available on the host */
     Thread *thread;
     face_t f;
     thread = Lookup_Thread(domain, P_outlet_ID);

     flow = 0.0;

     begin_f_loop(f, thread) /* loop over all faces in thread "thread" */
     {
        /* If this is the node to which face "officially" belongs,*/
        if (PRINCIPAL_FACE_P(f,thread)) /* Always TRUE in serial version */
        {
           flow +=F_FLUX(f,thread);
        }
     }
     end_f_loop(f, thread)

     #if RP_NODE
        /* Perform node synchronized actions here. Does nothing in Serial */
        flow = PRF_GRSUM1(flow);
     #endif /* RP_NODE */

  #endif /* !RP_HOST */

}


DEFINE_PROFILE(mass_pri, thread, position)
{
  /* "Parallelized" Sections */
  #if !RP_HOST  /* Compile this section for computing processes only (serial
         and node) since these variables are not available on the host */
     face_t f;
     begin_f_loop(f, thread)
     {
        F_PROFILE(f, thread, position) = flow;
     }
     end_f_loop(f, thread)
 #endif /* !RP_HOST */
}

Old   January 12, 2017, 04:17
Default
  #8
New Member
 
Tan Peilai
Join Date: Jan 2017
Posts: 5
Blog Entries: 1
Rep Power: 2
tanpeilai is on a distinguished road
Thank you again, it works very well. When I change DEFINE_ADJUST to DEFINE_EXECUTE_AT_END, it works too.

But there is a difference. With DEFINE_ADJUST, the inlet mass flow comes from the previous iteration, so the two monitors (pressure outlet and mass flow inlet) are not equal. With DEFINE_EXECUTE_AT_END they are not equal either, but the inlet value equals the outlet value from the previous time step and doesn't change within a time step. I think that may be better for mass conservation. Thank you.
step  flow-time    surf-mon-1    surf-mon-2
 239  8.6180e+01  -4.8995e-02   0.0000e+00
 240  8.6280e+01  -4.8406e-02   4.8995e-02
 241  8.6380e+01  -4.7777e-02   4.8406e-02

#include "udf.h"
#define P_outlet_ID 2

real flow; /* defined outside because it is used in multiple DEFINE macros */

DEFINE_EXECUTE_AT_END(measure_mass_flow)
{
  /* "Parallelized" section: compile for computing processes only
     (serial and node); these variables are not available on the host */
  #if !RP_HOST
     Domain *domain;
     Thread *thread;
     face_t f;

     domain = Get_Domain(1);
     thread = Lookup_Thread(domain, P_outlet_ID);

     flow = 0.0;

     begin_f_loop(f, thread) /* loop over all faces in thread "thread" */
     {
        /* count the face only on the node to which it "officially" belongs */
        if (PRINCIPAL_FACE_P(f,thread)) /* always TRUE in serial */
        {
           flow += F_FLUX(f,thread);
        }
     }
     end_f_loop(f, thread)

     #if RP_NODE
        /* node-synchronized global sum; does nothing in serial */
        flow = PRF_GRSUM1(flow);
     #endif /* RP_NODE */

  #endif /* !RP_HOST */
}


DEFINE_PROFILE(mass_pri, thread, position)
{
  #if !RP_HOST /* computing processes only */
     face_t f;

     begin_f_loop(f, thread)
     {
        F_PROFILE(f, thread, position) = flow;
     }
     end_f_loop(f, thread)
  #endif /* !RP_HOST */
}

It's almost the same as yours. Thank you very much.

Old   January 12, 2017, 04:58
Default
  #9
Senior Member
 
Kevin
Join Date: Dec 2016
Posts: 138
Rep Power: 2
KevinZ09 is on a distinguished road
In steady-state runs, there isn't much difference between the two, except for when they are called and executed. DEFINE_EXECUTE_AT_END is indeed executed at the end of an iteration, but the outcome isn't used yet: you calculate the mass flow rate, but it isn't applied until the next iteration starts, because the macro runs after everything else in the iteration. With DEFINE_ADJUST, it's called at the start of the iteration, before Fluent updates anything else. So the value is effectively applied at the same point in either case; the only difference is when you, as the user, can access the value of flow. Either way, it won't influence the solution until the next iteration. So, to my understanding, it won't affect mass conservation either, as the flow equations, residuals, and convergence checks have already been updated before DEFINE_EXECUTE_AT_END is called.

In transient runs, it's different though. DEFINE_EXECUTE_AT_END is called only at the end of a time step, while DEFINE_ADJUST is called at the start of every iteration, so the latter is called more frequently if you have multiple iterations per time step. So it depends on what you want as well.

Old   March 9, 2017, 16:19
Default
  #10
New Member
 
Nevada
Join Date: Apr 2014
Posts: 24
Rep Power: 5
razi.me05 is on a distinguished road
Hi, since this is a recent thread and the context is similar, I thought I might ask my question here: I have written the following UDF for calculating the power on a 2D wall. It runs totally fine in serial, but when I run it in parallel it crashes with a SIGSEGV error. I put in some message flags to identify where it gets stuck, and I saw that it stops before
Code:
node_to_host_real_1(power);
and it does not run any of the

Quote:
#if !RP_HOST
.
.
#endif
sections. My UDF is the following:

Code:
DEFINE_EXECUTE_AT_END(POWER_CALC_500)
{
	
	
	real power = 0.0;
	
	#if !RP_HOST
	Domain *dom;
	Thread *thl, *ths, *ct;
	Node *v;
	int n;
	cell_t c;
	face_t f;
	real x[2],y[2];
	real A[ND_ND];
	real dl;
	real tx, ty;
	real powl = 0.0, pows = 0.0;
	#endif
	
	#if !RP_NODE
	FILE *fp;
	fp = fopen("power_500.dat","a");
	#endif
	
	
	#if !RP_HOST
	dom = Get_Domain(1);
	thl = Lookup_Thread(dom,11);
	ths = Lookup_Thread(dom,14);
	
	begin_f_loop_int (f, thl)
	{
		if (PRINCIPAL_FACE_P(f, thl))
		{
		f_node_loop (f, thl, n)
        {
			v = F_NODE (f, thl, n);
			x[n] = NODE_X (v);
			y[n] = NODE_Y (v);
		}	 

		dl = sqrt(pow(x[1]-x[0],2)+pow(y[1]-y[0],2));	
		c = F_C0(f, thl);
		ct = THREAD_T1(thl);
		tx = -C_P(c,ct)+mu*(2*C_DUDX(c,ct)+C_DVDX(c,ct)+C_DUDY(c,ct));
		ty = -C_P(c,ct)+mu*(2*C_DVDY(c,ct)+C_DVDX(c,ct)+C_DUDY(c,ct));
		powl += (tx*C_U(c,ct)+ty*C_V(c,ct))*dl;
		}
	}
	end_f_loop_int (f, thl);

	
	begin_f_loop_int (f, ths)
	{
		if (PRINCIPAL_FACE_P(f, ths))
		{
		f_node_loop (f, ths, n)
        {
			v = F_NODE (f, ths, n);
			x[n] = NODE_X (v);
			y[n] = NODE_Y (v);
		}	 

		dl = sqrt(pow(x[1]-x[0],2)+pow(y[1]-y[0],2));	
		c = F_C0(f, ths);
		ct = THREAD_T0(ths);
		tx = -C_P(c,ct)+mu*(2*C_DUDX(c,ct)+C_DVDX(c,ct)+C_DUDY(c,ct));
		ty = -C_P(c,ct)+mu*(2*C_DVDY(c,ct)+C_DVDX(c,ct)+C_DUDY(c,ct));
		pows += (tx*C_U(c,ct)+ty*C_V(c,ct))*dl;
		} 
	}
	end_f_loop_int (f, ths);


	
	power = powl-pows;
	
	#if RP_NODE
	power = PRF_GRSUM1(power);
	#endif
	#endif
	
	
	
	node_to_host_real_1(power);
		
	#if !RP_NODE
	fprintf(fp, "%1.6e %1.6e \n", CURRENT_TIME, power);
   	fclose(fp);
	#endif
	
}
I would really appreciate it if someone could help.

Old   March 9, 2017, 19:04
Default
  #11
New Member
 
Nevada
Join Date: Apr 2014
Posts: 24
Rep Power: 5
razi.me05 is on a distinguished road
Never mind, I finally figured it out: THREAD_T1 does not even exist in my case.
