
Cases with small length scale work fine on a single processor but fail in parallel


February 25, 2009, 16:44   #1
Member
 
Adam Donaldson
Join Date: Mar 2009
Location: Ottawa, Ontario, Canada
Posts: 37
Hopefully a quick question for someone out there to answer.

I have a case with a cell length scale of ~0.00002 (not unreasonably small). When the case is decomposed, solvers such as interDyMFoam fail to get past the first pressure iteration (i.e. the solver runs through the maximum number of iterations without converging).

Yet when run on a single processor, everything proceeds fine.

I currently have 10 decimal places of precision set in the controlDict file...
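The precision entries I am referring to are in system/controlDict, roughly as follows (an illustrative excerpt with assumed values, not my exact dictionary):

Code:
// system/controlDict -- precision-related entries (illustrative values)
writeFormat     ascii;    // field/mesh write format: ascii or binary
writePrecision  10;       // significant digits used for ascii output
timeFormat      general;
timePrecision   10;       // digits used for time directory names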


Does anyone know whether I need to change any other settings to make a small-scale case work in parallel?

Thanks,

Adam

March 6, 2009, 06:02   #2
Senior Member
 
Mattijs Janssens
Join Date: Mar 2009
Posts: 1,419
What is the case? Refinement/unrefinement or a moving mesh? There should be no dependency on scale (only once dimensions/timesteps approach 1e-15 or so). Do you write the mesh in binary format?

March 26, 2009, 01:46   #3
GAMG / floatTransfer 1?
New Member
 
Nikolaos Spyrou
Join Date: Mar 2009
Posts: 22
Hi Adam,

If you're using the GAMG solver, the parameter "floatTransfer" in the global controlDict (etc/controlDict) is probably set to 1. Setting it to 0 should help.
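For reference, that switch sits in the OptimisationSwitches section of the installation-wide controlDict (e.g. $WM_PROJECT_DIR/etc/controlDict); roughly as below, though the neighbouring entries may differ between OpenFOAM versions:

Code:
// etc/controlDict -- OptimisationSwitches excerpt (surrounding entries may vary per version)
OptimisationSwitches
{
    fileModificationSkew  10;
    commsType             nonBlocking;

    // 0 = exchange parallel data in double precision (safer)
    // 1 = exchange as single-precision floats (less bandwidth, but can overflow)
    floatTransfer         0;

    nProcsSimpleSum       0;
}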

April 16, 2009, 13:29   #4
Still having problems... Test case attached
Member
 
Adam Donaldson
Join Date: Mar 2009
Location: Ottawa, Ontario, Canada
Posts: 37
Hi Mattijs, Nikolaos.

Sorry for the late response to your replies; for some reason my email notification from the forum hasn't been working.

I tried modifications based on both of your comments, with no luck. Using ascii or binary write formats did not appear to have an impact. I also tried moving the pdRefProbe location to ensure it was within the first processor's subdomain (which didn't have an effect). Modifying the floatTransfer setting in the global controlDict file (etc/controlDict) also didn't affect the results.

The simulation runs in parallel for a number of iterations, but eventually fails.

I have attached a simple test case in the hope of obtaining some help in solving this problem (testcase.zip). To minimize the size, I only included the blockMeshDict and boundary files (you'll have to run blockMesh).

The test case is for a droplet in simple shear. Try running the case on a single processor, and on multiple processors (currently set up to decompose into 4).
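For anyone trying to reproduce this, the decomposition is driven by system/decomposeParDict along these lines (a minimal sketch; the method and coefficients shown here are placeholders, the actual dictionary is in testcase.zip):

Code:
// system/decomposeParDict -- minimal 4-way decomposition (placeholder method/coefficients)
numberOfSubdomains  4;

method              simple;

simpleCoeffs
{
    n               (2 2 1);    // 2 x 2 x 1 split of the domain
    delta           0.001;
}

distributed         no;
roots               ( );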

I have also attached the output from my machine for both the single and parallel runs (decomposeTest.zip).


Please take a look and get back to me with any additional suggestions. This is a rather large problem for me, as I have some large (multi-million cell) grids that I need to run cases on, which would take an unreasonable amount of time on a single processor.

I know that the problem is not limited to the interDyMFoam solver, as I have another custom solver that experiences similar problems. It also uses the GAMG solver for the pressure field.
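For context, the pressure solver entry in system/fvSolution is a fairly standard GAMG setup of this form (a generic example, not the exact settings from the attached case; the dictionary layout varies slightly between OpenFOAM versions):

Code:
// system/fvSolution -- typical GAMG entry for the pd equation
// (generic example; not the exact settings from testcase.zip)
solvers
{
    pd GAMG
    {
        tolerance              1e-07;
        relTol                 0.01;
        smoother               GaussSeidel;
        nPreSweeps             0;
        nPostSweeps            2;
        cacheAgglomeration     on;
        nCellsInCoarsestLevel  10;
        agglomerator           faceAreaPair;
        mergeLevels            1;
    };
}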
Attached Files
File Type: zip testcase.zip (23.2 KB, 6 views)
File Type: zip decomposeTest.zip (82.0 KB, 7 views)

Last edited by adona058; April 16, 2009 at 13:54.

April 16, 2009, 13:47   #5
Still having problems... Additional note
Member
 
Adam Donaldson
Join Date: Mar 2009
Location: Ottawa, Ontario, Canada
Posts: 37
As an additional note, there is no adaptive meshing in this case. I am using interDyMFoam as an example since I need it for other, similar cases.

April 17, 2009, 05:41   #6
Senior Member
 
Mattijs Janssens
Join Date: Mar 2009
Posts: 1,419
From your runParFoam.o56029 file:

Pstream initialized with:
floatTransfer : 1

If you look at the error traceback, it is due to a SIGFPE - usually a division by zero or an overflow. A division by zero would show up in the traceback as originating from '/', so it is an overflow. With floatTransfer set to 1, data exchanged between processors is converted to single precision, so values that are perfectly representable as doubles can overflow a float. See also http://www.cfd-online.com/Forums/ope...tml#post210878
