CyclicAMI BC preservePatches Parallel Run - again!?!


February 2, 2014, 12:15   #1
hxaxtma (Member, joined Jan 2014, 63 posts)
Hi guys,

I am trying to simulate an open Couette channel flow with periodic BCs:

in: cyclicAMI
out: cyclicAMI
FrontAndBack: slip
bottom: no slip
top: no slip with fixedValue
decomposition method: scotch with preservePatches (in out);

I know I have to keep my cyclic patches on one processor, therefore I used preservePatches (in out); in the decomposeParDict file.
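For reference, a minimal sketch of such a decomposeParDict (only the three keywords shown matter here; keyword placement may differ between OpenFOAM versions):

Code:
numberOfSubdomains  4;

method              scotch;

// keep the faces of both cyclicAMI patches together during decomposition
preservePatches     (in out);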

If I decompose onto 4 CPUs, the simulation runs.

Checking the decomposed boundary files with

Code:
grep -A4 in processor*/constant/polyMesh/boundary | grep nFaces
gives for in:
Code:
processor0/constant/polyMesh/boundary-        nFaces          0;
processor0/constant/polyMesh/boundary-        nFaces          0;
processor0/constant/polyMesh/boundary-        nFaces          5143;
processor0/constant/polyMesh/boundary-        nFaces          3474;
processor0/constant/polyMesh/boundary-        nFaces          2150;
processor1/constant/polyMesh/boundary-        nFaces          2320;
processor1/constant/polyMesh/boundary-        nFaces          2320;
processor1/constant/polyMesh/boundary-        nFaces          5143;
processor2/constant/polyMesh/boundary-        nFaces          0;
processor2/constant/polyMesh/boundary-        nFaces          0;
processor2/constant/polyMesh/boundary-        nFaces          3474;
processor2/constant/polyMesh/boundary-        nFaces          3392;
processor3/constant/polyMesh/boundary-        nFaces          0;
processor3/constant/polyMesh/boundary-        nFaces          0;
processor3/constant/polyMesh/boundary-        nFaces          2150;
processor3/constant/polyMesh/boundary-        nFaces          3392;
and for out:
Code:
processor0/constant/polyMesh/boundary-        nFaces          0;
processor1/constant/polyMesh/boundary-        nFaces          2320;
processor2/constant/polyMesh/boundary-        nFaces          0;
processor3/constant/polyMesh/boundary-        nFaces          0;
Processor 1 contains my in patch with 2320 faces and my out patch with 2320 faces. The output for out is clear, but what are all the other face counts in the output for in (5143, 3392, etc.)?
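A side note of my own, not from the thread: the -A4 context lines make grep pick up nFaces entries from neighbouring patches in the same file, which is why extra counts appear. Printing the patch name next to each count avoids the guessing; a small sketch, assuming the usual polyMesh/boundary layout where a patch name sits alone on its own line before its { ... } block:

```shell
# print "file patch nFaces" for every patch in every processor boundary file
awk '$1 ~ /^[A-Za-z_][A-Za-z0-9_]*$/ && NF == 1 { name = $1 }
     $1 == "nFaces" { gsub(/;/, "", $2); print FILENAME, name, $2 }' \
    processor*/constant/polyMesh/boundary
```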


So far so good, but if I try massive parallelisation on a cluster with 128 CPUs, my simulation fails with the following error, EVEN with 1 CPU! On my local workstation everything is OK.

Code:
...
Reading/calculating face flux field phi

AMI: Creating addressing and weights between 2320 source faces and 2320 target faces
--> FOAM Warning : 
    From function AMIInterpolation<SourcePatch, TargetPatch>::checkPatches(const SourcePatch&, const TargetPatch&)
    in file lnInclude/AMIInterpolation.C at line 111
    Source and target patch bounding boxes are not similar
    source box span     : (42.4264 84.8528 10)
    target box span     : (26.0597 76.9991 10)
    source box          : (-42.4264 -42.4264 0) (3.55271e-15 42.4264 10)
    target box          : (-43.7641 -39.1969 0) (-17.7044 37.8022 10)
    inflated target box : (-47.8592 -43.292 -4.09511) (-13.6093 41.8973 14.0951)
[88] 
[88] 
[88] --> FOAM FATAL ERROR: 
[88] Unable to set source and target faces
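For what it's worth, the spans in that warning are just the per-axis differences of the printed box corners; recomputing them (numbers copied from the error above) confirms the source and target patches cover clearly different regions, so AMI cannot pair their faces:

```shell
# recompute box spans (max corner minus min corner, per axis)
echo "-42.4264 -42.4264 0 3.55271e-15 42.4264 10" |
    awk '{ printf "source span: %.4f %.4f %g\n", $4-$1, $5-$2, $6-$3 }'
echo "-43.7641 -39.1969 0 -17.7044 37.8022 10" |
    awk '{ printf "target span: %.4f %.4f %g\n", $4-$1, $5-$2, $6-$3 }'
```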
Where is the problem?

Thanks for the help

Last edited by hxaxtma; February 3, 2014 at 07:12.

February 3, 2014, 05:53   #2
hxaxtma (Member, joined Jan 2014, 63 posts)
I really need help on this, so any advice would be appreciated.

February 6, 2014, 18:48   #3
wyldckat, Bruno Santos (Super Moderator, Lisbon, Portugal, joined Mar 2009, 8,312 posts)
Quick answer: I don't have time to look into this any time soon, but I think you can find a lot of valuable information in this thread: Problem using AMI
