CFD Online Forums > OpenFOAM > OpenFOAM Programming & Development

How to do processor-spanning volume integrals?


#1, September 14, 2015, 09:27
chriss85 (Senior Member; Join Date: Oct 2013; Posts: 397)
I'm wondering how to calculate volume integrals where the integrand depends on both a field value and a position somewhere else in the mesh, so it may need values from two processors. Do I need to communicate one of the values to all other processors first, or is there a better method? Transferring the position to all other processors means quite a lot of communication, because the partial results from the cores then need to be summed. This could probably be reduced by caching the positions, though. Can anyone suggest a better approach?

For some context, I want to calculate the Biot-Savart equation for magnetic fields for use in boundary conditions.

Edit: One other option might be to load the whole mesh on each processor, but I'm not sure whether that is feasible or actually better.
Edit2: Just saw fvc::domainIntegrate(), I guess this could be what I need. I will have to evaluate it for each boundary face on every processor, though, and somehow pass the position of the boundary face along. That will either require some advanced passing around of patch sizes and processor indices, or perhaps a function like Foam::fieldValue::combineFields().
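For reference, the Biot-Savart integral mentioned above reduces, per cell, to a discrete sum. A minimal sketch in plain C++ outside of OpenFOAM (Vec3, biotSavart, etc. are illustrative names, not OpenFOAM types):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b)
{
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

// Discrete Biot-Savart law:
//   B(r) = mu0/(4 pi) * sum_c  J_c x (r - r_c) / |r - r_c|^3 * V_c
// cellCentres, J and V hold the centre, current density and volume of each cell.
Vec3 biotSavart(const std::vector<Vec3>& cellCentres,
                const std::vector<Vec3>& J,
                const std::vector<double>& V,
                const Vec3& r)
{
    const double mu0Over4Pi = 1.0e-7;  // mu0/(4 pi) in SI units
    Vec3 B{0.0, 0.0, 0.0};
    for (std::size_t c = 0; c < cellCentres.size(); ++c)
    {
        const Vec3 d{r.x - cellCentres[c].x,
                     r.y - cellCentres[c].y,
                     r.z - cellCentres[c].z};
        const double mag = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
        if (mag < 1.0e-12) continue;  // skip the singular self-contribution
        const Vec3 JxD = cross(J[c], d);
        const double f = mu0Over4Pi*V[c]/(mag*mag*mag);
        B.x += f*JxD.x;
        B.y += f*JxD.y;
        B.z += f*JxD.z;
    }
    return B;
}
```

In a parallel run the sum over cells is split across processors, which is exactly why the field point r has to be known on every processor before the partial sums can be added.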

Last edited by chriss85; September 14, 2015 at 10:27.

#2, September 15, 2015, 09:31
chriss85
I've made some progress. I'm now using a modified version of Foam::fieldValue::combineFields(Field<Type>& field), which turns out to be usable for combining fields from different processors. I had to add a scatterList() call to distribute the data from the master processor to every processor, though. Once all fields are available everywhere, the integration can be carried out on the processor that owns the patch face.

Unfortunately, this integration is still very slow: a time step now takes about 4x longer. The calculation could probably be improved by optimizing the memory layout so that more data stays in cache, but I'm not really an expert on that.

I'm also thinking about evaluating the integral only on some faces, or on a coarser mesh, and then interpolating to the finer mesh. The results would still be much better than using incorrect boundary conditions for the magnetic vector potential.
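Conceptually, the gather-and-scatter step described above amounts to concatenating the per-processor fields so that every rank ends up with the complete global field and can integrate locally. A minimal sketch in plain C++, with the communication abstracted away (combineFields here is an illustrative stand-in, not the Foam::fieldValue member):

```cpp
#include <vector>

// Stand-in for gatherList + scatterList: concatenate the per-processor
// fields so that every rank can work with the complete global field.
std::vector<double> combineFields(const std::vector<std::vector<double>>& perProc)
{
    std::vector<double> global;
    for (const auto& local : perProc)
    {
        global.insert(global.end(), local.begin(), local.end());
    }
    return global;
}
```

In OpenFOAM the same effect is achieved with Pstream::gatherList() on the master followed by Pstream::scatterList() back to all ranks.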

#3, September 17, 2015, 11:40
chriss85
Switching from the OpenFOAM field implementation to an std::vector container helped by maybe 10-20%; using a coarser boundaryMesh on which to evaluate the integral brought a bigger gain for me.
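The coarse-evaluation idea boils down to computing the expensive integral only at a few sample faces and interpolating to the remaining ones. A one-dimensional sketch of the interpolation step (sample positions must be sorted; all names are illustrative, not OpenFOAM API):

```cpp
#include <cstddef>
#include <vector>

// Linearly interpolate values ys known at sorted coarse positions xs
// onto a query position x, clamping at the ends of the range.
double interpolateCoarse(const std::vector<double>& xs,
                         const std::vector<double>& ys,
                         double x)
{
    if (x <= xs.front()) return ys.front();
    if (x >= xs.back())  return ys.back();
    std::size_t i = 1;
    while (xs[i] < x) ++i;
    const double t = (x - xs[i - 1])/(xs[i] - xs[i - 1]);
    return (1.0 - t)*ys[i - 1] + t*ys[i];
}
```

Since the Biot-Savart field is smooth away from the sources, linear interpolation between a handful of sample faces should introduce far less error than an incorrect boundary condition.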

