Hello again Christina,
Based on your test case (rotor2Dggi.rar) and on the syntax used for the ggi patches definition in your boundary file, I can affirm right away that you are using a slightly outdated version of OF-1.5-dev.

So the best solution I can propose to you is this: either svn update, or

svn co https://openfoam-extend.svn.sourceforge.net/svnroot/openfoam-extend/trunk/Core/OpenFOAM-1.5-dev OpenFOAM-1.5-dev

Recompile this new version and take your test case out for a new spin. After a necessary modification to the ggi patch definition, your test case ran flawlessly on my system, from beginning to end, without any floating point errors. And some of the early intermediate velocity solutions I computed look a lot like the little picture you posted on the IMVT Web site.

So if you plan on working with the GGI and OF-1.5-dev, I would strongly suggest that you keep your copy of OF-1.5-dev in sync with the latest version available on openfoam-extend. You will benefit right away from a high-quality version of OF, and also from all of the new GGI features that will be made available in the months to come.

Cheers,
Martin
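For reference, here is a rough sketch of what a ggi patch definition in constant/polyMesh/boundary looks like in this era of 1.5-dev, as far as I recall it; the patch and zone names are placeholders, nFaces/startFace depend on your mesh, and the shipped mixerGgiFvMesh tutorial in an updated tree is the authoritative reference:

```
insideSlider
{
    type            ggi;
    nFaces          <nFaces>;      // mesh-dependent
    startFace       <startFace>;   // mesh-dependent
    shadowPatch     outsideSlider; // the paired ggi patch
    zone            insideZone;    // face zone covering this patch
    bridgeOverlap   false;
}
```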
Hi
My name is Fredrik Hellstrom and I have been using FOAM 1.4.1-dev for a while. I'm performing LES computations of the flow in a radial turbine with a rotating wheel, and hence I'm using the mixerGgiFvMesh. It works well, both in serial and parallel mode.

I have recently changed to version 1.5-dev (date: 090202). The simulations with the turbine (with a rotating wheel) are working fine in serial mode, but not in parallel. At the stage when the flux should be calculated for the first point, I'm getting a floating point error on all sub-domains except the one where the sliding interface is located (I have both sides of the sliding interface located on the same processor).

Am I doing anything wrong, or is the mixerGgi function not implemented in a way that works in parallel?

Thanks in advance for any help or tips.

Fredrik
Can you try the debug version and give me the traceback please?
Hrv
Hi Hrv
Here is the output (if I manage to attach the file)! I hope it will help you!

Best regards,
Fredrik
Hi
I will try again to upload the file.

Attachment: slurm-401560.out

/Fredrik
That won't help: I need the debug traceback with line numbers.
Hrv
Hello Fredrik,
Your trace file is informative enough. Basically, we can see that 18 out of your 24 processors are crashing in a low-level GGI method used for computing the GGI weights (rescaleWeightingFactors). This means that your GGI is spread out over more than one processor, which is currently a big no-no.

For a parallel simulation involving GGIs, you should expect to see messages related to the evaluation of the GGI weighting factors coming from only 1 out of your 'n' processors. So I would suggest you revisit your parallel decomposition...

Martin
Hello Martin,
thanks for having a look at my case. I thought my OpenFOAM version was up to date because I downloaded the last update from powerlab.fsb.hr on February 2. This morning I got the latest version from SourceForge and installed it.

I tried running the same case with the new version. The first thing I noticed is the missing definition of a pressure value at the ggi. Why do I have to define a pressure value at the interfaces? These values are also missing in the mixerGgi tutorial.

Running my case, I still have the same problems as before. The calculation and the flow field look pretty good during the first timesteps, until about 0.0025 s. At this point the flow field starts diverging. Increasing the number of correctors in the PISO loop, I would expect the number of necessary iterations to decrease with each loop, but this doesn't happen during the calculation; the number of iterations remains at about 600. When I tried to decrease the number of correctors and non-orthogonal correctors, the floating point error came up.

Do I really need so many corrector steps for the calculation? Do you have any idea what else I could change in order to make this case run?

Thank you very much for your help,
Christina
Hi Hrvoje and Martin
My case is correctly decomposed; the GGI interface is on one processor. If I delete the definition of the GGI boundary (insideSlider and outsideSlider) in the boundary file (processor*/constant/polyMesh/boundary) of each processor that does not contain any sliding interface, it works. Of course, I also have to change the number of defined boundaries in each boundary file.

I have compiled the DEBUG version, but it gives no further information than what I posted a few days ago. I have to check the debugSwitches in controlDict.

What do you think: can this problem be solved by modifying the decomposer, or is it better to change the ggi algorithm?

Kind regards,
Fredrik
Hello Fredrik,
Found the problem. It is now fixed. Just update your local 1.5-dev installation from the svn repository on openfoam-extend.

Thank you for your patience.

Martin
Hello Christina,
> The first thing I noticed is the missing definition of a pressure value at the ggi. Why do I have to define a pressure value at the interfaces? These values are also missing in the mixerGgi tutorial.

Yes, this is new, and the tutorial is not updated yet. We need to explicitly specify an initial value for the ggi boundary fields because otherwise they will all be set to 0 by default at first use. This was causing a division-by-zero error when using the k-epsilon turbulence model and the GGI. Basically, just set this initial value to the same value as the internal field.

About your simulation blowing up: I can't help you much here. I am about to run some simple ggi validation test cases in order to check whether I get strange problems as well. I will keep you posted.

Martin
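Concretely, a minimal sketch of such an entry in a 0/p field file might look like the following; the patch name insideSlider is a placeholder taken from Fredrik's case, and the uniform value should simply match your internalField:

```
insideSlider
{
    type            ggi;
    value           uniform 0;   // set to the same value as internalField
}
```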
Hi Martin
Thank you, I will do so! I'll come back if I find any more problems.

Best regards,
Fredrik
Hallo, all,
I am just making the transition from a 2D geometry on one computer to a 3D geometry on a cluster, and I am wondering how to set up the interface for turbDyMFoam in a 3D model. Can anyone post the tutorial "mixer3D" for me? Actually, I don't know where to find it. I would be very grateful. Thanks in advance.

Hai Yu
http://powerlab.fsb.hr/ped/ktu
http://wiki.uni-due.de/OpenFOA
Thank you, Louis, thank you, Pal,
I have followed the tutorial. I just have one more question: I am experiencing exactly the same problem as Christina. I am now using 1.5-dev (090202); must I update too? And can anybody confirm that turbDyMFoam with a sliding interface does not run in parallel?

Very grateful.

Regards,
Hai
It will run in parallel if the GGI interface is contained in one submesh.
As for the floating point errors, I can't help you there, because I myself sometimes get them and sometimes don't.

-Louis
Hi all
I have a question concerning ggiFvPatch::makeDeltaCoeffs. In 1.4.1-dev, the following expression was used for dc:

dc = (1.0 - weights())/(nf() & fvPatch::delta());

In 1.5-dev, the expression was modified to:

dc = 1.0/max(nf() & fvPatch::delta(), 0.05*mag(fvPatch::delta()));

which leads to an overestimated dc (as far as I understand the code). What was the reason for this implementation?

Regards,
David
Hello David,
You are comparing pieces of code between a GGI implementation that works and a GGI implementation that did not. So you should not be surprised to see bug fixes here and there.

For the code snippet you are interested in, it might be easier to understand the code if you just put yourself in the situation where your GGI patches are totally conformal. In all cases, the GGI cell-to-surface interpolation scheme needs to behave exactly like the basic OpenFOAM surface interpolation code. Now take a look at the implementation of surfaceInterpolation::makeDeltaCoeffs(); you should find something quite familiar. And for surfaceInterpolation::makeDeltaCoeffs(), there is obviously no problem in comparing the source code implementation between 1.4.1-dev and 1.5-dev: it is identical.

Cheers!
Martin
1 Attachment(s)
Hi Martin
I agree that the GGI cell-to-surface interpolation scheme needs to behave exactly like the basic OpenFOAM surface interpolation code. But I think that we introduce an error with the current implementation.

Example: pressure distribution for a 1D flow with a conformal GGI in the centre, simulated with potentialFoam. p_orig is the current 1.5-dev implementation; p_modified is calculated with:

dc = 1.0/max(nf() & delta(), 0.05*mag(delta()));

As far as I understand the code, this dc should be similar to the dc in 1.4.1-dev.

Regards,
David