Error posix.c
Hi guys,
I'm doing an LES simulation of flow around a circular cylinder. I have used pimpleFoam and pisoFoam, but both produce the same error:

Code:
[5] --> FOAM FATAL ERROR:
[5] Couldn't create directory "/home/delbuono/simulazioni/LES_CilindroRe3900_piso3/processor5/0.281574"
[5]
[5]     From function Foam::mkDir(const fileName&, mode_t)
[5]     in file POSIX.C at line 551.
[5]
FOAM parallel run exiting

Have you got any suggestions? Is it an error related to the schemes or the solution settings?

Best Regards
Hi,
The error has nothing to do with schemes or solution settings: the solver cannot create a folder in which to write its results. Do you have enough disk space?
Yes, I have 200 GB available, this is strange!!
Is it an error related to the mesh?
Well, then I have more questions:
0. Do you really have 200 GB of disk space? Can you show the df -h output?
1. Do you run locally or is it a cluster environment?
2. On what file system is /home stored?
3. Are there any quotas on disk usage?
4. What is the total number of domains in the parallel decomposition?
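For reference, the checks behind questions 0 and 3 can be run from a shell on the cluster. This is just a sketch; the quota command is only available if quota tools are installed:

```shell
# Free space and free inodes on the filesystem holding /home
df -h /home
df -i /home

# Per-user quotas, if any are configured (may print nothing)
quota -s 2>/dev/null || echo "no quota tools installed or no quotas set"
```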
I run in a cluster environment and I use 40 subdomains;
this is df -h:

Code:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda6       880G  466G  369G  56% /
/dev/sda1       119M   21M   92M  19% /boot
/dev/sda3       7.6G  146M  7.0G   2% /tmp
/dev/sda5        16G  1.7G   13G  12% /var
tmpfs           5.9G     0  5.9G   0% /dev/shm

I deleted many files, but when my simulation blew up, Avail was 230-250 GB and Use% was 72%. Sorry, but I don't understand questions 2 and 3.
Hi,
2 is about the file system type (ext4, xfs, nfs?). For every write (assuming 40 subdomains and 5 domain variables plus time metadata), about 40*6 files are created, so it is possible to run out of inodes before running out of free space. 3 is about constraints on disk usage that are usually imposed in shared environments. Anyway, the problem is neither in the discretization schemes, nor in the linear system solvers, nor in the mesh quality. The reason the simulation halts is the underlying filesystem. How often do you save data?
Hi Alexey,
The file system type is ext3 and the case is stored in /home. I don't think there are any constraints on disk usage. So what is the problem? Do you think it would be better if I used more subdomains?
Hi,
There are still lots of unknowns in your question. The main problem remains OpenFOAM's strong tendency to create lots of files when it writes simulation results to disk. While df can report 30% free space, you can still run out of inodes (see for example http://www.redhat.com/archives/ext3-.../msg00304.html). Did you try to reduce:
- the number of subdomains?
- writeInterval?
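For context, the write frequency lives in system/controlDict; the values below are illustrative, not taken from the case in this thread, and the purgeWrite entry (not mentioned above) additionally caps how many time directories are kept on disk:

```cpp
// system/controlDict (illustrative values, not from this case)
writeControl    timeStep;   // or runTime / adjustableRunTime
writeInterval   200;        // write every 200 steps instead of every step
purgeWrite      10;         // keep only the 10 most recent time directories
```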
I don't understand... if the problem is running out of inode space, why must I reduce the writeInterval or the number of subdomains? I would expect to have to increase these factors.
Do you have other links about this problem?
Hi,
OpenFOAM creates too many files, that is the problem. If you reduce the number of subdomains or the writeInterval, you reduce the number of files written by OpenFOAM. As I have written in earlier posts, for every write OpenFOAM creates 40*6 files (number of domains times number of domain variables). So a simulation that runs from 0 to 1 second with a writeInterval of 0.001 will create 240,000 files. You can check the inode hypothesis by issuing "df -hi" after your simulation halts with the mkDir problem. You will get something like:

Code:
Filesystem      Inodes IUsed IFree IUse% Mounted on

Other links? Use your preferred search engine with an "ext3 inode limit" query.
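The file-count estimate can also be checked directly against a running case. A quick way to count what the solver has written so far, assuming the standard processorN directory layout of a decomposed case, is:

```shell
# Run from the case directory of a decomposed OpenFOAM case.
# Count all files written under the processor directories:
find processor* -type f | wc -l

# Count time directories in one processor (each write adds one per processor):
find processor0 -maxdepth 1 -mindepth 1 -type d | wc -l
```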
But if I reduce the writeInterval, to 0.0001 for example, it will create more files (2,400,000). So I think I must increase the writeInterval.

This is df -hi:

Code:
Filesystem      Inodes IUsed IFree IUse% Mounted on
/dev/sda6         112M   15M   97M   14% /

Is there a way to determine the maximum number of files that can be created?
Yes, you are right. You should increase the value of writeInterval (to reduce the number of writes performed by the solver).
Did you take these "df -hi" values at the moment of the mkDir error? The maximum number of files on ext3 depends on the options used during file system creation. Overall, this is not really something to ask here; it is a reason to talk to your cluster administrator. Here is an example of how to find the number: http://forums.fedoraforum.org/showthread.php?t=245633. Also, exceeding the number of inodes is only one of the possibilities; there can be other problems (with your cluster). Just to check, maybe someone with 40+ nodes is willing to reproduce the error: can you please post an archive with the case files?
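As a sketch of what the linked thread describes: on ext2/3/4 the inode count is fixed at mkfs time and can be read with tune2fs (the device name below is taken from the df output earlier in this thread; substitute your own, and note tune2fs needs read access to the device):

```shell
# Total inode count of the filesystem (usually needs root for a real device)
sudo tune2fs -l /dev/sda6 | grep -i "inode count"

# Without root, the same information per mount point:
df -i /home
```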
Hi alexeym,
I tried to reduce the number of subdomains (from 40 to 20) and to increase the writeInterval (from 8.8e-6 to 1.75e-5). There has been an improvement: now the simulation halts after 0.5599 s instead of 0.2815 s. So thank you for your help! Now I will look for a way to increase the inode limit!

Best Regards
Alessandro
Hi alexeym,
Do you know how to use the "distributed" option in decomposeParDict? Do you think it would help with the inode problem?

Best Regards
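For reference, distributed case data is enabled in system/decomposeParDict roughly as below (the paths are placeholders, not from this cluster). It spreads the processor directories over several disks, so it only helps with a per-filesystem inode limit if the listed roots are on different filesystems; check the decomposePar documentation for the exact format of the roots list:

```cpp
// system/decomposeParDict (illustrative fragment, placeholder paths)
distributed     yes;
roots
(
    "/local-disk/node1/case"    // root for one non-master processor
    "/local-disk/node2/case"    // root for another non-master processor
    // ... one entry per non-master processor directory
);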
Quick answers:
Why don't you create an image file that is formatted as ext4? You can find instructions here: Maintaining a local git repository on a portable disk image file or partition - see the section "Preparing and using the single ext3/4 file filesystem". It would work as if you were writing into another partition, which is actually a single file. Keep in mind that the image file will not auto-expand!
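A minimal sketch of this idea, with placeholder sizes and paths (mounting the loop device needs root, and the -T news option selects a denser inode table than the ext4 default):

```shell
# Create a fixed-size image file (it will NOT auto-expand)
fallocate -l 50G /home/delbuono/results.img

# Format it as ext4 with more inodes per GB than the default profile
mkfs.ext4 -F -T news /home/delbuono/results.img

# Mount it over a loop device and run the case inside the mount point
sudo mkdir -p /mnt/results
sudo mount -o loop /home/delbuono/results.img /mnt/results
```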
I have the same problem as you; could you please tell me what was wrong in your case? Thanks!

Best Regards,
Peng