Home > Forums > OpenFOAM Bugs

Error posix.c

May 9, 2015, 15:38   #1
New Member
Alessandro Del Buono
Join Date: May 2015
Posts: 12
Hi guys,
I'm running an LES simulation of flow around a circular cylinder. I have used pimpleFoam and pisoFoam, but both produce the same error:

] --> FOAM FATAL ERROR:
[5] Couldn't create directory "/home/delbuono/simulazioni/LES_CilindroRe3900_piso3/processor5/0.281574"
[5]
[5] From function Foam::mkDir(const fileName&, mode_t)
[5] in file POSIX.C at line 551.
[5]
FOAM parallel run exiting
[5]

Have you got any suggestions? Is it an error related to the schemes or the solution settings?

Best Regards

May 10, 2015, 02:45   #2
Senior Member
Alexey Matveichev
Join Date: Aug 2011
Location: Nancy, France
Posts: 1,438
Hi,

The error has nothing to do with schemes or solution settings. The solver cannot create a folder to write the results into. Do you have enough disk space?

May 10, 2015, 04:05   #3
New Member
Alessandro Del Buono
Join Date: May 2015
Posts: 12
Yes, I have 200 GB available; this is strange!
Is it an error related to the mesh?

May 10, 2015, 04:14   #4
Senior Member
Alexey Matveichev
Join Date: Aug 2011
Location: Nancy, France
Posts: 1,438
Well, then I have more questions:

0. Do you really have 200 GB of disk space? Can you show the df -h output?
1. Do you run locally, or is it a cluster environment?
2. On what file system is /home stored?
3. Are there any quotas on disk usage?
4. What is the total number of domains in the parallel decomposition?

May 10, 2015, 04:58   #5
New Member
Alessandro Del Buono
Join Date: May 2015
Posts: 12
I run in a cluster environment and use 40 subdomains;
this is the df -h output:

Filesystem Size Used Avail Use% Mounted on
/dev/sda6 880G 466G 369G 56% /
/dev/sda1 119M 21M 92M 19% /boot
/dev/sda3 7.6G 146M 7.0G 2% /tmp
/dev/sda5 16G 1.7G 13G 12% /var
tmpfs 5.9G 0 5.9G 0% /dev/shm

I deleted many files, but when my simulation blew up, Avail was 230-250 GB and Use% was 72%.

Sorry, but I don't understand questions 2 and 3.

May 10, 2015, 15:50   #6
Senior Member
Alexey Matveichev
Join Date: Aug 2011
Location: Nancy, France
Posts: 1,438
Hi,

Question 2 is about the file system type (ext4, xfs, nfs?). Since every write (assuming 40 subdomains and 6 fields per subdomain) creates 40*6 files, it is possible to run out of inodes before running out of free space.

Question 3 is about constraints on disk usage that are usually imposed in shared cluster environments.

Anyway, the problem is neither the discretization schemes, nor the linear solvers, nor the mesh quality. The reason the simulation blows up lies in the underlying filesystem. How often do you save data?
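The filesystem diagnosis above can be checked directly; a minimal sketch (the mount point /home is the one from this thread, adjust it to your setup):

```shell
# Compare free blocks against free inodes on the partition holding the case.
# A high IUse% combined with a low Use% points at inode exhaustion rather
# than a full disk.
df -h /home   # block usage (what was already checked)
df -i /home   # inode usage: Inodes, IUsed, IFree, IUse%
```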

May 11, 2015, 12:55   #7
New Member
Alessandro Del Buono
Join Date: May 2015
Posts: 12
Hi Alexey,
the file system type is ext3 and it is stored in /home. I think there are no constraints on disk usage. So what is the problem? Do you think it would be better if I used more subdomains?

May 12, 2015, 02:18   #8
Senior Member
Alexey Matveichev
Join Date: Aug 2011
Location: Nancy, France
Posts: 1,438
Hi,

There are still lots of unknowns in your question. Still, the main problem is OpenFOAM's strong tendency to create lots of files when it writes simulation results to disk. While df may report 30% free space, you can still run out of inodes (see for example http://www.redhat.com/archives/ext3-.../msg00304.html).

Did you try to reduce:
- the number of subdomains?
- the writeInterval?

May 12, 2015, 11:58   #9
New Member
Alessandro Del Buono
Join Date: May 2015
Posts: 12
I don't understand. If the problem is running out of inodes, why must I reduce the writeInterval or the number of subdomains? I would expect to have to increase these factors.
Do you have other links about this problem?

May 12, 2015, 14:29   #10
Senior Member
Alexey Matveichev
Join Date: Aug 2011
Location: Nancy, France
Posts: 1,438
Hi,

OpenFOAM creates too many files; that is the problem. If you reduce the number of sub-domains or the writeInterval, you reduce the number of files written by OpenFOAM.

As I wrote several posts earlier, for every write OpenFOAM creates 40*6 files (the number of subdomains times the number of fields). So a simulation that runs from 0 to 1 second with a writeInterval of 0.001 will create 240,000 files. You can check the inode hypothesis by issuing "df -hi" after your simulation halts with the mkDir problem. You will get something like:

Code:
Filesystem     Inodes IUsed IFree IUse% Mounted on
...
/dev/sda6        6.2M    11  6.2M    1% /
...
What is your value of IFree? I am quite surprised that your /home is not on a separate partition.

Other links? Use your preferred search engine with the query "ext3 inode limit".
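The file-count arithmetic above can be sketched as a quick calculation (the numbers are the ones from this thread; the field count of 6 is the assumption used earlier):

```shell
# Each write step produces one file per field per subdomain, and each file
# consumes at least one inode on ext3.
nSubdomains=40
nFields=6             # assumed number of fields written per subdomain
endTime=1             # seconds of simulated time
writeInterval=0.001   # seconds between writes
nWrites=$(awk -v t="$endTime" -v w="$writeInterval" 'BEGIN { printf "%.0f", t / w }')
nFiles=$((nWrites * nSubdomains * nFields))
echo "$nFiles"        # 240000 files for this run
```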

May 12, 2015, 15:54   #11
New Member
Alessandro Del Buono
Join Date: May 2015
Posts: 12
But if I reduce the writeInterval, to 0.0001 for example, it will create more files (2,400,000). So I think I must increase the writeInterval.

This is the df -hi output:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda6 112M 15M 97M 14% /

Is there a way to determine the maximum number of files that can be created?

May 12, 2015, 16:13   #12
Senior Member
Alexey Matveichev
Join Date: Aug 2011
Location: Nancy, France
Posts: 1,438
Yes, you are right. You should increase the value of writeInterval (to reduce the number of writes performed by the solver).

Did you record the "df -hi" values at the moment of the mkDir error? As for the maximum number of files on ext3, it depends on the options used during filesystem creation. Overall, this is not really a question for this forum; it is a reason to talk to your cluster administrator. Here is an example of how to find the number: http://forums.fedoraforum.org/showthread.php?t=245633.

Also, exceeding the number of inodes is only one of the possibilities. There can be other problems (with your cluster). Just in case someone with 40+ cores is willing to reproduce the error, can you please post an archive with the case files?
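As a sketch of how the fixed inode count could be read on this setup (the device name /dev/sda6 comes from the df output earlier in the thread; tune2fs needs root or read access to the device):

```shell
# On ext3 the inode count is fixed when the filesystem is created;
# tune2fs reports the value chosen at mkfs time.
sudo tune2fs -l /dev/sda6 | grep -i 'inode count'
# Without root, current inode usage is visible through df:
df -i /
```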

May 17, 2015, 13:46   #13
New Member
Alessandro Del Buono
Join Date: May 2015
Posts: 12
Hi alexeym,
I tried reducing the number of subdomains (from 40 to 20) and increasing the writeInterval (from 8.8e-6 to 1.75e-5). There has been an improvement: now the simulation blows up after 0.5599 s instead of 0.2815 s. So thank you for your help!
Now I will look for a way to increase the available inodes!
Best Regards

Alessandro

June 16, 2015, 15:32   #14
New Member
Alessandro Del Buono
Join Date: May 2015
Posts: 12
Hi alexeym,
do you know how to use the "distributed" option in decomposeParDict? Do you think it will help with the inode problem?

Best Regards

June 17, 2015, 14:55   #15
Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 9,748
Quick answers:
Quote:
Originally Posted by Alessandro1589 View Post
do you know how to use the "distributed" option in decomposeParDict?
Available here: Running OpenFOAM in parallel with different locations for each process

Quote:
Originally Posted by Alessandro1589 View Post
do you think it will help with the inode problem?
It will only help if the other folders are located on other partitions.
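For reference, a distributed decomposition is configured in decomposeParDict with entries roughly like the following (the paths are hypothetical, and the exact number of roots entries depends on your OpenFOAM version; see the linked thread for the details):

```
// system/decomposeParDict (fragment, hypothetical paths)
numberOfSubdomains  20;

distributed         yes;

roots
(
    "/scratch/node1/delbuono"   // root directory for remote processor dirs
    "/scratch/node2/delbuono"
    // ... one entry per additional root
);
```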

Why don't you create an image file formatted as ext4? You can find instructions for this here: Maintaining a local git repository on a portable disk image file or partition - see the section "Preparing and using the single ext3/4 file filesystem".
It would work as if you were writing into another partition, which is actually a single file. Keep in mind that the image file will not auto-expand!
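A minimal sketch of that image-file approach (the path and size are hypothetical; mounting requires root, and mkfs.ext4 defaults to a much denser inode table than a conservatively created ext3 volume):

```shell
# Create a 50 GB sparse image file and format it as ext4.
IMG="$HOME/foam_results.img"
truncate -s 50G "$IMG"
mkfs.ext4 -F "$IMG"   # -F: proceed even though this is a regular file
# Mount it over the directory receiving the results (needs root):
# sudo mkdir -p /mnt/foam_results
# sudo mount -o loop "$IMG" /mnt/foam_results
```

Writes under the mount point then consume the inodes of the image's own filesystem rather than those of /dev/sda6; the sparse file only occupies blocks as data is written, but it will never grow past its declared size.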

May 12, 2016, 14:00   #16
Member
Peng Liang
Join Date: Mar 2014
Posts: 46
Quote:
Originally Posted by Alessandro1589 View Post
Hi guys,
I'm running an LES simulation of flow around a circular cylinder. I have used pimpleFoam and pisoFoam, but both produce the same error:

] --> FOAM FATAL ERROR:
[5] Couldn't create directory "/home/delbuono/simulazioni/LES_CilindroRe3900_piso3/processor5/0.281574"
[5]
[5] From function Foam::mkDir(const fileName&, mode_t)
[5] in file POSIX.C at line 551.
[5]
FOAM parallel run exiting
[5]

Have you got any suggestions? Is it an error related to the schemes or the solution settings?

Best Regards
Hello Alessandro,

I have the same problem as you; could you please tell me what the cause was? Thanks!

Best Regards,

Peng
