phpknight Posted August 14, 2007

I am making a site on a dedicated box that could potentially get large, so I want to design the directory structure right up front. From what I understand, a directory can have about 32K subdirectories within it and an even larger number of files (up to the total number of inodes). However, I have also read that once a directory holds around 100 entries, it can become a performance drain. Does anybody have real-world experience with this and know when the performance hit might actually start? I am not sure whether to fill a directory with 25K entries and then move on to the next one, or to start with something like 50 directories and put each user in whichever one currently has the fewest users, filling them out evenly. For the latter method, the main question is how many to start with. Either way, I would rather deal with this up front than down the road. Although it might sound like a database is in order here, the files will be images, and I would rather not store those in a database because that would create a bigger problem.
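To make the question concrete, here is a rough sketch (in PHP) of the kind of hashed, two-level layout I have been considering; the base path, function names, and bucket sizes are placeholders, not a finished design:

```php
<?php
// A minimal sketch of a two-level hashed directory layout for user images.
// The base path and naming scheme are placeholders, not a finished design.

function imageDirForUser(int $userId, string $basePath = '/var/www/uploads'): string
{
    // md5() spreads user IDs evenly; the first four hex characters give two
    // levels of 256 directories each (256 x 256 = 65,536 buckets), keeping
    // every directory far below the 32,000-subdirectory limit discussed below.
    $hash = md5((string) $userId);
    return sprintf('%s/%s/%s', $basePath, substr($hash, 0, 2), substr($hash, 2, 2));
}

function saveUserImage(int $userId, string $tmpFile, string $originalName): string
{
    $dir = imageDirForUser($userId);
    if (!is_dir($dir)) {
        mkdir($dir, 0755, true); // create both levels on demand
    }
    $target = $dir . '/' . $userId . '_' . basename($originalName);
    // move_uploaded_file() assumes $tmpFile came from an HTTP upload;
    // rename() would be used if the file were written some other way.
    move_uploaded_file($tmpFile, $target);
    return $target;
}
```

The appeal of this approach is that the hash spreads users evenly across the buckets without anything having to track which directory currently has the fewest users.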
neylitalo Posted August 14, 2007

Unix filesystems are usually designed with a maximum capacity of X inodes per filesystem, not X directories and Y files. If you want to know the exact maximum number of inodes, files, or directories that your filesystem type supports, Wikipedia usually has the answers. And just a word of advice: filesystems are designed to handle very, very large amounts of data. You are very unlikely to ever hit a point where you can see a performance loss.
phpknight Posted August 14, 2007

Great! About the subdirectories, though, I read this: "We use the ext3 hashed directory code, which has a theoretical limit of ~134 million files per directory, at which point the directory grows to more than 2 GB. The maximum number of subdirectories is 32,000 in versions prior to Lustre 1.2.6 and is unlimited in later versions (small ext3 format change)." http://www.clusterfs.com/faq-sizing.html#4 I think I read that on Wikipedia as well but cannot locate it now. So it appears there are limits in some file systems. I do not know Unix very well but am learning. How do I find out which file system I have (somebody told me CentOS or cPanel does not support ext3)? I did figure out the number of inodes already.
trq Posted August 14, 2007

"How do I find out what file system I have"

cat /etc/fstab or mount
phpknight Posted August 14, 2007

Thanks!
phpknight Posted August 14, 2007

Okay, it looks like I have ext3, but from reading this it appears there is a 32K limit on subdirectories. Is it possible, though, that this information is old (2003) and there is no longer a limit? http://answers.google.com/answers/threadview?id=122241 Also, the article mentions a performance issue at about 10-15K files in a directory. However, I probably would not be searching the directory so much as reading and writing the specific files I wanted. Let me know if you have any input here. Thanks again for the help.
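For what it's worth, here is a rough sketch of the other approach I mentioned: filling numbered directories evenly and rolling over before any one of them gets near that 10-15K range. The cap and base path are assumptions for illustration only.

```php
<?php
// A rough sketch of the "fill directories evenly" idea: cap each numbered
// directory well below the 10-15K range mentioned above and roll over to a
// new one once the current directory is full. The cap is an assumption.

const MAX_FILES_PER_DIR = 5000;

function currentBucket(string $basePath): string
{
    for ($n = 0; ; $n++) {
        $dir = sprintf('%s/%05d', $basePath, $n);
        if (!is_dir($dir)) {
            mkdir($dir, 0755, true); // first time this bucket is needed
            return $dir;
        }
        // scandir() includes "." and "..", hence the -2.
        if (count(scandir($dir)) - 2 < MAX_FILES_PER_DIR) {
            return $dir;
        }
    }
}
```

In practice you would probably track per-directory counts in a database rather than calling scandir() on every upload, since scanning a large directory is exactly the cost this layout is trying to avoid.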
solarisuser Posted August 17, 2007

Maybe you can estimate what the average image file size would be (let's say the limit you allow is 2MB). Say the total comes out to 50GB for the number of files you're comfortable with. You can then partition your hard drive into several 50GB partitions, sidestepping the problem. Also, if you're worried about performance, look at your I/O, as it's usually the biggest bottleneck. That means using RAID 1+0 for speed and redundancy, using ENGINE=MEMORY for temporary data in MySQL, and so on.
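As a quick back-of-the-envelope check of that suggestion (the average size and file count below are illustrative assumptions, not recommendations):

```php
<?php
// Back-of-the-envelope capacity check for the partitioning suggestion.
$avgImageBytes     = 2 * 1024 * 1024; // assume a 2MB average image
$filesPerPartition = 25000;           // files you're comfortable with per partition

$partitionBytes = $avgImageBytes * $filesPerPartition;
printf("Plan for roughly %.1f GB per partition\n", $partitionBytes / (1024 ** 3));
// Prints: Plan for roughly 48.8 GB per partition
```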