fatkatie Posted September 5, 2017

I understand structure for human needs, but does a Unix file system need it? I'm accumulating a lot of image files. After I mangle the file names, the raw image files are dumped into a single directory, while all of the images' metadata is stored in a database. Does a Unix file system prefer structure (lots of small directories), or does it deal just fine with many files in one? All 'find that file' processes begin with the database. Thank you.
requinix Posted September 5, 2017 (Solution)

Depends on the filesystem. ext4 (the most common?) doesn't have a per-directory limit. However, directories themselves take space to store the list of their contents, and unless the filesystem indexes its directory entries (?), looking for a file requires scanning that list.

It might be antiquated now, but if you know the filename from another source, then I'd partition. I tend to generate hex filenames and take the first two bytes' worth as two subdirectory levels: /01/23/0123456789abcdef.ext. Assuming an even distribution, by the time you've filled each second-level directory with 256 files, you've stored 16M files total (256 × 256 × 256). More than enough.
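A minimal sketch of that partitioning scheme. The post doesn't say how the hex names are generated, so a SHA-256 digest of the original filename stands in here; the function name and the `/images` root are illustrative, not from the thread:

```python
import hashlib
import os

def partitioned_path(root: str, name: str) -> str:
    """Map a filename to root/xx/yy/<hexname>.<ext>, where xx and yy
    are the first two bytes of a hex digest (assumption: SHA-256)."""
    digest = hashlib.sha256(name.encode()).hexdigest()
    ext = os.path.splitext(name)[1]  # keep the original extension
    # First byte -> first subdirectory, second byte -> second subdirectory.
    return os.path.join(root, digest[0:2], digest[2:4], digest + ext)

path = partitioned_path("/images", "vacation.jpg")
```

With an even hash distribution, each of the 256 × 256 second-level directories receives roughly an equal share of the files, so no single directory list grows large.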
fatkatie Posted September 8, 2017 (Author)

I was thinking about hashes and caching at the time and thought size made NO difference. But then, I don't know, and in the end size does always seem to matter. Your method looks like a good way to distribute the files; I'll use it. Looking at a directory with so many files has me on edge... like something is going to blow. Thanks for the tip.