For the new storage server of Gaia, Galactus, we have set up an XFS volume of 175TB.
So, I’ve considered using the inode64 mount option. I will describe my experience in this post.
The inode64 option allows XFS to encode inode numbers on 64 bits instead of 32 bits.
According to the XFS website:
By default, with 32bit inodes, XFS places inodes only in the first 1TB of a disk. If you have a disk with 100TB, all inodes will be stuck in the first TB. This can lead to strange things like "disk full" when you still have plenty space free, but there's no more place in the first TB to create a new inode. Also, performance sucks. To come around this, use the inode64 mount options for filesystems >1TB. Inodes will then be placed in the location where their data is, minimizing disk seeks.
- advantages: better performance, no risk of running out of inode space
- disadvantages: software compatibility issues, poor NFS support
Setup (XFS and NFS)
The system is running CentOS 6.5 with the kernel
I’ve already configured LVM, and I just have to create the XFS partition.
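For reference, the setup boils down to something like this. The device and mount-point names below are illustrative, not the actual ones from our server:

```shell
# Hypothetical device name: create the XFS filesystem on the LVM volume,
# then mount it with the inode64 option.
mkfs.xfs /dev/mapper/vg_galactus-lv_export
mount -t xfs -o inode64 /dev/mapper/vg_galactus-lv_export /export

# Corresponding /etc/fstab entry, so the option survives a reboot:
# /dev/mapper/vg_galactus-lv_export  /export  xfs  inode64  0 0
```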
Then, I export two directories from /export/ with NFS v3:
/export/apps 10.228.0.0/16(async,rw,no_root_squash,no_subtree_check)
/export/users 10.228.0.0/16(async,rw,no_root_squash,no_subtree_check)
First impression: it just works.
First test with NFS v3…
On the nodes, I realize I can mount galactus:/export/users, but not galactus:/export/apps.
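On a node, the attempted mounts looked something like this (the client-side mount points are examples, not the real paths):

```shell
# NFSv3 mounts from a compute node: one export works, the other is refused.
mount -t nfs -o vers=3 galactus:/export/users /mnt/users   # works
mount -t nfs -o vers=3 galactus:/export/apps  /mnt/apps    # fails
```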
In the server logs, I get these messages:
On the nodes, I get these messages:
Well, the root directory of your NFS export must have an inode number that fits in 32 bits.
So, here is a workaround:
If you are lucky, you will find some directory with small inodes.
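To hunt for such directories, a sketch like this works (the path and search depth are examples): it prints the directories whose inode number still fits in 32 bits, and which could therefore serve as NFSv3 export roots.

```shell
# List directories under an export root whose inode number is below 2^32
# (4294967296); only those can be the root of an NFSv3 export here.
small_inode_dirs() {
    find "$1" -maxdepth 2 -type d -printf '%i %p\n' 2>/dev/null \
        | awk '$1 < 4294967296 { print $2 " (inode " $1 ")" }'
}

small_inode_dirs /export    # example invocation
```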
Now, use them…
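Concretely, the trick is to swap a small-inode directory in under the original export path. A rename on the same filesystem preserves a directory’s inode number, so something like this sketch (with hypothetical paths) does the job:

```shell
# Replace an export root with a directory that has a 32-bit inode.
# mv within the same filesystem is a rename, so the inode number is kept.
swap_export_root() {
    old="$1"    # current export path (64-bit inode)
    new="$2"    # freshly found directory with a 32-bit inode
    mv "$old" "$old.bak"
    mv "$new" "$old"                     # $old now has $new's small inode
    mv "$old.bak"/* "$old"/ 2>/dev/null  # move the (non-hidden) data back
    rmdir "$old.bak"
}

# swap_export_root /export/apps /export/lucky   # hypothetical paths
# exportfs -ra                                  # then re-export
```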
Now, we can mount
/export/apps on the nodes…
It’s a horrible solution, but it works…
But, what about application compatibility?
Greg Banks (engineer at SGI) answered this question on his blog.
He also gives a script which analyzes binaries in a directory and summarizes which of them depend on the old 32 bit stat system call family.
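In the same spirit as his script, but much rougher, a sketch like this flags dynamically linked binaries whose undefined symbols include the legacy 32-bit stat family. The directory argument and the symbol list are my own assumptions, not his exact method:

```shell
# Flag binaries that reference the 32-bit stat interface (stat, __xstat, ...)
# instead of the 64-bit variants (stat64, __xstat64, ...); these are the ones
# likely to fail with EOVERFLOW on 64-bit inode numbers.
scan_legacy_stat() {
    for bin in "$1"/*; do
        [ -f "$bin" ] || continue
        if nm -D "$bin" 2>/dev/null | awk '{print $NF}' \
             | grep -qxE 'stat|fstat|lstat|__xstat|__fxstat|__lxstat'; then
            echo "may break with inode64: $bin"
        fi
    done
}

scan_legacy_stat /opt/modules/bin   # hypothetical path
```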
I’ve tested it on our modules directory, and it’s definitely not safe to enable inode64: lots of binaries may (or may not) break.
In an HPC center, with legacy code and closed-source applications that can’t easily be recompiled, it will not be possible to debug and fix all the potential issues triggered by this change.
So, how to downgrade?
You can’t :)
If you remove the
inode64 parameter, new files will use 32-bit inodes,
but all the previously created files will keep their 64-bit inodes (at least,
that’s the behavior with CentOS 6.5 and the kernel
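You can check this yourself: after remounting without inode64, counting the files whose inode number exceeds 2^32 - 1 still gives a non-zero result. A sketch (the path is an example):

```shell
# Count files on a filesystem whose inode number needs more than 32 bits
# (4294967295 = 2^32 - 1). A non-zero count means 64-bit inodes persist.
count_big_inodes() {
    find "$1" -xdev -printf '%i\n' 2>/dev/null \
        | awk '$1 > 4294967295 { c++ } END { print c + 0 }'
}

count_big_inodes /export    # example invocation
```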
The only solution is to reformat with mkfs and start from scratch with a clean filesystem…