XFS is a high-performance journaling file system created by Silicon Graphics, Inc. XFS is particularly proficient at parallel IO due to its allocation group based design. This enables extreme scalability of IO threads, filesystem bandwidth, file and filesystem size when spanning multiple storage devices.
For the XFS userspace utilities, install the xfsprogs package. It contains the tools necessary to manage an XFS file system.
To create a new filesystem on a device, use:
# mkfs.xfs device
meta-data=/dev/device            isize=256    agcount=4, agsize=3277258 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=13109032, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
If the target device already contains a file system, mkfs.xfs will refuse to run; use the -f option to overwrite that file system. This operation will destroy all data contained in the previous filesystem.
After an XFS file system is created, its size cannot be reduced. However, it can still be enlarged using the xfs_growfs command. See #Resize.
XFS v5 metadata checksumming, based upon CRC32, provides for example additional protection against metadata corruption during unexpected power losses. Checksumming is enabled by default when using xfsprogs 3.2.3 or later. If you need an XFS filesystem that older kernels can mount read-write, it can easily be disabled using the -m crc=0 switch when calling mkfs.xfs:
# mkfs.xfs -m crc=0 /dev/target_partition
The XFS v5 on-disk format is considered stable for production workloads starting Linux Kernel 3.15.
Free inode btree
Starting Linux 3.16, XFS has added a btree that tracks free inodes. It is equivalent to the existing inode allocation btree with the exception that the free inode btree tracks inode chunks with at least one free inode. The purpose is to improve lookups for free inode clusters for inode allocation. It improves performance on aged filesystems i.e. months or years down the track when you have added and removed millions of files to/from the filesystem. Using this feature does not impact overall filesystem reliability level or recovery capabilities.
This feature relies on the new v5 on-disk format that has been considered stable for production workloads starting Linux Kernel 3.15. It does not change existing on-disk structures, but adds a new one that must remain consistent with the inode allocation btree; for this reason older kernels will only be able to mount read-only filesystems with the free inode btree feature.
The feature is enabled by default when using xfsprogs 3.2.3 or later. If you need a filesystem writable by older kernels, it can be disabled with the finobt=0 switch when formatting an XFS partition. You will need to disable CRC as well, since finobt depends on it:
# mkfs.xfs -m crc=0,finobt=0 /dev/target_partition
or, for short (crc=0 implies finobt=0):
# mkfs.xfs -m crc=0 /dev/target_partition
Reverse mapping btree
The reverse mapping btree is at its core
a secondary index of storage space usage that effectively provides a redundant copy of primary space usage metadata. This adds some overhead to filesystem operations, but its inclusion in a filesystem makes cross-referencing very fast. It is an essential feature for repairing filesystems online because we can rebuild damaged primary metadata from the secondary copy.
The feature graduated from EXPERIMENTAL status in Linux 4.16 and is production ready. However, online filesystem checking and repair is (so far) the only use case for this feature, so it will remain opt-in at least until online checking graduates to production readiness.
The reverse mapping btree maps filesystem blocks to the owner of the filesystem block. Most of the mappings will be to an inode number and an offset, though there will also be mappings to filesystem metadata. This secondary metadata can be used to validate the primary metadata or to pinpoint exactly which data has been lost when a disk error occurs.
To try out this feature or future-proof new filesystems, pass the
-m rmapbt=1 parameter during filesystem creation:
# mkfs.xfs -m rmapbt=1 device
From XFS FAQ:
The default values already used are optimised for best performance in the first place. mkfs.xfs will detect the difference between single disk and MD/DM RAID setups and change the default values it uses to configure the filesystem appropriately.
In most cases, the only thing you need to consider for mkfs.xfs is specifying the stripe unit and width for hardware RAID devices (see #Stripe size and width).
Other tunables, such as allocsize, may also be adjusted; the XFS FAQ provides additional details about those flags.
For mount options, the only things that will change metadata performance considerably are the logbsize and delaylog mount options. Increasing logbsize reduces the number of journal IOs for a given workload, and delaylog will reduce them even further. The trade-off for this increase in metadata performance is that more operations may be "missing" after recovery if the system crashes while actively making modifications.
As of kernel 3.2.12, the default I/O scheduler, CFQ, will defeat much of the parallelization in XFS.
Therefore for optimal performance, in most cases you can just follow #Creating a new filesystem.
Stripe size and width
If this filesystem will be on a striped RAID you can gain significant speed improvements by specifying the stripe size to the mkfs.xfs command.
XFS can sometimes detect the geometry under software RAID, but if you reshape the array or are using hardware RAID, you will need to calculate the correct sunit and swidth values yourself for optimal performance.
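As an illustration, for a RAID array with a 64 KiB chunk size across 4 data disks (both values here are examples), the geometry can be passed explicitly with the su and sw options:
# mkfs.xfs -d su=64k,sw=4 /dev/device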
On some filesystems you can increase performance by adding the
noatime mount option to the
/etc/fstab file. For XFS filesystems
the default atime behaviour is 
relatime, which has almost no overhead compared to noatime but still maintains sane atime values. All Linux filesystems use this as the default now (since around 2.6.30), but XFS has used relatime-like behaviour since 2006, so no-one should really need to ever use noatime on XFS for performance reasons.
See Fstab#atime options for more on this topic.
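As an illustration, an /etc/fstab entry using noatime might look like the following (device and mount point are examples):
/dev/sda3 /home xfs defaults,noatime 0 0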
Although the extent-based nature of XFS and the delayed allocation strategy it uses significantly improves the file system's resistance to fragmentation problems, XFS provides a filesystem defragmentation utility (xfs_fsr, short for XFS filesystem reorganizer) that can defragment the files on a mounted and active XFS filesystem. It can be useful to view XFS fragmentation periodically.
xfs_fsr improves the organization of mounted filesystems. The reorganization algorithm operates on one file at a time, compacting or otherwise improving the layout of the file extents (contiguous blocks of file data).
Inspect fragmentation levels
To see how much fragmentation your file system currently has:
# xfs_db -c frag -r /dev/sda3
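To inspect the extent layout of an individual file rather than the whole filesystem, xfs_bmap from the same package can be used (the path is an example):
# xfs_bmap -v /path/to/file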
To begin defragmentation, use the xfs_fsr command:
# xfs_fsr /dev/sda3
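By default xfs_fsr reorganizes for up to two hours; the duration and verbosity can be adjusted, for example for a verbose one-hour run:
# xfs_fsr -v -t 3600 /dev/sda3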
External XFS Journal
To reserve an external journal with a specified size when you create an XFS file system, specify the
-l logdev=device,size=size option to the
mkfs.xfs command. If you omit the
size parameter, a journal size based on the size of the file system is used. To mount the XFS file system so that it uses the external journal, specify the
-o logdev=device option to the mount command.
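Putting this together, a filesystem with its journal on a separate device might be created and mounted as follows (the device names and the 10000-block journal size are examples):
# mkfs.xfs -l logdev=/dev/sdb1,size=10000b /dev/sda1
# mount -o logdev=/dev/sdb1 /dev/sda1 /mnt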
XFS has a dedicated sysctl variable for setting the writeback interval. Arch has a default value of 3000; a larger value can be set, but keep in mind that too large a value may result in data loss in some cases:
fs.xfs.xfssyncd_centisecs = 10000
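To apply the value at runtime without rebooting:
# sysctl fs.xfs.xfssyncd_centisecs=10000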
An XFS filesystem can be grown online, after the underlying partition or volume has been enlarged. Just run
xfs_growfs with the mount point as first parameter to grow the XFS filesystem to the maximal size possible.
# xfs_growfs /path/to/mnt/point
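For example, after enlarging an underlying LVM logical volume (volume and mount point names are illustrative), grow the filesystem to fill it:
# lvextend -L +10G /dev/vg0/data
# xfs_growfs /mnt/data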
Online Metadata Checking (scrub)
xfs_scrub asks the kernel to scrub all metadata objects in the XFS filesystem. Metadata records are scanned for obviously bad values and then cross-referenced against other metadata. The goal is to establish a reasonable confidence about the consistency of the overall filesystem by examining the consistency of individual metadata records against the other metadata in the filesystem. Damaged metadata can be rebuilt from other metadata if there exists redundant data structures which are intact.
To check all mounted XFS file systems periodically, enable xfs_scrub_all.timer: the timer runs every Sunday at 3:10am and will be triggered immediately if it missed the last start time, e.g. due to the system being powered off.
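To start periodic scrubbing, enable the timer:
# systemctl enable --now xfs_scrub_all.timer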
Unlike other Linux file systems, xfs_repair does not run at boot time, even when an XFS file system was not cleanly unmounted. In the event of an unclean unmount, xfs_repair simply replays the log at mount time, ensuring a consistent file system.
If you cannot mount an XFS file system, you can use the xfs_repair -n command to check its consistency. Usually, you would only run this command on the device file of an unmounted file system that you believe has a problem. The xfs_repair -n command displays output to indicate changes that would be made to the file system in the case where it would need to complete a repair operation, but will not modify the file system directly.
If you can mount the file system and you do not have a suitable backup, you can use xfsdump to attempt to back up the existing file system data. However, the command might fail if the file system's metadata has become too corrupted.
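For example, a level 0 (full) dump to a file (both paths are examples; xfsdump may be shipped in a separate package from xfsprogs):
# xfsdump -l 0 -f /backup/home.dump /home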
You can use the xfs_repair command to attempt to repair an XFS file system specified by its device file. The command replays the journal log to fix any inconsistencies that might have resulted from the file system not being cleanly unmounted. Unless the file system has an inconsistency, it is usually not necessary to use the command, as the journal is replayed every time that you mount an XFS file system.
First unmount the filesystem, then run the tool:
# xfs_repair device
If the journal log has become corrupted, you can reset the log by specifying the -L option to xfs_repair.
The xfs_repair utility cannot repair an XFS file system with a dirty log. To clear the log, mount and unmount the XFS file system. If the log is corrupt and cannot be replayed, use the -L option ("force log zeroing") to clear the log, that is, xfs_repair -L /dev/device. Be aware that this may result in further corruption or data loss.
Resetting the log can leave the file system in an inconsistent state, resulting in data loss and data corruption. Unless you are experienced in debugging and repairing XFS file systems using xfs_db, it is recommended that you instead recreate the file system and restore its contents from a backup.
If you cannot mount the file system or you do not have a suitable backup, running xfs_repair is the only viable option unless you are experienced in using xfs_db.
xfs_db provides an internal command set that allows you to debug and repair an XFS file system manually. The commands allow you to perform scans on the file system, and to navigate and display its data structures. If you specify the -x option to enable expert mode, you can modify the data structures.
# xfs_db [-x] device
For more information, see the relevant man pages and the help command within xfs_db.
Root file system quota
XFS quota mount options (prjquota, etc.) fail during re-mount of the file system. To enable quota for the root file system, the mount option must be passed to the initramfs as a kernel parameter rootflags=. Subsequently, it should not be listed among the mount options in /etc/fstab for the root (/) filesystem.
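For example, to enable project quota on the root filesystem, the kernel command line would include (alongside any other root flags):
rootflags=prjquota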
xfs_scrub_all fails if user "nobody" cannot access the mountpoint
When running xfs_scrub_all, it will launch xfs_scrub@.service for each mounted XFS file system. The service is run as the user nobody, so if nobody cannot navigate to the directory, it will fail with the error:
xfs_scrub@mountpoint.service: Changing to the requested working directory failed: Permission denied
xfs_scrub@mountpoint.service: Failed at step CHDIR spawning /usr/bin/xfs_scrub: Permission denied
xfs_scrub@mountpoint.service: Main process exited, code=exited, status=200/CHDIR
To allow the service to run, change the permissions of the mountpoint so that user
nobody has execute permissions.
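For example, if the filesystem is mounted at /mnt/data (an example path):
# chmod o+x /mnt/data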