According to the lead author, the "Butter FS" nickname originated because it comes from CoW, which stands for copy-on-write; many call it "Better FS", but it actually stands for B-tree file system. It has been around since 2007, so it is relatively new compared with other file systems, but the general feeling now is that it is finally quite stable, the exception being new features that are still being developed. I'm still a novice with BTRFS, but these are some of my reasons for wanting to move across to it fully.
Potential downsides (yes, it is no magic bullet) are that it can be slightly slower due to compression and checksum validation, and it is reportedly not the best choice for large databases. The noatime mount option can be used to disable the "access time" write that Linux performs on every file read. It will also use a bit more space, as it copies updated data to new sectors instead of overwriting data in place like ext4 and others do. Although there is RAID functionality, BTRFS is not actually doing backups (it mirrors), so you still need to back up off-site and to other media. There is no built-in filesystem encryption (it is planned, though); you can use other standards such as LUKS, but these could potentially take away some of the advantages of BTRFS, e.g. using raw block devices.
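As a sketch of the noatime option, here is what a typical /etc/fstab entry might look like (the UUID is just a placeholder):

# skip the access-time write on every file read
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  btrfs  defaults,noatime  0  0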
But the advantages may well outweigh the disadvantages:
Copy-on-Write: Any existing data being edited or updated is left untouched on the drive, which means less potential for loss and a much easier, quicker way to reliably roll back to a previous state.
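You can see copy-on-write in action with a reflink copy on the same Btrfs filesystem: the copy shares the original file's blocks, and extra space is only used once either copy is modified (file names here are just examples):

cp --reflink=always big-video.mkv big-video-copy.mkv   # near-instant, no extra space used yet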
Snapshots: Whether manual or automated, these are extremely quick because they do not recopy all previously existing data; a snapshot is essentially a copy of the metadata describing the state of the files. Where this is done for the boot drive, snapshots can be configured to appear automatically in the GRUB boot menu for quickly reverting to a previous version, with no manual booting from a LiveCD, chrooting, or Clonezilla restores needed. SUSE produced an excellent app called Snapper for managing snapshots, and Timeshift also supports them.
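A minimal sketch of taking a snapshot by hand, assuming / is a Btrfs subvolume and a /.snapshots directory already exists (Snapper and Timeshift automate exactly this):

sudo btrfs subvolume snapshot -r / /.snapshots/root-before-upgrade   # read-only snapshot, created almost instantly
sudo btrfs subvolume list /                                          # list existing subvolumes and snapshots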
Software RAID: What stands out is that the drives need not be matched sizes at all. You can also add a drive to a running system and just rebalance BTRFS. Rebuilds involve only the blocks actively used by the file system, so they are much quicker than on most other systems.
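For example, adding a second, differently sized disk to a mounted filesystem and converting it to RAID1 could look like this (device name and mount point are examples):

sudo btrfs device add /dev/sdb /mnt/data
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data   # spread data and metadata across both drives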
Self-healing: Checksums for data and metadata allow automatic detection of silent data corruption; checksums are verified each time a data block is read from disk, and where a redundant copy exists (e.g. RAID1 or DUP metadata) a bad block is repaired automatically from the good copy.
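A scrub reads all data and metadata and verifies the checksums in the background. A rough sketch on a mounted filesystem:

sudo btrfs scrub start /      # runs in the background
sudo btrfs scrub status /     # progress and any checksum errors found
sudo btrfs device stats /     # per-device error counters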
Three different compression options: ZLIB, LZO, and ZSTD, which differ in speed and compression ratio. You can compress only new files, process the whole file system, or just do specific individual files if you wish. Compression is set on a per-mount basis. If compression would make a file any bigger than the original, the Btrfs filesystem will, by default, not compress that file.
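As a sketch (paths and the zstd level are just examples): the mount option compresses new writes, defragment can recompress what is already on disk, and a single file can be given its own compression property:

sudo mount -o compress=zstd:3 /dev/sdb1 /mnt/data
sudo btrfs filesystem defragment -r -czstd /mnt/data            # recompress existing files
sudo btrfs property set /mnt/data/notes.txt compression zstd    # per-file setting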
Utilities: Scrub validates checksums, and defragmentation works while subvolumes are mounted. Check (for unmounted drives) is similar to fsck. Balance is used after adding new drives to a RAID or making other changes to the BTRFS layout.
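For instance (device and path are examples):

sudo btrfs check /dev/sdb1                    # fsck-style check, only on an unmounted filesystem
sudo btrfs filesystem defragment -r /home     # online defragmentation of a mounted path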
Send/Receive of subvolume changes: A very efficient way of mirroring to a remote system, with various options, over a LAN or the Internet.
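A rough sketch over SSH, assuming read-only snapshots exist on the sending side (hostnames and paths are examples):

sudo btrfs send /.snapshots/home-week1 | ssh backuphost 'sudo btrfs receive /backup'
# later, send only the changes relative to the previous snapshot
sudo btrfs send -p /.snapshots/home-week1 /.snapshots/home-week2 | ssh backuphost 'sudo btrfs receive /backup'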
Disk Partitioning: In theory you could use no partitions at all, but it is recommended you create at least one (GRUB prefers it). The rest of the drive can then be a single BTRFS filesystem divided into subvolumes, and the filesystem can be resized on the fly without unmounting or using a LiveCD.
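Resizing a mounted filesystem is a one-liner (the mount point is an example; growing assumes the underlying partition has room):

sudo btrfs filesystem resize +10G /mnt/data    # grow while mounted
sudo btrfs filesystem resize max /mnt/data     # or use all available space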
Very large VM files: You can put them in separate subvolumes you create and have them act as independent files that are not copied into every snapshot (remember, you are still backing up, aren't you?).
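A common sketch is a dedicated subvolume for VM images, with copy-on-write disabled for files created in it afterwards (the path is an example; note that nodatacow also disables checksums and compression for those files):

sudo btrfs subvolume create /var/lib/libvirt/images
sudo chattr +C /var/lib/libvirt/images    # new files in this directory are created without CoW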
Conversion from other filesystems (ext2, ext3, ext4, reiserfs) to btrfs: Copy-on-write allows BTRFS to preserve an unmodified copy of the original FS and allows the administrator to undo the conversion, even after making changes in the resulting BTRFS filesystem.
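The conversion runs on an unmounted filesystem with btrfs-convert (the device name is an example); the original metadata is kept in a saved subvolume until you delete it, which is what makes the rollback possible:

sudo btrfs-convert /dev/sdc1       # convert ext4 in place
sudo btrfs-convert -r /dev/sdc1    # roll back to the original ext4, as long as the saved image was not removed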
The Linux kernel includes BTRFS support, so there is no need to install drivers, just the user-space utilities to manage it.
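For example, you can confirm support and install the tools with something like (the package command varies by distro):

grep btrfs /proc/filesystems       # lists btrfs once the module is built in or loaded
sudo apt install btrfs-progs       # Debian/Ubuntu; use your distro's package manager otherwise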
So converting my /home partition was quite easy. I've not yet decided how to do my boot drive, as I need to think about how I want to create subvolumes and what best practices to follow for including GRUB etc. A Clonezilla copy of my boot drive means I can experiment and quickly restore without worries, though.
See https://gadgeteer.co.za/why-im-interested-btrfs-filesystem-instead-ext4-linux
#BTRFS #Linux #filesystem #opensource
I've been using it for many years now and can't complain. With compression enabled it can be just as fast as ext4, at least for "desktop" usage.
Especially with zstd compression (and even more so with lower compression levels), you really can't notice the difference (outside of really specific workloads).
Here are some results from my Pinebook Pro, with compsize:
compsize -x /
Processed 158309 files, 103091 regular extents (114530 refs), 72799 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       57%         3.2G          5.6G          7.0G
none       100%         2.0G          2.0G          2.4G
zstd        33%         1.2G          3.5G          4.5G
compsize -x /home
Processed 7203 files, 15051 regular extents (22906 refs), 1738 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       99%          28G           29G           29G
none       100%          28G           28G           28G
zstd        30%          99M          330M          363M
prealloc   100%         4.0M          4.0M           34M
Where there is easily compressible data (/), the gains are definitely there. I've even reached as low as 49% on my main machine, also with ZSTD set to level 3.
However, where the data is not so easily compressible, like /home, where I mostly have music and games, the compression is almost non-existent.
I haven't measured it, but I'm pretty sure there is a performance improvement even on the Pinebook Pro's weak CPU: reading 100MB of data and decompressing it is faster than having to read 200MB directly, especially on very weak/slow disks.