[[Category:File systems]]
[[ja:Snapper]]
[[pt:Snapper]]
[[zh-hans:Snapper]]
{{Related articles start}}
{{Related|Btrfs}}
{{Related|Yabsnap}}
{{Related|mkinitcpio}}
{{Related articles end}}


[http://snapper.io Snapper] is a tool created by openSUSE's Arvin Schnell that helps with managing snapshots of [[Btrfs]] subvolumes and thin-provisioned [[LVM]] volumes. It can create and compare snapshots, revert between snapshots, and supports automatic snapshot timelines.


== Installation ==


[[Install]] the {{Pkg|snapper}} package. The development version {{AUR|snapper-git}} is also available.


Additionally, GUIs are available with {{AUR|snapper-gui-git}}, {{AUR|btrfs-assistant}}, and {{AUR|snapper-tools}}.


== Creating a new configuration ==


{{Expansion|Add instructions for using thin-provisioned [[LVM]] snapshots.|Talk:Snapper#LVM thin-provisioned snapshots}}

Before creating a snapper configuration for a Btrfs subvolume, the subvolume must already exist. If it does not, you should [[Btrfs#Creating a subvolume|create]] it before generating a snapper configuration.

To create a new snapper configuration named {{ic|''config''}} for the Btrfs subvolume at {{ic|''/path/to/subvolume''}}, run:


 # snapper -c ''config'' create-config ''/path/to/subvolume''


This will:
* Create a configuration file at {{ic|/etc/snapper/configs/''config''}} based on the default template from {{ic|/usr/share/snapper/config-templates}}.
* Create a subvolume at {{ic|''/path/to/subvolume''/.snapshots}} where future snapshots for this configuration will be stored. A snapshot's path is {{ic|''/path/to/subvolume''/.snapshots/''#''/snapshot}}, where {{ic|''#''}} is the snapshot number.
* Add {{ic|''config''}} to {{ic|SNAPPER_CONFIGS}} in {{ic|/etc/conf.d/snapper}}.

For example, to create a configuration file for the subvolume mounted at {{ic|/}}, run:


 # snapper -c root create-config /
{{Note|If you are using the suggested [[Btrfs]] partition layout from [[archinstall]] then the {{ic|@.snapshots}} subvolume will already be mounted to {{ic|/.snapshots}}, and the {{ic|snapper create-config}} command will fail [https://github.com/archlinux/archinstall/issues/1808]. To use the {{ic|@.snapshots}} subvolume for Snapper backups, do the following:
* Unmount the {{ic|@.snapshots}} subvolume and delete the existing mountpoint.
* Create the Snapper config.
* Delete the subvolume created by Snapper.
* Re-create the {{ic|/.snapshots}} mount point and re-mount the {{ic|@.snapshots}} subvolume.
}}


At this point, the configuration is active. If your [[cron]] daemon is running, snapper will take [[#Automatic timeline snapshots]]. If you do not use a [[cron]] daemon, you will need to use the systemd service and timer. See [[#Enable/disable]].


See also {{man|5|snapper-configs}}.


== Taking snapshots ==


=== Automatic timeline snapshots ===


A snapshot timeline can be created with a configurable number of hourly, daily, weekly, monthly, and yearly snapshots kept. When the timeline is enabled, by default a snapshot gets created once an hour. Once a day the snapshots get cleaned up by the timeline cleanup algorithm. Refer to the {{ic|TIMELINE_*}} variables in {{man|5|snapper-configs}} for details.


==== Enable/disable ====
If you have a [[cron]] daemon, this feature should start automatically. To disable it, edit the configuration file corresponding to the subvolume that should not have this feature and set:


 TIMELINE_CREATE="no"


If you do not have a [[cron]] daemon, you can use the provided systemd units. [[Start]] and [[enable]] {{ic|snapper-timeline.timer}} to start the automatic snapshot timeline. Additionally, [[start]] and [[enable]] {{ic|snapper-cleanup.timer}} to periodically clean up older snapshots.
 
{{Note|If you have a cron daemon and also enable the systemd units, this may result in duplicate snapshots being created. If you wish to disable cron integration while using the systemd units, one possible solution is not to install the snapper package's cron files via [[pacman]]'s [[Pacman#Skip files from being installed to system|NoExtract]] and [[Pacman#Skip file from being upgraded|NoUpgrade]] configuration options. See [https://unix.stackexchange.com/questions/425570/snapper-has-recently-started-performing-duplicate-snapshots-each-hour].}}
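As a sketch of the ''NoExtract''/''NoUpgrade'' approach, the options in {{ic|/etc/pacman.conf}} might look like the following. The cron file paths are an assumption; verify the paths actually shipped by the package with {{ic|pacman -Ql snapper}}.

```ini
# /etc/pacman.conf  (under the [options] section)
# Paths below are assumptions; confirm with: pacman -Ql snapper
NoExtract = etc/cron.hourly/snapper etc/cron.daily/snapper
NoUpgrade = etc/cron.hourly/snapper etc/cron.daily/snapper
```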


==== Set snapshot limits ====


Here is an example section of a configuration named {{ic|''config''}} with only 5 hourly snapshots, 7 daily ones, no monthly and no yearly ones:

{{hc|/etc/snapper/configs/''config''|2=
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="5"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"
}}
==== Change snapshot and cleanup frequencies ====


If you are using the provided systemd timers, you can [[systemd#Editing provided units|edit]] them to change the snapshot and cleanup frequency.


For example, when editing the {{ic|snapper-timeline.timer}}, add the following to make the frequency every five minutes, instead of hourly:


 [Timer]
 OnCalendar=
 OnCalendar=*:0/5
{{Note|The configuration parameter {{ic|TIMELINE_LIMIT_HOURLY}} is tied to the above setting. In the above example it now refers to how many 5-minute snapshots are kept.}}


When editing {{ic|snapper-cleanup.timer}}, you need to change {{ic|OnUnitActiveSec}}. To make cleanups occur every hour instead of every day, add:
 [Timer]
 OnUnitActiveSec=1h


See [[systemd/Timers]] and [[systemd#Drop-in files]].
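Put together, the two drop-in files might look like the following sketch. The file name {{ic|override.conf}} is only what ''systemctl edit'' creates by default, and the cleanup interval is an example:

```ini
# /etc/systemd/system/snapper-timeline.timer.d/override.conf
[Timer]
OnCalendar=
OnCalendar=*:0/5

# /etc/systemd/system/snapper-cleanup.timer.d/override.conf
[Timer]
# An empty assignment clears previously configured trigger times
OnUnitActiveSec=
OnUnitActiveSec=1h
```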


=== Manual snapshots ===


==== Single snapshots ====


By default snapper takes snapshots that are of the ''single'' type, having no special relationship to other snapshots.


To take a snapshot of a subvolume manually, do:


 # snapper -c ''config'' create --description ''desc''


The above command does not use any cleanup algorithm, so the snapshot is stored permanently or until [[#Delete a snapshot|deleted]].


To set a cleanup algorithm, use the {{ic|-c}} flag after {{ic|create}} and choose either {{ic|number}}, {{ic|timeline}}, {{ic|pre}}, or {{ic|post}}. {{ic|number}} sets snapper to periodically remove snapshots that have exceeded a set number in the configuration file. For example, to create a snapshot that uses the {{ic|number}} algorithm for cleanup do:


 # snapper -c ''config'' create -c number


See [[#Automatic timeline snapshots]] for how {{ic|timeline}} snapshots work and see [[#Pre/post snapshots]] on how {{ic|pre}} and {{ic|post}} work.
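The thresholds for the {{ic|number}} algorithm live in the snapper configuration file. A minimal sketch with example values (see {{man|5|snapper-configs}} for the authoritative variable list):

```ini
# /etc/snapper/configs/config
NUMBER_CLEANUP="yes"         # enable the "number" cleanup algorithm
NUMBER_MIN_AGE="1800"        # never delete snapshots younger than 1800 seconds
NUMBER_LIMIT="50"            # keep at most 50 number-tagged snapshots
NUMBER_LIMIT_IMPORTANT="10"  # keep at most 10 snapshots marked important
```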
==== Pre/post snapshots ====


''Pre/post'' snapshots are intended to be created as a pair, one before and one after a significant change (such as a system update).


If the change can be invoked by a single command, {{ic|snapper create --command}} can be used to invoke the command and automatically create the ''pre/post'' snapshot pair:


 # snapper -c ''config'' create --command ''cmd''


{{Tip|To wrap any shell command around ''pre/post'' snapshots, one may also consider using the {{AUR|snp}} shell script, which provides better output redirection than the native {{ic|--command}} option of snapper.}}


Alternatively, the ''pre/post'' snapshots can be created manually.


First create a ''pre'' snapshot:


 # snapper -c ''config'' create -t pre -p


Note the number of the new snapshot (it is required to create the ''post'' snapshot).


Now perform the actions that will modify the filesystem (''e.g.'' install a new program, upgrade, etc.).


Finally, create the ''post'' snapshot, replacing {{ic|''N''}} with the number of the ''pre'' snapshot:


 # snapper -c ''config'' create -t post --pre-number ''N''


See also [[#Wrapping pacman transactions in snapshots]].


=== Snapshots on boot ===


To have snapper take a snapshot of the {{ic|root}} configuration, [[enable]] {{ic|snapper-boot.timer}}. (These snapshots are of type ''single''.)

== Managing snapshots ==

=== List configurations ===

To list all [[#Creating a new configuration|configurations]] that have been created do:

 # snapper list-configs
 
=== List snapshots ===


To list snapshots taken for a given configuration {{ic|''config''}} do:


 # snapper -c ''config'' list


=== Restore snapshot ===


A file may be kept as is when restoring a snapshot, either because it was not included in the snapshot (e.g. it resides on another subvolume), or because a filter configuration excluded the file.


==== Filter configuration ====


{{Accuracy|{{ic|/etc/mtab}} is a symlink to {{ic|/proc/self/mounts}}, so reverting it has no effect on the system.}}
 
Some files keep state information of the system, e.g. {{ic|/etc/mtab}}. Such files should never be reverted. The default configuration in Arch Linux ensures this. To help users, snapper allows one to ignore these files. Each line in the files {{ic|/etc/snapper/filters/*.txt}} and {{ic|/usr/share/snapper/filters/*.txt}} specifies a pattern. When snapper computes the difference between two snapshots, it ignores all files and directories matching any of those patterns. Note that filters do not exclude files or directories from being snapshotted; for that, use subvolumes or mount points.
 
{{Accuracy|How is the list from SLES documentation relevant for Arch Linux?}}
 
See also the [https://documentation.suse.com/sles/12-SP4/html/SLES-all/cha-snapper.html#snapper-dir-excludes Directories That Are Excluded from Snapshots]{{Dead link|2024|03|03|status=404}} in SLES documentation.
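As an illustration, a custom filter file could look like this. The file name and the patterns are examples, not shipped defaults:

```text
# /etc/snapper/filters/custom.txt
# One pattern per line; matching paths are ignored when snapper
# computes the difference between two snapshots.
/var/cache/*
/home/*/.cache/*
```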
 
==== Restore using the default layout ====
 
{{Accuracy|What is the "default layout"? What is the alternative?}}
 
If you are using the default layout of snapper, each snapshot is a sub-subvolume in the {{ic|.snapshots}} directory of a subvolume, e.g. {{ic|@home}}.
 
{{Accuracy|Subvolumes that are not used for {{ic|/}} can be restored from the system itself. Just log in as root, make sure that the subvolume is not used, and unmount it.}}
 
To restore {{ic|/home}} using one of snapper's snapshots, first boot into a live Arch Linux USB/CD.
 
Mount the Btrfs root volume at {{ic|/mnt}} using its [[UUID]]:
 
 # mount -t btrfs -o subvol=/ /dev/disk/by-uuid/''UUID_of_root_volume'' /mnt
 # cd /mnt
 
{{Accuracy|This was written for the live Arch Linux USB/CD where no snapper service can be running.}}
 
If the snapper service is running on a running system, stop it. Check if any {{ic|snapper-''unit''.timers}} [[Systemd/Timers#Management|are running]], then [[stop]] them.
 
Move the broken/old subvolume out of the way, e.g. {{ic|@home}} to {{ic|@home-backup}}:
 
 # mv @home @home-backup
 
Find the number of the snapshot that you want to recover (there is one line for each snapshot, so you can easily match up number and date of each snapshot):
 
{{hc|# grep -r '<date>' /mnt/@home-backup/.snapshots/*/info.xml|
...
/mnt/@home-backup/.snapshots/''number''/info.xml:  <date>2021-07-26 22:00:00</date>
...
}}
 
{{Note|The time zone for the date and time recorded in {{ic|info.xml}} is [[Wikipedia:Coordinated_Universal_Time|UTC]], so the time difference from local time must be taken into account.}}
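To convert such a timestamp to local time, you can extract it with ''sed'' and hand it to GNU ''date''. The snapshot number and date below are hypothetical examples:

```shell
# A line as printed by the grep command above (hypothetical values):
line='/mnt/@home-backup/.snapshots/42/info.xml:  <date>2021-07-26 22:00:00</date>'

# Extract the timestamp between the <date> tags.
utc=$(printf '%s\n' "$line" | sed 's/.*<date>\(.*\)<\/date>.*/\1/')

# Interpret it as UTC and print it in the local time zone (GNU date).
date -d "$utc UTC"
```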
 
Remember the {{ic|''number''}}.
 
Create a new subvolume {{ic|@home}} from the snapshot number {{ic|''number''}} to be restored:
 
 # btrfs subvolume snapshot @home-backup/.snapshots/''number''/snapshot @home
 
Move the directory {{ic|.snapshots}} back to the healthy subvolume, e.g. {{ic|@home}}:
 
 # mv @home-backup/.snapshots @home/
 
If {{ic|subvolid}} was used for the {{ic|/home}} mount entry option in [[fstab]], instead of {{ic|1=subvol=/path/to/subvolume}}, change the subvolid in the {{ic|/mnt/@/etc/fstab}} file (assuming that {{ic|@}} is the subvolume that is mounted as {{ic|/}} in the system) to the new subvolid, which can be found with {{ic|btrfs subvolume list /mnt {{!}} grep @home$}}.
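For illustration, an [[fstab]] entry that mounts by subvolume path instead of by id avoids having to update the id after a restore. The UUID and mount options below are placeholders:

```text
# /etc/fstab (placeholder UUID and example options)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  btrfs  rw,noatime,subvol=/@home  0 0
```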
 
Reboot.
 
Check that your system is working as intended, then delete the old/broken subvolume (e.g. {{ic|@home-backup}}) if desired. Before deleting it, check whether it contains useful data that you may want to copy back.
 
=== Delete a snapshot ===


To delete a snapshot number {{ic|''N''}} do:


 # snapper -c ''config'' delete ''N''


Multiple snapshots can be deleted at one time. For example, to delete snapshots 65 and 70 of the root configuration do:


 # snapper -c root delete 65 70
 
To delete a range of snapshots, for example snapshots 65 through 70 of the root configuration, do:
 
 # snapper -c root delete 65-70
 
To free the space used by the snapshot(s) immediately, use {{ic|--sync}}:
 
 # snapper -c root delete --sync 65


{{Note|When deleting a pre snapshot, you should always delete its corresponding post snapshot and vice versa.}}


=== Access for non-root users ===
 
Each config is created with the root user, and by default, only root can see and access it.


To be able to list the snapshots for a given config for a specific user, simply change the value of {{ic|ALLOW_USERS}} in your {{ic|/etc/snapper/configs/''config''}} file. You should now be able to run {{ic|snapper -c ''config'' list}} as a normal user.
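For example, to let the hypothetical user {{ic|alice}} list snapshots of the {{ic|root}} config (groups can be allowed similarly with {{ic|ALLOW_GROUPS}}):

```text
# /etc/snapper/configs/root
ALLOW_USERS="alice"
```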


Eventually, you may want to browse the {{ic|.snapshots}} directory as a regular user, but the owner of this directory must stay root. Therefore, you should change the group owner to a group containing the user you are interested in, such as {{ic|users}}:
== Tips and tricks ==


=== Wrapping pacman transactions in snapshots ===


There are a couple of packages used for automatically creating snapshots upon a pacman transaction:


* {{App|snap-pac|Makes pacman automatically use snapper to create [[#Pre/post snapshots|pre/post snapshots]] like openSUSE's YaST. Uses [[pacman hooks]].|https://github.com/wesbarnett/snap-pac|{{Pkg|snap-pac}}}}
* {{App|grub-btrfs|Includes a daemon (''grub-btrfsd'') that can be enabled via ''systemctl'' to look for new snapshots and automatically includes them in the [[GRUB]] menu.|https://github.com/Antynea/grub-btrfs|{{Pkg|grub-btrfs}}}}
* {{App|snap-pac-grub|Additionally updates [[GRUB]] entries for {{Pkg|grub-btrfs}} after {{Pkg|snap-pac}} made the snapshots. Also uses [[pacman hooks]].|https://github.com/maximbaz/snap-pac-grub|{{AUR|snap-pac-grub}}}}
* {{App|refind-btrfs|Adds entries to [[rEFInd]] after {{Pkg|snap-pac}} made the snapshots.|https://github.com/Venom1991/refind-btrfs|{{AUR|refind-btrfs}}}}
* {{App|snp|Wraps any shell command in a snapper pre-post snapshot (e.g. {{ic|snp pacman -Syu}}), with better output than the native {{ic|--command}} option of snapper (see [[#Pre/post snapshots]]).|https://gist.github.com/erikw/5229436|{{AUR|snp}}}}
 
==== Booting into read-only snapshots ====
 
Users who rely on {{Pkg|grub-btrfs}} or {{AUR|snap-pac-grub}} should note that by default, Snapper's snapshots are read-only, and there are some inherent difficulties booting into read-only snapshots. Many services, such as a desktop manager, require a writable {{ic|/var}} directory, and will fail to start when booted from a read-only snapshot.
 
To work around this, you can either make the snapshots writable, or use the developer-approved method of booting the snapshots with overlayfs, which makes the snapshot behave similarly to a live CD environment.


{{Note|Any changes you make to files within this snapshot will not be saved, as the filesystem only exists within RAM.}}

To boot snapshots with overlayfs:

* Ensure {{Pkg|grub-btrfs}} is installed on your system.
* Add {{ic|grub-btrfs-overlayfs}} to the end of the {{ic|HOOKS}} array in {{ic|/etc/mkinitcpio.conf}}. For example: {{bc|HOOKS{{=}}(base udev autodetect microcode modconf kms keyboard keymap consolefont block filesystems fsck grub-btrfs-overlayfs)}}{{Note|Because ''grub-btrfs-overlayfs'' only provides a [[Mkinitcpio#Runtime_hooks|runtime hook]] and no systemd unit, it is '''not''' compatible with a systemd based initramfs. Make sure you use a [[Mkinitcpio#Common_hooks|Busybox based initramfs]] instead. See [https://github.com/Antynea/grub-btrfs/issues/199 this GitHub issue] for more details.}}
* [[Regenerate the initramfs]].

Further reading:

* [https://github.com/Antynea/grub-btrfs/blob/master/initramfs/readme.md grub-btrfs README] (includes instructions for those who use {{Pkg|dracut}} instead of {{Pkg|mkinitcpio}})
* [https://github.com/Antynea/grub-btrfs/issues/92 Discussion on Github]


==== Backup non-Btrfs boot partition on pacman transactions ====


If your {{ic|/boot}} partition is on a non-Btrfs filesystem (e.g. an [[ESP]]), you are not able to make snapper backups of it. See [[System backup#Snapshots and /boot partition]] to copy the boot partition automatically on a kernel update to your Btrfs root with a hook. This also plays nicely together with {{Pkg|snap-pac}}.


=== Incremental backup to external drive ===


Some tools can use snapper to automate backups. See [[Btrfs#Incremental backup to external drive]].


=== Suggested filesystem layout ===


{{Note|1=The following layout is intended ''not'' to be used with {{ic|snapper rollback}}, but is intended to alleviate the inherent problems of [[#Restoring / to its previous snapshot]]. See [https://bbs.archlinux.org/viewtopic.php?id=194491 this forum thread].}}


Here is a suggested file system layout for easily restoring the subvolume {{ic|@}} that is mounted at root to a previous snapshot:
{| class="wikitable"
|+ Filesystem layout
! Subvolume !! Mountpoint
|-
| @ || /
|-
| @home || /home
|-
| @snapshots || /.snapshots
|-
| @var_log || /var/log
|}


 subvolid=5
 |
 ├── @ -|
 |     contained directories:
 |      ├── /usr
 |      ├── /bin
 |      ├── /.snapshots
 |      ├── ...
 |
 ├── @home
 ├── @snapshots
 ├── @var_log
 └── @...


The subvolumes {{ic|@...}} are mounted to any other directory that should have its own subvolume.
{{Note|
* When taking a snapshot of {{ic|@}} (mounted at the root {{ic|/}}), other subvolumes are not included in the snapshot. Even if a subvolume is nested below {{ic|@}}, a snapshot of {{ic|@}} will ''not'' include it. Create snapper configurations for additional subvolumes besides {{ic|@}} of which you want to keep snapshots.
* Due to a [[Btrfs#Swap file|Btrfs limitation]], snapshotted volumes cannot contain [[Swap#Swap file|swap files]]. Either put the swap file on another subvolume or create a [[Swap#Swap partition|swap partition]].
}}
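With this layout, the corresponding [[fstab]] entries might look like the following sketch; the UUID and the mount options are placeholders:

```text
# /etc/fstab (placeholder UUID and example options)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /            btrfs  rw,noatime,subvol=/@           0 0
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home        btrfs  rw,noatime,subvol=/@home       0 0
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /.snapshots  btrfs  rw,noatime,subvol=/@snapshots  0 0
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /var/log     btrfs  rw,noatime,subvol=/@var_log    0 0
```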


If you were to restore your system to a previous snapshot of {{ic|@}}, these other subvolumes will remain unaffected. For example, this allows you to restore {{ic|@}} to a previous snapshot while keeping your {{ic|/home}} unchanged, because of the subvolume that is mounted at {{ic|/home}}.


This layout allows the snapper utility to take regular snapshots of {{ic|/}}, while at the same time making it easy to restore {{ic|/}} from an Arch Live CD if it becomes unbootable.


In this scenario, after the initial setup, snapper needs no changes, and will work as expected.
 
{{Tip|
* Consider creating subvolumes for other directories that contain data you do not want to include in snapshots and rollbacks of the {{ic|@}} subvolume, such as {{ic|/var/cache}}, {{ic|/var/spool}}, {{ic|/var/tmp}}, {{ic|/var/lib/machines}} ([[systemd-nspawn]]), {{ic|/var/lib/docker}} ([[Docker]]), {{ic|/var/lib/postgres}} ([[PostgreSQL]]), and other data directories under {{ic|/var/lib/}}. It is up to you if you want to follow the ''flat'' layout or create nested subvolumes. On the other hand, the pacman database in {{ic|/var/lib/pacman}} must stay on the root subvolume ({{ic|@}}).
* You can run Snapper on {{ic|@home}} and any other subvolume to have separate snapshot and rollback capabilities for data.
}}
 
==== Configuration of snapper and mount point ====
 
It is assumed that the subvolume {{ic|@}} is mounted at root {{ic|/}}. It is also assumed that {{ic|/.snapshots}} is ''not'' mounted and does ''not'' exist as a folder; this can be ensured with the following commands:


 # umount /.snapshots
 # rm -r /.snapshots


Then [[#Creating a new configuration|create a new configuration]] for {{ic|/}}. {{ic|snapper create-config}} automatically creates a subvolume {{ic|.snapshots}} with the root subvolume {{ic|@}} as its parent; this subvolume is not needed for the suggested filesystem layout and can be deleted:


 # btrfs subvolume delete /.snapshots


After deleting the subvolume, recreate the directory {{ic|/.snapshots}}.
 
 # mkdir /.snapshots
 
Now [[mount]] {{ic|@snapshots}} to {{ic|/.snapshots}}. For example, for a file system located on {{ic|/dev/sda1}}:
 
 # mount -o subvol=@snapshots /dev/sda1 /.snapshots


To make this mount permanent, add an entry to your [[fstab]].


Or, if you already have an fstab entry for it, remount the snapshot subvolume:


 # mount -a
 
Give the folder {{ic|750}} [[Permissions#Numeric method|permissions]].


This will make all snapshots that snapper creates be stored outside of the {{ic|@}} subvolume, so that {{ic|@}} can easily be replaced anytime without losing the snapper snapshots.


==== Restoring / to its previous snapshot ====
 
To restore {{ic|/}} using one of snapper's snapshots, first boot into a live Arch Linux USB/CD.
 
[[Mount]] the toplevel subvolume (subvolid=5) to {{ic|/mnt}}. That is, omit any {{ic|subvolid}} or {{ic|subvol}} mount flags.


Find the number of the snapshot that you want to recover:


# grep -r '<date>' /mnt/@snapshots/*/info.xml


The output should look like this; there is one line per snapshot, so you can easily match up the number and date of each snapshot.
/mnt/@snapshots/''number''/info.xml: <date>2021-07-26 22:00:00</date>


{{Note|The time zone for the date and time recorded in {{ic|info.xml}} is [[Wikipedia:Coordinated_Universal_Time|UTC]], so the time difference from local time must be taken into account.}}
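For example, GNU {{ic|date}} can convert the UTC timestamp from {{ic|info.xml}} to local time ({{ic|Europe/Berlin}} below stands in for your local time zone):

```shell
# Interpret the snapshot timestamp as UTC and print it in the chosen time zone.
TZ=Europe/Berlin date -d '2021-07-26 22:00:00 UTC' '+%F %T'   # prints 2021-07-27 00:00:00
```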


Remember the {{ic|''number''}}.
 
Now, move {{ic|@}} to another location (''e.g.'' {{ic|/@.broken}}) to save a copy of the current system. Alternatively, simply delete {{ic|@}} using {{ic|btrfs subvolume delete /mnt/@}}.


Create a read-write snapshot of the read-only snapshot snapper took:


# btrfs subvolume snapshot /mnt/@snapshots/''number''/snapshot /mnt/@


Where {{ic|''number''}} is the number of the snapper snapshot you wish to restore.


If subvolid, instead of {{ic|/path/to/subvolume}}, was used for the {{ic|/}} mount entry in [[fstab]], change the subvolid in the {{ic|/mnt/@/etc/fstab}} file to the new subvolid, which can be found with {{ic|btrfs subvolume list /mnt {{!}} grep @$}}. Also change the boot loader configuration, such as {{ic|refind_linux.conf}}, if it contains the subvolid.


Finally, unmount the top-level subvolume (ID=5), then [[Btrfs#Mounting subvolumes|mount]] {{ic|@}} to {{ic|/mnt}} and your [[ESP]] or boot partition to the appropriate mount point. [[Change root]] to your restored snapshot in order to [[Mkinitcpio#Manual generation|regenerate your initramfs image]].


Your {{ic|/}} has now been restored to the previous snapshot. Now simply reboot.
 
{{Tip|You can also use the automatic rollback tool made for this layout: {{AUR|snapper-rollback}}. Edit the config file at {{ic|/etc/snapper-rollback.conf}} to match your system.}}
 
==== Restoring other subvolumes to their previous snapshot ====
 
See [[#Restore snapshot]].
 
=== Deleting files from snapshots ===
 
If you want to delete a specific file or folder from past snapshots without deleting the snapshots themselves, {{AUR|snappers}} is a script that adds this functionality to Snapper. This script can also be used to manipulate past snapshots in a number of other ways that Snapper does not currently support.
 
If you want to remove a file without using an extra script, you just need to [https://unix.stackexchange.com/a/149933 make your snapshot subvolume read-write], which you can do with:


 # btrfs property set /path/to/.snapshots/<snapshot_num>/snapshot ro false


Verify that {{ic|1=ro=false}}:


 # btrfs property get /path/to/.snapshots/<snapshot_num>/snapshot
 ro=false


You can now modify files in {{ic|/path/to/.snapshots/<snapshot_num>/snapshot}} like normal.  You can use a shell loop to work on your snapshots in bulk.
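For example, the following loop prints (as a dry run) the command that would flip every snapshot of a configuration to read-write. It is demonstrated on a mock directory tree; on a real system, point it at your actual {{ic|.snapshots}} path and drop the {{ic|echo}}:

```shell
base="$(mktemp -d)"    # stands in for /path/to/.snapshots
mkdir -p "$base"/1/snapshot "$base"/2/snapshot
for snap in "$base"/*/snapshot; do
    # Drop the echo to actually make each snapshot read-write.
    echo btrfs property set "$snap" ro false
done
```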


=== Preventing slowdowns ===


Keeping many snapshots over a long timeframe on a busy filesystem like {{ic|/}} (where many system updates happen over time) can cause serious slowdowns. You can prevent this by:
 
* [[Btrfs#Creating a subvolume|Creating]] subvolumes for things that are not worth being snapshotted, like {{ic|/var/cache/pacman/pkg}}, {{ic|/var/abs}}, {{ic|/var/tmp}}, and {{ic|/srv}}.
* Editing the default settings for hourly/daily/monthly/yearly snapshots when using [[#Automatic timeline snapshots]].


==== updatedb ====


By default, {{ic|updatedb}} (see [[mlocate]]) will also index the {{ic|.snapshots}} directory created by snapper, which can cause serious slowdown and excessive memory usage if you have many snapshots. You can prevent {{ic|updatedb}} from indexing over it by editing:
 
{{hc|/etc/updatedb.conf|2=PRUNENAMES = ".snapshots"}}


==== Disable quota groups ====
 
There are reports of significant slowdowns caused by quota groups; if, for instance, {{ic|snapper ls}} takes many minutes to return a result, this could be the cause. See [https://www.reddit.com/r/btrfs/comments/fmucrq/btrfs_snapshots_make_entire_system_lag_cpu_usage/].
 
To determine whether quota groups are enabled, use the following command:
 
# btrfs qgroup show /
 
Quota groups can then be disabled with:
 
# btrfs quota disable /
 
==== Count the number of snapshots ====
 
If disabling quota groups did not help with the slowdown, it may be helpful to count the number of snapshots. This can be done with:
 
# btrfs subvolume list -s / | wc -l
 
=== Create subvolumes for user data and logs ===
 
It is recommended to store directories that contain user data (e.g. emails) or logs on their own subvolumes, rather than on the root subvolume {{ic|/}}. That way, if a snapshot of {{ic|/}} is restored, user data and logs will not also be reverted to the previous state, and a separate timeline of snapshots can be maintained for user data. It is not recommended to snapshot logs in {{ic|/var/log}}: keeping logs out of rollbacks makes it easier to troubleshoot.
 
Directories can also be skipped during a restore using [[#Filter configuration]].
 
{{Accuracy|How is the list from SLES documentation relevant for Arch Linux?}}


See also the [https://documentation.suse.com/sles/12-SP4/html/SLES-all/cha-snapper.html#snapper-dir-excludes Directories That Are Excluded from Snapshots]{{Dead link|2024|03|03|status=404}} in SLES documentation.


=== Cleanup based on disk usage ===
=== Cleanup based on disk usage ===


== Troubleshooting ==
== Troubleshooting ==
 
=== Snapper logs ===
 
Snapper writes all activity to {{ic|/var/log/snapper.log}}; check this file first if you think something went wrong.


If you have issues with hourly/daily/weekly snapshots, the most common cause so far has been that the cronie service (or whatever cron daemon you are using) was not running.


=== IO error ===
 
If you get an 'IO Error' when trying to create a snapshot, make sure that the [https://bbs.archlinux.org/viewtopic.php?id=164404 .snapshots] directory associated with the subvolume you are trying to snapshot is itself a subvolume.


Another possible cause is that the .snapshots directory is not owned by root (you will find {{ic|Btrfs.cc(openInfosDir):219 - .snapshots must have owner root}} in {{ic|/var/log/snapper.log}}).
=== Orphaned snapshots causing wasted disk space ===
It is possible for snapshots to get 'lost', where they still exist on disk but are not tracked by snapper. This can result in a large amount of wasted, unaccounted-for disk space. To check for this, compare the output of
# snapper -c <config> list
to
# btrfs subvolume list -o <parent subvolume>/.snapshots
Any subvolume in the second list which is not present in the first is an orphan and can be [[Btrfs#Deleting a subvolume|deleted]] manually.
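The comparison can be scripted with {{ic|comm}} on the sorted snapshot numbers. The numbers below are hypothetical stand-ins for what the two commands above would print:

```shell
cd "$(mktemp -d)"
# Snapshot numbers known to snapper (from: snapper -c <config> list):
printf '12\n14\n15\n' > snapper.list
# Snapshot subvolumes present on disk (from: btrfs subvolume list -o <parent subvolume>/.snapshots):
printf '12\n13\n14\n15\n' > btrfs.list
# Lines only in the second file are orphaned snapshots:
comm -13 snapper.list btrfs.list   # prints 13
```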


== See also ==
* [http://snapper.io/ Snapper homepage]
* [https://en.opensuse.org/Portal:Snapper openSUSE Snapper portal]
* [https://btrfs.wiki.kernel.org/index.php/Main_Page Btrfs homepage]
* [https://web.archive.org/web/20160327174528/https://www.linux.com/news/enterprise/systems-management/878490-snapper-suses-ultimate-btrfs-snapshot-manager/ Linux.com: Snapper: SUSE's Ultimate Btrfs Snapshot Manager]

Latest revision as of 18:06, 7 April 2024


Creating a new configuration

Before creating a snapper configuration for a Btrfs subvolume, the subvolume must already exist. If it does not, you should create it before generating a snapper configuration.

To create a new snapper configuration named config for the Btrfs subvolume at /path/to/subvolume, run:

# snapper -c config create-config /path/to/subvolume

This will:

  • Create a configuration file at /etc/snapper/configs/config based on the default template from /usr/share/snapper/config-templates.
  • Create a subvolume at /path/to/subvolume/.snapshots where future snapshots for this configuration will be stored. A snapshot's path is /path/to/subvolume/.snapshots/#/snapshot, where # is the snapshot number.
  • Add config to SNAPPER_CONFIGS in /etc/conf.d/snapper.

For example, to create a configuration file for the subvolume mounted at /, run:

# snapper -c root create-config /
Note: If you are using the suggested Btrfs partition layout from archinstall then the @.snapshots subvolume will already be mounted to /.snapshots, and the snapper create-config command will fail [1]. To use the @.snapshots subvolume for Snapper backups, do the following:
  • Unmount the @.snapshots subvolume and delete the existing mountpoint.
  • Create the Snapper config.
  • Delete the subvolume created by Snapper.
  • Re-create the /.snapshots mount point and re-mount the @.snapshots subvolume.

At this point, the configuration is active. If your cron daemon is running, snapper will take #Automatic timeline snapshots. If you do not use a cron daemon, you will need to use the systemd service and timer. See #Enable/disable.

See also snapper-configs(5).

Taking snapshots

Automatic timeline snapshots

A snapshot timeline can be created with a configurable number of hourly, daily, weekly, monthly, and yearly snapshots kept. When the timeline is enabled, by default a snapshot gets created once an hour. Once a day the snapshots get cleaned up by the timeline cleanup algorithm. Refer to the TIMELINE_* variables in snapper-configs(5) for details.

Enable/disable

If you have a cron daemon, this feature should start automatically. To disable it, edit the configuration file corresponding to the subvolume on which you do not want this feature and set:

TIMELINE_CREATE="no"

If you do not have a cron daemon, you can use the provided systemd units. Start and enable snapper-timeline.timer to start the automatic snapshot timeline. Additionally, start and enable snapper-cleanup.timer to periodically clean up older snapshots.

Note: If you have a cron daemon and also enable the systemd units, this may result in duplicate snapshots being created. If you wish to disable cron integration while using the systemd units, one possible solution is to prevent installation of the snapper package's cron files via pacman's NoExtract and NoUpgrade configuration options. See [2].
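As a sketch, such an entry in /etc/pacman.conf could look like the following. The cron file paths are an assumption for illustration — list the files actually shipped by the package with pacman -Ql snapper | grep cron before copying this:

```
[options]
NoExtract = etc/cron.hourly/snapper etc/cron.daily/snapper
```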

Set snapshot limits

The default settings will keep 10 hourly, 10 daily, 10 monthly and 10 yearly snapshots. You may want to change this in the configuration, especially on busy subvolumes like /. See #Preventing slowdowns.

Here is an example section of a configuration named config with only 5 hourly snapshots, 7 daily ones, no monthly and no yearly ones:

/etc/snapper/configs/config
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="5"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"

Change snapshot and cleanup frequencies

If you are using the provided systemd timers, you can edit them to change the snapshot and cleanup frequency.

For example, when editing the snapper-timeline.timer, add the following to make the frequency every five minutes, instead of hourly:

[Timer]
OnCalendar=
OnCalendar=*:0/5

When editing snapper-cleanup.timer, you need to change OnUnitActiveSec. To make cleanups occur every hour instead of every day, add:

[Timer]
OnUnitActiveSec=1h

See systemd/Timers and systemd#Drop-in files.
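If you are unsure where these [Timer] lines belong: systemctl edit snapper-timeline.timer creates a drop-in override for the timer (by default at /etc/systemd/system/snapper-timeline.timer.d/override.conf) and reloads the systemd configuration afterwards. The resulting drop-in would contain, for example:

```
[Timer]
OnCalendar=
OnCalendar=*:0/5
```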

Manual snapshots

Single snapshots

By default snapper takes snapshots that are of the single type, having no special relationship to other snapshots.

To take a snapshot of a subvolume manually, do:

# snapper -c config create --description desc

The above command does not use any cleanup algorithm, so the snapshot is stored permanently or until deleted.

To set a cleanup algorithm, use the -c flag after create and choose either number, timeline, pre, or post. number sets snapper to periodically remove snapshots that have exceeded a set number in the configuration file. For example, to create a snapshot that uses the number algorithm for cleanup, do:

# snapper -c config create -c number

See #Automatic timeline snapshots for how timeline snapshots work and see #Pre/post snapshots on how pre and post work.

Pre/post snapshots

The other type of snapshots - pre/post snapshots - are intended to be created as a pair, one before and one after a significant change (such as a system update).

If the significant change is/can be invoked by a single command, then snapper create --command can be used to invoke the command and automatically create pre/post snapshots:

# snapper -c config create --command cmd
Tip: To wrap any shell command around pre/post snapshots, one may also consider using the snpAUR shell script, which provides better output redirection than the native --command option of snapper.

Alternatively, the pre/post snapshots can be created manually.

First create a pre snapshot:

# snapper -c config create -t pre -p

Note the number of the new snapshot (it is required to create the post snapshot).

Now perform the actions that will modify the filesystem (e.g., install a new program, upgrade, etc.).

Finally, create the post snapshot, replacing N with the number of the pre snapshot:

# snapper -c config create -t post --pre-number N

See also #Wrapping pacman transactions in snapshots.

Snapshots on boot

To have snapper take a snapshot of the root configuration, enable snapper-boot.timer. (These snapshots are of type single.)

Managing snapshots

List configurations

To list all configurations that have been created do:

# snapper list-configs

List snapshots

To list snapshots taken for a given configuration config do:

# snapper -c config list

Restore snapshot

A file may be kept as is when restoring a snapshot, either because it was not included in the snapshot (e.g. it resides on another subvolume), or because a filter configuration excluded the file.

Filter configuration

The factual accuracy of this article or section is disputed.

Reason: /etc/mtab is a symlink to /proc/self/mounts, so reverting it has no effect on the system. (Discuss in Talk:Snapper)

Some files keep state information of the system, e.g. /etc/mtab. Such files should never be reverted. The default configuration in Arch Linux ensures this. To help users, snapper allows one to ignore such files. Each line in the files /etc/snapper/filters/*.txt and /usr/share/snapper/filters/*.txt specifies a pattern. When snapper computes the difference between two snapshots, it ignores all files and directories matching any of those patterns. Note that filters do not exclude files or directories from being snapshotted; for that, use subvolumes or mount points.

The factual accuracy of this article or section is disputed.

Reason: How is the list from SLES documentation relevant for Arch Linux? (Discuss in Talk:Snapper)

See also the Directories That Are Excluded from Snapshots[dead link 2024-03-03 ⓘ] in SLES documentation.

Restore using the default layout

The factual accuracy of this article or section is disputed.

Reason: What is the "default layout"? What is the alternative? (Discuss in Talk:Snapper)

If you are using the default layout of snapper, each snapshot is a sub-subvolume in the .snapshots directory of a subvolume, e.g. @home.

The factual accuracy of this article or section is disputed.

Reason: Subvolumes that are not used for / can be restored from the system itself. Just log in as root, make sure that the subvolume is not used, and unmount it. (Discuss in Talk:Snapper)

To restore /home using one of snapper's snapshots, first boot into a live Arch Linux USB/CD.

Mount the Btrfs top-level volume to /mnt using its UUID:

# mount -t btrfs -o subvol=/ /dev/disk/by-uuid/UUID_of_root_volume /mnt
# cd /mnt

The factual accuracy of this article or section is disputed.

Reason: This was written for the live Arch Linux USB/CD where no snapper service can be running. (Discuss in Talk:Snapper)

If the snapper service is running on a running system, stop it. Check if any snapper-*.timer units are running, then stop them.

Move a broken/old subvolume out of the way e.g. @home to @home-backup:

# mv @home @home-backup

Find the number of the snapshot that you want to recover (there is one line for each snapshot, so you can easily match up number and date of each snapshot):

# grep -r '<date>' /mnt/@home-backup/.snapshots/*/info.xml
...
/mnt/@home-backup/.snapshots/number/info.xml:  <date>2021-07-26 22:00:00</date>
...
Note: The time zone for the date and time recorded in info.xml is UTC, so the time difference from local time must be taken into account.

Remember the number.

Create a new @home subvolume from snapshot number (the snapshot to be restored):

# btrfs subvolume snapshot @home-backup/.snapshots/number/snapshot @home

Move the .snapshots directory back into the healthy subvolume, e.g. @home:

# mv @home-backup/.snapshots @home/

If subvolid was used for the /home mount entry option in fstab, instead of /path/to/subvolume, change subvolid in the /mnt/@/etc/fstab file (assuming that @ is the subvolume that is mounted as / in the system) to the new subvolid that can be found with btrfs subvolume list /mnt | grep @home$.

Reboot.

Check that your system is working as intended, then delete the old/broken subvolume (e.g. @home-backup) if desired. Before deleting it, check whether it contains useful data that you may want to get back.

Delete a snapshot

To delete a snapshot number N do:

# snapper -c config delete N

Multiple snapshots can be deleted at one time. For example, to delete snapshots 65 and 70 of the root configuration do:

# snapper -c root delete 65 70

To delete a range of snapshots, in this example between snapshots 65 and 70 of the root configuration do:

# snapper -c root delete 65-70

To free the space used by the snapshot(s) immediately, use --sync:

# snapper -c root delete --sync 65
Note: When deleting a pre snapshot, you should always delete its corresponding post snapshot and vice versa.

Access for non-root users

Each config is created with the root user, and by default, only root can see and access it.

To be able to list the snapshots for a given config for a specific user, simply change the value of ALLOW_USERS in your /etc/snapper/configs/config file. You should now be able to run snapper -c config list as a normal user.

Eventually, you may want to browse the .snapshots directory as a regular user, but the owner of this directory must stay root. Therefore, change the group owner to a group containing the user you are interested in, such as users:

# chmod a+rx .snapshots
# chown :users .snapshots

Tips and tricks

Wrapping pacman transactions in snapshots

There are several packages that can automatically create snapshots upon a pacman transaction:

  • snap-pac — Pacman hooks that use snapper to create pre/post snapshots around each transaction.
    https://github.com/wesbarnett/snap-pac
  • grub-btrfs — Includes a daemon (grub-btrfsd) that can be enabled via systemctl to look for new snapshots and automatically includes them in the GRUB menu.
    https://github.com/Antynea/grub-btrfs
  • snap-pac-grubAUR — Updates the GRUB menu entries after snap-pac creates snapshots.
    https://github.com/maximbaz/snap-pac-grub
  • refind-btrfsAUR — Adds entries to rEFInd after snap-pac made the snapshots.
    https://github.com/Venom1991/refind-btrfs
  • snpAUR — Wraps any shell command in a snapper pre-post snapshot (e.g. snp pacman -Syu), with better output than the native --command option of snapper (see #Pre/post snapshots).
    https://gist.github.com/erikw/5229436

Booting into read-only snapshots

Users who rely on grub-btrfs or snap-pac-grubAUR should note that by default, Snapper's snapshots are read-only, and there are some inherent difficulties booting into read-only snapshots. Many services, such as a desktop manager, require a writable /var directory, and will fail to start when booted from a read-only snapshot.

To work around this, you can either make the snapshots writable, or use the developer-approved method of booting the snapshots with overlayfs, causing the snapshot to behave similarly to a live CD environment.

Note: Any changes you make to files within this snapshot will not be saved, as the filesystem only exists within RAM.

To boot snapshots with overlayfs:

  • Ensure grub-btrfs is installed on your system.
  • Add grub-btrfs-overlayfs to the end of the HOOKS array in /etc/mkinitcpio.conf. For example:
    HOOKS=(base udev autodetect microcode modconf kms keyboard keymap consolefont block filesystems fsck grub-btrfs-overlayfs)
    Note: Because grub-btrfs-overlayfs only provides a runtime hook and no systemd unit, it is not compatible with a systemd based initramfs. Make sure you use a Busybox based initramfs instead. See this GitHub issue for more details.
  • Regenerate the initramfs.


Backup non-Btrfs boot partition on pacman transactions

If your /boot partition is on a non-Btrfs filesystem (e.g. an ESP), you cannot snapshot it with snapper. See System backup#Snapshots and /boot partition to copy the boot partition to your Btrfs root automatically on a kernel update using a hook. This also works well together with snap-pac.

Incremental backup to external drive

Some tools can use snapper to automate backups. See Btrfs#Incremental backup to external drive.

Suggested filesystem layout

Note: The following layout is not intended to be used with snapper rollback; rather, it is intended to alleviate the inherent problems of #Restoring / to its previous snapshot. See this forum thread.

Here is a suggested file system layout for easily restoring the subvolume @ that is mounted at root to a previous snapshot:

Filesystem layout:

  Subvolume     Mountpoint
  @             /
  @home         /home
  @snapshots    /.snapshots
  @var_log      /var/log
subvolid=5
  |
  ├── @ -|
  |     contained directories:
  |       ├── /usr
  |       ├── /bin
  |       ├── /.snapshots
  |       ├── ...
  |
  ├── @home
  ├── @snapshots
  ├── @var_log
  └── @...

The subvolumes @... are mounted to any other directory that should have its own subvolume.

Note:
  • When taking a snapshot of @ (mounted at the root /), other subvolumes are not included in the snapshot. Even if a subvolume is nested below @, a snapshot of @ will not include it. Create snapper configurations for additional subvolumes besides @ of which you want to keep snapshots.
  • Due to a Btrfs limitation, snapshotted volumes cannot contain swap files. Either put the swap file on another subvolume or create a swap partition.

If you were to restore your system to a previous snapshot of @, these other subvolumes would remain unaffected. For example, this allows you to restore @ to a previous snapshot while keeping your /home unchanged, because a separate subvolume is mounted at /home.

This layout allows the snapper utility to take regular snapshots of /, while at the same time making it easy to restore / from an Arch Live CD if it becomes unbootable.

In this scenario, after the initial setup, snapper needs no changes, and will work as expected.

Tip:
  • Consider creating subvolumes for other directories that contain data you do not want to include in snapshots and rollbacks of the @ subvolume, such as /var/cache, /var/spool, /var/tmp, /var/lib/machines (systemd-nspawn), /var/lib/docker (Docker), /var/lib/postgres (PostgreSQL), and other data directories under /var/lib/. It is up to you if you want to follow the flat layout or create nested subvolumes. On the other hand, the pacman database in /var/lib/pacman must stay on the root subvolume (@).
  • You can run Snapper on @home and any other subvolume to have separate snapshot and rollback capabilities for data.

Configuration of snapper and mount point

It is assumed that the subvolume @ is mounted at root /. It is also assumed that /.snapshots is not mounted and does not exist as folder, this can be ensured by the commands:

# umount /.snapshots
# rm -r /.snapshots

Then create a new configuration for /. Snapper create-config automatically creates a subvolume .snapshots with the root subvolume @ as its parent, that is not needed for the suggested filesystem layout, and can be deleted.

# btrfs subvolume delete /.snapshots

After deleting the subvolume, recreate the directory /.snapshots.

# mkdir /.snapshots

Now mount @snapshots to /.snapshots. For example, for a file system located on /dev/sda1:

# mount -o subvol=@snapshots /dev/sda1 /.snapshots

To make this mount permanent, add an entry to your fstab.

Or if you have an existing fstab entry remount the snapshot subvolume:

# mount -a

Give the folder 750 permissions.

This will make all snapshots that snapper creates be stored outside of the @ subvolume, so that @ can easily be replaced anytime without losing the snapper snapshots.

Restoring / to its previous snapshot

To restore / using one of snapper's snapshots, first boot into a live Arch Linux USB/CD.

Mount the toplevel subvolume (subvolid=5). That is, omit any subvolid or subvol mount flags.

Find the number of the snapshot that you want to recover:

# grep -r '<date>' /mnt/@snapshots/*/info.xml

The output should look like so, there is one line for each snapshot, so you can easily match up number and date of each snapshot.

/mnt/@snapshots/number/info.xml:  <date>2021-07-26 22:00:00</date>
Note: The time zone for the date and time recorded in info.xml is UTC, so the time difference from local time must be taken into account.

Remember the number.

Now, move @ to another location (e.g. /@.broken) to save a copy of the current system. Alternatively, simply delete @ using btrfs subvolume delete /mnt/@.

Create a read-write snapshot of the read-only snapshot snapper took:

# btrfs subvolume snapshot /mnt/@snapshots/number/snapshot /mnt/@

Where number is the number of the snapper snapshot you wish to restore.

If subvolid was used for the / mount entry option in fstab, instead of /path/to/subvolume, change subvolid in the /mnt/@/etc/fstab file to the new subvolid that can be found with btrfs subvolume list /mnt | grep @$. Also change the boot loader configuration such as refind_linux.conf, if it contains the subvolid.

Finally, unmount the top-level subvolume (ID=5), then mount @ to /mnt and your ESP or boot partition to the appropriate mount point. Change root to your restored snapshot in order to regenerate your initramfs image.

Your / has now been restored to the previous snapshot. Now just simply reboot.

Tip: You can also use the automatic rollback tool made for this layout: snapper-rollbackAUR. Edit the config file at /etc/snapper-rollback.conf to match your system.

Restoring other subvolumes to their previous snapshot

See #Restore snapshot.

Deleting files from snapshots

If you want to delete a specific file or folder from past snapshots without deleting the snapshots themselves, snappersAUR is a script that adds this functionality to Snapper. This script can also be used to manipulate past snapshots in a number of other ways that Snapper does not currently support.

If you want to remove a file without using an extra script, you just need to make your snapshot subvolume read-write, which you can do with:

# btrfs property set /path/to/.snapshots/<snapshot_num>/snapshot ro false

Verify that ro=false:

# btrfs property get /path/to/.snapshots/<snapshot_num>/snapshot ro
ro=false

You can now modify files in /path/to/.snapshots/<snapshot_num>/snapshot like normal. You can use a shell loop to work on your snapshots in bulk.
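As a sketch of such a bulk loop (using a mock .snapshots layout created under mktemp, and only echoing the command so it is safe to run anywhere), replace the echo with the real btrfs property set invocation and point the glob at your actual .snapshots directory:

```shell
# Demonstration with a mock .snapshots layout; the real directory would be
# /path/to/.snapshots, with one numbered subdirectory per snapshot.
base=$(mktemp -d)
mkdir -p "$base"/1/snapshot "$base"/2/snapshot "$base"/3/snapshot

for snap in "$base"/*/snapshot; do
    # On a real system, run instead:
    #   btrfs property set "$snap" ro false
    echo "would set ro=false on $snap"
done
```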

Preventing slowdowns

Keeping many snapshots for a large timeframe on a busy filesystem like /, where many system updates happen over time, can cause serious slowdowns. You can prevent it by:

  • Creating subvolumes for things that are not worth being snapshotted, like /var/cache/pacman/pkg, /var/abs, /var/tmp, and /srv.
  • Editing the default settings for hourly/daily/monthly/yearly snapshots when using #Automatic timeline snapshots.

updatedb

By default, updatedb (see mlocate) will also index the .snapshots directory created by snapper, which can cause serious slowdown and excessive memory usage if you have many snapshots. You can prevent updatedb from indexing it by editing:

/etc/updatedb.conf
PRUNENAMES = ".snapshots"
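Note that PRUNENAMES is a single space-separated list, so if your updatedb.conf already prunes other names, append .snapshots rather than replacing the line. A sketch of that edit on a mock config file (the names in the mock list are placeholders):

```shell
# Demonstration on a mock config; the real file is /etc/updatedb.conf.
conf=$(mktemp)
echo 'PRUNENAMES = ".git .hg"' > "$conf"

# Prepend .snapshots to the existing list instead of overwriting it:
sed -i -E 's/^(PRUNENAMES = ")/\1.snapshots /' "$conf"
cat "$conf"
```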

Disable quota groups

There are reports of significant slowdowns caused by quota groups; if, for instance, snapper ls takes many minutes to return a result, this could be the cause. See [3].

To determine whether or not quota groups are enabled use the following command:

# btrfs qgroup show /

Quota groups can then be disabled with:

# btrfs quota disable /

Count the number of snapshots

If disabling quota groups did not help with the slowdown, it may be helpful to count the number of snapshots. This can be done with:

# btrfs subvolume list -s / | wc -l

Create subvolumes for user data and logs

It is recommended to store directories containing user data (e.g. emails) or logs on their own subvolumes, rather than on the root subvolume /. That way, if a snapshot of / is restored, user data and logs will not also be reverted to the previous state, and a separate timeline of snapshots can be maintained for user data. In particular, it is not recommended to create snapshots of the logs in /var/log: keeping logs intact after a rollback makes it easier to troubleshoot.

Directories can also be skipped during a restore using #Filter configuration.

The factual accuracy of this article or section is disputed.

Reason: How is the list from SLES documentation relevant for Arch Linux? (Discuss in Talk:Snapper)

See also the Directories That Are Excluded from Snapshots[dead link 2024-03-03] in SLES documentation.

Cleanup based on disk usage

This article or section needs expansion.

Reason: See [4] for ideas. (Discuss in Talk:Snapper)

Troubleshooting

Snapper logs

Snapper writes all activity to /var/log/snapper.log; check this file first if you think something is going wrong.

If you have issues with hourly/daily/weekly snapshots, the most common cause so far has been that the cronie service (or whatever cron daemon you are using) was not running.

IO error

If you get an 'IO Error' when trying to create a snapshot, make sure that the .snapshots directory associated with the subvolume you are trying to snapshot is itself a subvolume.

Another possible cause is that the .snapshots directory is not owned by root (in that case you will find Btrfs.cc(openInfosDir):219 - .snapshots must have owner root in /var/log/snapper.log).

Orphaned snapshots causing wasted disk space

It is possible for snapshots to get 'lost', where they still exist on disk but are not tracked by snapper. This can result in a large amount of wasted, unaccounted-for disk space. To check for this, compare the output of

# snapper -c <config> list

to

# btrfs subvolume list -o <parent subvolume>/.snapshots 

Any subvolume in the second list which is not present in the first is an orphan and can be deleted manually.
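The comparison of the two lists can be automated with comm(1), which prints lines unique to each of two sorted inputs. The sketch below uses mock lists of snapshot numbers; on a real system the inputs would be derived from the snapper and btrfs commands above:

```shell
# Demonstration of the comparison with comm(1), using mock lists of
# snapshot numbers. comm requires both inputs to be sorted.
tracked=$(mktemp); ondisk=$(mktemp)
printf '1\n2\n4\n'    > "$tracked"   # numbers snapper knows about
printf '1\n2\n3\n4\n' > "$ondisk"    # snapshot subvolumes present on disk

# comm -13 suppresses lines unique to the first file and lines common to
# both, leaving only lines unique to the second (on-disk) list: the orphans.
comm -13 "$tracked" "$ondisk"
```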

See also