#{{{ Preparation
================
# mkdir /mnt/btrfs
$ rm -rf /home/flo/btrfsdemo
$ mkdir /home/flo/btrfsdemo
$ dd if=/dev/zero of=/home/flo/btrfsdemo/btrfs-vol0.img bs=1G count=1
$ dd if=/dev/zero of=/home/flo/btrfsdemo/btrfs-vol1.img bs=1G count=1
$ dd if=/dev/zero of=/home/flo/btrfsdemo/btrfs-vol2.img bs=1G count=1
$ dd if=/dev/zero of=/home/flo/btrfsdemo/btrfs-vol3.img bs=1G count=1
# losetup /dev/loop0 /home/flo/btrfsdemo/btrfs-vol0.img
# losetup /dev/loop1 /home/flo/btrfsdemo/btrfs-vol1.img
# losetup /dev/loop2 /home/flo/btrfsdemo/btrfs-vol2.img
# losetup /dev/loop3 /home/flo/btrfsdemo/btrfs-vol3.img
#}}}

#{{{ Misc
-> shorten the prompt: export PS1="\w: "
-> shell for the projector: konsole
-> zoom in: [CTRL] [+]
-> zoom out: [CTRL] [-]
#}}}

#{{{ Online Resize
==================
-> the btrfs command
# btrfs

# mkfs.btrfs /dev/loop0
# mount /dev/loop0 /mnt/btrfs

-> show the size
# btrfs filesystem show /dev/loop0

-> shrink (watch dmesg, may take a little while)
-> remount if necessary
# btrfs filesystem resize -500m /mnt/btrfs

-> show the size
# btrfs filesystem show /dev/loop0

-> grow back to the maximum size
# btrfs filesystem resize max /mnt/btrfs

-> show the size
# btrfs filesystem show /dev/loop0
#}}}

#{{{ Adding/removing disks
==========================
# mkfs.btrfs /dev/loop0
# mount /dev/loop0 /mnt/btrfs

-> create some files
# dd if=/dev/urandom of=/mnt/btrfs/1.file bs=1M count=10
# dd if=/dev/urandom of=/mnt/btrfs/2.file bs=1M count=10
# dd if=/dev/urandom of=/mnt/btrfs/3.file bs=1M count=10

-> show the size
# btrfs filesystem show /dev/loop0

-> add a device
-> m: mirror, raid1
-> d: stripe, raid0
# btrfs device add /dev/loop1 /mnt/btrfs

-> show the size
-> the new disk is unused
-> the size has changed
# btrfs filesystem show /dev/loop0

-> write a new file that is larger than the already allocated chunk
# dd if=/dev/urandom of=/mnt/btrfs/5.file bs=1M count=200

-> show the size
-> the new disk is now used
# btrfs filesystem show /dev/loop0

-> remove a device again
-> yes, loop0
-> the files get copied back over
# btrfs device delete /dev/loop0 /mnt/btrfs

-> show the filesystem (*** Some devices missing)
-> bogus; incorrect reporting in the userspace tools
-> workaround: remount
# btrfs filesystem show /dev/loop0
# umount /mnt/btrfs
# mount /dev/loop1 /mnt/btrfs
#}}}

#{{{ Online Balance
===================
-> reuse the environment from above!
-> add loop2
# btrfs device add /dev/loop2 /mnt/btrfs

-> show the size
-> the new disk is unused
-> the size has changed
# btrfs filesystem show /dev/loop1

-> balance
# btrfs balance start /mnt/btrfs
# btrfs balance start -dconvert=raid0 /mnt/btrfs
# btrfs balance start -mconvert=raid0 /mnt/btrfs -f

-> the data has been redistributed
-> remount ??
# btrfs filesystem show /dev/loop0
#}}}

#{{{ RAID
=========
-> create a RAID1
# mkfs.btrfs -m raid1 -d raid1 /dev/loop0 /dev/loop1
# mount /dev/loop0 /mnt/btrfs

-> create some files
# dd if=/dev/urandom of=/mnt/btrfs/1.file bs=1M count=10
# dd if=/dev/urandom of=/mnt/btrfs/2.file bs=1M count=10
# dd if=/dev/urandom of=/mnt/btrfs/3.file bs=1M count=10

-> umount and simulate a disk failure
# umount /mnt/btrfs
# losetup -d /dev/loop1

-> mounting fails
# mount /dev/loop0 /mnt/btrfs
-> what's going on?
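-> (optional) the kernel log should name the missing device as the reason for the failed mount
# dmesg | tail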
-> but a degraded mount is possible
# btrfs filesystem show /dev/loop0
# mount -o degraded /dev/loop0 /mnt/btrfs

-> add a new device
-> balance
-> remove the broken device
# btrfs device add /dev/loop2 /mnt/btrfs
# btrfs balance start /mnt/btrfs
# btrfs device delete missing /mnt/btrfs

-> mount again without degraded
# umount /mnt/btrfs
# mount /dev/loop0 /mnt/btrfs
# btrfs filesystem show /dev/loop0

-> to be safe, replace the second disk as well
-> replace a device while the filesystem stays mounted:
# btrfs replace start /dev/loop0 /dev/loop3 /mnt/btrfs
# btrfs filesystem show /mnt/btrfs
#}}}

#{{{ Subvolumes and snapshots
=============================
# mkfs.btrfs /dev/loop0
# mount /dev/loop0 /mnt/btrfs

-> create a subvolume
# cd /mnt/btrfs
# btrfs subvolume create my-subvol

-> create files in the subvolume
# touch my-subvol/test1.file my-subvol/test2.file

-> list the subvolumes including their IDs
# btrfs subvolume list /mnt/btrfs

-> mount the new subvolume
# cd
# umount /mnt/btrfs
# mount -o subvolid= /dev/loop0 /mnt/btrfs
# mount -o subvol=my-subvol /dev/loop0 /mnt/btrfs
# ls /mnt/btrfs

-> mount the btrfs root again
# umount /mnt/btrfs
# mount /dev/loop0 /mnt/btrfs

-> Use case: backup
-> create a read-only snapshot
-> take the backup
-> delete the snapshot
# btrfs subvolume snapshot -r /mnt/btrfs/my-subvol/ /mnt/btrfs/snap-my-subvol/
# # do your backup here: cp, rsync, …
# btrfs subvolume delete /mnt/btrfs/snap-my-subvol

-> Use case: experimenting in the filesystem (updates, …)
-> create a snapshot
-> do your experimenting
-> delete the original volume
-> rename
# btrfs subvolume snapshot /mnt/btrfs/my-subvol /mnt/btrfs/experiment
# btrfs subvolume delete /mnt/btrfs/my-subvol
# mv /mnt/btrfs/experiment /mnt/btrfs/my-subvol

-> only one snapshot left
# btrfs subvolume list /mnt/btrfs
#}}}

#{{{ Bonus
# old tools

The filesystem automatically allocates space out of the pool as it needs it. In this case, it
has allocated 394.02 GiB of space (197.01+197.01), out of a total of 893.00 GiB (427.24+465.76)
across two devices. An allocation simply sets aside a region of a disk for some purpose
(e.g. "data will be stored in this region").

The output from btrfs fi show, however, doesn't tell the whole story: it only shows how much has
been allocated out of the total available space. To see how that allocation is used, and how much
of it contains useful data, you also need btrfs fi df.

Note that the "total" values there add up to the "used" values in btrfs fi show. The "used"
values tell you how much useful information is stored; the rest is free space available for data
(or metadata). So, in this case, there is 77.63 GiB of space (376-298.37) allocated for data but
unused, and a further 498.98 GiB (427.24+465.76-197.01-197.01) unallocated and available for
further use. Some of the unallocated space may be used for metadata as the FS grows in size.

# new tool:

This shows the space usage for a filesystem with two 40 GiB devices. The "Overall" section shows
aggregate information for the whole filesystem: there are 80 GiB of raw storage in total, with
32.02 GiB allocated, and 29.33 GiB of that allocation used. The "data ratio" and "metadata ratio"
values are the amount of raw disk space used for each byte of data written. For a RAID-1 or
RAID-10 filesystem this should be 2.0; for RAID-5 or RAID-6 it depends on the number of devices n
(n/(n-1) for RAID-5, n/(n-2) for RAID-6).
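(The figures discussed in this block come from the usage-reporting commands; for this demo the
invocations would presumably look like the following, with the mount point taken from the
sections above.)

# btrfs filesystem df /mnt/btrfs
# btrfs filesystem usage /mnt/btrfs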
In the example here, the value of 2.0 for data indicates that the 29.34 GiB of used space is
holding 14.67 GiB of actual data (29.34 / 2.0). The "free" value is an estimate of the amount of
data that can still be written to this FS, based on the current usage profile. The "min" value is
the minimum amount of data that you can expect to be able to get onto the filesystem.

Below the "Overall" section, information is shown for each type of allocation, summarised and
then broken down by device. In this section, the values are shown after the data or metadata
ratio is accounted for, so we can see the 14.65 GiB of actual data used, and the 15 GiB of
allocated space it lives in. (The small discrepancy with the value we worked out previously can
be accounted for by rounding errors.)
#}}}

#{{{ Cleanup
============
# umount /mnt/btrfs
# losetup -D
# rm -rf /home/flo/btrfsdemo
# rm -rf /mnt/btrfs
#}}}

# vim: set fdm=marker fdl=1 ts=2 sw=2 et: