This is a work-in-progress walk-through for restoring a system archived with fsarchiver
UPDATED: 1 July 2021
FOR LATEST STAGES OF TESTING, SEE BELOW ***
INITIAL TESTING ON A FRESH INSTALL OF DEBIAN 9
1 - Install a basic Debian 9 system - I will look to do this from a Live Linux CD
2 - Install fsarchiver;
apt install fsarchiver
3 - Create a new partition with cfdisk (let's say sda3)
4 - Run partprobe to detect the changes in the partition table (Step 5 or 6 will complain if there are any issues; if so, try rebooting).
5 - Format the partition (not actually required, since restfs recreates the filesystem): mkfs -t ext3 /dev/sda3
6 - Restore the filesystem for example;
fsarchiver restfs somefile.fsa id=0,dest=/dev/sda3
(where somefile.fsa is the name of the archive and /dev/sda3 is the new partition).
7 - Make a new directory on which to mount the restored filesystem: mkdir /restoredroot
8 - Mount the restored filesystem: mount /dev/sda3 /restoredroot
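As a quick sanity check (paths here are purely illustrative), list the restored tree and confirm it looks like a root filesystem;
ls /restoredroot
cat /restoredroot/etc/debian_version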
This part of the test is now complete; we've proved, in part, that the archives are working.
FURTHER TESTING USING RESCUE MEDIA
The following was carried out on a Parallels VM with a 240GB hard disk.
The rescue disk used was https://www.system-rescue.org
1 - Boot using the rescue iso as the boot device.
2 - Use gparted to create a partition table (use gpt). Do this ONLY if we know the partition sizes; otherwise, proceed to Step 4.
3 - Reboot, setting the boot order to CD 1st and the CD should be the rescue iso.
4 - Mount the USB drive containing the archive files;
First, create a mount point with mkdir /mnt/1 (I put a lot of thought into the name...).
Then mount /dev/sdb2 /mnt/1, where sdb2 is the USB disk; use dmesg for hints.
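lsblk is also handy here for identifying the right device, as it lists each partition with its filesystem and label;
lsblk -f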
5 - Use fsarchiver archinfo archiveFile to find out some information that we will use next, mainly the id, the filesystem and the size; an example can be seen in Fig.1 below.
If the archive has a df.list file, now would be a good time to have a look at it.
Do this for all entries in Fig.1 and all archive files, and make a note for each archive similar to the example below (a quick loop to run archinfo over every file is sketched after the list);
- Archive 1 - root - new mount point /dev/sda1
- Archive 5 - var - new mount point /dev/sda2
- Archive 6 - tmp - new mount point /dev/sda3
- Archive 7 - usr - new mount point /dev/sda4
- Archive 8 - home - new mount point /dev/sda5
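As mentioned above, a short loop runs archinfo over every archive in one go (this assumes the archives live under /mnt/1);
for f in /mnt/1/*.fsa; do
  fsarchiver archinfo "$f"
done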
6 - Make new partitions (use gparted or cfdisk) to suit the information from the step above (size and type). DO NOT MOUNT them. We can guess the sizes based on Fig.1.
Don't forget to do this for all entries in Fig.1 and all archive files; it's probably a good idea to create a partition for the swap space at this stage too.
It's also worth mentioning that we may need to run partprobe after this to detect the changes in the partition table.
7 - Restore as per fsarchiver restfs somefile.fsa id=0,dest=/dev/sda1
Where id is taken from Fig.1 and dest is the new partition we set up in Step 6. (Note: no space in id=0,dest=/dev/sda1.)
8 - Repeat Step 7 for the remaining archives.
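Using the example mapping above, and assuming each archive holds a single filesystem (hence id=0), the full set of restores would look something like this (the file names are illustrative);
fsarchiver restfs root.fsa id=0,dest=/dev/sda1
fsarchiver restfs var.fsa id=0,dest=/dev/sda2
fsarchiver restfs tmp.fsa id=0,dest=/dev/sda3
fsarchiver restfs usr.fsa id=0,dest=/dev/sda4
fsarchiver restfs home.fsa id=0,dest=/dev/sda5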
9 - At this stage, the VM would boot ONLY with the recovery iso (using option findroot); this was resolved by repairing grub as follows;
grub-mkdevicemap
grub-install /dev/sda
update-grub
Note: running grub-install gave errors (see here). I believe this could be resolved either by creating an msdos-type partition table or by adding a small (16MB) partition of type BIOS boot, as per the post mentioned.
As a result, I chose to use grub-install /dev/sda --force
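For reference, a small BIOS boot partition on a gpt disk can be created with parted along these lines (the partition number and sizes are illustrative);
parted /dev/sda mkpart biosboot 1MiB 17MiB
parted /dev/sda set 1 bios_grub on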
On completing the above, shutting down the VM and reordering the boot order so that the drive is 1st, the system booted successfully.
The above has been successfully tested with Dalzell Process Control Server, FEC-A - Dec 2020.
*** FURTHER TESTING WITH VIRTUALBOX
The following was carried out on VirtualBox with a 150GB hard disk.
The rescue disk used was https://www.system-rescue.org
Procedure as above; however, this time the backup files were on a shared network drive, summarized as below;
1 - Boot using the rescue iso as the boot device.
2 - Use gparted to create a partition table (use msdos) and set up the partitions based on the details above. Take some time to do this and ensure that you get it right. In this case, I set up a partition the full size of the disk and then set up the partitions as required (6 of them, including the swap). On reflection, I would say that I may have created 1 primary partition and 1 extended partition (the remaining size of the drive), from which I created 5 logical partitions, including the swap partition (see the parted sketch below).
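As a sketch, that layout (1 primary plus 1 extended holding the logicals) could be scripted with parted; the sizes below are purely illustrative, and the remaining logical partitions follow the same pattern;
parted -s /dev/sda mklabel msdos
parted -s /dev/sda mkpart primary ext3 1MiB 20GiB
parted -s /dev/sda mkpart extended 20GiB 100%
parted -s /dev/sda mkpart logical ext3 21GiB 60GiB
parted -s /dev/sda mkpart logical linux-swap 60GiB 64GiB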
3 - Obtain my hn_restore.sh file from TBA and carefully amend the configuration options;
MOUNT_IP="192.168.254.198"
# This is the local mount point and will be created by the script.
MOUNT_DIR="/var/STORAGE/STUFF/"
SERVER_NAME=$1
BACKUP_FILE_PREFIX="FEC-A.c0d0p"
DEVICE="/dev/sda"
The number of pairs of entries above will depend on the number of fsa files.
Take a bit of time to reacquaint and/or understand the convention above before running the script for real.
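For illustration only, the sort of thing hn_restore.sh does with those variables might look like the sketch below; the nfs share path and the partition numbering are my assumptions here, not the real script;
# hypothetical sketch - not the real hn_restore.sh
mkdir -p "$MOUNT_DIR"
mount -t nfs "${MOUNT_IP}:/backups/${SERVER_NAME}" "$MOUNT_DIR"
for n in 1 2 3; do
  fsarchiver restfs "${MOUNT_DIR}${BACKUP_FILE_PREFIX}${n}.fsa" id=0,dest=${DEVICE}${n}
done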
4 - Make the file executable with chmod +x hn_restore.sh
5 - run the file with the name of the server as an argument, for example;
./hn_restore.sh FEC
6 - Once complete, review for any errors, then reboot the VM to the recovery iso again and use option findroot
7 - Once the OS has started up, login and carry out the following commands;
grub-mkdevicemap
grub-install /dev/sda
update-grub
There were no errors carrying out the above; if there are, use grub-install /dev/sda --force as before.
Reboot again, remembering to unmount or remove the recovery iso, and we should be up and running.
It's worth noting that if we set up a swap partition during Stage 2, we may need to edit /etc/fstab and follow the notes here for how to enable swap.
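Broadly, enabling swap goes like this (assuming /dev/sda6 is the swap partition): initialise it with mkswap, add a line to /etc/fstab, then activate it;
mkswap /dev/sda6
echo '/dev/sda6 none swap sw 0 0' >> /etc/fstab
swapon -a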
UPDATE: 22/02/22
During a recovery from a CF installation to a SATA drive on a different system, the system would not boot (restored using Acronis).
Booted the system using the recovery iso with the findroot option as detailed above, but the procedures detailed in Step 7 above would not work (command not found etc.).
Noticed that while the system mostly functioned, several parts of it did not work. This was because a system booted with findroot runs the kernel of the rescue CD and not that of the target system; a point previously not noticed, as Step 7 above was always used and the restored systems always booted normally afterwards.
Note: To check the running kernel, do uname -r
To install grub manually, do the following;
Check which disk we are using by running fdisk -l then do
grub-install --recheck --no-floppy --root-directory=/ /dev/hda
Note: /dev/hda, given that this particular system was pre-SATA (IDE), I believe.
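An alternative that avoids the rescue-kernel issue altogether is to chroot into the restored system from the rescue environment and install grub from there (device names assumed);
mkdir -p /mnt/target
mount /dev/sda1 /mnt/target
mount --bind /dev /mnt/target/dev
mount --bind /proc /mnt/target/proc
mount --bind /sys /mnt/target/sys
chroot /mnt/target grub-install /dev/sda
chroot /mnt/target update-grub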
UPDATE: 22/02/22 part 2
The above did not work with the system that had been moved from the CF card install to the SATA disk system.
Resolving was as follows;
Edit the /etc/fstab entry to change from /dev/hda1 to /dev/sda1
Edit /boot/grub/menu.lst to change the /dev/hda1 to /dev/sda1
There was no need to reinstall grub.
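Both edits could have been done in one go with sed, assuming every hda reference should become sda;
sed -i 's|/dev/hda|/dev/sda|g' /etc/fstab /boot/grub/menu.lst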
UPDATE: April 2024
Note: when creating partitions, I have mentioned both msdos and gpt. msdos only allows the creation of 4 primary partitions and was always my preferred choice, but in situations where more partitions are required we should choose gpt, which allows for more. If we wish to stick with the msdos style, we need to create an extended partition and from there create a number of logical disks. This works but changes the device mappings somewhat; for example, we may end up with /dev/sda5, /dev/sda6, /dev/sda7 etc. as opposed to /dev/sda1, /dev/sda2 etc...
My preference is to create a single primary partition and an extended partition, then create a number of logical drives on the extended partition. This allows the primary partition to be /dev/sda1, for example, which can be the boot partition, while the logical drives then follow as /dev/sda5, /dev/sda6, /dev/sda7 etc., as mentioned above.
TODO:
grub-mkconfig
update-grub
update-initramfs -k -u /boot/initrd...ver
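For reference, the usual update-initramfs invocation regenerates the initramfs for a given kernel version, for example;
update-initramfs -u -k $(uname -r)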
See also here, which covers some of the problems I had with a restored system, MCC-A, which wouldn't boot.