Saturday, February 19, 2022

How to upgrade hard drives on a Linux file server

Another in my series of blog posts about "The way I do things that fit my very specific and peculiar set of needs".  I recently upgraded the HDDs in my Linux file server, and found a pattern that worked quite well with minimal disruption.

How I have things set up

I have a headless Linux server with an SSD as the primary drive, and then a few spinning HDDs for bulk media storage.  All the drives are just formatted as ext4.  I back up important files to S3.  Every time I upgrade one of these drives I copy everything to the new drive, and then remove the old drive from the machine.  That old drive serves as a backup for the bulk media stuff, combined with the plan to just redownload anything I got recently.

I've never bothered with RAID or more complex file systems because they only made sense to me if I had 4 or more drives, which I've never had.  This makes upgrading the drives a simple process.

My /etc/fstab file has a bunch of lines like this:
UUID=9fa3b7dc-3cd7-42c1-93a8-46dcf38da09d  /mnt/media         ext4  defaults  0  2

And then I share those drives over NFS by putting lines like this in /etc/exports:

/mnt/media 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)


Format the new drive

Put the drive in the machine and power it on.  If the drive is brand new it won't have a partition table on it, or the UUID needed for the fstab file.  Here are a few commands that are useful for figuring out which drive is which:

sudo blkid
lsblk -f
sudo fdisk -l

Once you know the path to the drive (eg, /dev/sdb), you can use fdisk to partition it.  Be sure you have picked the right drive, because you'll wipe all the data on whatever disk you run fdisk against.  Use m to see the list of fdisk commands; both F and p are useful to confirm you have the right (empty) disk.  Once you're sure, run g to create a new partition table, and then n to create the new partition.  The defaults should be fine.  When you're sure you have things right, write your changes to the disk with w.
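As a rough sketch, assuming the new drive showed up as /dev/sdb (substitute your own device), the fdisk session looks something like this:

sudo fdisk /dev/sdb
# then at the fdisk prompt:
#   p   print the (empty) partition table to confirm it's the right disk
#   g   create a new GPT partition table
#   n   create a new partition (the defaults use the whole disk)
#   w   write the changes to disk and exit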

You now need to create the ext4 filesystem on the new partition.  Do that with sudo mkfs.ext4 /dev/sdb1, making sure that you use the correct drive letter.

Mount the new drive

Now run the drive-identification commands from above again, and you should see a UUID for the new drive.  Copy that UUID down and edit your fstab file with sudo nano /etc/fstab, adding a line for your new drive with a temporary mount point like /mnt/media_new (see the example entry below).  Make sure you create that mount point (sudo mkdir /mnt/media_new), then mount the drive with sudo mount -av.  Now you can poke around and make sure things look right; you'll probably also want to update the owner with sudo chown.
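The temporary entry in /etc/fstab ends up looking something like this (the UUID here is a placeholder; use the one your new drive reports):

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/media_new     ext4  defaults  0  2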

Copy the old data to the new drive

There are many ways you could copy the data, but I like rsync.  One nice thing is that if it gets interrupted it'll pick up where it left off.  Here's the command I came up with after exploring the options for a while:

rsync -axHAWXS --info=progress2 /mnt/media/ /mnt/media_new/

The options largely came from here, so you can go there if you want to read what they do.  It took about 12 hours for me to copy about 5 TB.  While rsync does print its progress, I found it easier to just ssh in on another tab, run df -Th, and compare the disk usage of the old and new drives.  After the first pass, running it again to make sure nothing new had shown up only took a couple of minutes.
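For quick reference, here's roughly what those rsync flags do (see the rsync man page for the full details):

-a                 archive mode: recurse and preserve permissions, times, ownership, and symlinks
-x                 don't cross filesystem boundaries
-H                 preserve hard links
-A                 preserve ACLs
-W                 copy whole files rather than using the delta-transfer algorithm (faster for local copies)
-X                 preserve extended attributes
-S                 handle sparse files efficiently
--info=progress2   show one overall progress line instead of per-file progress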

Make the swap

This isn't a foolproof process: you should close anything that is using the disks, and run the rsync command one more time.  Then you can unmount the old drive with sudo umount /dev/sda1 (substituting the old drive's actual device).  Now edit your fstab one more time: delete the old drive's line, and update the new drive's mount point to the one the old drive was using (see the sketch below).  You can probably get away with just running sudo mount -av again, but I like to shut down (sudo shutdown -h now) and physically remove the old drive.
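To make that concrete, here's a sketch of the fstab change (the UUIDs are placeholders): the old drive's line goes away, and the new drive's line takes over the old mount point.

Before:
UUID=<old-drive-uuid>  /mnt/media      ext4  defaults  0  2
UUID=<new-drive-uuid>  /mnt/media_new  ext4  defaults  0  2

After:
UUID=<new-drive-uuid>  /mnt/media      ext4  defaults  0  2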

That's it

That's it.  If all went according to plan, everything that used the old drive should now just use the new drive.  I did have to restart the remote machines that used the NFS shares before they would connect again.
