[ale] Buwahahah!! Success!

DJPfulio at jdpfu.com
Fri Sep 1 08:20:10 EDT 2023


On 8/30/23 17:36, Charles Shapiro via Ale wrote:
> 
> 
> * Debian 12 doesn't appear to let you mount an lvolume from fstab  by
> UUID. I could do this on my VM, which was running Ubuntu. On Debian
> you mount from /dev/mapper, which seems to be the Correct Way (at
> least that's the way shipped lvolumes are mounted).  There's some
> magic going on here that I still don't fully understand. Some of the
> hyphens in the /dev/mapper lvolume names are doubled, again for
> reasons which are inscrutable to me.

I prefer to mount LVs using the /dev/{vgname}/{lvname} symlinks.  They are short and descriptive - 1000x better than UUIDs.  /dev/mapper/ ... is ok, but as you noted, a bit confusing.  The doubled hyphens are just device-mapper escaping: a single hyphen separates the VG name from the LV name in /dev/mapper, so any hyphen inside either name gets doubled.
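For example, an fstab entry using that symlink form might look like this (the "vgdata" / "lv_home" names below are made-up placeholders, not anything shipped):

  # mount an LV by its /dev/{vgname}/{lvname} symlink
  /dev/vgdata/lv_home   /home   ext4   defaults,noatime   0   2

  # the same LV appears under /dev/mapper with the hyphen-escaped name:
  #   /dev/mapper/vgdata-lv_home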

For mounting other storage, I set up autofs: LABEL= for USB storage, and autofs for all NFS storage as well, since I go out of my way to ensure NFS storage is mounted at the same locations on all my systems, including the NFS server.
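A rough sketch of that autofs setup - the server name, labels, and mount points are placeholders, and I'm assuming the LABEL= device spec gets passed straight through to mount:

  # /etc/auto.master
  /mnt/usb   /etc/auto.usb   --timeout=60
  /mnt/nfs   /etc/auto.nfs   --timeout=300

  # /etc/auto.usb - local device looked up by label (leading ':' means local)
  backup   -fstype=ext4,noatime   :LABEL=usb_backup

  # /etc/auto.nfs - same mount points on every box, including the server
  media    -fstype=nfs,ro,soft    fileserver:/export/media
  home     -fstype=nfs,rw         fileserver:/export/home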

Of course, we each have our own likes and dislikes around this.

VG names must be unique within a system, so when people standardize on "vg00" across all their systems, I wonder how they plan to quickly access that storage on another box ... they'll have to rename the VG first. Whatever.  I do like short names, but meaningful ones.
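If you do hit a name collision, the rename isn't hard, since the conflicting VG can be addressed by its UUID (the UUID and new name below are invented):

  # list VG names and UUIDs to find the duplicate "vg00"
  vgs -o vg_name,vg_uuid

  # rename the foreign VG by UUID, then activate it
  vgrename Zvlifi-xczM-ABCD-1234-abcd-5678-qrstuv vg00_oldbox
  vgchange -ay vg00_oldbox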

As for LVM, I start small and expand when needed.  I've never correctly guessed in advance how large any file system needs to be.  LVM makes expanding a 5-second job.  It also allows for slightly better security, since some hardening comes down to mount options.  If the entire OS is mounted to /, then only 1 set of mount options is possible.  But /tmp and /var definitely need different mount options than /.
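The expand really is that quick, assuming the VG still has free extents. A sketch with made-up names and an ext4 file system:

  # grow the LV by 5G and resize the file system in one step
  lvextend -L +5G --resizefs /dev/vgdata/lv_var

  # the kind of per-mount hardening you can't have with everything on / :
  # /dev/vgdata/lv_tmp   /tmp   ext4   defaults,nosuid,nodev,noexec   0   2
  # /dev/vgdata/lv_var   /var   ext4   defaults,nosuid,nodev          0   2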

If one of my systems stops working, I'll check the usual suspects - PSU, GPU (if it has one) - and listen for beep codes or LED flashes on the MB.  If nothing turns up, I have a limit on how much I'm willing to spend to replace the MB + CPU, usually around $300, assuming the new parts will be 4x faster or more.  That's been a pretty easy target to hit.

I'm with Jim on getting HDDs with 5-yr warranties after years of going cheap.  I eventually realized that cheap disks seem to fail around the end of their warranty period, whereas those with 5-yr warranties typically last more than 10 yrs.  I've never, ever, had a WD Black fail, unless I dropped it.  I also have some used Enterprise "Gold" HDDs; none of them has failed either.  1 of my WD Black drives is starting to show failed blocks, but it has been running 13 yrs and still has 80% of its spare blocks available. I already moved the important data off it (last year), so now it is just a scratch disk for working files.
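That spare-block number comes from SMART. A quick way to check it (the device name is just an example):

  # dump SMART attributes; watch Reallocated_Sector_Ct and the
  # normalized VALUE vs THRESH columns for remaining spare area
  smartctl -A /dev/sda

  # or run a long self-test and read the result later
  smartctl -t long /dev/sda
  smartctl -l selftest /dev/sda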

I do daily, automatic, versioned, "pulled" backups of all the important storage here.  Media files only get a mirror, and some have parity files added (par2). I can't really keep 90 - 366 days of backups for many TBs of those files, but for normal OS files or typical documents, it isn't a big ask.
I treat VMs and containers just like physical machines when it comes to backups.  Most backups take between 1 and 3 minutes nightly.  A system restore takes between 20 and 45 minutes.  That seems like a reasonable trade-off: a somewhat slower restore in exchange for not backing up the full OS.
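The parity files are easy to make with par2 (the redundancy level and file names here are just examples):

  # create parity data with 10% redundancy for the media files
  par2 create -r10 recovery.par2 *.mkv

  # later, check the files and repair if a few blocks have gone bad
  par2 verify recovery.par2
  par2 repair recovery.par2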

I'm anti-server. I don't need the noise or power consumption.  2 Ryzen 5 5600G systems, each capable of running everything without the other, handle my redundancy needs. Plus they are using just ... let me check ... right around 100W each - that includes drive cages and external disk arrays.  One of the systems is busy transcoding video. When that finishes, it will drop back to about 80W of use.  I prevent HDDs from spinning down, BTW, so the extra power is all CPU.
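Preventing the spin-down is a one-liner with hdparm, assuming the drive honors the standby timer setting (/dev/sdb is a placeholder):

  # disable the standby (spin-down) timer on this drive
  hdparm -S 0 /dev/sdb

  # check the drive's current power state
  hdparm -C /dev/sdb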

If you don't have automatic backups, you've already failed. Whether those are daily or weekly is the user's choice.
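The "automatic" part can be as simple as a cron entry on the backup box pulling from each client - the script name and schedule here are invented:

  # /etc/cron.d/pull-backups  (on the backup server)
  # pull versioned backups from every client at 02:15 nightly
  15 2 * * *  root  /usr/local/bin/pull-backups.sh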

