I have a 2-bay NAS, and I was planning on using 2x 18 TB HDDs in RAID 1. I was planning on purchasing 3 of these drives so that when one fails I have a replacement on hand. (I am aware that you should purchase them at different times to reduce the risk of them all failing at the same time.)

Then I set up restic.

It makes backups so easy that I am wondering if I should even bother with raid.

Currently I have ~1 TB of backups, and with restic’s snapshots, it won’t grow to be that big anyway.

Either way, I will be storing the backups in AWS S3. So is it still worth it to use RAID? (I will also be storing backups at my parents’ place.)
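
For context, the restic-to-S3 workflow is roughly the following; the bucket name and paths below are placeholders rather than my actual setup:

    # S3 credentials and the repository encryption password
    export AWS_ACCESS_KEY_ID='<access-key-id>'
    export AWS_SECRET_ACCESS_KEY='<secret-key>'
    export RESTIC_PASSWORD='<repo-password>'

    # One-time: create the encrypted repository in an existing bucket
    restic -r s3:s3.amazonaws.com/<bucket-name> init

    # Back up; repeated runs only upload changed data as new snapshots
    restic -r s3:s3.amazonaws.com/<bucket-name> backup /path/to/data

    # List snapshots and verify repository integrity now and then
    restic -r s3:s3.amazonaws.com/<bucket-name> snapshots
    restic -r s3:s3.amazonaws.com/<bucket-name> check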

  • sugar_in_your_tea@sh.itjust.works · 12 days ago

    I absolutely would, for a few reasons:

    • restoring from backup is a last resort and involves downtime; swapping a disk is comparatively easier and less disruptive
    • it’s possible your backup solution fails, so having some redundancy is always good
    • read performance - not a major factor, but saturating a gigabit link is always nice
    • Atemu@lemmy.ml · 11 days ago

      Read perf would be the same or better if you didn’t add redundancy, as you’d obviously use RAID0.

      RAID is never in any way something that can replace a backup. If the backup cannot be restored, you didn’t have a backup in the first place. Test your backups.
      If you don’t trust 1 backup, you should make a second backup rather than using RAID.

      The one and only thing RAID has going for it is minimising downtime. For most home use-cases though, the third nine of availability this would provide is hardly relevant, IMHO.

      • sugar_in_your_tea@sh.itjust.works · 11 days ago

        Read perf would be the same or better if you didn’t add redundancy

        RAID 1 can absolutely be faster than a single disk for read perf, and on Linux it is tuned to be faster. It’s not why you’d use it, but it is a feature of RAID. Intuitively, since both disks have exactly the same data, each disk could read different things. Likewise, for writes, you don’t have to write at the same time, as long as they’re always correct (e.g. don’t flip the metadata segment until both have written the data), so you can even get a write boost.

        If performance is all you care about, then yeah, go ahead and use RAID 0. But you do get a performance boost with mirroring as well.
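
        As a concrete sketch of the Linux side (device names are just examples), a plain md mirror is set up like this; the md driver balances reads across both disks, which is where the speedup over a single drive comes from:

            # Create a two-disk mirror (example devices; this wipes them)
            mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

            # Watch the initial resync and the array health
            cat /proc/mdstat

            # Two concurrent readers can be served from different disks,
            # so combined read throughput can exceed a single drive
            dd if=/dev/md0 of=/dev/null bs=1M count=4096 &
            dd if=/dev/md0 of=/dev/null bs=1M count=4096 skip=65536 &
            wait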

        Yes, a backup should be tested, but it shouldn’t be relied on. The internet can go down, services can have maintenance, etc., so it’s a lot better to never need it. If you can afford a mirror, it’s worth having.

          • sugar_in_your_tea@sh.itjust.works · 10 days ago

            No, JBOD is not the same as RAID0. With RAID0, you always need the disks in sync because reads need to alternate between them. With JBOD, as long as your reads are distributed, only one disk at a time needs to be active for a given read, and you can benefit from simultaneous reads on different disks. RAID0 will probably give the biggest speedup in a single-user scenario, whereas I’d expect JBOD to potentially outperform it in a multiuser scenario, assuming your OS and filesystem are tuned for it.

            RAID0 is pretty much never the solution, and I’d much rather have JBOD than RAID0 in almost every scenario.

            RAID1 gives you redundancy while preserving the ability for disks to independently seek, so on competent systems (e.g. Linux and BSD), you’ll get a performance speedup over a single disk and get something that rivals RAID0 in practice. You wouldn’t use it for performance because JBOD is probably just as fast in practice without the storage overhead penalty (again, assuming you properly distribute reads across disks), but you do get some performance benefits, which is nice.
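
            To make the difference concrete, both non-redundant layouts can be built with mdadm and only the level differs (device names are examples); the read behaviour described above follows from that choice:

                # RAID0 / striping: every sizeable read touches both disks in turn
                mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

                # Linear / JBOD-style concatenation: data fills one disk, then the next,
                # so a given read usually only needs one disk
                mdadm --create /dev/md1 --level=linear --raid-devices=2 /dev/sdd /dev/sde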

            • Atemu@lemmy.ml · 8 hours ago

              JBOD is not the same as RAID0

              As far as data security is concerned, JBOD/linear concatenation and RAID0 are the same.

              With RAID0, you always need the disks in sync because reads need to alternate. With JBOD, as long as your reads are distributed, only one disk at a time needs to be active for a given read and you can benefit from simultaneous reads on different disks

              RAID0 will always have the performance characteristics of the slowest disk times the stripe width.

              JBOD will have performance depending on the disk currently being used. With sufficient load, it could theoretically max out all disks at once, but that’s extremely unlikely and, with that kind of load, you’d necessarily have a queue so deep that latency shoots to the moon, resulting in an unusable system.
              Most important of all, however, is that you cannot control which device is used. This means you cannot rely on getting better perf than the slowest device because, with any IO operation, you might just hit the slowest device instead of the more performant drives, and there’s no way to predict which you’ll get.
              It goes further, too, because any given application is unlikely to have a workload that distributes evenly over all disks. In a classical JBOD, you’d need a working set of data that is greater than the size of the individual disks (which is highly unlikely) or lots of fragmentation (you really don’t want that). This means the perf that you can actually rely on getting in a JBOD is the perf of the slowest disk, regardless of how many disks there are.

              RAID0 (perf of slowest disk * number of disks) > JBOD (perf of slowest disk).

              QED.

              You also assume that disk speeds are somehow vastly different, whereas in reality most modern hard drives perform very similarly.
              Also, nobody in their right mind would design a system that groups together disks with vastly different performance characteristics when performance is of any importance.

  • BakedCatboy@lemmy.ml · 12 days ago

    Keep in mind that if you set up RAID using zfs or btrfs (idk how it works with other systems but that’s what I’ve used), then you also get scrubs, which detect and fix bit rot and unrecoverable read errors. Without that or a similar system, those errors will go undetected and your backup system will back up those corrupted files as well.
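
    For reference, the scrub commands on both filesystems (pool and mount point names are examples):

        # ZFS: start a scrub, then check progress and any files with errors
        zpool scrub tank
        zpool status -v tank

        # btrfs: start a scrub on a mounted filesystem and check its status
        btrfs scrub start /mnt/data
        btrfs scrub status /mnt/data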

    Personally, one of the main reasons I used zfs and now btrfs with redundancy is to protect irreplaceable files (family memories and stuff) from those kinds of errors. I used to just keep stuff on a hard drive, until I discovered loads of my irreplaceable vacation photos were corrupted, including in the backups, which had backed up the corruption.

    If your files can be reacquired, then I don’t think it’s a big deal. But if they can’t be, then I think having scrubs or integrity checks with redundancy so that issues can be repaired, as well as backups with snapshots to prevent errors or mistakes from messing up your backups, is a necessity. But it just depends on how much you value your files.

    • Atemu@lemmy.ml · 11 days ago

      Note that you do not need any sort of redundancy to detect corruption.

      Redundancy only gains you the ability to have that corruption immediately and automatically repaired.

      While this sounds nice in theory, you have no use for such auto repair if you have backups handy because you can simply restore that data manually using your backups in the 2 times in your lifetime that such corruption actually occurs.
      (If you do not have backups handy, you should fix that before even thinking about RAID.)

      It’s incredibly costly to have such redundancy at a disk level and you’re almost always better off using those resources on more backups instead if data security is your primary concern.
      Downtime mitigation is another story but IMHO it’s hardly relevant for most home users.

      • beastlykings@sh.itjust.works · 11 days ago

        Can you explain this to me better?

        I need to work on my data storage solution, and I knew about bit rot but thought the only solution was something like a zfs pool.

        How do I go about manually detecting bit rot? Assuming I had perfect backups to replace the rotted files.

        Is a zfs pool really that inefficient space wise?

        • Atemu@lemmy.ml · 10 days ago

          Sure :)

          I knew about bit rot but thought the only solution was something like a zfs pool.

          Right. There are other ways of doing this, but a checksumming filesystem such as ZFS or btrfs (or bcachefs if you’re feeling adventurous) is the best way to do it generically and can also be used in combination with other methods.

          What you generally need in order to detect corruption on an abstract level is some sort of “integrity record” which can determine whether some set of data is in an expected state or an unexpected state. The difficulty here is keeping that record up to date with the actually expected changes to the data.
          The filesystem sits at a very good place to implement this because it handles all such “expected changes”; executing them on behalf of the running processes is its purpose.

          Filesystems like ZFS and btrfs implement this integrity record in the form of hashes of smaller portions of each file’s data (“extents”). The hash for each extent is stored in the filesystem metadata. When any part of a file is read, the extents that make up that part of the file are each hashed and the results are compared with the hashes stored in the metadata. If the hash is the same, all is good and the read succeeds but if it doesn’t match, the read fails and the application reading that portion of the file gets an IO error that it needs to handle.

          Note how there was never any second disk involved in this. You can do all of this on a single disk.

          Now to your next question:

          How do I go about manually detecting bit rot?

          In order to detect whether any given file is corrupted, you simply read back that file’s content. If you get an error due to a hash mismatch, it’s bad; if you don’t, it’s good. It’s quite simple, really.

          You can then simply expand that process to all the files in your filesystem to see whether any of them have gotten corrupted. You could do this manually by just reading every file in your filesystem once and reporting errors, but those filesystems usually provide a ready-made tool for that with tighter integration in the filesystem code. The conventional name for this process is a “scrub”.

          How do I go about manually detecting bit rot? Assuming I had perfect backups to replace the rotted files.

          You let the filesystem-specific scrub run and it will report every file that contains corrupted data.

          Now that you know which files are corrupted, you simply replace those files from your backup.

          Done; no more corrupted files.
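
          As a minimal sketch of that last step, using restic since that’s what OP uses (the repository and file path are placeholders):

              # Restore only the file the scrub flagged, straight over the corrupted copy
              restic -r s3:s3.amazonaws.com/<bucket-name> restore latest \
                  --target / --include /srv/photos/2009/IMG_0042.JPG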

          Is a zfs pool really that inefficient space wise?

          Not a ZFS pool per se but redundant RAID in general. And by “incredibly costly” I mean costly for the purpose of immediately restoring data rather than doing it manually.

          There actually are use-cases for automatic immediate repair but, in a home lab setting, it’s usually totally acceptable for e.g. a service to be down for a few hours until you e.g. get back from work to restore some file from backup.

          It should also be noted that corruption is exceedingly rare. You will encounter it at some point, which is why you should protect yourself against it, but it’s not like this will happen every few months; it’s more on the order of every few decades.

          To answer your original question directly: No, ZFS pools themselves are not inefficient as they can also be used on a single disk or in a non-redundant striping manner (similar to RAID0). They’re just the abstraction layer at which you have the choice of whether to make use of redundancy or not and it’s redundancy that can be wasteful depending on your purpose.

          • Andres Salomon@social.ridetrans.it · 10 days ago

            @Atemu @beastlykings Every few decades seems optimistic. I have an archive of photos/videos from cameras and phones spanning the early 2000s to the mid-2010s. There’s not a lot, maybe 6 GB; a few thousand files. At some point around the end of that time period, I noticed corruption in some random photos.

            Likewise, I have a (3 TB) FLAC archive, which is about 15-20 years old. Nightly ‘flac -t’ checks are done on 1/60th of the archive, essentially a scrub. Bitrot has struck a dozen times so far.
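
            A rotating check like that is just a small nightly script, roughly like this (the paths and the 60-slice split are illustrative):

                #!/bin/sh
                # Test a different 1/60th of the FLAC archive each night
                ARCHIVE=/srv/music
                SLICE=$(( $(date +%s) / 86400 % 60 ))   # today's slice, 0..59

                find "$ARCHIVE" -name '*.flac' | sort | awk -v s="$SLICE" 'NR % 60 == s+0' |
                while read -r f; do
                    flac -t --silent "$f" || echo "BAD: $f"
                done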

            • Atemu@lemmy.ml · 6 hours ago

              Interesting. I suspect you must either have had really bad luck or be using faulty hardware.

              In my broad summarising estimate, I only accounted for relatively modern disks, something made in the past 5 years or so. Drives from the 2000s or early 2010s could be significantly worse and I wouldn’t be surprised. It sounds to me like your experience was with drives that are well over a decade old at this point.

  • Admiral Patrick@dubvee.org · 12 days ago

    I always do some level of RAID, if for no other reason than that I’m not out of commission if a disk fails. When you’re working with multiple TB, restoring from a backup can take a while. If rapid recovery from a disk failure is not a high priority for you, then you could probably do without RAID.

    Either way, make sure you test your backups occasionally.

    Another way to put it: With RAID, a disk failure is like your Check Engine light coming on. You can still drive, but you should address the problem as soon as you can. Without RAID, it’s like your engine has seized up and you have to tow it for repair and are without your car until it’s fixed.

    • Dust0741@lemmy.world (OP) · 12 days ago

      Hmm that’s a good point.

      AWS can also cost a good chunk if you restore sub-optimally.