At the moment my NAS is set up as a Proxmox VM, with a hardware RAID card handling six 2 TB disks. My VMs run on NVMe drives, and the NAS VM handles data storage with the RAIDed volume passed directly through to it in Proxmox. I run it as one large ext4 partition: mostly photos, personal docs and a few films. Only I really use it. My desktop and laptop mount it over NFS, and I have restic backups running weekly to two external HDDs. It all works pretty well and has for years.

I am now getting ZFS curious. I know I’ll need to flash the HBA to IT mode, or get another one. I’m guessing it’s best to create the zpool in Proxmox and pass that through to the NAS VM? Or would it be better to pass the individual disks through to the VM and manage the zpool from there?
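If you do build the pool on the Proxmox host, a minimal sketch of what that might look like (RAIDZ2 is chosen only as an example for six disks; the pool name, ashift and disk IDs are placeholders, not your actual devices):

```shell
# Sketch only: six-disk RAIDZ2 pool using stable /dev/disk/by-id paths.
# "tank" and the ata-DISKn names are placeholders -- substitute your own.
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

# Typical properties for a photo/document pool:
zfs set compression=lz4 tank
zfs set atime=off tank
```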

  • paperd@lemmy.zip · 18 days ago

    If you want multiple VMs to use the storage on the ZFS pool, it’s better to create it in Proxmox rather than passing raw disks through to the VM.

    ZFS is awesome, I wouldn’t use anything else now.

    • SzethFriendOfNimi@lemmy.world · 18 days ago

      If I recall correctly, it’s important to be running ECC memory, right?

      Otherwise corrupted bits/data can cause file-system issues or loss.

      • ShortN0te@lemmy.ml · 18 days ago

        You recall wrong. ECC is recommended for any server system but not necessary.

        • RaccoonBall@lemm.ee · 18 days ago

          And if you don’t have ECC, ZFS just might save your bacon where a more basic filesystem would silently allow corruption.

          • conorab@lemmy.conorab.com · 14 days ago

            I don’t think ZFS can do anything for you if you have bad memory other than help in diagnosing. I’ve had two machines running ZFS where they had memory go bad and every disk in the pool showed data corruption errors for that write and so the data was unrecoverable. Memory was later confirmed to be the problem with a Memtest run.
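            For what it’s worth, this is also where ZFS at least surfaces the damage; checking for the error counters described above looks something like this (pool name assumed):

            ```shell
            # Show per-device read/write/checksum error counters and any
            # files ZFS has flagged as permanently corrupted.
            zpool status -v tank

            # Re-verify every block in the pool against its checksum.
            zpool scrub tank
            ```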

  • Scrubbles@poptalk.scrubbles.tech · 18 days ago

    I did on Proxmox. One thing I didn’t know about ZFS: it does a lot of small random writes, I believe from logs and journaling. I killed 6 SSDs in 6 months. It’s a great system - but consumer SSDs can’t handle it.

    • ShortN0te@lemmy.ml · 18 days ago

      I’ve used a consumer SSD for caching on ZFS for over 2 years now and have had no issues with it. I have a 54 TB pool with tons of reads and writes and no problem.

      SMART reports 14% used.
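      For reference, attaching an SSD as a read cache is a single command; a sketch with an assumed pool and device name (losing an L2ARC device never loses pool data):

      ```shell
      # Add an SSD as an L2ARC read cache to the pool "tank".
      zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE
      ```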

    • Avid Amoeba@lemmy.ca · edited 17 days ago

      That doesn’t sound right. Also, random writes don’t kill SSDs; total writes do, and you can see how much has been written to an SSD in its SMART values. I’ve used SSDs as swap memory for years without any dying - heavily used swap, for running VMs and software builds. Their total-bytes-written counters increased steadily but never reached the limit, and the drives haven’t died despite the sustained random-write load. One was an Intel MacBook onboard SSD. Another was a random Toshiba OEM NVMe. Another was a Samsung OEM NVMe.
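      The written-bytes counter is easy to check, and the lifetime arithmetic is simple; a sketch with made-up figures (the 600 TBW rating and 50 GB/day of writes are assumptions for illustration, not measurements):

      ```shell
      # Endurance counters (attribute names vary by vendor/interface):
      #   smartctl -A /dev/nvme0 | grep -i -e written -e percentage

      # Rough lifetime estimate from a drive's TBW rating:
      tbw_gb=$((600 * 1000))        # 600 TBW rating, in GB
      per_day_gb=50                 # assumed daily writes
      days=$((tbw_gb / per_day_gb))
      echo "~$((days / 365)) years of writes at this rate"
      ```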

  • BlueÆther@no.lastname.nz · 18 days ago

    I run Proxmox and a TrueNAS VM.

    • TrueNAS is on a virtual disk on an NVMe drive with all the other VMs/LXCs
    • I pass the HBA through to TrueNAS with PCI passthrough: 6-disk RAIDZ2. This is ‘vault’ and has all my backups of home dirs, photos etc.
    • I pass through two HDDs as raw disks for bulk storage (of Linux ISOs): 2-disk mirrored ZFS

    Seems to work well
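    For reference, those two passthrough styles map to different `qm set` calls on the Proxmox host; a sketch with an assumed VM ID and device names:

    ```shell
    # PCI passthrough of the whole HBA to VM 100
    # (find the address with lspci).
    qm set 100 -hostpci0 0000:01:00.0

    # Raw-disk passthrough of a single HDD as a SCSI device.
    qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE-HDD
    ```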

  • Mio@feddit.nu · 16 days ago

    I’m looking more at Btrfs for backups, since I run Linux and not BSD. ZFS requires more RAM, I only have one disk, and I want to benefit from snapshots, compression and deduplication.

        • blackstrat@lemmy.fwgx.uk (OP) · 16 days ago

          It stole all my data. It’s a bit of a clusterfuck of a file system, especially for one so old. This article gives a good overview: https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/ It managed to get into a state where it wouldn’t even let me mount it read-only. I even resorted to running commands whose documentation just said “only run this if you know what you’re doing” but gave no guidance to actually understand them - they were basically commands for the developers to use and no one else. It didn’t work anyway. Every other system that was using the same disks, but with ext4 filesystems, came back, and I was able to fsck them and continue on. I think they’re all still running without issue 6 years later.

          For such an old file system, it has a lot of braindead design choices and a huge amount of unreliability.

          • Mio@feddit.nu · edited 12 days ago

            Data loss is never fun. File systems in general need a long time to iron out all the bugs; I hope it’s in a better state today. I remember when ext4 was new and crashed on a laptop - Ubuntu adopted it too early, or I wasn’t on LTS.

            But as always, make sure to have a proper backup in a different physical location.

  • minnix@lemux.minnix.dev · 18 days ago

    ZFS is great, but to take advantage of its positives you need the right drives; consumer drives get eaten alive, as @[email protected] mentioned, and your IO delay will be unbearable. I use Intel enterprise SSDs and have no issues.

    • RaccoonBall@lemm.ee · edited 18 days ago

      Complete nonsense. Enterprise drives are better for reliability if you plan on a ton of writes, but ZFS absolutely does not require them in any way.

      Next you’ll say it needs ECC RAM

        • Avid Amoeba@lemmy.ca · edited 17 days ago

          And you probably know that sync writes will shred NAND while async writes are not that bad.

          This doesn’t make sense. SSD controllers have been able to handle any write amplification under any load since SandForce 2.

          Also, most of the argument around speed doesn’t make sense, other than DC-grade SSDs being expected to be faster in sustained random loads. But we know how fast consumer SSDs are. We know their sequential and random performance, including sustained performance under constant load. There are plenty of benchmarks out there for most popular models. They’ll be as fast as those benchmarks on average. If that’s enough for the person’s use case, it’s enough. And they’ll handle as many TB of writes as advertised, and the amount of writes can be monitored through SMART.

          And why would ZFS be any different than any other similar FS/storage system in regards to random writes? I’m not aware of ZFS generating more IO than needed. If that were the case, it would manifest in lower performance compared to other similar systems. When in fact ZFS is often faster. I think SSD performance characteristics are independent from ZFS.

          Also OP is talking about HDDs, so not even sure where the ZFS on SSDs discussion is coming from.