( gyuri23 | 2023. 04. 27., Thu – 08:08 )

All I'm saying is that if the vendor doesn't recommend it, I don't use it; you can do whatever you want.

Countless disk failures, botched RAID arrays > I wouldn't be proud of that.

I guess you're some kind of god, then, who can influence the hardware :) If no disk has ever failed under you, then either you're a rookie or you don't run a large number of machines :)

If you've never seen a machine go down because of an HW RAID firmware bug, then either you're a rookie or you don't run a large number of machines :) (what a sight it was when a 2x250-disk EVA HA storage cluster suffered total data loss due to a firmware bug, even though the vendor flew in its top engineer from the USA)

I still say that pretty much every team that works seriously with ZFS tells you not to run it on RAID, because that's a DIY hack. RAID on top of RAID, makes perfect sense, right? :D

I'll leave the sentence parsing to you; keep telling yourself that RAID under ZFS is fine, since the vendor doesn't say it's forbidden, only that it's not recommended :D

And yes, in 30 years I've gotten to the point where I'm "not up to speed", which is why I learn from other people's mistakes :D

And since the baker's dick has already come up, after this (see below):

It is best to use an HBA instead of a RAID controller, for both performance and reliability.

would your hand fall off if you switched the RAID controller to HBA mode? :D
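(For the record, on many common controllers this really is just a couple of commands. A rough sketch, assuming a Broadcom/LSI card managed with storcli; the /c0 controller index and the enclosure/slot IDs are assumptions, and some cards need an IT-mode firmware flash instead, so check the vendor documentation for your model first:)

    # list the controllers the tool can see
    storcli64 show

    # enable JBOD/pass-through mode on controller 0
    storcli64 /c0 set jbod=on

    # expose a given drive as a bare JBOD disk
    # (enclosure/slot IDs vary per system)
    storcli64 /c0/e252/s0 set jbod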

P.S.

Hardware RAID controllers

Hardware RAID controllers should not be used with ZFS. While ZFS will likely be more reliable than other filesystems on hardware RAID, it will not be as reliable as it would be on its own.

  • Hardware RAID will limit opportunities for ZFS to perform self healing on checksum failures. When ZFS does RAID-Z or mirroring, a checksum failure on one disk can be corrected by treating the disk containing the sector as bad for the purpose of reconstructing the original information. This cannot be done when a RAID controller handles the redundancy, unless ZFS itself stores a duplicate copy of the data: that is, when the corrupted block is metadata, when the copies property is set, or when the RAID array is part of a mirror/raid-z vdev within ZFS (see the copies sketch after this list).

  • Sector size information is not necessarily passed correctly by hardware RAID on RAID 1, and cannot be passed correctly at all on RAID 5/6. Hardware RAID 1 is more likely to experience read-modify-write overhead from partial sector writes, while hardware RAID 5/6 will almost certainly suffer from partial stripe writes (i.e. the RAID write hole). ZFS using the disks natively allows it to obtain the sector size information reported by the disks and so avoid read-modify-write on sectors, while its copy-on-write design avoids partial stripe writes on RAID-Z entirely.

    • There can be sector alignment problems on ZFS when a drive misreports its sector size. Such drives are typically NAND-flash based solid state drives and older SATA drives from the advanced format (4K sector size) transition that took place before the Windows XP EoL. This can be manually corrected at vdev creation (see the ashift sketch after this list).

    • It is possible for the RAID header to cause misalignment of sector writes on RAID 1 by starting the array within a sector on an actual drive, such that manual correction of sector alignment at vdev creation does not solve the problem.

  • RAID controller failures can require that the controller be replaced with the same model, or in less extreme cases, a model from the same manufacturer. Using ZFS by itself allows any controller to be used (see the export/import sketch after this list).

  • If a hardware RAID controller's write cache is used, an additional failure point is introduced, one that can only be partially mitigated by the added complexity of flash that preserves the cache across power loss events. The data can still be lost if the battery fails exactly when it is needed to survive a power loss, or if there is no flash and power is not restored in a timely manner. Losing the write cache can severely damage anything stored on the RAID array when many outstanding writes are cached. In addition, all writes are stored in the cache, not just the synchronous writes that actually need one, which is inefficient, and the cache is relatively small. ZFS instead allows synchronous writes to be written directly to flash on a dedicated log device (see the SLOG sketch after this list), which should provide similar acceleration to hardware RAID and the ability to accelerate many more in-flight operations.

  • Behavior during RAID reconstruction when silent corruption damages data is undefined. There are reports of RAID 5 and 6 arrays being lost during reconstruction when the controller encountered silent corruption. ZFS' checksums allow it to avoid this situation by determining whether enough information exists to reconstruct the data. If not, the file is listed as damaged in zpool status and the system administrator has the opportunity to restore it from backup (see the scrub sketch after this list).

  • IO response times will suffer whenever the OS blocks on IO operations, because the system's CPU ends up waiting on the much weaker embedded CPU in the RAID controller. This lowers IOPS relative to what ZFS could have achieved.

  • The controller’s firmware is an additional layer of complexity that cannot be inspected by arbitrary third parties. The ZFS source code is open source and can be inspected by anyone.

  • If multiple RAID arrays are formed by the same controller and one fails, the identifiers of the arrays exposed to the OS might become inconsistent. Giving the drives directly to the OS avoids this via naming that maps to a unique port or a unique drive identifier (see the by-id sketch after this list).

    • e.g. If you have arrays A, B, C and D, and array B dies, the interaction between the hardware RAID controller and the OS might renumber arrays C and D so that they appear as arrays B and C respectively. This can fault pools imported verbatim from the cachefile.

    • Not all RAID controllers behave this way; however, the issue has been observed on both Linux and FreeBSD when system administrators used single-drive RAID 0 arrays, and with controllers from multiple vendors.
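For reference, the copies property mentioned in the first bullet is an ordinary dataset property. A minimal sketch, with made-up pool and dataset names:

    # keep two copies of every block in this dataset, so ZFS can
    # self-heal a checksum failure even without vdev-level redundancy
    zfs set copies=2 tank/important

    # verify the setting
    zfs get copies tank/important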
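The manual correction at vdev creation is the ashift property. A minimal sketch; the pool name and device paths are placeholders:

    # force 4 KiB sectors (2^12 bytes, ashift=12) even if a drive
    # misreports itself as 512-byte native
    zpool create -o ashift=12 tank mirror \
        /dev/disk/by-id/ata-DISK_SERIAL1 \
        /dev/disk/by-id/ata-DISK_SERIAL2

    # confirm what the pool actually uses
    zpool get ashift tank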
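Moving a bare-drive pool to a different controller, as described in the controller-failure bullet, is just an export/import cycle. A minimal sketch with a placeholder pool name:

    # detach the pool cleanly from the old controller
    zpool export tank

    # after recabling the drives to any other HBA or onboard ports,
    # scan for and re-import the pool
    zpool import tank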
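The "synchronous writes written directly to flash" part of the write-cache bullet refers to a dedicated log device (SLOG). A minimal sketch; the device paths are placeholders:

    # acknowledge synchronous writes from fast flash instead of a
    # battery-backed controller cache
    zpool add tank log /dev/disk/by-id/nvme-FAST_SSD

    # or mirror the log device so it is not a single point of failure
    zpool add tank log mirror \
        /dev/disk/by-id/nvme-SSD_A /dev/disk/by-id/nvme-SSD_B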
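The detection path in the silent-corruption bullet looks like this from the command line. A minimal sketch with a placeholder pool name:

    # read every block and verify it against its checksum,
    # repairing from redundancy where possible
    zpool scrub tank

    # list any files ZFS could not reconstruct
    zpool status -v tank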
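The stable naming in the last bullet is what /dev/disk/by-id provides on Linux. A minimal sketch; the pool name and device names are placeholders:

    # build the pool from persistent drive identifiers instead of
    # kernel-order names like /dev/sdb that can shift between boots
    zpool create tank raidz2 \
        /dev/disk/by-id/ata-MODEL_SERIAL1 \
        /dev/disk/by-id/ata-MODEL_SERIAL2 \
        /dev/disk/by-id/ata-MODEL_SERIAL3 \
        /dev/disk/by-id/ata-MODEL_SERIAL4

    # or re-import an existing pool using those identifiers
    zpool import -d /dev/disk/by-id tank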

One might be inclined to use single-drive RAID 0 arrays to make a RAID controller behave like an HBA, but this is not recommended, for many of the reasons listed for the other hardware RAID types. It is best to use an HBA instead of a RAID controller, for both performance and reliability.