Proxmox ZFS mirror - terribly low I/O and IOPS
Hi everyone!
Could someone please point out what I have overlooked and why the speeds are so terribly bad? The machine is an aging HP ProLiant DL20 server (E3-1220v5 / 32GB DDR4 UDIMM ECC), running Debian 12 with the Linux 6.8.12-11-pve kernel. There are two HDDs (WD2000F9YZ-0) and two SSDs (WDS500G1R0B). I know the SSDs are not enterprise-grade devices, but they should still be capable of more than this, and the HDD numbers also look suspiciously low across the board. The B140i RAID controller is disabled and every drive is attached directly via AHCI. The partitions and other useful details:
root@pve:~# fdisk -l
Disk /dev/sdb: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD2000F9YZ-0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D5DEFE9B-A29D-614F-BBF3-C31770AD9F27

Device          Start        End    Sectors  Size Type
/dev/sdb1        2048 3907012607 3907010560  1.8T Solaris /usr & Apple ZFS
/dev/sdb9  3907012608 3907028991      16384    8M Solaris reserved 1

Disk /dev/sdd: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: WDC WDS500G1R0B
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CD6F9ED7-CD69-48DD-AE75-2320DA87F794

Device       Start       End   Sectors  Size Type
/dev/sdd1       34      2047      2014 1007K BIOS boot
/dev/sdd2     2048   2099199   2097152    1G EFI System
/dev/sdd3  2099200 975175680 973076481  464G Solaris /usr & Apple ZFS

Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD2000F9YZ-0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 654222C6-1214-894C-BF2E-9B003371915C

Device          Start        End    Sectors  Size Type
/dev/sda1        2048 3907012607 3907010560  1.8T Solaris /usr & Apple ZFS
/dev/sda9  3907012608 3907028991      16384    8M Solaris reserved 1

Disk /dev/sdc: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: WDC WDS500G1R0B
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 813FCBF3-4AAD-4531-B964-88E750E044DF

Device       Start       End   Sectors  Size Type
/dev/sdc1       34      2047      2014 1007K BIOS boot
/dev/sdc2     2048   2099199   2097152    1G EFI System
/dev/sdc3  2099200 975175680 973076481  464G Solaris /usr & Apple ZFS

==============================

root@pve:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
local-zfs-hdd      552K  1.76T    96K  /local-zfs-hdd
rpool             10.9G   435G   104K  /rpool
rpool/ROOT        10.9G   435G    96K  /rpool/ROOT
rpool/ROOT/pve-1  10.9G   435G  10.9G  /
rpool/data          96K   435G    96K  /rpool/data
rpool/var-lib-vz    96K   435G    96K  /var/lib/vz

==============================

root@pve:~# zpool status
  pool: local-zfs-hdd
 state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        local-zfs-hdd                                   ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            ata-WDC_WD2000F9YZ-09N20L1_WD-WMC1P0D9UMLK  ONLINE       0     0     0
            ata-WDC_WD2000F9YZ-09N20L1_WD-WMC1P0E6MPP9  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
config:

        NAME                                               STATE     READ WRITE CKSUM
        rpool                                              ONLINE       0     0     0
          mirror-0                                         ONLINE       0     0     0
            ata-WDC_WDS500G1R0B-68A4Z0_244045800382-part3  ONLINE       0     0     0
            ata-WDC_WDS500G1R0B-68A4Z0_244045800391-part3  ONLINE       0     0     0

errors: No known data errors
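In case the pool and dataset settings play a role, these are the commands I can run to pull the properties that usually get asked about (ashift, recordsize, compression, sync, atime, ARC limit); I have not included their output here yet, but can post it on request:

# not run for this post yet - pool / dataset properties that may be relevant
zpool get ashift rpool local-zfs-hdd
zfs get recordsize,compression,sync,atime,logbias rpool/data local-zfs-hdd
# current ARC size limit (the host has 32GB RAM)
cat /sys/module/zfs/parameters/zfs_arc_max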
And the benchmarks. Dismal. Why?
root@pve:~# fio --name=randwrite_hdd --ioengine=libaio --rw=randwrite --bs=4k --numjobs=4 --size=1G --runtime=15 --group_reporting --iodepth=32 --directory=/local-zfs-hdd/fiotest
randwrite_hdd: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.33
Starting 4 processes
randwrite_hdd: Laying out IO file (1 file / 1024MiB)
randwrite_hdd: Laying out IO file (1 file / 1024MiB)
randwrite_hdd: Laying out IO file (1 file / 1024MiB)
randwrite_hdd: Laying out IO file (1 file / 1024MiB)
Jobs: 4 (f=4): [w(4)][9.8%][eta 02m:37s]
randwrite_hdd: (groupid=0, jobs=4): err= 0: pid=15611: Tue Jun 24 15:14:43 2025
  write: IOPS=7571, BW=29.6MiB/s (31.0MB/s)(473MiB/15981msec); 0 zone resets
    slat (usec): min=5, max=1472.8k, avg=524.55, stdev=22447.00
    clat (usec): min=5, max=1517.1k, avg=16377.85, stdev=124716.23
     lat (usec): min=510, max=1517.1k, avg=16902.40, stdev=126690.01
    clat percentiles (usec):
     |  1.00th=[    676],  5.00th=[    775], 10.00th=[    832],
     | 20.00th=[    914], 30.00th=[    988], 40.00th=[   1090],
     | 50.00th=[   1205], 60.00th=[   1369], 70.00th=[   1631],
     | 80.00th=[   2245], 90.00th=[   3982], 95.00th=[   8848],
     | 99.00th=[ 859833], 99.50th=[1249903], 99.90th=[1468007],
     | 99.95th=[1468007], 99.99th=[1484784]
   bw (  KiB/s): min= 4892, max=177312, per=100.00%, avg=65378.60, stdev=13260.11, samples=59
   iops        : min= 1221, max=44328, avg=16344.35, stdev=3315.10, samples=59
  lat (usec)   : 10=0.01%, 500=0.01%, 750=3.62%, 1000=28.00%
  lat (msec)   : 2=45.36%, 4=13.05%, 10=5.27%, 20=1.44%, 50=1.60%
  lat (msec)   : 100=0.21%, 250=0.11%, 500=0.20%, 1000=0.20%, 2000=0.92%
  cpu          : usr=0.48%, sys=5.39%, ctx=18019, majf=0, minf=52
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.9%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,121008,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=29.6MiB/s (31.0MB/s), 29.6MiB/s-29.6MiB/s (31.0MB/s-31.0MB/s), io=473MiB (496MB), run=15981-15981msec

==============================

root@pve:~# fio --name=randwrite_ssd --ioengine=libaio --rw=randwrite --bs=4k --numjobs=4 --size=1G --runtime=15 --group_reporting --iodepth=32 --directory=/rpool/data/fiotest
randwrite_ssd: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.33
Starting 4 processes
randwrite_ssd: Laying out IO file (1 file / 1024MiB)
randwrite_ssd: Laying out IO file (1 file / 1024MiB)
randwrite_ssd: Laying out IO file (1 file / 1024MiB)
randwrite_ssd: Laying out IO file (1 file / 1024MiB)
Jobs: 4 (f=4): [w(4)][12.3%][eta 01m:54s]
randwrite_ssd: (groupid=0, jobs=4): err= 0: pid=15942: Tue Jun 24 15:16:26 2025
  write: IOPS=9251, BW=36.1MiB/s (37.9MB/s)(584MiB/16165msec); 0 zone resets
    slat (usec): min=5, max=1657.8k, avg=427.76, stdev=18378.22
    clat (usec): min=4, max=1683.2k, avg=13404.34, stdev=101949.72
     lat (usec): min=531, max=1683.2k, avg=13832.10, stdev=103565.46
    clat percentiles (usec):
     |  1.00th=[    717],  5.00th=[    840], 10.00th=[    914],
     | 20.00th=[   1020], 30.00th=[   1106], 40.00th=[   1205],
     | 50.00th=[   1319], 60.00th=[   1450], 70.00th=[   1745],
     | 80.00th=[   2606], 90.00th=[   4686], 95.00th=[  10683],
     | 99.00th=[ 484443], 99.50th=[ 876610], 99.90th=[1551893],
     | 99.95th=[1652556], 99.99th=[1686111]
   bw (  KiB/s): min= 1800, max=189069, per=100.00%, avg=62067.11, stdev=12682.54, samples=76
   iops        : min=  450, max=47267, avg=15516.63, stdev=3170.64, samples=76
  lat (usec)   : 10=0.01%, 750=1.58%, 1000=16.30%
  lat (msec)   : 2=56.33%, 4=13.33%, 10=7.19%, 20=1.92%, 50=1.71%
  lat (msec)   : 100=0.24%, 250=0.08%, 500=0.33%, 750=0.41%, 1000=0.17%
  lat (msec)   : 2000=0.41%
  cpu          : usr=0.60%, sys=6.82%, ctx=23038, majf=0, minf=52
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.9%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,149558,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=36.1MiB/s (37.9MB/s), 36.1MiB/s-36.1MiB/s (37.9MB/s-37.9MB/s), io=584MiB (613MB), run=16165-16165msec
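If it helps narrow things down, I can also rerun fio with different parameters. This is only a sketch of what I would try next (same pools and directories as above, just different job parameters), not something I have run yet:

# not run yet - sequential 1M writes, single job, to see streaming throughput instead of 4k random
fio --name=seqwrite_ssd --ioengine=libaio --rw=write --bs=1M --numjobs=1 --size=4G \
    --runtime=30 --group_reporting --iodepth=8 --directory=/rpool/data/fiotest

# not run yet - the original 4k random-write job, but with a final flush so buffered writes are counted
fio --name=randwrite_ssd_fsync --ioengine=libaio --rw=randwrite --bs=4k --numjobs=4 \
    --size=1G --runtime=15 --group_reporting --iodepth=32 --end_fsync=1 \
    --directory=/rpool/data/fiotest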
Many thanks in advance for any help!