02:21:29 I'm trying to figure out what width of RAIDZ1 I should use for an SSD pool I'm thinking of building
02:21:49 I keep seeing this "power of two plus parity" rule being thrown around, but I also see conflicting information
02:22:01 is there any reason why I *shouldn't* use a 4-wide RAIDZ1?
02:39:51 What's that org that operates their own DNS and lets you register free domain names that only work if you use their nameservers?
02:41:00 OpenNIC? Though there are others which are similar, that's the big one.
02:43:42 JAA: Ah yes, I believe that's the one. Thanks!
18:55:09 Andrew: i was doing some research on building a NAS a while back and it seems like most people recommend that you just use ZFS and let it do its magic, rather than setting up some specific RAID level
19:06:32 there's ZFS RAIDZ1/Z2/Z3
19:07:20 (or just straight up ZFS mirror)
19:08:18 Z1 is the default i believe... assuming you have enough disks of course
19:30:07 Magic? That seems awkward. The whole point of these is that you can select how much redundancy you want/how many disk failures you want the system to tolerate.
19:35:09 I could imagine that a raidzX with 2^n+X disks would be slightly better for parity calculation performance, since it could work on units that are a power of two. But whether that actually matters in practice with modern hardware... Benchmark time.
21:53:16 creating channels on discord breaks because it uses "open"ai to suggest emojis, and "open"ai is down
21:53:24 This is the future of the corporatized internet
22:00:16 lmao
22:19:05 Woooooow
22:19:06 andrew: the general rule of thumb is to `fdisk -l /dev/sdX` on the disk, check the physical sector size, and then apply the proper ashift= value during zpool creation.
for a typical HDD or SSD with a 4096-byte sector size, that is 2^12 = 4096, so ashift=12
23:02:41 Fusl: that part I know, I'm asking about whether the number of disks per vdev in a RAIDZ1/2/3 configuration actually matters
23:03:32 the problem is that to see how well a specific hardware configuration will perform, I'd need to buy the hardware, and before buying the hardware I'd like to have a decent idea of how well it would perform
23:04:41 it does for recovery/rebuild times. a raidz over 90 disks, for example, is slower than a raid0 over 6x raidz over 15 of the 90 disks each
23:05:23 and generally, raidz and HDDs don't mix together very well after the zpool becomes a little fragmented
23:05:33 (mostly due to the increased random i/o)
23:05:43 to provide context: I'm debating which (and how many) SSDs to buy to expand my SSD capacity
23:06:11 is it a waste of money/IOPS to have RAIDZ1 over four disks instead of three?
23:06:24 nope, that's perfectly fine
23:07:26 it's that age-old problem of whether to buy more now for less cost per usable GB, or buy less now for less total cost since I'm probably not going to fill up the storage for a while
23:09:16 imho i'd go with more when using zfs, since expanding a raidz isn't easy (i think you'll have to recreate the entire zpool if you want to expand to more disks)
23:09:19 been a while since I looked into this, so I'm hazy on the details, but I seem to remember 10 drives per raidzN being the sweet spot of performance vs. space "wasted" for parity (you can still add multiple 10-drive vdevs per pool if you want, so no issues there). not personally tested that with SSDs though; I remember the numbers checking out on spinning rust from "yeah, good enough" testing
23:09:34 imer: I'm not buying ten SSDs right now :P
23:09:40 well, i don't know
23:09:51 I wouldn't be surprised if people were!
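The ashift rule of thumb above can be sketched in a few lines of shell. This is a hypothetical sketch, not a tested recipe: the 4096-byte sector size is assumed (check your own drives with `fdisk -l` or sysfs), and the device names in the commented `zpool create` are placeholders.

```shell
# Hypothetical sketch: derive ashift from the physical sector size.
# On a real system you'd read the size with e.g.
#   fdisk -l /dev/sdX
# or:
#   cat /sys/block/sdX/queue/physical_block_size
SECTOR=4096   # assumed 4K-sector HDD/SSD

# ashift is log2 of the sector size: 2^12 = 4096 -> ashift=12
ASHIFT=0
n=$SECTOR
while [ "$n" -gt 1 ]; do
  n=$((n / 2))
  ASHIFT=$((ASHIFT + 1))
done
echo "ashift=$ASHIFT"

# Then pass it at pool creation (placeholder pool/device names):
#   zpool create -o ashift=$ASHIFT tank raidz1 sdb sdc sdd sde
```

Setting ashift explicitly matters because it's fixed per vdev at creation time; a too-small value on 4K-sector drives causes read-modify-write amplification later.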
23:10:15 anyways, I'm trying to decide whether to buy used eBay SAS SSDs (like the Samsung PM1643) or some brand-new PCIe Gen4 SSDs (for about 50% more cost)
23:10:45 the brand-new drives are consumer grade, but their sustained performance still likely exceeds those old SAS drives
23:12:36 I have concerns about my LSI 9300-16i being a bottleneck if I bought a bunch of SAS SSDs - there's a 7 GB/s limit due to the PCIe 3.0 x8 link, and the SAS controllers allegedly handle "over 1 million IOPS" each, which would easily be saturated by reading from two of the SSDs
23:13:37 that being said, I'm not sure whether it actually matters that much in the real world; chances are the CPU wouldn't be able to keep up anyways
23:15:28 Isn't raidz expansion a thing now? I remember hearing about it like a year or two ago.
23:15:40 JAA: RAIDZ expansion is still a work in progress, I've been following that PR for a while
23:15:47 Ah
23:16:03 I guess it was a 'SOOON!!!!' thing then that I'm thinking of.
23:17:25 I know the thing I'm doing is a bit strange: I'm planning on running a database workload, which needs IOPS, but for cost savings I'm planning on using RAIDZ1 instead of mirrors, and to paper over the IOPS penalty of the RAIDZ I'm considering buying some hecking fast SSDs :P
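The 3-wide vs. 4-wide cost-per-usable-GB trade-off discussed above reduces to simple parity arithmetic. A minimal sketch, assuming only the raw RAIDZ1 layout (it ignores ZFS allocation padding and metadata overhead, which also depend on ashift and recordsize):

```shell
# RAIDZ1 sacrifices one drive's worth of capacity per vdev,
# so the usable fraction of an N-wide vdev is roughly (N-1)/N.
for WIDTH in 3 4; do
  PCT=$(( (WIDTH - 1) * 100 / WIDTH ))
  echo "raidz1, ${WIDTH}-wide: ~${PCT}% of raw capacity usable"
done
```

So going from three to four drives per vdev moves you from roughly 66% to 75% usable, at the price of one more drive up front and a slightly larger failure domain per vdev.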