IRC Logs for #crux-devel Tuesday, 2014-12-09

<prologic> I'm planning on buying:
00:00 <prologic> and WD Red 3TB drives
00:01 <prologic> but since we can't upgrade/change the pool type yet -- it might force me to buy more drives than I anticipated up front
00:31 <jaeger> I use a mix of WD Reds and Seagate Constellation ES. When the Seagates go I'll replace them with WD Reds
00:31 <jaeger> The only problems I've had were physical drive failures
00:32 <jaeger> replacing and resilvering are easy
00:40 <prologic> and clearly you run this on CRUX right?
00:40 <prologic> No other tools besides zfs/zpool?
00:40 <prologic> SMART to detect disk failures?
00:41 <jaeger> sometimes I use the tw_cli tool for my controller but otherwise, yeah. crux, zfs, zpool, smartmontools
00:42 <prologic> you use a raid controller?
00:42 <prologic> or what do they call it; HBA?
00:42 <jaeger> yes. I wonder what that 10/4 uses as its controller
00:42 <jaeger> In my case it's actually a RAID controller but I'm not using any RAID
00:42 <prologic> all onboard I assume
00:43 <prologic> yeah RAID is not recommended with ZFS I've read
00:43 <prologic> and makes little to no sense :)
00:43 <jaeger> some people set a RAID controller up in JBOD mode but that's also not ideal
00:43 <jaeger> single non-RAID exportable units are best in the case of my controller, at least
00:45 <prologic> yeah so I guess the only thing I'm unsure about is the pool type and no. of drives to start with :)
00:45 <prologic> I'm pretty set on the hw; the 10/04; nice price, rack mounted; good no. of hot-swap drive bays (10x 3.5", 4x 2.5")
00:46 <jaeger> you might want to look at Supermicro's stuff, too, if you want a second opinion
00:46 <prologic> yeah I have but they're not cheap
00:47 <jaeger> depends on the config. They're much cheaper for simple servers than many other manufacturers, not sure on the storage side
00:47 <jaeger> if you're already set on the Lime Tech thing, no worries, just throwing it out there
00:48 <prologic> I'll have another look
00:48 <prologic> there's Broadberry, and Thinkmate will supply rackmountable Supermicros
00:48 <prologic> I want this to go into my 22RU 800mm rack at home :)
00:49 <jaeger> regarding the pool growing stuff, you CAN create a zpool with different vdev types
00:49 <jaeger> and you can add vdevs later
00:49 <jaeger> you just can't change a vdev's type or shrink it
00:49 <jaeger> you can't remove a device from a vdev, only replace it, etc.
00:50 <jaeger> so if you wanted to create a raidz1 now you could add drives to it later but not change it to a raidz2
00:50 <jaeger> but you could have a raidz1 and a raidz2 in the same pool
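[A quick sketch of the rules jaeger describes, using the standard zpool subcommands; the pool name and device names are placeholders, not anything from the discussion:]

```
# Create a pool with one raidz1 vdev (device names are hypothetical)
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
# Later: grow the pool by adding a second vdev -- a raidz2 is allowed
zpool add tank raidz2 /dev/sdd /dev/sde /dev/sdf /dev/sdg
# Replacing a device inside a vdev is fine...
zpool replace tank /dev/sda /dev/sdh
# ...but a vdev's type can't be changed, and it can't be shrunk
```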
00:50 <prologic> hmm interesting
00:50 <prologic> yeah I'm not sure what to do :)
00:51 <prologic> given what I plan to buy only has 10 bays :)
00:51 <prologic> ideally I want raidz2 anyway
00:53 <jaeger> <-- tons of ZFS info
00:54 <prologic> oh yes been reading these blogs :)
00:54 <prologic> very well written
00:55 <prologic> I guess I could do something like this
00:55 <prologic> buy 5 drives; raidz2 with 1 hot spare
00:55 <prologic> then later on another 5 drives which I could add to the existing pool or create a new pool
00:56 <prologic> been playing with the tools at least locally using file disk images
00:56 <prologic> also Q: how does zfs detect failed drives?
00:57 <prologic> do you have to "zfs scrub" on a regular basis to determine the state of the pool?
00:57 <jaeger> it detects errors in the filesystem data
00:57 <jaeger> I scrub once a month, that's probably overkill
00:57 <prologic> obviously smartmontools will help to notify you of failed disks too
00:57 <prologic> so zfs will pick up on faulty disks without scrubbing?
00:57 <prologic> the docs I was reading were a bit misleading on that
01:03 <jaeger> usually, yes. Depends on how good the controller and drives are, I suppose
01:04 <jaeger> well, and if ZFS detects software parity errors
01:07 <prologic> can't think of anything else right now that I probably can't read up on :)
01:07 <prologic> thanks :)
01:07 <prologic> oh do you do any iSCSI stuff over your pools?
01:08 <prologic> or how do you typically share the storage?
01:08 <prologic> I see zfs itself has builtin support for NFS/SMB (with the correct sw deps installed)
01:08 <jaeger> I use smb for most of my stuff, works everywhere
01:09 <jaeger> iscsi works alright but isn't worth the effort to set up in my opinion
01:09 <jaeger> haven't done it with ZFS on linux
01:10 <prologic> you find smb easier than nfs?
01:10 <prologic> mixture of *nix, windows machines?
01:10 <jaeger> windows and osx fuck up nfs pretty badly, in my opinion
01:11 <jaeger> my stuff at home is windows and crux so smb works great
01:12 <prologic> fair enough
01:12 <prologic> we have osx (mba) and crux
01:12 <prologic> soon another crux desktop for the wife (probably a NUC or similar)
01:34 *** kori has quit IRC
01:36 *** kori has joined #crux-devel
02:21 <kori> this seems neat.
02:36 *** Feksclaus has quit IRC
03:34 <prologic> wtf is this?
03:34 <prologic> an init written in bash?
03:36 *** kori has quit IRC
03:38 *** kori has joined #crux-devel
03:38 *** kori has quit IRC
03:38 *** kori has joined #crux-devel
03:43 *** mavrick61 has quit IRC
03:45 *** mavrick61 has joined #crux-devel
04:28 <diverse> prologic: that's bad
04:29 <prologic> jaeger, how do you have smartd configured btw?
04:30 <prologic> diverse, what is?
04:30 <diverse> the core init being written in bash
04:41 <prologic> jaeger, really?
04:41 <prologic> so just: DEVICESCAN
04:41 <prologic> how do you get alerted of a disk failure and go to your cabinet and replace a disk? :)
04:42 <jaeger> I use that VM for lots of things so I frequently see the logs anyway
04:42 <jaeger> I would probably set up mail notifications otherwise
04:42 <prologic> $ egrep "^[^#].*" /etc/smartd.conf
04:42 <prologic> that's the default :)
04:42 <prologic> ahh k
04:42 <prologic> nps :)
04:43 <prologic> I'll probably set up some kind of influx/grafana monitoring and configure smartd to send email alerts on failures
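[A hedged sketch of what a mail-alerting smartd.conf line could look like; -a, -m, and -M test are standard smartmontools directives, but the address is a placeholder:]

```
# /etc/smartd.conf -- monitor all devices, mail on failures/errors
# -M test sends a test mail at smartd startup so you know delivery works
DEVICESCAN -a -m you@example.com -M test
```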
04:47 <prologic> these start at ~$2500 :/
04:47 <prologic> dagnamit :)
05:11 <prologic> I suppose I could start with something like this :)
05:11 <prologic> throw some ram in, buy 4 disks
05:11 <prologic> good price
05:11 <prologic> but only 4 bays :)
05:12 <prologic> and no 2.5" bays (for ARC and L2ARC)
08:47 <prologic> nice read ^^^
08:48 <prologic> jaeger, I'm not sure what Supermicro stuff you had in mind for storage -- but I can't find anything that matches this LimeTech 10/4 system (3RU, 10x 3.5" + 4x 2.5" hot-swap bays)
08:49 <prologic> these guys seem to have gotten iStarUSA to custom make the chassis too -- hard to find 3.5/2.5 hotswap systems
09:14 *** teK__ has joined #crux-devel
09:14 *** teK__ has quit IRC
11:40 *** Feksclaus has joined #crux-devel
14:19 <jaeger> iStar and Norco might also be worth a look
14:20 <jaeger> anyway, if you like the Lime Tech one, go with that, I was just chatting about others, not making a case for something else :)
15:01 *** tvaalen has quit IRC
15:01 *** tvaalen has joined #crux-devel
15:08 <prologic> all good :)
15:08 <prologic> I can't find anything else though
15:08 <prologic> otherwise you have to DIY
15:23 <prologic> jaeger, how do you have your pool(s) configured? Can I see your zpool status? :)
15:24 <prologic> jaeger, also do you use different controllers per vdev or just plug all the drives into the onboard sas/sata ports?
15:27 <jaeger> I only have one pool, my storage needs aren't large. It's a raidz1 with 3 devices and 1 spare
15:28 <jaeger> All 4 drives are connected to a 3ware 9650SE controller
15:28 <jaeger> If you have a good onboard SATA chipset you can certainly use that. I couldn't in my case because I pass that controller directly to a VM and couldn't split ports on the onboard one
15:28 <jaeger> so it was all onboard ports go to the VM or use a separate controller
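[jaeger's layout (3-disk raidz1 plus a hot spare) sketched with the standard zpool syntax; pool and device names are placeholders:]

```
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc spare /dev/sdd
zpool status tank
```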
15:31 <prologic> ahh k
15:31 <prologic> thanks :)
15:33 <prologic> been doing heaps of reading today (been sick off work)
15:33 <prologic> I think the only configuration that makes sense for me with that AVS 10/4 3RU box is
15:33 <prologic> 2 vdevs of raidz2 2+2 (+1 hot spare)
15:34 <prologic> initially populating the first vdev and later populating the other 5 bays and adding to the pool later on
15:34 <prologic> does that make sense? :)
15:35 <jaeger> Well, how you want to lay it out is subjective but I see no reason that shouldn't work
15:36 <prologic> yeah I was getting confused between vdevs and pools for a while
15:36 <prologic> but it seems that not only are physical disks, files, cache, and log devices vdevs
15:36 <prologic> but groups of individual devices/files in some configuration are considered a vdev too
15:41 <prologic> space efficiency of raidz2 is no better than mirroring
15:41 <jaeger> not with X+Y where X==Y
15:41 <jaeger> where X>Y it is
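[The X+Y point is just data disks divided by total disks; a quick integer-arithmetic check with the geometries discussed:]

```shell
# Usable fraction of a raidz vdev ~= data disks / (data + parity disks)
echo "2+2 raidz2: $(( 100 * 2 / (2 + 2) ))% usable"  # 50%, same as a mirror
echo "7+2 raidz2: $(( 100 * 7 / (7 + 2) ))% usable"  # 77%, much better
```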
15:42 <jaeger> consider this scenario:
15:42 <jaeger> instead of doing a 2+2(1) now and another 2+2(1) later, what about a 2+2(1) now and just adding disks later so you end up with a 7+2(1)?
15:42 <jaeger> (1) being the hot spare
15:43 <jaeger> Do you really need to be able to lose up to 4 disks at a time?
15:44 <jaeger> maybe you can't do that, I can't remember exactly
15:45 <jaeger> ah, that functionality still isn't in ZFS, though there's some research done towards it, I guess
15:45 <jaeger> never mind, ignore that scenario :P
15:45 <jaeger> you can expand the pool but not individual vdevs
15:47 <jaeger> I haven't changed my zpool since I built it so I have to go back and reference this stuff to remember :P
15:55 <prologic> sorry was reading
15:55 <prologic> wait you can't add disks to a vdev?
15:55 <jaeger> not a raidz vdev, looks like
15:55 <prologic> until that functionality becomes available
15:56 <prologic> we're stuck with adding more vdevs of the same geometry to the pool
15:56 <prologic> so it's a toss-up between (for me) raidz1 and raidz2
15:56 <prologic> you use raidz1 right?
15:57 <prologic> there is another option
15:57 <prologic> most of the data I ever plan to put on this will likely come from reproducible sources
15:57 <prologic> so I could just (which I plan to do anyway and already do) just back up the important data
15:57 <prologic> and just live with the chance that I might have a total pool failure :)
15:58 <jaeger> It's unlikely that you'll lose more than one disk at the same time unless you ignore problems, I imagine
15:59 <jaeger> there's some contention that if you're going to use large disks like 5 or 6 TB you should avoid raidz1 due to the resilvering "stress" when you replace a drive
16:00 <jaeger> My drives are a mix of 2 and 4 TB and I haven't seen any trouble there
16:00 <prologic> I've been reading that
16:00 <prologic> so your pool isn't at full capacity then?
16:00 <prologic> since you have mixed drive sizes
16:01 <jaeger> When one of my 2TB drives failed I replaced it with a 4TB because it was cheap at the time
16:01 <jaeger> eventually I'll replace them all, most likely
16:01 <prologic> thing is it likely only takes hours to resilver a new disk anyway
16:02 <prologic> most disks can write at ~100MB/s?
16:02 <jaeger> depends on how much data needs to be rewritten
16:02 <prologic> that's true
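[A back-of-the-envelope check of the "hours" claim, assuming a worst case where the whole 3TB drive is rewritten at a sustained 100MB/s (both numbers from the conversation, not measured):]

```shell
# 3 TB = 3,000,000 MB; at 100 MB/s that's 30,000 s, or about 8 hours
echo "$(( 3000000 / 100 )) s = ~$(( 3000000 / 100 / 3600 )) h"
```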
16:02 <prologic> so if you follow the 80% rule
16:02 <prologic> or I've read others where it's the 90-90% rule
16:05 <prologic> going to head to bed
16:05 <prologic> I'll sleep on this :)
16:05 <prologic> maybe I'll be convinced a 2+1 is good enough with room to grow to another 2 groups giving me 3x(2+1) (1 hot spare)
19:07 *** frusen has joined #crux-devel
20:06 <jaeger> In general, is a dynamic array better for append-only type storage things and linked lists better for arbitrary insert/delete type storage in C?
20:07 <jaeger> specifically for a list or collection
20:14 <teK_> what do you mean by dynamic array?
20:15 <teK_> an array that gets doubled if filled with 75% data?
20:15 <jaeger> pretty much, yeah
20:15 <jaeger> not an official data type, wasn't sure how better to describe it
20:16 <jaeger> I was poking around at my pkg software stuff and thinking about that dynamic array or linked lists for storing the package database in memory
20:16 <jaeger> linked lists probably make more sense in this case since arbitrary in-list deletions/inserts aren't as simple with the dynamic array
20:18 <teK_> if you want to optimize the storage you could get the size of the lib/db/pkg file and malloc just enough/a little more than that to store things in a (contiguous) memory block
20:18 <teK_> after all, my db is 11M in size.. merely in need of heavy optimisations ;>
20:21 <jaeger> maybe a hash map is a better option than either of the others
20:21 <teK_> that's what I'm using
20:22 <jaeger> yeah, I remember you talking about that a bit
20:22 <diverse> are we talking about pkgutils?
20:22 <teK_> that was because I liked (the simplicity of) the hash() function very much
20:23 <teK_> we are talking about separate (re)write versions of prt-get et al. :)
20:23 <teK_> n = 100000: LinkedList, sort, bsearch: 27.53 ms
20:23 <teK_> n = 100000: ArrayList, sort, bsearch: 21.76 ms
20:23 <teK_> n = 100000: LinkedList, linear search: 2837.30 ms
20:23 <teK_> n = 100000: ArrayList, linear search: 0.27 ms
20:24 <teK_> results from a microbenchmark an acquaintance of mine conducted some weeks ago
20:25 <teK_> don't ask about element size or language :)
20:25 <diverse> jaeger: btw, in C++ and Rust, we call dynamic arrays vectors
20:25 <teK_> somebody argued this was due to the CPU cache speeding up linear and non-jumpy access
20:26 <teK_> and we, diverse, do C :>
20:26 <teK_> we implement our own data structures! :>
20:26 <jaeger> vector makes sense for this particular thing in any language, I suppose
20:28 <diverse> rather than a LinkedList, how about a DoublyLinkedList? It's nicer with a head and tail to track elements.
20:31 <teK_> and helpful in which situation?
20:32 <jaeger> teK_: which hash function did you say you were using?
20:33 <teK_>   unsigned long hash = 5381;
20:33 <teK_>   int c;
20:33 <teK_>   while ((c = *str++))
20:33 <teK_>     hash = ((hash << 5) + hash) + c; /* hash * 33 + c */
20:38 <diverse> teK_: for faster searching and deletion of elements?
20:39 <teK_> how does double-linking speed up searching?
20:40 <teK_> and wrt deletion.. this should still be way slower than a hash table
20:41 <diverse> because lookup can also go in reverse from the end
20:43 <teK_> depends on your search algo.. still slower than hash tables in our case
<jue> .oO lots of security problems with current xorg-server ->
20:44 <teK_> Ilja "rocks"
20:44 <diverse> jue: oh shit
20:45 <teK_> X.Org believes all versions of the affected functions contain these
20:45 <teK_> flaws, dating back to their introduction.
20:45 <teK_> holy moly
20:46 <jue> objections if I commit the rc (the version that fixes those issues)?
20:47 <jue> seems to work fine for me
20:48 <teK_> I'm willing to test before/after your commit ;)
20:48 <jue> ok :)
<jue> btw, here's the ann for ->
20:51 <teK_> "Be more paranoid"
20:51 <teK_> that's the spirit... how is sound code paranoid..
20:54 <diverse> teK_: didn't know why you wanted to go with linked lists but hashmaps are nicer for random lookup of packages
20:57 <jaeger> I mentioned linked lists as an option, that's all. definitely not as good a choice in this context
20:58 <diverse> ah, no worries
21:05 <jaeger> jue: none from me, though I see you did already :)
21:05 <diverse> the way I learned how pkgutils works is by using a hashmap for packages, where the key is the name and the value is the info. The info holds the version and the files as separate members.
21:07 <jue> jaeger: thx
21:08 <diverse> but that stuff is mostly handled by the database
21:54 <frinnst> heh, was just about to paste the above url :)
22:02 <jue> frinnst: hope you agree on using the rc?
23:18 <frinnst> tek there are a bunch of bind releases too
23:19 <frinnst> that ddos thing that all dns software seems to be affected by
23:20 <frinnst> oh sorry, you had pushed it already
23:53 <Workster> alancio was in here the other day. can someone point the kde branch to 3.1 and maybe make the 3.1 branch off 3.0 since alan hasn't even done that yet.
23:57 *** prontotest has joined #crux-devel
23:57 *** prontotest has left #crux-devel ()

Generated by 2.11.0 by Marius Gedminas - find it at!