IRC Logs for #crux-devel Monday, 2013-02-04

00:16 *** jue has joined #crux-devel
03:09 *** mike_k has joined #crux-devel
04:32 <frinnst> git reset ftw :)
05:02 <frinnst> the mailing list is really starting to depress me
05:05 <teK_> today I was asked how one could locally (???) install a package on debian/ubuntu
05:09 <frinnst> what?
05:09 <frinnst> as opposed to on someone else's computer?
05:52 *** horrorSt1uck has joined #crux-devel
05:55 *** horrorStruck has quit IRC
06:03 <teK_> frinnst: I have no clue
06:21 <jaeger> hah
06:21 <jaeger> thunderbird says, "jaeger@morpheus.net received 785 new messages"
06:21 <jaeger> (it's actually 8)
06:21 <teK_> mutt stopped decrypting inline-pgp messages
06:21 <teK_> fffuuu
06:40 <jue> frinnst: does btrfs-progs work for you?
06:45 <jaeger> jue: I'm using it for a temporary data move server here at work, works fine for me
06:46 <jue> jaeger: the new version, 20130117?
06:47 <jaeger> ah, no. you didn't specify, so I assumed you were asking about the one in ports, sorry
06:47 <jaeger> 20121004
06:51 <jue> I created a new tarball last week, just change the version to 20130117 to test it
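
A minimal sketch of the version bump jue suggests, assuming the btrfs-progs port follows the usual CRUX Pkgfile layout; the source URL and build body here are placeholders, not the actual opt port:

  # Pkgfile sketch -- URL and build steps are assumptions
  name=btrfs-progs
  version=20130117   # bumped from 20121004 to pick up jue's new tarball
  release=1
  source=(http://crux.nu/files/$name-$version.tar.gz)

  build() {
      cd $name-$version
      make
      make prefix=$PKG/usr install
  }

After editing, rebuilding with pkgmk and a quick mount test would exercise the new tools.
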
06:52 <jue> didn't know that you were using it too, thought that only frinnst was brave enough ;)
06:52 <frinnst> jue: yes, but tbh I've not really done anything with it other than mounting stuff
06:52 <frinnst> though I haven't noticed any trouble reports on #btrfs, so it should be safe
06:53 <jue> ok, thanks
06:55 <jaeger> I'm not using it in production anywhere, just one machine that's a temporary intermediary for data transfer
06:56 <jaeger> http://pastebin.com/xKYdvLm3
06:56 <jaeger> small test :)
06:57 <frinnst> :)
06:57 <frinnst> what are you running, raid1?
06:58 <jaeger> raid0, actually
06:58 <frinnst> same here
06:58 <teK_> so btrfs recently got raid 5 and 6
06:58 <teK_> :}
06:58 <jaeger> nice
06:58 <frinnst> did they push it?
06:58 <frinnst> didn't notice
06:59 <teK_> but: what's better about btrfs-level raid than, say, software raid on linux?
06:59 <jaeger> they're different concepts, really
06:59 <frinnst> I nominate tek to try it out first
06:59 <jaeger> the btrfs folks should call it something else :P
06:59 <frinnst> it's more like lvm than md
06:59 <teK_> frinnst: chris mason posted patches on lkml
07:00 <jaeger> mdadm raid1 is still a 1-to-1 whole-disk thing, while zfs and btrfs and some others work at the block level
07:00 <jaeger> raid1 in btrfs terms is more like "keep 2 copies of this crap somewhere on different spindles"
07:00 <jaeger> (note that this is an oversimplification)
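
A hedged pair of commands to make that contrast concrete (the device names are hypothetical): mdadm mirrors whole block devices into a new one, while btrfs picks a replication profile for data and metadata at mkfs time:

  # mdadm: mirror two whole disks into a single md device
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

  # btrfs: same two disks, but the profile is a filesystem property;
  # stripe the data (raid0) and mirror the metadata (raid1)
  mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc

With btrfs the mirroring applies per block group rather than per disk, which matches jaeger's "2 copies somewhere on different spindles" description.
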
07:00 <teK_> hmmm
07:00 <teK_> and where's the advantage?
07:01 <jaeger> you'd be better off reading about it than I could explain, I'm sure
07:01 <jue> raid 5/6 is not in the master branch, but in a branch called raid56-experimental
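
For the brave, a sketch of checking out that branch, assuming it lives in the kernel.org btrfs-progs repository (the repository URL is an assumption):

  # clone the repo and switch to the experimental raid5/6 branch
  git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git
  cd btrfs-progs
  git checkout raid56-experimental   # branch name as given above
  make
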
07:01 <teK_> sorry but I still don't get all the FS magic involved in zfs and btrfs
07:01 <jaeger> I'm no FS expert
07:01 <rmull> jaeger: Isn't the "block level" the lowest level for disk storage?
07:02 <jaeger> in terms of this discussion, I suppose it is
07:02 <rmull> So are you meaning to say that the btrfs raid56 implementation is something higher than block level?
07:02 <teK_> will have to look into this, thanks jaeger
07:02 <jaeger> I know nothing about btrfs' raid5/6, I didn't say anything about it :)
07:03 <rmull> Oh okay
07:03 <jaeger> teK_: np... it's interesting stuff
07:03 <rmull> teK_: As far as ZFS is concerned, the primary advantage everyone cites for ZFS raid5 over something that's not FS-aware is that it solves the "write hole" issue
07:03 <jaeger> which btrfs should solve as well
07:04 <teK_> the most complex setup I've ever got my hands on was a 3-disk lvm setup and our mdadm stuff on crux.nu
07:04 <teK_> rmull: I've always been a huge fan of RAID10, so no write hole for me :>
07:04 <jaeger> Wish I could show you some of our storage stuff up close, heh
07:04 <jaeger> I've got 5 NAS devices onsite currently
07:04 <teK_> hdd-0rn
07:05 <teK_> p0rn
07:05 <teK_> \o/
07:05 <jaeger> well, 3 onsite and 2 a few miles away
07:06 <jaeger> our old oracle device had ~96 1TB SATA drives, the new one has ~60 3TB SAS drives
07:07 <teK_> nice
07:07 <teK_> was this 'internally' connected or do you use fc or something?
07:07 <jaeger> the raid write hole is less of a problem than it sounds because it's pretty rare, but it does happen
07:08 <jaeger> clients connect to it via smb and cifs; it's all one device instead of having an access node that connects to it via FC or something
07:08 <jaeger> with that said, the drives are all connected via SAS cabling
07:08 <jaeger> 3 expanders that plug directly into the head nodes via SAS HBAs
07:09 <jaeger> the isilon storage, on the other hand, uses infiniband
07:09 <teK_> so this is pure storage with some NICs
07:09 <jaeger> well, not sure what you mean by "pure"; I'll say that it's storage and access in one appliance
07:11 <teK_> it does no computation on the data itself == pure in my words :)
07:11 <jaeger> hrmm... maybe yes and no :D
07:11 <jaeger> because it does no client access, like desktops or FMRI analysis
07:12 <jaeger> ZFS does a lot of its own computation, though
07:12 <jaeger> checksum verification, scrubs, etc.
07:12 <jaeger> compression, deduplication
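
A hedged illustration of the features jaeger lists, using standard ZFS administration commands (the pool and dataset names are hypothetical):

  zpool scrub tank                  # re-read the pool and verify every block checksum
  zfs set compression=on tank/data  # transparent per-dataset compression
  zfs set dedup=on tank/data        # block-level deduplication; the dedup table is pool-wide

zpool status shows scrub progress and any checksum errors that were repaired.
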
07:12 <teK_> that's meta-computation :P
07:12 <jaeger> we actually learned the painful way that we could absolutely kill the appliance by using deduplication on the old one
07:13 <jaeger> ok, fair. :)
07:13 <jaeger> For a period of a couple months I was coming in nearly every morning and spending 2 hours fixing VMs that lost their disks when the storage's NFS shit itself
07:13 <jaeger> no more dedup :D
07:14 <teK_> arg ;)
07:14 <teK_> that was one of the main features of ZFS, right?
07:14 <teK_> but dedup for VM images is rather pointless, isn't it?
07:14 <jaeger> yes, and it can be very useful in *extremely* specific circumstances
07:15 <jaeger> well, if you clone a lot of VMs it would be useful there, but we didn't use it for VMs
07:15 <jaeger> we used it on other things, but the dedup tables are pool-wide
07:15 <jaeger> we used it specifically for FMRI imaging data, huge numbers of scans from an MRI that scientists make multiple copies of
07:16 <jaeger> the problem was that when snapshots expired at the end of their cycle (30 days or whatever, depending on the project), the un-deduplication that had to happen brought the appliance to its knees
07:17 <teK_> I see
07:17 <teK_> sucks :P
07:17 <jaeger> 128GB RAM and 4 2.30GHz quad-core AMDs were not enough to handle the dedup
07:17 <teK_> but VMs losing their images due to that? meeeh
07:17 <teK_> oh
07:17 <jaeger> well, when the CPU and RAM resources were completely exhausted, the NFS connections went down
07:17 <jaeger> so our VMware hosts lost their connections to the datastore, and the VMs then thought they had had their disks removed, basically
07:19 <jaeger> linux makes everything read-only when its disk access times out, so all the affected VMs had to be rebooted
07:19 <jaeger> sometimes they came up fine, sometimes I had to boot recovery ISOs and do extensive FS repairs
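
A sketch of the kind of repair jaeger describes, assuming ext3/ext4 guests (the device name is hypothetical); from a recovery ISO:

  # force a full filesystem check and accept the suggested fixes
  fsck.ext4 -f -y /dev/sda1

  # for a running VM whose root went read-only after an I/O error,
  # remounting read-write can suffice once the storage is back
  mount -o remount,rw /
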
07:20 <jaeger> surprisingly, windows VMs handled it much more gracefully, usually just needing a reboot
07:40 <frinnst> most (all?) raid controllers should be pretty safe from the write hole these days
07:41 <frinnst> at least if you have some form of power backup
07:43 <jaeger> yeah, most current hardware is fine but there's a lot of old crap still out there in the wild, so to speak
08:27 *** sepen has joined #crux-devel
08:36 <jue> hi sepen, you got mail ;)
08:37 <sepen> hey
08:38 <sepen> Roelof??
08:38 <teK_> :p
08:39 <sepen> jue, the .tar.gz doesn't exist when I download the x64 source file
08:39 <sepen> and that's why I did that
08:40 <jue> oops
08:40 <jue> you start downloading from here -> http://www.oracle.com/technetwork/java/javase/downloads/index.html ?
08:41 <sepen> oops
08:41 <sepen> this morning it was .gz only
08:41 <sepen> I'm sure of that
08:42 <jue> hmm, that's strange
08:42 <sepen> well, I built my package with the Pkgfile I pushed to git
08:42 <sepen> I'm gonna update it again :P
08:44 <sepen> jdk-7u13-linux-x64.gz: gzip compressed data, from Unix, last modified: Wed Jan 30 10:38:17 2013
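
That looks like file(1) output, which only identifies the outer gzip wrapper; a quick hedged check of whether the download is really a tarball despite the plain .gz name (filename taken from the log):

  # tar lists the archive members if the gzip stream contains a tar archive
  tar tzf jdk-7u13-linux-x64.gz | head
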
08:49 <sepen> I'll do it later, I should go see my girlfriend now, bbl
08:49 *** sepen has quit IRC
11:26 *** joe9 has joined #crux-devel
12:11 *** sepen has joined #crux-devel
13:07 *** mike_k has quit IRC
14:39 <jaeger> so... we have all this RAM I'm replacing in servers... I'm going to upgrade my workstation to 24GB :D
14:43 <sepen> ;P
15:14 <frinnst> nice. I tried that but the motherboard didn't accept buffered ram :(
18:47 *** sepen has quit IRC
19:46 *** mavrick61 has quit IRC
19:47 *** mavrick61 has joined #crux-devel
20:22 <jaeger> I had to update the BIOS to make it recognize the RAM but didn't have time to test it, will do that tomorrow
23:57 <teK_> Pending posts:
23:57 <teK_> From: crux@crux.nu on Tue Oct 30 12:51:36 2012
23:57 <teK_> Subject: ports/opt (2.8): [notify] texlive: initial import replaces tetex
23:57 <teK_> Cause: Message body is too big: 8988719 bytes with a limit of 200 KB
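
That notice is Mailman's per-message size cap; a hedged sketch of inspecting and raising it, assuming a standard Mailman 2 install (the list name and install path are assumptions):

  # show the current cap; max_message_size is in KB, 0 disables it
  /usr/lib/mailman/bin/config_list -o - crux-commits | grep max_message_size

  # raise it via a config fragment
  echo 'max_message_size = 10240' > /tmp/size.cfg
  /usr/lib/mailman/bin/config_list -i /tmp/size.cfg crux-commits
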
23:57 <teK_> :P
23:57 <frinnst> lol
