IRC Logs for #crux Monday, 2011-11-14

00:52 *** frinnst has joined #crux
00:52 *** frinnst has quit IRC
00:52 *** frinnst has joined #crux
00:52 *** vee has joined #crux
01:16 *** linXea has joined #crux
01:16 *** linXea has joined #crux
01:18 <pitillo> good morning
01:18 <Romster> morning pitillo
01:26 <vee> heya
01:27 <horrorStruck> sup
01:42 <vee> how's everyone?
01:45 <horrorStruck> too much food during nutrition time --> half sleepy, checking on my shirt's buttons regularly.
01:49 <horrorStruck> but finally found the fix for the nasty mkv+dts xbmc stuttering bug so it is a great relief in my life.
02:04 <vee> and here i am, complaining about astronomy... xD
02:19 *** vee has quit IRC
02:23 <horrorStruck> yeah, lycanthropy sux :(
02:32 *** ThePub has quit IRC
03:46 *** mike_k has joined #crux
04:46 <Romster> horrorStruck, what's the fix?
04:53 *** _nono_ has quit IRC
04:58 *** _nono_ has joined #crux
05:10 *** SiFuh has quit IRC
05:12 *** SiFuh has joined #crux
05:15 *** _nono_ has quit IRC
05:19 *** _nono_ has joined #crux
05:23 *** acrux has quit IRC
05:24 *** acrux has joined #crux
05:24 *** acrux has quit IRC
05:24 *** acrux has joined #crux
05:31 *** cippp has joined #crux
05:37 <rauz_> isn't lycanthropy the change into a werewolf?
05:51 *** cippp has quit IRC
05:57 <prologic> hmm
05:57 <prologic> how does RAID 6 compare to RAID 10 for performance and fault tolerance?
05:58 <prologic> My understanding is that RAID 6 allows for more fault tolerance but still has the same performance as RAID 10?
06:14 <cruxbot> [opt.git/2.7]: [notify] flash-player-plugin: updated to 11.1.102.55
06:20 <Romster> prologic, well raid10 has higher iops
06:20 <prologic> yeah so you're basically picking higher iops over tolerance
06:20 <Romster> but worse data protection, should both disks in one raid1 pair of the raid 1+0 fail
06:21 <Romster> correct
06:21 <Romster> raid6 has more fault tolerance but a very heavy write penalty.
06:21 <prologic> is raid6 the best for fault tolerance btw?
06:22 <prologic> because I could have a raid10 nas
06:22 <Romster> and its iops are really bad due to having to read the entire stripe if you don't write the entire stripe width.
06:22 <prologic> and a raid6 nas for backup
06:22 <Romster> to recalculate the parity.
06:24 <Romster> raid6 is getting near its limits with the 2TB and larger sata disks of today. any more than about a 14-disk raid6 using 2TB or larger disks is asking for trouble. with the SAS disks that have around 600GB per disk you can go well over ~14 disks safely, as they have a far lower read error rate.
06:24 <cruxbot> [opt-x86_64.git/2.7]: [notify] flash-player-plugin: updated to 11.1.102.55
06:24 <Romster> raid10 is more for performance, and most definitely for a database.
06:24 <Romster> raid6 is best for bulk storage of files that are mostly read.
06:25 <Romster> there is talk of triple parity on the mdadm ML too. i found this url as well: http://queue.acm.org/detail.cfm?id=1670144
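
[For reference: a minimal sketch of the two layouts being compared, assuming four spare disks per array; the device names /dev/sd[b-i] and the array names are illustrative only.]

    # raid10: striped mirrors, higher iops; survives one disk failure per mirror pair
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # raid6: dual parity; survives any two disk failures, at a heavy write penalty
    mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
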
06:26 <prologic> well my idea is to have a RAID10 NAS for shared storage in a small cluster running vms
06:26 <prologic> and a RAID6 NAS for rsynced backups that are written to daily
06:27 <Romster> main issue is there is a bigger risk of hitting an unreadable block during a rebuild; data scrubbing minimizes this but it's not perfect.
06:28 <Romster> raid10 would be very good for VMs due to the high iops required.
06:28 <Romster> you could do something like use lvm on top of it and carve it up into an LV for each VM. then at points in time snapshot each LV and back that up to a raid6 array?
06:29 <Romster> which goes along with your raid6 NAS for rsynced backups
06:30 <teK_> Romster: will these snapshots work for vmware/qemu/etc. if taken during production?
06:30 <teK_> snapshots == lvm snapshots, not VM-software snapshots
06:31 <prologic> Romster, I was planning on running (possibly) ProxMox VE on my cluster anyway
06:31 <Romster> yes, lvm flushes the buffers and read-locks the disk for a second or two, just long enough to make a COW snapshot image. then it unlocks the LV for writes again.
06:31 <prologic> which supports LVM and snapshots - in fact it does its backups of vms that way
06:32 <Romster> the only special thing to consider is if you use SQL: read-lock the tables and flush before LV snapshotting
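
[For reference: one hedged way to do the read-lock-and-flush just described, holding the lock while the snapshot is taken. The volume group vg0, the LV name mysql, and the snapshot size are hypothetical.]

    # hold a global read lock while the LV snapshot is created, then release it;
    # "system" is the mysql client command that runs a shell command from
    # inside the locked session, so the lock spans the snapshot
    mysql -u root <<'EOF'
    FLUSH TABLES WITH READ LOCK;
    system lvcreate --size 1G --snapshot --name mysql_snap /dev/vg0/mysql
    UNLOCK TABLES;
    EOF
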
06:32 <teK_> Romster: you can start a VM off of a 'running'-state backup snapshot?
06:33 <teK_> 1. take snapshot 2. shut down the vm 3. restore from snapshot 4. restart VM from backed up LV
06:33 <teK_> will this work?
06:33 <Romster> you can, and recent lvm and kernel can even merge a snapshot back into its origin too. and there is work on reducing duplicate COW snapshot chunks, to reduce the PV disk space used by many snapshots with very few differences between them.
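
[For reference: the snapshot-and-merge cycle mentioned above, as it works in recent lvm2; vg0/vm1 and the sizes are hypothetical.]

    # take a COW snapshot of a VM's LV while the VM keeps running
    lvcreate --size 2G --snapshot --name vm1_snap /dev/vg0/vm1

    # later, roll the origin back by merging the snapshot into it
    # (if the origin is in use, the merge starts on its next activation)
    lvconvert --merge /dev/vg0/vm1_snap
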
06:35 <Romster> it will, but each snapshot will consume some memory resources for tracking the COW image chunks though.
06:35 <teK_> I see the VM-software as the issue, not the Host-FS/Kernel part
06:35 <teK_> you know... you have to shut down the VM-instance to restart from the backup
06:35 <Romster> i'm not certain, but the de-duplication of COW chunks might even be complete.
06:36 <Romster> i stopped following lvm2 closely the last few months.
06:36 <Romster> you could even use a snapshottable FS like xfs or something and VM image files.
06:36 <teK_> again.. I think the VM-software might screw up if you do that
06:37 <Romster> i haven't got any experience with that though.
06:37 <Romster> why would the VM screw up?
06:37 <teK_> you'd shut it down before a restore, wouldn't you?
06:38 <Romster> well if you were to restore a backup, yeah...
06:38 <Romster> but to make a snapshot you don't have to shut down the VM.
06:38 <teK_> so you'd start it with the restored virtual disk image that was taken during production (state: running)
06:38 <Romster> only if you got, say, mysql to read-lock and flush its last transactions to disk before the LV snapshot.
06:38 <teK_> backup is nothing without restore :)
06:39 <Romster> you can restore too... but you'd do that and then start the VM after.
06:39 <teK_> yeah of course
06:39 <teK_> I guess I'll have to try that
06:39 <Romster> else i would imagine the VM would go ballistic. or the OS on that VM.
06:39 <teK_> but I will have to use ESXi I guess, so I have no choice anyway?
06:40 <Romster> i'm not experienced with that, sorry.
06:40 <teK_> me neither (yet)
06:40 <Romster> it'd be best to talk to the experts on that in #lvm or in their channel/wiki/ML.
06:40 <teK_> we'll see :)
06:40 <teK_> thanks anyway
06:40 <Romster> np
06:41 <Romster> i'll only say what i know, not going to bullshit anyone with garbage :)
06:42 <Romster> i haven't run masses of VMs; i'm going by all the experience i've come across in the various channels.
06:42 <teK_> of course not :]
06:42 <Romster> but i do know lvm very well, except the clustering side of it.
06:42 <Romster> and i have a good understanding of raid.
06:43 <teK_> it took some time to grasp/remember the pv, vg, lv layers
06:43 <teK_> but that's okay as I still don't really get the hype behind ZFS :)
06:43 <Romster> i understand SMART better now too... that's an art to decipher, but it's no guarantee that it'll tell you there is imminent death
06:43 <teK_> hehe
06:44 <Romster> oh, the layers didn't have me confused for long; all the layers just use the same extent-size blocks, but each layer has its own mapping of extents.
06:44 <teK_> theeeeere you go ;)
06:45 <Romster> https://www.ibm.com/developerworks/linux/library/l-lvm2/ not the official lvm doc but a good read.
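
[For reference: the three layers teK_ mentions, built bottom to top; the device and volume names are illustrative.]

    pvcreate /dev/md0               # PV: a disk or array is divided into LVM extents
    vgcreate vg0 /dev/md0           # VG: a pool of extents from one or more PVs
    lvcreate -L 10G -n data vg0     # LV: a slice of the pool, usable as a block device
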
06:45 <teK_> they do have some really good articles over there
06:45 <teK_> :)
06:45 <Romster> i've been spending most of my time fixing mistakes by users of lvm.
06:46 <Romster> recovered data in about 90-something percent of cases.
06:46 <teK_> nice
06:46 <Romster> most of it is usually corrupt metadata issues.
06:47 <Romster> but i've had a few hard ones where i had to piece back the entire LV mappings over all the PVs and got it to work.
06:48 <Romster> the majority are: ddrescue the disk to a new disk, fix the PV UUID, restore the metadata to the VG, fsck the LVs
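
[For reference: a hedged sketch of that common recovery path. The device names, the VG name vg0, and the metadata path assume LVM's default /etc/lvm/backup location; <old-pv-uuid> is a placeholder taken from the backup file.]

    # copy the dying disk to a fresh one, skipping unreadable sectors
    ddrescue /dev/sdb /dev/sdc rescue.log

    # recreate the PV label with its old UUID from the metadata backup
    pvcreate --uuid <old-pv-uuid> --restorefile /etc/lvm/backup/vg0 /dev/sdc

    # restore the VG metadata, activate, then check each LV's filesystem
    vgcfgrestore -f /etc/lvm/backup/vg0 vg0
    vgchange -ay vg0
    fsck /dev/vg0/somelv
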
06:48 <teK_> I used it once (for el cheapo RAID0) :)
06:49 <Romster> i plan to do more elaborate setups once i can afford a SAS card and some 8- or 15-bay hdd enclosures with multiplexers.
06:49 <teK_> hehe
06:49 <teK_> I plan on building two identical VM-Servers for hosting the old stuff for my old company
06:50 <Romster> currently got some mdadm raid1's as PVs, then i've got a couple of LVs striped over all those PVs; the rest are either JBOD over both PVs or reside on a single-disk PV.
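
[For reference: roughly that layout in commands; the md device names, sizes, and LV names are made up.]

    pvcreate /dev/md0 /dev/md1          # two mdadm raid1 arrays become PVs
    vgcreate vg0 /dev/md0 /dev/md1
    lvcreate -i 2 -L 100G -n fast vg0   # striped across both PVs
    lvcreate -L 200G -n bulk vg0        # linear, spills across PVs JBOD-style
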
06:50 <teK_> they received an offer for 2 IBM dual-core xeon servers and one storage thingy with 3TB for about 21k EUR
06:50 <teK_> and I thought this should be doable with 2 cheap machines and some redundancy, too
06:51 <Romster> i'd like to move to more of a mirror plus at least 2 backups. i would love a google-like FS for linux if one existed: 64MB extents scattered over all disks, at least 3 copies of each extent, small files packed into one extent and large files spanning multiple extents.
06:51 <teK_> ocfs and gfs won't do that?
06:51 <teK_> (dunno!)
06:52 <Romster> though google uses multiple servers for the data storage and fetching, and i'll probably just have the 1 pc doing all 3 daemons.
06:52 <teK_> or hdfs..
06:52 <Romster> hmm, need to look into those.
06:53 <Romster> i'm scared that if i did, say, a 5-way mirror of each LV and 1 disk/controller decided to have some fun, and all the other mirrors then synced from the bad disk, every copy of that LV would be useless.
06:54 <teK_> it's a hard problem I guess. You'd have to do some election where the other 4 mirrors would beat the faulty one
06:54 <Romster> having a few backups at different points in time, including a 2-disk mirror LV for cases of complete disk failure to have high availability, should cover my backside.
06:54 <teK_> provided there's a mechanism to ensure data integrity ;)
06:55 <Romster> yeah, that google FS uses checksums on each extent/FS, not sure exactly how it works.
06:55 <Romster> but it can detect failing extents and relocate the data from other disks.
06:56 <teK_> there's a paper on that topic I think
06:56 *** ThePub has joined #crux
06:56 <Romster> if you've got any useful links i'd love to see them.
06:56 <teK_> http://www.ioremap.net/projects/libeblob sounds cool, too
06:56 <teK_> http://labs.google.com/papers/gfs-sosp2003.pdf
06:57 <teK_> http://labs.google.com/papers/bigtable-osdi06.pdf
06:57 <teK_> brb
06:57 <Romster> https://lwn.net/Articles/258516/ the Ceph filesystem, but i dunno if that's out or what.
06:58 <Romster> don't think i can find that url on the google fs now.
07:04 <Romster> i'm thinking device-mapper could be put to use doing such a task with some userspace/kernelspace program
07:05 <Romster> yeah, the google filesystem is what i had read in the past.
07:06 <Romster> file namespace, chunkserver... it was a good 8 months ago that i read all that.
07:06 <Romster> and one other server for, i think, data consistency/replication
07:07 *** ThePub has quit IRC
07:07 <Romster> heck, i should print this out now that i've got a CISS printer setup.
07:08 <Romster> then i can read and study it more every day.
07:08 <Romster> takes time for stuff to sink into my brain, sadly.
07:09 <Romster> sorry if i overloaded prologic with so much text too.
07:10 <Romster> i tend to drift off course.
07:10 <Romster> teK_, anything else of interest? i'm digging through my bookmarks too.
07:12 <Romster> libeblob looks nice but it's not what i'm after, though i'll bookmark it in case a use turns up.
07:21 <horrorStruck> Romster: ten years later http://trac.xbmc.org/ticket/10891 and backported to 10.1
07:21 <Romster> all these filesystems tend to be p2p over multiple hosts; i just want one node with about 20 or so disks.
07:22 <Romster> perhaps later i might have another remote node away from the same site, over a radio link.
07:23 <Romster> horrorStruck, is that fix in ffmpeg in contrib?
07:26 <Romster> if not i can apply a patch for it.
07:33 <Romster> hmm, or it's in that player that's got the bugs.
07:37 <horrorStruck> Romster: it's an ffmpeg bug, but the workaround is in xbmc
07:37 <horrorStruck> there's no patch for ffmpeg to my knowledge
07:39 <Romster> hmm, i'd be keen to fix the source of the problem, not some bandaid fix to the player.
07:40 <horrorStruck> it would be great to see a real fix in fact, this stupid bug is very annoying :(
07:44 <Romster> might explain some stuff i've noticed with syncing of sound and video too.
07:48 <horrorStruck> i've built xbmc with system ffmpeg and the bug was still there. actually this is because of broken mkv files, so we can't really blame anyone but the encoders. however some players will just play those files fine.
07:56 <Romster> hmm, so if you encode a new file with the mkvtools in contrib now, the bug does not exist?
07:57 <Romster> of course that does nothing for existing broken mkv's
07:59 <horrorStruck> i didn't try to fix the files yet, not really an option with hundreds of broken ones. better to fix the non-broken code :P
08:04 <teK_> frinnst: flash update for 64 bit (md5sum mismatch) \o/
08:05 <teK_> or my fault :)
08:18 <Romster> can't really fix broken mkv's, but hopefully the tools to make the mkvs haven't got that bug anymore.
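
[For reference: remuxing with mkvmerge from mkvtoolnix rewrites the mkv container without re-encoding the streams, which is one hedged way to regenerate a file whose container is broken; the file names are illustrative.]

    # read the streams out of the old container and write a fresh one
    mkvmerge -o fixed.mkv broken.mkv
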
08:36 *** ThePub has joined #crux
08:43 *** jdolan has quit IRC
09:02 *** jdolan has joined #crux
09:02 *** ChanServ sets mode: +o jdolan
09:15 *** joe9 has joined #crux
10:54 *** ardo has quit IRC
11:00 <linXea> for some reason some .mkv files work in VLC while mplayer fails to open them. Maybe that's just my computer
11:01 <rmull> What's the mplayer output?
11:04 *** prologic has quit IRC
11:05 <linXea> "MPlayer has finished unexpectedly"... something like that. Anyway, they play just fine in VLC. And it's only a few files that have this problem. I think it may have something to do with hardcoded subs.
11:06 <linXea> it's been the same errors cross-platform, freeBSD and linux. nothing I worry about, just pointing it out.
11:08 <rmull> Interesting, have you mentioned it to the mplayer folks?
11:10 *** prologic has joined #crux
11:11 *** prologic is now known as Guest83673
11:22 <linXea> rmull: no, i've done some basic research but never cared to dig too deep into it since VLC was always a single click away.
11:26 *** Rotwang has joined #crux
11:42 <rmull> I'm getting a major footprint mismatch when sysuping ruby
11:42 <rmull> All the files look tk related
11:43 <rmull> tk is already installed
11:43 <rmull> jue: Any thoughts on this? Am I PEBKACed?
11:50 <jue> rmull: tk is an optional dependency for ruby, so there's not much I can do
12:47 <rmull> But tk is already installed - shouldn't a ruby build see it?
12:55 <Rotwang> rmull: If you have tk installed and it isn't taken into account in the .footprint
12:56 <Rotwang> then yes, you're going to get a mismatch
12:56 <Rotwang> and you can safely ignore it
12:56 <Rotwang> people will say that I'm crazy, but in most cases I simply ignore .footprints
13:12 <jaeger> rmull: sounds like the ruby maintainer doesn't use tk, I'd guess
13:12 <jaeger> to add on to jue's "optional dependency" comment
13:32 *** Evil_Bob has joined #crux
13:32 <rmull> Okay. I'll just ignore footprints and consider this "notabug," thanks all.
13:33 <jaeger> Sometimes it's fine to ignore them but they're very useful for double-checking
13:40 *** frinnst has quit IRC
13:53 <rmull> Anybody have a mirror for usbutils-004.tar.bz2?
13:54 <rmull> Found one, never mind
13:54 *** Guest83673 is now known as prologic
13:54 *** prologic has joined #crux
14:09 <rauz_> i hate the windows install crap, it takes forever: updates and updates and drivers and searching for drivers ...
14:09 <Rotwang> [;
14:10 <rauz_> and the reboots, ahh, they drive me crazy
14:37 *** frinnst has joined #crux
16:28 *** Rotwang has quit IRC
16:35 *** jdolan has quit IRC
16:50 *** Evil_Bob has quit IRC
17:00 *** linXea has quit IRC
17:20 *** jdolan has joined #crux
17:20 *** ChanServ sets mode: +o jdolan
17:24 *** mike_k has quit IRC
17:29 *** vee has joined #crux
17:45 *** tilman has quit IRC
17:45 *** tilman has joined #crux
18:00 <vee> my brain hurts
18:13 *** aarchvile has quit IRC
18:13 *** aarchvile has joined #crux
18:38 *** ThePub has quit IRC
18:52 *** vee has quit IRC
18:54 *** Ovim|afk_nA has quit IRC
19:07 *** Ovim-Obscurum has joined #crux
19:31 <Romster> linXea: gdb/strace it
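
[For reference: the kind of debugging being suggested for the mplayer crashes above; the file name is illustrative.]

    # log every syscall mplayer makes up to the point it dies
    strace -f -o mplayer.trace mplayer problem.mkv

    # or run it under gdb and grab a backtrace at the crash
    gdb --args mplayer problem.mkv
    (gdb) run
    (gdb) bt
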
19:32 <Romster> i use -im on pkgmk in prt-get.conf to ignore new files.
19:33 <Romster> rmull, only missing files are considered a bug.
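
[For reference: a hedged sketch of passing extra pkgmk flags through prt-get.conf via its makecommand setting; the flag is the one the speaker names, not verified here.]

    # /etc/prt-get.conf (excerpt)
    makecommand pkgmk -im
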
19:34 <Romster> rauz_, yeah, windows sucks big time. there is Software Informer, that's not too bad.
21:21 *** joe9 has quit IRC
21:34 *** crshd has joined #crux
21:49 *** crshd has quit IRC
21:50 *** sabayonuser has joined #crux
21:50 *** Dudde has quit IRC
21:51 *** crshd has joined #crux
21:51 *** Dudde has joined #crux
21:57 *** sabayonuser has quit IRC
22:17 *** crshd has quit IRC
22:17 *** crshd has joined #crux
