IRC Logs for #crux Monday, 2012-10-15

*** vaddi has joined #crux00:35
*** Rotwang has joined #crux01:15
*** sh4rm4 has quit IRC01:20
*** sh4rm4 has joined #crux01:22
*** lasso|qt has joined #crux01:33
*** sammi` has quit IRC01:34
*** mike_k_ has joined #crux01:39
*** s44 has joined #crux01:47
*** spider44 has quit IRC01:49
frinnstteK__: lol02:09
prologicjaeger, thanks for your comments - now some of my own02:23
prologicyes I do watch enough TV - at least I schedule enough to warrant 4 tuners02:23
prologicI'm already in situations where I run out of available tuners02:23
prologicin a household you can imagine this happening quite frequently with multiple mythtv frontends02:23
prologicI'm not sure about the TRIM and SSD performance with software RAID1 though - will have to look into this02:23
prologicI wanted RAID1 SSDs for the OS to lessen the risk of losing the OS and config02:24
prologicbut if you lose performance, I'll just stick with 1 SSD for all my OS drives02:24
frinnstyou could use btrfs and raid it that way :)02:28
frinnstthen you won't lose trim02:28
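
For reference, a minimal sketch of the setup frinnst is suggesting: a two-device btrfs filesystem with mirrored data and metadata, so the redundancy lives in btrfs rather than md and TRIM still reaches the SSDs. Device names and mount point are hypothetical:

    # mirror data and metadata across two SSDs (assumed device names)
    mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb
    btrfs device scan
    mount /dev/sda /mnt
    # TRIM can then be issued through the filesystem, either periodically...
    fstrim -v /mnt
    # ...or continuously via the 'discard' mount option in /etc/fstab
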
prologichmm02:41
prologicis it stable enough?02:41
frinnstdepends on what features you use02:58
frinnsti run it on my desktop as / and as /lotsofmediafiles :)02:59
frinnstno dataloss yet! :)02:59
frinnstbut I have backups (on another btrfs system) :)03:00
prologicright03:05
prologicwell I'm planning on a ZFS NAS soon03:05
prologicbut also a new mythtv backend (rack mounted)03:05
prologicactually all this stuff rack mounted03:05
prologicI probably don't mind btrfs'ing the media OS and Storage drives03:05
*** Rotwang has quit IRC03:05
prologicbut obviously going with ZFS (FreeNAS or Solaris) for the NAS/SAN03:06
frinnstbah. how boring :)03:12
frinnstit will just work and not present any interesting challenges at all :)03:12
niklaswelalala04:00
niklaswedoes someone know which ports vsphere using to connect to esxi-host?04:01
*** nogagplz_ has joined #crux04:07
pitillomorning04:11
frinnst44304:11
frinnstor, exactly what kind of traffic are you thinking of?04:12
frinnstdifferent ports are used for different stuff, vmotion, HA, etc04:13
niklaswefrinnst: you know when you connecting to your esxi via vsphere client.. and can create new virtual machine and stuff..04:13
frinnstyeah that should be https04:13
frinnsthttps://lkml.org/lkml/2012/10/13/127 linus is fun04:31
frinnstbtw just noticed trim support has been added for 3.7 with md05:13
frinnstatleast for raid505:13
frinnsthttp://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=commit;h=620125f2bf8ff0c4969b79653b54d7bcc9d40637 05:13
prologicniklaswe, afaik vsphere is a windows only client05:30
prologicthey don't make a client for any other platform05:30
prologicafaik they do make a web-based one (iirc requires jvm)05:30
prologicfrinnst, ahhh nice :)05:31
prologicmaybe I'll be able to use normal raid afterall05:31
prologicI was planning (for my mythtv box anyway) RAID-1 SSD drives for the OS and 4x 1TB RAID-10 drives for video storage05:32
frinnstnah the new client is flash05:37
frinnstwith 5.105:37
frinnstprologic: i'd rather spend the money from one of the ssds on the spinning media. maybe 2tb?05:39
frinnstalso, raid505:39
prologicahh right05:39
prologicnews to me :)05:39
prologichmm05:39
prologicI do wonder about the chances of an SSD failure though05:39
prologicand having to re-image a new OS drive (SSD)05:39
prologicalthough my current backend/frontend box atm has one and it's still going strong05:40
frinnstyeah but the nature of ssd failures is probably very different from spinning media05:40
prologictrue05:40
prologicI could probably get away with just the one05:40
prologicyou're right05:40
prologicbut RAID-5?05:40
frinnsti've never had one fail for me.. but corruption is probably more common than just electronic failures05:40
prologicI tend to think I might need the extra write speed for 4 tuners05:40
frinnstno clue about what io performance you need. just feel that raid10 is a bit of wasted space for home use05:41
frinnst:)05:41
frinnstbut you probably know much better what kind of io you'll need than me05:41
prologicyeah I haven't found any concrete I/O numbers yet05:41
niklasweprologic: I know, but I will use an ssh-tunnel to connect to my esxi at home..05:41
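
A rough sketch of the tunnel niklaswe describes, assuming the client traffic really is plain HTTPS on 443 as frinnst suggested; the hostnames and local port are hypothetical:

    # forward a local port through the home gateway to the ESXi host's 443
    ssh -L 8443:esxi.lan:443 user@home-gateway.example.org
    # then point the vSphere client at localhost:8443 instead of the host
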
prologicalthough I should be able to measure some off my current box05:42
prologicsee what the I/O is like with one MPEG stream05:42
prologicoff the tuner05:42
frinnsti use 4x 2tb "green" drives in raid505:42
prologicI have server builds for NAS, Media and VM Server - all with RAID1 SSD OS drives05:42
prologicI kinda feel like sticking with that :)05:43
frinnst=)05:43
prologicI should measure the I/O on my current box before hading to bed05:43
prologicgive me some real numbers to play with05:43
prologick have a HD program recording05:44
prologicgah05:45
prologicdon't have iotop installed05:45
prologicCould not run iotop as some of the requirements are not met:05:45
prologic- Linux >= 2.6.20 with05:45
prologic  - I/O accounting support (CONFIG_TASKSTATS, CONFIG_TASK_DELAY_ACCT, CONFIG_TASK_IO_ACCOUNTING)05:45
prologicany brilliant ideas? :)05:46
frinnstbuild the kernel with the required support? :)05:46
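
The requirements iotop lists are kernel config symbols, so frinnst's suggestion amounts to enabling them and rebuilding; a sketch assuming a typical kernel source tree:

    # enable I/O accounting in the kernel .config
    # (make menuconfig: under "General setup", task statistics / delay accounting)
    CONFIG_TASKSTATS=y
    CONFIG_TASK_DELAY_ACCT=y
    CONFIG_TASK_IO_ACCOUNTING=y
    # then rebuild, install and reboot into the new kernel
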
frinnstfucking .net runtime compile wasdfaksd cpu05:48
prologicyeah no :)05:48
prologictoo much farting around05:48
prologicI just measured roughly with watch -n 10 -d du -h <the .mpeg file>05:49
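
Spelled out a little more precisely, the same rough measurement in bytes so the rate is easier to work out; the recording path is hypothetical:

    # print the file size every 10 seconds and highlight the change;
    # the per-interval delta divided by 10 is the approximate write rate
    watch -n 10 -d du --bytes /var/lib/mythtv/recordings/1234_20121015.mpg
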
prologic~80Mbps for a single HD stream05:49
prologicjudging by that I definitely need RAID-10 to cope with 4x HD streams05:49
frinnsti have *no* issues saturating my 1gbit connection with 4x2tb in raid5 at home05:50
prologic7200rpm spinning media will cope with 2 streams by that figure05:50
prologicbut not 3 or 405:50
prologicahh yeah05:50
prologicthat's reading though05:50
prologicwhat about writing?05:50
prologicraid5 has read performance benefits anyway05:50
frinnstyeah but i mostly read from that box05:50
prologicthis is my point though05:51
prologic4x HD tuners streaming to disk05:51
prologicI _will_ need better write performance than that of a single disk05:51
prologicat least 2x write performance05:51
frinnstwhat's the write rate of one of those? 80mb/s?05:51
prologicwait05:52
prologicI think I made a mistake05:52
prologiclet's do this again05:52
prologicahh05:53
prologicthat's more like it05:53
prologicforget what I said05:53
prologica HD stream off my tuners is only 8Mbps05:53
frinnstah05:53
prologic~1MB/s05:53
frinnstthats nothing :)05:53
prologica single disk can handle 4x of those05:53
prologiceasily05:53
prologicyeah05:53
prologicworrying about nothing05:54
prologicI could just LVM a set of disks together05:54
prologicand not worry about any kind of redundancy05:54
frinnst/dev/md0        5.4T  3.7T  1.8T  68% /home05:54
prologicit's live tv and scheduled recordings anyway05:54
prologicwho cares right?05:54
prologicif you want to keep stuff - chuck it on the ZFS NAS05:54
prologicor you think I should/could just raid it anyway?05:55
prologiccombine the block level devices into one?05:55
frinnstwell might as well, raid5 :)06:01
prologicyeah I think so06:01
prologicdone :)06:02
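
A minimal sketch of the array being settled on here, assuming mdadm and four hypothetical 1TB data drives:

    # 4-drive RAID-5 for the recording store (device names are assumptions)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1
    mkfs.ext4 /dev/md0
    # record the array so it assembles at boot
    mdadm --detail --scan >> /etc/mdadm.conf
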
prologicnight:)06:02
*** tilman has quit IRC06:04
*** tilman has joined #crux06:06
*** ChanServ sets mode: +o tilman06:06
cruxbot[opt.git/2.7]: xterm: updated to 28406:07
jaegerprologic: the single best advive I can give on SSD purchasing is this: Do not buy OCZ.06:09
jaegerprologic: other than that, I'd suggest a single SSD for the OS for the reasons already mentioned, but make backups periodically06:09
jaegerFrom a statistics standpoint SSD failure rates aren't bad at all06:10
frinnstyeah raid is not a backup solution06:10
jaegers/advive/advice/06:10
niklaswehave you ppl got this message too after updating iptables... WARNING: The state match is obsolete. Use conntrack instead.06:18
niklaswelooks like it's a bug.. https://bugs.launchpad.net/gentoo/+bug/1065297 06:20
teK__prologic: go for a Samsung 830 as they just released the 840 series06:30
teK__displaimer: I own and use OCZ SSDs and one of them (Vertex3) outlived a WD Green 3TB after one year of use06:30
* frinnst only buys intel06:30
teK__*claimer06:30
teK__frinnst: not all of us are rich snobs :p06:31
frinnsthah, they are not expensive06:31
frinnst330 series is cheap as hell06:31
jaegerI own 2 OCZ SSDs - anecdotally they've both failed... 1 of them I RMA'd which was a nightmare due to their service06:31
teK__were you just lighting a cigar with a 100$-bill?06:32
jaegerthe other I didn't bother, replaced it with better06:32
jaegerI have 2 intels, 2 samsungs, and the 1 OCZ that I use still06:32
jaegerThe samsungs are my favorite (256GB 830s)06:32
teK__I had no idea/choice back then :)06:32
*** joe9 has joined #crux06:38
frinnstthe samsungs are more expensive than the intel ones06:39
frinnstatleast at the store where im looking06:39
joe9Romster: u around?06:40
prologichmm06:43
prologicyeah I'm going with Intel 520 series SSDs06:43
pitilloI was thinking of buying a ocz agility 3 (60GB) "cheap" 55e... I was looking at corsair and kingston too because samsung and intel were out of price06:43
prologicbut you've got me thinking that RAID1 SSDs are a waste of time - just keep a dump of the OS drive backed up to the NAS06:43
jaegerThat's my recommendation06:55
jaegeranother thing to consider: the larger the SSD the better the performance, to a certain breakpoint06:56
jaegera single 120GB or 240GB SSD will outperform 2 60s in any RAID config06:56
jaegerEven if TRIM worked06:56
jaegerteK__: yeah, the 2 OCZs I own were bought before I knew better, they were my first 206:56
prologicI think for an OS drive though, the smaller ones will be plenty good enough06:57
prologicyou only want it to boot quickly right?06:58
jaegerEven the small ones will stomp most platter drives, just thought you should be aware06:58
prologic*nods*07:00
teK__I bet that for boot time latency is much more important than bandwidth07:01
prologicthat's why I plan on using the Intel 520 120GB SSDs for a ZIL and an Intel 520 240GB for an L2ARC07:01
prologic(NAS)07:01
teK__on a related side note: Never underestimate the bandwidth of a station wagon full of tapes. -- Dr.  Warren Jackson, Director, UTCS07:01
prologichaha07:01
prologichmm curiously07:02
prologicwill a 2.5" SSD fit in an FDD tray?07:02
jaegerMost will with a bracket07:03
jaegeror without, if you prefer some other solution. velcro is a popular one07:03
jaegerActually, I'm not sure about that. If the FDD tray doesn't have the same mounting spots as a 3.5" HDD tray, it might not07:04
jaegerMost of those brackets are for HDD trays07:04
prologicmaybe I'll get the Norco 2008 (http://www.norcotek.com/item_detail.php?categoryid=1&modelno=RPC-2008) over the 210607:04
prologicor just screw it into the underside of the case :)07:05
jaegerThere are no moving parts so put it wherever you like07:06
*** sepen has joined #crux07:16
cruxbot[opt.git/2.7]: desktop-file-utils: updated to 0.2107:16
cruxbot[opt.git/2.7]: subversion: updated to 1.7.707:16
cruxbot[opt.git/2.7]: subversion-bashcompletion: updated to 1.7.707:16
sepenhi07:16
jaegerheyo07:16
sepenrc3 worked here (update) without problems07:17
jaegerGood :)07:17
jaegerprologic: amusingly I ran a ZFS test for a while with a USB thumb drive as L2ARC07:18
prologichaha07:19
prologichow'd that go?07:19
prologictrying to work out what power supplies I need for these configs07:19
prologicI think a 2U 450W PSU is okay for this media box config07:19
jaegerIt worked quite well, actually07:20
prologicwow07:20
jaegerOn the subject of ZFS is this something you're planning for the future or have you already worked with it? It might not be worth spending the money on SSDs for ZIL and L2ARC07:21
jaegeryou should do some statistical research on the workload first07:21
jaegerThere are some scripts specifically for that, in fact. zilstat and arc_summary07:21
jaegerIf you have enough RAM then the L2ARC might handle itself without any need for SSD07:22
jaegerAnother consideration: MLC SSDs aren't ideal for ZIL usage07:22
jaegerSLC SSDs are very expensive07:23
jaegerAn alternative configuration would be to leave L2ARC alone and mirror 2 MLC SSDs for the ZIL07:24
prologicand also hard to find (SLC SSDs)07:28
prologicand the MLC versions seem to be getting faster07:28
jaegerSpeed isn't the issue in this case, though, it's reliability07:28
prologicsure for the ZIL07:28
jaegerFailed writes to a single MLC-based ZIL could screw up your data, theoretically07:28
prologicI haven't found any SLC SSDs so far07:29
prologicnot in AU anyway07:29
jaegerPersonally I'd recommend building without SSDs and running some stats but that's just me :)07:29
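
To make jaeger's suggestions concrete, a sketch assuming a pool named 'tank' and hypothetical device names; the stats tools are the ones named above (zilstat and arc_summary):

    # measure first: is a separate log device or cache device even justified?
    ./zilstat 10              # synchronous write (ZIL) activity
    ./arc_summary.pl          # ARC size and hit rates; enough RAM may be plenty
    # if a ZIL SSD is warranted, mirror two of them rather than using one
    zpool add tank log mirror /dev/ada4 /dev/ada5
    # an L2ARC, if ever needed, is just a cache vdev
    zpool add tank cache /dev/ada6
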
*** joe9 has quit IRC07:30
prologicsure07:32
prologicat least I'd get to fill the entire 24-bays :)07:32
prologicright now though, it's the spinning media that's the most expensive part (other than the board, cpu, ram, case)07:33
jaegerI recently decommissioned my 48-drive FreeBSD ZFS server =/07:34
jaegerI might try rebuilding it with solaris if I have time but for now it's just down07:34
prologicahh07:35
prologicend of life?07:35
jaegerNah, just ran into a problem with FreeBSD using the Sun J4400 expanders properly07:36
jaegerIt works fantastically well for data service but when a drive fails the entire thing halts07:36
jaegerWell, the server doesn't halt, I should be clear. ZFS commands all hang until the server is rebooted but data service isn't interrupted.07:37
jaegerSo it's not hot-swappable, even though the hardware supports that07:37
jaegerWith that said we have 12 of these expanders; 8 of them are connected to Solaris servers07:40
prologicahh07:41
prologicsucks07:41
*** joacim has quit IRC07:46
*** joacim has joined #crux07:46
prologicyou can't start a 3 disk raid-5 and grow the raid-5 by another 3 disks can you?07:53
prologicnot without destroying the array and starting over?07:53
jaegerwith mdadm?07:56
prologicin general07:56
jaegeras far as I know you can but it depends entirely on the implementation07:56
jaegerso whether or not it's mdadm is important07:57
jaegerMaybe a better way to word that is "maybe." :)07:57
prologicheh07:59
jaegerI have no idea, for example, if you can do that with intel's RST stuff07:59
jaegerwith mdadm it's not difficult07:59
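
With mdadm, the grow prologic asks about looks roughly like this; device names are hypothetical, and a reshape of a live array is slow, so back up first:

    # add three new disks, then reshape the 3-disk RAID-5 onto all six
    mdadm --add /dev/md0 /dev/sdf1 /dev/sdg1 /dev/sdh1
    mdadm --grow /dev/md0 --raid-devices=6
    # when the reshape finishes, grow the filesystem (ext4 example)
    resize2fs /dev/md0
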
prologichttps://docs.google.com/spreadsheet/ccc?key=0AikgBiDqYO_rdGwxSk0xQkRsbG9aVjhtcVg4YVd0Rnc 08:26
*** lasso|qt has quit IRC08:33
*** lasso|qt has joined #crux08:43
jueniklaswe: why a bug? '-m state' is just obsolete now, use '-m conntrack' instead, or what's your problem?09:35
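
The warning niklaswe pasted only asks for a mechanical rewrite of existing rules; a minimal example of the substitution jue describes:

    # old form, now flagged as obsolete
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # equivalent rule using the conntrack match
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
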
*** joacim has quit IRC09:42
*** Rotwang has joined #crux10:59
*** lasso|qt has quit IRC11:10
*** sh4rm4 has quit IRC11:28
*** sh4rm4 has joined #crux11:52
*** `c0x has joined #crux11:57
*** c0x` has quit IRC12:00
*** lasso has joined #crux12:05
*** joe9 has joined #crux12:15
cruxbot[opt.git/2.7]: mod_svn: update to 1.7.712:27
*** Evil_Bob has joined #crux13:32
*** lasso_ has joined #crux13:48
*** lasso has quit IRC13:51
*** lasso_ has quit IRC13:57
*** jdolan_ has joined #crux14:01
*** jdolan has quit IRC14:02
*** Evil_Bob has quit IRC14:29
*** vee_ has joined #crux14:34
vee_hello hello14:34
*** roquesor has joined #crux14:39
niklaswehello14:47
vee_gotta ask a question, but, will do so once i get to class14:52
*** vee_ has quit IRC14:52
*** vaddi has quit IRC15:16
prologicjaeger, comments?15:27
jaegeron what, the spreadsheet?15:32
prologicyeah :)15:33
prologicI think it's complete15:33
prologicat least I don't think I'm missing anything15:34
jaegerlooks alright15:35
*** mike_k_ has quit IRC15:53
*** joacim has joined #crux16:35
Nomiusjaeger: did you rebuild cairo?16:36
*** Rotwang has quit IRC16:52
*** jdolan_ has quit IRC18:29
jaegerNomius: No, I couldn't find a reason to... not sure what you meant about backends being disabled18:46
jaegerNomius: http://pastebin.com/sFEEUk8L18:47
NomiusLet me show you20:19
NomiusHey, guess what...20:21
NomiusI'm an ass20:21
Nomiuscairo is just fine as it is20:21
*** horrorSt1uck has joined #crux20:28
*** horrorStruck has quit IRC20:31
RomsterNomius, welcome to computers :)20:35
NomiusYeah20:37
NomiusAnyways, pango 1.32.1 doesn't build...20:37
RomsterNomius, what 32/64/multilib and 2.7.1 or 2.8?20:42
Nomius2.8rc320:46
NomiusLatest pango is 1.32.1, but we have 1.30 in the iso/ports20:47
NomiusAnd 1.32 was released a month ago20:47
Romsterand 1.30 is listed on the gtk site20:47
NomiusOk, so I guess http://ftp.gnome.org/pub/gnome/sources/pango/1.32/ might be unstable20:51
Romsterquite possibly si it 1.32 that your failing to build?20:52
Romsteris*20:52
NomiusYeap20:52
Romstermore than likely pango may need a dev version of cairo20:59
Romsterthat isn't released yet, or some other dependency; either way you do need to package harfbuzz21:00
Romsterlatest pango depends on harfbuzz21:00
NomiusYeah21:00
Romsterok i'm back to work, later21:01
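
Since the thread ends with harfbuzz needing to be packaged before the new pango can build, a rough sketch of what a CRUX Pkgfile for it might look like; the version, source URL and dependency list are assumptions, not a tested port:

    # Description: OpenType text shaping engine
    # URL: http://www.freedesktop.org/wiki/Software/HarfBuzz
    # Depends on: glib, freetype

    name=harfbuzz
    version=0.9.5
    release=1
    source=(http://www.freedesktop.org/software/harfbuzz/release/$name-$version.tar.bz2)

    build() {
        cd $name-$version
        ./configure --prefix=/usr
        make
        make DESTDIR=$PKG install
    }
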
*** horrorStruck has joined #crux21:17
*** horrorSt1uck has quit IRC21:20
*** Flynn` has joined #crux21:37
*** Flynn` has left #crux21:37
*** __mavrick61 has quit IRC21:44
*** __mavrick61 has joined #crux21:46
joe9Romster: can you please update my .pub file?21:48
joe9Romster: very sorry for the bother.21:48
joe9Romster: I generated it with -dsa, which I think is better than rsa.21:48
*** sepen has quit IRC22:18
*** vee has joined #crux23:30
veehello again23:30
*** vee has quit IRC23:38
*** spider44 has joined #crux23:48
*** s44 has quit IRC23:50
*** vaddi has joined #crux23:58
