IRC Logs for #circuits-dev Friday, 2013-01-04

01:15 *** Osso has joined #circuits-dev
02:50 *** Osso has quit IRC
04:10 *** Osso has joined #circuits-dev
05:40 *** Osso has quit IRC
05:40 *** Osso has joined #circuits-dev
05:41 <mehere> ping Osso?
05:42 <prologic> ok fine we'll chat here
05:42 <prologic> heh
05:42 <prologic> hi mehere :)
05:42 <prologic> how was your Christmas and New Year's?
05:42 <mehere> Hi, just wanted to know whether you also still have problems with the IPv6 tests
05:43 <prologic> nope
05:43 <prologic> all passing here
05:43 <prologic> but I believe you need to have an IPv6 capable machine
05:43 <prologic> I have an IPv6 network here so I'm all good
05:43 <prologic> but you probably know that :)
05:43 <mehere> Mmh. After they failed on 2.6, I tried 2.7 but they still fail
05:43 <prologic> I'm on 2.6.3 here
05:44 <mehere> ('course I got an IPv6 machine)
05:44 <prologic> sorry
05:44 <prologic> 2.6.6
05:44 <prologic> in what way are they failing?
05:44 <prologic> and stupid question
05:44 <prologic> are you up-to-date?
05:44 <prologic> repo-wise
05:45 <mehere> I'm up-to-date. I get "coercing to Unicode: need string or buffer, tuple found" from socket.bind when ('::1', 0, 0, 0) is used as host
05:46 <prologic> okay that's very bizarre
05:46 <prologic> Python compiled with IPv6 support?
05:47 <mehere> There's a check, right? Anyway, bind with ("::1", port) succeeds
05:47 <prologic> k now I'm confused
05:49 <mehere> The debugger goes right down into the native code. But the native code doesn't do any coercing to unicode as far as I can see.
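[Editor's note: a minimal sketch of the failure mode discussed above, reconstructed from the error message rather than from the circuits source. For AF_INET6, socket.bind() expects the sockaddr itself to be the (host, port, flowinfo, scopeid) tuple; nesting that 4-tuple where the host string belongs makes Python 2 try to coerce the tuple to a string, producing exactly the quoted error.]

    import socket

    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)

    # OK: the documented AF_INET6 sockaddr, (host, port, flowinfo, scopeid)
    sock.bind(("::1", 0, 0, 0))

    # Broken: the 4-tuple used as the *host* element of a 2-tuple --
    # raises "coercing to Unicode: need string or buffer, tuple found"
    # sock.bind((("::1", 0, 0, 0), 0))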
05:53 <prologic> I'm unsure tbh
05:53 <mehere> Well, knowing that it works at your site is already a hint. I'll look into it further.
06:02 <prologic> all tests actually pass on 2.6, 2.7
06:02 <prologic> on both Linux and Mac OS X
06:02 <prologic> I assume FreeBSD as well - but I don't have an active BSD box
06:24 <prologic> Osso, ping
06:24 <prologic> !!!
06:24 <prologic> :)
06:24 <prologic> omg
06:24 <prologic> http://codepad.org/KBN2PuBt
06:24 <prologic> I think I just solved multiprocessing for circuits
06:24 <prologic> so simple I should have done this years ago
06:30 <prologic> pushed
06:30 <prologic> eat that!
06:30 <prologic> http://hg.softcircuit.com.au/circuits-dev/commits/cfb68b6fae0adca1806b796fb2f88e3815f206d4
06:31 <prologic> I can see some problems with this though
06:32 <prologic> there might be instances where you don't want your parent<->child processes communicating over a pipe
06:32 <prologic> you might want to disable this
06:32 <prologic> flag in .start(...) ?
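[Editor's note: the codepad paste above has not survived, so here is a hedged, self-contained sketch of the idea prologic describes -- running a child process with a multiprocessing.Pipe linking it back to the parent so events can cross the process boundary. All names are illustrative, not the actual circuits code.]

    from multiprocessing import Pipe, Process

    def child_loop(conn):
        # Child side: receive (event, channels) tuples and answer them.
        while True:
            msg = conn.recv()              # recv() unpickles automatically
            if msg is None:                # simple shutdown sentinel
                break
            event, channels = msg
            conn.send(("handled", event, channels))

    if __name__ == "__main__":
        parent_end, child_end = Pipe()
        p = Process(target=child_loop, args=(child_end,))
        p.daemon = True
        p.start()

        parent_end.send(("hello", ("*",)))   # "fire" an event across the link
        print(parent_end.recv())             # ('handled', 'hello', ('*',))
        parent_end.send(None)                # ask the child to exit
        p.join()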
06:34 *** Osso has quit IRC
07:04 <prologic> mehere, still around?
07:09 *** Osso has joined #circuits-dev
07:10 <prologic> lol
07:10 <prologic> just making a few refinements now
07:10 <prologic> but yes
07:10 <prologic> worker threads - done
07:10 <prologic> worker processes - done
07:10 <prologic> thread pool - done
07:10 <prologic> process pool - done
07:10 <prologic> @future() threaded - done
07:10 <prologic> @future() processed - done
07:11 <prologic> starting a component in process mode and linking back to its parent - done
07:11 <prologic> by default this is off though
07:11 <prologic> have to
07:11 <prologic> .start(process=True, link=True)
07:39 <Osso> you are scarily efficient recently
07:42 <mehere> Fixed the IPv6 problem. I have no idea why this has ever worked at your site, but now the bind gets the parameters as specified in the Python docs.
07:46 <Osso> that's because that line is resolving the localhost
07:46 <Osso> it'll be the same no matter what the port number is
07:47 <Osso> how does this work: "event, channels = self._pipe.recv()" -- how does it deserialize automatically?
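[Editor's note: to answer Osso's question -- multiprocessing.Connection.send() pickles its argument and recv() unpickles it, so any picklable object round-trips transparently:]

    from multiprocessing import Pipe

    a, b = Pipe()
    a.send(("my_event", ("chan1", "chan2")))   # pickled under the hood
    event, channels = b.recv()                 # unpickled back into a tuple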
07:51 *** Osso has quit IRC
07:51 *** Osso has joined #circuits-dev
08:04 <prologic> Osso
08:04 <prologic> shhh :0
08:04 <prologic> don't tell anyone
08:04 <prologic> but I f00ked up
08:04 <prologic> multiprocessing might be working (sort of)
08:05 <prologic> but events aren't going to the other side
08:05 <prologic> and I forgot to handle return values
08:05 <prologic> gah
08:07 <mehere> How was that relating Pivotal and checkins...
08:07 <mehere> Something with #number
08:10 <Osso> ahahahaha
08:23 <prologic> mehere, eh?
08:23 <mehere> yeah?
08:23 <prologic> [Fixes #...]
08:23 <prologic> [Delivers #...]
08:23 <prologic> in the commit msg
08:24 <mehere> Too late for this time, I'll try next time. Somehow I couldn't find it in the Pivotal docs.
08:24 <mehere> Thanks anyway
08:25 <prologic> yeah it's a bit hard to find at first
08:25 <prologic> all good :)
08:25 <prologic> just hit Finish and Deliver in the web ui :)
08:26 <mehere> No, not yet. I wanted to synchronize with what you have done, but there are still some things tbd
08:27 <prologic> ah nps
08:27 <prologic> Osso, you fixing component targeting?
08:28 <Osso> I was planning to at some point
08:28 <prologic> haha
08:28 <prologic> you wanna take a look at this multiprocessing crap?
08:28 <prologic> http://codepad.org/qoT0yAVn
08:28 <prologic> is the sample I'm playing with
08:29 <Osso> the reason I am fixing it is because I did not get enough sleep today
08:29 <prologic> lol
08:29 <prologic> yeah it's nearly 3am here for me
08:29 <prologic> I should go to bed soon
08:31 <Osso> don't we need all the stuff in circuits.node to make this work?
08:31 <prologic> not really no
08:31 <prologic> I was trying to do it with multiprocessing.Pipe
08:32 <prologic> without relying on anything else in circuits
08:32 <prologic> so in process mode, there are two threads
08:32 <prologic> the MainThread
08:32 <prologic> and a Comms Thread
08:32 <Osso> Pipe only replaces the socket but it does not replace serialization/deserialization/reading/writing etc
08:34 <prologic> should we use multiprocessing.Queue instead?
08:34 <prologic> it does handle serialization/deserialization though
08:34 <prologic> via .send and .recv methods
08:35 <Osso> ah!
08:35 <Osso> yes Queue sounds good
08:37 <prologic> really?
08:37 <Osso> you have to at least sound convinced yourself
08:38 <prologic> well
08:38 <prologic> a Queue should work just as well
08:38 <prologic> however it does take a performance hit
08:38 <prologic> but in theory I could do away with the Comms Thread tho?
08:39 <prologic> right now nothing is working damnit :(
08:42 <Osso> well the Comms Thread
08:43 <Osso> oh I see
08:43 <Osso> yes I think so
08:56 <prologic> RuntimeError: Queue objects should only be shared between processes through inheritance
08:56 <prologic> bah
08:56 <prologic> I keep getting this error
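[Editor's note: a minimal reproduction of the rule behind this RuntimeError. A multiprocessing.Queue must be handed to the child when the Process is created (i.e. inherited); trying to ship it through another queue or pipe raises the error quoted above.]

    from multiprocessing import Process, Queue

    def worker(q):
        q.put("hello from the child")

    if __name__ == "__main__":
        q = Queue()
        p = Process(target=worker, args=(q,))   # OK: inherited at start()
        p.start()
        print(q.get())
        p.join()

        # other = Queue()
        # other.put(q)   # RuntimeError: Queue objects should only be
        #                # shared between processes through inheritance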
09:54 *** Osso has quit IRC
12:05 *** Osso has joined #circuits-dev
13:12 *** Osso has quit IRC
13:33 *** Osso has joined #circuits-dev
13:38 *** Osso has quit IRC
13:39 *** Osso has joined #circuits-dev
14:38 *** Osso has quit IRC
18:34 *** Osso has joined #circuits-dev
19:34 *** Osso has quit IRC
03:57 *** mehere has joined #circuits-dev
04:37 <prologic> mehere, ping
08:09 *** Osso has joined #circuits-dev
08:09 <Osso> prologic: did you manage to use the Queue?
08:49 <Osso> I have 18 tests failing here
08:50 <Osso> who broke test_tcp!
08:51 <Osso> test_process is randomly failing here uhm
08:51 <Osso> and test_worker_process too
08:54 <Osso> it may be related to me pressing ctrl-c
08:54 <Osso> it is kept and only passed to started processes?
08:54 <Osso> need to fix that
08:55 <Osso> but first!
08:55 <Osso> component targeting
09:16 *** Osso has quit IRC
09:24 *** Osso has joined #circuits-dev
10:25 *** Osso has quit IRC
11:32 *** Osso has joined #circuits-dev
11:32 *** Osso has quit IRC
11:33 *** Osso has joined #circuits-dev
12:33 *** Osso has quit IRC
12:58 *** Osso has joined #circuits-dev
15:04 <prologic> Osso, hey
15:04 <prologic> did you do a fetch/pull?
15:04 <Osso> hello
15:05 <prologic> I brought back the Bridge
15:05 <prologic> all tests pass here
15:05 <Osso> oh nice
15:05 <prologic> the Bridge is very similar to the Node
15:05 <Osso> we have a pb with ctrl+c
15:06 <prologic> except that it takes a socket (specifically a UNIXClient socket)
15:06 <prologic> ie:
15:06 <prologic> a, b = Pipe()
15:06 <prologic> Bridge(a)
15:06 <prologic> umm
15:06 <prologic> the problem you speak of is with tests/core/test_pools.py
15:06 <prologic> running the entire test suite
15:06 <prologic> all tests pass
15:07 <prologic> but there is a dead process left over in the pool that doesn't seem to get cleaned up
15:07 <prologic> despite .daemon = True
15:07 <prologic> I haven't been able to fix that yet
15:07 <prologic> but I'm 99.99% sure it's only the Pool that leaves a process lying around
15:08 <Osso> ah ~
15:08 <Osso> but it seems like the functionality is the same between node and bridge?
15:11 <prologic> yes
15:11 <prologic> they are, a little
15:11 <prologic> and I'm unsure what to do about that
15:11 <prologic> tbh Bridge is much simpler in terms of implementation than circuits.node
15:11 <prologic> circuits.node was meant to be more flexible
15:11 <prologic> ie: a Node that used a pair of pipes
15:11 <prologic> or one that used a TCP server/client
15:11 <prologic> etc
15:12 <prologic> but never quite worked out that way
15:12 <prologic> also for some reason I decided to use JSON for circuits.node
15:12 <prologic> Bridge used pickle
15:13 <prologic> one idea I had was to nuke circuits.node
15:13 <prologic> bring some of the flexibility into Bridge (ie: using other networking components)
15:13 <prologic> and bring in a new form of explicit targeting for other connected nodes (via the Bridge)
15:13 <prologic> something like:
15:14 <prologic> Bridge(socket, prefix=None)
15:14 <prologic> where prefix is some channel prefix that the Bridge uses
15:14 <prologic> when it sees events with this prefix, it strips the prefix and sends them down its socket
15:15 <prologic> dunno
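[Editor's note: a speculative sketch of the prefix idea floated above; every name in it is hypothetical. The bridge forwards only events whose channel carries its prefix, stripping the prefix before sending the event down its connection, and handles everything else locally.]

    class PrefixBridge(object):
        def __init__(self, conn, prefix=None):
            self.conn = conn        # e.g. one end of a multiprocessing.Pipe()
            self.prefix = prefix

        def fire(self, event, channel):
            if self.prefix and channel.startswith(self.prefix + "."):
                # strip "prefix." and hand the event to the remote side
                self.conn.send((event, channel[len(self.prefix) + 1:]))
            else:
                self.handle_locally(event, channel)

        def handle_locally(self, event, channel):
            print("local dispatch: %s on %s" % (event, channel))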
15:19 <Osso> it's really not much code
15:19 <Osso> but the serialization can be factored
15:19 <Osso> node and bridge are exactly the same
15:20 <Osso> to me
15:20 <Osso> having both is confusing
15:20 <Osso> I think we should keep node cause we changed the name once already
15:23 <prologic> kk that's fine
15:23 <prologic> no problems with that :)
15:24 <prologic> but we'll have to work on it a bit more
15:24 <prologic> because it lacks the flexibility and ease of use that Bridge had/has
15:26 <Osso> yeah sure
15:26 <Osso> we can improve it
15:27 <prologic> kk
15:27 <prologic> well I'll work on that some more
15:27 <prologic> you're still fixing component targeting right?
15:27 <prologic> and yeah as mentioned in my comments
15:28 <prologic> .handles(*names) and .handlers()
15:28 <prologic> we ok with this interface for querying components?
15:28 <prologic> I shall make them both class methods so they can be used on component classes and instances
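[Editor's note: a hedged sketch of the query interface proposed above, assuming the convention that handler methods carry a truthy "handler" attribute (circuits' actual marking may differ). Classmethods so the queries work on both classes and instances, as prologic says:]

    class Component(object):

        @classmethod
        def handlers(cls):
            """Names of all event handlers defined on this component."""
            return [name for name in dir(cls)
                    if getattr(getattr(cls, name), "handler", False)]

        @classmethod
        def handles(cls, *names):
            """True if this component has a handler for every named event."""
            return set(names).issubset(cls.handlers())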
15:32 <Osso> component targeting is working, can you have a look and accept the story?
15:33 <Osso> I forgot to link the commit to the story
15:34 <prologic> all good :)
15:34 <prologic> yeah I'll check it out
15:39 <prologic> http://codepad.org/QzbXep5I
15:39 <prologic> I'm pretty happy with this :)
15:39 <prologic> good job imho
15:39 <prologic> you can even target components
15:40 <prologic> and if they don't have said event handler
15:40 <prologic> the event just gets eaten
15:40 <prologic> as I'd expect
15:45 <prologic> http://codepad.org/85rwtyIK
15:46 <prologic> hmm
15:46 <prologic> race condition
15:46 <prologic> passes if I run it individually
15:46 <Osso> uhm!
15:48 <prologic> going out for a while - little niece's birthday morning tea
15:48 <prologic> and picking up a free Dell PowerEdge :)
15:48 <prologic> woot woot
15:49 <Osso> cool
17:34 *** Osso has quit IRC
00:05 <jgiorgi> prologic: is there a built-in redirect method in circuits.web?
00:17 <prologic> yes
00:17 <prologic> .redirect(...) on a Controller instance
00:18 <prologic> or self.fire(Redirect(...))
00:18 <prologic> or
00:18 <prologic> raise Redirect(...)
00:18 <prologic> all of which should work
00:19 <jgiorgi> .redirect(url) relative urls acceptable?
00:21 <prologic> yeap
00:21 <prologic> pydoc circuits.web.Controller
00:23 <jgiorgi> cool
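[Editor's note: a small usage sketch of the first option mentioned above, using only what the discussion itself states (a .redirect(...) helper on Controller that accepts relative URLs); the exact signature should be checked against pydoc circuits.web.Controller, as prologic suggests.]

    from circuits.web import Controller, Server

    class Root(Controller):

        def index(self):
            # relative URLs are acceptable, per the discussion above
            return self.redirect("/somewhere-else")

    (Server(("0.0.0.0", 8000)) + Root()).run()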
00:24 <jgiorgi> i assume that the replacement for tick is stable? it's just a generate_events handler with no arguments right?
00:48 <prologic> correct
00:48 <prologic> and yes it's stable
00:48 <prologic> it was introduced in 2.0.0
00:48 <prologic> but yes - in dev we've removed ticks altogether
00:49 <prologic> we found they are irrelevant now
00:49 <jgiorgi> cool i need to do some very frequent db polling so that would be the most effective imo
00:50 <prologic> just remember though
00:51 <prologic> generate_events is called very frequently
00:51 <prologic> it's almost the same as the old ticks really
00:51 <prologic> so if you don't do some kind of blocking call in your handler
00:51 <prologic> or generate some events
00:51 <prologic> then it'll get called quite quickly
00:51 <prologic> that isn't a problem though :)
00:51 <prologic> for DB stuff
00:52 <prologic> I would just thread it
00:52 <prologic> hopefully soon we'll write some DB wrapper components
00:53 <jgiorgi> luckily it's a memory-only mongo collection so the return is extremely fast
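[Editor's note: a hedged sketch of what jgiorgi describes -- a frequent, cheap poll driven from a generate_events handler. The reduce_time_left() call follows the mechanism discussed later in this log; the other event and method names are illustrative, and the event API may differ between circuits versions.]

    from circuits import Component, Event, handler

    class NewData(Event):
        """Fired when the poll finds fresh rows."""

    class DBPoller(Component):

        @handler("generate_events")
        def _on_generate_events(self, event):
            rows = self.poll_db()        # assumed fast, non-blocking check
            if rows:
                self.fire(NewData(rows))
            # ask to be called again within ~100ms rather than blocking forever
            event.reduce_time_left(0.1)

        def poll_db(self):
            return []                    # stand-in for the in-memory mongo query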
05:36 <mehere> Hi
05:38 <mehere> bbl. Just wanted to ask why you have re-introduced that polling behaviour (waking up every 100ms). I was so happy about having removed that successfully.
07:23 *** Osso has joined #circuits-dev
07:24 *** Osso has quit IRC
08:17 *** Osso has joined #circuits-dev
10:05 *** Osso has quit IRC
10:46 *** Osso has joined #circuits-dev
11:45 *** Osso_ has joined #circuits-dev
11:47 *** Osso has quit IRC
12:42 <prologic> mehere, what did we do?
12:42 <prologic> I didn't think we changed the behavior at all tbh
12:42 <prologic> I *thought* we simplified things
12:42 <prologic> but talk to Osso, that stuff is still over my head a little :)
13:26 *** Osso has joined #circuits-dev
13:28 *** Osso has quit IRC
13:29 *** Osso has joined #circuits-dev
13:36 *** Osso_ has joined #circuits-dev
14:15 <mehere> It's the "e.reduce_time_left(TIMEOUT)" in manager.tick. The effect is that the maximum time that the select waits for input and output is TIMEOUT. But there is no need to restrict this time.
14:27 <mehere> Maybe it is misleading that manager.tick is still called "tick". It is not intended to tick. It is more a "waitForStateChange" with a timeout. And the "State" refers to the complete circuits application.
14:31 <mehere> In many applications, the only thing that causes work to be done is I/O. So there will be no event generator except for a "poller" (which should actually be renamed as well). As it is implemented, the "poller" waits for something to happen on any of the known file descriptors -- or for a timeout. But if I/O is the only possible cause for activity, well, then we need no timeout.
14:33 <mehere> When *do* we need a timeout? If, e.g., we have a running timer, i.e. an event scheduled to be delivered at some time. In this case (and only in this case) it makes sense to reduce the timeout (i.e. the time we are prepared to wait for I/O to occur) so that the timer receives another "GenerateEvents" "in time", i.e. when the event has to be delivered (at the latest).
14:37 *** Osso_ has quit IRC
14:38 <mehere> I assume that 99% of all applications react to I/O or timers only. If someone really wants to introduce an event source that requires polling (i.e. cannot participate in a "select" call), then this source can simply implement a "generate_events" handler that reduces the timeout in the GenerateEvents to TIMEOUT. But it is not the main loop's task to do that "to be on the safe side".
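[Editor's note: a hedged illustration of the distinction mehere draws. A pure-I/O application never needs to shrink the select timeout; only a component with an actual deadline should. Hypothetical names throughout, using the reduce_time_left mechanism described above:]

    import time
    from circuits import Component, Event, handler

    class Alarm(Event):
        """Fired when the deadline passes."""

    class OneShotTimer(Component):

        def __init__(self, delay):
            super(OneShotTimer, self).__init__()
            self.due = time.time() + delay

        @handler("generate_events")
        def _on_generate_events(self, event):
            left = self.due - time.time()
            if left <= 0:
                self.fire(Alarm())
                self.unregister()
            else:
                # wake the poller just in time -- no fixed 100ms heartbeat
                event.reduce_time_left(left)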
14:41 <mehere> I hope your bouncers copied this; you (they) appear on the "people in room" list. I'll be back again tomorrow at about 14:00 CET ;-)
15:15 <prologic> yeap
15:15 <prologic> mehere: got all that
15:15 <prologic> I actually completely agree with everything you've said
15:15 <prologic> I just don't know how we broke that particularly
15:15 <prologic> I'll talk to Osso
15:15 <prologic> or you can if you beat me to it
15:15 <prologic> I *thought* we just simplified things a little
15:15 <prologic> but kept the same behavior
15:16 <Osso> remember it is that "if" I was talking about
15:17 <Osso> the one I said that, if removed, everything would still work but we would sleep for max TIMEOUT
15:20 <Osso> I think we should have a property enable_ticks
15:20 <Osso> then we check: if enable_ticks is True we sleep for TIMEOUT
15:20 <Osso> if not
15:20 <Osso> we sleep for very long
16:34 <prologic> can you show me what that bit of code looked like?
16:34 <prologic> I'm curious
17:12 <Osso> yeah sure
17:13 <Osso> it's in manager.py
17:13 <Osso> line 691
17:14 <Osso> around
17:14 <Osso> in the tick function
17:14 <Osso> when we do reduce_time_left
17:24 *** Osso has quit IRC
23:50 <mehere> Just 5 minutes before I have to leave, but I vote against enable_ticks. If you really need ticks, write and add a component that reduces the time left in the GenerateEvents event to TIMEOUT. If we have enable_ticks with a default of True, problems will slip through undetected!
23:53 <mehere> ... should be "If you really need ticks in your particular application ...". In general we don't (that's why you removed them, right?)
23:55 <jgiorgi> honestly i can't think of one thing that needs ticks, generate events is sufficient
00:06 <mehere> Oh, there can be such a situation. But it happens only if something cannot "be waited for" in select. So for example if you have I/O *and* another running thread and you wait for either to produce an event. Then you have to "poll", meaning that you have to call select, then have a look at the thread, then again at the select and so on. You can get this solved on Linux with eventfd, but I don't think this is generally available (and certainly not at the Python level). And besides, this really rarely happens (that's why I said that 99% of all applications use I/O and timers only -- just my rough estimate, though ;-) ). In circuits this would mean that the component that waits for the thread ("ThreadWaiter") simply does its checks and, if the other thread isn't ready yet, reduces the time left to TIMEOUT (or actually whatever is appropriate as a response time).
00:07 <mehere> bbl
00:51 <prologic> mehere, I agree
00:51 <prologic> as I said
00:52 <prologic> we made a mistake with some of this simplification it would seem
00:52 <prologic> I believe we also ran into some problems where things were getting stuck
00:52 <prologic> so we may have inadvertently removed this behavior
00:52 <prologic> Osso: ping me when you're back
00:52 <prologic> I can't find that line of code you're talking about now :)
02:19 *** prologic has joined #circuits-dev
02:19 *** prologic has quit IRC
02:19 *** prologic has joined #circuits-dev
02:30 *** Osso has joined #circuits-dev
02:34 *** Osso has quit IRC
02:35 *** Osso has joined #circuits-dev
02:35 *** Osso has quit IRC
02:36 *** Osso has joined #circuits-dev
03:19 <prologic> Osso, hey
03:19 <prologic> that line you were talking about
03:19 <prologic> can't locate it exactly in the dev repo at tip
03:19 <prologic> I also tried playing around with a few things
03:19 <prologic> and could not make that behavior come back without making the event loop get stuck
03:19 <prologic> ie: when an event is fired
03:20 <prologic> it just sleeps forever
03:20 <prologic> I can see code that is *supposed* to wake up the thread when an event is fired
03:20 <prologic> but that doesn't seem to work
03:20 <prologic> mostly because self.root.needs_resume isn't being set - it's still None for some reason I think
03:39 *** Osso has quit IRC
05:34 *** Osso has joined #circuits-dev
05:35 <Osso> uhm maybe I can have a look tonight
05:39 *** Osso has quit IRC
05:39 *** Osso has joined #circuits-dev
05:40 *** Osso_ has joined #circuits-dev
05:48 <prologic> hey Osso
05:48 <prologic> I'm still here for a few mins
05:48 <Osso> oh ok
05:48 <Osso> I thought you were gone already
05:49 <prologic> not quite
05:49 <prologic> I just nuked workers and pools
05:49 <prologic> I think they suck horribly
05:49 <mehere> Hi
05:49 <prologic> what I think we need to do is grab the threadpool implementation (or import it)
05:49 <prologic> and multiprocessing.Pool
05:49 <prologic> and just wrap these
05:50 <Osso> there's already so much stuff we have changed
05:50 <Osso> can't the worker redesign wait?
05:50 <prologic> well it can
05:50 <prologic> but
05:51 <prologic> the only part of it that works is the threaded side
05:51 <prologic> I cannot make a process pool work with components
05:51 <prologic> I actually accidentally ended up fork-bombing my PC tonight
05:51 <prologic> :/
05:52 <Osso> :o
05:52 <prologic> one of the problems with a process pool using circuits components
05:52 <prologic> is we can't really tell easily if a particular worker is busy or not
05:52 <mehere> Resuming the last discussion ... Does everybody agree that lines 680-683 in manager.py should be commented out? If so I'll have a look at the consequences...
05:53 <prologic> so half the implementation of Pool that works fine for threaded workers/pools
05:53 <prologic> doesn't quite work so well for process-based ones
05:53 <prologic> mehere, if they're the lines I'm thinking about - yes
05:53 <prologic> I think we all agree
05:53 <prologic> -however- I have to warn
05:53 <prologic> it does break things horribly
05:53 <prologic> I've found that firing an event doesn't in fact wake the thread up
05:54 <prologic> it just continues to sleep indefinitely
05:54 <mehere> Of course firing an event doesn't wake anything up -- actually it cannot.
05:54 <prologic> yeah
05:54 <mehere> Most test cases are actually non-standard usages
05:54 <prologic> that's what I was finding
05:54 <prologic> well this is true :)
05:55 <mehere> It's mostly the test cases that need to be fixed.
05:55 <prologic> hmmm
05:55 <prologic> ok one sec
05:56 <prologic> http://codepad.org/PkHTdYHU
05:56 <prologic> consider this example
05:56 <prologic> if you comment out said 3 lines
05:56 <prologic> doing:
05:56 <prologic> x = m.fire(Hello())
05:56 <prologic> does nothing
05:56 <prologic> it just appears to sleep forever
05:57 <Osso> well there's the self._ticks to wipe out too so might as well do it at the same time
05:57 <prologic> we already did didn't we?
05:57 <Osso> I still want the property
05:57 <mehere> Of course it sleeps, because it breaks the fundamental assumption that events come from I/O. That's what makes it "non-standard": you fire an event from another thread
05:57 <Osso> see
05:57 <prologic> sorry - fit of anger :)
05:57 <Osso> it is unused
05:58 <Osso> just needs removing
05:58 <Osso> if you look at the self._fire()
05:58 <Osso> it's setting the Event to True
05:58 <Osso> so when you use fire it should be getting out of the wait(xxx)
05:59 <prologic> hmm
05:59 <Osso> if it does not, some minor bug must have crept in somewhere
05:59 <prologic> mehere, but aren't events themselves producers?
05:59 <prologic> I mean
05:59 <mehere> The only way fire can get you out of the wait (if the wait is actually a select) is to do something with file descriptors. I remember that I added something like that, maybe you removed it...
05:59 <prologic> can't other threads be producers of events externally
06:00 <prologic> hmm
06:00 <Osso> if they fire an event
06:00 <Osso> we are woken up
06:00 <Osso> if there's an event queued up we won't sleep
06:00 <Osso> both cases are covered
06:01 <prologic> but doing so from another thread does not wake it up?
06:01 <Osso> it does
06:01 <prologic> ok
06:01 <mehere> Found it, it's the _create_control_con in pollers.
06:01 <Osso> the fire breaks the wait()
06:01 <prologic> then maybe it doesn't work in Python 2.6.6?
06:01 <prologic> because it doesn't work for me
06:02 <prologic> at least not at the Python interactive shell
06:02 <mehere> If the other thread fires, it sends a message on the control connection that wakes up the waiting thread
06:02 <Osso> needs_resume() is the function that wakes up the wait
06:03 <Osso> maybe you can have a look at it, see if it breaks the wait properly
06:04 <prologic> yeah
06:04 <prologic> here in my environment
06:04 <prologic> it doesn't break out of the wait
06:04 <prologic> not from another thread it seems
06:06 <mehere> ... BasePoller.resume() must be called, let's see who did that in the old version ...
06:07 <prologic> http://codepad.org/GTooy1xZ
06:07 <mehere> Oh, you removed the lock in events.reduce_time_left, I think that's fatal...
06:07 <mehere> Why did you do that?
06:09 <prologic> hmm
06:09 <prologic> what was it for?
06:09 <prologic> you're going to educate me here :)
06:09 <prologic> I suck horribly at threading, locking
06:09 <mehere> Making wake-up work, for example?
06:10 <prologic> oh really?
06:10 <prologic> hmm
06:10 <prologic> can you explain how?
06:10 <mehere> I have to look at that myself again, give me some hours. But it's a good starting point.
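[Editor's note: a hedged reconstruction, from this discussion only, of why reduce_time_left wants a lock: two threads may shrink the same deadline concurrently, so the read-compare-write must be atomic. The conventions here (a negative value meaning "wait forever") are assumptions, not the repository's code.]

    from threading import Lock

    class GenerateEvents(object):        # illustrative stand-in for the real event

        def __init__(self, time_left=-1.0):
            self._lock = Lock()
            self._time_left = time_left  # < 0 means "no deadline, wait forever"

        def reduce_time_left(self, value):
            with self._lock:             # atomic read-compare-write
                if self._time_left < 0 or value < self._time_left:
                    self._time_left = value

        @property
        def time_left(self):
            with self._lock:
                return self._time_left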
06:11 *** Osso_ has quit IRC
06:12 *** Osso_ has joined #circuits-dev
06:14 <Osso> it is all based on resume() working
06:16 <mehere> You have taken out the complete else part for "current_thread() == self._executing_thread" in manager._fire! This stuff can no longer work!
06:17 <Osso> I removed it cause we don't care
06:17 <mehere> ...care about what? The wake up?
06:17 <Osso> it does the wake up when we are waiting
06:18 <mehere> right now it doesn't, right?
06:18 <Osso> it does the wake up
06:18 <Osso> the reason it bugs is that the wait() is not interrupted?
06:19 <mehere> How can it be interrupted if resume() isn't called?
06:19 <Osso> that's the problem, resume() is called
06:19 <mehere> where?
06:20 <prologic> afaict resume is never getting called because it's never set
06:20 <Osso> in self._fire()
06:20 <prologic> I mean self.root.needs_resume is always None for some reason
06:20 <Osso> if needs_resume: it calls it
06:20 <prologic> if self.needs_resume:
06:20 <prologic> this test fails I think
06:21 <mehere> who is setting needs_resume? By default it is None...
06:21 <prologic> >>> x = m.fire(Hello())
06:21 <prologic> Don't need to resume!
06:21 <prologic> >>>
06:21 <prologic> there
06:21 <prologic> the test is failing
06:21 <prologic> the FallBackGenerator is supposed to set it afaik
06:21 <Osso> ok I'll run the thing myself
06:21 <prologic> at least it has logic to
06:22 <mehere> All that needs_resume makes it unnecessarily difficult. The original approach was plain and worked automatically!
06:23 <Osso> you see the needs_resume is set right before waiting
06:23 <prologic> hmm
06:23 <prologic> it seems to set self.root.needs_resume once
06:23 <prologic> and never again
06:23 <Osso> self.root.needs_resume = self.resume
06:23 <mehere> self.resume is wrong.
06:24 <mehere> It's the resume of the currently handling component that must be called
06:24 <Osso> it can't be wrong
06:24 <Osso> cause it is set right before you do the wait()
06:25 <mehere> Then you definitely need locks in order to avoid race conditions
06:25 <Osso> you don't need locks cause we aren't multithreaded
06:25 <prologic> that's what I thought too
06:26 <mehere> Besides, where is that self.root.needs_resume = self.resume? Can't find it in pollers.py
06:26 <prologic> no
06:26 <prologic> it's in helpers.py
06:26 <Osso> it is in FallBackGenerator
06:26 <prologic> yeah
06:26 <Osso> it needs to be in the pollers too
06:26 <mehere> But if we have a poller component, the fallback generator is never used.
06:26 <Osso> need to add it there
06:26 <Osso> right
06:27 <mehere> Why did you make it that difficult and didn't keep my approach?
06:27 <Osso> in our case we don't have a poller though
06:27 <prologic> mehere, I think it still is under some circumstances?
06:27 <prologic> at least it appears that way
06:27 <prologic> oh wait
06:27 <prologic> it is used initially
06:27 <prologic> because the poller isn't registered yet
06:27 <prologic> d'uh :)
06:27 <Osso> I removed the locks for 2 reasons
06:27 <Osso> and we don't need to store the event anymore
06:28 <Osso> maybe even 3
06:28 <prologic> we removed ticks?
06:28 <prologic> I think you're right though Osso
06:28 <prologic> I think that needs_resume logic needs to be in the BasePoller too
06:28 <prologic> no?
06:28 <mehere> (1) you still need the locks and (2) what's so bad about having that reference to the event?
06:28 <Osso> the resume is now customizable
06:28 <Osso> it does need to be in BasePoller
06:29 <mehere> Isn't it better if it works automatically? Would have avoided all this...
06:29 <mehere> It *was* customizable, every component can have its own resume()
06:30 <Osso> the problem was that you not only store the event in one place
06:30 <Osso> you have to store more references
06:31 <Osso> you needed to store _generate_event
06:32 <mehere> And?
06:32 <prologic> I fixed it
06:32 <mehere> You need something common to lock on
06:32 <prologic> Osso, you were right
06:32 <prologic> http://codepad.org/us4zaaRB
06:33 <Osso> oh that code had a poller already?
06:34 <mehere> Maybe you'll just have to try it yourself. But believe me, I didn't introduce those locks for fun but to get rid of problems caused by race conditions.
06:34 <Osso> they weren't for fun, that's why I had to rearrange the code in order to use the fact that
06:35 <prologic> http://codepad.org/exKEMHw0
06:35 <prologic> This should fix the problem right?
06:35 <prologic> At least in testing it does here
06:35 <Osso> you can see on_generate_event as a critical section
06:36 <Osso> it should yes
06:37 <mehere> Where's the lock now?
06:37 <prologic> Osso, if you're happy with that diff I'll commit and push it
06:37 <Osso> on_generate_event will never run from more than 1 thread at the same time
06:37 <Osso> the whole function is always locked
06:37 <prologic> this is true
06:38 <prologic> imho it's kinda silly to
06:38 <mehere> Yes, but manipulating the time left must be locked
06:39 <Osso> I am not sure you want to add the print() call?
06:40 <Osso> are we changing time left anywhere outside of our running thread?
06:40 <prologic> do we in tests?
06:40 *** Osso_ has quit IRC
06:40 <prologic> What I mean to say is, if you do something like (on the Python shell):
06:41 <prologic> m = Manager() + Debugger()
06:41 <prologic> m.start()
06:41 <mehere> You should reduce it when you fire an event
06:41 <prologic> does doing m.fire(...) run in that thread
06:41 <prologic> or the interactive shell's thread?
06:42 <Osso> when you fire an event, you are breaking the wait
06:43 <Osso> and it is equivalent in on_generate_event to have the time reduced to 0
06:44 <Osso> so the _fire can delegate it
06:44 <Osso> to a section that is thread safe
06:45 <mehere> Then the poller must reduce it in its _generate_events, iff it was explicitly woken up by fire
06:45 <prologic> Osso, pushed
06:47 <Osso> reduce_time_left is lost after on_generate_event
06:47 <Osso> cause it is not saved anywhere anymore
06:48 <mehere> But there may be another handler component "after" the poller
06:48 <mehere> Shouldn't the poller also clean up the needs_resume after the select?
06:49 <prologic> didn't I add that?
06:49 <Osso> you have a point there
06:50 <Osso> if we want multiple pollers to work concurrently we should remove the filter=True and set the reduce_time_left
06:50 <Osso> but we are filtering currently on the first on_generate_event
06:50 <Osso> and the other ones will never be run
06:51 <prologic> wait
06:51 <prologic> is it valid to have multiple select calls in a single process?
06:51 <prologic> I've found this to not work very well in the past
06:51 <prologic> ie: having multiple pollers
06:52 <Osso> currently there's no handling for multiple pollers
06:52 <Osso> we filter on the first
06:52 <mehere> No, you shouldn't have that. It is more of a thought experiment to make sure that we have understood things properly
06:52 <Osso> so I never cared about the reduce_time_left
06:53 <Osso> after the wait()
06:54 <mehere> I agree, if you simply leave time left as it is, you don't need the lock
06:54 <prologic> well this has all been very educational - thanks guys :)
06:54 <prologic> but it's 1am and I need sleep before work!
06:54 <prologic> also whilst this works - the changes that is
06:54 <prologic> some tests are now failing
06:55 <mehere> I'm not sure about the handling of needs_resume, I'll have to think about whether this is really thread safe.
06:56 <mehere> bbl
06:56 <prologic> zzz
06:56 <Osso> yep sure, I could use double checking
06:56 <Osso> good night prologic :)
07:47 *** Osso has quit IRC
07:48 *** Osso has joined #circuits-dev
07:58 <mehere> Osso, here's a race condition: t1 is before "self.root.needs_resume = self.resume" (but going there), t2 is in _fire and finds self.needs_resume to be (still) None, control goes back to t1 which now sleeps forever. So the "fire" from t2 is effectively lost. I now remember that this was actually the reason for setting time left to zero when firing. Combined with the "lock"ed access to the event in _generate_events and _fire this sol
08:38 *** Osso has quit IRC
09:26 *** Osso has joined #circuits-dev
09:30 <Osso> that's a problem indeed, we may have to call resume() all the time without the if
09:30 <Osso> I wonder if there's a more elegant method though
09:31 <Osso> I would rather avoid having to call Event.set() on each event fire
09:32 *** Osso_ has joined #circuits-dev
09:36 *** Osso has quit IRC
10:47 <Osso> I think we can do this
10:47 <Osso> http://circuits.codepad.org/U06z7s42
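[Editor's note: the paste above has not survived, so here is a hedged sketch of the wakeup problem being debated and the "sticky flag" shape of a fix. Because threading.Event.set() latches, a fire() that lands before the loop thread reaches wait() is not lost -- which is exactly the race mehere describes above. Names are illustrative, not the circuits manager code.]

    from threading import Event

    class Loop(object):

        def __init__(self):
            self._resume = Event()

        def fire(self):
            # Safe from any thread: the flag latches, so a fire that happens
            # before the loop thread starts waiting still wakes it immediately.
            self._resume.set()

        def tick(self, timeout=None):
            self._resume.wait(timeout)   # sleep until fire() or timeout
            self._resume.clear()         # consume the wakeup...
            # ...then drain the event queue; any fire() racing in after the
            # clear() simply sets the flag again for the next tick.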
11:37 <mehere> I really don't understand your opposition against locks. They are the means for this situation.
11:51 <mehere> Besides, the fallback generator isn't very interesting. The point is how to avoid losing the "fire" from the other thread when we are about to wait in the select (poller).
13:46 *** Osso has joined #circuits-dev
13:46 <Osso> yeah I know a lock would solve the problem
13:47 <Osso> maybe it is the best solution but it seems so unnecessary
13:47 *** Osso_ has joined #circuits-dev
13:48 *** Osso has quit IRC
13:49 *** Osso_ has joined #circuits-dev
13:59 *** Osso_ has quit IRC
14:07 <mehere> Why should it be unnecessary if it is the best solution?
14:45 <prologic> maybe it just seems unnecessary because circuits is not a multithreading framework - it's an async framework :)
14:45 <prologic> I dunno :)
14:45 <prologic> you may be right mehere
14:53 <mehere> Well, circuits should be safe to use in a multi-threading context. Actually, many of the test cases are multithreaded because they start an application and then fire events into that application from the main thread.
14:57 <prologic> whilst I agree with that and the use-case(s)
14:57 <prologic> no two threads would ever compete for the event queue though?
14:57 <prologic> or anything else for that matter
14:58 <prologic> but maybe I don't understand how threading works
19:16 <prologic> mehere: ping?
21:39 *** Osso has joined #circuits-dev
22:06 <prologic> Osso: can you comment on the implementations of Worker and Pool?
22:06 <prologic> I think this is what we should have imho
22:06 <prologic> I'll explain more later on
22:39 *** Osso has quit IRC
23:42 *** Osso has joined #circuits-dev
00:42 *** Osso has quit IRC
02:29 *** Osso has joined #circuits-dev
03:28 *** Osso has quit IRC
