IRC Logs for #circuits Monday, 2013-12-23

*** ircnotifier has joined #circuits  00:43
<clixxIO> prologic: I got a draft of my c++ version  00:45
<clixxIO> https://github.com/clixx-io/clixx.io/blob/master/arduino/eventframework/timer.cpp  00:45
<clixxIO> btw, how do I add gpio events to circuits? is there a blog post on adding custom callbacks?  00:56
*** clixxIO has quit IRC  01:01
*** realzies has quit IRC  01:16
*** realzies has joined #circuits  01:30
*** litzomatic has joined #circuits  01:42
*** clixxIO_ has joined #circuits  02:19
<prologic> clixxIO_, just write event handlers  02:26
<prologic> e.g.:  02:27
<prologic> @handler("foo")  02:27
<prologic> def _on_foo(self, *args, **kwargs):  02:27
<prologic>     ...  02:27
<prologic> or just  02:27
<prologic> def foo(self, *args, **kwargs):  02:27
<prologic>     ...  02:27
<prologic> if inheriting from Component  02:27
<clixxIO_> how are they invoked?  02:28
<prologic> by the event loop and dispatcher  02:29
<prologic> there are quite a few rules on how the internals of this work  02:29
<prologic> :)  02:29
<prologic> but eventually a matching event handler is called for the event  02:29
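
A rough, runnable sketch of the two handler styles prologic describes above; the event names (foo, bar) and the App component are made up for illustration, and the imports are assumed to be available as in circuits 2.x/3.x:

    from circuits import Component, Event, handler

    class foo(Event):
        """foo event"""

    class bar(Event):
        """bar event"""

    class App(Component):

        @handler("foo")
        def _on_foo(self, *args, **kwargs):
            # style 1: explicitly bound to the "foo" event via @handler
            print("foo handled: %r %r" % (args, kwargs))

        def bar(self, *args, **kwargs):
            # style 2: a method named after the event is picked up
            # automatically because App inherits from Component
            print("bar handled: %r %r" % (args, kwargs))
            self.stop()

        def started(self, *args):
            # the event loop/dispatcher calls the matching handler for each event
            self.fire(foo(1, key="value"))
            self.fire(bar())

    App().run()
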
<clixxIO_> ok, here's the source code of something I did [badly] before  02:31
<clixxIO_> https://bitbucket.org/djlyon/cardashery/src/ff8416c384bf5dedaea54ecaf47516e5c802a4a1/carstereo.py?at=default  02:31
<clixxIO_> if you scroll down to the bottom you can see where the events/callbacks are set up  02:32
<prologic> ahh yes I see  02:34
<prologic> yeah so the best way would be to build that GPIO component  02:34
<prologic> and appropriate events  02:34
<prologic> and treat GPIO like a protocol  02:34
<prologic> which it sort of is  02:34
<prologic> it understands things like  02:34
<prologic> the pin no.  02:34
<prologic> rising  02:34
<prologic> falling  02:34
<prologic> etc  02:34
<litzomatic> prologic, what do you mean by your question?  02:35
<prologic> oh sorry  02:36
<prologic> yeah I wasn't sure what you meant either :)  02:36
<prologic> let's start over :)  02:36
<litzomatic> hehe, I was thinking: what's the point of having multiple event loops and worker processes?  02:36
<litzomatic> seems like if I use worker processes, 1 event loop will do fine.  02:36
<litzomatic> if worker processes do the heavy lifting.  02:37
<litzomatic> seems like the only case for multiple event loops is if I'm just serving static files and not doing much heavy lifting ever (maybe just a chat server or something)  02:37
<litzomatic> i.e. the event loop itself shouldn't ever get bogged down by too many requests.  It's generating templates, doing business logic and perhaps generating ORM objects that will take up the CPU.  02:39
<litzomatic> If I'm not doing any heavy lifting, multiple event loops can take advantage of a multi-core system... but that's a pretty narrow use case, right?  02:40
<clixxIO_> prologic: yes, gpio is like a protocol  02:40
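
As a sketch of the idea above (not something shipped with circuits): a GPIO component that "speaks the protocol" of pin numbers and edges, polling pin state and firing rising/falling events. The event names, the read_pin callable and the polling approach are all assumptions for illustration; on real hardware it would sit on top of something like RPi.GPIO.

    from circuits import Component, Event, Timer

    class rising(Event):
        """rising(pin) -> a watched pin went low -> high"""

    class falling(Event):
        """falling(pin) -> a watched pin went high -> low"""

    class poll(Event):
        """internal polling tick"""

    class GPIO(Component):

        channel = "gpio"

        def __init__(self, pins, read_pin, interval=0.05, channel=channel):
            super(GPIO, self).__init__(channel=channel)
            self.read_pin = read_pin  # platform-specific callable(pin) -> 0 or 1
            self.state = dict((pin, read_pin(pin)) for pin in pins)
            Timer(interval, poll(), self.channel, persist=True).register(self)

        def poll(self):
            # compare each pin against its last known state and fire edge events
            for pin, old in list(self.state.items()):
                new = self.read_pin(pin)
                if new != old:
                    self.fire(rising(pin) if new else falling(pin))
                    self.state[pin] = new

    class Alarm(Component):

        channel = "gpio"  # listen on the GPIO component's channel

        def rising(self, pin):
            print("pin %d went high" % pin)
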
<prologic> well  02:42
<prologic> by multiple event loops  02:42
<prologic> do you mean when you do  02:42
<prologic> app = Server(...) + Root()  02:42
<prologic> app.start(process=True)  02:43
<prologic> app.run()  02:43
<prologic> etc?  02:43
<litzomatic> yep  02:43
<prologic> yeah  02:43
<prologic> this does fork sub-processes  02:43
<prologic> each with a shared listening socket  02:43
<prologic> and their own event loop  02:43
<litzomatic> yep.  02:43
<prologic> it _does_ increase performance  02:43
<prologic> but multiprocessing doesn't work, right?  02:43
<prologic> inside a sub-process  02:43
<litzomatic> yep, but there's not much point if the worker process will bog down the CPU long before we get that many requests.  02:43
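
For reference, a minimal sketch of the pattern quoted above, using circuits.web; the Root controller body is made up:

    from circuits.web import Server, Controller

    class Root(Controller):

        def index(self):
            return "Hello World!"

    app = Server(("0.0.0.0", 8000)) + Root()

    # fork a daemon sub-process with its own event loop but a shared
    # listening socket, then run another event loop in the parent as well
    app.start(process=True)
    app.run()
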
<prologic> or is it because the sub-processes are daemon?  02:43
<prologic> yeah  02:44
<prologic> well, it's always a trade-off  02:44
<litzomatic> it's because the spawned event loops are daemon.  02:44
<prologic> *nods*  02:44
<prologic> if we changed them to non-daemon processes  02:44
<litzomatic> I haven't tried it yet, but I theorize it will work with a non-daemon process.  02:44
<prologic> would Worker then work as expected inside them?  02:44
<prologic> right  02:44
<prologic> you should try this and see what happens  02:45
<prologic> I think we made them daemon sub-processes  02:45
<prologic> so we didn't have to do lots of cleanup code to shut them down  02:45
<prologic> or something  02:45
<litzomatic> yeah, if they could share the same pool, that'd be ideal.  02:45
<prologic> well  02:46
<prologic> that's actually possible too  02:46
<prologic> but then you'd be serializing events from sub-processes  02:47
<prologic> to wherever the pool is  02:47
<litzomatic> yeah, because the kernel doesn't actually load balance or round robin when it's picking an event loop  02:47
<prologic> and in that case  02:47
<prologic> maybe use a dedicated job queue?  02:47
<prologic> and write a circuits component wrapper for it :)  02:47
<prologic> e.g.:  02:47
<litzomatic> at least I don't think it does  02:47
<prologic> zmq  02:47
<prologic> or something  02:47
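
A very rough sketch of that idea: a small circuits component wrapping a zmq PUSH socket as a job queue. The JobQueue/submit names and the address are made up, and pyzmq is assumed to be installed; external worker processes would PULL from the same address and do the actual work.

    import pickle

    import zmq
    from circuits import Component, Event

    class submit(Event):
        """submit(job) -> hand a picklable job to the external queue"""

    class JobQueue(Component):

        channel = "jobqueue"

        def __init__(self, address="tcp://127.0.0.1:5555", channel=channel):
            super(JobQueue, self).__init__(channel=channel)
            # PUSH sockets fan work out to however many PULL workers connect
            self.socket = zmq.Context.instance().socket(zmq.PUSH)
            self.socket.bind(address)

        def submit(self, job):
            self.socket.send(pickle.dumps(job))
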
<prologic> oh it does load balance  02:47
<prologic> -but-  02:47
<litzomatic> it seemed to me the other event loop would only get chosen by the kernel, when a request was coming in, if the first event loop was actually busy  02:48
<prologic> it's first in, first served  02:48
<prologic> I think  02:48
<prologic> that's right  02:48
<prologic> if one process doesn't accept the socket  02:48
<prologic> the others will be tried  02:48
<prologic> socket internals, afaik  02:48
<litzomatic> so as long as the event loop was keeping up with the requests, if we had 1 pool per event loop, the other pools would not be utilized.  02:48
<litzomatic> So perhaps a chat server that does some heavy lifting during low load would be a perfect use case.  02:50
<litzomatic> massive amounts of requests, and during lulls maybe it processes some statistics or stores to a database... whatever.  02:50
<prologic> well  02:51
<prologic> there's no reason why this can't be done in a process pool  02:51
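
A rough sketch of offloading the heavy lifting to a process pool, assuming the Worker component and task event from circuits.core.workers and circuits' generator-style handlers with self.call; cpu_bound is a made-up stand-in for the expensive work.

    from circuits import Component, Worker, task

    def cpu_bound(n):
        # stand-in for the "heavy lifting" discussed above
        return sum(i * i for i in range(n))

    class App(Component):

        def started(self, *args):
            # offload the work and wait for the result without blocking the loop
            result = yield self.call(task(cpu_bound, 10 ** 6), "worker")
            print("result: %r" % (result.value,))
            self.stop()

    # a pool of worker processes registered alongside the app
    (App() + Worker(process=True)).run()
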
<litzomatic> yeah, the pool is simply shared between the processes due to the way fork works?  My memory of fork and sharing of information is a bit fuzzy.  02:52
<litzomatic> yeah, seems like the answer is no  02:54
<litzomatic> if we want 4 event loops and 1 process pool for workers, we need IPC.  02:54
<prologic> which we have :)  02:55
<litzomatic> what do you use for IPC?  02:56
<prologic> so sub-processes (in circuits)  02:58
<prologic> can already do IPC  02:58
<prologic> via circuits.core.Bridge  02:58
<prologic> .start(process=True, link=m)  02:58
<prologic> where m is an instance of the top-level component/manager of the parent  02:58
<litzomatic> not sure what we'd use for 4 processes to share the pool manager.  02:58
<litzomatic> hmm  02:58
<litzomatic> how does the bridge work?  02:58
<prologic> so in theory the parent process can have the pool  02:58
<prologic> just don't add this to child graphs  02:58
<prologic> and then use IPC to submit work  02:59
<prologic> the bridge is a bidirectional link between two processes  02:59
<litzomatic> a pipe?  02:59
<prologic> it ignores some events like poller and socket events  02:59
<prologic> but everything else gets sent across  02:59
<clixxIO_> what about shared memory?  02:59
<prologic> yes, it uses a full-duplex pipe  02:59
<prologic> using circuits.net.sockets.Pipe itself  02:59
<prologic> we don't have a shared memory bridge/IPC yet  03:00
<litzomatic> then you have some sort of protocol on top of the pipe?  03:00
<prologic> but I'm considering that as an option  03:00
<prologic> the protocol is, well, a little undefined  03:00
<prologic> but yes  03:00
<prologic> it does what you'd expect it to  03:00
<prologic> and even sends values back as well  03:00
<prologic> so it's quite transparent  03:00
<litzomatic> objects?  03:00
<prologic> *nods*  03:01
<prologic> anything  03:01
<prologic> anything that's picklable  03:01
<prologic> x = self.fire(foo(), "bar")  03:01
<prologic> if "bar" were a channel in a sub-process linked to the parent via a Bridge  03:01
<prologic> you'd eventually get a value on x  03:01
<prologic> we keep track of event ids  03:01
<prologic> and value ids  03:01
<prologic> anyway  03:02
<prologic> circuits has IPC :)  03:02
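
Putting the pieces above together, a rough sketch (with made-up event/component names) of a child component started in a sub-process and linked to its parent, where fire() returns a value across the Bridge:

    from circuits import Component, Event

    class do_work(Event):
        """do_work(n) -> handled in the child process"""

    class Child(Component):

        channel = "bar"

        def do_work(self, n):
            # the return value is sent back over the Bridge to the parent
            return n * n

    class Parent(Component):

        def started(self, *args):
            # start the child in its own sub-process, linked to this manager;
            # this sets up a Bridge over a full-duplex Pipe behind the scenes
            Child().start(process=True, link=self)
            # simplified: in practice you would wait for the child to be ready
            x = self.fire(do_work(7), "bar")
            # x is a Value; once the result has travelled back across the
            # Bridge, x.value == 49

    Parent().run()
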
<litzomatic> so the other loops would have to talk to the "main loop" to talk to the workers.  03:02
<prologic> I guess so :)  03:03
<litzomatic> simplifies the architecture at a minuscule performance cost, if I am understanding right.  03:03
<prologic> all this stuff can always be improved, of course :)  03:03
<prologic> I'm just saying what we have right now  03:04
<litzomatic> yeah :)  03:04
<prologic> there is also circuits.node  03:04
<prologic> which is designed (differently to Bridge) for distributed processing  03:04
<litzomatic> Is it a goal of yours to not have any dependencies?  03:04
<prologic> it uses circuits.net.sockets.TCPServer and TCPClient for comms  03:04
<prologic> and has a very similar protocol to the Bridge  03:04
<prologic> all transparent  03:04
<prologic> ideally circuits itself should have no external dependencies  03:05
<prologic> yes  03:05
<prologic> it's always been that way  03:05
<prologic> -but- with the upcoming 3.0  03:05
<prologic> and separated packages  03:05
<prologic> with some separate sub-projects for things like  03:05
<prologic> circuits.twisted  03:05
<prologic> circuits.io.serial  03:05
<prologic> circuits.io.notify  03:05
<prologic> you could join the circuits dev team  03:05
<litzomatic> circuits.pyro? :)  03:05
<litzomatic> https://pypi.python.org/pypi/Pyro4  03:05
<prologic> and create, maintain and manage something like  03:05
<prologic> circuits.zmq  03:05
<prologic> for example  03:05
<prologic> even pyro, sure  03:06
<prologic> why not  03:06
<prologic> it doesn't really matter :)  03:06
<litzomatic> seems like that would be a good solution for making complex IPC.  03:06
<litzomatic> yeah.  03:06
<prologic> what makes circuits nice is the architecture and the powerful/flexible message bus  03:06
<prologic> you could wrap anything up in it :)  03:06
<prologic> bbl :)  03:07
<prologic> need to go rest - got sunburnt yesterday :/  03:07
<litzomatic> eww  03:07
<litzomatic> yeah, that always wore me out too.  03:07
*** clixxIO_ has quit IRC  03:09
*** litzomatic has quit IRC  03:12
*** SX has joined #circuits  05:33
*** SX has quit IRC  06:52
*** SX has joined #circuits  07:04
*** Ossoleil has quit IRC  08:23
*** Ossoleil has joined #circuits  09:23
*** Ossoleil has quit IRC  09:57
*** Ossoleil has joined #circuits  09:58
*** Ossoleil has quit IRC  10:12
*** SX has quit IRC  12:50
*** Ossoleil has joined #circuits  12:57
*** Ossoleil has quit IRC  13:13
*** Ossoleil has joined #circuits  13:18
*** Ossoleil has quit IRC  13:37
*** Ossoleil has joined #circuits  14:10
*** Ossoleil has quit IRC  14:10
*** Ossoleil has joined #circuits  14:40
*** Ossoleil has quit IRC  14:40
*** depestrada has quit IRC  18:17
*** Ossoleil has joined #circuits  19:07
*** c45y has quit IRC  19:26

Generated by irclog2html.py 2.11.0 by Marius Gedminas - find it at mg.pov.lt!