[01:24] SpamapS: Can I put another script in the hooks folder that isn't actually a hook script?
[01:24] I want to reuse some code between hooks.
[01:56] george_e, you can
[01:57] your hooks are executed with the PWD being the root of your branch
[01:57] so you can load the code with
[01:57] . ./hooks/shared
[01:57] (assuming you are using shell)
[02:30] james_w: Okay, thanks.
[12:19] hazmat: ONLY if this is an easy question you already know the answer to without hunting through the source (I can do that myself if necessary...):
[12:20] hazmat: what nodes are created by the unit agent?
[13:57] fwereade, its presence node its relation settings and presence nodes
[14:10] hazmat, cheers
[14:11] hazmat, I'll double-check which of those could have barfed a NodeExists at me, but haven't managed to repro it, so not to worry :)
[14:20] fwereade, i've got a branch which pushes the path into the exception of txzk.. i haven't felt a 100% comfortable with it.. but its useful for debugging these sorts of things
[14:20] fwereade, its https://code.launchpad.net/~hazmat/txzookeeper/errors-with-path/+merge/77254
[14:21] hazmat, thanks, that's awesome; I'll grab it if I screw things up in the same way again :)
[15:29] m_3: thinkup charm needs another round o' review
[15:31] m_3, thanks again for setting up the reconnoiter ppa
[15:40] hazmat, I appear to be unable to recreate the unit agent presence node when the unit was previously "kill -9"ed (NodeExistsException, despite the fact that it was created ephemeral)
[15:40] hazmat, please confirm that I am smoking crack
[15:41] fwereade, the session is still alive
[15:41] fwereade, the agent need to store their session id, and reconnect with that only creating a new session if the previous is expired
[15:42] hazmat, awesome, the universe still works, I'm just ignorant
[15:42] * fwereade sighs with relief
[15:42] fwereade, we all gotta start somewhere.. for example if you put a sleep(10) in there it would just work ;-)
[15:43] fwereade, so i'd setup the base agent to store the client_id after connect to file on disk, and utilize that path if it exists when connecting for the session id
[15:43] if it doesn't work (sessionexpired) just reconnect sans the id
[15:43] hazmat, cool, on the plus side the unit agent does appear to work if it's killed normally :)
[15:43] fwereade, nice
[15:44] fwereade, yeah.. i was going through the workflow api.. you had it right.. transition_state is optimistic, fire_transition was pessimisitc
[15:44] hazmat, cool, that bit is changing a little in the current work anyway
[15:44] fwereade, what's current? which branch is this?
[15:45] fwereade, ie. are these changes to what's in the queue?
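
A minimal sketch of the hook code-reuse question that opened this log. james_w's answer shows the shell form (`. ./hooks/shared`, relying on hooks running with the charm root as the working directory); the equivalent for hooks written in Python is a shared module sitting in hooks/ that each hook imports. The file names (`hooks/common.py`, `hooks/install`) and the helper are made up for illustration.

```python
#!/usr/bin/env python
# hooks/install -- an example hook that reuses code from a sibling file.
# Hooks run with the charm root as the working directory, so hooks/ can be
# put on sys.path and any non-hook file there imported like a normal module.
import os
import sys

sys.path.insert(0, os.path.join(os.getcwd(), "hooks"))
import common  # hooks/common.py: plain Python, not itself a hook

common.do_shared_setup()
```

The only practical caveat is to give the shared file a name that doesn't collide with a real hook name, so juju never tries to execute it directly.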
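
The NodeExists confusion above comes down to ZooKeeper sessions outliving a kill -9'd agent: ephemeral nodes only disappear when the session expires, not when the process dies. Below is a rough sketch of the reconnect pattern hazmat suggests (persist the client id to disk, reuse it on restart, fall back to a fresh session when the old one is gone). It uses the low-level zkpython bindings rather than txzookeeper, and the file path, timeout and helper names are assumptions for illustration only.

```python
import pickle
import threading
import zookeeper

SESSION_FILE = "/var/lib/juju/unit-session.id"   # illustrative path
TIMEOUT_MS = 10000

def _connect_once(servers, client_id=None):
    """Open a connection, optionally resuming a saved (session_id, password)."""
    connected = threading.Event()
    result = {}

    def watcher(handle, event_type, conn_state, path):
        result["state"] = conn_state
        connected.set()

    if client_id is not None:
        handle = zookeeper.init(servers, watcher, TIMEOUT_MS, client_id)
    else:
        handle = zookeeper.init(servers, watcher, TIMEOUT_MS)
    connected.wait(TIMEOUT_MS / 1000.0)
    return handle, result.get("state")

def connect(servers="localhost:2181"):
    """Reuse a saved session id if we have one; otherwise start a new session."""
    client_id = None
    try:
        with open(SESSION_FILE, "rb") as f:
            client_id = pickle.load(f)           # (session_id, password)
    except (IOError, EOFError):
        pass

    handle, state = _connect_once(servers, client_id)
    if state != zookeeper.CONNECTED_STATE:
        # Saved session expired (or never existed): reconnect sans the id.
        zookeeper.close(handle)
        handle, state = _connect_once(servers)

    with open(SESSION_FILE, "wb") as f:
        pickle.dump(zookeeper.client_id(handle), f)   # persist for next restart
    return handle
```

Reconnecting within the session timeout keeps the old ephemeral presence node alive, which is also why hazmat's "put a sleep(10) in there" joke works: waiting out the timeout lets the old session die first.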
[15:45] hazmat, it's work on top of it, I probably should have written a bug for it
[15:45] fwereade, no worries, lbox makes us lazy ;-)
[15:46] hazmat, it's the distinction between "uses upstart" and "actually works properly in certain circumstances"
[15:47] fwereade, cool, i'm going to delay the session expiration work to build on the upstart stuff your doing, else we have to introduce additional primitives regarding stop/restart doing hook execution, vs stopping activity
[15:47] hazmat, yeah, figuring out just what should happen if we suddenly die mid-hook is beyond my understanding at the moment
[15:47] just doing a suicide on expiration seems like it would be easier than introducing additional paths for stop 'only activity'
[15:48] hazmat, that sounds good though
[15:48] fwereade, that's fine, when we save the queue state to disk, it should re-execute the hook
[15:48] fwereade, we're going for the at least once guarantee
[15:48] hazmat, cool
[15:48] instead of the at most once
[15:49] the latter has the potent to miss an exec
[15:49] * hazmat grabs some coffee
[15:59] Hey guys, any idea on this: http://askubuntu.com/questions/82632/juju-bootstrap-problem
[16:22] marcoceppi, in process on answering it now
[16:25] marcoceppi, hmm.. actually that's not what i thought it was
[16:25] Anything I can ask him to get more info>
[16:28] marcoceppi, not sure, i posted something, it looks like he might have another dns server on that machine
[16:28] perhaps a local caching proxy
[16:28] ah, thanks hazmat
[16:29] marcoceppi, i thought i'd setup a subscription to juju questions, but i don't every seem to get notified about new questions
[16:30] hazmat How did you subscribe?
[16:30] marcoceppi, via the web on the tag
[16:30] Ah, that only highlights questions in the main feed, IIRC
[16:31] hum, maybe not.
[16:31] marcoceppi, i don't get it.. it doesn't notify me of all new questions w/ the tag.. that question is in the rss feed
[16:32] marcoceppi, hmm.. seems like there are multiple interfaces for subscription management..
[16:33] trying out the stackexchange one from my profile now
[16:33] hazmat Check out your profile's email preferences, make sure the checkbox to receive emails is selected. http://askubuntu.com/users/preferences/28532
[16:33] It doesn't appear you do. Kind of seems like a bug.
[16:34] You can also change the frequency of emails. It defaults to a daily digest, IIRC
[16:35] marcoceppi, that seems to me the trick thanks
[16:35] s/me/be
[16:35] np
[16:49] Nice cozy juju room :) Hi guys
[16:52] amithkk, greetings
[16:53] hi hazmat
[16:59] bcsaller, jimbaker you guys in today?
[17:03] * SpamapS tosses amithkk a beverage
[17:03] welcome!
[17:03] fwereade, up for a standup?
[17:03] bcsaller, jimbaker, fwereade invites out.. low attendance expected ;-)
[17:03] hi
[17:04] hazmat, ah yeah
[17:04] * amithkk drinks beverage
[17:07] * SpamapS cries out "NO WAIT!"
[17:15] What is a standup?
[17:17] mainerror, its a very short meeting
[17:17] I see.
[17:17] mainerror, we just do a quick roundtable on dev tasks
[17:18] less than 5m
[17:18] mainerror, http://en.wikipedia.org/wiki/Stand-up_meeting
[17:23] looks like the latest charm-tools package has a broken link... I'll try it on another machine to verify
[17:29] m_3, is that reconnoiter ppa a daily build recipe attached to the imported branch?
[17:29] m_3, looking at the imported branch recipes.. i just see https://code.launchpad.net/reconnoiter/+recipes
[17:30] SpamapS: but if he knew that you knew that he knew which vial was poisoned...
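
hazmat's "at least once" remark earlier in this stretch (save the hook queue to disk, re-execute the hook after a crash rather than risk skipping it) is the standard persistent work-queue trade-off. A toy sketch of the idea follows; the file path, queue format and the way hooks are invoked are all invented for illustration, not juju's real queue.

```python
import json
import os
import subprocess

QUEUE_FILE = "/var/lib/juju/hook-queue.json"   # illustrative path

def load_queue():
    if os.path.exists(QUEUE_FILE):
        with open(QUEUE_FILE) as f:
            return json.load(f)
    return []

def save_queue(queue):
    with open(QUEUE_FILE, "w") as f:
        json.dump(queue, f)

def run_pending_hooks():
    """Execute queued hooks, forgetting each one only after it has run.

    Dying between executing a hook and saving the shrunken queue means the
    hook runs again on restart: at-least-once, never at-most-once, so an
    execution can be repeated but never silently missed.
    """
    queue = load_queue()
    while queue:
        hook = queue[0]
        subprocess.check_call([os.path.join("hooks", hook)])
        queue.pop(0)        # only drop the entry after the hook completed
        save_queue(queue)
```

Flipping the order (drop the entry, then execute) would give at-most-once instead, which is the variant hazmat says "has the potent to miss an exec".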
[17:30] no, I didn't use the recipe, just built the daily from the other day
[17:30] hazmat: ^
[17:32] hazmat: figured if it worked, then we could use the recipe to do a daily build. I had to update debian/control to add deps to get it to build, so we'll have to add that to the recipe
[17:32] happy weekends all :)
[17:32] fwereade, cheers, have a good one
[17:32] fwereade: you too
[17:33] m_3, cool we should probably send a control diff upstream as well, it was just the protobuf dep?
[17:33] yup
[17:33] m_3, cool
[17:33] * hazmat lunches
[18:23] hazmat: bout how much longer after a charm is accepted does it take to show up in your charm browser thing?
[18:23] jcastro, 15m after its pushed to lp
[18:24] jcastro, it won't appear if the charm has structural errors
[18:24] jcastro, which one where you thinking of?
[18:24] thinkup
[18:24] jcastro, thinkup is there already
[18:24] ROCK.
[18:24] jcastro, its in a ppa, so you need to find via search
[18:24] unless its in the official distrib
[18:25] http://charms.kapilt.com/~george-edison55/oneiric/thinkup
[18:25] right so in the bug report mmims +1'ed it, I guess the next step is to 'promote' it or whatever we do?
[18:25] but then he found a bug in the charm tool or something
[18:25] jcastro, yup
[18:26] jcastro, per the spec its actually posisble for a ppa charm to be the official one, but the charm browser and practice seems to be moving it to ~charmers ownership
[18:29] oh ok, so basically we move it
[18:29] and then tell him to just propose fixes from then on?
[18:31] it needs to be promogulated, right?
[18:31] can we rely on the fact that we've always got cloud-init? or can that be provider-dependent?
[18:32] promulgated*
[18:32] hazmat: ^
[18:32] rog, we use cloud-init everywhere but local provider
[18:33] rog, bcsaller and i debated using it there, but opted to pass on it to keep local fast
[18:33] we suck for naming it "promulgate" by the way, when "charm bless" or "charm sugar" would have been awesomer.
[18:33] jcastro, or just give him r/w access to it
[18:34] marcoceppi, i think m_3 and SpamapS know the procedure.. i'd have to look it up
[18:34] hazmat: darn. ok, thanks.
[18:34] hazmat: I've done it once before if they're pre-occupied
[18:34] rog, why? whats up?
[18:34] yeah, I can promulgate... just chasing down what's wrong with charm-tools
[18:34] rog, its generally a pretty reasonable assumption
[18:34] m_3, need another set of eyes?
[18:34] hazmat: just wondering what code can be factored out of providers
[18:35] also didn't know if acceptance meant it should be promulgated to lp:charm or stay in a ppa
[18:35] hazmat: in Go the decision has to be a bit more clear-cut than in python.
[18:35] rog, well the cloud init stuff is factored out of the provider implementations, individual provider implementations will by default get a common impl of the provider that uses cloud-init
[18:35] m_3: what's the difference from the deployment point of view of where the charm is?
[18:35] hazmat: I'm still trying to figure out how the packaging process works for charm-tools... the latest version just isn't installing enough stuff
[18:36] m_3: either way I need to charm get right?
[18:36] hazmat: yeah, i saw that, but hadn't seen that local didn't use the common implementation
[18:36] jcastro, yeah.. charms are always fetched locally, even with a store
[18:36] jcastro: yeah, it'll just be a different url
[18:36] er. deployed from local
[18:36] ok so from the end user perspective, they don't need to do anything different?
[18:36] the store just automates the fetch to local and deploy to env
[18:36] in other words, is this our problem or a user-visible problem?
[18:37] rog, it does use the base class, it just overrides that behavior
[18:37] by implementing its own start machine method
[18:37] jcastro: so that's the difference... right now 'charm get thinkup' says it doesn't exist in the official charm store
[18:37] rog, the local provider is a bit of abberation in several respects, besides the lack of cloud-init, there is no provisioning agent
[18:37] so I'll promulgate it
[18:38] rog, in the local provider case there is only one machine the host
[18:38] so cloud init never runs, we treat the containers as containers instead of machines
[18:38] the notion being that if where a new machine, we would indeed run cloud-init
[18:39] hazmat: yeah, it's an exception that seems like to be making things a bit harder, cos it makes everything less regular
[18:39] rog, it is actually pretty regular the delta is that its an environment with exactly one machine
[18:40] hazmat: it's things like no provisioning agent, etc
[18:40] rog, and that we have multiple units with isolation on it.. the original impl that SpamapS did had modeled containers as machines
[18:40] rog, there is only one machine in the env, so what does the provisioning agent do?
[18:40] hazmat: was that too slow?
[18:40] rog, no just that niemeyer didn't like the model that way
[18:40] hazmat: i'd model containers as machines...
[18:40] hazmat: fair enough
[18:40] rog, there was a long discussion on list about this
[18:41] rog, the problems with networking still haven't been addressed unfortunately
[18:41] hazmat: depends how you think of the local provider, i guess - as a test for the system, or as a thing to do useful work with
[18:41] wrt to pushing lxc everywhere
[18:42] rog, the goal of getting lxc everywhere is better served by the current impl (machines deploying units in containers), but lacking an appros solution for the networking, its a bit moot
[18:42] for LXC everywhere, i guess it'd make sense to treat containers as machines and have a provider on every real machine
[18:42] rog, not really
[18:42] rog, providers model real billable resources, container ares not
[18:42] s/ares/are
[18:42] oops
[18:43] i didn't mean provider, i meant provisioning agent
[18:43] totally different thing
[18:43] rog, same thing, providers are actualized by a provisioning agent
[18:43] there is no provider api for creating a container, the provider is abstraction to external resource interaction
[18:44] interesting.
[18:44] for the provisioning of compute resources, lxc are are just logical subdivisions of those resources imposed by juju for units
[18:44] if there *was* a provider api for creating a container, might that make things easier?
[18:45] because then creating a container would be very similar to allocating a new machine
[18:46] once you've got multiple levels of provider, spanning across multiple billable providers might become easier. but i'm speaking off the top of my head here, obviously :-)
[18:47] hazmat: what are the problems with the networking, BTW?
[18:47] rog, i don't think a container provider api helps at all, its a different level of the conceptual stack
[18:48] rog, your trying to posit the provider agent living on every machine.. why thats what the machine agent is there for
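
To make the provider layering hazmat describes above a bit more concrete: cloud providers inherit a common launch path that hands cloud-init user data to the cloud API, while the local provider overrides that step because its only "machine" is the host and units go straight into containers. The class and method names below are illustrative only, not juju's actual provider interface.

```python
class MachineProviderBase(object):
    """Common behaviour shared by the cloud providers (EC2 and friends)."""

    def start_machine(self, machine_id):
        user_data = self.build_cloud_init(machine_id)
        return self.launch_instance(user_data)   # provider-specific API call

    def build_cloud_init(self, machine_id):
        # Boot-time setup done the same way on every cloud: install the
        # juju agents, point them at ZooKeeper, and so on.
        return "#cloud-config\nruncmd:\n  - start juju-machine-agent\n"

    def launch_instance(self, user_data):
        raise NotImplementedError("each cloud provider supplies this")


class LocalProvider(MachineProviderBase):
    """The local provider has exactly one machine (the host), so it skips
    cloud-init and deploys units directly into LXC containers."""

    def start_machine(self, machine_id):
        return self.start_container(machine_id)

    def start_container(self, machine_id):
        raise NotImplementedError("container start elided in this sketch")
```

This is the "it does use the base class, it just overrides that behavior" point: the delta is one overridden launch step, not a separate code path.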
[18:49] rog, the networking problem is what i outlined on the mailing list, namely that because we're using dynamic port allocation for units, which itself is a less than 0.1% problem, instead of static port allocation in metadata, we lose the ability at deploy time to determine conflict/optimal placement of units when machines have multiple units
[18:49] What is the "peers" metadata config section for?
[18:49] i guess in the end, it doesn't matter what agent does provisioning, just as long as it follows the right zk state
[18:50] meaning that we can never have more than one unit on a machine without a soft network overlay
[18:50] marcoceppi, its for a special type of relation used to talk within a service among its service units.. like cassandra, or riak, mongodb replica set..
[18:51] the units of those services need to be aware of the other members of the service and to talk to them, for things like setting up token rings for sharding or replication, or leader election
[18:51] rog, indeed
[18:51] hazmat: Cool, thanks
[18:52] hazmat: i wondered about peer relations as well. how exactly do they differ from provides/requires relations?
[18:52] rog, provides/requires are to model client/server relations across services, peers model intra service relations
[18:53] inter vs intra service relations.. for the one liner ;-)
[18:53] hazmat: so a peer relation is always established?
[18:55] rog, it is, but that's a convenience of the deploy cli impl, the rel impl is the same at the declaration and actualization layer as a client/server.. the main impl difference is in the watch mechanism feeding into hook execution
[18:57] hazmat: is hook execution any different for peer relations?
[18:57] rog, no
[18:59] rog, the structural layout of the relation is slightly different leading to slightly different related unit watching, instead of watching the units of the opposite end's service, it watches the units of the same service.. that feeds into a hook scheduler, which queues hook exec, the actual hook impl/semantics are the same
[19:00] hazmat: thanks
[19:01] i think i wrote most of that in a mad crazy code cycle this time last year
[19:03] hazmat: it's all in the machine agent, presumably
[19:03] rog, not at all
[19:03] rog, the machine agent has no responsibilities outside of deploying units
[19:03] and removing them if unassigned from the machine
[19:03] hazmat: ah, i thought the machine agent executed hooks
[19:04] rog, the unit agent does all unit management
[19:04] rog, most of this is abstracted into a unit workflow/lifecycle that the unit agent just starts
[19:04] hazmat: so it does, i've just seen
[19:05] hazmat: hmm, so... is there one machine agent per machine?
[19:05] rog, yes
[19:05] hazmat: but currently a machine maps exactly one to one with a unit, right? so what does the machine agent actually do?
[19:05] * rog should just go and read the damn code!
[19:06] ok, yes, it doesn't do much
[19:07] rog, none of the agents do very much, they activate semantic logic protocols in their process, but the agent code is always pretty minimal
[19:08] rog, the provisioning agent has been an exception to that, but not for long (expose-refactor pushes the firewall stuff out)
[19:09] rog, at the risk of repeat myself... machine agents only manage unit deployments
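
A sketch of what the peer relation discussion above looks like from a charm's side, following hazmat's cassandra/riak/mongodb examples: every unit of the service sees every other unit, and a peer-relation hook can collect their addresses to build a replica set or token ring. `relation-list`, `relation-get` and `unit-get` are standard hook tools; the relation name `cluster` and the settings key `hostname` are assumptions made for this example.

```python
#!/usr/bin/env python
# hooks/cluster-relation-changed -- runs whenever membership of the peer
# relation changes; peers are intra-service, unlike provides/requires,
# which join two different services together.
import subprocess

def call(*cmd):
    return subprocess.check_output(cmd).decode().strip()

# The other units of this same service that are in the peer relation.
peers = call("relation-list").split()

members = []
for unit in peers:
    addr = call("relation-get", "hostname", unit)
    if addr:
        members.append(addr)

# Include ourselves, then hand the full membership list to the service,
# e.g. to configure replication or leader election.
members.append(call("unit-get", "private-address"))
print("cluster members: %s" % ", ".join(sorted(members)))
```

As hazmat says, hook execution itself is no different for peers; the only difference is which set of units the agent watches to decide when a hook like this fires.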
[19:10] their not one to one with a local provider
[19:11] their only one to one elsewhere due to the network issues as above, either a soft overlay network or static ports would resolve that
[19:11] i'd vote for static ports, otherwise you get addressing issues.
[19:12] yeah.. me too
[19:12] encapsulating units into lxc everywhere also solves the issue of the charm scribbling on a machine's root fs which makes machine reuse problematic
[19:13] gotta go
[19:13] thanks for the discussion
[19:13] rog, cheers have a good weekend
[19:13] and you
[19:13] james_w: fyi, your graphite and txstatsd charms got an initial review
[19:19] yeah, thanks
[19:20] important fixes needed, I just need to find the time now
[20:21] m_3: AWWW YEAH
[20:21] thinkup is working
[20:21] http://pad.ubuntu.com/thinkup
[20:21] are the commands I used.
[20:28] jcastro: phpmyadmin is reported as blocked BTW
[20:28] awesome!!
[20:28] ^ re: thinkup
[21:05] http://cloud.ubuntu.com/2011/11/deploying-thinkup-to-the-cloud-with-juju/
[21:22] jcastro: gotta do a charm get mysql too
[21:24] (in the article you reference above)
[21:27] whoops!
[21:55] charm-tools should be fixed shortly btw
[22:00] SpamapS: thanks!
[22:16] SpamapS I hope it wasn't anything I did
[22:16] no
[22:16] definitely me
[22:16] rushing things before throwing the turkey in the oven yesterday ;)
[22:19] can you guys try updating charm-tools on 11.10?
[22:23] m_3: ^^
[22:23] marcoceppi: ^^
[22:23] Has it been packaged?
[22:24] it just updated
[22:24] * marcoceppi wishes he can edit messages in IRC
[22:24] the PPA claims it is published
[22:24] Looks like it: 0.2+bzr85-4~oneiric1
[22:27] SpamapS: will check now
[22:30] SpamapS: look fine
[22:31] m_3: cool. thanks for the heads up
[22:31] thanks for not making me learn about recipes tonight :)
[23:10] What are the restrictions of names for charms? Just alphas?
[23:17] marcoceppi, alpha num not leading alpha, and '-'
[23:17] er. leading alpha
[23:31] thanks
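
hazmat's closing answer about charm names (alphanumerics plus '-', with a leading alpha) translates to a one-line check. The log doesn't say whether uppercase is allowed, so this sketch assumes lowercase only.

```python
import re

# Leading letter, then letters, digits or hyphens (lowercase assumed).
CHARM_NAME = re.compile(r"^[a-z][a-z0-9-]*$")

for name in ("thinkup", "php-myadmin2", "2fast", "-nope", "under_score"):
    print(name, bool(CHARM_NAME.match(name)))
```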