[00:26] bcsaller, can you finish up the review on this one .. https://codereview.appspot.com/5752069/ [00:26] the force upgrade [00:27] hazmat: yeah, looking now [01:36] hazmat: Uh, indeed.. I guess I got it wrong [07:33] anyone around who can do some reviews please? [07:43] fwereade_: hi [07:43] heya bigjools [07:43] fwereade_: I am this ---><--- close to getting the maas provider working [07:44] just one small issue left [07:44] bigjools, I can absolutely do some reviews, but there's at least one other I have to finish first [07:44] which I need your advice on [07:44] bigjools, sweet, can I help with that? [07:44] when the master starts up a second node, it blows up at the point where it tries to get the charm from the provider file store [07:44] because it doesn't use authentication [07:45] I think we talked about this before but I can't remember the resolution [07:46] bigjools, we touched on it ever so briefly... IIRC all I said was something like "yeah, it would be sensible to require authentication for writes" [07:48] your connection is flaky today [07:48] bigjools, how would you feel about either making the provider storage readable without auth, or about perhaps using signed URLs as the ec2.files get_url method does? [07:49] fwereade_: the provider already uses a signed request, I don't know why it's not been used in this case [07:49] perhaps the config is not available? [07:50] bigjools, the url used to access the charm will have originally been created by MaaSFileStorage.get_url [07:51] fwereade_: the url is correct but it is not getting dispatched using the correct method that adds an oauth header [07:52] bigjools, hm, who would it be authenticating as? [07:52] whatever is configured in environments.yaml [07:52] bigjools, nodes shouldn't have access to provider credentials [07:52] why was none of this in docstrings? :/ [07:53] ok so is it safe to make "get" operations unauthed? 
[07:53] bigjools, I guess the answer is "we're bad at predicting which context people will need" :( [07:54] bigjools, we think so [07:54] I'll see if we can do oauth in the url as well [07:54] otherwise, no auth :) [07:55] bigjools, if you can sign the urls that would be nice but don't overexert yourself; no auth was considered acceptable for orchestra [07:55] ok [07:55] bigjools, there are definitely a number of identity/auth questions that have yet to be addressed [07:56] it's a trivial change to remove auth in maas for file retrieval [07:56] bigjools, then tbh I would go with that [07:57] bigjools, the question of agent identity is I think agreed in principle, but at the moment we just trust them [07:57] bigjools, it's one of the many stories I would love love love to have addressed before 12.04, but... :( [07:57] ok :( [07:59] fwereade_: is there a way to restart the agent on the node so it retries getting the charm [07:59] ? [07:59] bigjools, you should be ok to just kill the machine agent process [07:59] bigjools, (IMO the unit agent should really be the one getting the charm, but hey ho) [08:00] bigjools, (it's upstartified, it'll come back up (I hope :p)) [08:00] ok [08:00] I'll fix0rate maas [08:23] fwereade_: ok got past that hurdle only for a new steeplechase [08:24] in the charm.log it has: [08:24] juju.errors.JujuError: Unknown provider type: 'maas', unit addresses unknown. [08:24] so I guess I am missing some config? [08:25] bigjools, hmm, let me investigate [08:25] bigjools, would you pastebin the log? [08:26] fwereade_: http://pastebin.ubuntu.com/894806/ [08:26] bigjools, actually, sorry, it's clear [08:26] bigjools, juju/unit/address.py [08:52] fwereade_: oooookay, we have a charm deploying [08:53] * fwereade_ cheers and showers bigjools with confetti [08:53] bigjools, awesome [08:53] what a long road! [08:57] fwereade_: so is that UnitAddress supposed to be public or private? 
[08:57] bigjools, it should provide both [08:57] fwereade_: what's it for? [08:57] bigjools, if there's no such thing as a private address just return the public one in that case [08:57] not a single comment in that file! [08:58] bigjools, the only thing I'm sure of is that hooks can do something like `unit-info public-address` [08:58] bigjools, and it's almost certainly used by `juju status` [08:58] bigjools, I'm not aware of other uses [09:01] ok thanks [10:22] allenap, ping, can we chat about _extract_system_id in your branch? (or possibly bigjools, if you have context?) [10:23] fwereade_: Sure. [10:23] fwereade_: Voice or here? [10:23] allenap, here should be fine; I was just wondering about the return-unchanged behaviour [10:24] allenap, what's the motivation? isn't it just deferring an exception minutely? [10:25] fwereade_: I guess it was to defer the decision as to what is or isn't correct back to maas, in case of doubt. [10:26] allenap, the only plausible case I can see is if *something* for some reason holds a system_id instead, in which case it does the "right" thing, kinda, but IMO the presence of a system_id in place of a resource_uri is unquestionably indicative of a bug [10:26] allenap, is there some scenario I've missed? [10:26] "Be forgiving with input, be unforgiving with output" or something like that. [10:27] fwereade_: No, you haven't missed anything. [10:27] fwereade_: I'm happy to raise an exception instead. [10:27] allenap, I see this as pre-existing internal data, rather than input, myself [10:27] allenap, cool, if you would do that that would be great [10:27] fwereade_: Would an assertion fit with the rest of Juju, or should I raise some other exception? 
[10:28] allenap, I'd probably suggest a juju.errors.ProviderError [10:30] allenap, up to your judgment whether it should be exposed or just tested indirectly via garbage params to get_nodes [10:30] allenap, I'm going to repaste, some of it probably got lost :/ [10:30] allenap, I'd probably suggest a juju.errors.ProviderError [10:30] allenap, possibly even MachineNotFound but I'd need to think about that a bit [10:30] allenap, the only other thing is that we don't like to import anything with an _, even in tests [10:30] allenap, but we're not opposed to exposing utility functions purely for tests [10:30] --- Disconnected (Connection reset by peer). [10:30] --> You are now talking on #juju-dev [10:30] allenap, up to your judgment whether it should be exposed or just tested indirectly via garbage params to get_nodes [10:31] allenap, I'll just make those comments on the review, ping me when it's ready for another round ;) [10:31] fwereade_: I'll change it to a public function. [10:31] fwereade_: Thank you! [10:32] allenap, perfect [10:38] fwereade_: i was just looking at https://codereview.appspot.com/5847059 and realised there's a big gap in my knowledge - what is the scheduler actually *doing*? [10:39] rogpeppe, handling callbacks from watch_unit_relations [10:39] rogpeppe, which represent either settings node version changes for known relations, or adds/removes on the set of related ones [10:40] fwereade_: ah, so we probably wouldn't have a direct equivalent in the go port - we'd probably just use a goroutine and listen on channels, right? [10:40] rogpeppe, and converting that stream of events into a stream of required hook executions in a context with the correct membership for the time at which the change was noted [10:40] rogpeppe, I'm deliberately not thinking about that part of the problem yet, I don't want to prejudice the implementation ;) [10:41] fwereade_: that's fine. 
i (obviously) am :-) [10:41] rogpeppe, my instinct says that hook scheduling will look somewhat different in go but that's all I can say really :) [10:42] fwereade_: BTW the amazon tests work again :-) [10:42] fwereade_: although only about 60% of the time :-( [10:43] i think we should do more retries when ec2 gives us an error that's likely to be transient. [10:43] fwereade_: have you got any outstanding reviews i should be looking at? [10:44] rogpeppe, yes please, I should have a few on the active reviews page [10:44] rogpeppe, re amazon, sounds sensible [10:46] i can never remember how to find the active reviews page [10:48] ah, found it. [10:49] fwereade_: are they in any particular order? [10:49] rogpeppe, most are independent, but tweak-supercommand sits on top of add-cmd-context [10:50] rogpeppe, although I'm not sure it actually needs to [10:50] fwereade_: Ready for round 2? [10:50] allenap, sure [10:51] allenap, can't see updated MP yet [10:51] fwereade_: Sorry, I should have waited until that was ready before pinging. [10:51] allenap, np, it's not a costly check for me ;) [10:53] allenap, btw, heads up: not sure how you're deploying your charms (by full name including series, I guess?) but you'll need to handle default-series in environments config at some stage [10:53] allenap, what this actually means is that you can completely ignore the requirement [10:53] allenap, but you'll need to add a line temporarily to your environments.yaml when it lands [10:54] fwereade_: I don't really know what that means :) [10:54] allenap, it won't last long at all, and you should never otherwise have to care about it [10:54] default-series, that is. [10:54] allenap, `juju deploy wordpress` infers series from default-series [10:55] allenap, `juju deploy cs:precise/wordpress` knows the series from the charm url [10:55] fwereade_: What does series refer to? Distroseries, or something else? 
[10:55] allenap, yeah, exactly [10:55] allenap, charms target a specific series [10:55] allenap, (that we must guarantee they actually be deployed on ;)) [10:56] allenap, so if you see a default-series error cropping up after a merge, go and add "default-series: oneiric" to your maas config and forget about it [10:57] fwereade_: Okay, thanks. [10:57] allenap, and when you later see errors complaining that default-series is no longer valid, remove it :) [10:57] Hehe :) [10:57] allenap, you *might* not even be hit by it [11:06] rogpeppe, TheMue: btw, I'm not sure this was actually announced while you were around: I'm focusing on python for a couple of weeks [11:06] fwereade_: ah, ok. yeah, i didn't know that. [11:06] rogpeppe, TheMue: so I am unlikely to be a timely and helpful reviewer [11:07] fwereade_: :-| [11:07] rogpeppe, TheMue: I will ofc hit those if I have any time left over in my ~1 go hour per day [11:07] fwereade_: (i enjoy your reviews!) [11:07] rogpeppe: thanks :) [11:08] rogpeppe, TheMue: and ofc I'm also always around to talk, but otherwise generally expect diminished engagement from me for a short time [11:22] fwereade_: There seems to be a problem with my branch; it's not getting scanned by Launchpad. I'm talking to the LP webops about it now. [11:23] allenap, np, if it looks likely to take too long let me know and I'll diff it manually [11:26] rogpeppe, a thought: the above doesn't mean I wouldn't really appreciate a reduction in my own go review backlog, or that I won't be trying to get them merged, just that they're not at the *top* of my list atm :p [11:27] fwereade_: hope you'll be back for go soon [11:27] TheMue, thanks, I hope so too [11:27] TheMue, I don't like python any more, it spells "true" wrong ;p [11:27] fwereade_: *lol* [11:34] fwereade_: Turns out I was being a numpty. It's there now. 
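The temporary workaround described above would look roughly like this in environments.yaml; only the default-series line itself comes from the conversation, the surrounding keys are illustrative:

```yaml
environments:
  maas:
    type: maas
    # Temporary: satisfies the default-series requirement for bare
    # charm names like `juju deploy wordpress`. Remove once juju
    # starts complaining that default-series is no longer valid.
    default-series: oneiric
```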
[11:41] allenap, cool [11:45] allenap, LGTM [11:45] allenap, I'll merge this and the other one shortly, just need to persist some of my own state before I do that though ;) [11:51] fwereade_: Thanks! There's a trivial follow-on from use-maas-uris - https://code.launchpad.net/~allenap/juju/maas-to-maas/+merge/98759 - and it's already approved by hazmat. Would you be able to land that at the same time? [11:52] allenap, surely :) [11:53] Thanks. [11:59] allenap, fwereade_ it lives! [11:59] hazmat, ...charm store? [11:59] hazmat, oh maas [12:00] hazmat, sorry braindead :p [12:00] hazmat: \o/ [12:00] fwereade_, the former soon as well hopefully [12:00] allenap, is there any api control over the image that's used on a machine in maas? [12:01] hazmat, actually, I need a spot of advice (but I think your question to allenap is more important, I'll sit quiet a mo) [12:01] hazmat: Not yet; it's plain precise only for now. It will definitely be added though. [12:02] hazmat, ok: I'm thinking about blocking d-i-i/d-i-t use on the amazon cloud only [12:02] hazmat, and it's leading me in a bad direction [12:02] hazmat, the problem is this [12:03] hazmat, to know whether an environment should accept those keys, we don't just need the ec2 uri [12:03] hazmat, we also need to know if we're running on a legacy environment [12:04] hazmat, (right?) 
[12:04] hazmat, so that we don't just render existing environments inaccessible to the point of not even being destroy-environment-able [12:05] hazmat, however, we cannot generally know if we're running in a legacy environment until we've seen how env state is stored in ZK [12:06] hazmat, which then means we have to check twice: once on bootstrap (in which case it's easy, don't accept bad keys) [12:07] hazmat, and once any time we want to connect to ZK (in which case we need to wait until we've connected to find out whether it's a legacy environment) [12:08] * hazmat catches up [12:08] hazmat, it would be insanity to find and tweak every call to provider.connect() [12:09] hazmat, but to avoid doing *that*, we have to make ec2.MachineProvider do its own check/barf in an overridden connect(), and that involves hitting the state module [12:09] hazmat, which I'm pretty sure the providers should not know about [12:10] hazmat, that's about as far as I'd got [12:10] fwereade_, if a user doesn't perform an activity that identifies a legacy environment as such, then its not really appropriate to cause warning imo [12:11] ie. if they're not deploying/adding-unit.. is it an active concern for the env [12:11] hazmat, hm, maybe it is just 3 checks [12:11] hazmat, bootstrap, deploy, add-unit [12:12] hazmat, ok, sounds good to me; thanks [12:12] fwereade_, cool [12:15] hazmat, hm, how would it be if I added something like get_in_legacy_environment() to GlobalSettingsStateManager [12:15] ? [12:17] hazmat, would just return False for now, so we always barf on bad keys, but if we're doing this in a parallel branch that's OK [12:19] hazmat, could always return True I guess but I'd prefer to merge something that has the eventual desired behaviour but which doesn't ever actually trigger until there's a way to detect legacy environments [12:20] hazmat, sensible? [12:38] fwereade_, what needs to use get_in_legacy_environment? 
[12:38] fwereade_, the new code will end up turning legacy into new style data structures, its the old code that needs to use the legacy data, and it won't be using a new api [12:38] fwereade_, but the legacy structures will still be in place [12:52] hazmat, I need to use it to do the checks: in a legacy environment we want to warn about those keys, but in a new one we want to error; don't we? [12:54] fwereade_, true [12:55] hazmat, does GSSM sound like the right place to you for now? [13:04] fwereade_, perhaps on env manager [13:04] hazmat, cool, whatever's best for you [13:04] either works, but env manager is used more commonly in the places we need to check this [13:05] gssm is used more when we tweak or read providertype/debug-log [13:21] Heh, great.. I was talking to myself.. [13:21] Good morning all :) [13:21] hazmat: Are you around yet by any chance? [13:22] niemeyer, yes.. just finishing up a subordinate review [13:22] hazmat: Heya [13:22] hazmat: I'd like to sync up at some point re. the charm store [13:23] hazmat: For the initial test/beta/alpha/point-nil phase :-), we'll be deploying it without an SSL certificate [13:23] hazmat: Which means we'll need to s/https/http/ on the code [13:23] niemeyer, i guess without cert checking its rather moot [13:24] niemeyer, yeah.. easy enough its a one liner [13:24] hazmat: Super, +1 on cowboying it [13:24] niemeyer, does that mean a deploy is near? [13:24] hazmat: Yeah, as quick as I can develop a charm for the charm store :) [13:24] hazmat: We went full round and it's back on me now [13:25] niemeyer, so like 10m ;-) [13:25] hazmat: Kind of :).. I mean a serious charm! :) [13:27] dog walk bbiab [14:01] mthaddon: So, I'm dropping the SSL need, to avoid the conflict there [14:02] niemeyer: you've discussed this with elmo? 
not sure if the issue was the SSL cert specifically or the ubuntu.com domain [14:02] mthaddon: I didn't discuss this with elmo, but that's what I understood was a problem from our conversation yesterday [14:02] mthaddon: There's nothing special about the ubuntu.com domain, AFAIK.. [14:03] mthaddon: If that's an issue too, I'm happy to change the client to store.eat-your-lunch.com [14:03] niemeyer: eh? there's quite a lot that's special about the ubuntu.com domain [14:03] ah, I see what you mean [14:03] mthaddon: Sure, no problem.. [14:03] hazmat: Let's change the domain too, please.. [14:07] niemeyer: I don't think we should change the domain [14:08] robbiew: It's fine.. I'll provide the Elastic Balancer domain to hazmat [14:08] robbiew: and will use the Elastic Balancer to distribute load on two independent charm store frontends, deployed with juju [14:09] robbiew: Won't be a first class domain, but no one sees that domain anyway [14:11] niemeyer: good point [14:12] niemeyer: admin-secret CL is now much smaller... except that lbox propose seems to have gone a little mad. [14:12] wrtp: How so? [14:12] niemeyer: it's showing lots of diffs that it shouldn't [14:13] wrtp: It's probably showing the diffs that bzr is showing [14:13] niemeyer: "diff --old lp:juju/go" shows the correct (small) diffs [14:13] wrtp: Take the two revisions that are in Rietveld, and do a diff [14:13] wrtp: Check if the revisions are right [14:13] wrtp: Or if the diff is different [14:14] niemeyer: good idea [14:14] niemeyer: BTW, how can i find out the complete revision id of the current branch tip? [14:15] wrtp: bzr revision-info [14:16] wrtp: bzr log --show-ids also [14:16] niemeyer: thanks [14:21] niemeyer, which domain? [14:22] niemeyer: looks like it's using the wrong revision-id for the old version [14:22] hazmat: Trying to figure out [14:22] wrtp: What's the revision id it's using? 
[14:22] roger.peppe@canonical.com-20120321155821-85i0cf6wo39qrpg6 [14:22] wrtp: Isn't that the revision id of the pre-req? [14:23] niemeyer: quite possibly - but that's wrong in this case, because the prereq has already been merged. [14:23] wrtp: I see, ok [14:24] wrtp: Will have to change lbox to use a different base depending on whether the pre-req was already merged or not [14:24] niemeyer: yeah - i wouldn't have thought of that either... [14:24] wrtp: Can you please just confirm that this is indeed the case? Do you get the same "wrong" diff if you diff against the pre-req? [14:26] niemeyer: https://codereview.appspot.com/5875047 is ready for a review [14:26] TheMue: Cheers! [14:26] niemeyer: moin ;) [14:26] andrewsmedina: ping [14:28] niemeyer: yes [14:28] TheMue: Erm, hold on [14:28] lunch [14:28] wrtp: yes, you can check, or yes, you've checked? [14:28] TheMue: I don't think we want that stuff right now [14:29] TheMue: This isn't in use yet [14:29] TheMue: Even in Python, I mean [14:29] TheMue: I didn't even recall that this was actually merged already [14:29] niemeyer: 64 bytes from andrewsmedina: icmp_seq=0 ttl=251 time=2.677 ms [14:29] niemeyer: hmm, thought i've seen calls of these functions [14:29] andrewsmedina: :-) [14:29] andrewsmedina: Heya [14:30] niemeyer: everything ok? [14:30] andrewsmedina: You've told me to ping you if there was something you might be involved in [14:30] andrewsmedina: I think I have something for you to participate in. Interested? 
[14:30] niemeyer: yes [14:30] andrewsmedina: It's a bit less straightforward than the last task, but actually important [14:31] niemeyer: I'm working on local env [14:31] andrewsmedina: hazmat is conducting some adaptations in the way environments are stored, introducing a couple of commands (set-env, get-env) and also adapting bootstrap to take these options [14:31] niemeyer: make_identity() is called twice outside of state and several times in state/security.py [14:32] niemeyer: nice [14:32] andrewsmedina: We need to adapt our side of things with similar logic [14:32] TheMue: This isn't in use.. [14:32] TheMue: Deploy a charm and try to see who touches that logic [14:33] andrewsmedina: Interested in picking the task? [14:33] niemeyer: i fixed the CL for the time being anyway [14:33] niemeyer: did in the python lib? [14:34] although i need to edit the description [14:34] niemeyer: oops [14:34] niemeyer: hazmat did this in the python lib? [14:34] wrtp: How? [14:34] andrewsmedina: He's working on it right now [14:34] niemeyer: why has it been developed? will it be needed in future? or is it just code that should have been removed? [14:34] andrewsmedina: You can get more details with him to see where's the branch with the Python diff, etc [14:35] andrewsmedina: It should be significantly simpler on our side, because not all of the things changing are implemented yet [14:35] niemeyer: and what do you need to be ported next? i still have the Watch…() methods in Unit and Service open. [14:35] TheMue: It has been developed because long ago there was a push to have more security around ZooKeeper, but other needs walked over that problem [14:36] niemeyer: understand [14:36] niemeyer: ok [14:36] hazmat: can you help me? [14:36] TheMue: I don't know if it's going to be used or not, but I'm against developing and maintaining code while we don't know the answer to that [14:36] niemeyer: i'll put the code in my archive, maybe we need it again later. 
;) [14:37] TheMue: Sounds great.. pushing it to Launchpad is also a good idea [14:37] TheMue: The one you proposed is obviously already there [14:37] TheMue: If you have another one, just push as well [14:37] andrewsmedina, yes [14:37] niemeyer: I did the implementations for interfaces related with local environment using lxc :) [14:37] TheMue: Regarding the Watch, I thought we had a pretty good agreement on that [14:37] * hazmat catches up [14:37] TheMue: What's still pending? [14:37] andrewsmedina: Wow, nie [14:37] nic [14:37] nice!! [14:38] andrewsmedina: Btw, this may be a good time to remind you that small branches are easier to deal with :) [14:39] niemeyer: I know [14:39] the security stuff is completely unused outside of making an identity for the admin user, and even there its not backed with an acl, so its not functional [14:39] niemeyer: maybe i'm lost in the fact that there's an agreement. last info here is the approach rog and i already made for a Watcher (specific goroutines waiting for change signals and retrieving all they need then) [14:39] niemeyer: I've been kinda busy this week [14:39] niemeyer: if that's the agreement i'm pretty fine with it [14:40] andrewsmedina, was there something in particular you needed help with ? [14:40] hazmat: you could show me the diff of what you did? [14:40] TheMue: I don't know what you and rog agreed.. I know that the three of us talked about an approach back in the Rally, and that we talked about this approach again a few days ago [14:40] TheMue: What's the question? [14:40] niemeyer: yes, that's the one i mean [14:41] niemeyer: only wanted to be sure. maybe you and rog discussed something else, so that i would start in the wrong direction. 
[14:42] TheMue: We did discuss it, but everything we talked about was here in the channel and back at the Rally [14:42] TheMue: We didn't have any discussions outside of that [14:43] TheMue: Did you see this, for example: http://paste.ubuntu.com/870030/ [14:43] andrewsmedina, you'll have to be more specific [14:43] niemeyer: yep, that's the one i have in mind [14:44] TheMue: Actually, I think rog had a more complete one [14:44] niemeyer: i think i still have the link in my temp folder [14:44] hazmat: niemeyer told me that you are conducting some adaptations in the way environments are stored [14:45] hazmat: I will do it in the Go port [14:46] andrewsmedina, gotcha, its not done yet.. i just started work on it yesterday, but basically the provider credentials/access go to /environment/provider and the rest go to /environment [14:46] and instead of setting the environment every deploy, its only done once, and subsequent modification requires the use of set-env [14:49] hazmat: you're working on a branch? [14:49] Boom [14:49] TheMue: Did you find it? === niemeyer_ is now known as niemeyer [14:51] SpamapS: around? [14:51] niemeyer: not yet, started with branching and inspecting the py code first [14:51] andrewsmedina, yup.. its lp:~hazmat/juju/environment-settings.. i haven't pushed yet, cause i'm still trying to figure out the api a bit [14:51] hazmat: ok [14:53] back [14:53] TheMue: Found it: http://paste.ubuntu.com/871544/ [14:53] niemeyer: i merged and pushed the prereq branch [14:53] wrtp: Brilliant [14:53] wrtp: Thanks [14:54] flacoste: I am, whats up? [14:54] TheMue: There may be a few details, like the Done order being inverted, but wrtp's basis on it is pretty good [14:54] TheMue: Ah, we agreed to get rid of Err for the moment too [14:54] SpamapS: did you intend to do an upload of juju to the archive today for beta freeze 2, if you do, can you make sure that the two branches necessary for it to work with maas are included? 
pretty please [14:55] one of them is merged at revision 487 [14:55] niemeyer: great, thanks. my agent watcher ones worked the same way (until i realised that it isn't needed here) [14:55] niemeyer: yeah, i think i had a slightly updated version that took into account our discussion; i'll have a look [14:55] the other one is still waiting to be merged: https://code.launchpad.net/~allenap/juju/use-maas-uris/+merge/98756 [14:55] SpamapS: Ah, if you are considering an update, please hold on for a couple of hours as I sort out the repo address with hazmat [14:55] SpamapS: Will be a one-liner [14:55] wrtp: Thanks much [14:56] niemeyer: well, he didn't reply yet, but i see you'd also like one :-) [14:56] flacoste: Yeah :) [14:56] flacoste: I had hoped to, but I have not seen subordinates fully drop yet [14:57] niemeyer: btw, do we have a (powerful) search interface to our irc logs? ;) [14:57] TheMue: Very powerful one.. grep [14:57] :) [14:57] SpamapS: i think it would still be worth it - even without subordinates, as that would allow people to test juju + maas from the archive [14:57] niemeyer: ok, on the command line [14:57] but subordinates would also be great! [14:58] niemeyer, TheMue: here's a slightly updated version. it still has Err though. http://paste.ubuntu.com/895174/ [14:58] niemeyer: the question is: are we prepared to discard all errors that the watcher encounters? [14:58] flacoste: I will file my FFe then. [14:58] niemeyer: (or just log them, i guess) [14:58] hazmat: http://23.21.254.154 [14:58] hazmat: This will do for now.. [14:59] hazmat: It's an Elastic IP, so I can make sure it is preserved, and lands on something sensible shortly [14:59] wrtp: Uh.. I'm certainly not prepared to discard any errors.. [14:59] niemeyer, hmm.. those do get recycled. [14:59] niemeyer, ie. we can't ever release that eip [15:00] i mean we can but its breakage [15:00] niemeyer: if there's no Err method, then the thing using the watcher can't get the error. 
[15:00] wrtp: Wait returns the error.. [15:01] niemeyer: FooWatcher doesn't have a Wait method [15:01] hazmat: They don't get recycled unless I say so [15:01] niemeyer: maybe it should, but the watch channel can fulfil that role. [15:01] hazmat: That's the main point of the Elastic IP [15:01] wrtp: It has a Stop method that returns the result of Wait [15:01] niemeyer, yeah.. i just remember here some startup story about how they started getting netflix traffic because of a recycled eip and dns caches [15:01] s/hearing [15:02] niemeyer: does that mean if you get eof on the channel that you always have to call Stop? [15:02] wrtp: It just means we have access to the error.. [15:02] wrtp: Let's see the code, please.. [15:02] * TheMue should develop a little Go app for searching inside the logs. in time ranges, with patterns, filtered by user, output with surrounding lines and links directly into the found occurrence [15:02] niemeyer: i pasted it above [15:02] wrtp: Let's see that code being used I mean. [15:03] niemeyer: sounds good. [15:03] wrtp: It's trivial to add an Err method if we need it [15:03] niemeyer: i don't mind if the caller always needs to call Stop actually. [15:03] wrtp: It's not trivial to suggest good usage without any usage [15:03] wrtp: Right, it should anyway [15:03] And I should get lunch! [15:03] biab [15:04] niemeyer: that's a nice invariant thing, even if we "know" that when we get eof on the channel, it's already stopped. [15:04] niemeyer: enjoy! [15:04] SpamapS: thanks! [15:09] TheMue: here's an updated version of the watcher demo code, according to the discussion above: http://paste.ubuntu.com/895188/ [15:16] SpamapS: for the upload, any revision starting at 488 will be good for MAAS [15:16] wrtp: thx [15:28] flacoste: great [15:28] andrewsmedina, its very much a work in progress and raw, but i went ahead and pushed the branch fwiw. [15:29] bcsaller: any chance subordinates will land in the next 3 hours? [15:29] hazmat: ^^ ? 
[15:29] SpamapS, no [15:29] ok, will leave that one out. ;) [15:29] SpamapS: once your FFe is filed, just ask Daviey for approval :-) [15:29] SpamapS, i think we've fixed most of the issues, but it needs more unit tests, and support for departed hooks [15:30] SpamapS, one moment, i've got one more branch to add [15:30] its a maas beautification thing [15:30] I won't be uploading until much later [15:30] just want to know what to put in the FFE [15:30] whats the status of constraints at the moment? [15:31] SpamapS, the branch currently removes support for some previous environments.yaml settings [15:31] SpamapS, its ready to go in though [15:32] the backwards compatibility thread basically stopped its merge [15:32] as it should. :) [15:32] * hazmat sighs.. [15:32] that's my fault [15:33] bigjools, allenap: maas branches merged, one of you please verify it still works ;) [15:33] i delayed on its review/merge because i wasn't comfortable with the static constraints instead of provider based ones. [15:34] but that wasn't really important, given the big picture [15:34] so now compatibility [15:34] fwereade_: I'll give it a go. bigjools is probably fast asleep. [15:38] fwereade_, thanks for merging all those maas branches [15:39] hmm.. eucalyptus compatibility.. [15:54] niemeyer, fwereade_, jimbaker can i get a +1 for this trivial (charm-store-url) http://paste.ubuntu.com/895250/ [15:57] moving into cowboy phase [15:59] hazmat, a pleasure [16:00] niemeyer, is there a store hooked up to that ip address? [16:00] hazmat, +1 despite the inherent cowboyishness, assuming niemeyer says "yes" to your last question ;) [16:00] allenap, cool [16:02] hazmat, +21 [16:02] well i meant +1 [16:02] fwereade_ there's a python store impl in the code base ? i wanted to try an end2end test [16:03] hazmat, there is a kinda hackish one, would be better to use the real one really [16:03] fwereade_, doesn't appear to be running atm [16:03] fwereade_, where is the python one? [16:04] nm.. oh. 
i guess i can just use the go one locally [16:05] hazmat, lp:~fwereade/juju/charm-store-hack fwiw [16:10] quite a few conflicts running charmload in parallel [16:32] team meeting? i'm happy to skip it [16:36] hazmat, +1 on skip [16:37] hazmat, also +1 :) [16:39] hazmat: No, it's a public IP unhooked [16:39] hazmat: But there will be [16:39] hazmat: Soon! [16:42] niemeyer: wtf is 5 revs behind.. [16:44] SpamapS: Will check, thanks [16:59] hazmat, hm, what parallel branch should we be working against? [16:59] wrtp: So admin-identity is showing the real thing now? [16:59] hazmat: +1 on the address.. -1 on env variable. [16:59] niemeyer: i'm not sure what you mean by "the real thing" there [17:00] wrtp: The CL [17:00] niemeyer: yeah [17:00] wrtp: pre-req issues [17:00] niemeyer: yes [17:01] niemeyer: there are a few other minor changes too (formatting, extra log messages) which i'm hoping you don't mind me bundling in the same CL [17:05] niemeyer: i'm a bit concerned that the live tests fail with relatively high frequency (~ 40% without actually measuring). it's due to transient errors from the EC2 servers. i'm wondering if we should make goamz/ec2 automatically retry when it gets one of those errors. [17:05] wrtp: Huh? [17:05] wrtp: I thought we had just addressed that? [17:05] niemeyer: that was different [17:06] niemeyer: that was dealing with eventual-consistency issues, not random server failure [17:06] wrtp: What's random server failure? [17:06] niemeyer: here are a few examples i've collected: http://paste.ubuntu.com/895342/ [17:07] niemeyer: i've seen all of those errors multiple times [17:07] niemeyer: actually, the first one i've only seen once. [17:07] wrtp: no instances found is not a random server error.. [17:07] wrtp: Seems like normal eventual consistency issues [17:07] niemeyer: yeah, that's different, sorry, it shouldn't have been there. 
[17:07] Ok, np
[17:08] wrtp: Yeah, those errors are awkward
[17:08] niemeyer: i *think* that the ec2 package is best placed to deal with them
[17:08] goamz/ec2, that is
[17:09] wrtp: I'm happy to have a pass at that after I get rid of my current assignments
[17:09] niemeyer: i could do it if you like
[17:09] wrtp: I've implemented something I'm comfortable with in that area before
[17:09] wrtp: Please leave that with me as I've done it before.. should hopefully not take long
[17:09] wrtp: I'll copy some code over
[17:09] niemeyer: cool, np.
[17:10] niemeyer: the "remote error: handshake failure" error comes from the crypto/tls package, BTW
[17:11] wrtp: I was guessing that..
[17:11] wrtp: It's probably the same "unexpected EOF" problem in a different client side location
[17:12] niemeyer: it doesn't look like it, actually.
[17:12] niemeyer: the "handshake failure" comes from an error code sent by the remote side.
[17:17] have we got a meeting now?
[17:17] wtf 20165 0.0 5.0 194760 25184 pts/2 Sl+ Mar20 0:49 python /home/wtf/ftests/build/juju/bin/juju status
[17:17] SpamapS: ^
[17:17] hazmat: ^
[17:17] That's why the wtf is locked up
[17:18] TheMue: and yeah, timeouts are good ;)
[17:18] wrtp: Indeedfully!
[17:19] niemeyer: :D
[17:19] timeouts are good, but go makes it easy to program them at the level they're needed, rather than baking them into every call...
[17:23] wrtp: It's great that no one is suggesting that we should bake them in every call then.
[17:23] niemeyer: cool.
[17:23] wrtp: It's also great that we seem to have found pretty good spots for our timeouts so far.
[17:23] niemeyer: just saying, before we start doing it :-)
[17:24] wrtp: Sure, just sayin' too..
[17:25] SpamapS, did you have any luck tracking down how to reproduce that destroy service issue?
[17:26] hazmat: it's on my todo after I get through the 500+ email backlog for today ;)
[17:26] hazmat: but it happened twice where destroy-service gave that 'no node' and then had to be run one more time.
[17:29] SpamapS: wtf is alive again
[17:30] niemeyer: thanks!
[17:30] SpamapS: np.. we should have some kind to kill/retry the tests
[17:30] Erm
[17:30] SpamapS: np.. we should have some kind of timeout to kill/retry the tests
[17:36] wrtp: LGTM on admin-identity, with a trivial only
[17:36] niemeyer: thanks
[17:41] wasn't this meeting supposed to start 40 minutes ago?
[17:42] (or 1h40m ago if you look at the calendar event...)
[17:42] wrtp: Yeah.. I'm happy to have it now
[17:43] TheMue, hazmat, fwereade_, bcsaller, jimbaker?
[17:43] Perfect timing :-)
[17:43] not william, it seems :-)
[17:43] heh
[17:44] William is probably a bit overloaded with meetings.. I think we spent something like 4h on G+ yesterday
[17:44] fwereade_: Didn't we? :)
[17:44] niemeyer, sorry, I think I missed something
[17:45] William is probably a bit overloaded with meetings.. I think we spent something like 4h on G+ yesterday
[17:49] wrtp: Ok, I think that meeting isn't flying :-)
[17:49] hmm, i just joined a hangout with fwereade_ and hazmat, but i don't think it was the team meeting :-)
[17:49] wrtp, fwereade_, TheMue: I'd like to take that chance to schedule a weekly Go port meeting
[17:49] niemeyer: looks like it
[17:50] niemeyer: i think we should definitely do that
[17:50] When is a good time for you? I guess my mornings are best?
[17:50] niemeyer: the usual meeting is too many people to get useful stuff discussed, i think
[17:50] TheMue, fwereade_?
[17:50] niemeyer: mornings definitely best, yes
[17:50] niemeyer, sorry, talking to hazmat
[17:51] fwereade_: No need to apologize.. I'm fine with you talking to hazmat.
[17:51] ;-)
[17:52] wrtp: What about.. Monday, at ..
[17:52] 14UTC?
[17:53] niemeyer: i'm going to be away for two mondays in a row soon (the 2nd and the 9th) if that makes a difference
[17:53] wrtp: Well, it certainly does
[17:54] but usually that would be fine, and the time's fine too
[17:54] wrtp: We can start with Tuesday, some time, and then move back to Monday once you're done with those
[17:54] s/some/same/
[17:54] niemeyer: i'm away the entire week of the 2nd and monday and tues the following week
[17:54] Ugh, ok :)
[17:54] niemeyer: (two public holidays around easter & i'm taking some other days too)
[17:55] Well, let's do it Monday then..
[17:55] Since you'll be off next week, no day will fit :)
[17:55] Next week => Week of the 2nd
[17:55] niemeyer: yeah
[17:55] niemeyer: but i'm around wed-fri on the week after that
[17:55] wrtp: Sure, we can move that week maybe, or have two meetings on the same week
[17:56] niemeyer: sounds good.
[17:56] I'll add it to the calendar
[17:56] niemeyer: i like the idea of a monday meeting in general
[17:56] wrtp: yeah, it's good to quick the week off in a good way
[17:56] s/quick/kick
[17:56] yeah
[17:58] wrtp, niemeyer: sorry... but week after next I'll be off mon/tue :(
[17:59] fwereade_: No worries.. people will be on/off casually.. we can adapt specific events
[17:59] niemeyer: monday is ok
[17:59] wrtp, niemeyer: in principle, though, ++monday
[17:59] niemeyer: could it be that tomb isn't tagged for the current weekly?
[18:01] fwereade_: was just looking at adding a juju destroy-environment command. is there any command that *doesn't* take a --environment flag?
[18:02] TheMue, fwereade_, wrtp: If I didn't screw up, you should have received an event notification at the adequate time
[18:02] wrtp, there are certainly mooted ones, that don't exist yet
[18:02] TheMue: Checking
[18:02] fwereade_: ok. just thought it could be a general argument that applied to all.
[18:02] fwereade_: what mooted commands might not talk to the juju environment, BTW?
[18:03] wrtp, I'm not sure if there are any that exist right now, but sadly it's not completely general
[18:03] wrtp, juju source to branch a charm, I think
[18:03] TheMue: You're right.. pushing
[18:04] niemeyer: great, thx
[18:04] fwereade_: hmm, i wouldn't mind if that command still accepted the --environment flag, but errored out in ParsePositional if so.
[18:05] wrtp, it's not appropriate for supercommand though -- jujud uses it too and that's never available there
[18:05] fwereade_: as it's definitely the exception rather than the rule
[18:05] wrtp, an embedded environment-arg-handling type would see a fair bit of use though
[18:06] fwereade_: yeah, i'll do something like that.
[18:06] wrtp, cool
[18:09] hazmat, improved status output LGTM
[18:09] off for a while, everyone, take care
[18:22] hazmat: Sorry for bothering, but is the change to drop the env var for the store coming?
[18:22] hazmat: Just want to make sure SpamapS has the right thing to publish, if an update is rolling
[18:23] fwereade_: have fun
[18:31] i'm off too now. see you tomorrow
[18:32] wrtp, fwereade_: Cheers²
[18:47] and i'm finishing for today too, bye
[19:09] niemeyer, was on another call, just checked in with SpamapS, the change will be in, working on it now
[19:13] hazmat: Thanks a lot
[19:13] hazmat: Synchronizing re. store with IS ATM
[20:12] SpamapS: do we have a package yet?
[20:24] flacoste: no, a few more commits trickling in
[20:26] SpamapS: 35 minutes man ;-)
[20:32] * niemeyer breaks down for a moment
[20:33] OH
[20:33] flacoste: I misunderstood the timing
[20:34] SpamapS: daylight savings confused you?
[20:34] no I thought it was tomorrow morning
[20:34] my bad
[20:36] someone just pointed out to me a memory leak in txzk
[20:37] might be in the bindings.. still investigating
[20:37] SpamapS: do you think you can still get it in?
[20:39] flacoste: working as fast as I can
[20:40] we are in good hands!
[20:52] test suite fails
[20:52] http://paste.ubuntu.com/895639/
[20:52] hazmat: ^^
[20:53] SpamapS, sigh.. so much for the cowboy
[20:53] SpamapS, one moment
[20:53] lol
[20:53] * SpamapS blows smoke off his six shooter and reloads
[20:55] SpamapS, done
[20:55] what was the deal?
[20:55] SpamapS, we changed the store.charms.ubuntu.com hardcoded url to an elastic ip address for now
[20:56] SpamapS, we'll change back b4 the rc once the store domain issues are resolved
[20:56] oh boy fun
[21:03] and more fails
[21:03] http://paste.ubuntu.com/895650/
[21:04] sorry guys this is a total no-go
[21:04] Even if I upload
[21:04] it will FTBFS
[21:29] niemeyer: i'm just trying to plan summer hols. i've got "platform rally" in my diary for june 25th to 29th, but there seems no indication of that on the wiki. is there anything happening at around that time or has it been cancelled?
[21:43] SpamapS, huh, I wasn't familiar with that acronym but I'd expected at least *one* of the Fs to mean what I thought
[21:46] fwereade_: Fail To Build From Source
[21:57] argh.. running full suite
[21:58] https://launchpadlibrarian.net/97931138/buildlog_ubuntu-natty-i386.juju_0.5%2Bbzr492-1juju3~natty1_FAILEDTOBUILD.txt.gz
[21:59] hazmat: is there some reason this test is slow juju.control.tests.test_upgrade_charm.RemoteUpgradeCharmTest
[22:00] test_latest_dry_run
[22:00] I mean
[22:00] perhaps something new to mock in twisted?
[22:01] SpamapS, yes it's.. still using the old domain address to mock out an external call, which doesn't match, so then it goes external to the actual ip, but that's not hooked up yet, so it hangs
[22:01] Ok
[22:01] so.. just a lot of last minute thrashing w/o running the test suite?
[22:02] yeah.. not in full
[22:02] one constant typed out everywhere
[22:02] Heh.. that'll teach ya
[22:03] so it was committed as a trivial..
[22:03] but it really wasn't :)
[22:06] ping me when there's a new commit, I'll re-try and re-upload.
Can you also document the whole thing in a bug (release team requests as much)
[22:06] It's not clear at all *why* we made that last minute change from the changelog.
[22:06] rogpeppe: I actually don't know what that looks like yet
[22:07] niemeyer: ok, thanks.
[23:36] niemeyer, so why are we changing the url to an ip address, if we need to change it back again as an SRU?
[23:37] niemeyer, ie. isn't it better to just fix the domain name?
[23:52] hazmat: Hmm.. I guess the ip address is a bad idea indeed, as it'll make it harder to switch in a bit.
[23:52] hazmat: I'm just trying to move things forward.. right now there's some contention going on in terms of getting the store moving
[23:55] hazmat: I'd prefer to use a domain, but apparently using a ubuntu.com domain in my account is an issue