[00:05] niemeyer, hmm.. so destroy-environment cleans out state and containers, but there is one caveat that would need to be addressed by a tear down, namely updating the lxc cache image, and perhaps cleaning out the apt-proxy cache
[00:06] the precise lxc ubuntu script now has this update-cache functionality incorporated
[00:14] <_mup_> juju/robust-zk-connect r415 committed by jim.baker@canonical.com
[00:14] <_mup_> Updated tests
[04:42] <_mup_> juju/robust-zk-connect r416 committed by jim.baker@canonical.com
[04:42] <_mup_> Docstrings, PEP8, PyFlakes
[13:35] Good morning all
[13:35] hi niemeyer
[13:35] Or good afternoon, to some :)
[13:37] yep. good morning to you.
[13:42] Testing
[13:42] g'morning
[13:44] niemeyer: good afternoon :-)
[14:34] jimbaker, could you send an email to the list describing the high-level cli changes for scp/ssh/num-units?
[14:35] niemeyer, you're still against the change in lp:~hazmat/juju/assume-local-ns-if-local-repo.. ie assume the local namespace if --repository is specified on the cli?
[14:37] fwereade, the visible instance stuff looks good, it's a +1 with the fix for the local provider
[14:37] hazmat, cool, let me take a quick look
[14:43] hazmat: Yeah
[14:44] niemeyer, it looks like the dns entry for the store is in.. so people are getting client hangs, is it possible to put up a static page there that will cause a more immediate fault?
[14:44] ie. if you forget to add a 'local:' the deploy just hangs
[14:44] hazmat: Yeah, should be possible
[14:51] niemeyer, so i still think it's rather confusing to have a --repository not imply local.. i think the best thing to avoid confusion is to just have resolve explicitly log which repo it's using, that will at least give the user some indication of what's going on
[14:53] * hazmat notes it's time to go to the dentist.. drill baby drill ;-)
[15:01] hazmat: I find it confusing in the other direction, and problematic for several reasons
[15:01] hazmat: --repository to me simply implies "look for charms here as well"
[15:02] hazmat: having it affect the resolution of urls sounds pretty surprising
[15:02] niemeyer, right.. but the common case for specifying --repository is to deploy a local charm.. not deploying from the cs.. the dep lookup algorithm goes by first found dep, so i don't see that algorithm being affected, it's just a matter of resolving the first item
[15:04] you're the taste master ;-) but i'd like to switch that branch out to log the repo being utilized, to avoid confusion in this case, as i think it's common
[15:04] and is a nice practice in general
[15:07] hazmat: --repository is called that because it enables any kind of repository
[15:07] hazmat: It's not the same as --local
[15:07] niemeyer, so the goal is that it could enable a private remote repo?
[15:08] hazmat: Yes, that's what we discussed
[15:08] currently that logic is hardcoded .... remote == 'store.juju."
[15:08] hazmat: implementation vs. public API
[15:08] niemeyer, does that imply a different namespace prefix? .. could the user switch the default lookup to a private repo?
[15:09] hazmat: The user can do whatever he pleases.. the command line option is unrelated to that
[15:09] hazmat: We can change the logic, but then let's rename the option as well
[15:10] hazmat: and change it globally, and _fail_ in case the charm isn't found locally anywhere
[15:10] niemeyer, i meant the user doing so sans code modification.. if they have a private repo that they want to use for all their charms.. an additional environments.yaml repo config setting seems appropriate
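
To make the logging compromise hazmat floats above concrete, here is a minimal Python sketch of a charm reference resolver that leaves the charm store as the default namespace but logs which repository each reference resolved against. This is illustrative only, not juju's actual resolve code; the function name, series default, and log wording are all assumptions.

    import logging

    log = logging.getLogger("juju.charm")

    def resolve(charm_ref, repo_path=None, default_series="precise"):
        """Qualify a charm reference and log where it will be looked up."""
        if ":" not in charm_ref:
            # Unqualified names still default to the charm store; the log
            # line below is what warns the user that a 'local:' prefix may
            # be missing when they passed --repository.
            charm_ref = "cs:%s/%s" % (default_series, charm_ref)
        if charm_ref.startswith("local:"):
            log.info("Resolved %s against local repository %s",
                     charm_ref, repo_path)
        else:
            log.info("Resolved %s against the charm store", charm_ref)
        return charm_ref

With a log line like this, the forgotten 'local:' prefix case at least produces an immediate hint in the output instead of a silent hang while the client looks for the charm in the store.
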
[15:10] hazmat: So for --local, help="Resolve all charm references to the given local repository"
[15:11] ic
[15:11] argh.. have to come back to this later.. late for my appt
[15:11] hazmat: "Enjoy" :)
[15:11] * hazmat lol
[15:11] probably not
[15:18] I've got a question about checking cryptographic signatures from 3rd party sources
[15:19] The third party source doesn't provide a hash to check against, so I've created a bzr repo in LP that has the proper checksum. The install hook branches this and performs the check - that way I can continually update the checksum without having to always update the charm
[15:19] Best practice for this? Or is this okay?
[15:34] m_3: the hadoop email reminds me about automated testing of charms
[15:34] know anything since the sprint about that?
[15:35] jcastro: nope, haven't been following the conversation on it
[15:58] marcoceppi: hey so, sorry to sound annoying, but Minecraft?
[15:59] Yes, it's in relation to minecraft :)
[15:59] My question is at least
[15:59] It's kind of the last thing I need to do
[15:59] ah ok
[16:00] hazmat: SpamapS: m_3: what do you guys think about marco's suggestion for checking the checksum?
[16:06] marcoceppi: sounds like this at least tells you if it's changed or not.. next best thing to a 3rd party hash. doesn't tell if you were using a compromised binary to begin with
[16:11] need to pop out for a bit, back later
[16:20] m_3: I know the binary I have isn't compromised, if that helps. I guess I'll make a plea to Mojang to include checksums
[16:23] marcoceppi: what you've done is a great workaround... seems like the same amount of work as if you added the hash directly into the charm, but either way works
[16:24] i.e., when a new binary comes out, you update the hash... either in your repo _or_ the charm
[16:25] Right, but the alternative I was looking at is that the charm won't have to be updated nearly as often
[16:25] I'll push up what I have now, then I think it'll be ready for review
[16:25] marcoceppi: in general, you're signing and saying that the upstream binary is clean... I'd keep asking Mojang for hashes
[16:26] yeah, I'll start bugging him on twitter
[16:35] Jorg
[16:35] whoops
[16:37] m_3: uh oh....saw the video....now people will know your face...you're so screwed
[16:37] lol
[16:39] hazmat, yes, i will send the emails describing these and other proposed cli changes
[16:42] robbiew: no way... didn't know it was out... ugh
[16:43] hey all - any way I can get "juju ssh" to use a unit's ip address rather than dns to attempt to connect?
[16:44] bloodearnest: juju ssh service_name/unit_number ?
[16:44] marcoceppi, that's what I used, but the dns on the unit instances is a .local, and resolution fails
[16:49] m_3: http://cloud.ubuntu.com/2011/11/hadoop-world-ubuntu-hadoop-and-juju/
[16:49] :D
[16:49] the command "juju ssh <ip>" works, but "juju ssh <unit>" fails with a dns error
[17:21] bloodearnest, that's odd.. which provider are you using?
[17:22] marcoceppi, i guess separating the checksum from the download is better than no checksum at all
[17:26] Don't want to be a downer, but Juju doesn't always get positive mentions.. http://www.youtube.com/watch?v=oebqlzblfyo&feature=youtu.be .. the whole thing is good, but juju pops up at 3:30 or so
[17:27] Luckily it's just a WTF, not a specific argument against it.
[17:34] SpamapS: This man is very angry
[17:34] this is awesome
[17:35] and VERY smart
[17:36] this guy did that SSD talk right?
[17:36] at the US velocity?
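
The externally hosted checksum scheme marcoceppi describes above boils down to fetching the binary and the expected hash from two independent locations and comparing digests at install time. A minimal Python 2 sketch (matching juju's vintage here), assuming the hash lives at a URL the charm can reach; both URLs below are placeholders, not the real Mojang or Launchpad locations:

    import hashlib
    import urllib2

    BINARY_URL = "https://example.com/minecraft_server.jar"       # placeholder
    CHECKSUM_URL = "https://example.com/minecraft_server.sha256"  # placeholder

    def verified_download():
        data = urllib2.urlopen(BINARY_URL).read()
        # Expected format: "<hex digest>  <filename>", as sha256sum emits.
        expected = urllib2.urlopen(CHECKSUM_URL).read().split()[0]
        actual = hashlib.sha256(data).hexdigest()
        if actual != expected:
            raise RuntimeError("checksum mismatch: got %s, expected %s"
                               % (actual, expected))
        return data

As m_3 notes, this only detects that the binary changed since the hash was recorded; it cannot prove the original binary was clean, which is why upstream-published hashes are still worth asking for.
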
[17:36] Dunno
[17:37] ah yeah, it is him
[17:37] AWS -- Is shit, * Openstack -- more shit
[18:01] hazmat, canonistack
[18:08] i'm off for the day, see y'all tomorrow.
[18:11] cya rog
[18:24] later all
[18:24] fwereade: Cheers!
[18:24] rog: Have a good evening rog
[18:25] * niemeyer on lbox propose now
[18:40] Is Launchpad read-only?
[18:42] Seems to be back..
[18:42] Seems to be down again.. something funky
[18:42] bzr: ERROR: Connection closed: Unexpected end of message. Please check connectivity and permissions, and report a bug if problems persist.
[18:43] hazmat: Btw, feel free to talk to IS if you'd like to have the dummy server deployed sooner rather than later
[18:54] jamespage: is the etherpad-lite charm still good to go? Or are you planning any surgery on it?
[18:54] I think it's still good - I'm not planning to do any more work on it
[18:54] until they next make a release
[18:54] * jcastro nods
[19:15] <_mup_> juju/local-repo-log-broken-charm r415 committed by kapil.thangavelu@canonical.com
[19:15] <_mup_> merge trunk
[19:21] hmm
[21:21] (looks like this didn't get sent) hazmat, i think it makes sense to add timeout support in making connections more robust. but maybe this should be done in two phases: robust (but consequently never times out!); then add a --timeout option to juju (or maybe better --timeout-zk)
[21:21] the advantage of doing this vs an external timeout is we can time out just the connection-to-zk setup
[22:18] jimbaker, agreed in principle... we already have a timeout, but there's a post-host-up, pre-timeout error afaicr as well that needs handling
[22:29] hazmat, sounds good
[22:29] jimbaker, afaics the end behavior is the same, not sure i understand the distinction with an external timeout
[22:30] hazmat, the difference is that one may be inadvertently interrupting some ZK modification
[22:31] i suppose the cli could be more nuanced in terms of handling signals however
[22:32] jimbaker, there is a double wait: connection to zk, and zk initialized, and then cli done; the cli can be interrupted at any point though
[22:33] hazmat, correct. i'm only referring to timing out the waiting up to the point of checking for /initialized
[22:33] in actual usage, any subsequent waiting is minimal of course
[22:34] hmm.. well it could be substantial.. and there is a wait timeout firing race against initialized.. oh.. right, i forget, it's not bootstrap waiting.. which simplifies this considerably, it's everything else.
[22:34] hazmat, in this model, bootstrap doesn't wait, and in practice it actually works well
[22:36] hazmat, right now i'm just hunting down why we are seeing tx connection timeouts leaking out of the retry connect loop (no errback set up on them of course...)
[22:38] we see them in http://wtf.labix.org/413/ec2-wordpress.out for example ("Unhandled error in Deferred")
[22:38] one scenario i was thinking of in the context of the local provider: an odd behavior is that there is a long async op for the debootstrap of the master template and first unit; it might be nice to do that directly in bootstrap, so the user has feedback when it's done and deploys/add-units are fairly normalized in times.. but that's orthogonal i guess.
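
The two-phase plan jimbaker sketches above (a connect loop that retries indefinitely, plus an optional timeout scoped to just the ZK connection setup) can be expressed with a standard Twisted pattern: schedule a cancel on the connect deferred and disarm it once the deferred fires. This is a hedged sketch, not juju's code; connect_to_zk is a stand-in for whatever returns the connection deferred.

    from twisted.internet import reactor

    def timeout_deferred(deferred, seconds, reactor=reactor):
        """Errback `deferred` (via cancel) if it hasn't fired in `seconds`."""
        delayed_call = reactor.callLater(seconds, deferred.cancel)

        def disarm(result):
            # The deferred fired first; stop the pending cancel.
            if delayed_call.active():
                delayed_call.cancel()
            return result

        deferred.addBoth(disarm)
        return deferred

    # Usage sketch: time out just the ZK connection setup, as discussed.
    # d = timeout_deferred(connect_to_zk(), seconds=30)

Scoping the timeout to connection setup means its firing cannot interrupt a ZK modification already in flight, which is the hazard jimbaker raises above about an external timeout.
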
[22:39] hazmat, my feeling is that the normal thing to do in this case is juju bootstrap && juju status, and that gets you what you want
[22:39] maybe a hypothetical juju --waitfor bootstrap is just that, i don't know
[22:40] hazmat, actually in that case, it's really juju bootstrap && some activity && poll on juju status
[22:41] jimbaker, that timeout exception from txzookeeper is odd, afaics that's caught by sshclient.connect
[22:42] hazmat, i know, i've traced it through and i believe that's the case, except for the fact that it appears like that on stderr ;)
[22:42] jimbaker, indeed, reality is rather contrary in this case ;-)
[22:47] jimbaker, it looks like _cb_tunnel_established is the one raising the error, but it looks like that has an errback
[22:50] hazmat, yes, all the deferreds have errbacks. i do wonder about the chain_deferred variant however
[23:06] jimbaker, me too
[23:08] hazmat, brb
[23:08] jimbaker, it consumes the error (ie handles it) and propagates the error down the client connect deferred which is yielded on
[23:08] it looks right..
[23:11] So, config-changed. It gets run one or multiple times depending on how many config options there are in the install?
[23:14] The problem I have: a service needs to be restarted each time a configuration option is updated, but it's pretty slow to restart.
[23:23] marcoceppi, config-changed gets run once prior to start, and then once per time the config is changed
[23:24] marcoceppi, multiple values can be set in a single command line, ie juju set a=b x=z
[23:26] hazmat: so when you do juju set for multiple values, is config-changed run each time for those updated keys? or just once per set?
[23:37] marcoceppi, the config-changed hook may run up to however many times juju set is called (regardless of how many values are in a single set), but the guarantee is that it will be called at least once with the latest config values
[23:38] marcoceppi, short answer to the question: once per multi-value set
[23:38] hazmat: that's what I needed to know, thanks!
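
Given those semantics (at least one config-changed run with the latest values, possibly fewer runs than juju set invocations), a slow-to-restart service can avoid redundant restarts by only restarting when the rendered configuration actually changed. A hedged Python sketch of such a hook follows; the config keys, file path, and service name are placeholders, and it assumes the config-get hook tool is available on the hook's PATH:

    #!/usr/bin/env python
    # config-changed: restart the service only if its config really changed.
    import os
    import subprocess

    CONFIG_PATH = "/etc/myservice/server.properties"  # placeholder
    KEYS = ["port", "motd", "max-players"]            # placeholder options

    def config_get(key):
        # config-get is juju's hook tool for reading charm config values.
        return subprocess.check_output(["config-get", key]).strip()

    def main():
        new = "".join("%s=%s\n" % (k, config_get(k)) for k in KEYS)
        old = open(CONFIG_PATH).read() if os.path.exists(CONFIG_PATH) else None
        if new == old:
            return  # nothing relevant changed; skip the slow restart
        with open(CONFIG_PATH, "w") as f:
            f.write(new)
        subprocess.check_call(["service", "myservice", "restart"])

    if __name__ == "__main__":
        main()

Because juju may coalesce several juju set calls into one hook run, comparing the rendered file rather than counting hook invocations is the safer trigger for the expensive restart.
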