hazmat | niemeyer, hmm.. so destroy-environment cleans out state and containers, but there is one caveat that would need to be addressed by a tear down, namely updating the lxc cache image, and perhaps cleaning out the apt-proxy cache | 00:05 |
hazmat | the precise lxc ubuntu script now has this update-cache functionality incorporated | 00:06 |
_mup_ | juju/robust-zk-connect r415 committed by jim.baker@canonical.com | 00:14 |
_mup_ | Updated tests | 00:14 |
_mup_ | juju/robust-zk-connect r416 committed by jim.baker@canonical.com | 04:42 |
_mup_ | Docstrings, PEP8, PyFlakes | 04:42 |
niemeyer | Good morning all | 13:35 |
raphink | hi niemeyer | 13:35 |
niemeyer | Or good afternoon, to some :) | 13:35 |
mpl | yep. good morning to you. | 13:37 |
hazmat | Testing | 13:42 |
hazmat | g'morning | 13:42 |
rog | niemeyer: good afternoon :-) | 13:44 |
hazmat | jimbaker, could you send an email to the list describing the high-level cli changes for scp/ssh/num-units | 14:34 |
hazmat | niemeyer, you're still against the change in lp:~hazmat/juju/assume-local-ns-if-local-repo.. ie assume local ns if --repository is specified on the cli? | 14:35 |
hazmat | fwereade, the visible instance stuff looks good, it's a +1 with the fix for local provider | 14:37 |
fwereade | hazmat, cool, let me take a quick look | 14:37 |
niemeyer | hazmat: Yeah | 14:43 |
hazmat | niemeyer, it looks like the dns entry for the store is in.. so people are getting client hangs, is it possible to put up a static page there that will cause a more immediate fault? | 14:44 |
hazmat | ie. if you forget to add a 'local:' the deploy just hangs | 14:44 |
niemeyer | hazmat: Yeah, should be possible | 14:44 |
hazmat | niemeyer, so i still think it's rather confusing to have --repository not imply local.. i think the best thing to avoid confusion is to just have resolve explicitly log which repo it's using, that will at least give the user some indication of what's going on | 14:51 |
* hazmat notes it's time to go to the dentist.. drill baby drill ;-) | 14:53 | |
niemeyer | hazmat: I find it confusing in the other direction, and problematic for several reasons | 15:01 |
niemeyer | hazmat: --repository to me simply implies "look for charms here as well" | 15:01 |
niemeyer | hazmat: having it affecting the resolution of urls sounds pretty surprising | 15:02 |
hazmat | niemeyer, right.. but the common case for specifying --repository is to deploy a local charm.. not deploying from the cs.. the dep lookup algorithm goes by first found dep, so i don't see that algorithm being affected, it's just a matter of resolving the first item | 15:02 |
hazmat | you're the taste master ;-) but i'd like to switch that branch to just log which repo is being used, to avoid confusion in this case, as i think it's common | 15:04 |
hazmat | and is a nice practice in general | 15:04 |
niemeyer | hazmat: --repository is called like that because it enables any kind of repository | 15:07 |
niemeyer | hazmat: It's not the same as --local | 15:07 |
hazmat | niemeyer, so the goal is that it could enable a private remote repo? | 15:07 |
niemeyer | hazmat: Yes, that's what we discussed | 15:08 |
hazmat | currently that logic is hardcoded .... remote == 'store.juju." | 15:08 |
niemeyer | hazmat: implementation vs. public API | 15:08 |
hazmat | niemeyer, does that imply a different namespace prefix? .. could the user switch the default lookup to a private repo? | 15:08 |
niemeyer | hazmat: The user can do whatever he pleases.. the command line option is unrelated to that | 15:09 |
niemeyer | hazmat: We can change the logic, but then let's rename the option as well | 15:09 |
niemeyer | hazmat: and change it globally, and _fail_ in case the charm isn't found locally anywhere | 15:10 |
hazmat | niemeyer, i meant the user doing so sans code modification.. if they have a private repo, that they want to use for all their charms.. an additional environments.yaml repo config setting seems appropriate | 15:10 |
niemeyer | hazmat: So for --local, help="Resolve all charm references to the given local repository" | 15:10 |
hazmat | ic | 15:11 |
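To make the distinction being argued concrete: --repository only tells juju where a local repository lives, while the charm URL itself still decides the namespace, so an unprefixed name goes to the store (and, with the store DNS entry currently pointing at nothing, just hangs as described above). A minimal sketch of the two forms, with the charm name and series purely illustrative:

```sh
# Resolves "mysql" against the charm store; --repository is not consulted,
# and with the store endpoint unreachable this invocation currently hangs.
juju deploy --repository=~/charms mysql

# The local: prefix is what switches resolution to the local repository,
# i.e. the charm at ~/charms/oneiric/mysql.
juju deploy --repository=~/charms local:oneiric/mysql
```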
hazmat | argh.. have to come back to this later.. late for my appt | 15:11 |
niemeyer | hazmat: "Enjoy" :) | 15:11 |
* hazmat lol | 15:11 | |
hazmat | probably not | 15:11 |
marcoceppi | I've got a question about checking cryptographic signatures from 3rd party sources | 15:18 |
marcoceppi | The third party source doesn't provide a hash to check against, so I've created a bzr repo in LP that has the proper checksum. The install hook branches this and performs the check - that way I can continually update the checksum without having to always update the charm | 15:19 |
marcoceppi | Best practice for this? Or is this okay? | 15:19 |
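A rough sketch of the install-hook check being described, assuming a hypothetical checksum branch and download URL (the branch path, checksum file name, and URL below are placeholders, not the actual charm's):

```sh
#!/bin/sh
set -e

# Pull the current checksum from a separately maintained bzr branch, so the
# checksum can be bumped without re-uploading the charm itself.
bzr branch lp:~example/+junk/minecraft-checksums /tmp/checksums
expected=$(cat /tmp/checksums/SHA256SUM)

# Fetch the third-party binary and verify it before installing anything.
wget -O /tmp/minecraft_server.jar https://example.com/minecraft_server.jar
actual=$(sha256sum /tmp/minecraft_server.jar | awk '{print $1}')

if [ "$actual" != "$expected" ]; then
    echo "checksum mismatch for minecraft_server.jar" >&2
    exit 1
fi

install -D -m 0644 /tmp/minecraft_server.jar /opt/minecraft/minecraft_server.jar
```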
jcastro | m_3: the hadoop email reminds me about automated testing of charms | 15:34 |
jcastro | know anything since the sprint about that? | 15:34 |
m_3 | jcastro: nope, haven't been following the conversation on it | 15:35 |
jcastro | marcoceppi: hey so, sorry to sound annoying, but Minecraft? | 15:58 |
marcoceppi | Yes, it's in relation to minecraft :) | 15:59 |
marcoceppi | My question is at least | 15:59 |
marcoceppi | It's kind of the last thing I need to do | 15:59 |
jcastro | ah ok | 15:59 |
jcastro | hazmat: SpamapS: m_3: what do you guys think about marco's suggestion for checking the checksum? | 16:00 |
m_3 | marcoceppi: sounds like this at least tells you if it's changed or not.. next best thing to 3rd party hash. doesn't tell if you were using a compromised binary to begin with | 16:06 |
fwereade | need to pop out for a bit, back later | 16:11 |
marcoceppi | m_3: I know the binary I have isn't compromised, if that helps. I guess I'll make a plea to Mojang to include checksums | 16:20 |
m_3 | marcoceppi: what you've done is a great workaround... seems like the same amount of work as if you added the hash directly into the charm, but either way works | 16:23 |
m_3 | i.e., when new binary comes out, you update hash... either in your repo _or_ the charm | 16:24 |
marcoceppi | Right, but with the alternative I was looking at, the charm won't have to be updated nearly as often | 16:25 |
marcoceppi | I'll push up what I have now, then I think it'll be ready for review | 16:25 |
m_3 | marcoceppi: in general, you're signing and saying that the upstream binary is clean... I'd keep asking Mojang for hashes | 16:25 |
marcoceppi | yeah, I'll start bugging him on twitter | 16:26 |
jcastro | Jorg | 16:35 |
jcastro | whoops | 16:35 |
robbiew | m_3: uh oh....saw the video....now people will know your face...you're so screwed | 16:37 |
robbiew | lol | 16:37 |
jimbaker | hazmat, yes, i will send the emails describing these and other proposed cli changes | 16:39 |
m_3 | robbiew: no way... didn't know it was out... ugh | 16:42 |
bloodearnest | hey all - any way I can get "juju ssh" to use a unit's ip address rather than dns to attempt to connect? | 16:43 |
marcoceppi | bloodearnest: juju ssh service_name/unit_number ? | 16:44 |
bloodearnest | marcoceppi, that's what I used, but the dns on the unit instances is a .local, and resolution fails | 16:44 |
robbiew | m_3: http://cloud.ubuntu.com/2011/11/hadoop-world-ubuntu-hadoop-and-juju/ | 16:49 |
robbiew | :D | 16:49 |
bloodearnest | the command "juju ssh <n>" works, but "juju ssh <service/unit>" fails with a dns error | 16:49 |
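A possible stop-gap, since the machine-number form does work: read the unit's machine out of juju status and connect with that instead (the service name below is just an example; the machine number has to be taken from the status output):

```sh
juju status     # note the "machine: N" line under the unit, e.g. wordpress/0
juju ssh 3      # use that machine number instead of the service/unit name
```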
hazmat | bloodearnest, that's odd.. which provider are you using? | 17:21 |
hazmat | marcoceppi, i guess separating the checksum from the download is better than no checksum at all | 17:22 |
SpamapS | Don't want to be a downer, but Juju doesn't always get positive mentions.. http://www.youtube.com/watch?v=oebqlzblfyo&feature=youtu.be .. whole thing is good, but juju pops up at 3:30 or so | 17:26 |
SpamapS | Luckily it's just a WTF, not a specific argument against it. | 17:27 |
marcoceppi | SpamapS: This man is very angry | 17:34 |
jcastro | this is awesome | 17:34 |
SpamapS | and VERY smart | 17:35 |
jcastro | this guy did that SSD talk right? | 17:36 |
jcastro | at the US velocity? | 17:36 |
SpamapS | Dunno | 17:36 |
jcastro | ah yeah, it is him | 17:37 |
SpamapS | AWS -- Is shit, * Openstack -- more shit | 17:37 |
bloodearnest | hazmat, canonistack | 18:01 |
rog | i'm off for the day, see y'all tomorrow. | 18:08 |
fwereade | cya rog | 18:11 |
fwereade | later all | 18:24 |
niemeyer | fwereade: Cheers! | 18:24 |
niemeyer | rog: Have a good evening rog | 18:24 |
* niemeyer on lbox propose now | 18:25 | |
niemeyer | Is Launchpad read-only? | 18:40 |
niemeyer | Seems to be back.. | 18:42 |
niemeyer | Seems to be down again.. something funky | 18:42 |
niemeyer | bzr: ERROR: Connection closed: Unexpected end of message. Please check connectivity and permissions, and report a bug if problems persist. | 18:42 |
niemeyer | hazmat: Btw, feel free to talk to IS if you'd like to have dummy server deployed sooner rather than later | 18:43 |
jcastro | jamespage: is the etherpad-lite charm still good to go? Or are you planning any surgery on it? | 18:54 |
jamespage | I think it's still good - I'm not planning to do any more work on it | 18:54 |
jamespage | until they next make a release | 18:54 |
* jcastro nods | 18:54 | |
_mup_ | juju/local-repo-log-broken-charm r415 committed by kapil.thangavelu@canonical.com | 19:15 |
_mup_ | merge trunk | 19:15 |
hazmat | hmm | 19:21 |
jimbaker | (looks like this didn't get sent) hazmat, i think it makes sense to add timeout support when making more robust connections. but maybe this should be done in two phases: robust (but consequently never times out!); then add a --timeout option to juju (or maybe better --timeout-zk) | 21:21 |
jimbaker | the advantage of doing this vs an external timeout is that we can time out just the connection-to-zk setup | 21:21 |
hazmat | jimbaker, agreed in principle... we already have a timeout, but there's a post host up, pre timeout error afaicr as well that needs handling | 22:18 |
jimbaker | hazmat, sounds good | 22:29 |
hazmat | jimbaker, afaics the end behavior is the same, not sure i understand the distinction with an external timeout | 22:29 |
jimbaker | hazmat, the difference is that an external timeout could be inadvertently interrupting some ZK modification | 22:30 |
jimbaker | i suppose the cli could be more nuanced in terms of handling signals however | 22:31 |
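For contrast, the external-timeout alternative being compared against is just wrapping the whole command, e.g. with coreutils timeout(1). It bounds the wait, but it kills the process wherever it happens to be, including mid-ZK-modification, which is exactly the nuance an internal --timeout on the connect phase would avoid (the charm URL below is illustrative):

```sh
# Blunt external timeout: SIGTERM the entire juju invocation after 120s,
# whether it is still waiting on the ZK connection or already making changes.
timeout 120 juju deploy local:oneiric/wordpress
```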
hazmat | jimbaker, there is a double wait, connection to zk, and zk initialized, and then cli done, the cli can be interrupted at any point though | 22:32 |
jimbaker | hazmat, correct. i'm only referring to timing out the waiting up to the point of checking for /initialized | 22:33 |
jimbaker | in actual usage, any subsequent waiting is minimal of course | 22:33 |
hazmat | hmm.. well it could be substantial.. and there is a wait timeout firing race against initialized.. oh.. right i forgot, it's not bootstrap waiting.. which simplifies this considerably, it's everything else. | 22:34 |
jimbaker | hazmat, in this model, bootstrap doesn't wait, and in practice it actually works well | 22:34 |
jimbaker | hazmat, right now i'm just hunting down why we are seeing tx connection timeouts leaking out of the retry connect loop (no errback setup on them of course...) | 22:36 |
jimbaker | we see them in http://wtf.labix.org/413/ec2-wordpress.out for example ("Unhandled error in Deferred") | 22:38 |
hazmat | one scenario i was thinking of in the context of the local provider: an odd behavior is that there is a long async op for the debootstrap of the master template and first unit, it might be nice to do that directly in bootstrap, so the user has feedback when it's done and deploys/add-units are fairly normalized in time.. but that's orthogonal i guess. | 22:38 |
jimbaker | hazmat, my feeling is that the normal thing to do in this case is juju bootstrap && juju status, and that gets what you want | 22:39 |
jimbaker | maybe a hypothetical juju --waitfor bootstrap is just that, i don't know | 22:39 |
jimbaker | hazmat, actually in that case, it's really juju bootstrap && some activity && poll on juju status | 22:40 |
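That pattern spelled out, as a rough sketch; it assumes juju status errors out (exits non-zero) while the environment is still coming up, which is exactly the timeout behaviour under discussion:

```sh
juju bootstrap

# Poll until the state server answers a status query, then carry on with
# whatever the follow-up activity is.
until juju status >/dev/null 2>&1; do
    echo "waiting for environment..." >&2
    sleep 10
done
juju deploy local:oneiric/wordpress    # illustrative follow-up
```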
hazmat | jimbaker, that timeout exception from txzookeeper is odd, afaics that's caught by sshclient.connect | 22:41 |
jimbaker | hazmat, i know, i've traced it through and i believe that's the case, except for the fact that it appears like that on stderr ;) | 22:42 |
hazmat | jimbaker, indeed, reality is rather contrary in this case ;-) | 22:42 |
hazmat | jimbaker, it looks like _cb_tunnel_established is the one raising the error, but it looks like that has an errback | 22:47 |
jimbaker | hazmat, yes, all the deferreds have errbacks. i do wonder about the chain_deferred variant however | 22:50 |
hazmat | jimbaker, me too | 23:06 |
jimbaker | hazmat, brb | 23:08 |
hazmat | jimbaker, it consumes the error (ie handles it) and propagates the error down the client connect deferred which is yielded on | 23:08 |
hazmat | it looks right.. | 23:08 |
marcoceppi | So, config-changed. It gets run one or multiple times depending on how many config options there are in the install? | 23:11 |
marcoceppi | The problem I have is that a service needs to be restarted each time a configuration option is updated, but it's pretty slow to restart. | 23:14 |
hazmat | marcoceppi, config-changed gets run once prior to start, and then once each time the config is changed | 23:23 |
hazmat | marcoceppi, multiple values can be set in a single command line, ie juju set a=b x=z | 23:24 |
marcoceppi | hazmat: so when you do juju set for multiple values, is config-changed run each time for those updated keys? or just once per set? | 23:26 |
hazmat | marcoceppi, the config-hook may run up to however many times the juju set is called (regardless of how many values in a single set), but the guarantee is that it will be called at least once with the latest config values | 23:37 |
hazmat | marcoceppi, short answer to the question, once per multi-value set | 23:38 |
marcoceppi | hazmat that's what I needed to know, thanks! | 23:38 |
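Putting the two answers together for the slow-restart case: batch the settings into a single juju set so at most one extra hook run happens, and have config-changed itself skip the restart when nothing it cares about actually changed. A minimal sketch, where render_config, the config key names, and the minecraft service name are illustrative rather than taken from the real charm:

```sh
#!/bin/sh
# config-changed: only bounce the slow-to-restart service when the rendered
# configuration actually differs from what is already on disk.
set -e

port=$(config-get server-port)
motd=$(config-get motd)

render_config "$port" "$motd" > /etc/minecraft/server.properties.new

if cmp -s /etc/minecraft/server.properties.new /etc/minecraft/server.properties; then
    rm /etc/minecraft/server.properties.new     # nothing changed, no restart
else
    mv /etc/minecraft/server.properties.new /etc/minecraft/server.properties
    service minecraft restart
fi
```

On the client side, something like `juju set minecraft server-port=25565 motd=welcome` keeps it to a single config-changed invocation.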
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!