=== axw_ is now known as axw
=== mwhudson is now known as zz_mwhudson
=== zz_mwhudson is now known as mwhudson
=== mwhudson is now known as zz_mwhudson
=== axw_ is now known as axw
=== timrc is now known as timrc-afk
=== zz_mwhudson is now known as mwhudson
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
=== frobware is now known as zz_frobware
=== zz_frobware is now known as frobware
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
[09:24] hi there, I'm encountering issues during 'juju bootstrap' on canonistack. I noticed that two security groups are created at that time, one of them being empty (according to 'nova secgroup-list-rules juju-cs-0'), is that expected?
[09:25] specifically, juju bootstrap says: 'Attempting to connect to 10.55.32.3:22' and sits there until a 10m timeout expires
[09:29] mgz: ping ^ I have a gut feeling it's related to ssh keys and chinstrap but I can't find the best way to debug that :-/
[09:30] juju debug-log says the connection times out too so I'm blind
[09:37] and FTR, juju bootstrap fails with: '2014-02-11 09:35:13 ERROR juju.provider.common bootstrap.go:140 bootstrap failed: waited for 10m0s without being able to connect: Received disconnect from UNKNOWN: 2: Too many authentication failures for ubuntu'
=== freeflying is now known as freeflying_away
=== mwhudson is now known as zz_mwhudson
[11:06] axw_: I think I may be bitten by https://bugs.launchpad.net/juju-core/+bug/1275657 in the above ^ can you help me verify that?
[11:06] <_mup_> Bug #1275657: r2286 breaks bootstrap with authorized-keys in env.yaml
[11:07] I'm using 1.17.2 from a saucy client by the way
[11:08] ha damn, wrong TZ to reach axw_ :-/
[11:08] mgz: not around?
[11:09] hey vila, am now, was just in standup
[11:09] mgz: cool ;)
[11:09] what juju version are you using?
[11:10] and can you actually route to that 10. address?
[11:10] mgz: 1.17.2 according to --show-log
[11:11] mgz: depends on the routes, let me recheck, nova list always shows the instance for one
[11:11] mgz: ha, just got the timeout, so from memory:
[11:11] bootstrap on trunk requires ssh, which requires working forwarding
[11:11] I could reach it via chinstrap at one point
[11:12] mgz: it was working this weekend with an already bootstrapped node (so the routing was working)
[11:12] you probably need to set authorized-keys-path to your canonical ssh key that chinstrap accepts
[11:12] at that point that is, then I wanted to restart from a clean env, destroy-env
[11:12] mgz: that would make sense according to the bug, but why did it work before?
[11:13] oh, and when in doubt, delete ~/.juju/environments/*.jenv files
[11:13] yeah, delete that, nova delete, even swift delete at one point
[11:14] trying again with my canonistack key in authorized-keys-path
[11:14] mgz: and about that empty secgroup?
[11:15] the per-machine ones are empty until a charm sets some ports and `juju expose` is run
[11:15] mgz: ha great, makes sense, thanks
[11:16] hard to guess though ;) But makes perfect sense
[11:18] ok, bootstrap running, instance up, can't connect via ssh even using -i with my canonical key (too early maybe, re-trying)
[11:19] mgz: using ssh -v, getting the host key (so the ssh server is up, right?), all my keys are attempted, none work
[11:20] hmpf
[11:21] mgz: and what's the config key to reduce that 10min timeout?
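A shell sketch of the clean-slate retry mgz describes above (delete the .jenv, nova delete, swift delete); the environment name and the instance/bucket names are placeholders, not taken from the log:

    # Remove the stale client-side state for the environment, then clear
    # any leftover nova instances and swift storage before retrying.
    rm -f ~/.juju/environments/canonistack.jenv
    nova list                        # spot leftover juju instances
    nova delete <instance-id>        # placeholder id
    swift list                       # spot the environment's bucket
    swift delete <bucket-name>       # placeholder name
    juju bootstrap --show-log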
[11:21] you can just interrupt, no?
[11:22] hmm, looks like it's ok now, I remembered going into a weird state where I had to clean up everything (that's how I discovered the swift delete...)
[11:23] mgz: slight doubt, I had to generate a pub version of my canonistack key with 'ssh-keygen -y', is that the right command?
[11:24] why?
[11:24] that's not right.
[11:24] chinstrap knows nothing about any canonistack keys, you need one that chinstrap allows
[11:27] mgz: hold on,
[11:28] mgz: I need the pub one for authorized-keys-path: ~/.canonistack/vila.key.pub in environments.yaml, and I need the private one in .ssh/config for Host 10.55.60.* 10.55.32.*
[11:28] mgz: chinstrap itself is ok, I can ssh to it
[11:29] mgz: probably from my launchpad key known by my ssh agent (yup, confirmed)
[11:30] right, just use the launchpad key
[11:30] in juju config
[11:33] * vila blinks
[11:34] mgz: not sure that trick will work for our other use cases, but let's see if it works for me first
[11:37] mgz: nope, same behavior
[11:38] mgz: any way to set 'ssh -vvv' for juju attempts?
[11:38] if you poke the source probably
[11:39] damn it, scratch that last attempt, the .jenv is not updated when I modify my envs.yaml!
[11:39] no
[11:39] delete it after failed bootstraps :)
[11:41] mgz: ok, so I'm in with.. and it's proceeding
[11:42] phew, at least I'm behind that wall
[11:42] mgz: thanks, a few questions then
[11:42] mgz: using the same key everywhere won't match all my use cases, was it just for debug or is it a hard requirement or.. why? ;)
[11:43] because we go through chinstrap, you have to give juju a key that chinstrap will accept
[11:44] that's all really
[11:44] mgz: hold on, juju relies on ~/.ssh/config right, so whatever I say there is not juju's concern
[11:45] mgz: I mean, juju doesn't know it has to go thru chinstrap
[11:45] no, but it is supplying a given key, that needs to be accepted by the bouncer
[11:45] mgz: and now it's juju destroy-env that is unhappy :-(
[11:46] apparently that overrides the one in the config block for the forwarding agent
[11:47] oh wait, my ssh/config still specifies my canonistack key so it seems that bootstrap and destroy-env use different tricks 8-/
[11:49] mgz: and juju debug-log has the same issue (.ssh/config for the chinstrap bouncer fixed to use the lp key)
[11:51] mgz: so connecting with ssh to the state server works, authorized keys there says juju-client-key, vila@launchpad, juju-system-key (all prefixed with Juju:)
[11:51] but destroy-env and debug-log hang
[11:51] destroy-env doesn't use ssh at all
[11:52] you just want the sshuttle tunnel up for that
[11:52] rhaaaaaaaaa
[11:52] so it can talk over the juju api
[11:52] always sshuttle after successful bootstrap, damn it
[11:52] having fun yet? :)
[11:53] mgz: hehe, yeah, we're automating stuff but I seemed to be the only one who couldn't bootstrap anymore
[11:53] you have too many keys :)
[11:53] which kind of limits the fun ;)
[11:53] mgz: I was taught this was a good thing?
[11:54] mgz: and I even got a *new* one when introduced to canonistack...
[11:54] yeah, I don't use that
[11:54] just give juju my one that works on chinstrap
[11:54] (well... and set agent forwarding so my *other* key can be used to talk to launchpad so bzr works... so it's not all simple)
[11:55] mgz: so if I want to allow several keys on the juju instance, I need to switch from authorized-keys-path to authorized-keys and generate the proper content, right?
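Pulling together the configuration the conversation converges on, as a hedged sketch: juju is handed a public key the chinstrap bouncer accepts (assumed here to be the Launchpad key), ~/.ssh/config offers the matching private key for the instances' 10.55.x addresses, and an sshuttle tunnel is brought up after bootstrap so API-based commands work. The chinstrap hostname, the ProxyCommand, the key paths, and the subnet are all assumptions; the log only establishes that connections go through chinstrap:

    # ~/.juju/environments.yaml (excerpt)
    canonistack:
      type: openstack
      authorized-keys-path: ~/.ssh/id_rsa.pub   # a key chinstrap accepts

    # ~/.ssh/config (excerpt) -- reach the private addresses through the
    # chinstrap bouncer, offering the same key
    Host 10.55.60.* 10.55.32.*
        ProxyCommand ssh -q chinstrap.canonical.com nc %h %p
        IdentityFile ~/.ssh/id_rsa

    # after a successful bootstrap, bring up the tunnel so destroy-env,
    # debug-log, etc. can reach the juju API
    sshuttle -r chinstrap.canonical.com 10.55.0.0/16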
[11:57] just using one really is best
[11:58] but yeah, you can supply multiple
[11:58] ha good, I need to check which one I need exactly but at some point I have to create a nova instance and tell it which key to use, all was fine until now :-}
[11:59] mgz: anyway, I'm unblocked for now, thanks for the help!
[11:59] :)
[13:36] mgz: I'm reading the lp:juju-core log, how should I interpret "Increment juju to 1.17.3." as opening .3 or releasing it? (I suspect opening, so later revisions will be part of 1.17.3, is that right?)
[13:41] vila: opening
[13:41] yes!
[13:44] mgz: So I think my issue may have been fixed by revno 2293 but introduced by revno 2286, I'll live with a single ssh key until 1.17.3 is released and re-test then
[13:44] mgz: what's the best time to chat with Andrew Wilkins?
=== timrc-afk is now known as timrc
=== rogpeppe1 is now known as rogpeppe
=== freeflying_away is now known as freeflying
[14:53] hey fellas, so what's the TL;DR on HPCC
[14:54] looks like Xiaoming has done the fixes the last review asked for
[14:55] jcastro: looks like it was added back yesterday
[14:56] jcastro: they're still not addressing the validation portion. It will only validate if you provide it a sha1sum via cfg, so if you don't do that it falls back to just downloading packages
[14:56] I can tell now it's not going to pass for charm store inclusion
[15:02] marcoceppi: you doing the next review?
[15:02] lazyPower: I could
[15:02] I bet XaoMing is tired of reading my review text by now ;)
[15:03] *Xiaoming
[15:03] I'll give it a go in a bit
[15:12] working on adding tests to the gunicorn charm
[15:13] I have unit tests in ./tests
[15:13] I want to add some functional tests before proposing
[15:13] should I use amulet, or 'charm test', or some combination?
[15:14] and should I put the unit tests somewhere else?
=== freeflying is now known as freeflying_away
[15:24] charm test is just a test runner
[15:25] bloodearnest: CHARM_DIR/tests is reserved for functional tests
[15:25] marcoceppi: ok, will move it
[15:25] unit tests should either go in hooks or elsewhere
[15:29] marcoceppi: so can I write a test in CHARM_DIR/tests using amulet that will be run by 'charm test'?
[15:31] bloodearnest: correct
[15:32] marcoceppi: sweet. Do you know of any charms that have such tests I can use as an example?
[15:32] you can write tests in any language, but we recommend amulet
[15:32] bloodearnest: yeah, a few, one sec
[15:53] Does anyone have a moment to help me figure out why I can't get my env to find the juju-tools in a private openstack cloud? I've uploaded to a swift bucket and created the keystone endpoints but I keep getting ERROR juju supercommand.go:282 no matching tools found for constraint. This is juju 1.16.5
[15:59] gnuoy: on bootstrap I presume? can you pastebin a --debug log?
[15:59] mgz, juju-metadata validate-tools
[15:59] will do
[16:00] hey folks! the cs:precise/openstack-dashboard-13 charm installs openstack grizzly's horizon but also installs an incompatible version of django (1.5.4) from some ubuntu-cloud repository :( should I file a bug about this or is this fixed somewhere, somehow?
[16:01] roadmr: bug 1240667
[16:01] <_mup_> Bug #1240667: Version of django in cloud-tools conflicts with horizon:grizzly
[16:01] and dimitern is working on it today
[16:01] gnuoy: agy had this issue last week on prodstack
[16:02] roadmr, i'm just testing the fix now on ec2
[16:02] roadmr, what did you deploy after bootstrapping?
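An aside on the tests layout marcoceppi describes above: since `charm test` is just a runner and tests can be written in any language, a functional test can be a plain executable under CHARM_DIR/tests. A minimal sketch for the gunicorn charm bloodearnest mentions; the file name and the status-polling details are assumptions:

    #!/bin/sh
    # tests/00-deploy -- minimal functional test run by `charm test`.
    # Deploys the charm and waits for the unit agent to start; exits
    # non-zero on failure so the runner reports the test as failed.
    set -e
    juju deploy gunicorn
    for i in $(seq 1 30); do
        if juju status gunicorn | grep -q 'agent-state: started'; then
            exit 0
        fi
        sleep 10
    done
    echo "gunicorn unit never reached started" >&2
    exit 1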
[16:02] mgz, I'll catch up with agy rather than use up your time (the log will take a few minutes to clean up anyway as it's doing fun things like displaying passwords in clear text)
[16:02] bloodearnest, thanks
[16:03] gnuoy: sure, poke me again if you need to
[16:03] thanks, much appreciated
[16:04] gnuoy: it was after a juju env upgrade from 1.14 -> 1.16 IIRC, juju was looking in the wrong place for the tools.
[16:04] or rather, the tools were not where juju was looking
[16:04] bloodearnest, that sounds very like my situation
[16:05] dimitern: oh! I hadn't looked for this in the cloud-archive project! I just did "juju deploy openstack-dashboard" on a maas provider (all nodes installed with Ubuntu 12.04, save for the maas controller which is trusty)
[16:06] dimitern: all other openstack components are already deployed but I was getting the server error when trying to access horizon
[16:09] roadmr, ok, thanks for the info, i'm trying now
[16:10] dimitern: awesome, thanks :)
[16:11] mgz, this is the last part of the output from validate-tools http://pastebin.ubuntu.com/6915759/
[16:12] I see it looking up the endpoint for juju-tools correctly
[16:12] and then it seems to query the index files and give up
[16:13] The auth url is the url of the public bucket I created and pointed the endpoint at, fwiw
[16:17] ev: ping
[16:57] hi folks, should I expect relation data to still be present during the *-relation-departed hook? for instance, if I relation-set blah=2 in the relation and the relation-departed hook fires, should I be able to relation-get blah during teardown?
[16:58] ..currently I'm seeing permission denied on any relation-get calls from the departed hook
[17:01] blackboxsw: I thought you could, but it's not reliable
[17:06] marcoceppi, cool thanks, I was thinking I might persist a json file containing information I need for service teardown on the unit that needs it during the *-departed hook. Hopefully that approach sounds reasonable. I can't think of any way with juju to ensure I always have the data I need to properly tear down the service
[17:06] so, I'd set up the persistent file during *-relation-changed or *-relation-joined and reference it during *-departed
[17:06] blackboxsw: caching relation data to a dot file seems fine
[17:06] within $CHARM_DIR
[17:06] roger, thanks marcoceppi
[17:07] blackboxsw: IMO you should be able to run relation-get in relation-departed
[17:07] do you have logs from when you get the perm denied?
[17:07] marcoceppi, I'll deploy again and grab what I get from debug-hooks.
[17:08] blackboxsw: cool, thanks!
[17:08] thank you. will be about 30 mins though.... and I'll pastebin it
=== Kyle- is now known as Kyle
[18:55] marcoceppi, sorry that took so long: https://pastebin.canonical.com/104649/ here's a pastebin showing that the departed hook doesn't have access to relation data that should be available
[18:56] mgz: should the departed relation hook have access to userdata? I was under the impression it should
[18:57] hello
[18:57] blackboxsw: from what I understand (and have been telling people for the last few years), this is a bug
[18:57] imo, departed should have relation-get access, and broken should have no relation access
[18:57] KarielG0: hello o/
[18:57] dpb1 pointed me at https://juju.ubuntu.com/docs/authors-relations-in-depth.html which seems to say relation data should exist until the last unit depars
[18:57] departs
[18:58] is the Q&A going to be here or have they changed the channel?
[18:59] is this the right channel for the Jono thing?
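A sketch of the workaround blackboxsw and marcoceppi settle on above: snapshot the relation data needed for teardown while relation-get still works, and read the cached copy back in the departed hook. The relation name 'myrel' and the cache file name are assumptions; the key 'blah' comes from the log:

    #!/bin/bash
    # hooks/myrel-relation-changed -- relation-get is reliable here, so
    # cache the value we will need for teardown under $CHARM_DIR.
    set -e
    relation-get blah > "$CHARM_DIR/.myrel-cache"

    #!/bin/bash
    # hooks/myrel-relation-departed -- relation-get may be denied at this
    # point (as seen in the pastebin above), so use the cached copy.
    set -e
    blah=$(cat "$CHARM_DIR/.myrel-cache")
    echo "tearing down with blah=$blah"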
[18:59] hmm, not sure which Q&A you are referring to KarielG0. I just fired a couple of questions into the room as the experts are here :)
[19:00] * blackboxsw checks around
[19:00] I like pie
[19:00] Q&A is in #ubuntu-on-air - just reload the page and you can join the channel again
[19:00] sorry about that
[19:00] we can see you
[19:01] thx jono
[19:02] hey calendar app
[19:03] blackboxsw: agreed that the docs getting it right are important. :)
[19:04] marcoceppi, agreed. yeah, I don't find any bug against relation-departed in juju. I'll file one
[19:05] QUESTION: is there going to be a calendar app and weather app for 14.04
[19:07] blackboxsw: well, that was written with my understanding of the relation stuff, so I could be wrong
[19:07] linuxblack: go to #ubuntu-on-air
[19:08] blackboxsw: either way a bug would be good
[19:12] https://bugs.launchpad.net/juju-core/+bug/1279018 filed for clarity
[19:12] <_mup_> Bug #1279018: relation-departed hook does not surface the departing relation-data to related units
[19:12] thanks marcoceppi
[19:19] marcoceppi, ping about juju charm and consolidation
[19:19] aquarius: pong
[19:20] marcoceppi, OK, as you know, I have discourse running on azure through juju
[19:20] and it's a bit too xpensiv
[19:20] expensive
[19:20] I'm using £75 of compute hours a month
[19:20] right
[19:21] what I'd like to do is reduce that
[19:21] It's deployed three VMs, and three "cloud services", whatever they are
[19:21] now, it's possible that if that were all on one machine I'd still be paying the same amount
[19:21] but I wonder if that's not the case
[19:22] so: I come to you to ask whether you have any ideas
[19:22] aquarius: probably not, a cloud service is the VM, with auto-dns stuff
[19:22] aquarius: so, we have containerization within juju that works OK, not sure how well it works on azure
[19:22] since a hundred pounds a month to run a not-very-busy-at-all discourse forum is insanely expensive by comparison with renting the most rubbish machine from, say, digitalocean :)
[19:23] aquarius: but you can do something like: juju deploy --to lxc:0 postgresql; juju deploy --to lxc:0 cs:~marcoceppi/discourse
[19:23] discoursehosting.com costs a lot less than that, and that's a managed service. :)
[19:23] aquarius: you could just rent a bunch of "rubbish" machines and use the juju manual provider
[19:24] aquarius: can also check out hazmat's https://github.com/kapilt/juju-digitalocean if you're interested in trying it out on DO
[19:24] aquarius: well, we never said deploying to the cloud was cheap ;)
[19:25] marcoceppi, I suppose my thought is more this question: is there something I've done wrong in the setup, or something the charm does wrong in the setup... or is it actually the case that using juju to host a discourse forum on Azure is *expected* to cost approximately eight times as much as managed discourse hosting for it? :)
[19:25] because if the answer really is the latter then this is something of a blow to the juju/azure model, I have to tell you :)
[19:25] aquarius: you can do service co-location on a single machine, which will cut down costs from three machines to one
[19:25] something like juju deploy --to ?
[19:26] right, but just using --to will do a bit of hulk smashing; using juju deploy --to lxc:0 or --to kvm:0 (soon) will set up containers on that machine
[19:26] marcoceppi: do you have to set up the containers first? or will it auto-create if it doesn't exist?
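marcoceppi's colocation suggestion spelled out: with lxc placement each service lands in its own container on machine 0 (the bootstrap node) rather than on a fresh VM, so the whole stack fits on one instance. A sketch; the relation and the expose step are assumptions about how the discourse charm is wired up:

    # one VM (machine 0) instead of three: each deploy goes into its own
    # LXC container on the bootstrap node
    juju deploy --to lxc:0 postgresql
    juju deploy --to lxc:0 cs:~marcoceppi/discourse
    juju add-relation discourse postgresql   # relation name assumed
    juju expose discourse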
[19:26] aquarius: also, each cloud provider costs differently, aws v hp cloud v azure, etc
[19:27] rick_h_: it will autocreate; if you just do lxc:0 that will create 0/lxc/#
[19:27] aquarius: there's two levels of colocation, and then there's just using a cheaper provider as well.
[19:27] marcoceppi, I was wondering about that. I have two questions about it, though. First question: do I have to wipe everything and redeploy from scratch to do that, or can I "migrate" from my current setup to it? Second question: in your opinion, will that actually be cheaper? (Or will I just use 3x the CPU on one machine and pay the same?)
[19:27] so lots of options
[19:27] rick_h_, part of the reason we're using Azure is that MS kindly sponsored the forum
[19:28] aquarius: it'll use the cpu of the combined services. If all three machines are running 90% cpu there's not going to be a good way to run on one box
[19:28] aquarius: well, compute hours are typically billed based on frequency of cpu scheduler, but are typically $/hour of usage in general
[19:28] if they're all running a load of .1 then you can colo them fine, as long as you're comfy with all the services sitting on one VM
[19:28] rick_h_, I'm not sure how to work this out, but I'll bet you a fiver right now that those machines are idle at the moment ;)
[19:29] aquarius: yeah, they're pretty idle, discourse probably uses the most of all, and it's pretty well behaved
[19:29] aquarius: you'll have to do manual data migration from postgresql, etc
[19:29] aquarius: then yeah, I'd just look at colo'ing them. The deployer format supports colo'd units, so I'd take your current environment, export a bundle, tweak it to be colo'd, and try to bring it up side by side with your current environment
[19:29] what concerns me more is that I don't know whether what we're basically saying here is "don't use juju to deploy discourse to Azure because it's just really really expensive" or whether we're saying "you should have used the charm in the following way, $alternativeway, and you didn't"
[19:29] and then if that's cool, bring the new colo up alongside the live one, replicate pgsql, and go
[19:29] basically, stand up a new azure environment, copy database + files over to the new deployment, point the public IP address to the new site
[19:29] tear down the old deployment
[19:30] I really, really, really do not want to tear everything down to zero and redeploy from scratch and restore postgres backups :( I am no sysadmin :(
[19:30] aquarius: we're saying that juju's default use is services at scale and elasticity. That's more $$ ootb. You can go for less $$ by using juju's containerization to colo services.
[19:30] especially since the very act of deployment will, I imagine, use up all my remaining CPU time for the month...
[19:30] aquarius: the GUI is starting work on a visual helper for doing service placement over the next couple of months
[19:32] rick_h_: still doesn't solve data migration from within charms
[19:32] OK. Well, that's clear at least, even if it's not the answer I was looking for. :)
[19:32] * marcoceppi goes back to pondering a backup/restore charm
[19:32] marcoceppi: no, but pgsql supports replication, correct?
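The side-by-side migration rick_h_ and marcoceppi outline above, as a hedged shell sketch; the environment names are placeholders and the data-copy step is left as a comment, since that is the manual part:

    # stand up a colocated copy next to the live environment
    juju bootstrap -e azure-new
    juju deploy -e azure-new --to lxc:0 postgresql
    juju deploy -e azure-new --to lxc:0 cs:~marcoceppi/discourse
    juju add-relation -e azure-new discourse postgresql
    # ...replicate or restore the postgresql data and site files, then
    # point the public IP / DNS at the new site...
    juju destroy-environment azure-old   # tear down the old deployment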
[19:33] so that's something the charms can/should do anyway
[19:33] but yeah, there's some migration pain in there, but it can be scripted as well
[19:33] rick_h_: yes, you could add-unit --to lxc:0 then remove the postgresql/0 unit after migration
[19:33] I guess that *could* could
[19:33] work*
[19:33] marcoceppi: right, it's not as simple as it could/should be... but it's not impossible either
[19:34] well, postgresql also dumps nightly backups too, which makes it easy to restore
[19:34] but yeah
[19:35] so aquarius, yes it's $$ to use any cloud doing one machine per service unit. Yes, there's a way to avoid that using colocation. Ugh, the migration tooling to help you turn one into the other isn't available as a super easy tool for you.
[19:36] and there's stuff in the works that makes this easier in the future, but doesn't help you this weekend
[19:36] *nod*
=== zz_mwhudson is now known as mwhudson
[20:30] hello guys! where can a new developer find the codebase for juju, if he wants to contribute to the cause?
[20:31] phaser: hello :) I believe this is where all the development happens: https://code.launchpad.net/juju-core
[20:32] thank you for the quick response :)
[20:35] aquarius, rick_h_: re DO provider, not ready yet.
=== thumper is now known as thumper-afk
=== thumper-afk is now known as thumper
[23:47] i'm trying to bootstrap an environment and I'm getting an error about it being already bootstrapped, but juju destroy-environment says there are no instances
[23:47] How can I resolve this?
[23:50] bdmurray: what's the name of the environment?
[23:52] marcoceppi: the name? It's called openstack in environments.yaml
[23:52] bdmurray: then what you'll want to do is rm -f ~/.juju/environments/openstack.jenv
[23:52] then try to bootstrap
=== CyberJacob is now known as CyberJacob|Away
[23:54] marcoceppi: that still didn't work
[23:55] bdmurray: run `juju destroy-environment openstack` again, then delete the .jenv (again), then change the control-bucket name in the environments.yaml file to something else (doesn't matter what), then try bootstrapping again
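The full reset sequence marcoceppi walks bdmurray through, collected in one place; the environment name "openstack" comes from the log, and the new control-bucket value is an arbitrary placeholder:

    juju destroy-environment openstack
    rm -f ~/.juju/environments/openstack.jenv
    # in ~/.juju/environments.yaml, change the control-bucket value for
    # the "openstack" environment to any new unique string, e.g.:
    #   control-bucket: juju-openstack-retry-2014
    juju bootstrap -e openstack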