/srv/irclogs.ubuntu.com/2014/02/11/#juju.txt

=== axw_ is now known as axw
=== mwhudson is now known as zz_mwhudson
=== zz_mwhudson is now known as mwhudson
=== mwhudson is now known as zz_mwhudson
=== axw_ is now known as axw
=== timrc is now known as timrc-afk
=== zz_mwhudson is now known as mwhudson
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
=== frobware is now known as zz_frobware
=== zz_frobware is now known as frobware
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
=== freeflying is now known as freeflying_away
=== freeflying_away is now known as freeflying
vilahi there, I'm encountering issues during 'juju bootstrap' on canonistack; I noticed that two security groups are created at that time, one of them being empty (according to 'nova secgroup-list-rules juju-cs-0'), is that expected?09:24
vilaspecifically, juju bootstrap says: 'Attempting to connect to 10.55.32.3:22' and sits there until a 10m timeout expires09:25
vilamgz: ping ^ I have a gut feeling it's related to ssh keys and chinstrap but I can't find the best way to debug that :-/09:29
vilajuju debug-log says the connection times out too so I'm blind09:30
vilaand FTR, juju bootstrap fails with: '2014-02-11 09:35:13 ERROR juju.provider.common bootstrap.go:140 bootstrap failed: waited for 10m0s without being able to connect: Received disconnect from UNKNOWN: 2: Too many authentication failures for ubuntu'09:37
=== freeflying is now known as freeflying_away
=== mwhudson is now known as zz_mwhudson
vilaaxw_: I think I may be bitten by https://bugs.launchpad.net/juju-core/+bug/1275657 in the above ^ can you help me verify that ?11:06
_mup_Bug #1275657: r2286 breaks bootstrap with authorized-keys in env.yaml <bootstrap> <regression> <ssh> <juju-core:Fix Committed by axwalk> <https://launchpad.net/bugs/1275657>11:06
vilaI'm using 1.17.2 from a saucy client by the way11:07
vilaha damn, wrong TZ to reach axw_ :-/11:08
vilamgz: not around ?11:08
mgzhey vila, am now, was just in standup11:09
vilamgz: cool ;)11:09
mgzwhat juju version are you using?11:09
mgzand can you actually route to that 10. address?11:10
vilamgz: 1.17.2 according to --show-log11:10
vilamgz: depends on the routes, let me recheck, nova list always shows the instance for one11:11
vilamgz: ha, just got the timeout, so from memory:11:11
mgzbootstrap on trunk requires ssh, which requires working forwarding11:11
vilaI could reach it via chinstrap at one point11:11
vilamgz: it was working this week-end with an already bootstrapped node (so the routing was working)11:12
mgzyou probably need to set authorized-keys-path to your Canonical ssh key that chinstrap accepts11:12
vilaat that point that is, then I wanted to restart from a clean env, destroy-env11:12
vilamgz: that would make sense according to the bug but why did it work before?11:12
mgzoh, and when in doubt, delete ~/.juju/environments/*.jenv files11:13
vilayeah, delete that, nova delete, even swift delete at one point11:13
vilatrying again with my canonistack key in authorized-keys-path11:14
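For reference, authorized-keys-path lives in ~/.juju/environments.yaml; a rough sketch of the stanza vila is editing (the environment name and the other openstack settings are illustrative):

    environments:
      canonistack:
        type: openstack
        # ... auth-url, region and the other openstack provider settings ...
        authorized-keys-path: ~/.canonistack/vila.key.pub   # juju injects this key for the ubuntu user on new instances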
vilamgz: and about that empty secgroup ?11:14
mgzthe per-machine ones are empty until a charm sets some ports and `juju expose` is run11:15
vilamgz: ha great, makes sense, thanks11:15
vilahard to guess though ;) But makes perfect sense11:16
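A minimal sketch of how that per-machine group gets its rules, assuming a hypothetical service called mysite (the secgroup name follows the juju-<env>-<machine> pattern seen above):

    open-port 80/tcp                       # run inside a charm hook; tells juju the service wants this port
    juju expose mysite                     # run from the client; only now is the secgroup rule actually added
    nova secgroup-list-rules juju-cs-1     # the per-machine group is no longer empty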
vilaok, bootstrap running, instance up, can't connect via ssh even using -i with my canonical key (too early maybe, re-trying)11:18
vilamgz: using ssh -v, getting the host key (so the ssh server is up right ?), all my keys are attempted, none work11:19
mgzhmpf11:20
vilamgz: and what's the config key to reduce that 10mins timeout ?11:21
mgzyou can just interrupt, no?11:21
vilahmm, looks like it's ok now, I remembered going into a weird state where I had to cleanup everything (that's how I discovered the swift delete...)11:22
vilamgz: slight doubt, I had to generate a pub version of my canonistack key with 'ssh-keygen -y', that's the right command ?11:23
mgzwhy?11:24
mgzthat's not right.11:24
mgzchinstrap knows nothing about any canonistack keys, you need one that chinstrap allows11:24
vilamgz: hold on,11:27
vilamgz: I need the pub one for authorized-keys-path: ~/.canonistack/vila.key.pub in env...s.yaml, and I need the private one in .ssh/config for Host 10.55.60.* 10.55.32.*11:28
vilamgz: chinstrap itself is ok, I can ssh to it11:28
vilamgz: probably from my launchpad key known by my ssh agent (yup confirmed)11:29
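A rough sketch of the ~/.ssh/config stanza vila is describing, with chinstrap as the bastion for the canonistack 10.55.* ranges; the bastion hostname and key path are assumptions:

    Host 10.55.60.* 10.55.32.*
        User ubuntu
        IdentityFile ~/.canonistack/vila.key                # the private half of the key given to juju
        ProxyCommand ssh chinstrap.canonical.com nc %h %p   # hop through the bastion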
mgzright, just use the launchpad key11:30
mgzin juju config11:30
* vila blinks11:33
vilamgz: not sure that trick will work for our other use cases, but let's see if it works for me first11:34
vilamgz: nope, same behavior11:37
vilamgz: any way to set 'ssh -vvv' for juju attempts ?11:38
mgzif you poke the source probably11:38
viladamn it, scratch those last attempts, the .jenv is not updated when I modify my envs.yaml!11:39
mgzno11:39
mgzdelete it after failed bootstraps :)11:39
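Spelled out, the reset mgz is suggesting (the environment name is illustrative; the .jenv path is where juju 1.17 caches bootstrap state):

    juju destroy-environment canonistack          # best effort; may fail on a half-bootstrapped env
    rm -f ~/.juju/environments/canonistack.jenv   # force juju to re-read environments.yaml
    juju bootstrap -e canonistack --debug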
vilamgz: ok, so I'm in with.. and it's proceeding11:41
vilapfew, at least I'm behind that wall11:42
vilamgz: thanks, a few questions then11:42
vilamgz: using the same key everywhere won't match all my use cases, was it just for debug or is it a hard requirement or.. why ? ;)11:42
mgzbecause we go through chinstrap, you have to give juju a key that chinstrap will accept11:43
mgzthat's all really11:44
vilamgz: hold on, juju relies on ~/.ssh/config right, so whatever I say there is not juju's concern11:44
vilamgz: I mean, juju doesn't know it has to go thru chinstrap11:45
mgzno, but it is supplying a given key, that needs to be accepted by the bouncer11:45
vilamgz: and now it's juju destroy-env that is unhappy :-(11:45
mgzapparently that overrides the one in the config block for the forwarding agent11:46
vilaoh wait, my ssh/config still specifies my canonistack key so it seems that bootstrap and destroy-env use different tricks 8-/11:47
vilamgz: and juju debug-log has the same issue (.ssh/config for chinstrap bouncer fixed to use the lp key)11:49
vilamgz: so connecting with ssh to the state server works, authorized keys there says juju-client-key, vila@launchpad, juju-system-key (all prefixed with Juju:)11:51
vilabut destroy-env and debug-log hang11:51
mgzdestroy-env doesn't use ssh at all11:51
mgzyou just want the sshuttle tunnel up for that11:52
vilarhaaaaaaaaa11:52
mgzso it can talk over the juju api11:52
vilaalways sshuttle after successful bootstrap, damn it11:52
mgzhaving fun yet? :)11:52
vilamgz: hehe, yeah, we're automating stuff but I seemed to be the only one that couldn't bootstrap anymore11:53
mgzyou have too many keys :011:53
vilawhich kind of limits the fun ;)11:53
vilamgz: I was taught this was a good thing ?11:53
vilamgz: and I even got a *new* one when introduced to canonistack...11:54
mgzyeah, I don't use that11:54
mgzjust give juju my one that works on chinstrap11:54
mgz(well... and set agent forwarding so my *other* key can be used to talk to launchpad so bzr works... so it's not all simple)11:54
vilamgz: so if I want to allow several keys from the juju instance, I need to switch from authorized-keys-path to authorized-keys and generate the proper content, right?11:55
mgzjust using one really is best11:57
mgzbut yeah, you can supply multiple11:58
vilaha good, I need to check which one I need exactly but at some point I have to create a nova instance and tell it which key to use, all was fine until now :-}11:58
vilamgz: anyway, I'm unblocked for now, thanks for the help !11:59
mgz:)11:59
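For the several-keys case vila asked about, a rough sketch using authorized-keys with inline keys instead of authorized-keys-path; the keys themselves are truncated placeholders:

      canonistack:
        type: openstack
        authorized-keys: |
          ssh-rsa AAAA... vila@launchpad
          ssh-rsa AAAA... vila@canonistack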
vilamgz: I'm reading the lp:juju-core log, how should I interpret "Increment juju to 1.17.3.", as opening .3 or releasing it? (I suspect opening, so later revisions will be part of 1.17.3, is that right?)13:36
mgzvila: opening13:41
vilayes !13:41
vilamgz: So I think my issue may be fixed by revno 2293 but introduced by revno 2286, I'll live with a single ssh key until 1.17.3 is released and re-test then13:44
vilamgz: what's the best time to chat with Andrew Wilkins ?13:44
=== timrc-afk is now known as timrc
=== rogpeppe1 is now known as rogpeppe
=== freeflying_away is now known as freeflying
jcastrohey fellas so what's the TLDR on HPCC14:53
jcastrolooks like Xiaoming has done the fixes the last review asked for14:54
marcoceppijcastro: looks like it was added back yesterday14:55
marcoceppijcastro: they're still not addressing the validation portion. It will only validate if you provide it a sha1sum via cfg, so if you don't do that it falls back to just downloading packages14:56
marcoceppiI can tell now it's not going to pass for charm store inclusion14:56
lazyPowermarcoceppi: you doing the next review?15:02
marcoceppilazyPower: I could15:02
lazyPowerI bet XaoMing is tired of reading my review text by now ;)15:02
lazyPower*Xiaoming15:03
marcoceppiI'll give it a go in a bit15:03
bloodearnestworking on adding tests to gunicorn charm15:12
bloodearnestI have unit tests in ./tests15:13
bloodearnestI want to add some functional tests before proposing15:13
bloodearnestshould I use amulet, or 'charm test', or some combination?15:13
bloodearnestand should I put the unit tests somewhere else?15:14
=== freeflying is now known as freeflying_away
marcoceppicharm test is just a test runner15:24
marcoceppibloodearnest: CHARM_DIR/tests is reserved for functional tests15:25
bloodearnestmarcoceppi: ok will move it15:25
marcoceppiunit tests should either go in hooks or elsewhere15:25
bloodearnestmarcoceppi: so can I write a test in CHARM_DIR/tests using amulet that will be run by 'charm test'15:29
marcoceppibloodearnest: correct15:31
bloodearnestmarcoceppi: sweet. Do you know of any charms that have such tests I can use as an example?15:32
marcoceppiyou can write tests in any language, but we recommend amulet15:32
marcoceppibloodearnest: yeah a few, one sec15:32
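For reference, `charm test` just runs the executables it finds under CHARM_DIR/tests, so a functional test can be a plain shell script even though amulet (Python) is the recommended tool; a minimal sketch, with the charm, service and relation names as assumptions:

    #!/bin/sh
    # tests/00-deploy -- must be executable; exit 0 means pass
    set -e
    juju deploy gunicorn
    juju deploy python-django myapp          # hypothetical wsgi charm for gunicorn to attach to
    juju add-relation myapp gunicorn
    sleep 180                                # crude wait for units to settle
    juju status | grep -q 'agent-state: started'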
gnuoyDoes anyone have a moment to help me figure out why I can't get my env to find the juju-tools in a private openstack cloud ? I've uploaded to a swift bucket and created the keystone endpoints but I keep getting ERROR juju supercommand.go:282 no matching tools found for constraint. This is juju 1.16.515:53
mgzgnuoy: on bootstrap I presume? can you pastebin a --debug log?15:59
gnuoymgz, juju-metadata validate-tools15:59
gnuoywill do15:59
roadmrhey folks! the cs:precise/openstack-dashboard-13 charm installs openstack grizzly's horizon but also installs an incompatible version of django (1.5.4) from some ubuntu-cloud repository :( should I file a bug about this or is this fixed somewhere, somehow?16:00
mgzroadmr: bug 124066716:01
_mup_Bug #1240667: Version of django in cloud-tools conflicts with horizon:grizzly <charms> <cloud-archive> <cts-cloud-review> <packaging> <regression> <ubuntu-cloud-archive:Confirmed> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1240667>16:01
mgzand dimitern is working on it today16:01
bloodearnestgnuoy: agy had this issue last week on prodstack16:01
dimiternroadmr, i'm just testing the fix now on ec216:02
dimiternroadmr, what did you deploy after bootstrapping?16:02
gnuoymgz, I'll catch up with agy rather than use up your time (the log will take a few minutes to clean up anyway as it's doing fun things like displaying passwords in clear text)16:02
gnuoybloodearnest, thanks16:02
mgzgnuoy: sure, poke me again if you need to16:03
gnuoythanks, much appreciated16:03
bloodearnestgnuoy: it was after a juju env upgrade from 1.14 -> 1.16 IIRC, juju was looking in the wrong place for the tools.16:04
bloodearnestor rather, the tools were not where juju was looking16:04
gnuoybloodearnest, that sounds very like my situation16:04
roadmrdimitern: oh! I hadn't looked for this in the cloud-archive project! I just did "juju deploy openstack-dashboard" on a maas provider (all nodes installed with Ubuntu 12.04, save for the maas controller which is trusty)16:05
roadmrdimitern: all other openstack components are already deployed but I was getting the server error when trying to access horizon16:06
dimiternroadmr, ok, thanks for the info, i'm trying now16:09
roadmrdimitern: awesome, thanks :)16:10
gnuoymgz, this is the last part of the output from validate-tools http://pastebin.ubuntu.com/6915759/16:11
gnuoyI see it looking up the endpoint for juju-tools correctly16:12
gnuoyand then it seems to query the index files and give up16:12
gnuoyThe auth url is the url of the public bucket I created and pointed the endpoint at fwiw16:13
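Roughly the setup gnuoy describes, for reference; the container name is illustrative and the keystone step is only sketched as a comment:

    swift post juju-tools --read-acl '.r:*'    # make the tools container publicly readable
    swift upload juju-tools tools/             # upload the tools tarballs plus simplestreams metadata
    # register a "juju-tools" keystone endpoint pointing at the container's public URL, then re-check:
    juju-metadata validate-tools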
lazyPowerev: ping16:17
blackboxswhi folks, should I expect relation data to still be present during the *-relation-departed hook? For instance, if I relation-set blah=2 in the relation and that relation-departed hook fires, should I be able to relation-get blah during teardown?16:57
blackboxsw..currently I'm seeing permission denied on any relation-get calls from the departed hook16:58
marcoceppiblackboxsw: I thought you could, but it's not reliable17:01
blackboxswmarcoceppi, cool thanks, I was thinking I might persist a json file containing information I need for service teardown on the unit that needs it during the *-departed hook. Hopefully that approach sounds reasonable. I can't think of any way w/ juju to ensure I always have the data I need to properly tear down the service17:06
blackboxswso, I'd setup the persistent file during *-relation-changed or *-relation-joined and reference it during *-departed17:06
marcoceppiblackboxsw: caching relation data to a dot file seems fine17:06
marcoceppiwithin $CHARM_DIR17:06
blackboxswroger thanks marcoceppi17:06
marcoceppiblackboxsw: IMO you should be able to run relation-get in relation-departed17:07
marcoceppido you have logs from when you get the perm denied?17:07
blackboxswmarcoceppi, I'll deploy again and grab what I get from debug-hooks.17:07
marcoceppiblackboxsw: cool, thanks!17:08
blackboxswthank you. will be about 30 mins though.... and I'll pastebin it17:08
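A rough sketch of the caching approach blackboxsw describes, assuming a hypothetical relation key called blah; the cache filename is illustrative:

    # in the <relation>-relation-changed hook: stash what teardown will need
    relation-get blah > "$CHARM_DIR/.blah-cache"

    # in the <relation>-relation-departed hook: use live data if relation-get still works, else the cache
    blah=$(relation-get blah 2>/dev/null) || blah=$(cat "$CHARM_DIR/.blah-cache")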
=== Kyle- is now known as Kyle
blackboxswmarcoceppi, sorry that took so long: https://pastebin.canonical.com/104649/   here's a pastebin showing that the departed hook doesn't have access to relation data that should be available18:55
marcoceppimgz: should the departed relation hook have access to the relation data? I was under the impression it should18:56
KarielG0hello18:57
marcoceppiblackboxsw: from what I understand (and have been telling people for the last few years), this is a bug18:57
marcoceppiimo, departed should have relation-get access, and broken should have no relation access18:57
marcoceppiKarielG0: hello o/18:57
blackboxswdpb1, pointed me at  https://juju.ubuntu.com/docs/authors-relations-in-depth.html   which seems to say relation-data should exist until the last unit depars18:57
blackboxswdeparts18:57
KarielG0is going the Q&A going to be here or they haven't changed the channel?18:58
CheeseBurgis this the right channel for the Jono thing?18:59
blackboxswhmm not sure which Q&A you are referring to KarielG0. I just fired a couple questions into the room as the experts are here :)18:59
* blackboxsw checks around19:00
LevanI like pie19:00
jonoQ&A is in #ubuntu-on-air - just reload the page and you can join the channel again19:00
jonosorry about that19:00
Levanwe can see you19:00
blackboxswthx jono19:01
linuxblackhey calendar app19:02
dpb1blackboxsw: agreed that the docs getting it right are important. :)19:03
blackboxswmarcoceppi, agreed. yeah I don't find any bug against relation-departed in juju. I'll file one19:04
linuxblackQUESTION: is there going to be a calendar app and weather app for 14.0419:05
marcoceppiblackboxsw: well that was written with my understanding of the relation stuff, so I could be wrong19:07
marcoceppilinuxblack: go to #ubuntu-on-air19:07
marcoceppiblackboxsw: either way a bug would be good19:08
blackboxswhttps://bugs.launchpad.net/juju-core/+bug/1279018 filed for clarity19:12
_mup_Bug #1279018: relation-departed hook does not surface the departing relation-data to related units <juju-core:New> <https://launchpad.net/bugs/1279018>19:12
blackboxswthanks marcoceppi19:12
aquariusmarcoceppi, ping about juju charm and consolidation19:19
marcoceppiaquarius: pong19:19
aquariusmarcoceppi, OK, as you know, I have discourse running on azure through juju19:20
aquariusand it's a bit too xpensiv19:20
aquariusexpensive19:20
aquariusI'm using £75 of compute hours a month19:20
marcoceppiright19:20
aquariuswhat I'd like to do is reduce that19:21
aquariusIt's deployed three VMs, and three "cloud services", whatever they are19:21
aquariusnow, it's possible that if that were all on one machine I'd still be paying the same amount19:21
aquariusbut I wonder if that's not the case19:21
aquariusso: I come to you to ask whether you have any ideas19:22
marcoceppiaquarius: probably not, a cloud machine is the VM, with auto-dns stuff19:22
marcoceppiaquarius: so, we have containerization within juju that works OK, not sure how well it works on azure19:22
aquariussince a hundred pounds a month to run a not-very-busy-at-all discourse forum is insanely expensive by comparison with renting the most rubbish machine from, say, digitalocean :)19:22
marcoceppiaquarius: but you can do something like, juju deploy --to lxc:0 postgresql; juju deploy --to lxc:0 cs:~marcoceppi/discourse19:23
aquariusdiscoursehosting.com costs a lot less than that, and that's a managed service. :)19:23
marcoceppiaquarius: you could just rent a bunch of "rubbish" machines and use juju manual provider19:23
rick_h_aquarius: can also check out hazmat's https://github.com/kapilt/juju-digitalocean if you're interested in trying it out in DO19:24
marcoceppiaquarius: well, we never said deploying to the cloud was cheap ;)19:24
aquariusmarcoceppi, I suppose my thought is more this question: is there something I've done wrong in the setup, or something the charm does wrong in the setup... or is it actually the case that using juju to host a discourse forum on Azure is *expected* to cost approximately eight times as much as managed discourse hosting for it? :)19:25
aquariusbecause if the answer really is the latter then this is something of a blow to the juju/azure model, I have to tell you :)19:25
marcoceppiaquarius: you can do service co-location on a single machine, which will cut down costs from three machines to one19:25
roadmrsomething like juju deploy --to ?19:25
marcoceppiright, but just using --to will do a bit of hulk smashing, using juju deploy --to lxc:0 or --to kvm:0 (soon) will set up containers on that machine19:26
rick_h_marcoceppi: do you have to setup the containers first? or will it auto create if it doesn't exist?19:26
marcoceppiaquarius: also, each cloud provider costs differently, aws v hp cloud v azure, etc19:26
marcoceppirick_h_: it will autocreate if you just do lxc:0 that will create 0/lxc/#19:27
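Putting that together, a rough sketch of the single-machine layout marcoceppi suggests, using LXC containers on machine 0; the add-relation and expose lines are assumptions about how the discourse charm is wired up:

    juju bootstrap
    juju deploy --to lxc:0 postgresql
    juju deploy --to lxc:0 cs:~marcoceppi/discourse
    juju add-relation discourse postgresql
    juju expose discourse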
rick_h_aquarius: there's two levels of colocation and then there's just using a cheaper provider as well.19:27
aquariusmarcoceppi, I was wondering about that. I have two questions about it, though. First question: do I have to wipe everything and redeploy from scratch to do that, or can I "migrate" from my current set up to it? Second question: in your opinion, will that actually be cheaper? (Or will I just use 3x the CPU on one machine and pay the same?)19:27
rick_h_so lots of options19:27
aquariusrick_h_, part of the reason we're using Azure is that MS kindly sponsored the forum19:27
rick_h_aquarius: it'll use the cpu of the combined services. If all three machines are running 90% cpu there's not going to be a good way to run on one box19:28
marcoceppiaquarius: well, compute hours are billed based on how the cpu scheduler carves up time, but are typically $/hour of usage in general19:28
rick_h_if they're all running a load of .1 then you can colo them fine as long as you're comfy with all the services sitting on one VM19:28
aquariusrick_h_, I'm not sure how to work this out, but I'll bet you a fiver right now that those machines are idle at the moment ;)19:28
marcoceppiaquarius: yeah, they're pretty idle, discourse probably uses the most of all, and it's pretty well behaved19:29
marcoceppiaquarius: you'll have to do manual data migration from postgresql, etc19:29
rick_h_aquarius: then yea, I'd just look at colo'ing them. The deployer format supports colo'd units so I'd take your current environment, export a bundle, tweak it to be colo'd, and try to bring it up side by side with your current environment19:29
aquariuswhat concerns me more is that I don't know whether what we're basically saying here is "don't use juju to deploy discourse to Azure because it's just really really expensive" or whether we're saying "you should have used the charm in the following way, $alternativeway, and you didn't"19:29
rick_h_and then if that's cool, bring the new colo up along side the live, replicate pgsql, and go19:29
marcoceppibasically, stand up a new azure environment, copy database + files over to new deployment, point public IP address to new site19:29
marcoceppitear down old deployment19:29
aquariusI really, really, really do not want to tear everything down to zero and redeploy from scratch and restore postgres backups :( I am no sysadmin :(19:30
rick_h_aquarius: we're saying that juju's default use is services at scale and elasticity. That's more $$ ootb. You can go for less $$ by using juju's containerization to colo services.19:30
aquariusespecially since the very act of deployment will, I imagine, use up all my remaining CPU time for the month...19:30
rick_h_aquarius: the Gui is starting work on a visual helper for doing service placement over the next couple of months19:30
marcoceppirick_h_: still doesn't solve data migration from within charms19:32
aquariusOK. Well, that's clear at least, even if it's not the answer I was looking for. :)19:32
* marcoceppi goes back to pondering about a backup/restore charm19:32
rick_h_marcoceppi: no, but pgsql supports replication correct?19:32
rick_h_so that's something the charms can/should do anyway19:33
rick_h_but yea, there's some migration pain in there, but it can be scripted as well19:33
marcoceppirick_h_: yes, you could add-unit --to lxc:0 then remove the postgresql/0 unit after migration19:33
marcoceppiI guess that *could* could19:33
marcoceppiwork*19:33
rick_h_marcoceppi: right, it's not as simple as it could/should be...but it's not impossible either19:33
marcoceppiwell, postgresql also dumps nightly backups too, which makes it easy to restore19:34
marcoceppibut yeah19:34
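The add-unit/remove-unit path marcoceppi mentions, roughly; unit and machine numbers are illustrative:

    juju add-unit --to lxc:0 postgresql       # new postgresql unit in a container on machine 0
    # replicate or restore the database onto the new unit, then:
    juju destroy-unit postgresql/0            # drop the original standalone unit
    juju destroy-machine 1                    # reclaim the now-empty VM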
rick_h_so aquarius, yes it's $$ to use any cloud doing one machine per service unit. Yes there's a way to avoid that using colocation. Ugh that migration tooling to help you turn one into the other isn't available as a super easy tool for you.19:35
rick_h_and there's stuff in the works that makes this easier in the future, but it doesn't help you this weekend19:36
aquarius*nod*19:36
=== zz_mwhudson is now known as mwhudson
phaserhello guys! where can a new developer find the codebase for juju, if he wants to contribute to the cause?20:30
sarnoldphaser: hello :) I believe this is where all the development happens: https://code.launchpad.net/juju-core20:31
phaserthank you for the quick response :)20:32
hazmataquarius, rick_h_ re the DO provider: not ready yet.20:35
=== thumper is now known as thumper-afk
=== thumper-afk is now known as thumper
bdmurrayI'm trying to bootstrap an environment and I'm getting an error about it being already bootstrapped but juju destroy-environment says there are no instances23:47
bdmurrayHow can I resolve this?23:47
marcoceppibdmurray: what's the name of the environment?23:50
bdmurraymarcoceppi: the name? It's called openstack in environments.yaml23:52
marcoceppibdmurray: then what you'll want to do is rm -f ~/.juju/environments/openstack.jenv23:52
marcoceppithen try to bootstrap23:52
=== CyberJacob is now known as CyberJacob|Away
bdmurraymarcoceppi: that still didn't work23:54
marcoceppibdmurray: run `juju destroy-environment openstack` again, then delete the .jenv (again) then change the control-bucket name in the environments.yaml file to something else (doesn't matter what) then try bootstrapping again23:55
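Spelled out, with an example control-bucket value (any unused name will do):

    juju destroy-environment openstack
    rm -f ~/.juju/environments/openstack.jenv
    # in ~/.juju/environments.yaml, under the openstack environment, pick a fresh bucket name, e.g.:
    #   control-bucket: juju-openstack-fresh-2014
    juju bootstrap -e openstack --debug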
