=== negronjl_ is now known as negronjl_away
=== negronjl_away is now known as negronjl_
=== kadams54 is now known as kadams54-away
[01:51] aisrael: gotcha, thanks!
[02:25] thumper: does 'juju destroy-service' call the 'stop' hook for the service's juju charm?
[02:25] thomi: by default, I think yes
[02:25] hmmm
=== axw_ is now known as axw
[02:58] thumper: ...and 'juju destroy-service' should block until the 'stop' hook has finished, right?
[02:58] no
[02:58] oh?
[02:58] nothing blocks
[02:58] except bootstrap
[02:58] ahhh, well that might explain something
=== kadams54 is now known as kadams54-away
[03:39] thumper: is there a way to call a charm's 'stop' hook from within a deployed unit? I'm seeing some odd things where calling 'service stop' does something different to asking juju to destroy the service
[03:39] from the client do this:
[03:40] juju run --unit=foo/2 'hooks/stop'
[03:40] ahh, thanks
=== urulama__ is now known as urulama
[04:38] ok, problem solved. Turns out if your charm is missing relation-departed and relation-broken hooks (even though this charm doesn't use relations), 'juju destroy-service' won't shut down the service nicely... confusingly, 'juju debug-hooks stop' shows the stop hook getting called. I have no idea what's going on, but happy that I found a reproducible way to get things back into a sane state.
=== JoshStrobl is now known as JoshStrobl-AFK
[09:49] i've got my openstack infra stood up and connected, everything is green across the board and i can log in to horizon
[09:50] but i'm trying to make sure my changes to the ceilometer charm are working and i'm not sure how to force it to log something to the database
[11:26] hmm, any idea why I might be getting a permission denied error running relation-get from inside a juju-run environment?
[11:26] specifically:
[11:27] relation-get --format=json -r db-admin:42 - updown-app/0
[11:28] some context: I previously added/removed the db-admin relation to perform a db migration
[11:28] if I juju run again, it works
[12:21] users
[12:27] Trying to use juju with a private OpenStack cloud
[12:31] All the OpenStack endpoints, e.g. keystone, swift, and nova-cloud-controller, have IP addresses belonging to a management network. The juju bootstrap process fires up a juju-state-machine-0, but this is on the private tenant network and cannot see the management network endpoints. Bootstrap fails with connection errors. Any ideas?
=== Odd_Blok1 is now known as Odd_Bloke
=== scuttle|afk is now known as scuttlemonkey
[15:57] lazyPower, ping?
[15:57] mattyw: pong
[15:58] lazyPower, hey hey, hope you're well... The mongodb charm: if I deploy 1 unit, relate it to my charm, then add some units to mongo at a later date, it should still set up a replica set, right?
[15:59] mattyw: correct, i would test in a staging env however
[15:59] i haven't looked at mongo in quite some time
[15:59] i'm not even sure if we've upgraded to deploy the 3.x series
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
[16:25] lazyPower, ok thanks
=== kadams54 is now known as kadams54-away
[16:29] Hmm.. do we not have access to unit-get in action context?
[16:29] i take that back, we do when i'm in debug-hooks
=== kadams54-away is now known as kadams54
[16:37] marcoceppi: tvansteenburgh aisrael - is there a pattern for calling actions from amulet? e.g. i have a health check action, and i'd like to flex it in an amulet test. is our path forward for now to just subprocess call it and parse the output of the uuid? or is there a branch w/ action helpers pending?
[16:38] lazyPower: there is not a mechanism yet in amulet
[16:38] I'll open a feature request for it
[16:38] ok, i figured that might be the case.
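[Editor's aside] The juju commands discussed above, collected into a runnable sketch. The unit and relation names (foo/2, db-admin:42, updown-app/0) come straight from the conversation; the `juju` shell function below is a stub that just echoes the invocation, so the sketch runs without a live environment. Remove the stub to use the real client.

```shell
# Stub `juju` so this sketch is self-contained; delete this line to run
# the same commands against a real juju environment.
juju() { echo "juju $*"; }

# Run a charm's stop hook in place on a deployed unit (here foo/2),
# as suggested in the conversation:
juju run --unit=foo/2 'hooks/stop'

# Read relation settings by relation id. Note the caveat discussed above:
# once a relation has been added and removed, its old id (db-admin:42 here)
# is stale, and relation-get against it can fail with a permission error
# from inside `juju run`:
juju run --unit=updown-app/0 \
  'relation-get --format=json -r db-admin:42 - updown-app/0'
```

With the stub removed, the first command executes the charm's `hooks/stop` script in the unit's normal hook environment, which is what makes it useful for reproducing the destroy-service behaviour discussed earlier.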
Thanks for the confirmation
[16:38] http://paste.ubuntu.com/11373549/
[16:38] however that is silly nice to have: an at-a-glance health check until we get the actual health-check hook
[16:41] lazyPower: we've talked a bit more about health and we're shying away from actions tbh
[16:41] marcoceppi: considering it's a stop-gap fix until the hooks are impl, what's the issue w/ an action?
[16:41] just curious
[16:42] lazyPower: well it's not a holistic view of health, which is what you really want, it's just a snapshot. cory_fu and I (and others) talked about using a monitoring relation instead for this
[16:43] well, that's a fair statement in 80% of services out there. You're polling for a snapshot of a single unit's health vs the cluster - whereas this is a builtin of the service.
[16:43] etcdctl does introspection of the cluster and reports a sanity check essentially - this is like the benchmark to figure out if your cluster is even configured properly
[16:44] * marcoceppi nods
[16:44] but most services won't have something like this, so yeah - 100% understood
[16:44] and i agree w/ that statement too
[16:52] hazmat: are we on for 4pm today?
[16:54] tvansteenburgh: yup
[16:54] yo yo hazmat o/
[16:55] lazyPower: greetings!
[16:55] glad to see you're still kickin' over there :)
[16:58] hey guys, any idea when that hvm patch for aws t2.micros is landing?
[17:26] jose: i haven't heard anything about it yet
[17:26] lazyPower: huh. I saw the mp being approved on gh
[17:26] or, well, as you call it on gh, pr
[17:26] seems like you know more about it than I do
[17:27] if you're using the nightly docker container, should be able to test it now
[17:27] or you can fetch/compile if you're opposed to using the container
[17:28] I'll check.
[17:31] hmm... getting an interesting error when attempting to bzr branch lp:charm-helpers - http://paste.ubuntu.com/11374382/
[17:32] lazyPower: I get the same error
[17:32] marcoceppi: have we updated the charm-helpers repo with anything?
or is this a breaking change introduced in bzr?
[17:37] that issue should be there if the branch is set to be private, but it's public...
[17:38] lazyPower, aisrael: seems to be an LP error. getting it with another branch
[17:38] thanks for confirming jose
[17:41] that's weird
[17:41] lazyPower: what did you do to achieve this?
[17:41] marcoceppi: general LP error
[17:42] marcoceppi: it appears to be an outage spurred by a launchpad rollout that just hit about 20 minutes ago
[17:42] marcoceppi: see #is-outage, they're on it.
[17:42] Seems to be working now
[17:43] aisrael: weird, still broken for me
[17:44] ^
[17:44] must have a sweet canadian LP mirror that's got the fix
[17:44] * lazyPower is instantly jealous
[17:48] jose: try now, appears to be resolved for me
[17:48] +1
[17:48] thanks for the heads up
[19:02] dpb: can you meet at 3:30 instead of 4?
[19:03] dpb1: ^
[19:06] * dpb1 checks
[19:06] tvansteenburgh: so, in 30 min? (meeting is at "2pm" here)
[19:07] dpb1: yeah
[19:07] tvansteenburgh: ok
[19:07] cool, thanks
=== urulama is now known as urulama__
=== kadams54 is now known as kadams54-away
[19:47] i finally got my openstack system running but I'm not sure of the best way to get images added. I followed marcoceppi's guide for standing up openstack on 2 machines, but every image i try to add just sits there and says "queued" forever.
[19:49] Zetas: incoming help
[19:49] i've tried uploading some of the images found at this url http://docs.openstack.org/image-guide/content/ch_obtaining_images.html
[19:49] ok thanks lazyPower
=== Zetas is now known as zetas
[19:51] hi zetas - can you tell us the command you're using to add the glance image?
[19:51] beisner: i'm adding it from horizon :/
[19:52] zetas, ok, t-shooting needs command line.
[19:54] ok, I thought maybe there was a command to add some basic images by default. When I stood up the openstack bundle on AWS it came with some coreos images if i recall correctly.
[19:54] My manual installation did not
[19:55] zetas, yep, glance will be empty by default.
[19:55] beisner: are we planning on adding an action to glance to load up the Ubuntu images? *shameless plug*
[19:55] zetas, as far as troubleshooting it, I'd use the glance command line api to confirm basic service functionality. i.e. glance image-list should exit 0 (good), even with no images. that would confirm that glance is able to authenticate and talk to all of its friends.
[19:56] lazyPower, you're the 2nd to ask ;-) I think an "import all currently-supported Ubuntu cloud images" action would be really useful.
[19:56] ok, i'll google for glance commands to add images and stuff as well, thanks for your help
[19:58] hrm... beisner: i take it this isn't a good sign heh http://paste.ubuntu.com/11376730/
[19:58] beisner: I should have known you cats were already on the trail for that one, but it never hurts to ask :)
[20:00] zetas, http://paste.ubuntu.com/11376741/
[20:00] ok awesome, thanks
[20:01] lazyPower, so actually, the glance simplestreams sync charm has some awesomeness. configure it and relate it to glance, and you always have fresh images in glance as they hit cloudimages.
[20:01] but yeah, use cases for actions are likely to grow wildly.
[20:01] wat
[20:01] we have a charm that does this for you?
[20:01] i think it needs a little updating love, but yep.
[20:02] we use it in serverstack
[20:02] that ensures that when CI nova boots new instances, they are the most up-to-date (smallest apt-get update delta).
[20:04] woot it worked, thanks beisner
[20:08] i'm gonna try cs:trusty/glance-simplestreams-sync-1
[20:08] see how it works
[20:08] zetas, great! so before adding to it, i'd poke around the cli to see if you can import a small cirros or ubuntu cloud image. generally speaking, if glance image-list works, the apis and authentication are probably happy. next line of t-shooting might be command syntax, image format, or a storage issue.
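[Editor's aside] beisner's troubleshooting advice above boils down to: check the exit status of `glance image-list` before worrying about anything else. A minimal sketch of that first step, with `glance` stubbed out so it runs anywhere; remove the stub and source your cloud credentials to run it for real.

```shell
# Stub `glance` so the sketch is self-contained; delete this line (and
# source your novarc/credentials) to query a real glance endpoint.
glance() { echo "glance $*"; }

# An empty image list is fine on a fresh cloud; what matters is that the
# command exits 0, which confirms glance can authenticate and reach the
# API, i.e. "talk to all of its friends".
glance image-list
status=$?
echo "image-list exit status: $status"
```

If that exits non-zero, the problem is auth or connectivity; if it exits 0 but imports still hang in "queued", look at command syntax, image format, or backend storage, as suggested above.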
[20:10] yea, image-list output an empty table
[20:10] i'm gonna keep poking around the CLI while this allocates
[20:11] my poor macbook is not loving running all these inside a virtualbox vm haha
[20:11] zetas, an empty table with no errors is actually good.
[20:11] echo $?
[20:11] after glance image-list to confirm it exits 0, without error.
[20:12] 0
[20:12] yep, seems to be all good
[20:13] zetas, http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
[20:13] zetas, glance image-create --name="trusty" --is-public=true --progress --container-format=bare --disk-format=qcow2 < trusty-server-cloudimg-amd64-disk1.img
[20:14] zetas, ^ those are rips from some automation which I know work.
[20:14] awesome, as soon as it finishes downloading that image i'm gonna run it and see what happens
[20:15] zetas, good luck, i've got to head out for a bit.
[20:15] thanks for all your help beisner
[20:15] zetas, you're welcome - happy to help
[20:15] :)
[20:24] hmm poop. I tried to deploy an instance with my new image but apparently i don't have a hypervisor haha. I guess nova-compute isn't happy for some reason.
[20:24] * zetas debugs
[21:59] lazyPower: ping
[21:59] thumper: pong
[22:00] lazyPower: got a few minutes for a hangout?
[22:00] thumper: not atm
[22:00] lazyPower: will you later?
[22:01] After dinner
[22:01] eta?
[22:02] lazyPower: are you east coast?
[22:02] thumper: yup
[22:02] lazyPower: so 6pm there now?
[22:02] yup
[22:02] when would be good?
[22:02] i'll ping after i eat
[22:03] :)
[22:03] ok
[22:03] i'll be around fairly late this evening, working on some leftover stuff from before holiday
[22:03] kk
=== kadams54 is now known as kadams54-away
[23:44] thumper: ping/pong - back from food
[23:47] lazyPower: hangout?
[23:48] https://plus.google.com/hangouts/_/canonical.com/django-afterhours
[23:48] hah
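[Editor's aside] Circling back to the earlier question about driving actions from amulet: since no amulet helper existed at the time, the suggested interim path was to shell out to the juju client and parse the action id from its output. A sketch of that pattern, with `juju` stubbed to emit the one-line shape of `juju action do` output from this era; the unit name (etcd/0), action name (health), and id are illustrative, not taken from a real deployment.

```shell
# Stub `juju` so the sketch runs without a live environment; the echoed
# line mimics the client's "Action queued with id: <uuid>" response.
juju() { echo "Action queued with id: 4b26e339-7366-4dc5-92f6-41cb6d07a701"; }

# Queue the action on a unit (hypothetical unit/action names) and strip
# everything up to the last space to recover the action id:
out=$(juju action do etcd/0 health)
action_id=${out##* }
echo "action id: $action_id"

# With a real client you would then poll for the result, e.g.:
#   juju action fetch "$action_id"
# (exact subcommands and flags vary by juju version)
```

A test harness can subprocess this same sequence and assert on the fetched result, which is essentially the stop-gap lazyPower describes until amulet grows native action helpers.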