=== benonsoftware is now known as bdunn
=== zz_CyberJacob is now known as CyberJacob
=== bdunn is now known as benonsoftware
=== benonsoftware is now known as MerryChristmas
=== MerryChristmas is now known as benonsoftware
=== CyberJacob is now known as zz_CyberJacob
[11:46] Where is the "juju environment key" created? I'm trying to help someone who can't 'juju ssh' into any units (Permission denied (publickey)), and it turns out they don't have a juju environment key either locally in their ~/.ssh or listed in the units' authorized_keys.
[11:48] Ah - it uses your existing key (just reading https://jujucharms.com/docs/stable/getting-started - been a while since I've set up a new dev machine)
[12:34] marcoceppi: ping
[12:48] marcoceppi, hey - can we add vivid and wily series to the charms distro? I'd like to publish some charms under non-trusty namespaces for some bundles we're putting together for lxd
[12:51] jamespage: urulama can look if you need that in the store ^
[12:51] rick_h_, ta
[12:51] rick_h_, specifically wily and vivid :-)
[12:54] jamespage: vivid is ok, we'll need to update CS for wily
[12:58] hello
[12:58] I ran "juju destroy-service debci-web-swift", but it stays around as "life: dying", so that I can't re-deploy it
[12:59] (juju 1.22.6 on trusty)
[12:59] what can I do to really kill this service?
[13:01] (the underlying machine is already gone)
=== lukasa is now known as lukasa_away
[13:02] pitti, is there a relationship trapped in error on a remote unit?
[13:02] that would cause the principal service to remain in the topology as life:dying
[13:02] lazypower:
[13:02] relations:
[13:02]   juju-info:
[13:02]   - ksplice
[13:02]   - landscape-client
[13:02] lazypower: I tried to "remove-relation" both after destroy-service, but that didn't help
[13:02] i.e. "juju remove-relation debci-web-swift landscape-client" (and same for ksplice)
[13:03] hmm.
[13:03] I've run into some edge cases where that happens when I destroy the machine out from under the service and relations were still present
[13:03] but that was on 1.22.x, I haven't seen that behavior in 1.24+
[13:03] http://askubuntu.com/questions/365724/juju-remove-units-stuck-in-dying-state-so-i-can-start-over has no answer
[13:03] lazypower: I am on 1.22
[13:03] ah
[13:03] welp
[13:03] The only way I was able to reconcile was by tearing the env down and standing it back up :|
[13:04] oh! https://jujucharms.com/docs/stable/charms-destroy#state-of-
[13:04] lazypower: yeah, I don't really want to do that if I can avoid it
[13:04] yeah, i hope that works
=== lukasa_away is now known as lukasa
[13:04] you'll need to resolve every unit it was related to, to make sure you grab the hook in error
[13:04] $ juju resolved debci-web-swift/0
[13:04] ERROR unit "debci-web-swift/0" not found
[13:04] hm, that doesn't work
[13:05] (nor without /0)
[13:05] try ksplice and landscape-client
[13:05] $ juju resolved ksplice
[13:05] error: invalid unit name "ksplice"
[13:05] $ juju resolved ksplice/0
[13:05] ERROR unit "ksplice/0" is not in an error state
[13:07] hm, so why does it already remove the machine and unit when it complains afterwards that it still haves relations?
[13:07] "has" (urgh)
[13:07] when you pass a --force, it's going to assume you know what you're doing and force it.
[13:08] there's an inconsistency in the env now w/ those relations and no machine/service under it, and I believe this is due to some oddity of how it was done.
[13:08] In 1.24+ this has been resolved
[13:08] (I didn't pass --force)
[13:08] lazypower: ah, good to know it's resolved in later versions
[13:09] I "resolved" every other instance of ksplice and landscape-client now, and yay, it's gone
[13:09] that's good news :)
[13:09] one of those relations was in an error state, keeping the service definition around in the environment
[13:10] lazypower: indeed, the ksplice subordinate on a completely different unit was in agent-state-info: 'hook failed: "config-changed"'
[13:11] lazypower: thank you!
[13:12] any time :)
[13:17] jamespage: ack, will update shortly
[13:17] marcoceppi, ta
[13:19] jamespage: vivid and wily added
[13:19] marcoceppi, awesome-o!
[13:20] urulama, how long will the charm-store bits take for wily?
[13:20] urulama, (no sudden rush but would like to get something up this week)
[13:24] jamespage: we're in the middle of deployment of all jujucharms.com services ... but that update should be small enough. i'll try to squeeze it in this week
[13:24] urulama, ta
[13:24] jamespage: but before that, you're free to use vivid charms
[13:24] urulama, ack - thanks
[13:30] jamespage: fyi, the output of new bundle deployment (openstack-base) with bundles supported in core: http://pastebin.ubuntu.com/12513350/
[13:31] urulama, all good then?
[13:32] jamespage: yes, just showing how the new juju deploy "bundle" output will look
[13:32] \o/
[13:34] urulama: that output looks SO AMAZING
[13:35] any idea when that will be available?
[13:35] 1.26
[13:35] marcoceppi: ^
[13:35] whoa, that *does* look nice
[13:35] frankban: ^
[13:36] urulama, looking at this, is it also idempotent? so I can deploy 2 bundles with the same service, and it just reconciles between what's deployed and what's declared?
[13:36] cool
[13:36] lazypower: yes
[13:36] hot diggity dog that's awesome
[13:36] lazypower: it does its best
[13:36] lazypower: look at the second call ... "reusing ..."
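For the record, the recovery sequence pitti landed on above can be sketched as a few commands (a hedged sketch for juju 1.x as used here; the unit numbers shown are hypothetical, since the failing subordinate units were not named in the conversation):

```shell
# Sketch only, for juju 1.x (1.22 here): a service stuck in "life: dying"
# after destroy-service is usually held up by a related unit whose hook
# is in an error state.

# 1. Find any unit whose agent-state-info shows a failed hook:
juju status --format=yaml | grep -B 2 'hook failed'

# 2. Mark each failed unit as resolved (repeat per unit in error;
#    unit numbers here are made up for illustration):
juju resolved ksplice/3
juju resolved landscape-client/1

# 3. Once no related unit is left in an error state, the dying service
#    drops out of the environment; confirm with:
juju status debci-web-swift
```

In 1.24+ this manual cleanup reportedly isn't needed.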
[13:36] lazypower: from line 132
[13:36] * lazypower dances a happy dance
[13:36] i see it
[13:37] that's very very nice. I can't wait to use that
=== lazypower changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
=== marlinc_ is now known as marlinc
=== lukasa is now known as lukasa_away
=== skaro is now known as Guest44591
[17:49] cory_fu: If I wanted to build an interface for the reactive framework, where would I start?
[17:52] mbruzek: The docs (http://pythonhosted.org/charms.reactive/#relation-stubs) are pretty complete, but you likely want to start from one of the examples on http://interfaces.juju.solutions/
[17:53] mbruzek: The pgsql one is the most complete example, I think
[17:54] mbruzek: Though the provides.py in mysql might be slightly easier to follow
[17:56] cory_fu: These are all new to me and hard to follow. @not_unless?
[17:56] From the docs: "Assert that the decorated function can only be called if the desired_states are active."
[17:57] That one is actually entirely optional and is more intended to make the code easier to read and inspect.
[17:57] mbruzek: http://pythonhosted.org/charms.reactive/charms.reactive.decorators.html will of course be very helpful
[17:58] how is not_unless different than when?
[17:58] And then the other docs page you'll care about for creating a relation stub is http://pythonhosted.org/charms.reactive/charms.reactive.relations.html
[17:59] mbruzek: not_unless does not trigger the handler.
[17:59] It is nothing more than an assertion
[17:59] http://pythonhosted.org/charms.reactive/charms.reactive.decorators.html#charms.reactive.decorators.not_unless
[17:59] Again, for getting started, you can probably just ignore not_unless
[18:00] Ok
[18:01] mbruzek: I'm happy to answer questions and help you get started, but I am still sick and sort-of swapping today, so I might be a bit slow at times to reply
[18:02] cory_fu: I didn't know you were out today; I saw you responding in other channels so I figured you were working
[18:02] But feel free to ping me if you have questions
[18:02] Let me RTFM and get back with questions
[18:02] Yeah, like I said, "sort-of swapping." :p
[18:05] cory_fu, looks like it got the majority of us
[18:06] Damn
[18:07] yeah, half of eco is down with this gnarly bug
[18:07] *over half
[20:00] quit
[20:00] exit
[22:01] I was logged into a juju machine running one of the ceph services and it issued the shutdown command
[22:01] is there a way to start it back up?
[23:31] Slugs_: shutdown on the machine?
[23:32] lazypower: how was the Charm summit?
[23:34] marcoceppi: yes
[23:34] Slugs_: is it a cloud instance?
[23:34] openstack private cloud single install
[23:35] sorry for the lack of information
[23:35] Slugs_: I don't understand, so what was the machine? A KVM? An openstack instance? A machine in MAAS?
[23:35] Slugs_: basically, you just need to "power" that machine back on
[23:35] Slugs_: juju should resume from there
[23:36] ah ok, so juju has no control to power it on
[23:36] i need to do that from somewhere else
[23:37] Slugs_: yes, juju is just a series of agents running on a machine; it does things like starting and stopping instances, but only as part of asking the substrate (aws, openstack, maas, etc) for a machine, and stopping machines as part of removing them
[23:38] Slugs_: having a juju machine on/off command isn't a terrible idea, we've just not had anyone ask for it yet ;)
[23:38] i see to ask out of the box questions, however this is from lack of knowledge
[23:39] s/see/seem
[23:41] Slugs_: yeah, so juju talks to providers (aws, maas, etc) to get machines, then it does everything it needs to on top of them but doesn't really interact with the providers much outside of "gimme machine" and "get rid of machine"
[23:41] yes this makes sense
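Since juju itself has no power-on command, the "power that machine back on" advice above means going through the substrate. A minimal sketch for Slugs_'s OpenStack private cloud, assuming the nova CLI is configured and the instance id shown is a placeholder:

```shell
# Sketch only: juju will not start a stopped machine; ask the cloud
# provider to do it, and the juju agents reconnect on their own.

# Find the instance backing the juju machine (instance-id appears in
# the machines section of juju status):
juju status --format=yaml | grep instance-id

# Start it with the OpenStack CLI (instance id below is hypothetical):
nova start <instance-id-from-juju-status>

# Watch the machine agent come back to "started":
juju status
```

On MAAS or AWS the middle step would instead be the corresponding power-on call for that provider.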