[02:30] Anyone got a good example of something that consumes an interface layer like https://github.com/juju-solutions/interface-http or https://github.com/juju-solutions/interface-juju-info ?
[02:30] The documentation on interface layers says they're the most misunderstood part, and I'm still misunderstanding even after reading the doc 4-5 times.
[03:20] blahdeblah: possibly, what are you misunderstanding?
[03:20] marcoceppi: How I can consume the private-address (and add public-address) in https://github.com/juju-solutions/interface-juju-info, for starters.
[03:21] I couldn't find anything that showed how to get at the things defined in auto_accessors
[03:24] blahdeblah: so, I can whip up an example
[03:25] marcoceppi: Don't go to any special trouble. I was just hoping to find some examples of charms which used either of those layers so I could get a feel for how they're used.
[03:25] blahdeblah: private-address is the only thing available in auto-accessors for juju-info
[03:25] also, this layer has a few typos.
[03:25] marcoceppi: I know; I wanted to add public-address as well
[03:26] blahdeblah: ah, I see
[03:26] marcoceppi: yeah - I might have fixed them in my fork
[03:26] https://github.com/paulgear/interface-juju-info
[03:26] I started playing with it and then got stuck on how to actually use it.
[03:34] blahdeblah: https://gist.github.com/marcoceppi/fb911c63eac6a1db5c649a2f96439074
[03:34] marcoceppi: Something else that confused me was that interface uses scopes.GLOBAL, but the doc says "All connected services and units for this relation will share a single conversation. The same data will be broadcast to every remote unit, and retrieved data will be aggregated across all remote units and is expected to either eventually agree or be set by a single leader." So it seemed to me that there wouldn't be an opportunity to get the
[03:34] private-address (or public-address, assuming I've done that right) from every unit.
[03:34] ^ hope that didn't get cut off
[03:35] blahdeblah: right, scopes.GLOBAL is wrong, you'd want scopes.UNIT
[03:35] marcoceppi: OK; that gist is pretty simple
[03:36] So then if I want to gather a list of public-address values from every unit, that would need to be added per my branch, then each unit would need to send the gathered data across the peer relation to the leader?
[03:38] blahdeblah: so there are a few things - one are you sure you want to use the juju-info interface? or are you creating a new interface?
[03:39] marcoceppi: I can't see any reason not to use juju-info, as long as it works. I just want to gather a list of all the public-addresses of all the units associated.
[03:39] There might be a better way to do that.
[03:40] e.g. does the subordinate charm automatically get the public-address of the associated primary charm? If so, I may be able to just ask the peer relation for it.
[03:41] blahdeblah: it does not.
[03:41] blahdeblah: so you don't want to add this to juju-info
[03:41] marcoceppi: So what do I want? :-)
[03:42] blahdeblah: one min otp
[03:42] no worries
[03:47] blahdeblah: so what's your end goal?
[03:47] blahdeblah: because you can't just add features to an interface
[03:47] esp the juju-info interface
[03:47] marcoceppi: juju-info does actually provide public-address, from what I've been able to tell
[03:47] I could be wrong, though
[03:48] marcoceppi: End goal is to resurrect the spirit (if not the flesh) of lazyPower's DNS-Charm and implement the autogenerated part.
[03:48] (as well as a provider for the Dynect DNS API, and do it all with appropriate layers & interfaces)
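For reference, the requires side of an interface layer in this style looks roughly like the sketch below, using the scopes.UNIT that gets recommended a few lines up. The class, state, and method names are illustrative only, not the actual interface-juju-info code.

```python
# Rough sketch of a requires-side interface layer (charms.reactive API).
# Names are illustrative; this is not the interface-juju-info source.
from charms.reactive import RelationBase, scopes, hook


class JujuInfoRequires(RelationBase):
    # One conversation per remote unit, so per-unit data stays separate.
    scope = scopes.UNIT

    # Generates a private_address() accessor that reads the remote
    # unit's 'private-address' relation setting.
    auto_accessors = ['private-address']

    @hook('{requires:juju-info}-relation-{joined,changed}')
    def changed(self):
        conv = self.conversation()
        if conv.get_remote('private-address'):
            conv.set_state('{relation_name}.connected')

    @hook('{requires:juju-info}-relation-{departed,broken}')
    def departed(self):
        self.conversation().remove_state('{relation_name}.connected')

    def addresses(self):
        # Aggregate the address of every remote unit, one conversation each.
        return [conv.get_remote('private-address')
                for conv in self.conversations()]
```

A consuming charm layer would then decorate a handler with @when('juju-info.connected') and call addresses() on the relation instance it is handed.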
[03:49] blahdeblah: well, technically, yes, because of spaces in juju
[03:49] blahdeblah: but practically, unit-get public-address will be the same on the primary and the subordinate
[03:49] marcoceppi: I don't understand that "technically..." part
[03:49] blahdeblah: ohh, this sounds cool - though I wish juju would just grow dns natively
[03:50] I actually think charms are a better place for it
[03:50] Because then they're user-customisable and don't need compiled code
[03:50] blahdeblah: because we have net-spaces in Juju, I think public-address was added since you can bind a netspace to the relation
[03:51] blahdeblah: let me check something. if it exists in 1.25 and 2.0 then it's safe to add to the relation
[03:51] marcoceppi: I have no plans to target 2.0 yet, but I guess I do want forward compatibility with it
[03:52] blahdeblah: well the idea is - if it exists in 1.25 and 2.0 (public-address in relation data) I don't see why we can't have it in the juju-info interface layer
[03:52] yeah - if it works I'll drop you a PR from my branch
[03:55] marcoceppi: The primary goal is to have DNS work fully automatically given a very small amount of configuration on a subordinate charm, and have the elected leader update Dynect without the end user having to touch anything when you add or remove units.
[03:56] marcoceppi: And a secondary goal of me actually understanding how layers & interfaces work.
[03:56] blahdeblah: first goal sounds fucking awesome
[03:56] I'm kind of more motivated by the 2nd goal ;-)
[03:57] (although, I do hate editing DNS records, too ;-)
[04:00] blahdeblah: sadly, in 1.25, only private-address exists
[04:01] marcoceppi: So does that mean there's no way to get at the public-address at all? Because it's been stored in juju and reported in juju status since forever?
[04:01] blahdeblah: but, you could/should create a peer relation, and each unit can run `unit-get public-address` and the leader can just get those addresses
[04:01] That's possibly marginally easier
[04:01] blahdeblah: the subordinates live on the same host, and so unit-get will work as if you were on the primary
[04:01] Is that exposed in charmhelpers as well as bash?
[04:02] blahdeblah: even if juju-info had public-address, you'd still have to use a peer relation
[04:02] blahdeblah: yup
[04:02] blahdeblah: scope: container is a super special type of relation, it basically means that communication will only happen between this unit and its counterpart, unlike standard relations
[04:02] blahdeblah: you've always needed a peer ;)
[04:03] Yeah - I knew I would. Otherwise there would be no way for any one unit to know about all the others.
[04:04] blahdeblah: https://pythonhosted.org/charmhelpers/api/charmhelpers.core.hookenv.html#charmhelpers.core.hookenv.unit_get
[04:04] yeah - found that
[04:04] marcoceppi: I didn't follow that last part about scope: container, though.
[04:06] blahdeblah: it just reaffirms what you've said before, that you've always needed a peer. The reason is because of what scope:container means. Typically, in juju, every unit of each side of the relation has a channel of communication with each other. scope:container does not, it only has a channel of communication with the unit it is attached to physically
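A sketch of the approach described above: each unit publishes its own `unit-get public-address` on a peer relation and the elected leader aggregates the list. The 'dns-peers' interface and its set_address()/get_addresses() helpers are hypothetical, stand-ins for whatever peer layer the charm ends up using.

```python
# Sketch only: 'dns-peers' and its set_address()/get_addresses() helpers
# are a hypothetical peer interface, not an existing layer.
from charms.reactive import when
from charmhelpers.core.hookenv import unit_get, is_leader


@when('dns-peers.joined')
def publish_address(peers):
    # Every unit (including a subordinate, which shares the host with its
    # primary) reports its own public address over the peer relation.
    peers.set_address(unit_get('public-address'))


@when('dns-peers.joined')
def update_dns(peers):
    # Only the elected leader acts on the aggregated list.
    if not is_leader():
        return
    addresses = peers.get_addresses()
    # ...hand 'addresses' to the DNS provider (e.g. Dynect) here...
```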
[04:06] blahdeblah: so you couldn't, from one subordinate unit, via juju-info, query the private-address of another unit's primary service unit
[04:07] blahdeblah: whereas, in non scope: container situations you could
[04:07] OK - makes sense
[04:07] Thanks a lot for your help; I think that's clarified it to the point where I might be able to make something vaguely coherent next week when I work on this. :-)
[04:09] marcoceppi: ^ Just in case you looked away somewhere :-)
[04:17] blahdeblah: cool, feel free to ping if you have questions!
[04:17] marcoceppi: much appreciated :-)
=== danilos` is now known as danilos
=== rogpeppe3 is now known as rogpeppe
[07:21] could anyone recommend one or more charms that follow the very latest patterns?
[07:21] as a newbie it's hard to know which charms are up to date and which are using old ways
[07:30] simonklb: generally i would suggest looking at the openstack and bigdata charms
[07:30] simonklb: and then look into layers, as that is newer
[07:30] i'm not aware of any specific charms that are a better example, hopefully someone else has some ideas, maybe marcoceppi
[07:43] lathiat: thanks!
[07:52] is it possible to combine @when and @when_not ?
[07:54] nvm, it is, neat :)
=== jacekn_ is now known as jacekn
[08:26] gnuoy, hmm long queues upstream to get stuff landed is not awesome...
[08:26] was the api-endpoints command removed from juju in a recent update?
[08:27] I'm getting: ERROR unrecognized command: juju api-endpoints
[08:43] urulama, hey - around? I'm struggling with publishing bundles to the charm store...
[08:44] jamespage: hey
[08:45] jamespage: what error do you get?
[08:45] urulama, well I don't get an error - using the push/publish commands
[08:45] charm push . cs:~openstack-charmers-next/openstack-base-trusty-mitaka
[08:45] worked ok
[08:45] and so did
[08:45] charm publish ~openstack-charmers-next/openstack-base-trusty-mitaka-0
[08:46] however I don't see them on jujucharms.com
[08:46] jamespage: it's private, set only to ~openstack-charmers-next, if you haven't granted it to "everyone"
[08:46] urulama, oh I thought read public was the default
[08:46] urulama, let me fix that
[08:47] jamespage: publish doesn't change permissions
[08:48] urulama, okies...
[08:49] jamespage: ok, i see the charm now
[08:49] sorry, bundle
[08:49] uf, we need to add series to bundles to avoid this, i know the work has already started
[08:50] urulama, yeah - setting --acl read everyone
[08:50] urulama, I just pushed an update which did push ok but I got a
[08:50] ERROR cannot add extra information: unauthorized: access denied for user "james-page"
[08:50] at the end of the push operation
[08:51] hm, thought that was resolved already
[09:04] charm test seems to be removed as well, or am I missing something?
[09:04] ERROR unrecognized command: charm test
[09:08] urulama, hmm
[09:08] urulama, so I'm seeing dupes of bundles I've switched from bzr ingestion to direct publishing here:
[09:08] https://jujucharms.com/u/openstack-charmers-next/
[09:17] urulama, I also see a lot of 'access denied' messages for push and publish which go away if I keep re-running the commands...
[09:19] jamespage: it seems there's a charm store unit in production that is misbehaving, and when ha-proxy switches to it, you get an error. we'll ask webops to solve it
[09:19] urulama, thanks!
[09:22] jamespage: is ~openstack-charmers-next/bundle/openstack-base-43 published and ~openstack-charmers-next/bundle/openstack-base-40 ingested?
[09:23] urulama, yes
[09:24] jamespage: would you please do "charm publish ~openstack-charmers-next/bundle/openstack-base-43" ... i'd like to see if ingestion changes that pointer from revision 43 to 40 every time it runs
[09:25] urulama, ok done
[09:28] jamespage, giving these three the once over when you have a sec?
[09:28] https://review.openstack.org/#/c/307643/
[09:28] https://review.openstack.org/#/c/307564/
[09:28] https://review.openstack.org/#/c/307387/
[09:28] s/giving/would you mind giving/
[09:28] gnuoy, yes
[09:28] ta
[09:32] jamespage: yep, that's the case. ingestion overrides manual publishing. that'll be a high priority bug. https://github.com/CanonicalLtd/jujucharms.com/issues/250
[09:32] urulama, awesome
[09:32] thanks
[09:36] urulama, would removing the original branches help in this case?
[09:36] jamespage: yes it would
[09:37] * jamespage goes to do that then
[09:40] jamespage: you'll have to do "charm publish" again to reset the pointer to the revision that you want
[09:41] urulama, ac
[09:41] k
[09:48] gnuoy, those all look reasonable
[09:48] gnuoy, I see you have functional tested them - thank you
[09:48] that's how I roll
[09:49] gnuoy, I don't see the need to wait for the full recheck on these ones - amulet does not do upgrade tests anyway
[09:49] jamespage, good point
[09:50] gnuoy, ok landing when the queue catches up...
[09:50] ta
[09:51] gnuoy, queue appears to have caught up - all last night's approved changes have now landed...
[09:58] I use amulet to do upgrade tests by telling it to deploy an old known-good revision then running 'juju upgrade-charm --switch' myself
[09:59] stub, fwiw those changes were around upgrading the packages rather than the charm itself but thanks for the pointer
[10:04] Does anyone here know if there is documentation on getting juju 2.0 to work with MAAS 2.0
[10:05] Garyx, that's known not to work right now - dev team are working on it
[10:05] Juju 2.0 with MAAS 1.9 is OK
[10:06] oks, been looking around where it actually says that juju 2.0 is not working with maas 2.0 so been banging my head against a rock a little with that one.
[10:07] I'm going to use an action to upgrade packages, and I think Amulet has grown action support recently.
[10:07] But I guess I'd need two ppas to write an integration test for that.
[10:25] beisner, for some reason juju 2.0 beta-1 was the default juju on lescina
[10:25] I've pushed it back to 1.25.x for now
[10:58] anyone else here running juju 2.0 beta4 and having troubles with the testing?
[10:59] it seems it still looks for environment.yaml but environments are called models now
[11:01] ah I see it's still under development https://github.com/juju/amulet/issues/116
[11:12] simonklb: As far as I can tell, testing does not work with 2.0 at the moment as amulet depends on juju-deployer and juju-deployer is only working against the 1.x series.
[11:12] simonklb: There may be unpackaged versions around but I don't know how much further you will get.
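Going back to stub's upgrade-test approach above, an amulet (1.x era) test along those lines looks roughly like this; the charm name, revision, and timeouts are placeholders, and the upgrade step is shelled out because amulet itself has no helper for it as described.

```python
# Sketch of an amulet-driven upgrade test (juju 1.x era); names are placeholders.
import subprocess

import amulet

d = amulet.Deployment(series='trusty')
# Deploy a known-good published revision first.
d.add('mycharm', charm='cs:trusty/mycharm-12')
d.setup(timeout=900)
d.sentry.wait()

# Then switch it in place to the locally built charm and re-run the checks.
subprocess.check_call(['juju', 'upgrade-charm', 'mycharm',
                       '--switch', 'local:trusty/mycharm'])
d.sentry.wait()
# ...assert the workload still behaves as expected here...
```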
[11:13] actually amulet master isn't even up to snuff: https://github.com/juju/amulet/blob/master/amulet/helpers.py#L168
[11:15] simonklb: That code path may not be called if the version is being sniffed (although that would still stick you with the juju-1 vs juju executable issue)
[11:17] yea it's unfortunate, because I was told to start with juju 2.0 right away - but without testing it's going to be difficult
[11:26] simonklb: Yes, you are stuck with just unit testing (which means I'm stuck on 1.x, since I have a lot of integration tests to keep happy)
[11:26] simonklb: I think the problem is expected to be fixed by the time you get to writing integration tests :)
[11:27] well it's also a convenient way to test your charm while you're writing it, instead of deploying it in a real environment
[11:28] maybe there is some way to make that easy with your normal juju deployment too?
[11:28] integration testing isn't convenient due to the setup and teardown times, but I agree you want it once you get to a certain point.
[11:28] how would you go about testing simple relations?
[11:28] for example fetching host and port from mysql
[11:30] You could mock that sort of thing, but I agree that is best as an integration test against a real model. But first, you can do unit tests of the helpers and logic used by your relation hooks which can help the slow integration test work go much smoother.
[11:31] the problem is that I'm not entirely sure what every relation returns, maybe that is documented somewhere?
[11:32] The interfaces are documented in the charms that implement them, and it is of variable quality. The reactive framework and relation stubs are trying to address that problem, but it is early days for that.
[11:34] 'juju debug-hooks' is great for exploring, as you can poke the relation interactively.
[11:36] ah, thanks!
[11:37] right now I'm looking to build a reactive charm, from what I've gathered you're supposed to use the @when and @when_not decorators to execute when the charm enters different states
[11:38] however, the normal install, start and stop hooks etc still look like they are required, right?
[11:39] You usually have an @hook('install') to kick things off, yes. The trick is to keep the @hook decorated handlers to a minimum, and hand off to the @when decorated handlers as soon as possible.
[11:40] yea, then I feel I'm on the right track at least :)
[11:40] @when('config.changed') has recently replaced many uses of @hook('config-changed') too
[11:40] nice nice
[11:40] but to get the @hook decorator, do I still need to get the charm helpers package?
[11:41] or can I get that from the reactive package somehow?
[11:41] No, you import the decorator from charms.reactive
[11:41] great
[11:42] might be a good idea to include that in the charm create boilerplate, since they automatically include @when('X.installed') but not @hook('install')
[11:42] I haven't seen that boilerplate yet.
[11:43] simonklb: oic. Yes, that boilerplate doesn't need the @hook
[11:44] I assume it's this one: https://github.com/juju-solutions/template-reactive-python
[11:44] so I can run it without the install hook?
[11:44] The handler is invoked because the state is not set (@when_not('foo.installed')), and at the end of the handler it sets the foo.installed state so it doesn't get reinvoked a second time.
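Putting that advice together, a minimal reactive layer ends up looking roughly like the sketch below; the 'myapp' state names and the install/configure bodies are placeholders, not code from any particular charm.

```python
# Minimal reactive-charm sketch; 'myapp' states and handler bodies are placeholders.
from charms.reactive import when, when_not, set_state
from charmhelpers.core.hookenv import config, status_set


@when_not('myapp.installed')
def install_myapp():
    # Install the payload here (apt, pip, resource fetch, ...),
    # then flag the state so this handler doesn't run again.
    set_state('myapp.installed')
    status_set('maintenance', 'installed, waiting for configuration')


@when('myapp.installed', 'config.changed')
def configure_myapp():
    cfg = config()
    # Render config files from cfg and (re)start the service here.
    status_set('active', 'ready')
```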
[11:44] yes, you can run without the install hook
[11:45] I think I had some error like "missing hook: install" or similar when I deployed it before
[11:45] I'll give it another try
[11:46] You need to 'charm build' it before it can be deployed. That adds a lot of the missing boilerplate, by pulling in the layers you declare in layers.yaml
[11:47] that might have been it, because now it runs :)
[12:59] simonklb: https://jujucharms.com/docs/devel/developer-getting-started and you should check out kubernetes charm or any of the bigdata charms on https://bigdata.juju.solutions
[13:01] marcoceppi: any eta on juju 2.0 support in amulet?
[13:01] simonklb: it should be there, tvansteenburgh has the latest on that
[13:01] marcoceppi: it looks like it's still looking for environments and not models
[13:02] jamespage, gnuoy - wow upstream merge queue/activities seem to be taking many hrs
[13:02] is there any newer version than the master on github?
[13:02] simonklb: you need the latest deployer and jujuclient
[13:04] tvansteenburgh: where can I get them? ppa or source?
[13:04] simonklb: https://launchpad.net/~ahasenack/+archive/ubuntu/juju-deployer-daily
[13:04] simonklb: https://launchpad.net/~ahasenack/+archive/ubuntu/python-jujuclient
=== scuttle|afk is now known as scuttlemonkey
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
[13:47] tvansteenburgh: thanks
[13:52] tvansteenburgh: still getting the same error - it's trying to fetch the environment.yaml from the old juju home path
[13:53] looking at the amulet code - this seems to be where I end up https://github.com/juju/amulet/blob/master/amulet/helpers.py#L168
[13:53] I also saw that there was an issue for this: https://github.com/juju/amulet/issues/116
[13:57] simonklb: paste the traceback please
[13:58] simonklb: i see the problem
[13:58] simonklb: my juju2 branch was never merged :/
[13:58] simonklb: that is features missing in amulet for juju 2.0, not amulet supporting juju 2.0
[13:59] simonklb: okay it was just merged
[13:59] yea sorry for being unclear
[13:59] tvansteenburgh: weird, I reviewed it and never merged it
[13:59] simonklb: please repull master and try again
[13:59] marcoceppi: np
=== natefinch is now known as natefinch-afk
[14:09] tvansteenburgh: seems to be working now, thanks!
[14:43] Can I get a review of CLI for layer options: https://github.com/juju-solutions/layer-basic/pull/58
[14:56] jamespage: if you want to get rid of revision 40 in /u/openstack-charmers-next (which is marked as current development charm), you'll have to do "charm publish ~openstack-charmers-next/bundle/openstack-base-43 --channel=development" which will stop listing revision 40 as alternative development charm
[15:02] urulama, ack
[15:03] same for any duplicated stuff that you want to get rid of (not duplicate for development revisions)
[15:13] urulama, I see something niggly for charms published with series in metadata
[15:14] urulama,
[15:14] cs:~openstack-charmers-next/ntp
[15:14] is one of those; but I need to write bundles which are backwards compat with 1.25.5
[15:14] which needs cs:~openstack-charmers-next/xenial/ntp
[15:14] which is resolvable, but I can't push a bundle with that in it
[15:16] jamespage: on the phone, gimme 10min
[15:22] urulama, np
[15:33] rockstar, we decided that placing control plane components in lxd containers on compute nodes running nova-lxd was a no-no right?
[15:33] jamespage: yes, at least for right now.
[15:33] rockstar, okay
[15:33] If we want to spend the time to fix that bug, we can probably support that.
[15:34] rockstar, later
[15:35] rockstar, crazy thing is that it works fine with lxc containers on 1.25
[15:36] cause lxc != lxd from a storage perspective...
[15:36] As I recall, the problem was more of a charm specific problem, right? We're messing with the storage underneath AFTER we created some containers.
[15:41] good morning!
[15:41] is it possible to migrate the state server to a different host?
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
=== scuttlemonkey is now known as scuttle|afk
=== redir_afk is now known as redir
[16:09] kwmonroe, kjackal: Did you guys see my PR for the puppet layer? https://github.com/juju-solutions/layer-puppet/pull/2
=== natefinch-afk is now known as natefnich
=== natefnich is now known as natefinch
[16:11] on it cory_fu
[16:12] cory_fu you are an artist! Nice work!
[16:14] layer-puppet-peeps: I've already worked out a lot of the kinks ... see here -> https://github.com/jamesbeedy/layer-puppet-agent
[16:15] Dude. How did we miss that.
[16:17] bdx: Does that layer support masterless puppet?
[16:19] it doesn't currently, although it could very easily .... I should add a flag for masterless
[16:19] bdx: you're in portland right?
[16:19] http://www.devopsdays.org/events/2016-portland/
[16:19] jcastro: yea
[16:20] jcastro: crazy! I'm all about it!
[16:21] ok, holla at me if there's like a hotel cost or something
[16:22] jcastro: perfect, I could use a night out on the town ... the site is kind of wimpy though ... where/how do I sign up?
[16:23] ooooh nm 'propose'
[16:23] got it
[16:52] charmers: what am I doing wrong here -> https://github.com/jamesbeedy/interface-memcache/blob/master/requires.py
[16:58] bdx: what error are you getting/what are you expecting?
[17:08] marcoceppi: I am not even getting an error, I'm just not getting anything returned from memcache_hosts()
[17:09] beisner, hmmm
[17:10] beisner, working out the release procedure with git/gerrit
[17:10] I was kinda expecting to delete the stable branch and then re-create it
[17:10] but apparently deleting branches is beyond my powers...
[17:10] * jamespage thinks...
[17:18] jamespage: can you move branches?
[18:00] beisner: jamespage any chance on getting that c-h change merged?
[18:03] anyone know the password/user combo for curtin installs with MAAS? I am trying to diagnose something and I forgot
=== scuttle|afk is now known as scuttlemonkey
[18:11] thedac, this looks ready to land. ready? https://review.openstack.org/#/c/307480/
[18:12] beisner: yes, please
[18:12] marcoceppi: got an error for ya -> http://paste.ubuntu.com/15935177/
[18:12] bdx: interesting
[18:15] marcoceppi: here's where/how I'm using it -> https://github.com/jamesbeedy/layer-sentry/blob/master/reactive/sentry.py
[18:15] https://github.com/jamesbeedy/layer-sentry/blob/master/reactive/sentry.py#L131-142
[18:20] mgz, not sure
[18:21] like a rename?
[18:21] jamespage: yeah, I'd expect the perms on a rename to be the same as delete, but maybe not?
[18:22] jamespage: also, unrelated, I'm interested in any results/issues you have with testing using the 'lxd' name in bundles with 2.0
[18:22] mgz, well that generally worked ok
[18:23] mgz, apart from the fact that juju managing lxd containers and the lxd charm we have for nova-lxd changing storage config underneath it rather exploded...
[18:28] jamespage, yah i've not jumped into how upstream manages release tags and branches. how goes?
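Looping back to the interface-memcache question above: a requires side that aggregates host/port pairs usually walks its conversations, roughly as in the sketch below. This is an illustrative sketch, not the code in that repository.

```python
# Illustrative requires-side sketch for a memcache-style interface;
# not the actual jamesbeedy/interface-memcache code.
from charms.reactive import RelationBase, scopes, hook


class MemcacheRequires(RelationBase):
    scope = scopes.UNIT

    @hook('{requires:memcache}-relation-{joined,changed}')
    def changed(self):
        conv = self.conversation()
        if conv.get_remote('port'):
            conv.set_state('{relation_name}.available')

    def memcache_hosts(self):
        # One entry per remote unit that has published address and port;
        # returns [] until the remote side has actually set its data.
        hosts = []
        for conv in self.conversations():
            addr = conv.get_remote('private-address')
            port = conv.get_remote('port')
            if addr and port:
                hosts.append({'host': addr, 'port': port})
        return hosts
```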
[18:31] gnuoy, trusty-icehouse n-api looks good now on metal; cholcombe we're now back to the block devices things and I've got that caught by the tail on metal atm.
[18:31] beisner, sweet, can i log in and look?
[18:33] cholcombe, http://pastebin.ubuntu.com/15935632/
[18:34] beisner, what's the ceph yaml file look like that you deployed with?
[18:34] cholcombe, http://bazaar.launchpad.net/~ost-maintainers/openstack-mojo-specs/mojo-openstack-specs/view/head:/helper/bundles/baremetal7-next.yaml
[18:35] so it should've just used sdb
[18:35] as a partition of sdb is mounted
[18:35] that's what it does everywhere but trusty-icehouse
[18:35] hmm
[18:36] cory_fu, kwmonroe: Do we want the bigtop plugin to be a subordinate of ANY other charm (juju-info interface) or only the ones implementing the hadoop-plugin interface, or both? (I vote for only the ones having the hadoop-plugin interface)
[18:37] Definitely NOT juju-info. :)
[18:37] Yes to hadoop-plugin
[18:37] Though, if we're going to extend that to support other services, I guess we'll need to rename it from "hadoop" plugin
[18:37] But we can do that down the road
[18:38] beisner, can you pgrep ceph-osd ?
[18:38] that's the function i believe it's calling to determine if it should block on no block devices found
[18:39] empty-handed, no process
[18:40] that's the problem
[18:40] something happened that prevented ceph-osd from starting
[18:40] can you dump the logs?
=== scuttlemonkey is now known as scuttle|afk
[18:54] cholcombe, fwiw if you're after logs, we've got all logs for all units on all jobs
[18:55] yeah we might want to just keep moving on testing and save the logs for me to poke at
[18:55] beisner, something weird seems to be happening with the hardening function
[18:56] so it started working on osdizing the disk and then it seemed to have stopped only to think later that the /dev/sdb is in use and bails
[18:57] beisner, i'm not entirely sure after it formats the disk why it seems to bail and then come back later confused about the disk being in use
[18:58] cholcombe, i'll leave the enviro up till this eve, will be out of pocket this afternoon.
[18:58] beisner, ok thanks
[18:58] beisner, if you need to destroy it go ahead though. I don't want to hold the pipeline up
[18:59] cholcombe, this (bug) is pretty much the place to be, i think all other metallic issues are sorted ;-)
=== redir is now known as redir_lunch
=== CyberJacob is now known as zz_CyberJacob
[19:59] this seems odd, I have set up vlans on my MAAS server and used juju bootstrap to create my first node. However, when I ssh into the node and try and ping all the maas interfaces they are not all pingable
[20:03] beisner, awesome. i'll keep digging
=== redir_lunch is now known as redir
[20:14] marcoceppi: I replied to your last comment on the layer options CLI. I'd like to get a first pass through, but if you think chaining / nested options is important for bash charms, I can add that.
[20:15] cory_fu: we have until oct for this to land, so there's time
[20:15] marcoceppi: Eh?
[20:15] IBM needs this ASAP
[20:15] cory_fu: this is totes a 2.2 milestone, yeah?
[20:15] OH
[20:15] And it's in the base layer
[20:15] IT'S A LAYER
[20:15] :)
[20:19] OK so I am going to rule out juju being the source of my network drama by doing a basic deploy in MAAS
[20:21] hi
[20:21] hi everyone
[20:22] i need help regarding ceph-osd
[20:22] i successfully deployed openstack earlier
[20:22] but yesterday i switched off my nodes
[20:22] now today when i powered them up
[20:22] all components came up successfully
[20:22] but ceph-osd is showing error
[20:23] hook failed "update status"
[20:23] freak: can you try to `juju resolved ceph-osd/#` where # is the number of the failed unit?
[20:23] freak: also, /var/log/juju/unit-ceph-osd-#.log would be helpful
[20:24] ok let me try that...although from juju-gui i also clicked resolve options but it didn't work..ok this time i try from cli
[20:24] and also share log
[20:34] marcoceppi: I found the problem with my unit ids!
[20:34] bdx: \o/
[20:35] marcoceppi: when my memcache-relation-joined hook runs, it errors 'relation not found'
[20:36] marcoceppi: it runs `relation-list --format=json -r memcache:93`
[20:36] marcoceppi: when I debug-hooks into the instance and run `relation-list --format=json -r memcache:95` it succeeds
[20:37] so what am I to gather from this, that the memcached charm is setting an incorrect relation id?
[20:38] marcoceppi i executed the command juju resolved, here is the output http://paste.ubuntu.com/15937564/
[20:38] it says already resolved
[20:38] but my unit is still in hook failed update status
[20:42] freak: the log should give us a bit more info, can you pastebin it please? should be located in /var/log/juju/unit-ceph-osd-0.log
[20:43] inside the ceph-osd machine, that is
[20:43] ok jose let me take the log
[20:43] thank you :)
[20:50] dear jose/marcoceppi here is the log file output http://paste.ubuntu.com/15937716/
[20:51] I have to leave but maybe Marco can give you a hand later :)
[20:52] ok no issue
=== natefinch is now known as natefinch-afk
[22:01] wtf is going on in juju-dev
[22:01] someone kick that guy
[22:05] bdx, we are working on it
[22:09] beisner, i think if we revert c94e0b4b on ceph-osd we should be in the clear. I think I have an idea for a fix but there's no way I can code it and complete it by tonight
[22:10] beisner, do you have a bug for that ceph-osd fail? I can link the revert task to it
[22:12] cholcombe plz see the spreadsheet
[22:13] beisner, ok. I don't see anything for ceph-osd though
[22:14] i'll write up a new bug if that's ok?
[22:23] beisner, i put in the revert but of course it has a merge conflict lol
[22:24] cholcombe when can i expect this ceph-osd issue to be resolved
[22:24] cholcombe. Mojo tab. Trusty-Icehouse
[22:27] freak, well I can revert the change and we should be fine. However the feature is going to need a little more work. I didn't realize that ceph would fail the ceph osd crush create-or-move command if a bucket didn't exist
[22:28] ok thanks cholcombe
[22:29] ceph crush internally i believe has all the default buckets even if you don't specify them but on firefly the command is failing
[22:47] anyone know if there is an easy way to have multiple ceph zones with juju charms and visible in horizon?
[22:50] firl, what do you mean by zones?
[22:50] cholcombe: availability zones
[22:50] like one zone for ssd, one zone for sata spin, one for scsi
[22:51] ah yes. There's a patch that we're going to work on for that but it's not ready yet
[22:51] firl, if you'd like to give it a shot to write the patch yourself i'd be happy to guide you
[22:52] firl, what i mean is I haven't started on that patch yet but I'm happy to get more help
[22:53] gotcha, I don't have the cycles right now :( just didn't know if it was available yet.
[22:54] i have a juju action that I wrote to make new pools ( availability zones )
[22:54] nice!
[22:54] so if you had a crush rule that was just ssd's or just spinners it could use that when creating the pool
[22:54] i think i wasn't properly understanding what you meant by az. In ceph speak i think you're referring to a pool
[22:55] i would be ok with different pools, or even different servers servicing Openstack Availability zones
[22:55] firl, that action is part of ceph-mon-next and should land in a few days in stable once 16.04 is released
[22:56] you'll have to hand-make the crush rule for now but in the future the charm will have support to create custom rules
[22:56] firl, https://github.com/openstack/charm-ceph-mon/blob/master/actions/create-pool
[22:57] https://github.com/openstack/charm-ceph-mon/blob/master/actions.yaml#L49 profile name is what you're looking to pass in
[22:57] oh i see
[22:57] there's no reason you couldn't deploy the same charm twice with a different name for another ceph cluster
[22:58] yeah, but not sure how the relationships would work or show up
[22:59] i'm not either. i haven't tried that yet
[23:00] but I think you are right, having named pools also show up would be nice
[23:00] they should show up and relate just fine but i'm not certain of it
=== natefinch-afk is now known as natefinch
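For anyone following along: a create-pool style action ultimately reduces to a couple of ceph CLI calls, roughly as sketched below. This is a hand-rolled approximation with placeholder defaults, not the charm-ceph-mon implementation.

```python
# Rough approximation of what a create-pool style action does;
# not the charm's actual action code.
import subprocess


def create_pool(name, pg_num=64, crush_ruleset=None):
    """Create a Ceph pool and optionally pin it to a CRUSH ruleset,
    e.g. one that only selects SSD-backed OSDs."""
    subprocess.check_call(
        ['ceph', 'osd', 'pool', 'create', name, str(pg_num)])
    if crush_ruleset is not None:
        # Pre-Jewel releases (such as firefly) use the crush_ruleset setting.
        subprocess.check_call(
            ['ceph', 'osd', 'pool', 'set', name,
             'crush_ruleset', str(crush_ruleset)])
```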