[02:30] <blahdeblah> Anyone got a good example of something that consumes an interface layer like https://github.com/juju-solutions/interface-http or https://github.com/juju-solutions/interface-juju-info ?
[02:30] <blahdeblah> The documentation on interface layers says they're the most misunderstood part, and I'm still misunderstanding even after reading the doc 4-5 times.
[03:20] <marcoceppi> blahdeblah: possibly, what are you misunderstanding?
[03:20] <blahdeblah> marcoceppi: How I can consume the private-address (and add public-address) in https://github.com/juju-solutions/interface-juju-info, for starters.
[03:21] <blahdeblah> I couldn't find anything that showed how to get at the things defined in auto_accessors
[03:24] <marcoceppi> blahdeblah: so, I can whip up an example
[03:25] <blahdeblah> marcoceppi: Don't go to any special trouble.  I was just hoping to find some examples of charms which used either of those layers so I could get a feel for how they're used.
[03:25] <marcoceppi> blahdeblah: private-address is the only thing available in auto-accessors for juju-info
[03:25] <marcoceppi> also, this layer has a few typos.
[03:25] <blahdeblah> marcoceppi: I know; I wanted to add public-address as well
[03:26] <marcoceppi> blahdeblah: ah, I see
[03:26] <blahdeblah> marcoceppi: yeah - I might have fixed them in my fork
[03:26] <blahdeblah> https://github.com/paulgear/interface-juju-info
[03:26] <blahdeblah> I started playing with it and then got stuck on how to actually use it.
[03:34] <marcoceppi> blahdeblah: https://gist.github.com/marcoceppi/fb911c63eac6a1db5c649a2f96439074
[03:34] <blahdeblah> marcoceppi: Something else that confused me was that interface uses scopes.GLOBAL, but the doc says "All connected services and units for this relation will share a single conversation. The same data will be broadcast to every remote unit, and retrieved data will be aggregated across all remote units and is expected to either eventually agree or be set by a single leader."  So it seemed to me that there wouldn't be an opportunity to get the
[03:34] <blahdeblah> private-address (or public-address, assuming I've done that right) from every unit.
[03:34] <blahdeblah> ^ hope that didn't get cut off
[03:35] <marcoceppi> blahdeblah: right, scopes.GLOBAL is wrong, you'd want scopes.UNIT
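[editor's note: a minimal sketch of the auto_accessors mechanism blahdeblah was asking about — how a relation key like 'private-address' becomes a private_address() accessor on the requires side. charms.reactive is replaced by a toy stand-in class here so the shape is visible without a juju model; a real interface layer would subclass charms.reactive.RelationBase with scope = scopes.UNIT, and the JujuInfoRequires name and addresses are illustrative:]

```python
# Toy stand-in for charms.reactive.RelationBase, just to show how
# auto_accessors turns relation keys into methods. A real interface
# layer subclasses charms.reactive.RelationBase instead.
class FakeRelationBase:
    # each entry becomes a method: 'private-address' -> private_address()
    auto_accessors = []

    def __init__(self, relation_data):
        self._data = relation_data  # stand-in for conversation data
        for key in self.auto_accessors:
            name = key.replace('-', '_')
            # default arg binds the current key at definition time
            setattr(self, name, lambda k=key: self._data.get(k))


class JujuInfoRequires(FakeRelationBase):
    # scope would be scopes.UNIT: one conversation per remote unit,
    # so each remote unit's addresses stay distinct
    auto_accessors = ['private-address', 'public-address']


rel = JujuInfoRequires({'private-address': '10.0.0.7',
                        'public-address': '203.0.113.7'})
print(rel.private_address())  # -> 10.0.0.7
print(rel.public_address())   # -> 203.0.113.7
```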
[03:35] <blahdeblah> marcoceppi: OK; that gist is pretty simple
[03:36] <blahdeblah> So then if I want to gather a list of public-address values from every unit, that would need to be added per my branch, then each unit would need to send the gathered data across the peer relation to the leader?
[03:38] <marcoceppi> blahdeblah: so there are a few things - one, are you sure you want to use the juju-info interface? or are you creating a new interface?
[03:39] <blahdeblah> marcoceppi: I can't see any reason not to use juju-info, as long as it works.  I just want to gather a list of all the public-addresses of all the units associated.
[03:39] <blahdeblah> There might be a better way to do that.
[03:40] <blahdeblah> e.g. does the subordinate charm automatically get the public-address of the associated primary charm?  If so, I may be able to just ask the peer relation for it.
[03:41] <marcoceppi> blahdeblah: it does not.
[03:41] <marcoceppi> blahdeblah: so you don't want to add this to juju-info
[03:41] <blahdeblah> marcoceppi: So what do I want? :-)
[03:42] <marcoceppi> blahdeblah: one min otp
[03:42] <blahdeblah> no worries
[03:47] <marcoceppi> blahdeblah: so what's your end goal?
[03:47] <marcoceppi> blahdeblah: because you can't just add features to an interface
[03:47] <marcoceppi> esp the juju-info interface
[03:47] <blahdeblah> marcoceppi: juju-info does actually provide public-address, from what I've been able to tell
[03:47] <blahdeblah> I could be wrong, though
[03:48] <blahdeblah> marcoceppi: End goal is to resurrect the spirit (if not the flesh) of lazyPower's DNS-Charm and implement the autogenerated part.
[03:48] <blahdeblah> (as well as a provider for the Dynect DNS API, and do it all with appropriate layers & interfaces)
[03:49] <marcoceppi> blahdeblah: well, technically, yes, because of spaces in juju
[03:49] <marcoceppi> blahdeblah: but practically, unit-get public-address will be the same on the primary and the subordinate
[03:49] <blahdeblah> marcoceppi: I don't understand that "technically..." part
[03:49] <marcoceppi> blahdeblah: ohh, this sounds cool - though I wish juju would just grow dns natively
[03:50] <blahdeblah> I actually think charms are a better place for it
[03:50] <blahdeblah> Because then they're user-customisable and don't need compiled code
[03:50] <marcoceppi> blahdeblah: because we have net-spaces in Juju, I think public-address was added since you can bind a netspace to the relation
[03:51] <marcoceppi> blahdeblah: let me check something. if it exists in 1.25 and 2.0 then it's safe to add to the relation
[03:51] <blahdeblah> marcoceppi: I have no plans to target 2.0 yet, but I guess I do want forward compatibility with it
[03:52] <marcoceppi> blahdeblah: well the idea is - if it exists in 1.25 and 2.0 (public-address in relation data) I don't see why we can't have it in the juju-info interface layer
[03:52] <blahdeblah> yeah - if it works I'll drop you a PR from my branch
[03:55] <blahdeblah> marcoceppi: The primary goal is to have DNS work fully automatically given a very small amount of configuration on a subordinate charm, and have the elected leader update Dynect without the end user having to touch anything when you add or remove units.
[03:56] <blahdeblah> marcoceppi: And a secondary goal of me actually understanding how layers & interfaces work.
[03:56] <marcoceppi> blahdeblah: first goal sounds fucking awesome
[03:56] <blahdeblah> I'm kind of more motivated by the 2nd goal ;-)
[03:57] <blahdeblah> (although, I do hate editing DNS records, too ;-)
[04:00] <marcoceppi> blahdeblah: sadly, in 1.25, only private-address exists
[04:01] <blahdeblah> marcoceppi: So does that mean there's no way to get at the public-address at all?  Because it's been stored in juju and reported in juju status since forever?
[04:01] <marcoceppi> blahdeblah: but, you could/should create a peer relation, and each unit can run `unit-get public-address` and the leader can just get those addresses
[04:01] <blahdeblah> That's possibly marginally easier
[04:01] <marcoceppi> blahdeblah: the subordinates live on the same host, and so unit-get will work as if you were on the primary
[04:01] <blahdeblah> Is that exposed in charmhelpers as well as bash?
[04:02] <marcoceppi> blahdeblah: even if juju-info had public-address, you'd still have to use a peer relation
[04:02] <marcoceppi> blahdeblah: yup
[04:02] <marcoceppi> blahdeblah: scope: container is a super special type of relation, it basically means that communication will only happen between this unit and its counterpart, unlike standard relations
[04:02] <marcoceppi> blahdeblah: you've always needed a peer ;)
[04:03] <blahdeblah> Yeah - I knew I would.  Otherwise there would be no way for any one unit to know about all the others.
[04:04] <marcoceppi> blahdeblah: https://pythonhosted.org/charmhelpers/api/charmhelpers.core.hookenv.html#charmhelpers.core.hookenv.unit_get
[04:04] <blahdeblah> yeah - found that
[04:04] <blahdeblah> marcoceppi: I didn't follow that last part about scope: container, though.
[04:06] <marcoceppi> blahdeblah: it just reaffirms what you've said before, that you've always needed a peer. The reason is because of what scope:container means. Typically, in juju, every unit on each side of the relation has a channel of communication with every other. scope:container does not; it only has a channel of communication with the unit it is physically attached to
[04:06] <marcoceppi> blahdeblah: so you couldn't, from one subordinate unit, query via juju-info the private-address of another unit's primary service unit
[04:07] <marcoceppi> blahdeblah: whereas, in non scope:container situations, you could
[04:07] <blahdeblah> OK - makes sense
[04:07] <blahdeblah> Thanks a lot for your help; I think that's clarified it to the point where I might be able to make something vaguely coherent next week when I work on this. :-)
[04:09] <blahdeblah> marcoceppi: ^ Just in case you looked away somewhere :-)
[04:17] <marcoceppi> blahdeblah: cool, feel free to ping if you have questions!
[04:17] <blahdeblah> marcoceppi: much appreciated :-)
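[editor's note: the approach marcoceppi describes above — each unit runs `unit-get public-address`, advertises it on a peer relation, and the leader gathers the results — can be sketched in plain Python. unit_get and the peer relation data are stubbed as a dict here; in a real charm you'd call charmhelpers.core.hookenv.unit_get and read peer settings with relation-get, and all unit names and addresses below are made up:]

```python
# Sketch of leader-side aggregation over a peer relation. The peer
# relation data is faked as {unit_name: unit_settings}; in a real
# hook you'd read it via relation-get / charmhelpers.

def unit_get(key):
    # stand-in for charmhelpers.core.hookenv.unit_get
    return {'public-address': '203.0.113.10'}[key]

def publish_own_address(peer_data, own_unit):
    # each unit advertises its own address on the peer relation
    peer_data[own_unit] = {'public-address': unit_get('public-address')}

def gather_addresses(peer_data):
    # the leader collects every advertised address
    return sorted(d['public-address'] for d in peer_data.values())

peer_data = {
    'dns/1': {'public-address': '203.0.113.11'},
    'dns/2': {'public-address': '203.0.113.12'},
}
publish_own_address(peer_data, 'dns/0')
print(gather_addresses(peer_data))
# -> ['203.0.113.10', '203.0.113.11', '203.0.113.12']
```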
[07:21] <simonklb> could anyone recommend one or more charms that follow the very latest patterns?
[07:21] <simonklb> as a newbie it's hard to know which charms are up to date and which are using old ways
[07:30] <lathiat> simonklb: generally i would suggest looking at the openstack and bigdata charms
[07:30] <lathiat> simonklb: and then look into layers, as that is newer
[07:30] <lathiat> i'm not aware of any specific charms that are a better example, hopefully someone else has some ideas, maybe marcoceppi
[07:43] <simonklb> lathiat: thanks!
[07:52] <simonklb> is it possible to combine @when and @when_not ?
[07:54] <simonklb> nvm, it is, neat :)
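[editor's note: combining them works because the decorators stack — the handler fires only when every @when state is set and no @when_not state is. A toy dispatcher below shows the semantics; the real decorators come from charms.reactive, which is stubbed out here, and the state names are invented:]

```python
# Toy model of stacked @when/@when_not guards. Real charms import
# these decorators from charms.reactive; this only shows the logic.
active_states = set()

def when(*states):
    def wrap(fn):
        fn.wants = getattr(fn, 'wants', set()) | set(states)
        fn.avoids = getattr(fn, 'avoids', set())
        return fn
    return wrap

def when_not(*states):
    def wrap(fn):
        fn.wants = getattr(fn, 'wants', set())
        fn.avoids = getattr(fn, 'avoids', set()) | set(states)
        return fn
    return wrap

def should_run(fn):
    # all wanted states set, none of the avoided states set
    return fn.wants <= active_states and not (fn.avoids & active_states)

@when('db.connected')
@when_not('app.configured')
def configure():
    return 'configuring'

active_states.add('db.connected')
print(should_run(configure))   # -> True
active_states.add('app.configured')
print(should_run(configure))   # -> False
```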
[08:26] <jamespage> gnuoy, hmm long queues upstream to get stuff landed is not awesome...
[08:26] <simonklb> was the api-endpoints command removed from juju in a recent update?
[08:27] <simonklb> I'm getting: ERROR unrecognized command: juju api-endpoints
[08:43] <jamespage> urulama, hey - around? I'm struggling with publishing bundles to the charm store...
[08:44] <urulama> jamespage: hey
[08:45] <urulama> jamespage: what error do you get?
[08:45] <jamespage> urulama, well I don't get an error using the push/publish commands
[08:45] <jamespage> charm push . cs:~openstack-charmers-next/openstack-base-trusty-mitaka
[08:45] <jamespage> worked ok
[08:45] <jamespage> and so did
[08:45] <jamespage> charm publish ~openstack-charmers-next/openstack-base-trusty-mitaka-0
[08:46] <jamespage> however I don't see them on jujucharms.com
[08:46] <urulama> jamespage: it's private, set only to ~openstack-charmers-next, if you haven't granted it to "everyone"
[08:46] <jamespage> urulama, oh I thought read public was the default
[08:46] <jamespage> urulama, let me fix that
[08:47] <urulama> jamespage: publish doesn't change permissions
[08:48] <jamespage> urulama, okies...
[08:49] <urulama> jamespage: ok, i see the charm now
[08:49] <urulama> sorry, bundle
[08:49] <urulama> uf, we need to add series to bundles to avoid this, i know the work has already started
[08:50] <jamespage> urulama, yeah - setting --acl read everyone
[08:50] <jamespage> urulama, I just pushed an update which did push ok but I got a
[08:50] <jamespage> ERROR cannot add extra information: unauthorized: access denied for user "james-page"
[08:50] <jamespage> at the end of the push operation
[08:51] <urulama> hm, thought that was resolved already
[09:04] <simonklb> charm test seems to be removed as well, or am I missing something?
[09:04] <simonklb> ERROR unrecognized command: charm test
[09:08] <jamespage> urulama, hmm
[09:08] <jamespage> urulama, so I'm seeing dupes of bundles I've switched from bzr ingestion to direct publishing here:
[09:08] <jamespage> https://jujucharms.com/u/openstack-charmers-next/
[09:17] <jamespage> urulama, I also see a lot of 'access denied' messages for push and publish which go away if I keep re-running the commands...
[09:19] <urulama> jamespage: it seems there's a charm store unit in production that is misbehaving, and when ha-proxy switches to it, you get an error. we'll ask webops to solve it
[09:19] <jamespage> urulama, thanks!
[09:22] <urulama> jamespage: is ~openstack-charmers-next/bundle/openstack-base-43 published and ~openstack-charmers-next/bundle/openstack-base-40 ingested?
[09:23] <jamespage> urulama, yes
[09:24] <urulama> jamespage: would you please do "charm publish ~openstack-charmers-next/bundle/openstack-base-43" ... i'd like to see if ingestion changes that pointer from revision 43 to 40 every time it runs
[09:25] <jamespage> urulama, ok done
[09:28] <gnuoy> jamespage, giving these three the once over when you have a sec?
[09:28] <gnuoy> https://review.openstack.org/#/c/307643/
[09:28] <gnuoy> https://review.openstack.org/#/c/307564/
[09:28] <gnuoy> https://review.openstack.org/#/c/307387/
[09:28] <gnuoy> s/giving/would you mind giving/
[09:28] <jamespage> gnuoy, yes
[09:28] <gnuoy> ta
[09:32] <urulama> jamespage: yep, that's the case. ingestion overrides manual publishing. that'll be a high priority bug. https://github.com/CanonicalLtd/jujucharms.com/issues/250
[09:32] <jamespage> urulama, awesome
[09:32] <jamespage> thanks
[09:36] <jamespage> urulama, would removing the original branches help in this case?
[09:36] <urulama> jamespage: yes it would
[09:37]  * jamespage goes to do that then
[09:40] <urulama> jamespage: you'll have to do "charm publish" again to reset the pointer to the revision that you want
[09:41] <jamespage> urulama, ack
[09:48] <jamespage> gnuoy, those all look reasonable
[09:48] <jamespage> gnuoy, I see you have functional tested them - thank you
[09:48] <gnuoy> that's how I roll
[09:49] <jamespage> gnuoy, I don't see the need to wait for the full recheck on these ones - amulet does not do upgrade tests anyway
[09:49] <gnuoy> jamespage, good point
[09:50] <jamespage> gnuoy, ok landing when the queue catches up...
[09:50] <gnuoy> ta
[09:51] <jamespage> gnuoy, queue appears to have caught up - all last night's approved changes have now landed...
[09:58] <stub> I use amulet to do upgrade tests by telling it to deploy an old known-good revision then running 'juju upgrade-charm --switch' myself
[09:59] <gnuoy> stub, fwiw those changes were around upgrading the packages rather than the charm itself but thanks for the pointer
[10:04] <Garyx> Does anyone here know if there is documentation on getting juju 2.0 to work with MAAS 2.0?
[10:05] <jamespage> Garyx, that's known not to work right now - dev team are working on it
[10:05] <jamespage> Juju 2.0 with MAAS 1.9 is OK
[10:06] <Garyx> OK; I'd been looking around for where it actually says that juju 2.0 is not working with maas 2.0, so I've been banging my head against a rock a little with that one.
[10:07] <stub> I'm going to use an action to upgrade packages, and I think Amulet has grown action support recently.
[10:07] <stub> But I guess I'd need two ppas to write an integration test for that.
[10:25] <jamespage> beisner, for some reason juju 2.0 beta-1 was the default juju on lescina
[10:25] <jamespage> I've pushed it back to 1.25.x for now
[10:58] <simonklb> anyone else here running juju 2.0 beta4 and is having troubles with the testing?
[10:59] <simonklb> it seems it still looks for environments.yaml but environments are called models now
[11:01] <simonklb> ah I see it's still under development https://github.com/juju/amulet/issues/116
[11:12] <stub> simonklb: As far as I can tell, testing does not work with 2.0 at the moment as amulet depends on juju-deployer and juju-deployer is only working against the 1.x series.
[11:12] <stub> simonklb: There may be unpackaged versions around but I don't know how much further you will get.
[11:13] <simonklb> actually amulet master isn't even up to snuff: https://github.com/juju/amulet/blob/master/amulet/helpers.py#L168
[11:15] <stub> simonklb: That code path may not be called if the version is being sniffed (although that would still stick you with the juju-1 vs juju executable issue)
[11:17] <simonklb> yea it's unfortunate, because I was told to start with juju 2.0 right away - but without testing it's going to be difficult
[11:26] <stub> simonklb: Yes, you are stuck with just unit testing (which means I'm stuck on 1.x, since I have a lot of integration tests to keep happy)
[11:26] <stub> simonklb: I think the problem is expected to be fixed by the time you get to writing integration tests :)
[11:27] <simonklb> well it's also a convenient way to test your charm while you're writing it, instead of deploying it in a real environment
[11:28] <simonklb> maybe there is some way to make that easy with your normal juju deployment too?
[11:28] <stub> integration testing isn't convenient due to the setup and teardown times, but I agree you want it once you get to a certain point.
[11:28] <simonklb> how would you go about testing simple relations?
[11:28] <simonklb> for example fetching host and port from mysql
[11:30] <stub> You could mock that sort of thing, but I agree that is best as an integration test against a real model. But first, you can do unit tests of the helpers and logic used by your relation hooks which can help the slow integration test work go much smoother.
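[editor's note: stub's suggestion — unit-test the helpers with the relation data mocked out — might look like the sketch below. get_db_config, the injected relation_get, and the 'host'/'port' keys are all illustrative; real interfaces document their own keys, and in a charm relation_get would come from charmhelpers.core.hookenv:]

```python
# Sketch: unit-test a helper that reads host/port from a mysql-style
# relation, with relation_get mocked so no real model is needed.
from unittest import mock

def get_db_config(relation_get):
    # relation_get is injected so tests can swap in a fake;
    # in a charm it would be charmhelpers.core.hookenv.relation_get
    host = relation_get('host')
    port = relation_get('port')
    if not (host and port):
        return None          # relation data not complete yet
    return {'host': host, 'port': int(port)}

fake = mock.Mock(side_effect=lambda key: {'host': 'db.example',
                                          'port': '3306'}.get(key))
print(get_db_config(fake))   # -> {'host': 'db.example', 'port': 3306}
```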
[11:31] <simonklb> the problem is that I'm not entirely sure what every relation returns, maybe that is documented somewhere?
[11:32] <stub> The interfaces are documented in the charms that implement them, and it is of variable quality. The reactive framework and relation stubs are trying to address that problem, but it is early days for that.
[11:34] <stub> 'juju debug-hooks' is great for exploring, as you can poke the relation interactively.
[11:36] <simonklb> ah, thanks!
[11:37] <simonklb> right now I'm looking to build a reactive charm, from what I've gathered you're supposed to use the @when and @when_not decorators to execute when the charm enters different states
[11:38] <simonklb> however, the normal install, start and stop hooks etc still look like they're required, right?
[11:39] <stub> You usually have an @hook('install') to kick things off, yes. The trick is to keep the @hook decorated handlers to a minimum, and hand off to the @when decorated handlers as soon as possible.
[11:40] <simonklb> yea, then I feel I'm on the right track at least :)
[11:40] <stub> @when('config.changed') has recently replaced many uses of @hook('config-changed') too
[11:40] <simonklb> nice nice
[11:40] <simonklb> but to get the @hook decorator, do I still need to get the charm helpers package?
[11:41] <simonklb> or can I get that from the reactive package somehow?
[11:41] <stub> No, you import the decorator from charms.reactive
[11:41] <simonklb> great
[11:42] <simonklb> might be a good idea to include that in the charm create boilerplate, since they automatically include @when('X.installed') but not @hook('install')
[11:42] <stub> I haven't seen that boilerplate yet.
[11:43] <stub> simonklb: oic. Yes, that boilerplate doesn't need the @hook
[11:44] <simonklb> I assume it's this one: https://github.com/juju-solutions/template-reactive-python
[11:44] <simonklb> so I can run it without the install hook?
[11:44] <stub> The handler is invoked because the state is not set (@when_not('foo.installed')), and at the end of the handler it sets the foo.installed state so it doesn't get reinvoked a second time.
[11:44] <stub> yes, you can run without the install hook
[11:45] <simonklb> I think I had some error like "missing hook: install" or similar when I deployed it before
[11:45] <simonklb> I'll give it another try
[11:46] <stub> You need to 'charm build' it before it can be deployed. That adds a lot of the missing boilerplate, by pulling in the layers you declare in layer.yaml
[11:47] <simonklb> that might have been it, because now it runs :)
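[editor's note: the run-once pattern stub describes — a handler guarded by @when_not('foo.installed') that sets foo.installed when it finishes, so it never fires again — can be modelled in a few lines. charms.reactive's state tracking and dispatcher are stubbed here, and the 'foo.installed' state name is just the example from the conversation:]

```python
# Toy model of the @when_not('foo.installed') install pattern:
# the handler runs while the state is absent, then sets it so the
# dispatcher skips it on every later hook.
states = set()
runs = []

def install():
    runs.append('install')          # the actual install work goes here
    states.add('foo.installed')     # i.e. set_state('foo.installed')

def dispatch():
    # charms.reactive re-evaluates handlers on each hook; this is
    # the guard @when_not('foo.installed') boiled down
    if 'foo.installed' not in states:
        install()

dispatch()   # first hook: handler fires
dispatch()   # later hooks: state is set, handler skipped
print(runs)  # -> ['install']
```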
[12:59] <marcoceppi> simonklb: https://jujucharms.com/docs/devel/developer-getting-started and you should check out kubernetes charm or any of the bigdata charms on https://bigdata.juju.solutions
[13:01] <simonklb> marcoceppi: any eta on juju 2.0 support in amulet?
[13:01] <marcoceppi> simonklb: it should be there, tvansteenburgh has the latest on that
[13:01] <simonklb> marcoceppi: it looks like it's still looking for environments and not models
[13:02] <beisner> jamespage, gnuoy - wow upstream merge queue/activities seem to be taking many hrs
[13:02] <simonklb> is there any newer version than the master on github?
[13:02] <tvansteenburgh> simonklb: you need the latest deployer and jujuclient
[13:04] <simonklb> tvansteenburgh: where can I get them? ppa or source?
[13:04] <tvansteenburgh> simonklb: https://launchpad.net/~ahasenack/+archive/ubuntu/juju-deployer-daily
[13:04] <tvansteenburgh> simonklb: https://launchpad.net/~ahasenack/+archive/ubuntu/python-jujuclient
[13:47] <simonklb> tvansteenburgh: thanks
[13:52] <simonklb> tvansteenburgh: still getting the same error - it's trying to fetch environments.yaml from the old juju home path
[13:53] <simonklb> looking at the amulet code - this seems to be where I end up https://github.com/juju/amulet/blob/master/amulet/helpers.py#L168
[13:53] <simonklb> I also saw that there was an issue for this: https://github.com/juju/amulet/issues/116
[13:57] <tvansteenburgh> simonklb: paste the traceback please
[13:58] <tvansteenburgh> simonklb: i see the problem
[13:58] <tvansteenburgh> simonklb: my juju2 branch was never merged :/
[13:58] <marcoceppi> simonklb: that issue is about features missing from amulet for juju 2.0, not about amulet supporting juju 2.0 at all
[13:59] <tvansteenburgh> simonklb: okay it was just merged
[13:59] <simonklb> yea sorry for being unclear
[13:59] <marcoceppi> tvansteenburgh: weird, I reviewed it and never merged it
[13:59] <tvansteenburgh> simonklb: please repull master and try again
[13:59] <tvansteenburgh> marcoceppi: np
[14:09] <simonklb> tvansteenburgh: seem to be working now, thanks!
[14:43] <cory_fu> Can I get a review of CLI for layer options: https://github.com/juju-solutions/layer-basic/pull/58
[14:56] <urulama> jamespage: if you want to get rid of revision 40 in /u/openstack-charmers-next (which is marked as current development charm), you'll have to do "charm publish ~openstack-charmers-next/bundle/openstack-base-43 --channel=development" which will stop listing revision 40 as alternative development charm
[15:02] <jamespage> urulama, ack
[15:03] <urulama> same for any duplicated stuff that you want to get rid of (not duplicate for development revisions)
[15:13] <jamespage> urulama, I see something niggly for charms published with series in metadata
[15:14] <jamespage> urulama,
[15:14] <jamespage> cs:~openstack-charmers-next/ntp
[15:14] <jamespage> is one of those; but I need to write bundles which are backwards compat with 1.25.5
[15:14] <jamespage> which needs cs:~openstack-charmers-next/xenial/ntp
[15:14] <jamespage> which is resolvable, but I can't push a bundle with that in it
[15:16] <urulama> jamespage: on the phone, gimme 10min
[15:22] <jamespage> urulama, np
[15:33] <jamespage> rockstar, we decided that placing control plane components in lxd containers on compute nodes running nova-lxd was a no-no right?
[15:33] <rockstar> jamespage: yes, at least for right now.
[15:33] <jamespage> rockstar, okay
[15:33] <rockstar> If we want to spend the time to fix that bug, we can probably support that.
[15:34] <jamespage> rockstar, later
[15:35] <jamespage> rockstar, crazy thing is that it works fine with lxc containers on 1.25
[15:36] <jamespage> cause lxc != lxd from a storage perspective...
[15:36] <rockstar> As I recall, the problem was more of a charm specific problem, right? We're messing with the storage underneath AFTER we created some containers.
[15:41] <ReSam> good morning!
[15:41] <ReSam> is it possible to migrate the state server to a different host?
[16:09] <cory_fu> kwmonroe, kjackal: Did you guys see my PR for the puppet layer?  https://github.com/juju-solutions/layer-puppet/pull/2
[16:11] <kwmonroe> on it cory_fu
[16:12] <kjackal> cory_fu you are an artist! Nice work!
[16:14] <bdx> layer-puppet-peeps: I've already worked out a lot of the kinks ... see here -> https://github.com/jamesbeedy/layer-puppet-agent
[16:15] <cory_fu> Dude.  How did we miss that.
[16:17] <cory_fu> bdx: Does that layer support masterless puppet?
[16:19] <bdx> it doesn't currently, although it could very easily  .... I should add a flag for masterless
[16:19] <jcastro> bdx: you're in portland right?
[16:19] <jcastro> http://www.devopsdays.org/events/2016-portland/
[16:19] <bdx> jcastro: yea
[16:20] <bdx> jcastro: crazy! I'm all about it!
[16:21] <jcastro> ok, holla at me if there's like a hotel cost or something
[16:22] <bdx> jcastro: perfect, I could use a night out on the town ... the site is kind of wimpy though ... where/how do I sign up?
[16:23] <bdx> ooooh nm 'propose'
[16:23] <bdx> got it
[16:52] <bdx> charmers: what am I doing wrong here -> https://github.com/jamesbeedy/interface-memcache/blob/master/requires.py
[16:58] <marcoceppi> bdx: what error are you getting/what are you expecting?
[17:08] <bdx> marcoceppi: I am not even getting an error, I'm just not getting anything returned from memcache_hosts()
[17:09] <jamespage> beisner, hmmm
[17:10] <jamespage> beisner, working out the release procedure with git/gerrit
[17:10] <jamespage> I was kinda expecting to delete the stable branch and then re-create it
[17:10] <jamespage> but apparently deleting branches is beyond my powers...
[17:10]  * jamespage thinks...
[17:18] <mgz> jamespage: can you move branches?
[18:00] <icey> beisner: jamespage any chance of getting that c-h change merged?
[18:03] <firl> anyone know the password/user combo for curtin installs with MAAS? I am trying to diagnose something and I forgot
[18:11] <beisner> thedac, this looks ready to land.  ready?  https://review.openstack.org/#/c/307480/
[18:12] <thedac> beisner: yes, please
[18:12] <bdx> marcoceppi: got an error for ya -> http://paste.ubuntu.com/15935177/
[18:12] <marcoceppi> bdx: interesting
[18:15] <bdx> marcoceppi: heres where/how I'm using it -> https://github.com/jamesbeedy/layer-sentry/blob/master/reactive/sentry.py
[18:15] <bdx> https://github.com/jamesbeedy/layer-sentry/blob/master/reactive/sentry.py#L131-142
[18:20] <jamespage> mgz, not sure
[18:21] <jamespage> like a rename?
[18:21] <mgz> jamespage: yeah, I'd expect the perms on a rename to be the same as delete, but maybe not?
[18:22] <mgz> jamespage: also, unrelated, I'm interested in any results/issues you have with testing using the 'lxd' name in bundles with 2.0
[18:22] <jamespage> mgz, well that generally worked ok
[18:23] <jamespage> mgz, apart from the fact that juju managing lxd containers and the lxd charm we have for nova-lxd changing storage config underneath it rather exploded...
[18:28] <beisner> jamespage, yah i've not jumped into how upstream manages release tags and branches.  how goes?
[18:31] <beisner> gnuoy, trusty-icehouse n-api looks good now on metal;  cholcombe we're now back to the block devices things and I've got that caught by the tail on metal atm.
[18:31] <cholcombe> beisner, sweet, can i log in and look?
[18:33] <beisner> cholcombe, http://pastebin.ubuntu.com/15935632/
[18:34] <cholcombe> beisner, what's the ceph yaml file look like that you deployed with?
[18:34] <beisner> cholcombe, http://bazaar.launchpad.net/~ost-maintainers/openstack-mojo-specs/mojo-openstack-specs/view/head:/helper/bundles/baremetal7-next.yaml
[18:35] <cholcombe> so it should've just used sdb
[18:35] <cholcombe> as partition of sdb is mounted
[18:35] <beisner> that's what it does everywhere but trusty-icehouse
[18:35] <cholcombe> hmm
[18:36] <kjackal> cory_fu, kwmonroe: Do we want the bigtop plugin to be a subordinate of ANY other charm (juju-info interface) or only the ones implementing the hadoop-plugin interface, or both? (I vote for only the ones having the hadoop-plugin interface)
[18:37] <cory_fu> Definitely NOT juju-info.  :)
[18:37] <cory_fu> Yes to hadoop-plugin
[18:37] <cory_fu> Though, if we're going to extend that to support other services, I guess we'll need to rename it from "hadoop" plugin
[18:37] <cory_fu> But we can do that down the road
[18:38] <cholcombe> beisner, can you pgrep ceph-osd ?
[18:38] <cholcombe> that's the function i believe it's calling to determine if it should block on no block devices found
[18:39] <beisner> empty-handed, no process
[18:40] <cholcombe> that's the problem
[18:40] <cholcombe> something happened that prevented ceph-osd from starting
[18:40] <cholcombe> can you dump the logs?
[18:54] <beisner> cholcombe, fwiw if you're after logs, we've got all logs for all units on all jobs
[18:55] <cholcombe> yeah we might want to just keep moving on testing and save the logs for me to poke at
[18:55] <cholcombe> beisner, something weird seems to be happening with the hardening function
[18:56] <cholcombe> so it started osdizing the disk, then seemed to stop, only to decide later that /dev/sdb is in use and bail
[18:57] <cholcombe> beisner, i'm not entirely sure why, after it formats the disk, it seems to bail and then come back later confused about the disk being in use
[18:58] <beisner> cholcombe, i'll leave the enviro up till this eve, will be out of pocket this afternoon.
[18:58] <cholcombe> beisner, ok thanks
[18:58] <cholcombe> beisner, if you need to destroy it go ahead though.  I don't want to hold the pipeline up
[18:59] <beisner> cholcombe, this (bug) is pretty much the place to be, i think all other metallic issues are sorted ;-)
[19:59] <stormmore> this seems odd, I have setup vlans on my MAAS server and used juju bootstrap to create my first node. However, when I ssh into the node and try to ping all the maas interfaces, not all of them are pingable
[20:03] <cholcombe> beisner, awesome.  i'll keep digging
[20:14] <cory_fu> marcoceppi: I replied to your last comment on the layer options CLI.  I'd like to get a first pass through, but if you thinking chaining / nested options is important for bash charms, I can add that.
[20:15] <marcoceppi> cory_fu: we have until oct for this to land, so there's time
[20:15] <cory_fu> marcoceppi: Eh?
[20:15] <cory_fu> IBM needs this ASAP
[20:15] <marcoceppi> cory_fu: this is totes a 2.2 milestone, yeah?
[20:15] <marcoceppi> OH
[20:15] <cory_fu> And it's in the base layer
[20:15] <marcoceppi> IT'S A LAYER
[20:15] <cory_fu> :)
[20:19] <stormmore> OK so I am going to rule out juju being the source of my network drama by doing a basic deploy in MAAS
[20:21] <freak> hi
[20:21] <freak> hi everyone
[20:22] <freak> i need help regarding ceph-osd
[20:22] <freak> i successfully deployed openstack earlier
[20:22] <freak> but yesterday i switched off my nodes
[20:22] <freak> now today when i powered them up
[20:22] <freak> all components came up successfully
[20:22] <freak> but ceph-osd is showing error
[20:23] <freak> hook failed: "update-status"
[20:23] <marcoceppi> freak: can you try to `juju resolved ceph-osd/#` where # is the number of the failed unit?
[20:23] <marcoceppi> freak: also, /var/log/juju/unit-ceph-osd-#.log would be helpful
[20:24] <freak> ok let me try that... although from juju-gui i also clicked the resolve option but it didn't work... ok this time i'll try from the cli
[20:24] <freak> and also share log
[20:34] <bdx> marcoceppi: I found the problem with my unit ids!
[20:34] <marcoceppi> bdx: \o/
[20:35] <bdx> marcoceppi: when my memcache-relation-joined hook runs, it errors 'relation not found'
[20:36] <bdx> marcoceppi:it runs `relation-list --format=json -r memcache:93`
[20:36] <bdx> marcoceppi: when I debug-hooks into the instance and run `relation-list --format=json -r memcache:95` it succeeds
[20:37] <bdx> so what am I to gather from this, that memcached charm is setting an incorrect relation id?
[20:38] <freak> marcoceppi  i executed the command juju resolved here is the output  http://paste.ubuntu.com/15937564/
[20:38] <freak> it says already resolved
[20:38] <freak> but my unit is still in hook failed: update-status
[20:42] <jose> freak: the log should give us a bit more info, can you pastebin it please? should be located in /var/log/juju/unit-ceph-osd-0.log
[20:43] <jose> inside the ceph-osd machine, that is
[20:43] <freak> ok jose let me take the log
[20:43] <jose> thank you :)
[20:50] <freak> dear jose/marcoceppi here is the log file output http://paste.ubuntu.com/15937716/
[20:51] <jose> I have to leave but maybe Marco can give you a hand later :)
[20:52] <freak> ok no issue
[22:01] <bdx> wtf is going on in juju-dev
[22:01] <bdx> someone kick that guy
[22:05] <alexisb> bdx, we are working on it
[22:09] <cholcombe> beisner, i think if we revert c94e0b4b on ceph-osd we should be in the clear.  I think I have an idea for a fix but there's no way I can code it and complete it by tonight
[22:10] <cholcombe> beisner, do you have a bug for that ceph-osd fail?  I can link the revert task to it
[22:12] <beisner> cholcombe plz see the spreadsheet
[22:13] <cholcombe> beisner, ok.  I don't see anything for ceph-osd though
[22:14] <cholcombe> i'll write up a new bug if that's ok?
[22:23] <cholcombe> beisner, i put in the revert but of course it has a merge conflict lol
[22:24] <freak> cholcombe  when i can expect that this ceph-osd issue be resolved
[22:24] <beisner> cholcombe. Mojo tab. Trusty-Icehouse
[22:27] <cholcombe> freak, well I can revert the change and we should be fine.  However the feature is going to need a little more work.  I didn't realize that ceph would fail the ceph osd crush create-or-move command if a bucket didn't exist
[22:28] <freak> ok thanks cholcombe
[22:29] <cholcombe> ceph crush internally i believe has all the default buckets even if you don't specify them but on firefly the command is failing
[22:47] <firl> anyone know if there is an easy way to have multiple ceph zones with juju charms and visible in horizon?
[22:50] <cholcombe> firl, what do you mean by zones?
[22:50] <firl> cholcombe: availability zones
[22:50] <firl> like one zone for SSD, one zone for SATA spinners, one for SCSI
[22:51] <cholcombe> ah yes.  There's a patch that we're going to work on for that but it's not ready yet
[22:51] <cholcombe> firl, if you'd like to give it a shot to write the patch yourself i'd be happy to guide you
[22:52] <cholcombe> firl, what i mean is I haven't started on that patch yet but I'm happy to get more help
[22:53] <firl> gotcha, I don’t have the cycles right now :( just didn’t know if it was available yet.
[22:54] <cholcombe> i have a juju action that I wrote to make new pools ( availability zones )
[22:54] <firl> nice!
[22:54] <cholcombe> so if you had a crush rule that was just ssd's or just spinners it could use that when creating the pool
[22:54] <cholcombe> i think i wasn't properly understanding what you meant by az.  In ceph speak i think you're referring to a pool
[22:55] <firl> i would be ok with different pools, or even different servers servicing Openstack Availability zones
[22:55] <cholcombe> firl, that action is part of ceph-mon-next and should land in a few days in stable once 16.04 is released
[22:56] <cholcombe> you'll have to hand make the crush rule for now but in the future the charm will have support to create custom rules
[22:56] <cholcombe> firl, https://github.com/openstack/charm-ceph-mon/blob/master/actions/create-pool
[22:56] <firl> hrmm i was thinking more of having multiple ceph installs across the cluster
[22:57] <cholcombe> https://github.com/openstack/charm-ceph-mon/blob/master/actions.yaml#L49 profile name is what you're looking to pass in
[22:57] <cholcombe> oh i see
[22:57] <cholcombe> there's no reason you couldn't deploy the same charm twice with a different name for another ceph cluster
[22:58] <firl> yeah, but not sure how the relationships would work or show up
[22:59] <cholcombe> i'm not either.  i haven't tried that yet
[23:00] <firl> but I think you are right, having named pools also show up would be nice
[23:00] <cholcombe> they should show up and relate just fine but i'm not certain of it