/srv/irclogs.ubuntu.com/2016/04/19/#juju.txt

blahdeblahAnyone got a good example of something that consumes an interface layer like https://github.com/juju-solutions/interface-http or https://github.com/juju-solutions/interface-juju-info ?02:30
blahdeblahThe documentation on interface layers says they're the most misunderstood part, and I'm still misunderstanding even after reading the doc 4-5 times.02:30
marcoceppiblahdeblah: possibly, what are you misunderstanding?03:20
blahdeblahmarcoceppi: How I can consume the private-address (and add public-address) in https://github.com/juju-solutions/interface-juju-info, for starts.03:20
blahdeblahI couldn't find anything that showed how to get at the things defined in auto_accessors03:21
marcoceppiblahdeblah: so, I can whip up an example03:24
blahdeblahmarcoceppi: Don't go to any special trouble.  I was just hoping to find some examples of charms which used either of those layers so I could get a feel for how they're used.03:25
marcoceppiblahdeblah: private-address is the only thing available in auto-accessors for juju-info03:25
marcoceppialso, this layer has a few typos.03:25
blahdeblahmarcoceppi: I know; I wanted to add public-address as well03:25
marcoceppiblahdeblah: ah, I see03:26
blahdeblahmarcoceppi: yeah - I might have fixed them in my fork03:26
blahdeblahhttps://github.com/paulgear/interface-juju-info03:26
blahdeblahI started playing with it and then got stuck on how to actually use it.03:26
marcoceppiblahdeblah: https://gist.github.com/marcoceppi/fb911c63eac6a1db5c649a2f9643907403:34
blahdeblahmarcoceppi: Something else that confused me was that interface uses scopes.GLOBAL, but the doc says "All connected services and units for this relation will share a single conversation. The same data will be broadcast to every remote unit, and retrieved data will be aggregated across all remote units and is expected to either eventually agree or be set by a single leader."  So it seemed to me that there wouldn't be an opportunity to get the03:34
blahdeblahprivate-address (or public-address, assuming I've done that right) from every unit.03:34
blahdeblah^ hope that didn't get cut off03:34
marcoceppiblahdeblah: right, scopes.GLOBAL is wrong, you'd want scopes.UNIT03:35
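A minimal sketch of what consuming such an interface from a reactive charm can look like, assuming the interface layer uses scopes.UNIT and sets a '{relation_name}.connected' state; the 'host-system' relation name and the state name are illustrative, not part of the actual interface-juju-info layer:

    from charmhelpers.core import hookenv
    from charms.reactive import when


    @when('host-system.connected')
    def host_available(host_system):
        # auto_accessors expose remote relation fields as methods, with
        # hyphens converted to underscores (private-address -> private_address()).
        hookenv.log('principal unit is at %s' % host_system.private_address())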
blahdeblahmarcoceppi: OK; that gist is pretty simple03:35
blahdeblahSo then if I want to gather a list of public-address values from every unit, that would need to be added per my branch, then each unit would need to send the gathered data across the peer relation to the leader?03:36
marcoceppiblahdeblah: so there are a few things - one are you sure you want to use the juju-info interface? or are you creating a new interface?03:38
blahdeblahmarcoceppi: I can't see any reason not to use juju-info, as long as it works.  I just want to gather a list of all the public-addresses of all the units associated.03:39
blahdeblahThere might be a better way to do that.03:39
blahdeblahe.g. does the subordinate charm automatically get the public-address of the associated primary charm?  If so, I may be able to just ask the peer relation for it.03:40
marcoceppiblahdeblah: it does not.03:41
marcoceppiblahdeblah: so you don't want to add this to juju-info03:41
blahdeblahmarcoceppi: So what do I want? :-)03:41
marcoceppiblahdeblah: one min otp03:42
blahdeblahno worries03:42
marcoceppiblahdeblah: so what's your end goal?03:47
marcoceppiblahdeblah: because you can't just add features to an interface03:47
marcoceppiesp the juju-info interface03:47
blahdeblahmarcoceppi: juju-info does actually provide public-address, from what I've been able to tell03:47
blahdeblahI could be wrong, though03:47
blahdeblahmarcoceppi: End goal is to resurrect the spirit (if not the flesh) of lazyPower's DNS-Charm and implement the autogenerated part.03:48
blahdeblah(as well as a provider for the Dynect DNS API, and do it all with appropriate layers & interfaces)03:48
marcoceppiblahdeblah: well, technically, yes, because of spaces in juju03:49
marcoceppiblahdeblah: but practically, unit-get public-address will be the same on the primary and the subordinate03:49
blahdeblahmarcoceppi: I don't understand that "technically..." part03:49
marcoceppiblahdeblah: ohh, this sounds cool - though I wish juju would just grow dns natively03:49
blahdeblahI actually think charms are a better place for it03:50
blahdeblahBecause then they're user-customisable and don't need compiled code03:50
marcoceppiblahdeblah: because we have net-spaces in Juju, I think public-address was added since you can bind a netspace to the relation03:50
marcoceppiblahdeblah: let me check something. if it exists in 1.25 and 2.0 then it's safe to add to the relation03:51
blahdeblahmarcoceppi: I have no plans to target 2.0 yet, but I guess I do want forward compatibility with it03:51
marcoceppiblahdeblah: well the idea is - if it exists in 1.25 and 2.0 (public-address in relation data) I don't see why we can't have it in the juju-info interface layer03:52
blahdeblahyeah - if it works I'll drop you a PR from my branch03:52
blahdeblahmarcoceppi: The primary goal is to have DNS work fully automatically given a very small amount of configuration on a subordinate charm, and have the elected leader update Dynect without the end user having to touch anything when you add or remove units.03:55
blahdeblahmarcoceppi: And a secondary goal of me actually understanding how layers & interfaces work.03:56
marcoceppiblahdeblah: first goal sounds fucking awesome03:56
blahdeblahI'm kind of more motivated by the 2nd goal ;-)03:56
blahdeblah(although, I do hate editing DNS records, too ;-)03:57
marcoceppiblahdeblah: sadly, in 1.25, only private-address exists04:00
blahdeblahmarcoceppi: So does that mean there's no way to get at the public-address at all?  Because it's been stored in juju and reported in juju status since forever?04:01
marcoceppiblahdeblah: but, you could/should create a peer relation, and each unit can run `unit-get public-address` and the leader can just get those addresses04:01
blahdeblahThat's possibly marginally easier04:01
marcoceppiblahdeblah: the subordinates live on the same host, and so unit-get will work as if you were on the primary04:01
blahdeblahIs that exposed in charmhelpers as well as bash?04:01
marcoceppiblahdeblah: even if juju-info had public-address, you'd still have to use a peer relation04:02
marcoceppiblahdeblah: yup04:02
marcoceppiblahdeblah: scope: container is a super special type of relation, it basically means that communication will only happen between this unit and its counterpart, unlike standard relations04:02
marcoceppiblahdeblah: you've always needed a peer ;)04:02
blahdeblahYeah - I knew I would.  Otherwise there would be no way for any one unit to know about all the others.04:03
marcoceppiblahdeblah: https://pythonhosted.org/charmhelpers/api/charmhelpers.core.hookenv.html#charmhelpers.core.hookenv.unit_get04:04
blahdeblahyeah - found that04:04
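A rough sketch of that peer-relation approach, using charmhelpers directly; the 'dns-peers' relation name and the hook wiring are hypothetical:

    import sys

    from charmhelpers.core import hookenv

    hooks = hookenv.Hooks()


    @hooks.hook('dns-peers-relation-joined')
    def publish_address():
        # Each unit advertises its own public address on the peer relation.
        hookenv.relation_set(
            relation_settings={'public-address': hookenv.unit_get('public-address')})


    @hooks.hook('dns-peers-relation-changed')
    def collect_addresses():
        # Only the elected leader gathers the full list to push to the DNS provider.
        if not hookenv.is_leader():
            return
        addresses = [hookenv.unit_get('public-address')]
        for rid in hookenv.relation_ids('dns-peers'):
            for unit in hookenv.related_units(rid):
                addr = hookenv.relation_get('public-address', unit=unit, rid=rid)
                if addr:
                    addresses.append(addr)
        hookenv.log('peer public addresses: %s' % addresses)


    if __name__ == '__main__':
        hooks.execute(sys.argv)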
blahdeblahmarcoceppi: I didn't follow that last part about scope: container, though.04:04
marcoceppiblahdeblah: it just reaffirms what you've said before, that you've always needed a peer. The reason is because of what scope:container means. Typically, in juju, every unit on each side of the relation has a channel of communication with each other. scope:container does not; it only has a channel of communication with the unit it is attached to physically04:06
marcoceppiblahdeblah: so you couldn't, from one subordinate unit, query via juju-info the private-address of another unit's primary service unit04:06
marcoceppiblahdeblah: whereas, in non scope: container situations, you could04:07
blahdeblahOK - makes sense04:07
blahdeblahThanks a lot for your help; I think that's clarified it to the point where I might be able to make something vaguely coherent next week when I work on this. :-)04:07
blahdeblahmarcoceppi: ^ Just in case you looked away somewhere :-)04:09
marcoceppiblahdeblah: cool, feel free to ping if you have questions!04:17
blahdeblahmarcoceppi: much appreciated :-)04:17
=== danilos` is now known as danilos
=== rogpeppe3 is now known as rogpeppe
simonklbcould anyone recommend one or more charms that follow the very latest patterns?07:21
simonklbas a newbie it's hard to know which charms are up to date and which are using old ways07:21
lathiatsimonklb: generally i would suggest looking at the openstack and bigdata charms07:30
lathiatsimonklb: and then look into layers, as that is newer07:30
lathiati'm not aware of any specific charms that are a better example, hopefully someone else has some ideas, maybe marcoceppi07:30
simonklblathiat: thanks!07:43
simonklbis it possible to combine @when and @when_not ?07:52
simonklbnvm, it is, neat :)07:54
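The decorators stack, for example (a sketch; the state names and the database accessors are illustrative):

    from charmhelpers.core import hookenv
    from charms.reactive import when, when_not, set_state


    @when('database.available')
    @when_not('myapp.configured')
    def configure(database):
        # Runs only while database.available is set and myapp.configured is not;
        # once the state is set below, it won't run again.
        hookenv.log('configuring against %s:%s' % (database.host(), database.port()))
        set_state('myapp.configured')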
=== jacekn_ is now known as jacekn
jamespagegnuoy, hmm long queues upstream to get stuff landed is not awesome...08:26
simonklbwas the api-endpoints command removed from juju in a recent update?08:26
simonklbI'm getting: ERROR unrecognized command: juju api-endpoints08:27
jamespageurulama, hey - around? I'm struggling with publishing bundles to the charm store...08:43
urulamajamespage: hey08:44
urulamajamespage: what error do you get?08:45
jamespageurulama, well I don't get one using the push/publish commands08:45
jamespagecharm push . cs:~openstack-charmers-next/openstack-base-trusty-mitaka08:45
jamespageworked ok08:45
jamespageand so did08:45
jamespagecharm publish ~openstack-charmers-next/openstack-base-trusty-mitaka-008:45
jamespagehowever I don't see them on jujucharms.com08:46
urulamajamespage: it's private, set only to ~openstack-charmers-next, if you haven't granted it to "everyone"08:46
jamespageurulama, oh I thought read public was the default08:46
jamespageurulama, letme fix that08:46
urulamajamespage: publish doesn't change permissions08:47
jamespageurulama, okies...08:48
urulamajamespage: ok, i see the charm now08:49
urulamasorry, bundle08:49
urulamauf, we need to add series to bundles to avoid this, i know the work has already started08:49
jamespageurulama, yeah - setting --acl read everyone08:50
jamespageurulama, I just pushed an update which did push ok but I got a08:50
jamespageERROR cannot add extra information: unauthorized: access denied for user "james-page"08:50
jamespageat the end of the push operation08:50
urulamahm, thought that was resolved already08:51
simonklbcharm test seem to be removed as well, or am I missing something?09:04
simonklbERROR unrecognized command: charm test09:04
jamespageurulama, hmm09:08
jamespageurulama, so I'm seeing dupes of bundles I've switched from bzr ingestion to direct publishing here:09:08
jamespagehttps://jujucharms.com/u/openstack-charmers-next/09:08
jamespageurulama, I also see a lot of 'access denied' messages for push and publish which go away if I keep re-running the commands...09:17
urulamajamespage: it seems there's a charm store unit in production that is misbehaving, and when ha-proxy switches to it, you get an error. we'll ask webops to solve it09:19
jamespageurulama, thanks!09:19
urulamajamespage: is ~openstack-charmers-next/bundle/openstack-base-43 published and ~openstack-charmers-next/bundle/openstack-base-40 ingested?09:22
jamespageurulama, yes09:23
urulamajamespage: would you please do "charm publish ~openstack-charmers-next/bundle/openstack-base-43" ... i'd like to see if ingestion changes that pointer from revision 43 to 40 every time it runs09:24
jamespageurulama, ok done09:25
gnuoyjamespage, giving these three the once over when you have a sec?09:28
gnuoyhttps://review.openstack.org/#/c/307643/09:28
gnuoyhttps://review.openstack.org/#/c/307564/09:28
gnuoyhttps://review.openstack.org/#/c/307387/09:28
gnuoys/giving/would you mind giving/09:28
jamespagegnuoy, yes09:28
gnuoyta09:28
urulamajamespage: yep, that's the case. ingestion overrides manual publishing. that'll be a high priority bug. https://github.com/CanonicalLtd/jujucharms.com/issues/25009:32
jamespageurulama, awesome09:32
jamespagethanks09:32
jamespageurulama, would removing the original branches help in this case?09:36
urulamajamespage: yes it would09:36
* jamespage goes to do that then09:37
urulamajamespage: you'll have to do "charm publish" again to reset the pointer to the revision that you want09:40
jamespageurulama, ac09:41
jamespagek09:41
jamespagegnuoy, those all look reasonable09:48
jamespagegnuoy, I see you have functionally tested them - thank you09:48
gnuoythat's how I roll09:48
jamespagegnuoy, I don't see the need to wait for the full recheck on these ones - amulet does not do upgrade tests anyway09:49
gnuoyjamespage, good point09:49
jamespagegnuoy, ok landing when the queue catches up...09:50
gnuoyta09:50
jamespagegnuoy, queue appears to have caught up - all last night's approved changes have now landed...09:51
stubI use amulet to do upgrade tests by telling it to deploy an old known-good revision then running 'juju upgrade-charm --switch' myself09:58
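A rough sketch of that approach, assuming amulet on Juju 1.x and a local charm repository; the charm name, revision and repository layout are illustrative:

    import subprocess

    import amulet

    d = amulet.Deployment(series='trusty')
    d.add('mycharm', charm='cs:trusty/mycharm-12')  # old known-good revision
    d.setup(timeout=900)
    d.sentry.wait()

    # Switch the deployed service over to the local build under test;
    # assumes JUJU_REPOSITORY points at the local charm repo.
    subprocess.check_call(
        ['juju', 'upgrade-charm', 'mycharm', '--switch', 'local:trusty/mycharm'])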
gnuoystub, fwiw those changes were around upgrading the packages rather than the charm itself but thanks for the pointer09:59
GaryxDoes anyone here know if there is documentation on getting juju 2.0 to work with MAAS 2.0?10:04
jamespageGaryx, that's known not to work right now - dev team are working on it10:05
jamespageJuju 2.0 with MAAS 1.9 is OK10:05
Garyxok, I'd been looking around for where it actually says that juju 2.0 is not working with maas 2.0, so I'd been banging my head against a rock a little with that one.10:06
stubI'm going to use an action to upgrade packages, and I think Amulet has grown action support recently.10:07
stubBut I guess I'd need two ppas to write an integration test for that.10:07
jamespagebeisner, for some reason juju 2.0 beta-1 was the default juju on lescina10:25
jamespageI've pushed it back to 1.25.x for now10:25
simonklbanyone else here running juju 2.0 beta4 and having trouble with the testing?10:58
simonklbit seems it still looks for environment.yaml but environments are called models now10:59
simonklbah I see it's still under development https://github.com/juju/amulet/issues/11611:01
stubsimonklb: As far as I can tell, testing does not work with 2.0 at the moment as amulet depends on juju-deployer and juju-deployer is only working against the 1.x series.11:12
stubsimonklb: There may be unpackaged versions around but I don't know how much further you will get.11:12
simonklbactually amulet master isn't even up to snuff: https://github.com/juju/amulet/blob/master/amulet/helpers.py#L16811:13
stubsimonklb: That code path may not be called if the version is being sniffed (although that would still stick you with the juju-1 vs juju executable issue)11:15
simonklbyea it's unfortunate, because I was told to start with juju 2.0 right away - but without testing it's going to be difficult11:17
stubsimonklb: Yes, you are stuck with just unit testing (which means I'm stuck on 1.x, since I have a lot of integration tests to keep happy)11:26
stubsimonklb: I think the problem is expected to be fixed by the time you get to writing integration tests :)11:26
simonklbwell it's also a convenient way to test your charm while you're writing it, instead of deploying it in a real environment11:27
simonklbmaybe there is some way to make that easy with your normal juju deployment too?11:28
stubintegration testing isn't convenient due to the setup and teardown times, but I agree you want it once you get to a certain point.11:28
simonklbhow would you go about testing simple relations?11:28
simonklbfor example fetching host and port from mysql11:28
stubYou could mock that sort of thing, but I agree that is best as an integration test against a real model. But first, you can do unit tests of the helpers and logic used by your relation hooks which can help the slow integration test work go much smoother.11:30
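For the unit-test side, a tiny sketch of that kind of mocking, assuming a hypothetical helper in the charm's reactive module that builds a connection string from mysql relation data:

    from unittest import TestCase, mock

    from reactive import mycharm  # hypothetical charm module


    class TestDbUri(TestCase):
        @mock.patch.object(mycharm.hookenv, 'relation_get')
        def test_db_uri(self, relation_get):
            # Stand in for the data the mysql interface would provide.
            relation_get.side_effect = lambda attr, **kw: {
                'host': '10.0.0.5', 'port': '3306'}[attr]
            self.assertEqual(mycharm.db_uri(), 'mysql://10.0.0.5:3306')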
simonklbthe problem is that I'm not entirely sure what every relation returns, maybe that is documented somewhere?11:31
stubThe interfaces are documented in the charms that implement them, and it is of variable quality. The reactive framework and relation stubs are trying to address that problem, but it is early days for that.11:32
stub'juju debug-hooks' is great for exploring, as you can poke the relation interactively.11:34
simonklbah, thanks!11:36
simonklbright now I'm looking to build a reactive charm, from what I've gathered you're supposed to use the @when and @when_not decorators to execute when the charm enters different states11:37
simonklbhowever, the normal install, start and stop hooks etc still look like they are required, right?11:38
stubYou usually have an @hook('install') to kick things off, yes. The trick is to keep the @hook decorated handlers to a minimum, and hand off to the @when decorated handlers as soon as possible.11:39
simonklbyea, then I feel I'm on the right track at least :)11:40
stub@when('config.changed') has recently replaced many uses of @hook('config-changed') too11:40
simonklbnice nice11:40
simonklbbut to get the @hook decorator, do I still need to get the charm helpers package?11:40
simonklbor can I get that from the reactive package somehow?11:41
stubNo, you import the decorator from charms.reactive11:41
simonklbgreat11:41
simonklbmight be a good idea to include that in the charm create boilerplate, since they automatically include @when('X.installed') but not @hook('install')11:42
stubI haven't seen that boilerplate yet.11:42
stubsimonklb: oic. Yes, that boilerplate doesn't need the @hook11:43
simonklbI assume it's this one: https://github.com/juju-solutions/template-reactive-python11:44
simonklbso I can run it without the install hook?11:44
stubThe handler is invoked because the state is not set (@when_not('foo.installed')), and at the end of the handler it sets the foo.installed state so it doesn't get reinvoked a second time.11:44
stubyes, you can run without the install hook11:44
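Put together, a small sketch of that pattern with no install hook at all; the 'myapp' names are illustrative:

    from charmhelpers.core import hookenv
    from charms.reactive import when, when_not, set_state


    @when_not('myapp.installed')
    def install():
        # Invoked on the first dispatch because the state isn't set yet;
        # setting it below means this never runs again.
        hookenv.status_set('maintenance', 'installing myapp')
        # ... fetch and install the payload here ...
        set_state('myapp.installed')


    @when('myapp.installed', 'config.changed')
    def reconfigure():
        # config.changed is set by the base layer whenever a config option
        # changes, replacing most uses of @hook('config-changed').
        hookenv.status_set('active', 'ready')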
simonklbI think I had some error like "missing hook: install" or similar when I deployed it before11:45
simonklbI'll give it another try11:45
stubYou need to 'charm build' it before it can be deployed. That adds a lot of the missing boilerplate, by pulling in the layers you declare in layers.yaml11:46
simonklbthat might have been it, because now it runs :)11:47
marcoceppisimonklb: https://jujucharms.com/docs/devel/developer-getting-started and you should check out kubernetes charm or any of the bigdata charms on https://bigdata.juju.solutions12:59
simonklbmarcoceppi: any eta on juju 2.0 support in amulet?13:01
marcoceppisimonklb: it should be there, tvansteenburgh has the latest on that13:01
simonklbmarcoceppi: it looks like it's still looking for environments and not models13:01
beisnerjamespage, gnuoy - wow upstream merge queue/activities seem to be taking many hrs13:02
simonklbis there any newer version than the master on github?13:02
tvansteenburghsimonklb: you need the latest deployer and jujuclient13:02
simonklbtvansteenburgh: where can I get them? ppa or source?13:04
tvansteenburghsimonklb: https://launchpad.net/~ahasenack/+archive/ubuntu/juju-deployer-daily13:04
tvansteenburghsimonklb: https://launchpad.net/~ahasenack/+archive/ubuntu/python-jujuclient13:04
=== scuttle|afk is now known as scuttlemonkey
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
simonklbtvansteenburgh: thanks13:47
simonklbtvansteenburgh: still getting the same error - it's trying to fetch the environment.yaml from the old juju home path13:52
simonklblooking at the amulet code - this seems to be where I end up https://github.com/juju/amulet/blob/master/amulet/helpers.py#L16813:53
simonklbI also saw that there was an issue for this: https://github.com/juju/amulet/issues/11613:53
tvansteenburghsimonklb: paste the traceback please13:57
tvansteenburghsimonklb: i see the problem13:58
tvansteenburghsimonklb: my juju2 branch was never merged :/13:58
marcoceppisimonklb: that is features missing in amulet for juju 2.0, not amulet supporting juju 2.013:58
tvansteenburghsimonklb: okay it was just merged13:59
simonklbyea sorry for being unclear13:59
marcoceppitvansteenburgh: weird, I reviewed it and never merged it13:59
tvansteenburghsimonklb: please repull master and try again13:59
tvansteenburghmarcoceppi: np13:59
=== natefinch is now known as natefinch-afk
simonklbtvansteenburgh: seem to be working now, thanks!14:09
cory_fuCan I get a review of CLI for layer options: https://github.com/juju-solutions/layer-basic/pull/5814:43
urulamajamespage: if you want to get rid of revision 40 in /u/openstack-charmers-next (which is marked as current development charm), you'll have to do "charm publish ~openstack-charmers-next/bundle/openstack-base-43 --channel=development" which will stop listing revision 40 as alternative development charm14:56
jamespageurulama, ack15:02
urulamasame for any duplicated stuff that you want to get rid of (not duplicate for development revisions)15:03
jamespageurulama, I see something niggly for charms published with series in metadata15:13
jamespageurulama,15:14
jamespagecs:~openstack-charmers-next/ntp15:14
jamespageis one of those; but I need to write bundles which are backwards compat with 1.25.515:14
jamespagewhich needs cs:~openstack-charmers-next/xenial/ntp15:14
jamespagewhich is resolvable, but I can't push a bundle with that in it15:14
urulamajamespage: on the phone, gimme 10min15:16
jamespageurulama, np15:22
jamespagerockstar, we decided that placing control plane components in lxd containers on compute nodes running nova-lxd was a no-no right?15:33
rockstarjamespage: yes, at least for right now.15:33
jamespagerockstar, okay15:33
rockstarIf we want to spend the time to fix that bug, we can probably support that.15:33
jamespagerockstar, later15:34
jamespagerockstar, crazy thing is that it works fine with lxc containers on 1.2515:35
jamespagecause lxc != lxd from a storage prespective...15:36
rockstarAs I recall, the problem was more of a charm specific problem, right? We're messing with the storage underneath AFTER we created some containers.15:36
ReSamgood morning!15:41
ReSamis it possible to migrate the state server to a different host?15:41
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
=== scuttlemonkey is now known as scuttle|afk
=== redir_afk is now known as redir
cory_fukwmonroe, kjackal: Did you guys see my PR for the puppet layer?  https://github.com/juju-solutions/layer-puppet/pull/216:09
=== natefinch-afk is now known as natefnich
=== natefnich is now known as natefinch
kwmonroeon it cory_fu16:11
kjackalcory_fu you are an artist! Nice work!16:12
bdxlayer-puppet-peeps: I've already worked out a lot of the kinks ... see here -> https://github.com/jamesbeedy/layer-puppet-agent16:14
cory_fuDude.  How did we miss that.16:15
cory_fubdx: Does that layer support masterless puppet?16:17
bdxit doesn't currently, although it could very easily  .... I should add a flag for masterless16:19
jcastrobdx: you're in portland right?16:19
jcastrohttp://www.devopsdays.org/events/2016-portland/16:19
bdxjcastro: yea16:19
bdxjcastro: crazy! I'm all about it!16:20
jcastrook, holla at me if there's like a hotel cost or something16:21
bdxjcastro: perfect, I could use a night out on the town ... the site is kind of wimpy though ... where/how do I sign up?16:22
bdxooooh nm 'propose'16:23
bdxgot it16:23
bdxcharmers: what am I doing wrong here -> https://github.com/jamesbeedy/interface-memcache/blob/master/requires.py16:52
marcoceppibdx: what error are you getting/what are you expecting?16:58
bdxmarcoceppi: I am not even getting an error, I'm just not getting anything returned from memcache_hosts()17:08
jamespagebeisner, hmmm17:09
jamespagebeisner, working out the release procedure with git/gerrit17:10
jamespageI was kinda expecting to delete the stable branch and then re-create it17:10
jamespagebut apparently deleting branches is beyond my powers...17:10
* jamespage thinks...17:10
mgzjamespage: can you move branches?17:18
iceybeisner: jamespage any chance on getting that c-h change merged?18:00
firlanyone know the password/user combo for curtin installs with MAAS? I am trying to diagnose something and I forgot18:03
=== scuttle|afk is now known as scuttlemonkey
beisnerthedac, this looks ready to land.  ready?  https://review.openstack.org/#/c/307480/18:11
thedacbeisner: yes, please18:12
bdxmarcoceppi: got an error for ya -> http://paste.ubuntu.com/15935177/18:12
marcoceppibdx: interesting18:12
bdxmarcoceppi: here's where/how I'm using it -> https://github.com/jamesbeedy/layer-sentry/blob/master/reactive/sentry.py18:15
bdxhttps://github.com/jamesbeedy/layer-sentry/blob/master/reactive/sentry.py#L131-14218:15
jamespagemgz, not sure18:20
jamespagelike a rename?18:21
mgzjamespage: yeah, I'd expect the perms on a rename to be the same as delete, but maybe not?18:21
mgzjamespage: also, unrelated, I'm interested in any results/issues you have with testing using the 'lxd' name in bundles with 2.018:22
jamespagemgz, well that generally worked ok18:22
jamespagemgz, apart from the fact that juju managing lxd containers and the lxd charm we have for nova-lxd changing storage config underneath it rather exploded...18:23
beisnerjamespage, yah i've not jumped into how upstream manages release tags and branches.  how goes?18:28
beisnergnuoy, trusty-icehouse n-api looks good now on metal;  cholcombe we're now back to the block devices things and I've got that caught by the tail on metal atm.18:31
cholcombebeisner, sweet, can i log in and look?18:31
beisnercholcombe, http://pastebin.ubuntu.com/15935632/18:33
cholcombebeisner, what's the ceph yaml file look like that you deployed with?18:34
beisnercholcombe, http://bazaar.launchpad.net/~ost-maintainers/openstack-mojo-specs/mojo-openstack-specs/view/head:/helper/bundles/baremetal7-next.yaml18:34
cholcombeso it should've just used sdb18:35
cholcombeas partition of sdb is mounted18:35
beisnerthat's what it does everywhere but trusty-icehouse18:35
cholcombehmm18:35
kjackalcory_fu, kwmonroe: Do we want the bigtop plugin to be a subordinate of ANY other charm (juju-info interface) or only the ones implementing the hadoop-plugin interface, or both? (I vote for only the ones having the hadoop-plugin interface)18:36
cory_fuDefinitely NOT juju-info.  :)18:37
cory_fuYes to hadoop-plugin18:37
cory_fuThough, if we're going to extend that to support other services, I guess we'll need to rename it from "hadoop" plugin18:37
cory_fuBut we can do that down the road18:37
cholcombebeisner, can you pgrep ceph-osd ?18:38
cholcombethat's the function i believe it's calling to determine if it should block on no block devices found18:38
beisnerempty-handed, no process18:39
cholcombethat's the problem18:40
cholcombesomething happened that prevented ceph-osd from starting18:40
cholcombecan you dump the logs?18:40
=== scuttlemonkey is now known as scuttle|afk
beisnercholcombe, fwiw if you're after logs, we've got all logs for all units on all jobs18:54
cholcombeyeah we might want to just keep moving on testing and save the logs for me to poke at18:55
cholcombebeisner, something weird seems to be happening with the hardening function18:55
cholcombeso it started working on osdizing the disk and then it seemed to have stopped only to think later that the /dev/sdb is in use and bails18:56
cholcombebeisner, i'm not entirely sure after it formats the disk why it seems to bail and then come back later confused about the disk being in use18:57
beisnercholcombe, i'll leave the enviro up till this eve, will be out of pocket this afternoon.18:58
cholcombebeisner, ok thanks18:58
cholcombebeisner, if you need to destroy it go ahead though.  I don't want to hold the pipeline up18:58
beisnercholcombe, this (bug) is pretty much the place to be, i think all other metallic issues are sorted ;-)18:59
=== redir is now known as redir_lunch
=== CyberJacob is now known as zz_CyberJacob
stormmorethis seems odd, I have set up vlans on my MAAS server and used juju bootstrap to create my first node. However, when I ssh into the node and try to ping all the maas interfaces, they are not all pingable19:59
cholcombebeisner, awesome.  i'll keep digging20:03
=== redir_lunch is now known as redir
cory_fumarcoceppi: I replied to your last comment on the layer options CLI.  I'd like to get a first pass through, but if you think chaining / nested options is important for bash charms, I can add that.20:14
marcoceppicory_fu: we have until oct for this to land, so there's time20:15
cory_fumarcoceppi: Eh?20:15
cory_fuIBM needs this ASAP20:15
marcoceppicory_fu: this is totes a 2.2 milestone, yeah?20:15
marcoceppiOH20:15
cory_fuAnd it's in the base layer20:15
marcoceppiIT"S A LAYER20:15
cory_fu:)20:15
stormmoreOK so I am going to rule out juju being the source of my network drama by doing a basic deploy in MAAS20:19
freakhi20:21
freakhi everyone20:21
freaki need help regarding ceph-osd20:22
freaki successfully deployed openstack earlier20:22
freakbut yesterday i switched off my nodes20:22
freaknow today when i powered them up20:22
freakall components came up successfully20:22
freakbut ceph-osd is showing error20:22
freakhook failed "update status"20:23
marcoceppifreak: can you try to `juju resolved ceph-osd/#` where # is the number of the failed unit?20:23
marcoceppifreak: also, /var/log/juju/unit-ceph-osd-#.log would be helpful20:23
freakok let me try that...although from juju-gui i also clicked the resolve option but it didn't work..ok this time i try from cli20:24
freakand also share log20:24
bdxmarcoceppi: I found the problem with my unit ids!20:34
marcoceppibdx: \o/20:34
bdxmarcoceppi: when my memcache-relation-joined hook runs, it errors 'relation not found'20:35
bdxmarcoceppi:it runs `relation-list --format=json -r memcache:93`20:36
bdxmarcoceppi: when I debug-hooks into the instance and run `relation-list --format=json -r memcache:95` it succeeds20:36
bdxso what am I to gather from this, that memcached charm is setting an incorrect relation id?20:37
freakmarcoceppi  i executed the juju resolved command, here is the output  http://paste.ubuntu.com/15937564/20:38
freakit says already resolved20:38
freakbut my unit is still in hook failed update status20:38
josefreak: the log should give us a bit more info, can you pastebin it please? should be located in /var/log/juju/unit-ceph-osd-0.log20:42
joseinside the ceph-osd machine, that is20:43
freakok jose let me take the log20:43
josethank you :)20:43
freakdear jose/marcoceppi here is the log file output http://paste.ubuntu.com/15937716/20:50
joseI have to leave but maybe Marco can give you a hand later :)20:51
freakok no issue20:52
=== natefinch is now known as natefinch-afk
bdxwtf is going on in juju-dev22:01
bdxsomeone kick that guy22:01
alexisbbdx, we are working on it22:05
cholcombebeisner, i think if we revert c94e0b4b on ceph-osd we should be in the clear.  I think I have an idea for a fix but there's no way I can code it and complete it by tonight22:09
cholcombebeisner, do you have a bug for that ceph-osd fail?  I can link the revert task to it22:10
beisnercholcombe plz see the spreadsheet22:12
cholcombebeisner, ok.  I don't see anything for ceph-osd though22:13
cholcombei'll write up a new bug if that's ok?22:14
cholcombebeisner, i put in the revert but of course it has a merge conflict lol22:23
freakcholcombe  when can i expect this ceph-osd issue to be resolved?22:24
beisnercholcombe. Mojo tab. Trusty-Icehouse22:24
cholcombefreak, well I can revert the change and we should be fine.  However the feature is going to need a little more work.  I didn't realize that ceph would fail the ceph osd crush create-or-move command if a bucket didn't exist22:27
freakok thanks cholcombe22:28
cholcombeceph crush internally i believe has all the default buckets even if you don't specify them but on firefly the command is failing22:29
firlanyone know if there is an easy way to have multiple ceph zones with juju charms and visible in horizon?22:47
cholcombefirl, what do you mean by zones?22:50
firlcholcombe: availability zones22:50
firllike one zone for ssd, one zone for sata spin, one for scsci22:50
cholcombeah yes.  There's a patch that we're going to work on for that but it's not ready yet22:51
cholcombefirl, if you'd like to give it a shot to write the patch yourself i'd be happy to guide you22:51
cholcombefirl, what i mean is I haven't started on that patch yet but I'm happy to get more help22:52
firlgotcha, I don’t have the cycles right now :( just didn’t know if it was available yet.22:53
cholcombei have a juju action that I wrote to make new pools ( availability zones )22:54
firlnice!22:54
cholcombeso if you had a crush rule that was just ssd's or just spinners it could use that when creating the pool22:54
cholcombei think i wasn't properly understanding what you meant by az.  In ceph speak i think you're referring to a pool22:54
firli would be ok with different pools, or even different servers servicing Openstack Availability zones22:55
cholcombefirl, that action is part of ceph-mon-next and should land in stable in a few days once 16.04 is released22:55
cholcombeyou'll have to hand make the crush rule for now but in the future the charm will have support to create custom rules22:56
cholcombefirl, https://github.com/openstack/charm-ceph-mon/blob/master/actions/create-pool22:56
firlhrmm i was thinking more of having multiple ceph installs across the cluster22:56
cholcombehttps://github.com/openstack/charm-ceph-mon/blob/master/actions.yaml#L49 profile name is what you're looking to pass in22:57
cholcombeoh i see22:57
cholcombethere's no reason you couldn't deploy the same charm twice with a different name for another ceph cluster22:57
firlyeah, but not sure how the relationships would work or show up22:58
cholcombei'm not either.  i haven't tried that yet22:59
firlbut I think you are right, having named pools also show up would be nice23:00
cholcombethey should show up and relate just fine but i'm not certain of it23:00
=== natefinch-afk is now known as natefinch
