[03:03] <jamespage> dosaboy, gnuoy: https://review.openstack.org/#/c/312229/ ready to land if good with you
[03:16] <lazyPower> Tribaal when you're around i could use a patch pilot to land this one so we can get some clean tests back on bundles using es :) https://code.launchpad.net/~lazypower/charms/trusty/elasticsearch/amulet-test-bump/+merge/293703
[12:42] <dosaboy> jamespage: landed
[13:12] <jamespage> beisner, gnuoy: grrrrr https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_amulet_full/openstack/charm-neutron-gateway/299205/3/2016-05-04_03-12-11/test_charm_amulet_full/juju-stat-tabular-collect.txt
[13:13] <jamespage> api-metadata still appears to be racy with conductors on xenial-mitaka
[13:55] <verterok> Hi, I'm using juju 1.25 and getting my bootstrap node disk filled by blobstore.[0-9] files (512M each). the machine has a small-ish disk.
[13:56] <verterok> is there a way to clean up the blobstore files? or is my only choice to add a volume and move stuff around?
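No answer appears in the log, but the files verterok describes can at least be located mechanically. On Juju 1.25 the bootstrap node's MongoDB data (including blobstore files) conventionally lives under /var/lib/juju/db; the sketch below uses a local stand-in directory so the find pattern can be exercised safely, and does not attempt the actual pruning, which the log leaves unresolved:

```shell
# Stand-in for the controller's data directory (real path would need sudo)
mkdir -p juju-db
truncate -s 1M juju-db/blobstore.0     # stand-in for the 512M blobstore.[0-9] files
truncate -s 1K juju-db/journal.log     # unrelated small file, should not match
# Report only the large blobstore files
find juju-db -name 'blobstore.[0-9]*' -size +512k
```

On a real 1.25 bootstrap node the equivalent would be run with sudo against /var/lib/juju/db; deleting anything there by hand risks corrupting the controller's MongoDB, so this only identifies the space consumers.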
[14:05] <shilpa> hi, I am pushing my charm to the charm store; when I do charm login
[14:05] <shilpa> I see this error: ERROR login failed: invalid OAuth credentials
[14:09] <shilpa> how can I resolve this issue, please?
[14:27] <cholcombe> what's the new method to get recommended status in the charm store?  I don't see it in the devel docs
[14:42] <Tribaal> lazyPower: ack - today's been a bit of a mess but I'll have a look
[14:43] <lazyPower> Tribaal - ack. It's a pretty minor change: the hook files were mode changes, and the tests were refactored
[14:43] <Tribaal> lazyPower: by the way, trade you? https://code.launchpad.net/~tribaal/charms/trusty/ntpmaster/python3-hooks/+merge/293587 :)
[14:45] <lazyPower> cholcombe - it's dependent on the RQ Tim is working on. otherwise you follow the existing process of opening a bug, attaching all the info to it, and we handle the review manually
[14:46] <Tribaal> lazyPower: funny that symlink property changes show up as file additions.
[14:46] <lazyPower> Tribaal actually i had the same thing just happen to ntpmaster
[14:46] <lazyPower> Bazaar (bzr) 2.7.0
[14:46] <Tribaal> (in the diff I mean)
[15:11] <beisner> jamespage, yah but woot for the smarter workload status goods ;-)
[15:42] <cholcombe> lazyPower, ok maybe i'll tell people to wait a week or so for the new process?
[15:45] <lazyPower> cholcombe marco sent that mail out like a week or two ago :)
[15:45] <cholcombe> lazyPower, whoops.  i need to go catch up on my emails :)
[15:47] <lazyPower> cholcombe - it happens to me too. this is a byproduct of drinking from a firehose :)
[15:47] <cholcombe> indeed
[15:48] <lazyPower> we may have blown you into the next county, but boy that cold water is refreshing eh?
[15:48] <cholcombe> :D
[16:00] <firl_> any juju ceph-mon charmers there?
[16:03] <marcoceppi> firl_: I'll grab one
[16:03] <cholcombe> firl_, yo
[16:03] <marcoceppi> found 'im
[16:03] <firl_> haha thanks!
[16:03] <firl_> for xenial-mitaka
[16:04] <firl_> ceph has been deprecated for new deploys
[16:04] <cholcombe> correct
[16:04] <firl_> so for a 6 node cluster
[16:04] <cholcombe> we'd like people to use ceph-mon going forward
[16:04] <firl_> 3 mons and 6 ceph-osd units?
[16:04] <cholcombe> yeah that sounds ok
[16:04] <firl_> so ceph-osd colocates with ceph-mon
[16:04] <cholcombe> you can even put the ceph-mon's into containers if you want
[16:04] <firl_> and I can have ceph-mon on an LXC now
[16:04] <firl_> perfect
[16:05] <cholcombe> the mons are lightweight
[16:05] <firl_> ok awesome
[16:05] <firl_> thanks cholcombe
[16:06] <cholcombe> icey, and i thought it would make it clearer for people to deploy a monitor cluster and an OSD cluster
[16:06] <firl_> yeah
[16:06] <firl_> just a paradigm shift and wanted to make sure it was right
[16:06] <cholcombe> by default ceph-mon waits for 3 mons
[16:07] <firl_> perfect
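The layout agreed above (3 mons in containers, 6 OSD units, mons waiting for a quorum of 3) can be sketched as a minimal bundle. Charm URLs, machine numbers, and placement are illustrative assumptions based on this conversation, not a tested deployment:

```shell
# Write a minimal bundle for the topology discussed (all placements illustrative;
# adjust machine numbers for your environment)
cat > ceph-bundle.yaml <<'EOF'
services:
  ceph-mon:
    charm: cs:xenial/ceph-mon
    num_units: 3                 # matches the default quorum ceph-mon waits for
    to: [lxc:0, lxc:1, lxc:2]    # mons are lightweight, so containers are fine
  ceph-osd:
    charm: cs:xenial/ceph-osd
    num_units: 6                 # one unit per storage node in the 6-node cluster
relations:
  - [ceph-mon, ceph-osd]
EOF
grep -c 'charm:' ceph-bundle.yaml
```

The bundle would then be deployed through the GUI import discussed below, or via quickstart on 1.25-era tooling.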
[16:07] <firl_> now, any JUJU gui charmers on? hah
[16:20] <marcoceppi> firl_: I can possibly help, but /me pokes jrwren and hatch
[16:20] <firl_> :)
[16:20] <firl_> we are having issues with deploying a bundle file to juju gui and the options being maintained
[16:21] <firl_> We are about to try it from the command line ( have to find the command still )
[16:22] <jrwren> firl_: can you describe the issues?
[16:22] <firl_> sure
[16:23] <firl_> I drag bundle file to juju gui via import, wait for deploy. After deploy finishes ( juju status ) options like os-data-network, fsid, etc for charms do not get applied
[16:23] <firl_> want the bundle file as reference?
[16:23] <jrwren> firl_: yes.
[16:24] <firl_> ok 1 second
[16:25] <firl_> jrwren http://pastebin.com/i5W1pw27
[16:27] <marcoceppi> firl_: what version of Juju?
[16:27] <firl_> of juju gui or juju
[16:27] <firl_> or agent
[16:28] <jrwren> I got nothing.
[16:28] <jrwren> Makyo: ^^?
[16:28] <firl_> “1.25.5-trusty-amd64” juju cli
[16:28] <firl_> juju-gui = latest charm
[16:29] <firl_> juju tools on state machine = https://streams.canonical.com/juju/tools/agent/1.25.5/juju-1.25.5-trusty-amd64.tgz
[16:32] <firl_> If the juju quickstart works should we open a juju gui bug?
[16:33] <Makyo> firl_: quickstart will deploy the bundle in the gui.  Is it all of the config options that go missing?
[16:35] <firl_> I think only like 1 config value stayed the same ( neutron-gateway ext-port )
[16:35] <firl_> re running now
[16:40] <Makyo> firl_: it looks like there's a problem with the bundle, though it's still a bug that that information isn't surfaced in the gui
[16:42] <Makyo> firl_: you can do `sudo pip install jujubundlelib`, then check the output of the bundle parser with `getchangeset bundle.yaml`
[16:42] <Makyo> Those errors should be surfaced in the gui, though, I'll file a bug
[16:43] <firl_> makyo, yeah switching to ceph-mon brought some of those up
[16:44] <firl_> it looks like it deploys other charms without options because of another charm's invalid options
[16:44] <firl_> I think better UI feedback would be awesome
[16:45] <Makyo> firl_: agreed, gonna file a thing on that
[16:50] <cory_fu> tvansteenburgh: Do we have any idea how many tests are using Amulet's action_fetch (specifically, the return value)?
[16:51] <tvansteenburgh> cory_fu: no, but i'm working on that code right now, what's up?
[16:51] <cory_fu> Are you?  What are you changing, specifically, because I was also working on some changes?
[16:52] <tvansteenburgh> cory_fu: um, i have a shit load of changes
[16:52] <Makyo> firl_: filed at https://github.com/juju/juju-gui/issues/1614 - will take a look today, time permitting
[16:52] <tvansteenburgh> cory_fu: all juju2-compat related
[16:52] <cory_fu> tvansteenburgh: So, the problem that I'm having is that the return value of action_fetch doesn't provide any way to distinguish between timeout and failure, and if an action doesn't use action-set, then that is also indistinguishable from failure/timeout
[16:52] <tvansteenburgh> cory_fu: juju2 compat for actions, config, and wait
[16:53] <cory_fu> The other issue I was looking into fixing was that it's a PITA to get the unit name to pass in, so I was looking at moving the action methods to UnitSentry, so you'd call it like self.d.sentry['service'][0].action_do('foo')
[16:54] <cory_fu> Instead of self.d.action_do(self.d.sentry['service'][0].info['unit_name'], 'foo')
[16:54] <tvansteenburgh> cory_fu: yeah, shoulda been that way from the start
[16:55] <cory_fu> It actually wasn't hard to move the methods, and I'm just updating the tests now, but I assumed we'd want to keep shims in the old location for backwards compat.
[16:55] <cory_fu> But maybe you're already moving them?
[16:55] <tvansteenburgh> yeah we have to
[16:55] <tvansteenburgh> no
[16:55] <tvansteenburgh> cory_fu: problem is i changed all those method bodies
[16:55] <tvansteenburgh> i hate you right now
[16:56] <cory_fu> ha
[16:57] <cory_fu> tvansteenburgh: Well, I care less about moving them than I do about the return value of action_fetch
[16:58] <cory_fu> Because I have actions that are just success / failure and there's no way to tell if they worked
[16:59] <tvansteenburgh> yeah. well i would prefer that you wait for mine and then rebase on that
[16:59] <tvansteenburgh> or i can make the change myself
[16:59] <cory_fu> +1 to you doing it.  It should be a trivial change, but it would also be backwards incompatible with the documented return value.  :(
[17:00] <tvansteenburgh> yeah. i might add a kw param to control the behavior instead
[17:01] <cory_fu> Well, that's not entirely true.  It's documented to return  "Action results, as json" which I took to mean the full output of `juju action fetch` but actually is json.loads(`juju action fetch`)['results'] if not (failure or timeout) else {}, which drops a lot of useful info
[17:01] <cory_fu> And isn't at all clear from the docs
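The distinction cory_fu wants could be kept by returning the whole payload rather than just the `results` key. A rough sketch against a made-up sample of `juju action fetch` output (the sample contents and field names are illustrative, following the JSON shape described above):

```shell
# Sample standing in for the output of `juju action fetch <id>` (illustrative)
cat > action.json <<'EOF'
{"status": "failed", "message": "exit status 1", "results": {}}
EOF
# Returning only ["results"] would yield {} here, indistinguishable from a
# success that never called action-set; keeping "status" preserves the difference.
python3 -c "import json; d = json.load(open('action.json')); print(d['status'])"
```

A keyword parameter selecting full-payload output, as floated below, would make this opt-in and avoid breaking tests that rely on the current `results`-only return value.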
[17:02] <tvansteenburgh> even so, there could be tests relying on the current behavior
[17:02] <cory_fu> Anyway, a keyword param for the full output would be acceptable
[17:03] <tvansteenburgh> it's either that or bump the major version,
[17:03] <tvansteenburgh> which might not be out of the question considering the juju2 support that's going in
[17:04] <tvansteenburgh> dunno, will ponder while i watch my paint dry, i mean tests run
[17:07] <thedac> gnuoy: tinwood: jamespage: Trivial fix to keystone credentials bug https://review.openstack.org/#/c/312610/
[17:07] <thedac> or wolsen if you have time ^^
[17:08] <wolsen> thedac, I'll take a look now
[17:08] <thedac> Thank you
[17:09] <wolsen> thedac, is this something in stable?
[17:10] <thedac> wolsen: no on master. The identity-credentials relation just landed
[17:10] <wolsen> thedac, great
[17:10] <thedac> This only affects that new relation
[17:11] <wolsen> thedac, +2
[17:11] <thedac> wolsen: thank you, sir
[17:11] <wolsen> thedac, oh don't worry I'll have a mongo charm review coming soonish
[17:12] <wolsen> thedac, I might tap you on the shoulder to give it a review as well
[17:12] <wolsen> ;-)
[17:12] <thedac> no problem, ping me when it is ready
[17:12] <wolsen> thedac, sure, it's the issue that bradm ran into when clustering mongodb in vivid+
[17:34] <marcoceppi> cory_fu: don't hate, but what do you think of charms.tool for a module for charmhelpers.core.hookenv?
[17:46] <cory_fu> marcoceppi: Why would I hate?  But weren't we going to make it more specifically "juju hook tools"
[17:47] <cory_fu> charms.hook_tools perhaps?
[17:47] <cory_fu> Actually, you're right, we should drop "hook"
[17:48] <cory_fu> marcoceppi: So yeah, +1 for charms.tool
[18:29] <thedac> gnuoy: tinwood: jamespage:  Not sure what the procedure is for a new interface. Here is the keystone credentials interface: https://github.com/thedac/charm-interface-keystone-credentials and it can be tested with https://github.com/thedac/mock-keystone-client
[18:31] <tinwood> thedac, I wrote the tests for the keystone-client directly into the interface git repo - but it's not merged yet as I'm not sure if the relevant fix on charm-tools has landed into the PPA yet.  Maybe gnuoy knows?
[18:32] <tinwood> thedac, interface-keystone rather than keystone-client.
[18:32] <thedac> right, I left tests out until that is sorted
[18:34] <tinwood> thedac, ah, ok good.
[18:38] <firl_> cholcombe you still around?
[18:41] <gnuoy> thedac, I'll take a look today
[18:41] <thedac> gnuoy: thanks
[18:44] <firl_> or any other ceph-osd charmers around?
[18:59] <gnuoy> thedac, landed and up on http://interfaces.juju.solutions/
[18:59] <thedac> gnuoy: sweet. Thank you
[18:59] <gnuoy> np
[19:19] <mattrae> hi, how does Juju determine which interface to use for the private-address value? I’m running into an issue in our manual provider environment where it is deciding to use the storage access network instead of the openstack management network where the default gateway resides
[19:24] <lazyPower> mattrae - that depends on the provider and which version of juju.
[19:25] <lazyPower> mattrae - but as i understand it, in maas it depends on how you've configured/tagged the networks in maas 1.9.   you model the networks there, and that information is then fed to juju
[19:26] <mattrae> lazyPower: ahh cool, this is using the manual provider and juju 1.25.5
[19:26] <lazyPower> mattrae - i'm not entirely certain but i think that is "autosensed" so it's likely it picked the wrong interface
[19:27] <lazyPower> mattrae - i would most def. hit the list up with this question :(  I don't think the people that are intimately familiar with our networking model are US based.
[19:28] <mattrae> lazyPower: cool thanks, thats good information :) do you know who might be familiar with the manual provider?
[19:30] <mattrae> ohh cool, i'll email the list, thanks!
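mattrae's complaint is that private-address came from the storage network rather than the network holding the default gateway. One rough way to see which interface a unit's default route lives on is to parse iproute2-style output; the routes below are an illustrative sample, not from mattrae's environment:

```shell
# Sample `ip route` output (illustrative); the interface carrying the default
# route is the management network where the default gateway resides.
cat > routes.txt <<'EOF'
default via 10.10.0.1 dev eth0
10.10.0.0/24 dev eth0  proto kernel  scope link
192.168.80.0/24 dev eth1  proto kernel  scope link
EOF
# Print the interface named after "dev" on the default-route line
awk '/^default/ {for (i = 1; i < NF; i++) if ($i == "dev") print $(i+1)}' routes.txt
```

On a live unit the same check would run against `ip route` directly; comparing that interface's address with what `unit-get private-address` reports would confirm whether Juju picked the wrong network.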
[21:12] <LiftedKilt> something is wrong with the ceph cluster in the openstack-lxd bundle - the cluster hangs on creating pgs
[21:13] <LiftedKilt> also the ceph-mon charm creates the /etc/ceph/ceph.conf file without whitespace on the last line, causing the last line to be ignored until manually edited
[21:13] <LiftedKilt> not sure if that is what is causing the problems with PG creation
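The report above (a final config line with no terminating newline being ignored) is easy to check and fix mechanically. The keyring line below is illustrative, not the charm's exact output:

```shell
# Create a file that, like the reported ceph.conf, lacks a trailing newline
printf '[mds]\nkeyring = /var/lib/ceph/mds/keyring' > ceph.conf
# tail -c1 prints the last byte; command substitution strips a trailing newline,
# so a non-empty result means the final line is unterminated
if [ -n "$(tail -c1 ceph.conf)" ]; then
    echo >> ceph.conf     # append the missing newline
fi
tail -c1 ceph.conf | od -An -c
```

The real fix belongs in the charm's template rendering, of course; this only shows how to detect the condition on an affected node.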
[21:13] <LiftedKilt> beisner jamespage: any ideas?
[21:47] <lazyPower> LiftedKilt - Did you bug that? the ceph-mon newline for sure should be bugged
[21:48] <LiftedKilt> lazyPower: I hadn't yet
[21:49] <LiftedKilt> lazyPower: wasn't sure if I should wait and bug them together
[21:50] <lazyPower> i'm not sure about the bundle, but if a config file is shipping with known defects, we should get that patched in short order :)
[21:50] <lazyPower> cholcombe is pretty quick about that stuff too
[21:52] <LiftedKilt> lazyPower: ok I submitted a bug report for the mon charm / newline issue https://bugs.launchpad.net/charms/+source/ceph/+bug/1578403
[21:52] <mup> Bug #1578403: newline <ceph (Juju Charms Collection):New> <https://launchpad.net/bugs/1578403>
[21:53] <lazyPower> LiftedKilt - thanks a bunch for the bug. I'll make sure the ceph team is aware of a bitesized fix :)
[21:53] <LiftedKilt> great - thanks. I'm hoping that fixes the other issue as well actually.
[21:55] <pacavaca> Does juju provide something for accessing service logs (after it's been deployed)? Or can it only show me its own logs, so that for service logs I should go to the machine/container and locate them manually?
[21:58] <cholcombe> LiftedKilt, one sec
[22:01] <cholcombe> LiftedKilt, yeah the whitespace thing is a bug that needs to be fixed
[22:02] <LiftedKilt> is the mds keyring config line being ignored on cluster bootstrap potentially responsible for the pgs failing to create?
[22:35] <LiftedKilt> cholcombe lazyPower: filed a bug report for the second portion of the failure after talking to the ceph team https://bugs.launchpad.net/charms/+source/ceph/+bug/1578419
[22:35] <mup> Bug #1578419: ceph and ceph-mon charm attach osds improperly, prohibiting pgs from creating <ceph (Juju Charms Collection):New> <https://launchpad.net/bugs/1578419>
[22:35] <jamespage> hmm
[22:36] <jamespage> LiftedKilt, I need to check on the recency of the openstack-charmers-next charms
[22:36] <jamespage> the automated ingestion process was broken for a while and I think they are all stale
[22:36] <jamespage> the stable charms in the charm store will be up-to-date
[22:36] <LiftedKilt> if there's a better group of ceph charms to use, I'm all ears
[22:36] <jamespage> cs:xenial/ceph-mon
[22:37] <jamespage> cs:xenial/ceph-osd
[22:37] <LiftedKilt> ok - I will grab those into the bundle
[22:37] <jamespage> please do that for all of the charms - everything in -next is stale atm
[22:37] <LiftedKilt> I assumed the charmers-next charms were the latest
[22:37] <jamespage> LiftedKilt, they normally are but...
[22:37] <jamespage> not right now
[22:38] <jamespage> release of charms was on the 21st - most of the team have been travelling since then so not much drift in development yet
[22:38] <jamespage> so stable is less than two weeks old atm
[22:38] <LiftedKilt> that would make sense why the deployment worked on the trusty deployment then - the openstack base bundle uses the regular charms
[22:39] <jamespage> LiftedKilt, yah - I'm working on an openstack-lxd bundle based on trunk charms - but ran out of test time...
[22:39] <cholcombe> LiftedKilt, thanks
[22:40] <jamespage> LiftedKilt, I think there was a change we backed out which picked the zone information from juju but that was not working well/consistently
[22:40] <jamespage> so we backed it out quite late in the charm release cycle.
[22:40] <LiftedKilt> jamespage: ok that makes sense
[22:48] <wolsen> marcoceppi, ping - want to get your thoughts on something for mongodb charm on xenial
[22:48] <marcoceppi> wolsen: absolutely
[22:49] <wolsen> marcoceppi: so currently the charm specifies either master = True or slave = True in /etc/mongodb.conf and attempts to start with --replSet in order to enable replica set replication
[22:49] <marcoceppi> wolsen: ah, the old charm
[22:49] <marcoceppi> wolsen: I'm working on a new charm that doesn't suck
[22:49] <marcoceppi> wolsen: but continue
[22:49] <wolsen> marcoceppi: hah, well I'm just trying to put in just enough of a fix to get it working until you get yours out the door
[22:49] <marcoceppi> wolsen: absolutely
[22:49] <LiftedKilt> jamespage: I noticed that the lxd charm in cs:xenial/lxd has changed the config from block-device to block-devices plural, and the description no longer mentions the ability to use a file path instead of a physical block device - did that functionality get removed?
[22:49] <wolsen> marcoceppi: so basically, replica-set replication is incompatible with master/slave replication
[22:50] <marcoceppi> right
[22:50] <wolsen> marcoceppi: the charm by default scales (juju add-unit) along the replica-set lines
[22:50] <wolsen> marcoceppi: afaict, the only way to support master/slave replication is to deploy two mongo services and specify the master/slave relationship after the fact
[22:51] <wolsen> marcoceppi: which feels broken
[22:51] <wolsen> marcoceppi: it feels the best way is to make the master config option and the replicaset config option mutually exclusive - but then, it seems odd to have a service that breaks on juju add-unit
[22:52] <wolsen> marcoceppi: which leads me to just want to deprecate the master option
[22:52] <wolsen> thoughts?
[22:52] <wolsen> (e.g. don't support the master/slave configuration - it's deprecated in mongodb 3.x+)
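wolsen's "mutually exclusive" point can be sketched as a small charm-side guard: when replica sets are configured, the master/slave options are ignored with a warning rather than producing a broken mongod invocation. Variable names and the warning text are hypothetical, not the actual charm code:

```shell
# Hypothetical guard: replica-set mode wins over legacy master/slave settings
master="True"
replset="myset"
if [ -n "$replset" ] && [ "$master" = "True" ]; then
    echo "warning: 'master' is incompatible with --replSet; ignoring it" >&2
    master="False"
fi
echo "master=$master replset=$replset"
```

Deprecating the option outright for xenial/wily, as agreed below, avoids even needing the guard on those series.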
[22:54] <marcoceppi> wolsen: remove for xenial, that's fine for me
[22:55] <wolsen> marcoceppi: well the trusty/mongodb charm would still be deployable to trusty
[22:55] <marcoceppi> wolsen: sure, so why not remove master/slave for xenial only
[22:56] <wolsen> marcoceppi: okay, I'll simply limit it for affected versions of mongodb (which includes xenial and wily)
[22:57] <marcoceppi> wolsen: why not cull it from the wily and xenial charms
[22:57] <wolsen> marcoceppi: is there a separate branch for wily and xenial for those?
[22:58] <marcoceppi> wolsen: there should be
[22:58] <marcoceppi> wolsen: if there isn't, then mongo isn't in those
[22:58] <cholcombe> LiftedKilt, yeah i believe you hit a bug with this patch: https://github.com/openstack/charm-ceph-mon/commit/aef61caa46e7fd6ba5b1280653d9aec5ffcd03d1
[22:58]  * wolsen looks
[22:59] <cholcombe> we discovered a problem where, if a crush bucket was missing, the ceph-osd would fail to start.  Your crushmap also doesn't look right.  It's missing the host bucket
[23:01] <LiftedKilt> ok - jamespage was saying that the charms in openstack-charmers-next are out of date. This patch made it into the generic xenial charms?
[23:01] <jamespage> yes
[23:01] <LiftedKilt> awesome thanks guys
[23:02] <cholcombe> LiftedKilt, sorry about that.  We had kind of a crazy March with a lot of stuff landing
[23:02] <LiftedKilt> cholcombe: I can imagine - everything was hitting major releases all at once
[23:03] <wolsen> marcoceppi: nope, no xenial or wily
[23:03] <marcoceppi> wolsen: patch, push to xenial only!
[23:03] <wolsen> marcoceppi: ack