=== natefinch-afk is now known as natefinch
[03:03] dosaboy, gnuoy: https://review.openstack.org/#/c/312229/ ready to land if good with you
[03:16] Tribaal when you're around i could use a patch pilot to land this one so we can get some clean tests back on bundles using es :) https://code.launchpad.net/~lazypower/charms/trusty/elasticsearch/amulet-test-bump/+merge/293703
=== scuttlemonkey is now known as scuttle|afk
=== frankban_ is now known as frankban
=== julen_ is now known as julenl
[12:42] jamespage: landed
[13:12] beisner, gnuoy: grrrrr https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_amulet_full/openstack/charm-neutron-gateway/299205/3/2016-05-04_03-12-11/test_charm_amulet_full/juju-stat-tabular-collect.txt
[13:13] api-metadata still appears to be racy with conductors on xenial-mitaka
[13:55] Hi, I'm using juju 1.25 and getting my bootstrap node disk filled by blobstore.[0-9] files (512M each). The machine has a small-ish disk.
[13:56] is there a way to clean up the blobstore files? or is my only choice to add a volume and move stuff around?
[14:05] hi, I am pushing my charm to the charm store, and when I do charm login
[14:05] I see this error: ERROR login failed: invalid OAuth credentials
[14:09] how can I resolve this issue please
[14:27] what's the new method to get recommended status in the charm store? I don't see it in the devel docs
[14:42] lazyPower: ack - today's been a bit of a mess but I'll have a look
[14:43] Tribaal - ack. It's a pretty minor change, the hook files were mode changes, and the tests were refactored
[14:43] lazyPower: by the way, trade you? https://code.launchpad.net/~tribaal/charms/trusty/ntpmaster/python3-hooks/+merge/293587 :)
[14:45] cholcombe - it's dependent on the RQ tim is working on. otherwise you follow the existing process of opening a bug, attaching all the info to it, and we handle the review manually
[14:46] lazyPower: funny that symlink property changes show up as file additions.
[14:46] Tribaal actually i had the same thing just happen to ntpmaster
[14:46] Bazaar (bzr) 2.7.0
[14:46] (in the diff I mean)
[15:11] jamespage, yah but woot for the smarter workload status goods ;-)
[15:42] lazyPower, ok maybe i'll tell people to wait a week or so for the new process?
[15:45] cholcombe marco sent that mail out like a week or two ago :)
[15:45] lazyPower, whoops. i need to go catch up on my emails :)
[15:47] cholcombe - it happens to me too. this is a byproduct of drinking from a firehose :)
[15:47] indeed
[15:48] we may have blown you into the next county, but boy that cold water is refreshing eh?
[15:48] :D
[16:00] any juju ceph-mon charmers there?
[16:03] firl_: I'll grab one
[16:03] firl_, yo
[16:03] found 'im
[16:03] haha thanks!
[16:03] for xenial-mitaka
[16:04] ceph has been deprecated for new deploys
[16:04] correct
[16:04] so for a 6 node cluster
[16:04] we'd like people to use ceph-mon going forward
[16:04] 3 mons and 6 ceph-osd units?
[16:04] yeah that sounds ok
[16:04] so ceph-osd colocates with ceph-mon
[16:04] you can even put the ceph-mons into containers if you want
[16:04] and I can have ceph-mon on an LXC now
[16:04] perfect
[16:05] the mons are lightweight
[16:05] ok awesome
[16:05] thanks cholcombe
[16:06] icey, and i thought it would make it more clear for people to deploy a monitor cluster and an osd cluster
[16:06] yeah
[16:06] just a paradigm shift and wanted to make sure it was right
[16:06] by default ceph-mon waits for 3 mons
[16:07] perfect
[16:07] now, any JUJU gui charmers on?
hah
[16:20] firl_: I can possibly help, but /me pokes jrwren and hatch
[16:20] :)
[16:20] we are having issues with deploying a bundle file to juju gui and the options being maintained
[16:21] We are about to try it from the command line ( have to find the command still )
[16:22] firl_: can you describe the issues?
[16:22] sure
[16:23] I drag the bundle file to juju gui via import, wait for deploy. After deploy finishes ( juju status ) options like os-data-network, fsid, etc for charms do not get applied
[16:23] want the bundle file as reference?
[16:23] firl_: yes.
[16:24] ok 1 second
[16:25] jrwren http://pastebin.com/i5W1pw27
[16:27] firl_: what version of Juju?
[16:27] of juju gui or juju
[16:27] or agent
[16:28] I got nothing.
[16:28] Makyo: ^^?
[16:28] "1.25.5-trusty-amd64" juju cli
[16:28] juju-gui = latest charm
[16:29] juju tools on state machine = https://streams.canonical.com/juju/tools/agent/1.25.5/juju-1.25.5-trusty-amd64.tgz
[16:32] If the juju quickstart works should we open a juju gui bug?
[16:33] firl_: quickstart will deploy the bundle in the gui. Is it all of the config options that go missing?
[16:35] I think only like 1 config value stayed the same ( neutron-gateway ext-port )
[16:35] re-running now
[16:40] firl_: it looks like there's a problem with the bundle, though it's still a bug that that information isn't surfaced in the gui
[16:42] firl_: you can do `sudo pip install jujubundlelib`, then check the output of the bundle parser with `getchangeset bundle.yaml`
[16:42] Those errors should be surfaced in the gui, though, I'll file a bug
[16:43] makyo, yeah switching to ceph-mon brought some of those up
[16:44] it looks like it deploys other charms without options because of another charm's invalid options
[16:44] I think better UI feedback would be awesome
[16:45] firl_: agreed, gonna file a thing on that
[16:50] tvansteenburgh: Do we have any idea how many tests are using Amulet's action_fetch (specifically, the return value)?
[16:51] cory_fu: no, but i'm working on that code right now, what's up?
[16:51] Are you? What are you changing, specifically, because I was also working on some changes?
[16:52] cory_fu: um, i have a shit load of changes
[16:52] firl_: filed at https://github.com/juju/juju-gui/issues/1614 - will take a look today, time permitting
[16:52] cory_fu: all juju2-compat related
[16:52] tvansteenburgh: So, the problem that I'm having is that the return value of action_fetch doesn't provide you any way to distinguish between timeout and failure, and if an action doesn't use action-set, then that is also indistinguishable from failure / timeout
[16:52] cory_fu: juju2 compat for actions, config, and wait
[16:53] The other issue I was looking into fixing was that it is a PITA to get the unit name to pass in, so I was looking at moving the action methods to UnitSentry, so you'd call it like self.d.sentry['service'][0].action_do('foo')
[16:54] Instead of self.d.action_do(self.d.sentry['service'][0].info['unit_name'], 'foo')
[16:54] cory_fu: yeah, shoulda been that way from the start
[16:55] It actually wasn't hard to move the methods, and I'm just updating the tests now, but I assumed we'd want to keep shims in the old location for backwards compat.
[16:55] But maybe you're already moving them?
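A rough sketch of the Amulet call-site change being discussed here, assuming a placeholder service name ("mysvc") and ordinary test scaffolding; the UnitSentry-based calls are the proposed API, not necessarily what Amulet shipped at the time:

```python
import amulet

# Placeholder deployment; "mysvc" is an illustrative service name only.
d = amulet.Deployment(series='trusty')
d.add('mysvc')
d.setup(timeout=900)

unit = d.sentry['mysvc'][0]

# Current form: the unit name has to be pulled out of the sentry first.
action_id = d.action_do(unit.info['unit_name'], 'foo')
results = d.action_fetch(action_id)

# Proposed form: call the action methods on the UnitSentry directly.
action_id = unit.action_do('foo')
results = unit.action_fetch(action_id)

# Note: as discussed above, action_fetch returns {} on failure or timeout,
# so an empty dict is ambiguous with an action that never calls action-set.
```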
[16:55] yeah we have to
[16:55] no
[16:55] cory_fu: problem is i changed all those method bodies
[16:55] i hate you right now
[16:56] ha
[16:57] tvansteenburgh: Well, I care less about moving them than I do about the return value of action_fetch
[16:58] Because I have actions that are just success / failure and there's no way to tell if they worked
[16:59] yeah. well i would prefer that you wait for mine and then rebase on that
[16:59] or i can make the change myself
[16:59] +1 to you doing it. It should be a trivial change, but it would also be backwards incompatible with the documented return value. :(
[17:00] yeah. i might add a kw param to control the behavior instead
[17:01] Well, that's not entirely true. It's documented to return "Action results, as json" which I took to mean the full output of `juju action fetch` but actually is json.loads(`juju action fetch`)['results'] if not (failure or timeout) else {}, which drops a lot of useful info
[17:01] And isn't at all clear from the docs
[17:02] even so, there could be tests relying on the current behavior
[17:02] Anyway, a keyword param for the full output would be acceptable
[17:03] it's either that or bump the major version,
[17:03] which might not be out of the question considering the juju2 support that's going in
[17:04] dunno, will ponder while i watch my paint dry, i mean tests run
[17:07] gnuoy: tinwood: jamespage: Trivial fix to keystone credentials bug https://review.openstack.org/#/c/312610/
[17:07] or wolsen if you have time ^^
[17:08] thedac, I'll take a look now
[17:08] Thank you
[17:09] thedac, is this something in stable?
[17:10] wolsen: no, on master. The identity-credentials relation just landed
[17:10] thedac, great
[17:10] This only affects that new relation
[17:11] thedac, +2
[17:11] wolsen: thank you, sir
[17:11] thedac, oh don't worry, I'll have a mongo charm review coming soonish
[17:12] thedac, I might tap you on the shoulder to give it a review as well
[17:12] ;-)
[17:12] no problem, ping me when it is ready
[17:12] thedac, sure, it's the issue that bradm ran into when clustering mongodb in vivid+
[17:34] cory_fu: don't hate, but what do you think of charms.tool for a module for charmhelpers.core.hookenv?
[17:46] marcoceppi: Why would I hate? But weren't we going to make it more specifically "juju hook tools"
[17:47] charms.hook_tools perhaps?
[17:47] Actually, you're right, we should drop "hook"
[17:48] marcoceppi: So yeah, +1 for charms.tool
[18:29] gnuoy: tinwood: jamespage: Not sure what the procedure is for a new interface. Here is the keystone credentials interface: https://github.com/thedac/charm-interface-keystone-credentials and it can be tested with https://github.com/thedac/mock-keystone-client
[18:31] thedac, I wrote the tests for the keystone-client directly into the interface git repo - but it's not merged yet as I'm not sure if the relevant fix on charm-tools has landed in the PPA yet. Maybe gnuoy knows?
[18:32] thedac, interface-keystone rather than keystone-client.
[18:32] right, I left tests out until that is sorted
[18:34] thedac, ah, ok good.
[18:38] cholcombe you still around?
[18:41] thedac, I'll take a look today
[18:41] gnuoy: thanks
[18:44] or any other ceph-osd charmers around?
[18:59] thedac, landed and up on http://interfaces.juju.solutions/
[18:59] gnuoy: sweet. Thank you
[18:59] np
[19:19] hi, how does Juju determine which interface to use for the private-address value? I'm running into an issue in our manual provider environment where it is deciding to use the storage access network instead of the openstack management network where the default gateway resides
[19:24] mattrae - that depends on the provider and which version of juju.
[19:25] mattrae - but as i understand it, in maas it depends on how you've configured/tagged the networks in maas 1.9. you model the networks there, and that information is then fed to juju
[19:26] lazyPower: ahh cool, this is using the manual provider and juju 1.25.5
[19:26] mattrae - i'm not entirely certain but i think that is "autosensed", so it's likely it picked the wrong interface
[19:27] mattrae - i would most def. hit the list up with this question :( I don't think the people that are intimately familiar with our networking model are US based.
[19:28] lazyPower: cool thanks, that's good information :) do you know who might be familiar with the manual provider?
[19:30] ohh cool, i'll email the list, thanks!
=== natefinch is now known as natefinch-afk
[21:12] something is wrong with the ceph cluster in the openstack-lxd bundle - the cluster hangs on creating pgs
[21:13] also the ceph-mon charm creates the /etc/ceph/ceph.conf file without whitespace on the last line, causing the last line to be ignored until manually edited
[21:13] not sure if that is what is causing the problems with PG creation
[21:13] beisner jamespage: any ideas?
[21:47] LiftedKilt - Did you bug that? the ceph-mon newline issue for sure should be bugged
[21:48] lazyPower: I hadn't yet
[21:49] lazyPower: wasn't sure if I should wait and bug them together
[21:50] im not sure about the bundle, but if a config file is shipping with known defects, we should get that patched in short order :)
[21:50] cholcombe is pretty quick about that stuff too
[21:52] lazyPower: ok I submitted a bug report for the mon charm / newline issue https://bugs.launchpad.net/charms/+source/ceph/+bug/1578403
[21:52] Bug #1578403: newline
[21:53] LiftedKilt - thanks a bunch for the bug. I'll make sure the ceph team is aware of a bitesized fix :)
[21:53] great - thanks. I'm hoping that fixes the other issue as well actually.
[21:55] Does juju provide something for accessing service logs (after it's been deployed)? Or can it only show me its own logs, so that for service logs I should go to the machine/container and locate them manually?
[21:58] LiftedKilt, one sec
[22:01] LiftedKilt, yeah the whitespace thing is a bug that needs to be fixed
[22:02] is the mds keyring config line being ignored on cluster bootstrap potentially responsible for the pgs failing to create?
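A hypothetical illustration of the trailing-newline problem described above; this is not the actual ceph-mon charm code, just the kind of guard a charm could apply when writing out a rendered config:

```python
# Illustrative only -- not the ceph-mon charm's real rendering code.
# As reported above, a config file whose last line is not newline-terminated
# can have that line ignored, so terminate it before writing.
def write_config(path, rendered):
    if not rendered.endswith('\n'):
        rendered += '\n'
    with open(path, 'w') as f:
        f.write(rendered)

# e.g. write_config('/etc/ceph/ceph.conf', rendered_template)
```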
[22:37] not right now
[22:38] release of charms was on the 21st - most of the team have been travelling since then so not much drift in development yet
[22:38] so stable is less than two weeks old atm
[22:38] that would make sense why the deployment worked on the trusty deployment then - the openstack base bundle uses the regular charms
[22:39] LiftedKilt, yah - I'm working on an openstack-lxd bundle based on trunk charms - but ran out of test time...
[22:39] LiftedKilt, thanks
[22:40] LiftedKilt, I think there was a change we backed out which picked the zone information from juju but that was not working well/consistently
[22:40] so we backed it out quite late in the charm release cycle.
[22:40] jamespage: ok that makes sense
[22:48] marcoceppi, ping - want to get your thoughts on something for mongodb charm on xenial
[22:48] wolsen: absolutely
[22:49] marcoceppi: so currently the charm specifies either master = True or slave = True in /etc/mongodb.conf and attempts to start with --replSet in order to enable replica set replication
[22:49] wolsen: ah, the old charm
[22:49] wolsen: I'm working on a new charm that doesn't suck
[22:49] wolsen: but continue
[22:49] marcoceppi: hah, well I'm just trying to put in just enough of a fix to get it working until you get yours out the door
[22:49] wolsen: absolutely
[22:49] jamespage: I noticed that the lxd charm in cs:xenial/lxd has changed the config from block-device to block-devices plural, and the description no longer mentions the ability to use a file path instead of a physical block device - did that functionality get removed?
[22:49] marcoceppi: so basically, replica set replication is incompatible with master/slave replication
[22:50] right
[22:50] marcoceppi: the charm by default scales (juju add-unit) along the replica-set lines
[22:50] marcoceppi: afaict, the only way to support master/slave replication is to deploy two mongo services and specify the master/slave relationship after the fact
[22:51] marcoceppi: which feels broken
[22:51] marcoceppi: it feels like the best way is to make the master config option and the replicaset config option mutually exclusive - but then, it seems odd to have a service that breaks on juju add-unit
[22:52] marcoceppi: which leads me to just want to deprecate the master option
[22:52] thoughts?
[22:52] (e.g. don't support the master/slave configuration - it's deprecated in mongodb 3.x+)
[22:54] wolsen: remove for xenial, that's fine for me
[22:55] marcoceppi: well the trusty/mongodb charm would still be deployable to trusty
[22:55] wolsen: sure, so why not remove master/slave for xenial only
[22:56] marcoceppi: okay, I'll simply limit it for affected versions of mongodb (which includes xenial and wily)
[22:57] wolsen: why not cull it from the wily and xenial charms
[22:57] marcoceppi: is there a separate branch for wily and xenial for those?
[22:58] wolsen: there should be
[22:58] wolsen: if there isn't, then mongo isn't in those
[22:58] LiftedKilt, yeah i believe you hit a bug with this patch: https://github.com/openstack/charm-ceph-mon/commit/aef61caa46e7fd6ba5b1280653d9aec5ffcd03d1
[22:58] * wolsen looks
[22:59] we discovered a problem where if a crush bucket was missing, the ceph-osd would fail to start. Your crushmap also doesn't look right. It's missing the host bucket
[23:01] ok - jamespage was saying that the charms in openstack-charmers-next are out of date. This patch made it into the generic xenial charms?
[23:01] yes
[23:01] awesome thanks guys
[23:02] LiftedKilt, sorry about that. We had kind of a crazy march with a lot of stuff landing
[23:02] cholcombe: I can imagine - everything was hitting major releases all at once
[23:03] marcoceppi: nope, no xenial or wily
[23:03] wolsen: patch, push to xenial only!
[23:03] marcoceppi: ack
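For illustration, a hedged sketch of the mutual-exclusion check wolsen considered before settling on dropping master/slave for xenial and wily; the option names and the charmhelpers calls are assumptions here, not the mongodb charm's actual code:

```python
from charmhelpers.core import hookenv


def validate_replication_config():
    """Reject configs that ask for both master/slave and replica sets."""
    cfg = hookenv.config()
    # "master" and "replicaset" mirror the option names discussed above;
    # the real charm's option names and defaults may differ.
    if cfg.get('master') and cfg.get('replicaset'):
        hookenv.status_set('blocked',
                           'master/slave and replica set replication '
                           'are mutually exclusive')
        return False
    return True
```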