/srv/irclogs.ubuntu.com/2016/05/04/#juju.txt

=== natefinch-afk is now known as natefinch
[03:03] <jamespage> dosaboy, gnuoy: https://review.openstack.org/#/c/312229/ ready to land if good with you
[03:16] <lazyPower> Tribaal when you're around I could use a patch pilot to land this one so we can get some clean tests back on bundles using es :) https://code.launchpad.net/~lazypower/charms/trusty/elasticsearch/amulet-test-bump/+merge/293703
=== scuttlemonkey is now known as scuttle|afk
=== frankban_ is now known as frankban
=== julen_ is now known as julenl
[12:42] <dosaboy> jamespage: landed
[13:12] <jamespage> beisner, gnuoy: grrrrr https://openstack-ci-reports.ubuntu.com/artifacts/test_charm_pipeline_amulet_full/openstack/charm-neutron-gateway/299205/3/2016-05-04_03-12-11/test_charm_amulet_full/juju-stat-tabular-collect.txt
[13:13] <jamespage> api-metadata still appears to be racy with conductors on xenial-mitaka
[13:55] <verterok> Hi, I'm using juju 1.25 and getting my bootstrap node disk filled by blobstore.[0-9] files (512M each). The machine has a small-ish disk.
[13:56] <verterok> is there a way to clean up the blobstore files? or is my only choice to add a volume and move stuff around?
[14:05] <shilpa> hi, I am pushing my charm to the charm store; when I do charm login
[14:05] <shilpa> I see this error: ERROR login failed: invalid OAuth credentials
[14:09] <shilpa> how can I resolve this issue, please?
[14:27] <cholcombe> what's the new method to get recommended status in the charm store? I don't see it in the devel docs
[14:42] <Tribaal> lazyPower: ack - today's been a bit of a mess but I'll have a look
[14:43] <lazyPower> Tribaal - ack. It's a pretty minor change: the hook files were mode changes, and the tests were refactored
[14:43] <Tribaal> lazyPower: by the way, trade you? https://code.launchpad.net/~tribaal/charms/trusty/ntpmaster/python3-hooks/+merge/293587 :)
[14:45] <lazyPower> cholcombe - it's dependent on the RQ Tim is working on. otherwise you follow the existing process of opening a bug, attaching all the info to it, and we handle the review manually
[14:46] <Tribaal> lazyPower: funny that symlink property changes show up as file additions.
[14:46] <lazyPower> Tribaal actually I had the same thing just happen to ntpmaster
[14:46] <lazyPower> Bazaar (bzr) 2.7.0
[14:46] <Tribaal> (in the diff I mean)
[15:11] <beisner> jamespage, yah but woot for the smarter workload status goods ;-)
[15:42] <cholcombe> lazyPower, ok maybe I'll tell people to wait a week or so for the new process?
[15:45] <lazyPower> cholcombe marco sent that mail out like a week or two ago :)
[15:45] <cholcombe> lazyPower, whoops. I need to go catch up on my emails :)
[15:47] <lazyPower> cholcombe - it happens to me too. this is a byproduct of drinking from a firehose :)
[15:47] <cholcombe> indeed
[15:48] <lazyPower> we may have blown you into the next county, but boy that cold water is refreshing eh?
[15:48] <cholcombe> :D
[16:00] <firl_> any juju ceph-mon charmers there?
[16:03] <marcoceppi> firl_: I'll grab one
[16:03] <cholcombe> firl_, yo
[16:03] <marcoceppi> found 'im
[16:03] <firl_> haha thanks!
[16:03] <firl_> for xenial-mitaka
[16:04] <firl_> ceph has been deprecated for new deploys
[16:04] <cholcombe> correct
[16:04] <firl_> so for a 6 node cluster
[16:04] <cholcombe> we'd like people to use ceph-mon going forward
[16:04] <firl_> 3 mons and 6 ceph-osd units?
[16:04] <cholcombe> yeah that sounds ok
[16:04] <firl_> so ceph-osd colocates with ceph-mon
[16:04] <cholcombe> you can even put the ceph-mons into containers if you want
[16:04] <firl_> and I can have ceph-mon on an LXC now
[16:04] <firl_> perfect
[16:05] <cholcombe> the mons are lightweight
[16:05] <firl_> ok awesome
[16:05] <firl_> thanks cholcombe
[16:06] <cholcombe> icey and I thought it would make it clearer for people to deploy a monitor cluster and an osd cluster
[16:06] <firl_> yeah
[16:06] <firl_> just a paradigm shift and wanted to make sure it was right
[16:06] <cholcombe> by default ceph-mon waits for 3 mons
[16:07] <firl_> perfect
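[Editor's note] The topology discussed above (three containerized ceph-mon units plus six ceph-osd units) could be sketched in a bundle file roughly as follows. The machine numbers and the osd-devices value are illustrative assumptions, not taken from firl_'s actual bundle:

```yaml
# Sketch of the topology discussed above: 3 mons in LXC containers + 6 OSDs.
# Machine placements and osd-devices are illustrative, not from the real bundle.
services:
  ceph-mon:
    charm: cs:xenial/ceph-mon
    num_units: 3
    to: ["lxc:0", "lxc:1", "lxc:2"]
  ceph-osd:
    charm: cs:xenial/ceph-osd
    num_units: 6
    options:
      osd-devices: /dev/sdb
    to: ["0", "1", "2", "3", "4", "5"]
```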
[16:07] <firl_> now, any Juju GUI charmers on? hah
[16:20] <marcoceppi> firl_: I can possibly help, but /me pokes jrwren and hatch
[16:20] <firl_> :)
[16:20] <firl_> we are having issues with deploying a bundle file to the juju gui and the options being maintained
[16:21] <firl_> We are about to try it from the command line (have to find the command still)
[16:22] <jrwren> firl_: can you describe the issues?
[16:22] <firl_> sure
[16:23] <firl_> I drag the bundle file to the juju gui via import and wait for the deploy. After the deploy finishes (juju status), options like os-data-network, fsid, etc. for charms do not get applied
[16:23] <firl_> want the bundle file as reference?
[16:23] <jrwren> firl_: yes.
[16:24] <firl_> ok 1 second
[16:25] <firl_> jrwren http://pastebin.com/i5W1pw27
[16:27] <marcoceppi> firl_: what version of Juju?
[16:27] <firl_> of juju gui or juju
[16:27] <firl_> or agent
[16:28] <jrwren> I got nothing.
[16:28] <jrwren> Makyo: ^^?
[16:28] <firl_> "1.25.5-trusty-amd64" juju cli
[16:28] <firl_> juju-gui = latest charm
[16:29] <firl_> juju tools on state machine = https://streams.canonical.com/juju/tools/agent/1.25.5/juju-1.25.5-trusty-amd64.tgz
[16:32] <firl_> If juju quickstart works, should we open a juju gui bug?
[16:33] <Makyo> firl_: quickstart will deploy the bundle in the gui. Is it all of the config options that go missing?
[16:35] <firl_> I think only like 1 config value stayed the same (neutron-gateway ext-port)
[16:35] <firl_> re-running now
[16:40] <Makyo> firl_: it looks like there's a problem with the bundle, though it's still a bug that that information isn't surfaced in the gui
[16:42] <Makyo> firl_: you can do `sudo pip install jujubundlelib`, then check the output of the bundle parser with `getchangeset bundle.yaml`
[16:42] <Makyo> Those errors should be surfaced in the gui, though; I'll file a bug
[16:43] <firl_> Makyo, yeah switching to ceph-mon brought some of those up
[16:44] <firl_> it looks like it deploys other charms without options because of another charm's invalid options
[16:44] <firl_> I think better UI feedback would be awesome
[16:45] <Makyo> firl_: agreed, gonna file a thing on that
[16:50] <cory_fu> tvansteenburgh: Do we have any idea how many tests are using Amulet's action_fetch (specifically, the return value)?
[16:51] <tvansteenburgh> cory_fu: no, but I'm working on that code right now, what's up?
[16:51] <cory_fu> Are you? What are you changing, specifically? Because I was also working on some changes
[16:52] <tvansteenburgh> cory_fu: um, I have a shit load of changes
[16:52] <Makyo> firl_: filed at https://github.com/juju/juju-gui/issues/1614 - will take a look today, time permitting
[16:52] <tvansteenburgh> cory_fu: all juju2-compat related
[16:52] <cory_fu> tvansteenburgh: So, the problem that I'm having is that the return value of action_fetch doesn't provide you any way to distinguish between timeout and failure, and if an action doesn't use action-set, then that is also indistinguishable from failure / timeout
[16:52] <tvansteenburgh> cory_fu: juju2 compat for actions, config, and wait
[16:53] <cory_fu> The other issue I was looking into fixing was that it is a PITA to get the unit name to pass in, so I was looking at moving the action methods to UnitSentry, so you'd call it like self.d.sentry['service'][0].action_do('foo')
[16:54] <cory_fu> Instead of self.d.action_do(self.d.sentry['service'][0].info['unit_name'], 'foo')
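[Editor's note] A minimal sketch of the API shape cory_fu is proposing, with a backwards-compat shim kept in the old location. The class and method names mirror Amulet's, but the bodies here are stand-ins for illustration, not Amulet's actual code:

```python
# Illustration of the proposed move: action methods live on the unit
# object, with a shim kept on Deployment for backwards compatibility.
# Names mirror Amulet's API; the bodies are fakes for illustration only.

class UnitSentry:
    def __init__(self, unit_name):
        self.info = {'unit_name': unit_name}

    def action_do(self, action, **params):
        # In Amulet this would shell out to `juju action do <unit> <action>`;
        # here we just return a fake action id so the example is runnable.
        return 'action-id-for-%s-%s' % (self.info['unit_name'], action)


class Deployment:
    def __init__(self):
        self.sentry = {'service': [UnitSentry('service/0')]}

    def action_do(self, unit_name, action, **params):
        # Backwards-compatible shim: look up the unit and delegate.
        for units in self.sentry.values():
            for unit in units:
                if unit.info['unit_name'] == unit_name:
                    return unit.action_do(action, **params)
        raise KeyError(unit_name)


d = Deployment()
new = d.sentry['service'][0].action_do('foo')   # proposed style
old = d.action_do('service/0', 'foo')           # old style, via the shim
assert new == old
```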
[16:54] <tvansteenburgh> cory_fu: yeah, shoulda been that way from the start
[16:55] <cory_fu> It actually wasn't hard to move the methods, and I'm just updating the tests now, but I assumed we'd want to keep shims in the old location for backwards compat.
[16:55] <cory_fu> But maybe you're already moving them?
[16:55] <tvansteenburgh> yeah we have to
[16:55] <tvansteenburgh> no
[16:55] <tvansteenburgh> cory_fu: problem is I changed all those method bodies
[16:55] <tvansteenburgh> i hate you right now
[16:56] <cory_fu> ha
[16:57] <cory_fu> tvansteenburgh: Well, I care less about moving them than I do about the return value of action_fetch
[16:58] <cory_fu> Because I have actions that are just success / failure and there's no way to tell if they worked
[16:59] <tvansteenburgh> yeah. well I would prefer that you wait for mine and then rebase on that
[16:59] <tvansteenburgh> or I can make the change myself
[16:59] <cory_fu> +1 to you doing it. It should be a trivial change, but it would also be backwards incompatible with the documented return value. :(
[17:00] <tvansteenburgh> yeah. I might add a kw param to control the behavior instead
[17:01] <cory_fu> Well, that's not entirely true. It's documented to return "Action results, as json", which I took to mean the full output of `juju action fetch` but is actually json.loads(`juju action fetch`)['results'] if not (failure or timeout) else {}, which drops a lot of useful info
[17:01] <cory_fu> And that isn't at all clear from the docs
[17:02] <tvansteenburgh> even so, there could be tests relying on the current behavior
[17:02] <cory_fu> Anyway, a keyword param for the full output would be acceptable
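[Editor's note] The keyword-param idea could look something like the sketch below. The sample payload mimics `juju action fetch --format=json` output, but the parameter name `full_output` and the exact field names are assumptions for illustration, not Amulet's actual implementation:

```python
import json

# Hypothetical sketch of the `full_output` keyword discussed above.
# The payload shape and parameter name are assumptions, not Amulet's API.

def action_fetch(raw_json, full_output=False):
    result = json.loads(raw_json)
    if full_output:
        # Full payload: the caller can see failed/timed-out status.
        return result
    # Old documented behaviour: only the results dict, {} otherwise,
    # which makes failure, timeout, and "no action-set" indistinguishable.
    if result.get('status') == 'completed':
        return result.get('results', {})
    return {}

failed = json.dumps({'status': 'failed', 'message': 'boom'})
assert action_fetch(failed) == {}                    # failure is invisible
assert action_fetch(failed, full_output=True)['status'] == 'failed'
```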
[17:03] <tvansteenburgh> it's either that or bump the major version,
[17:03] <tvansteenburgh> which might not be out of the question considering the juju2 support that's going in
[17:04] <tvansteenburgh> dunno, will ponder while I watch my paint dry, I mean tests run
[17:07] <thedac> gnuoy: tinwood: jamespage: Trivial fix to keystone credentials bug https://review.openstack.org/#/c/312610/
[17:07] <thedac> or wolsen if you have time ^^
[17:08] <wolsen> thedac, I'll take a look now
[17:08] <thedac> Thank you
[17:09] <wolsen> thedac, is this something in stable?
[17:10] <thedac> wolsen: no, on master. The identity-credentials relation just landed
[17:10] <wolsen> thedac, great
[17:10] <thedac> This only affects that new relation
[17:11] <wolsen> thedac, +2
[17:11] <thedac> wolsen: thank you, sir
[17:11] <wolsen> thedac, oh don't worry, I'll have a mongo charm review coming soonish
[17:12] <wolsen> thedac, I might tap you on the shoulder to give it a review as well
[17:12] <wolsen> ;-)
[17:12] <thedac> no problem, ping me when it is ready
[17:12] <wolsen> thedac, sure, it's the issue that bradm ran into when clustering mongodb in vivid+
[17:34] <marcoceppi> cory_fu: don't hate, but what do you think of charms.tool for a module for charmhelpers.core.hookenv?
[17:46] <cory_fu> marcoceppi: Why would I hate? But weren't we going to make it more specifically "juju hook tools"?
[17:47] <cory_fu> charms.hook_tools perhaps?
[17:47] <cory_fu> Actually, you're right, we should drop "hook"
[17:48] <cory_fu> marcoceppi: So yeah, +1 for charms.tool
[18:29] <thedac> gnuoy: tinwood: jamespage: Not sure what the procedure is for a new interface. Here is the keystone credentials interface: https://github.com/thedac/charm-interface-keystone-credentials and it can be tested with https://github.com/thedac/mock-keystone-client
[18:31] <tinwood> thedac, I wrote the tests for the keystone-client directly into the interface git repo - but it's not merged yet as I'm not sure if the relevant fix on charm-tools has landed in the PPA yet. Maybe gnuoy knows?
[18:32] <tinwood> thedac, interface-keystone rather than keystone-client.
[18:32] <thedac> right, I left tests out until that is sorted
[18:34] <tinwood> thedac, ah, ok good.
[18:38] <firl_> cholcombe you still around?
[18:41] <gnuoy> thedac, I'll take a look today
[18:41] <thedac> gnuoy: thanks
[18:44] <firl_> or any other ceph-osd charmers around?
[18:59] <gnuoy> thedac, landed and up on http://interfaces.juju.solutions/
[18:59] <thedac> gnuoy: sweet. Thank you
[18:59] <gnuoy> np
[19:19] <mattrae> hi, how does Juju determine which interface to use for the private-address value? I'm running into an issue in our manual provider environment where it is deciding to use the storage access network instead of the openstack management network where the default gateway resides
[19:24] <lazyPower> mattrae - that depends on the provider and which version of juju.
[19:25] <lazyPower> mattrae - but as I understand it, in maas it depends on how you've configured/tagged the networks in maas 1.9. you model the networks there, and that information is then fed to juju
[19:26] <mattrae> lazyPower: ahh cool, this is using the manual provider and juju 1.25.5
[19:26] <lazyPower> mattrae - I'm not entirely certain, but I think that is "autosensed", so it's likely it picked the wrong interface
[19:27] <lazyPower> mattrae - I would most def. hit the list up with this question :( I don't think the people that are intimately familiar with our networking model are US based.
[19:28] <mattrae> lazyPower: cool thanks, that's good information :) do you know who might be familiar with the manual provider?
[19:30] <mattrae> ohh cool, I'll email the list, thanks!
=== natefinch is now known as natefinch-afk
[21:12] <LiftedKilt> something is wrong with the ceph cluster in the openstack-lxd bundle - the cluster hangs on creating pgs
[21:13] <LiftedKilt> also the ceph-mon charm creates the /etc/ceph/ceph.conf file without a trailing newline on the last line, causing the last line to be ignored until manually edited
[21:13] <LiftedKilt> not sure if that is what is causing the problems with PG creation
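[Editor's note] The symptom described here is a classic one for line-oriented config parsers: a final line with no terminating newline gets dropped. A defensive fix when generating config files is simply to guarantee the trailing newline. This is an illustrative sketch of the failure mode, not the ceph-mon charm's actual code:

```python
import os
import tempfile

# Sketch: ensure a generated config file ends with a newline so parsers
# that only consume complete lines don't silently drop the final key.
# Illustrative only; not the ceph-mon charm's rendering code.

def write_config(path, rendered):
    if not rendered.endswith('\n'):
        rendered += '\n'   # guarantee the last line is terminated
    with open(path, 'w') as f:
        f.write(rendered)

path = os.path.join(tempfile.gettempdir(), 'ceph.conf.example')
write_config(path, '[global]\nfsid = abc123')   # note: no trailing newline passed in
with open(path) as f:
    text = f.read()
assert text.endswith('fsid = abc123\n')
```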
[21:13] <LiftedKilt> beisner jamespage: any ideas?
[21:47] <lazyPower> LiftedKilt - Did you bug that? The ceph-mon newline issue for sure should be bugged
[21:48] <LiftedKilt> lazyPower: I hadn't yet
[21:49] <LiftedKilt> lazyPower: wasn't sure if I should wait and bug them together
[21:50] <lazyPower> I'm not sure about the bundle, but if a config file is shipping with known defects, we should get that patched in short order :)
[21:50] <lazyPower> cholcombe is pretty quick about that stuff too
[21:52] <LiftedKilt> lazyPower: ok I submitted a bug report for the mon charm / newline issue https://bugs.launchpad.net/charms/+source/ceph/+bug/1578403
[21:52] <mup> Bug #1578403: newline <ceph (Juju Charms Collection):New> <https://launchpad.net/bugs/1578403>
[21:53] <lazyPower> LiftedKilt - thanks a bunch for the bug. I'll make sure the ceph team is aware of a bitesized fix :)
[21:53] <LiftedKilt> great - thanks. I'm hoping that fixes the other issue as well, actually.
[21:55] <pacavaca> Does juju provide something for accessing service logs (after it's been deployed)? Or can it only show me its own logs, and for service logs I should go to the machine/container and locate them manually?
[21:58] <cholcombe> LiftedKilt, one sec
[22:01] <cholcombe> LiftedKilt, yeah the whitespace thing is a bug that needs to be fixed
[22:02] <LiftedKilt> is the mds keyring config line being ignored on cluster bootstrap potentially responsible for the pgs failing to create?
[22:35] <LiftedKilt> cholcombe lazyPower: filed a bug report for the second portion of the failure after talking to the ceph team https://bugs.launchpad.net/charms/+source/ceph/+bug/1578419
[22:35] <mup> Bug #1578419: ceph and ceph-mon charm attach osds improperly, prohibiting pgs from creating <ceph (Juju Charms Collection):New> <https://launchpad.net/bugs/1578419>
[22:35] <jamespage> hmm
[22:36] <jamespage> LiftedKilt, I need to check on the recency of the openstack-charmers-next charms
[22:36] <jamespage> the automated ingestion process was broken for a while and I think they are all stale
[22:36] <jamespage> the stable charms in the charm store will be up-to-date
[22:36] <LiftedKilt> if there's a better group of ceph charms to use, I'm all ears
[22:36] <jamespage> cs:xenial/ceph-mon
[22:37] <jamespage> cs:xenial/ceph-osd
[22:37] <LiftedKilt> ok - I will grab those into the bundle
[22:37] <jamespage> please do that for all of the charms - everything in -next is stale atm
[22:37] <LiftedKilt> I assumed the charmers-next charms were the latest
[22:37] <jamespage> LiftedKilt, they normally are but...
[22:37] <jamespage> not right now
[22:38] <jamespage> release of charms was on the 21st - most of the team have been travelling since then, so not much drift in development yet
[22:38] <jamespage> so stable is less than two weeks old atm
[22:38] <LiftedKilt> that would explain why the deployment worked on trusty then - the openstack base bundle uses the regular charms
[22:39] <jamespage> LiftedKilt, yah - I'm working on an openstack-lxd bundle based on trunk charms - but ran out of test time...
[22:39] <cholcombe> LiftedKilt, thanks
[22:40] <jamespage> LiftedKilt, I think there was a change we backed out which picked the zone information from juju but that was not working well/consistently
[22:40] <jamespage> so we backed it out quite late in the charm release cycle.
[22:40] <LiftedKilt> jamespage: ok that makes sense
[22:48] <wolsen> marcoceppi, ping - want to get your thoughts on something for the mongodb charm on xenial
[22:48] <marcoceppi> wolsen: absolutely
[22:49] <wolsen> marcoceppi: so currently the charm specifies either master = True or slave = True in /etc/mongodb.conf and attempts to start with --replSet in order to enable replica set replication
[22:49] <marcoceppi> wolsen: ah, the old charm
[22:49] <marcoceppi> wolsen: I'm working on a new charm that doesn't suck
[22:49] <marcoceppi> wolsen: but continue
[22:49] <wolsen> marcoceppi: hah, well I'm just trying to put in just enough of a fix to get it working until you get yours out the door
[22:49] <marcoceppi> wolsen: absolutely
[22:49] <LiftedKilt> jamespage: I noticed that the lxd charm in cs:xenial/lxd has changed the config from block-device to block-devices plural, and the description no longer mentions the ability to use a file path instead of a physical block device - did that functionality get removed?
[22:49] <wolsen> marcoceppi: so basically, replica-set replication is incompatible with master/slave replication
[22:50] <marcoceppi> right
[22:50] <wolsen> marcoceppi: the charm by default scales (juju add-unit) along the replica-set lines
[22:50] <wolsen> marcoceppi: afaict, the only way to support master/slave replication is to deploy two mongo services and specify the master/slave relationship after the fact
[22:51] <wolsen> marcoceppi: which feels broken
[22:51] <wolsen> marcoceppi: it feels like the best way is to make the master config option and the replicaset config option mutually exclusive - but then, it seems odd to have a service that breaks on juju add-unit
[22:52] <wolsen> marcoceppi: which leads me to just want to deprecate the master option
[22:52] <wolsen> thoughts?
[22:52] <wolsen> (e.g. don't support the master/slave configuration - it's deprecated in mongodb 3.x+)
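[Editor's note] For reference, the two mutually exclusive configurations wolsen is weighing look roughly like this in the pre-3.x /etc/mongodb.conf ini-style format; the replica set name is an illustrative value:

```
# Replica-set replication -- what `juju add-unit` scales along.
# The set name "juju" is illustrative.
replSet = juju

# Legacy master/slave replication -- incompatible with replSet,
# and deprecated upstream in MongoDB 3.x:
# master = true
# slave = true
```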
[22:54] <marcoceppi> wolsen: remove for xenial, that's fine for me
[22:55] <wolsen> marcoceppi: well the trusty/mongodb charm would still be deployable to trusty
[22:55] <marcoceppi> wolsen: sure, so why not remove master/slave for xenial only
[22:56] <wolsen> marcoceppi: okay, I'll simply limit it for affected versions of mongodb (which includes xenial and wily)
[22:57] <marcoceppi> wolsen: why not cull it from the wily and xenial charms
[22:57] <wolsen> marcoceppi: is there a separate branch for wily and xenial for those?
[22:58] <marcoceppi> wolsen: there should be
[22:58] <marcoceppi> wolsen: if there isn't, then mongo isn't in those
[22:58] <cholcombe> LiftedKilt, yeah I believe you hit a bug with this patch: https://github.com/openstack/charm-ceph-mon/commit/aef61caa46e7fd6ba5b1280653d9aec5ffcd03d1
[22:58] * wolsen looks
[22:59] <cholcombe> we discovered a problem where if a crush bucket was missing, the ceph-osd would fail to start. Your crushmap also doesn't look right. It's missing the host bucket
[23:01] <LiftedKilt> ok - jamespage was saying that the charms in openstack-charmers-next are out of date. This patch made it into the generic xenial charms?
[23:01] <jamespage> yes
[23:01] <LiftedKilt> awesome thanks guys
[23:02] <cholcombe> LiftedKilt, sorry about that. We had kind of a crazy March with a lot of stuff landing
[23:02] <LiftedKilt> cholcombe: I can imagine - everything was hitting major releases all at once
[23:03] <wolsen> marcoceppi: nope, no xenial or wily
[23:03] <marcoceppi> wolsen: patch, push to xenial only!
[23:03] <wolsen> marcoceppi: ack

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!