[01:49] <marcoceppi> magicaltrout: the charm push/login stuff goes here: https://github.com/juju/charmstore-client
[06:28] <webscholar> any one here deployed wordpress on juju2 beta3 ?
[06:39] <gnuoy> rbasak, jamespage, beisner, I've created a bug for the mysql charm, I'll mark it as a dupe if the root cause is one of the ones that rbasak found last night but I'm going to keep digging at this point because I still think some fault lies with the charm
[07:04] <gnuoy> Bug #1567778
[07:04] <mup> Bug #1567778: Charm fails on xenial installing mysql 5.7 <mysql (Juju Charms Collection):New> <https://launchpad.net/bugs/1567778>
[07:37] <jamespage> gnuoy, ack - do you need a second pair of eyes on that?
[07:40] <jamespage> gnuoy, if not I'll poke at bradm's mitaka/nova-cc problem whilst hacking on removal of neutron from nova-cc
[07:41] <bradm> jamespage: I have LP#1567807 if you want that one too? :)
[07:41] <jamespage> bug 1567807
[07:41] <mup> Bug #1567807: nova delete doesn't work with EFI booted VMs <canonical-bootstack> <nova (Ubuntu):New> <https://launchpad.net/bugs/1567807>
[07:42] <bradm> jamespage: that's possibly more an upstream thing, though.
[07:42] <bradm> jamespage: oh, fwiw with that n-c-c issue, I dropped the stack back to r226 and the issue went away.
[07:43] <jamespage> bradm, just trying to repro now
[07:44] <bradm> jamespage: this happened twice on redeploying the same stack, so it doesn't seem like it's too tricky.  unless there's something odd with my setup
[07:44] <jamespage> bradm, that was with cloud:trusty-mitaka right?
[07:44] <bradm> jamespage: correct.
[07:44] <bradm> jamespage: with ppc64 and arm64 compute nodes, not that I know if that makes a difference.
[07:45] <jamespage> bradm, that should not make a difference
[07:45] <gnuoy> jamespage, I've got to the bottom of the mysql bug and am working on a fix now. If you could take bradm's that'd be great
[07:46] <jamespage> gnuoy, on it - you focus on unblocking mysql
[07:46] <gnuoy> ta
[07:46] <jamespage> I'll review and land that when you are ready
[07:46] <gnuoy> ack
[07:46] <jamespage> cause right now we can't fix fa due to that blocker...
[07:47] <gnuoy> yep
[07:49] <bradm> jamespage: I didn't think it would, but it's the only vaguely special thing about this stack, just about everything else is vanilla.  Oh, g-s-s we have 3 of, using a patched version that gives us better control over the properties on images.
[07:50] <bradm> but that shouldn't do anything either.
[08:01] <jamespage> bradm, hey - can I get a copy of the nova.conf from that deployment please?
[08:05] <bradm> jamespage: on n-c-c ?
[08:05] <jamespage> bradm, yes please
[08:05] <bradm> jamespage: https://paste.ubuntu.com/15664360/
[08:06] <bradm> jamespage: unfortunately I've run out of week, disappearing for dinner now.
[08:06] <jamespage> bradm, ack
[08:08] <bradm> jamespage: you can bug a bootstack person to grab any info, they have access to that stack.  they probably just don't know much about it.
[08:09] <gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/mysql/1567778/+merge/291339 (not run any amulet yet but thought I'd see if you have any objections first)
[08:10] <jamespage> bradm, ok reproduced
[08:10] <bradm> jamespage: excellent news.
[08:17] <jamespage> gnuoy, looks reasonable
[08:17] <jamespage> was the db init getting confused by files in /var/lib/mysql ?
[08:17] <gnuoy> jamespage, yep https://bugs.launchpad.net/charms/+source/mysql/+bug/1567778/comments/2
[08:17] <mup> Bug #1567778: Charm fails on xenial installing mysql 5.7 <mysql (Juju Charms Collection):Confirmed for gnuoy> <https://launchpad.net/bugs/1567778>
[08:19] <jamespage> bradm, I suspect a problem with the packaging in mitaka-updates with the keystone v3 changes in the charms...
[08:19] <jamespage> probably
[08:19] <magicaltrout> marcoceppi: that's very nice, I have no idea how to compile it
[08:27] <gnuoy> jamespage, err, is osci likely to still do CI things against my mysql branch since it's in LP ?
[08:27] <jamespage> gnuoy, it might
[08:28] <jamespage> gnuoy, branch scanner will run in 2 mins
[08:35] <jamespage> gnuoy, tests running now
[08:38] <gnuoy> ack
[08:39] <jamespage> gnuoy, bradm: looks like an openstack bug with the packages in mitaka-updates - proposed tests ok so I'll promote proposed -> updates shortly
[08:40] <gnuoy> jamespage, you think the root cause of Bug #1567236 is a packaging bug?
[08:40] <mup> Bug #1567236: nova-api-os-compute api failure - DuplicateOptError: duplicate option: auth-url <canonical-bootstack> <nova-cloud-controller (Juju Charms Collection):In Progress by james-page> <https://launchpad.net/bugs/1567236>
[08:41] <jamespage> gnuoy, well technically an openstack bug but yes
[08:41] <gnuoy> I mean openstack bug
[08:41] <jamespage> gnuoy, I can repro with mitaka-updates, but its all ok with mitaka-proposed
[08:41] <gnuoy> jamespage, ok, I assumed it was a duplicate value in a config file and couldn't see how it could be
[08:42] <jamespage> gnuoy, it was not
[08:42] <gnuoy> jamespage, thanks for getting to the bottom of it
[08:42] <jamespage> probably a problem with the parser for oslo.config or suchlike
[08:42] <gnuoy> kk
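(For context on the DuplicateOptError above: it is the kind of failure an option registry raises when the same option is registered twice. A toy illustration only, not oslo.config's actual implementation:)

```python
# Toy illustration (not oslo.config) of how registering the same option
# twice produces an error like "DuplicateOptError: duplicate option:
# auth-url" from bug 1567236.
class DuplicateOptError(Exception):
    pass


class OptRegistry:
    def __init__(self):
        self._opts = {}

    def register(self, name, default=None):
        # A second registration of the same option name is rejected,
        # which is what the nova-api-os-compute traceback reports.
        if name in self._opts:
            raise DuplicateOptError('duplicate option: %s' % name)
        self._opts[name] = default
```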
[09:08] <jamespage> wolsen, dosaboy: hey just a reminder that today its really the last day for landing features for the 16.04 charm release
[09:08] <jamespage> we should have the gate unblocked shortly - gnuoy - your mysql changes look to be testing ok
[09:08] <gnuoy> jamespage, https://review.openstack.org/#/c/303302/ if you have a sec
[09:09] <jamespage> gnuoy, your indentation is well wonky
[09:09] <gnuoy> oh, let me check
[09:10] <gnuoy> ah, yeah, one sec
[09:24] <neiljerram> gnuoy, rbasak, morning
[09:26] <neiljerram> mysql charm on Xenial is still not working for me this morning - but I think that's expected at the moment, right?
[09:31] <neiljerram> I just did an interactive 'apt-get install mysql-server' on another Xenial box, and it was fine.  But the charm install fails as you can see here: http://pastebin.com/pTxBmCJF
[09:35] <gnuoy> neiljerram, If you're using the mysql charm then yes, that is expected. the fix is going through testing atm
[09:37] <neiljerram> gnuoy, Thanks - is there a bug that I can track?
[09:37] <gnuoy> neiljerram, Bug #1567778
[09:38] <mup> Bug #1567778: Charm fails on xenial installing mysql 5.7 <mysql (Juju Charms Collection):Confirmed for gnuoy> <https://launchpad.net/bugs/1567778>
[09:38] <gnuoy> neiljerram, The merge proposal is here https://code.launchpad.net/~gnuoy/charms/trusty/mysql/1567778/+merge/291339 Waiting for the CI to report functional test results atm
[09:39] <neiljerram> gnuoy, cool, I can test that locally here too
[09:39] <gnuoy> tip top
[09:41] <gnuoy> Do you have a +2 laying around for https://review.openstack.org/#/c/303302/ ?
[09:41] <gnuoy> jam^
[09:41] <gnuoy> jamespage, ^
[09:41] <gnuoy> (Sorry jam)
[09:43] <jamespage> gnuoy, done
[09:43] <gnuoy> ta
[09:44] <jamespage> gnuoy, does the mysql charm have a xenial test?
[09:44] <gnuoy> jamespage, that is a very good question
[09:45] <gnuoy> jamespage, yes, but not enabled
[09:45] <gnuoy> argh
[09:45] <jamespage> gnuoy, well can you test that by hand please
[09:45] <gnuoy> defo
[09:45] <jamespage> gnuoy, if it passes on regression testing then we'll land with your manual test
[09:45] <gnuoy> yep, and I'll post a second mp to enable the test
[09:46] <jamespage> gnuoy, https://review.openstack.org/#/c/303321/
[09:46] <jamespage> I think that just about covers it
[09:50] <jamespage> gnuoy, I need to do a companion to neutron-api to drop the > kilo test
[09:59] <gnuoy> jamespage, you've left some code in the neutron-api-* hooks, what is that there to support?
[10:01] <jamespage> gnuoy, hmm - looking
[10:01] <jamespage> gnuoy, actually they are still required - the nova-cc charm passes data from n-api to nova-compute
[10:02] <gnuoy> urgh, ok
[10:03] <jamespage> gnuoy, merging your mysql changes
[10:03] <gnuoy> jamespage, wait
[10:03] <gnuoy> jamespage, don't you want the results of tests/021-basic-xenial-mitaka ?
[10:03] <jamespage> gnuoy, oh yeah - you're still executing the xenial test right....
[10:03] <jamespage> yes I do
[10:11] <jamespage> gnuoy, nova-cc needs a bit more work - just found another neutron-y bit
[10:12] <gnuoy> jamespage, ack. err xenial mitaka amulet test for mysql failed because keystone failed to install. looking into that now
[10:20] <gnuoy> jamespage, I'm really confused. The xenial mitaka amulet test for keystone was enabled and passed for the "run keystone via mod_wsgi" change. But install of the charm on xenial is now saying that python-psutil has no NUM_CPUS variable
[10:20] <gnuoy> I guess python-psutil could have changed in the last 48 hours
[10:21] <jamespage> gnuoy, hmm it might
[10:27] <neiljerram> gnuoy, FYI, my deployment with your mysql changes is in progress now...
[10:27] <gnuoy> neiljerram, ok, thanks
[10:34] <jamespage> gnuoy, looking now
[10:34] <jamespage> psutil has not changed...
[10:35] <jamespage> gnuoy, 23:05:54 juju-test.conductor.021-basic-xenial-mitaka RESULT  : PASS
[10:35] <jamespage> yup it passed...
[10:35] <gnuoy> jamespage, nope it hasn't changed
[10:35] <gnuoy> >>> from psutil import NUM_CPUS
[10:35] <gnuoy> Traceback (most recent call last):
[10:35] <gnuoy>   File "<stdin>", line 1, in <module>
[10:35] <gnuoy> ImportError: cannot import name NUM_CPUS
[10:36] <gnuoy> jamespage, the only thing I can think is that the amulet full test results in the change are from a previous patch set
[10:36] <jamespage> gnuoy, the charm-helpers code should deal with that
[10:36] <gnuoy> jamespage, errr I don't think so
[10:36] <jamespage> gnuoy, psutil.cpu_count is used if detected...
[10:37] <jamespage> gnuoy, # NOTE: use cpu_count if present (16.04 support)
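(The fallback jamespage quotes can be sketched along these lines; the helper and its signature are hypothetical rather than the actual charm-helpers code. Newer psutil removed the NUM_CPUS module constant in favour of a cpu_count() function:)

```python
# Hedged sketch (hypothetical helper, not the real charm-helpers code) of
# the "use cpu_count if present" fallback discussed above.
import multiprocessing


def cpu_count(psutil_module=None):
    """Return the CPU count, tolerating both old and new psutil APIs."""
    if psutil_module is not None:
        if hasattr(psutil_module, 'cpu_count'):
            # New API (psutil >= 2.0): cpu_count() function
            return psutil_module.cpu_count()
        if hasattr(psutil_module, 'NUM_CPUS'):
            # Old API (psutil < 2.0): module-level constant
            return psutil_module.NUM_CPUS
    # No psutil at all: fall back to the standard library
    return multiprocessing.cpu_count()
```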
[10:37] <neiljerram> gnuoy, mysql unit is now happy, but the related neutron-api unit has hook failed: "shared-db-relation-changed" for mysql:shared-db.  http://pastebin.com/ehHutDmk
[10:37] <jamespage> neiljerram, unit log from neutron-api would be useful here...
[10:37] <gnuoy> jamespage, ah! I bet mysql has brought in stable keystone
[10:38] <jamespage> gnuoy, oh that's quite possible
[10:38] <jamespage> and likely that's why the test is still disabled
[10:38] <gnuoy> yep
[10:38] <gnuoy> jamespage, I'll tactically fix amulet to use keystone from next and rerun the test
[10:38] <jamespage> gnuoy, I suggest we land part one which does not regress older ubuntu releases and appears to start to fix the xenial problems...
[10:39] <gnuoy> jamespage, ok, if you're happy to land the mp as is that makes sense to me
[10:40] <jamespage> gnuoy, +1
[10:40] <neiljerram> jamespage, https://transfer.sh/dyzRS/unit-neutron-api-0.log
[10:41] <jamespage> gnuoy, https://review.openstack.org/303344 another keystone/apache2
[10:42] <gnuoy> jamespage, https://review.openstack.org/#/c/303328/
[10:42] <jamespage> neiljerram, https://transfer.sh/dyzRS/unit-neutron-api-0.log
[10:42] <jamespage> no not that
[10:43] <jamespage> oslo_db.exception.DBError: (pymysql.err.DataError) (1171, u'All parts of a PRIMARY KEY must be NOT NULL; if you need NULL in a key, use UNIQUE instead') [SQL: u'ALTER TABLE cisco_csr_identifier_map MODIFY ipsec_site_conn_id VARCHAR(36) NULL']
[10:43] <jamespage> that may be an incompat of neutron with mysql-5.7
[10:44] <neiljerram> I'll see if there are wider reports about that.
[10:47] <jamespage> gnuoy, and another - https://review.openstack.org/#/c/303347/
[10:48] <gnuoy> jamespage, https://review.openstack.org/#/c/303328/
[10:49] <jamespage> gnuoy, oops, overlap
[10:49] <jamespage> gnuoy, as you already +2'ed mine, let's abandon yours...
[10:50] <gnuoy> +1
[10:51] <jamespage> neiljerram, I think our dep8 tests in xenial should be validating that - let me see
[10:51] <neiljerram> jamespage, There appear to be reports of that problem with MySQL 5.7 across software in general.  Nothing Neutron-specific that I could find, though.
[10:52] <jamespage> neiljerram, db sync with local mysql 5.7 in ci for packaging looks ok - https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac/autopkgtest-xenial/xenial/i386/n/neutron/20160408_095603@/log.gz
[10:53] <neiljerram> jamespage, And would that be with Mitaka Neutron code?
[10:53] <jamespage> neiljerram, although that sync is not actually done in the ci tests
[10:53] <jamespage> neiljerram, yes
[10:55] <neiljerram> jamespage, But perhaps the DB in that ci doesn't include the cisco_csr_identifier_map table.  I've never heard of that table before.
[10:56] <jamespage> neiljerram, same packages as from your charm deployment which I find very odd
[10:56] <jamespage> neiljerram, what is cisco_csr_identifier_map ?
[10:56] <neiljerram> jamespage, I got that from the error message above.
[10:57] <jamespage> neiljerram, yeah - just trying to figure out why I don't see that migration in the CI tests - they should be the same always right? there is no conditional migration in neutron any longer based on plugin ...
[10:57] <jamespage> well at least I think that is the case...
[10:59] <jamespage> neiljerram, lemme stand up a xenial-mitaka env and take a look as well
[10:59] <jamespage> mysql fix is in the main charm branch now
[10:59] <neiljerram> jamespage, gnuoy, thanks on both counts
[10:59] <gnuoy> neiljerram, np
[11:00]  * jamespage needs coffee biab
[11:12] <jamespage> neiljerram, yah - just hit the same thing
[11:12] <jamespage> hmmm
[11:19] <gnuoy> jamespage, are you planning to do a full amulet run against your dpdk branch ?
[11:19] <jamespage> gnuoy, yes
[11:19] <gnuoy> kk
[11:24] <jamespage> neiljerram, that migration comes from neutron-vpnaas
[11:26] <jamespage> I suspect that mysql-5.7 is behaving a little differently to 5.6 here
[11:26] <jamespage> but need to prove that
[11:28] <neiljerram> jamespage, But it looks like mysql-5.7 has been out for a while, so I would guess that other OpenStack/Neutron users have been using it.
[11:28] <jamespage> neiljerram, yeah but it's probably not in a gate anywhere just yet...
[11:29] <neiljerram> jamespage, So I wonder what is the Xenial-specific or charm-specific factor here?  Could it be that common deployments don't include neutron-vpnaas?
[11:29] <jamespage> neiljerram, this is not a new migration step (its from liberty)
[11:32] <jamespage> neiljerram, trying neutron-api on xenial against a trusty mysql
[11:36] <jamespage> gnuoy, suspicion that we may be seeing the same restart race as we saw for apache on the dashboard with keystone now
[11:38] <jamespage> gnuoy, Apr  8 10:32:01 juju-osci-sv19-machine-2 cron[1086]: Error: bad username; while reading /etc/cron.d/keystone-token-flush
[11:38] <jamespage> urgh that's not great either...
[11:39] <jamespage> neiljerram, neutron-api on xenial is fine against mysql-5.5 on trusty
[11:39] <jamespage> craptastic
[11:39]  * jamespage raises a bug
[11:40] <neiljerram> jamespage, So basically no OpenStack on Xenial, at the moment... :-(
[11:40] <jamespage> rbasak, fyi ^^
[11:40] <jamespage> neiljerram, nope
[11:52] <marcoceppi> hey magicaltrout, sorry you're having so much trouble with this
[11:52] <gnuoy> jamespage, do you have a bug number for xenial db migration issue?
[11:53] <marcoceppi> jamespage: I'm about ~400 copyright notices away from having the charm package lint clean
[11:53] <jamespage> gnuoy, https://bugs.launchpad.net/ubuntu/+source/neutron-vpnaas/+bug/1567899
[11:53] <mup> Bug #1567899: alembic migration failure with mysql 5.7 <amd64> <apport-bug> <ec2-images> <xenial> <neutron:New> <neutron-vpnaas (Ubuntu):New> <https://launchpad.net/bugs/1567899>
[11:53] <gnuoy> jamespage, ta
[11:53] <jamespage> I think I have a fix - just tried it locally
[11:54] <gnuoy> that was quick
[11:54] <rbasak> jamespage: mysqld runs in a stricter mode in 5.7 by default. If necessary, you can configure it to be as relaxed as previously.
[11:56] <rbasak> jamespage: I'm told that putting NOT NULL in the schema should do it, and there should be no consequences because it was enforced like that previously anyway.
[11:58] <rbasak> (and that it's not strict mode related)
[11:59] <jamespage> rbasak, can you comment on that bug please...
[12:25] <jamespage> gnuoy, neiljerram: ok fix proposed upstream and I've uploaded as a patch to xenial as well
[12:25] <jamespage> might not be 100% right but should unblock testing for now
[12:27] <neiljerram> jamespage, Many thanks, I'll take a look shortly.  Will your upload go into the main Xenial archive, or do I need to add a PPA for it?
[12:27] <jamespage> neiljerram, it will go into the main xenial archive
[12:27] <neiljerram> jamespage, thanks
[12:27] <jamespage> neiljerram, I'll shove it in ppa:james-page/xenial as well
[12:28] <jamespage> uploaded - will take a short while to build
[12:46] <pmatulis> what is --resource for in 'juju deploy'?
[12:48] <rick_h_> pmatulis: put it in the other channel
[13:10] <jamespage> neiljerram, gnuoy: vpnaas fix accepted by the release team
[13:11] <gnuoy> jamespage, excellent, good work
[13:11] <beisner> yah - rbasak, jamespage, gnuoy - nice work tracking down the 5.7 issues & thx for the charm work around
[13:12] <gnuoy> np
[13:12] <rbasak> Note that I haven't yet uploaded fixes for the bugs I found last night for MySQL itself.
[13:13] <rbasak> Workaround is to not have ~/.my.cnf or ~root/.my.cnf while the maintainer scripts run
[13:13] <rbasak> Upstream will have updated packaging for me on (probably) Monday. It'll have some other fixes too.
[13:13] <rbasak> If that's awkward for anyone please let me know.
[13:14] <rbasak> MySQL itself> I mean packaging src:mysql-5.7
[13:21] <A-Kaser> Hi, I would like to deploy my charm located on lp ( https://code.launchpad.net/~frbayart/charms/trusty/datafellas-notebook/trunk )
[13:21] <A-Kaser> juju deploy cs:~frbayart/trusty/datafellas-notebook doesn't work
[13:21] <A-Kaser> what is the correct url ?
[13:22] <neiljerram> lp: instead of cs: I think
[13:23] <neiljerram> Also need the 'charms' and 'trunk' parts in there - so: lp:~frbayart/charms/trusty/datafellas-notebook/trunk
[13:24] <A-Kaser> I have tried lots of URL formats with cs and lp without success
[13:25] <A-Kaser> I think I have skipped a step
[13:28] <pmatulis> with 'juju deploy', are there just 2 "repositories"? charm store (cs) and local?
[13:40] <rick_h_> pmatulis: yes, and local isn't a repository but a path
[13:41] <rick_h_> A-Kaser: does it show in jujucharms.com?
[13:41] <rick_h_> A-Kaser: there's a big time gap between a bzr push and the charmstore seeing it
[13:42] <rick_h_> A-Kaser: there's a new process that's in beta right now for going direct to the charmstore.
[13:42] <rick_h_> A-Kaser: jcastro marcoceppi do we have a first draft from your sprint we could share?
[13:43] <marcoceppi> rick_h_: A-Kaser yes
[13:43] <marcoceppi> https://github.com/marcoceppi/docs/blob/17b54ba8f49d464a0b26b11fab7087705166c6bf/src/en/authors-charm-store.md
[13:44] <marcoceppi> feedback can be placed here https://github.com/juju/docs/pull/975
[13:44] <rick_h_> ty marcoceppi
[13:44] <A-Kaser> rick_h_: no, I'm not on jujucharms.com; I didn't do anything for that
[13:45] <rick_h_> A-Kaser: right, jujucharms.com pulls charms from LP automatically but it takes 1-2hrs
[13:45] <jamespage> gnuoy, for these - https://review.openstack.org/#/q/status:open+branch:master+topic:network-spaces
[13:45] <rick_h_> A-Kaser: the new process that's in testing is the one marcoceppi linked docs for, if you want to try it out and get your charm in immediately to deploy it
[13:45] <jamespage> I don't intend on a full recheck as the code is the same in all versions...
[13:45] <A-Kaser> rick_h_: ok I'm reading marcoceppi doc
[13:46] <A-Kaser> just to know, do new charms need to be declared somewhere in jujucharms
[13:46] <rick_h_> A-Kaser: juju asks jujucharms.com for the charm when you deploy
[13:46] <A-Kaser> or are all repositories with "charms" in the name added ?
[13:46] <rick_h_> A-Kaser: so if jujucharms.com doesn't know about it yet, neither does juju deploy
[13:47] <rick_h_> A-Kaser: all repos in charms are pulled in every 1-2hrs
[13:47] <rick_h_> A-Kaser: or you can use the new system and avoid that old process
[13:48] <A-Kaser> ok
[13:51] <A-Kaser> Ok, I suppose I need to upgrade my charm command
[13:56] <A-Kaser> I'm using ppa:juju/stable, and I don't have the "push" subcommand with "charm"
[13:57] <rick_h_> A-Kaser: yes, it's in the devel one right now. But if you go there you'll end up on juju 2.0 as well.
[13:58] <pmatulis> rick_h_: ty
[13:58] <A-Kaser> I have started with juju2 but I had too many problems with it
[13:58] <A-Kaser> so people asked me to move to v1
[13:58] <rick_h_> A-Kaser: understand, so you should be ok. it'll make juju2 available but it's a different install
[13:59] <rick_h_> or you can wait on ingestion into jujucharms.com for now
[13:59] <rick_h_> A-Kaser: the charm package should hit stable soon. Within the next week, I'd expect
[14:03] <gnuoy> tinwood, got a sec to review https://code.launchpad.net/~gnuoy/charm-helpers/custom-restart/+merge/291372 ?
[14:03] <tinwood> yep, running an amulet test, so will jump on it :)
[14:03] <gnuoy> thanks
[14:03] <A-Kaser> rick_h_: I pushed my repo yesterday
[14:05] <A-Kaser> I'm trying to install charm-tools v2 and keep juju1
[14:28] <gnuoy> rbasak, do you know if percona-server will follow mysql and shift to 5.7 for xenial imminently ?
[14:29] <rbasak> gnuoy: I'm not aware of any plans. I haven't heard from Percona in a while.
[14:30] <gnuoy> rbasak, ack, thanks
[14:32] <beisner> gnuoy, hrm.  i know that your mysql charm MP amulet test passed, but i've got 2 for 2 fails in mysql full amulet after that landed.  weird.    http://pastebin.ubuntu.com/15688809/
[14:34] <gnuoy> beisner, which amulet tests is that, the mysql one?
[14:34] <beisner> yep
[14:34] <gnuoy> beisner, I *think* that is actually the mysql amulet tests being incompatible with the new mitaka schema
[14:34] <gnuoy> s/mitaka/mitaka keystone/
[14:35] <beisner> gnuoy, quite possibly.  we'll need to update the test if so
[14:35] <gnuoy> beisner, the error is a missing column in a keystone table
[14:35] <beisner> indeed
[14:36] <beisner> perhaps crossed paths with a pkg rev in trusty-mitaka uca?  (why it passed earlier, but fails now)
[14:37] <jamespage> tinwood, out and in for ceph osds?
[14:37] <tinwood> done for ceph-osd, about to submit for ceph.
[14:38] <jamespage> tinwood, yeah I was questioning 'out' - that causes an evac of all data off the unit
[14:38] <tinwood> jamespage, I put a note in the actions.yaml that the 'user' needs to do a pause-health before doing an osd pause.
[14:39] <tinwood> jamespage, ... if they want to stop the migration of PGs.
[14:39] <tinwood> jamespage, https://review.openstack.org/#/c/303360/1/actions.yaml
[14:41] <gnuoy> beisner, oh, sorry I missed the point you were making. 019-basic-trusty-mitaka went from working a few hours ago to not working now
[14:41] <jamespage> tinwood, yeah - I was looking at that
[14:42] <jamespage> beisner, I pushed a load of updates into the updates pocket earlier today for mitaka
[14:42] <jamespage> so that might be the cause...
[14:43] <tinwood> jamespage,  I didn't think it should be prescriptive (at the charm level) and enforce a noout on the cluster for the pause? But I'm open to suggestions.
[14:44] <jamespage> tinwood, my base assumption was that we could noout/nodown the osd's on a specific unit
[14:44] <tinwood> jamespage, I thought noout worked at cluster level?
[14:45] <gnuoy> beisner, Bug #1567971
[14:45] <mup> Bug #1567971: Mitaka amulet tests fail <mysql (Juju Charms Collection):New> <https://launchpad.net/bugs/1567971>
[14:45] <jamespage> tinwood, yes that's correct - my assumption was wrong...
[14:46] <A-Kaser> marcoceppi: thx for the doc
[14:49] <beisner> thx gnuoy
[14:49] <A-Kaser> now I have another error with juju deploy cs:~frbayart/datafellas-notebook-0
[14:53] <tinwood> gnuoy, I posted my comments on the charmhelpers change.  comment only, as there was a corner case, but it's (in hindsight) fairly unlikely to be activated.
[14:53] <gnuoy> tinwood, thanks
[15:32] <beisner> coreycb, \o/ @  https://review.openstack.org/#/c/302427/
[15:33] <coreycb> beisner, hooray!
[15:33] <beisner> apache2 init script racing, same as we saw in dashboard as jamespage pointed out
[15:33] <coreycb> gnuoy, jamespage, this ^ could use a review and +2 by an openstack charmer, it's blocking rockstar
[15:33] <gnuoy> coreycb, I can take a look
[15:34] <coreycb> gnuoy, thanks
[15:35] <gnuoy> beisner, I'm working on the assumption that Bug #1567741 is an apache race and am looking to land https://code.launchpad.net/~gnuoy/charm-helpers/custom-restart/+merge/291372 which will allow us to pass a custom restart function to the restart_on_change function. We can use that to replace the redefinition of restart_on_change that we've done in some places, like openstack_dashboard
[15:35] <mup> Bug #1567741: get_keystone_manager exhausts retries on:  Unable to establish connection to http://localhost:35347/v2.0/OS-KSADM/services <uosci> <keystone (Juju Charms Collection):New> <https://launchpad.net/bugs/1567741>
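(A minimal sketch of the kind of pluggable-restart decorator gnuoy describes; names and details are illustrative, not the actual charm-helpers merge proposal:)

```python
# Hedged sketch: a restart_on_change-style decorator that accepts a
# custom restart function, so callers can swap in e.g. a
# stop/check-pids/start sequence instead of a plain restart.
import functools
import hashlib
import os


def _file_hash(path):
    """Hash a config file's contents; None if it does not exist."""
    if not os.path.exists(path):
        return None
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()


def restart_on_change(restart_map, restart_func=None):
    """After the wrapped hook runs, restart any service whose config
    file (a key in restart_map) changed during the call."""
    if restart_func is None:
        # Default behaviour: plain service restart
        restart_func = lambda svc: os.system('service %s restart' % svc)

    def wrap(f):
        @functools.wraps(f)
        def wrapped(*args, **kwargs):
            before = {p: _file_hash(p) for p in restart_map}
            result = f(*args, **kwargs)
            for path, services in restart_map.items():
                if _file_hash(path) != before[path]:
                    for svc in services:
                        restart_func(svc)
            return result
        return wrapped
    return wrap
```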
[15:37] <gnuoy> jamespage, I don't suppose you have time for https://code.launchpad.net/~gnuoy/charm-helpers/custom-restart/+merge/291372 ?
[15:37] <jamespage> I was about to say I'm spinning my wheels looking for something to do
[15:37] <jamespage> gnuoy, yes
[15:38] <gnuoy> thanks
[15:38] <jamespage> gnuoy, coreycb: you have 'liberty' at the top of your nova.conf - I'd ask you to fix that and we'll push that review through without OSCI...
[15:39] <coreycb> jamespage, shoot, ok
[15:39] <beisner> gnuoy, ack & tyvm.  that looks handy.
[15:42] <jamespage> gnuoy, +1 landed for you
[15:42] <gnuoy> jamespage, fantastic, thank you
[15:42] <coreycb> jamespage, I pushed a new patch series to my review if you want to land it, or I can
[15:44] <jamespage> coreycb, OK I'll take that - gnuoy <<<<<
[15:44] <jamespage> coreycb, maybe you could peek at https://review.openstack.org/#/q/status:open+branch:master+topic:network-spaces for me :-0
[15:46] <jamespage> coreycb, +2/+1 workflow on your nova-cc change
[15:50] <coreycb> jamespage, thanks, sure I'll take a look
[16:00] <gnuoy> beisner, I've given osci https://code.launchpad.net/~gnuoy/charms/trusty/mysql/1567971/+merge/291388 to chew on, hopefully it should fix mitaka amulet.
[16:01] <beisner> gnuoy, i think the xenial-mitaka target will fail there, because it will pull in the stable keystone charm
[16:01] <gnuoy> beisner, on a related note, once 16.04 is done could we s/mysql/percona/ across all our amulet tests and stop worrying about the mysql charm ?
[16:01] <gnuoy> beisner, that +x to xenial was a mistake
[16:01] <beisner> gnuoy, that should be the goal.  but first:  refactor pxc's amulet tests so that they cover something other than trusty.
[16:02] <gnuoy> ack
[16:03] <gnuoy> beisner, -x'd xenial in the mp
[16:03] <beisner> coolio ta gnuoy
[16:04] <gnuoy> ok, keystone apache race here I come
[16:06] <beisner> gnuoy, added 2 cards re: mysql->pxc in charm backlog for 16.10, but we should aim for 16.07 on that.
[16:07] <gnuoy> beisner, ack, +1. Might be a nice thing to twiddle on during ODS
[16:08] <beisner> gnuoy, if i've got nothing else pressing, i'd like to do that.  i usually take on one fairly major amulet or spec addition or refactor during ods and sprint time.
[16:08] <beisner> or if you'd like to take it on, quite ok too
[16:09] <gnuoy> beisner, be my guest, I would be forced to buy you beer though
[16:09] <beisner> heat, rmq, lxd tests all came about ~sprint time.   will work for beer
[16:09] <gnuoy> \o.
[16:09] <gnuoy> \o/ I meant
[16:33] <jamespage> gnuoy, yah - ready to review that apache keystone race fix once you have it ready
[16:33] <jamespage> keep hitting it on a recheck-full
[16:34] <gnuoy> jamespage, trying to decide how professional to be at the moment. Either an easy stop-sleep-start or a slightly more involved stop-check-pids-start
[16:35]  * gnuoy thinks it's beginning to show that my xchat spell checker is broken
[16:36] <beisner> gnuoy, let's have a look at openstack-dashboard where nearly the same was worked around
[16:36] <gnuoy> beisner, that's got a sleep
[16:36] <beisner> bah
[16:37] <beisner> a fixed sleep is an invitation to race elsewhere
[16:37] <beisner> i like check; sleep; loop; with a max total wait.
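(beisner's check-sleep-loop with a bounded total wait might look roughly like this; the function names are illustrative, and the caller is assumed to supply the stop/start actions and a "pids really gone" check:)

```python
# Hedged sketch of a bounded poll loop: check, sleep, loop, with a max
# total wait, instead of a fixed one-shot sleep that can still race.
import time


def wait_for(predicate, timeout=30.0, interval=0.1):
    """Poll predicate until it returns True, giving up after timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False


def restart_with_pid_check(stop, start, stopped, timeout=30.0):
    """Stop the service, wait until its old processes are really gone
    (the stop-check-pids-start option), then start it again."""
    stop()
    if not wait_for(stopped, timeout=timeout):
        raise RuntimeError('service did not stop within %ss' % timeout)
    start()
```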
[16:52] <coreycb> I'm trying to get to juju 2.0 from 1.25.3.  The devel ppa doesn't have a new juju-core, only has a new juju2 client.
[16:53] <rick_h_> coreycb: yes, apt-get install juju2
[16:53] <coreycb> rick_h_, you're fast!
[16:53] <rick_h_> coreycb: in xenial it'll just be 'juju'
[16:53] <coreycb> rick_h_, thanks
[16:54] <rick_h_> coreycb: np
[16:54] <rick_h_> coreycb: let us know how it goes. Have fun playing
[16:54]  * rick_h_ sends patience pills to coreycb :)
[16:55] <coreycb> rick_h_, thanks, I may need them.  I've been stuck in packaging land too long.
[16:59] <gnuoy> jamespage, beisner https://review.openstack.org/303538 <- open for initial comments/criticism. I need to fix up the unit tests now
[17:04] <jamespage> gnuoy, looks ok
[17:06] <beisner> gnuoy, def feels like restart_pid_check is an apache2 pkg thing, but i understand why we're doing it here.  do you know if the appropriate pkg bug is raised?
[17:06] <gnuoy> beisner, it has
[17:06] <beisner> gnuoy, i like how this paves the way for optional restart funcs
[17:07] <gnuoy> beisner, there's a comment in the horizon charm which references the bug
[17:07] <beisner> kk thx
[17:07] <beisner> gnuoy, we should prob also comment similarly in restart_pid_check so that a year or 3 from now we know wth that's there.
[17:07] <jamespage> gnuoy, generally looks ok - some minor comments on the review...
[17:08] <gnuoy> beisner, agreed
[17:08] <gnuoy> jamespage, thanks, I'll take a look
[17:11] <jamespage> going to take a break for a bit - suspect I will be back later...
[17:11] <beisner> gnuoy, also 1 comment
[17:11] <coreycb> rick_h_, do I need to purge  juju-core 1.25.3?
[17:11] <rick_h_> coreycb: no
[17:11] <gnuoy> ack, ta
[17:11] <rick_h_> coreycb: they fit side by side
[17:12] <coreycb> rick_h_, ok for some reason /usr/bin/juju --version is still 1.25.3-xenial-amd64
[17:12] <rick_h_> coreycb: for juju2 at beta3, use update-alternatives
[17:15] <coreycb> rick_h_, that's better, thanks. I should have figured that on my own.  sudo update-alternatives --config juju
[17:54] <gnuoy> beisner, jamespage, thanks for the comments, I've updated the pull request
[17:57] <beisner> gnuoy, /melikes
[17:57] <gnuoy> tip top
[18:00] <gnuoy> ok, I'm stepping out for a little bit. I'll be back later if there are any nacks to respond to
[18:01] <beisner> gnuoy, thx. i'll keep an eye on it.  will do a full recheck shortly.
[18:26] <beisner> fyi tinwood, i think the test fail on https://review.openstack.org/#/c/303467/ will be resolved when https://review.openstack.org/#/c/303538/ lands
[18:26] <beisner> gnuoy, mysql amulet test update landed.  thx again for that.
[19:03] <tinwood> beisner, thanks.  I'll hold off investigating for the moment.
[20:23] <c0s> fellas, with juju2 - how would I deploy from local repo? Somehow this
[20:23] <c0s>   juju deploy --repository=`pwd` local:trusty/apache-bigtop-namenode/
[20:23] <c0s> doesn't work, claiming the charm/bundle URL is wrong.
[20:23] <c0s> kwmonroe: cory_fu - any hints? ;) ^^^^
[20:24] <rick_h_> c0s: just deploy the directory. leave local: out
[20:24] <rick_h_> c0s: ./apache-bigtop-namenode
[20:25] <cory_fu> rick_h_: Our charms don't yet include the series in the metadata.  Don't you have to specify the series for local charms?  --series perhaps?
[20:25] <c0s> I see. rick_h_ and if I need to deploy a bundle - it'd be the same, right?
[20:26] <rick_h_> cory_fu: i think you can just --series --force ?
[20:26] <c0s> although, it seems that bundles need to have the info about particular revisions of the charms, hence the repository is needed
[20:26] <rick_h_> c0s: yes, bundle is the same
[20:27] <cory_fu> c0s: Within a bundle.yaml, you currently still need local: but that is supposed to change
[20:27] <c0s> this
[20:27] <c0s>   juju deploy `pwd`/trusty/apache-bigtop-namenode/ --series trusty
[20:27] <c0s> at least started doing something
[20:27] <cory_fu> But if you're talking about deploying a local bundle, you would just: juju deploy bundle.yaml
[20:28] <c0s> cory_fu: "local bundle" as in 'sitting on my laptop', yes
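The working invocations from the exchange above, collected in one place (the charm and bundle paths are the ones from this session and are illustrative):

```shell
# juju 2.0: deploy a local charm by path; no local: prefix and no --repository.
# Charms that don't declare a series in metadata.yaml need --series.
juju deploy ./trusty/apache-bigtop-namenode --series trusty

# A local bundle is deployed the same way, by path to the yaml
juju deploy ./bundle.yaml
```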
[20:38] <c0s> guys, could you share some best practices with me, please?
[20:38] <c0s> Say, I deployed something and it failed.
[20:38] <c0s> Is there a quick way to clean up and re-deploy again?
[20:38] <jamespage> beisner, gnuoy's apache fix recheck-full's ok
[20:38] <jamespage> beisner, wanna do the honors?
[20:38] <beisner> jamespage, yep on it
[20:39] <jamespage> beisner, awesome
[20:39] <LiftedKilt> c0s: same problem - I destroy the model every time, but it leaves the ghost model in list-models
[20:40] <c0s> yeah... I stepped on it last night where I couldn't destroy the controller and it wasn't clear why
[20:40] <c0s> then I realized that I have a ghost model around
[20:40] <cory_fu> c0s: So, I'm struggling with this as well.  The answer is supposed to be one of: fix the issue + upgrade-charm --force-units + resolved --retry, debug-hooks + resolved --retry, or destroy-model + create-model + re-deploy
[20:40] <cory_fu> But all of those are having issues for me in 2.0 beta3
[20:40] <LiftedKilt> c0s: a clean-model or something would be nice - especially with a flag to retain the deployed machines
[20:40] <c0s> got it.
[20:40] <c0s> +1 LiftedKilt
[20:41] <c0s> ok, for now I will just remove the service and the machine. Luckily I only have a single node ;)
[20:41] <cory_fu> LiftedKilt: I believe there's an open issue about cleaning up those models.
[20:41] <c0s> stepping away for 20 to grab a bite
[20:42] <LiftedKilt> c0s: with a single node it's not terrible, but with 20 or so machines and a full openstack deployment it gets annoying
[20:42] <LiftedKilt> cory_fu: yeah I saw something about that a while back. Hopefully it makes it into beta4
[20:44] <c0s> yup, agree LiftedKilt
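The teardown-and-redeploy cycle discussed above, as a rough sketch (model name and bundle path are illustrative; `create-model` is the 2.0 beta3 spelling from the conversation and was still in flux at the time):

```shell
# Tear down the failed deployment (may leave a ghost entry in list-models
# on beta3, as noted above)
juju destroy-model openstack-test

# Create a fresh model and redeploy the same bundle
juju create-model openstack-test
juju deploy ./bundle.yaml
```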
[20:44] <cory_fu> LiftedKilt: I've also been running into an issue where, if I have done an upgrade-charm in one model, then tear that down and start another, deploys of that charm don't work until I upgrade-charm in the new env as many or more times as I had done in the previous env
[20:44] <LiftedKilt> cory_fu: oh that's fun
[20:44] <LiftedKilt> haha
[21:26] <magicaltrout> awww
[21:27] <magicaltrout> a dodgy tomcat charm has broken tomcat until monday
[21:28] <magicaltrout> it's like the juju gods don't want me to get saiku reviewed
[21:30] <magicaltrout> kwmonroe: if you have anything depending on archive.apache.org other than tomcat
[21:30] <magicaltrout> it will fail to deploy until monday
[21:33] <magicaltrout> https://bugs.launchpad.net/charms/+source/tomcat/+bug/1568153 for that reason
[21:33] <mup> Bug #1568153: incorrect reliance on archive.apache.org <tomcat (Juju Charms Collection):New> <https://launchpad.net/bugs/1568153>
[21:35] <magicaltrout> marcoceppi: as the guy with many fingers in pies you should probably be aware of that as well
[21:37] <cory_fu> magicaltrout: We never install directly from *.apache.org for that exact reason (and because things get dropped, and the archive is slow, etc.)
[21:37] <cory_fu> We mirror everything to a jujubigdata S3 bucket
[21:37] <marcoceppi> magicaltrout: huzzah. tbh, tomcat should probably be a layer instead of a charm
[21:37]  * marcoceppi ponders this
[21:38] <cory_fu> marcoceppi: https://i.imgur.com/c7NJRa2.gif
[21:38] <magicaltrout> lol
[21:38] <magicaltrout> fair enough cory_fu, just a heads up
[21:38] <cory_fu> Tomcat could serve as either a base layer or standalone charm, I think.
[21:39] <magicaltrout> marcoceppi: many thanks to your database guys for unpicking my LP rebranding
[21:39] <marcoceppi> cory_fu: totally, but a standalone charm built from a layer ;)
[21:40] <cory_fu> Well, I mean it could serve both roles at the same time, potentially.  If you deploy it directly, it's a standalone charm.  But it could also be coded such that it does the right thing if included via layer.yaml
[21:41] <magicaltrout> well thats sorta why i piped up on the ML about a webapp interface
[21:45] <magicaltrout> 10:45 on friday sounds like a good time to update my boxes... what could possibly go wrong
[21:49] <LiftedKilt> what does "Waiting for agent initialization to finish" mean? I've got an openstack bundle I wrote, and all the charms except the nova-compute are just stuck in limbo
[21:51] <LiftedKilt> bundle for reference: https://paste.ubuntu.com/15699491/
[21:53] <rick_h_> LiftedKilt: run juju status --format=yaml
[21:54] <LiftedKilt> rick_h_: https://paste.ubuntu.com/15699542/
[21:55] <c0s> weird... getting a lot of error messages in the debug like
[21:55] <c0s> unit-apache-bigtop-namenode-0: 2016-04-08 21:55:07 ERROR juju.worker.dependency engine.go:509 "uniter" manifold worker returned unexpected error: preparing operation "install local:trusty/apache-bigtop-namenode-0": failed to download charm "local:trusty/apache-bigtop-namenode-0" from ["https://172.31.4.145:17070/model/3f1319b4-ab01-485f-82f1-27afffe6f1a5/charms?file=%2A&url=local%3Atrusty%2Fapache-bigtop-namenode-0"]: expected sha256 "8a9a5571efb6c9f575f56d54d10cd76
[21:55] <c0s> unit-apache-bigtop-namenode-0: 2016-04-08 21:55:09 ERROR juju.worker.dependency engine.go:509 "metric-collect" manifold worker returned unexpected error: failed to read charm from: /var/lib/juju/agents/unit-apache-bigtop-namenode-0/charm: stat /var/lib/juju/agents/unit-apache-bigtop-namenode-0/charm: no such file or directory
[21:55] <c0s> cory_fu: I am deploying from local repo - any ideas what'd be the issue?
[21:57] <cory_fu> c0s: Yes, that's exactly the upgrade-charm issue I'm running into constantly
[21:57] <cory_fu> c0s: Just do `juju upgrade-charm namenode` until it works
[21:57] <c0s> I am not even doing upgrade ;(
[21:57] <c0s> I am creating new model and deploying fresh
[21:58] <c0s> that's the status
[21:58] <c0s> ID                       WORKLOAD-STATUS JUJU-STATUS VERSION   MACHINE PORTS PUBLIC-ADDRESS MESSAGE
[21:58] <c0s> apache-bigtop-namenode/0 unknown         allocating  2.0-beta3 0             54.183.6.62    Waiting for agent initialization to finish
[21:58] <cory_fu> c0s: Yeah, I think it has something to do with the new model resetting the internal rev of the charm it's looking for but subsequent deploys still incrementing the internal rev in the controller
[21:59] <cory_fu> i.e., every time you `juju deploy namenode` or `juju upgrade-charm namenode`, the controller increments the rev.  But every new model gets its rev set back to 0
[22:06] <c0s> so, shall I nuke the controller cory_fu?
[22:06] <cory_fu> c0s: That would certainly work, but you can probably work around it by doing pointless upgrade-charms, as I mentioned
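cory_fu's workaround amounts to repeating upgrade-charm until the controller's revision counter catches up with what the model expects; a sketch (service name and retry count are illustrative):

```shell
# Workaround for the beta3 rev-mismatch bug discussed above:
# retry upgrade-charm until it succeeds, then stop
for i in 1 2 3 4 5; do
    juju upgrade-charm apache-bigtop-namenode && break
    sleep 5
done
```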
[22:07] <c0s> nah... nuking is easier and surely cleaner. Thanks man!
[22:08] <cory_fu> Sorry you're hitting these beta bugs.  They're certainly irksome
[22:08] <cory_fu> c0s: It's EOD time for me.  Have a good weekend
[22:08] <c0s> no worries.
[22:08] <cory_fu> And safe travels!
[22:08] <c0s> I will push for a couple more hours and then switch off as well - need to pack and stuff. The flight is at noon.
[22:08] <c0s> Thanks! TTL
[22:08] <c0s> Have a good weekend, cory_fu
[22:17] <LiftedKilt> cory_fu: in my debug logs I'm seeing stuff like
[22:17] <LiftedKilt> manifold worker stopped: preparing operation "install cs:~openstack-charmers-next/xenial/openstack-dashboard-200": failed to download charm "cs:~openstack-charmers-next/xenial/openstack-dashboard-200
[22:17] <LiftedKilt> is that the same problem you are looking at?
[22:19] <cory_fu> LiftedKilt: Possibly.  I've seen it show that "failed to download" message in two different ways.  One mentions the hashes not matching, which can be worked around with upgrade-charm, but I've also occasionally seen it mention "Bad request" and then I have to just destroy the controller and re-bootstrap
[22:19] <LiftedKilt> cory_fu: Ok yeah I'm seeing the bad request 400 message
[22:20] <cory_fu> Yeah, I have no idea what causes that or how to get around it other than starting over
[22:20] <cory_fu> LiftedKilt: Now would be a good time for you to open a bug about that one
[22:21] <cory_fu> I can't reproduce it consistently and didn't open a bug about it the times it happened to me (shame on me)
[22:21] <LiftedKilt> cory_fu: I'll check on Monday and open one then - I want to try and get openstack up and running before end of the day
[22:21] <cory_fu> ha.  That's about what I said to myself when I didn't open a bug about it before.
[22:31] <LiftedKilt> cory_fu: Thanks for guilting me into it haha
[22:31] <LiftedKilt> https://bugs.launchpad.net/juju-core/+bug/1568176
[22:31] <mup> Bug #1568176: charm deployment requests invalid revision number <juju-core:New> <https://launchpad.net/bugs/1568176>
[22:32] <LiftedKilt> linked it in case you wanted to add any pertinent info
[23:00] <dsc> would anyone here know how to provide MAAS credentials to juju2? all of the previous maas schema won't be accepted
[23:01] <mattrae> dsc: this solution worked for me http://askubuntu.com/a/744065
[23:02] <dsc> thanks so much...I was looking through the code to try to figure out how it was intended to be parsed
[23:02] <mattrae> sure no problem
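For reference, the juju 2.0 credentials file that the askubuntu answer walks through looks roughly like this (cloud name, credential name, and API key are placeholders; `auth-type: oauth1` with `maas-oauth` is my understanding of the 2.0 MAAS schema and may have shifted between betas):

```shell
# Append a MAAS credential to juju 2.0's credentials file
# (cloud/credential names and the API key are placeholders)
mkdir -p ~/.local/share/juju
cat >> ~/.local/share/juju/credentials.yaml <<'EOF'
credentials:
  my-maas:
    admin:
      auth-type: oauth1
      maas-oauth: <MAAS-API-KEY>
EOF
```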
[23:29] <dsc> it looks like the MAAS REST API has changed in version 2.0+ and in my search of the juju source I couldn't find any references to the api path that is appended to the uri other than in test files. anyone know where that is set (I need to change /MAAS/api/1.0/ to /MAAS/api/2.0/)
[23:30] <LiftedKilt> dsc: maas 2.0 support is still pending for juju2
[23:30] <dsc> I checked out the maas2 branch to look for it