[00:01] <LaserAllan> JanC: you still here?
[08:37] <frickler> jamespage: coreycb: neutron-8.1.2 fails to rebuild for me because it misses https://review.openstack.org/321791, and nova-13* seems to have an issue with paramiko, also getting nova-13.1.0 would be nice
[08:37] <frickler> jamespage: on a related note, do you know why ceph and keystone are still stuck in the xenial queue?
[09:16] <lordievader> Good morning.
[10:13] <cpaelzer> rbasak: I saw you already answering mails this morning, would you have the time to make pacemaker available for the merge process via the importer?
[10:14] <cpaelzer> rbasak: I sent a mail to the list already earlier this morning if you want to reply for tracking
[10:15] <cpaelzer> not urgent though, just one of the next two things on my list - so I thought it was worth a ping ahead of time
[10:16] <cpaelzer> and the list is to the horizon and back anyway :-)
[10:28] <rbasak> cpaelzer: OK, I started the import of pacemaker. It may take a while.
[10:45] <jamespage> cpaelzer, hey - revisiting the work I started on ovs 2.6 today - had some test failures first time round...
[10:52] <jamespage> frickler, poking at the ceph sru - not sure why that's blocked - we've had a standing mre exception with the sru team for the last 4 years, they normally go through pretty quickly
[10:53] <jamespage> I'll let coreycb answer the other two - or ddellav might know as well
[10:57] <jamespage> cpaelzer, hmm suspect I'm going to need a dpdk 16.04 build
[10:58] <jamespage> http://paste.ubuntu.com/19068274/
[11:02] <cpaelzer> jamespage: great to hear
[11:03] <cpaelzer> jamespage: I'm already on DPDK 16.07 and I see the matching OVS patches in the OVS entry queue
[11:03] <cpaelzer> jamespage: I doubt it, but let me know if I can help
[11:03] <jamespage> cpaelzer, I'll try with your 16.04 and 16.07 PPAs
[11:04]  * cpaelzer hands a virtual thank-you-beer to jamespage
[11:04] <jamespage> thank me when I have it done :-)
[11:04] <cpaelzer> you can collect those and turn them into real ones when we next meet
[11:06] <cpaelzer> rbasak: that is why I asked in advance - thanks for starting it
[11:24] <frickler> jamespage: coreycb: we would also very much like to get an update to keystonemiddleware-4.4.1 incorporating the fix for https://bugs.launchpad.net/keystonemiddleware/+bug/1533724, this crashed one of our deployments over the weekend
[11:32] <jamespage> tyhicks, hey - quick question - I'd like to try to put one of the remote console access stacks into main this cycle for openstack
[11:32] <jamespage> tyhicks, it will require some MIRs but I wanted to see if you had a preference - there are some choices
[11:33] <jamespage> tyhicks, I think the choice is novnc or spice
[11:33] <jamespage> spice is already in main, but the html shim is not yet
[11:36] <valluttaja> anyone familiar with mod_wsgi?
[11:36] <jamespage> ish
[12:38] <coreycb> frickler, hi, we've got nova 13.1.0 in progress.  what was the issue with paramiko?
[12:40] <coreycb> frickler, ddellav is testing keystone in xenial-proposed, he may have an update
[12:45] <coreycb> frickler, we'll get keystonemiddleware 4.4.1 into the SRU queue soon
[13:14] <ddellav> coreycb frickler I had tested that a while ago and it passed, but I'm running them again now just to be safe. I'll keep you updated on the outcome.
[13:14] <coreycb> ddellav, alright let's try to get those bugs marked verification-done today
[13:16] <ddellav> coreycb lp:1592865?
[13:18] <coreycb> ddellav, yes
[13:25] <coreycb> frickler, btw neutron 8.1.2 builds ok for me without that patch
[13:45] <le_pig> hmm
[13:54] <frickler> coreycb: looks like I had cluttered my build node by doing some devstack runs, which pip-installed newer libraries that got used during the build tests and made them fail. I'm rerunning the build now on a fresh node
[13:56] <v1s> I'm running ubuntu 16.04 server. I have a usb2eth adapter connected to the wan, and the built-in eth and wifi for the local network. I am using hostapd / bridge-utils / dnsmasq. The wan is working fine, but only one system connecting to the wifi is pingable; I see the other systems in the dhcp client list but can't reach any of them. Any ideas? Can post any conf to check
[14:03] <cpaelzer> v1s: so your wireless and your built-in net are on the same network ip/netmask range?
[14:15] <v1s> @cpaelzer: yes they are bridged using ip 10.20.30.1/24
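For reference, a bridged-AP setup like the one v1s describes is typically wired up roughly as below. This is a sketch only: the interface names `eth0`/`wlan0` and bridge name `br0` are assumptions, and a wireless NIC in AP mode is normally attached to the bridge by hostapd itself (via its `bridge=` option) rather than listed in `bridge_ports`:

```
# /etc/network/interfaces (fragment)
auto br0
iface br0 inet static
    address 10.20.30.1
    netmask 255.255.255.0
    bridge_ports eth0        # wired LAN port only; hostapd adds wlan0

# /etc/hostapd/hostapd.conf (fragment)
interface=wlan0
bridge=br0
```

When wifi clients show up in the DHCP lease list but cannot be reached, AP client isolation (`ap_isolate=1` in hostapd.conf) and ebtables/iptables rules filtering bridged traffic are common culprits to rule out.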
[14:19] <tyhicks> jamespage: hi - I'm not familiar enough with either project to have a preference
[14:19] <tyhicks> jamespage: considering that spice is already in main, that might be the route that results in the least amount of code going from universe to main
[14:20] <jamespage> tyhicks, that was my thinking as well
[14:20] <tyhicks> sarnold: once you start your day, can you chime in on whether you have any preference here? ^
[14:23] <mdeslaur> tyhicks, jamespage: please make it spice, it's a better choice and supports(will support?) 3d acceleration
[14:23] <jamespage> sounds like we are all in agreement :-)
[14:25] <tyhicks> sounds good
[14:37] <frickler> coreycb: o.k., all builds ran fine now, sorry for the confusion.
[14:38] <frickler> can I download packages for stuff in the SRU queue somewhere or would I have to build them myself?
[14:38] <coreycb> frickler, ok good, no problem
[14:39] <coreycb> frickler, you could get them from here, for core packages at least: https://code.launchpad.net/~ubuntu-server-dev/+git
[14:40] <coreycb> frickler, it would probably make more sense for us to just poke the sru team to get things moving along
[14:42] <coreycb> ddellav, mind poking the sru team for a review of keystone 9.0.2?  that's still in the review queue.
[14:43] <frickler> coreycb: ddellav: also python-keystoneauth1-2.4.1 pls, assuming jamespage has already done enough poking for ceph ;)
[14:46] <coreycb> ddellav, frickler: I just asked infinity for a review of those in #ubuntu-devel
[18:49] <coreycb> ddellav, jamespage: for the mitaka keystonemiddleware point release I did it in an ubuntu/mitaka branch on alioth. That seems to make more sense for a dependency that we share with debian, rather than just working from the archive without a repo.
[18:49] <coreycb> http://anonscm.debian.org/cgit/openstack/python-keystonemiddleware.git/?h=ubuntu/mitaka
[19:20] <coreycb> frickler, a few of the packages we discussed earlier are in the xenial review queue now: https://launchpad.net/ubuntu/xenial/+queue?queue_state=1&queue_text=
[19:25] <codepython777> If I set a variable in /etc/profile, will it always be available in my bash shell?
[19:27] <tarpman> codepython777: the "Invocation" section in the bash(1) man page talks about the conditions under which bash runs specific startup files
[19:37] <codepython777> tarpman: it worked it seems, thanks
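The distinction tarpman points to can be checked directly: /etc/profile is read by login shells, while non-login interactive shells read ~/.bashrc instead, so a variable exported in /etc/profile reaches a shell only if it (or an ancestor) was started as a login shell. bash reports its own invocation mode via the `login_shell` shell option:

```shell
# A plain "bash -c" is a non-login shell and does not read /etc/profile;
# adding -l makes bash behave as a login shell, which does.
bash -c  'shopt -q login_shell && echo login || echo non-login'   # prints: non-login
bash -lc 'shopt -q login_shell && echo login || echo non-login'   # prints: login
```

This matches the "Invocation" section of bash(1): the file is picked up per invocation mode, not "always".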
[20:00] <coreycb> ddellav, jamespage: I see trove has b2 out for newton so I'm going to do that now and keep track of what's done in our spreadsheet
[20:00] <ddellav> coreycb ok sounds good
[20:32] <kwoot> Anybody awake for some br0 troubles?
[20:33] <kwoot> please?
[20:35] <Sling> maybe ask a real question instead
[20:41] <kwoot> Good idea! So, I switched disks on a system to upgrade the hardware. Of course this also means 2 new mac addresses on a dual-homed system. On it runs a small kvm host (a webserver that serves several small sites). Thing is, everything seems to work, but networking from the kvm host does not. It should run over br1, and br1 has bridge_ports eth4 (the system renames them from eth0 and eth1 to eth3 and eth4).
[20:42] <kwoot> I have no clue why it is failing at this time.
[20:43] <kwoot> The internal network can connect through the dual-homed system as usual, the system can ping everybody (including the kvm host), but the kvm host can not ping anybody. Local time is almost 23:00 so I could be missing the obvious here.
[20:45] <kwoot> correction: the kvm host can ping itself and the ip of the br1 interface, but not the gateway/modem
[20:54] <kwoot> reboot. no joy
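A hardware swap with new NICs is exactly the case where stale udev persistent-net rules bite: on this era of Ubuntu, /etc/udev/rules.d/70-persistent-net.rules pins interface names to MAC addresses, so the new NICs get pushed to fresh names (here eth3/eth4) while the bridge config may still reference the old ones. A sketch of what such stale entries look like (the MACs shown are placeholders, not from this log):

```
# /etc/udev/rules.d/70-persistent-net.rules (example entries)
# Stale lines for the old MACs reserve eth0/eth1, so the replacement
# NICs are named eth2/eth3/eth4...  Deleting the stale lines (or
# editing the MACs) and rebooting restores the original names.
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:02", NAME="eth1"
```

The symptom in the log (the kvm host can ping br1's own IP but not the gateway) is also consistent with the upstream gateway/modem still holding the old MAC in its ARP cache; power-cycling the modem or waiting for the ARP entry to expire is worth trying.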
[21:14] <coreycb> jamespage, ddellav: CI should be mostly blue after the next round of builds go through.  I didn't update nova-lxd, the snapshot tar is 13.0.0, but 13.0.0 was already released so I wasn't sure what was up with that.