[00:11] <teward> sarnold: there might be after we fuss with the dynamic module stuff
[00:11] <teward> because we'll have a 'hybrid' build but the core static nginx will have the same modules compiled in for all flavors
[00:11] <teward> so we have a 'base' set of stuff
[00:12] <teward> plus additional addon dynamic modules
[00:12] <teward> hence the clusterf*** that is a hybrid between dynamic and static builds
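[Editor's note: a rough way to see the static/dynamic split teward describes. nginx reports its compiled-in configure flags via `-V`, while dynamic add-on modules ship as separate `.so` files enabled with `load_module`. A sketch; the module paths are typical Debian/Ubuntu locations, not verified against any particular flavor of the package:]

```shell
# Configure arguments baked into the static binary;
# the --with-*_module entries are compiled in for every flavor
nginx -V 2>&1 | tr ' ' '\n' | grep -- '--with'

# Dynamic add-on modules are separate shared objects, enabled
# per-flavor with a directive at the top of nginx.conf such as:
#   load_module modules/ngx_http_image_filter_module.so;
ls /usr/lib/nginx/modules/ 2>/dev/null
```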
[00:12] <teward> oh fun, looks like it doesn't build in Precise anymore
[00:12] <teward> Figures.
[00:14] <sarnold> awww bugger
[00:14] <sarnold> I was impressed that it looked like it was going to
[00:14] <sarnold> precise's toolchain is feeling pretty old at this point
[00:14] <sarnold> even trusty is feeling .. not so trusty. heh.
[00:16] <teward> sarnold: yeah, well
[00:17] <teward> sarnold: i already pulled Precise support from the mainline PPA
[00:17] <teward> several months ago
[00:17] <teward> just pulled a "Nothing past 1.10.1" on the Stable PPA for Precise
[00:17] <teward> but it won't build, so...
[00:17] <teward> sarnold: it *looks* like it builds everywhere else
[00:17] <teward> so blah
[00:17] <teward> but, of course, that's PPAs, not the standard repos
[00:17] <teward> so I can do what I want there :P
[00:18] <teward> but it's definitely a nice test bed for a merge build test
[00:18] <sarnold> teward: but that diff looked promisingly small enough that it's probably also right :)
[00:20] <teward> sarnold: you're right, but it doesn't want to behave in 12.04
[00:20] <teward> so blah
[00:21] <teward> there's a "Please backport" bug in place for Zesty -> Yakkety+Xenial+Trusty
[00:21] <teward> not sure if that falls under SRU policies
[00:21] <teward> but even without dynamic modules, it'll need those build flag changes
[02:42] <AlecTaylor> hi
[02:42] <AlecTaylor> Any chance someone can assign this bug? - https://bugs.launchpad.net/cloud-images/+bug/1569237
[08:54] <lordievader> Good morning.
[08:55] <cpaelzer> Hi lordievader
[08:55] <lordievader> Hey cpaelzer, how are you?
[08:58] <cpaelzer> good, thanks for asking
[08:59] <cpaelzer> how about you lordievader - day still ok?
[09:03] <AlecTaylor> Any chance someone can assign this bug? - https://bugs.launchpad.net/cloud-images/+bug/1569237
[09:09] <cpaelzer> Odd_Bloke: you were on this bug before and it has too much context unknown to me to answer - could you once more look at the bug AlecTaylor pinged about?
[09:22] <lordievader> cpaelzer: Day is quite okay here, yes :)
[09:29] <ktechmidas> Anyone here use LXD? I have /var/lib/lxd on a separate hard drive, I pulled it out of one machine and plugged it into another in the hope it would just work. I see all my containers on the new machine, but just "ERROR" next to all of them
[09:29] <ktechmidas> how can I get it working on my new machine?
[09:32] <ktechmidas> I'm using ZFS... but it appears there is nothing in the usual /var/lib/lxd/containers/container directories
[09:32] <ktechmidas> so it maybe hasn't mounted properly?
[09:32] <ktechmidas> not sure
[09:34] <ktechmidas> does it even support offline migration?
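[Editor's note: if the ZFS pool from the old machine was never imported on the new one, the datasets backing the containers won't be mounted, which would explain the empty /var/lib/lxd/containers directories. A rough recovery sketch; the pool name `lxd`, the container name, and the availability of the `lxd import` disaster-recovery subcommand are all assumptions about this particular setup and LXD version:]

```shell
# List pools visible on attached disks but not yet imported
sudo zpool import

# Import the pool that backed /var/lib/lxd (pool name is a guess)
sudo zpool import lxd

# Confirm the container datasets are mounted again
zfs list -r lxd

# LXD versions of this era offered a per-container disaster-recovery
# import once the storage was visible again (container name is hypothetical)
sudo lxd import mycontainer
```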
[09:55] <Odd_Bloke> rbasak: Thanks for your response to that Vagrant bug. :)
[10:25] <AlecTaylor> hi
[10:26] <AlecTaylor> Any chance someone can assign this bug? - https://bugs.launchpad.net/cloud-images/+bug/1569237
[10:27] <Odd_Bloke> AlecTaylor: o/
[10:27] <Odd_Bloke> AlecTaylor: rbasak posted a comment with a pointer at IRC logs this morning; did you have a chance to read through that?
[10:28] <Odd_Bloke> AlecTaylor: The TL;DR is that we have two different classes of users for our Ubuntu box: Ubuntu users who happen to use Vagrant, and Vagrant users who happen to use Ubuntu.  Finding a way to make both parties happy has proved to be challenging, to say the least.
[10:28] <AlecTaylor> Odd_Bloke: Nope, just scrolled through my logs, I must've been logged out when he replied
[10:29] <Odd_Bloke> AlecTaylor: It was a bug comment, rather than a comment in IRC. :)
[10:29] <Odd_Bloke> AlecTaylor: We regularly observe that clouds that ask us to change the default user from ubuntu to something else get push back from Ubuntu users, because they expect the ubuntu user to be present.
[10:29] <AlecTaylor> Odd_Bloke: Quick question: what's the default Xenial password?
[10:29] <Odd_Bloke> So switching from ubuntu->vagrant just alienates a different section of our userbase.
[10:30] <AlecTaylor> Yeah was thinking two builds or something
[10:30]  * AlecTaylor just surprised himself, `vagrant ssh` just worked :O
[10:30] <AlecTaylor> That was failing earlier today
[10:31] <AlecTaylor> Hmm let me try again
[10:31] <Odd_Bloke> I haven't observed problems with `vagrant ssh` when they've been reported before.
[10:31] <Odd_Bloke> s/observed/reproduced/
[10:31] <Odd_Bloke> Which has obviously made fixing them... difficult. ^_^
[10:33] <AlecTaylor> Odd_Bloke: I was just reading through `man ssh`, looking for a way to quiet the password auth
[10:33] <AlecTaylor> Anyway `ssh -i ~/tmp/1ed4f71347864691b097db406f555b6a/.vagrant/machines/default/virtualbox/private_key ubuntu@127.0.0.1` is prompting me for a password
[10:34] <AlecTaylor> But `~/tmp/1ed4f71347864691b097db406f555b6a$ vagrant ssh` works
[10:34] <AlecTaylor> What am I missing?
[10:35] <Odd_Bloke> AlecTaylor: @127.0.0.1 would be your host not the Vagrant guest, right?
[10:35] <AlecTaylor> Ahh silly me
[10:35] <AlecTaylor> Yeah was just thinking that
[10:36] <AlecTaylor> Hmm, I know I can find it with `ip addr` or `ifconfig`, but is there a `vagrant` command for it, like `vagrant ssh-config`?
[10:36] <Odd_Bloke> ssh -i .vagrant/machines/default/virtualbox/private_key ubuntu@127.0.0.1 -p 2222  # WFM
[10:38] <AlecTaylor> Thanks
[10:38] <Odd_Bloke> AlecTaylor: FWIW, I worked that out by doing `vagrant ssh --debug` and seeing this line: INFO ssh: Invoking SSH: ssh ["ubuntu@127.0.0.1", "-p", "2222", "-o", "Compression=yes", "-o", "DSAAuthentication=yes", "-o", "LogLevel=FATAL", "-o", "IdentitiesOnly=yes", "-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null", "-i", "/home/daniel/.vagrant/machines/default/virtualbox/private_key"]
[10:38] <AlecTaylor> Ahh neat
[10:39] <Odd_Bloke> I'm not sure if there's a better way to work out that port number.
[10:39] <AlecTaylor> vagrant port
[10:39] <Odd_Bloke> Presumably if you have more than one Vagrant machine running it's not the same for all of them?
[10:39] <AlecTaylor> I'm already parsing all the output in Python so that's fine, so thanks
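[Editor's note: `vagrant ssh-config` does answer the port question directly — it emits an ssh_config stanza per machine, including the forwarded port, so nothing needs hard-coding. A sketch that parses a sample of that output; the sample text mirrors the values seen in this session and is illustrative, not captured output:]

```shell
# In real use: cfg=$(vagrant ssh-config)
# Hypothetical sample, matching the session above:
cfg='Host default
  HostName 127.0.0.1
  User ubuntu
  Port 2222
  IdentityFile /home/daniel/.vagrant/machines/default/virtualbox/private_key'

# Pull out the forwarded SSH port instead of hard-coding 2222
port=$(printf '%s\n' "$cfg" | awk '/^[[:space:]]*Port /{print $2}')
echo "$port"
```

With multiple machines running, pass the machine name (`vagrant ssh-config web`) to get that machine's stanza; each gets its own forwarded port.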
[11:40] <saju_m> Hi
[11:42] <saju_m> I have a doubt related to ubuntu server reboot
[11:42] <saju_m> I have a node (ubuntu 14.04.03 LTS) which is part of a cassandra, zookeeper and rabbitmq cluster. I want to reboot this node. What precautions should I take before restarting it? Since it is part of a cluster, the data should be backed up automatically by the other nodes in the cluster. But I am afraid to restart it directly. Please suggest some ideas.
[11:46] <rbasak> saju_m: you're welcome to ask that question here, but it sounds like that's more of a cassandra, zookeeper and rabbitmq question. You might get a better answer asking in their community help areas.
[11:47] <saju_m> zookeeper
[11:48] <saju_m> rbasak, thanks, let me check
[11:49] <saju_m> apache
[12:45] <coreycb> zul, today is non-client library freeze for ocata
[12:46] <coreycb> fyi
[12:46] <zul> coreycb: yeah i know...that's *all* i'm going to be doing today fyi
[12:47] <coreycb> zul, sounds good
[12:48] <zul> good morning btw
[13:34] <zioproto> jamespage: are you familiar with the packages python-networking-l2gw and neutron-l2gateway-agent
[13:34] <zioproto> looks like to use this neutron feature you need to add more tables to the database
[13:34] <jamespage> zioproto, rings some bells
[13:34] <zioproto> I installed the ubuntu packages in Mitaka but the alembic migration fails
[13:35] <jamespage> zioproto, ah right now I remember
[13:35] <zioproto> I opened a bug
[13:35]  * jamespage thinks a bit harder
[13:35] <zioproto> https://bugs.launchpad.net/networking-l2gw/+bug/1657747
[13:35] <zioproto> what kind of testing does this stuff get when packaged for ubuntu ?
[13:35] <zioproto> should I expect the alembic migrations to get through ?
[13:37] <zioproto> is this some code a vendor packaged just to do some PoC, or is it something worth trying in production as far as you know ?
[13:39] <jamespage> zioproto, I think it was put in archive to support vmware-nsx
[13:39] <jamespage> but that's all a bit of a mess as well atm
[13:39] <jamespage> these things all fall outside of the core neutron governance so quality and release alignment can be a bit variable
[13:40] <zioproto> sounds like a plan to abandon the thing
[13:42] <jamespage> zioproto, I remember that at least at release it worked (I have my fingerprints in the changelog after all)
[13:42] <jamespage> it's possible we've had some level of drift against mitaka point releases
[13:45] <zioproto> it looks like a foreign key check problem
[13:45] <zioproto> I can't create a table
[13:45] <zioproto> but even disabling the foreign key checks as a test did not let me create the table
[13:45] <zioproto> this specific alembic migration has not changed much over time
[13:45] <jamespage> zioproto, what db backend are you using?
[13:45] <zioproto> according to the git repo
[13:45] <zioproto> mysql
[13:46] <jamespage> the migrations are working ok for me in a mitaka deployment I had up for testing
[13:46] <zioproto> so maybe it's because this is a db
[13:46] <zioproto> that has been upgraded since icehouse
[13:46] <zioproto> it could be that it's different from a fresh mitaka db
[13:46] <jamespage> zioproto, well that is more than likely
[13:46] <jamespage> I wonder whether it's a networks.id mismatch on type
[13:47] <zioproto> varchar(36)
[13:47] <zioproto> | id                      | varchar(36)  | NO   | PRI | NULL    |       |
[13:47] <zioproto> this is the output of 'describe networks;'
[13:49] <jamespage> zioproto, no that matches ok
[13:49] <jamespage> hmm
[13:49] <jamespage> puzzling
[13:49] <jamespage> well you raised the bug in the right place - lets see if it gets some attention
[13:50] <jamespage> zioproto, fwiw I only packaged it because it was a dep for vmware-nsx; the testing we've done with nsx does not include any l2gw stuff
[13:50] <zioproto> great
[13:50] <zioproto> is there any way to get good debug information from mysql ?
[13:50] <zioproto> telling why the query fails ?
[13:50] <jamespage> might be something in the mysql error log maybe?
[13:51] <zioproto> I will check
[13:53] <zioproto> bingo
[13:53] <zioproto> the command
[13:53] <zioproto> SHOW ENGINE INNODB STATUS;
[13:53] <zioproto> gives some good info
[13:54] <zioproto> http://paste.openstack.org/show/595618/
[13:54] <zioproto> maybe the key sentence is
[13:54] <zioproto> such columns in old tables
[13:54] <zioproto> cannot be referenced by such columns in new tables.
[13:57] <zioproto> the l2gateway tables are created with the wrong collation
[13:57] <zioproto> utf8_general_ci
[13:57] <zioproto> instead of utf8_unicode_ci
[13:57] <zioproto> I am not an expert
[13:57] <zioproto> I don't know if this is a problem with the query
[14:01] <zioproto> can I pass to neutron-db-manage the collation value it should use ?
[14:02] <zioproto> I have this problem https://bugzilla.redhat.com/show_bug.cgi?id=1320243
[14:12] <zioproto> jamespage: if you do show table status; on your mysql neutron db, do you see all the tables with the same collation ?
[14:14] <jamespage> zioproto, mine are all utf8_general_ci
[14:16] <jamespage> zioproto, is it possible that you switched the default collation between the original install and now?
[14:18] <zioproto> FIXED it !!!
[14:18] <zioproto>  alter database neutron collate utf8_unicode_ci;
[14:18] <zioproto> and I deleted the tables created by the half run alembic migration
[14:20] <zioproto> thanks for the help
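[Editor's note: for anyone hitting the same failure, the diagnosis boils down to: InnoDB refuses foreign keys between columns whose collations differ, and here the new l2gateway tables were being created with the database's default collation (utf8_general_ci) while the older tables used utf8_unicode_ci. A sketch of the check plus the fix zioproto applied; the database name `neutron` is taken from this session, and credentials/connection flags are omitted:]

```shell
# Compare per-table collations against each other; any mismatch
# with the tables a migration references will break foreign keys
mysql -e "SELECT table_name, table_collation
          FROM information_schema.tables
          WHERE table_schema = 'neutron';"

# Align the database default so newly created tables match the
# existing ones, then drop the half-created tables and re-run
# the alembic migration
mysql -e "ALTER DATABASE neutron COLLATE utf8_unicode_ci;"
```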
[15:10] <zioproto> Can't figure out in ubuntu where to configure openvswitch to start the db with a --remote
[15:10] <zioproto> I need to run it like
[15:10] <zioproto> ovsdb-server --remote ptcp:6632:10.225.0.27
[15:10] <zioproto> do I really have to hack the init script ?
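[Editor's note: probably not. On Ubuntu, ovsdb-server is normally started with `--remote=db:Open_vSwitch,Open_vSwitch,manager_options`, meaning it also listens on whatever targets are stored in the database's Manager table — so the extra remote can be configured with `ovs-vsctl` and persists across restarts without touching the init script. A sketch using the address from this session (verify the exact startup flags on your release before relying on this):]

```shell
# Persist a passive TCP listener in the OVSDB Manager table;
# ovsdb-server picks it up via its manager_options remote
sudo ovs-vsctl set-manager ptcp:6632:10.225.0.27

# Verify the configured manager target
sudo ovs-vsctl get-manager
```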
[15:14] <zul> coreycb: im not a debian developer
[15:14] <zul> coreycb: jamespage is
[15:14] <coreycb> zul, k
[15:14] <coreycb> jamespage, , any chance you could upload 0.158 of ubuntu-dev-tools to debian?
[15:17] <zul> coreycb: doing debtcollector
[15:18] <coreycb> zul, ok i'll start from the bottom of the list and let you know if i get to #20
[15:19] <zul> coreycb: ok i got like a factory line going on right now
[15:38] <zul> coreycb: fyi oslo.messaging is ftbfs for me right now
[15:50] <jamespage> er
[16:23] <teward> sarnold: so, now that I fixed the build issues, on to the merge xD
[17:00] <zul> coreycb/jamespage: oslo.middleware needs webob fix as well
[17:01] <coreycb> zul, sigh.. can you add to the bug?
[17:01] <zul> yeah
[18:04] <coreycb> zul, everything from 20->36 is uploaded for ocata (minus mox3, not sure we need it)
[18:06] <zul> coreycb; ok working on 10 - 20
[19:12] <zul> coreycb: ok all libraries either build locally, have been uploaded, are in the archive, or need more prodding
[19:12] <coreycb> zul, awesome
[21:58] <EmilienM> coreycb, zul: I know mwhahaha already told you but the latest OpenStack package update for ocata broke us a lot
[21:58] <EmilienM> do you run CI on the packages? We can't even spawn a VM anymore
[22:23] <LambdaComplex> What init service does Ubuntu Server use?
[22:26] <tarpman> LambdaComplex: which version?
[22:26] <coreycb> EmilienM, it's probably because of webob
[22:27] <coreycb> EmilienM, https://bugs.launchpad.net/ubuntu/+source/python-oslo.middleware/+bug/1657452
[22:27] <coreycb> EmilienM, i could use support if you want to help push on that, if in fact that's what you're hitting.
[22:28] <LambdaComplex> tarpman: 16.04.1 LTS
[22:28] <tarpman> LambdaComplex: systemd
[22:29] <coreycb> EmilienM, we do test, there are just so many moving pieces during the dev cycle that are getting auto-backported etc. ie. dependencies like webob that are not really openstack that get synced from debian.
[22:29] <coreycb> EmilienM, so we test one day and the next day webob is at 1.7.0
[22:29] <LambdaComplex> tarpman: Exclusively? I talked to someone who said something about it being some combination of systemd and upstart, but I'm wondering if he was mistaken
[22:30] <tarpman> LambdaComplex: past versions used upstart. ubuntu desktop might still use upstart for some session management stuff - not sure. phone I think still does
[22:30] <tarpman> LambdaComplex: server should be exclusively systemd at this point AFAIK
[22:31] <EmilienM> coreycb: yes we hit that
[22:31] <EmilienM> mwhahaha: ^ fyi
[22:31] <OerHeks> tarpman +1 , systemd on system level, upstart for user level AFAIK
[22:31] <EmilienM> coreycb: glance is unable to find the image, and we got this webob error
[22:32] <LambdaComplex> OerHeks: But Ubuntu Server is exclusively systemd?
[22:32] <tarpman> LambdaComplex: if you install from a server CD and then "apt-get install ubuntu-desktop", what do you call the result
[22:33] <LambdaComplex> tarpman: ...The Ship of Theseus? :D
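[Editor's note: a quick way to settle the init-system question on any given box is to look at what PID 1 actually is:]

```shell
# PID 1's command name: "systemd" on Ubuntu Server 15.04 and later,
# "init" under upstart or sysvinit (readlink /proc/1/exe can
# disambiguate those two further)
cat /proc/1/comm
```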
[22:33] <coreycb> EmilienM, that sounds like it.  sigmavirus is fixing glance via a separate bug: https://bugs.launchpad.net/ubuntu/+source/glance/+bug/1657459
[22:34] <coreycb> EmilienM, it doesn't look like anyone's working on nova though :(
[22:34] <coreycb> EmilienM, there's a thread on the openstack-dev ML
[23:10] <lynorian> well, check the manifests; if it does not have upstart in the manifest then no, it does not have upstart
[23:11] <lynorian> also apt-cache rdepends will tell you what reverse depends on it
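[Editor's note: concretely, lynorian's two checks might look like this. The manifest grep assumes you have downloaded the release's published .manifest file (the filename shown is illustrative); the rdepends query inspects the local package database:]

```shell
# Is upstart in the image at all? Search the published manifest
grep -w upstart ubuntu-16.04.1-server-amd64.manifest

# What, if anything, on this system still depends on upstart?
apt-cache rdepends --installed upstart
```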