[00:11] sarnold: there might be after we fuss with the dynamic module stuff
[00:11] because we'll have a 'hybrid' build but the core static nginx will have the same modules compiled in for all flavors
[00:11] so we have a 'base' set of stuff
[00:12] plus additional add-on dynamic modules
[00:12] hence the clusterf*** that is a hybrid between dynamic and static builds
[00:12] oh fun, looks like it doesn't build in Precise anymore
[00:12] Figures.
[00:14] awww bugger
[00:14] I was impressed that it looked like it was going to
[00:14] precise's toolchain is feeling pretty old at this point
[00:14] even trusty is feeling .. not so trusty. heh.
[00:16] sarnold: yeah, well
[00:17] sarnold: i already pulled Precise support from the mainline PPA
[00:17] several months ago
[00:17] just pulled a "Nothing past 1.10.1" on the Stable PPA for Precise
[00:17] but it won't build, so...
[00:17] sarnold: it *looks* like it builds everywhere else
[00:17] so blah
[00:17] but, of course, that's PPAs, not the standard repos
[00:17] so I can do what I want there :P
[00:18] but it's definitely a nice test bed for a merge build test
[00:18] teward: but that diff looked promisingly small enough that it's probably also right :)
[00:20] sarnold: you're right, but it doesn't want to behave in 12.04
[00:20] so blah
[00:21] there's a "Please backport" bug in place for Zesty -> Yakkety+Xenial+Trusty
[00:21] not sure if that falls under SRU policies
[00:21] but even without dynamic modules, it'll need those build flag changes
=== SupaYoshi_ is now known as SupaYoshi
[02:42] hi
[02:42] Any chance someone can assign this bug? - https://bugs.launchpad.net/cloud-images/+bug/1569237
=== seyeongkim_ is now known as seyeongkim
=== yokel_ is now known as yokel
=== manjo` is now known as manjo
=== alai` is now known as alai
=== Jalen_ is now known as Jalen
[06:05] What do you think of our Drone product? http://adia.tech/
=== _ruben_ is now known as _ruben
[08:54] Good morning.
[08:55] Hi lordievader
[08:55] Hey cpaelzer, how are you?
[08:58] good, thanks for asking
[08:59] how about you lordievader - day still ok?
[09:03] Any chance someone can assign this bug? - https://bugs.launchpad.net/cloud-images/+bug/1569237
[09:03] Launchpad bug 1569237 in cloud-images "vagrant xenial box is not provided with vagrant/vagrant username and password" [Undecided,New]
[09:09] Odd_Bloke: you were on this bug before and it has too much context unknown to me to answer - could you once more look at the bug AlecTaylor pinged about?
=== disposable3 is now known as disposable2
[09:22] cpaelzer: Day is quite okay here, yes :)
[09:29] Anyone here use LXD? I have /var/lib/lxd on a separate hard drive, I pulled it out of one machine and plugged it into another in the hope it would just work. I see all my containers on the new machine, but just "ERROR" next to all of them
[09:29] how can I get it working on my new machine?
[09:32] I'm using ZFS... but it appears there is nothing in the usual /var/lib/lxd/containers/container directories
[09:32] so it maybe hasn't mounted properly?
[09:32] not sure
[09:34] does it even support offline migration?
[09:55] rbasak: Thanks for your response to that Vagrant bug. :)
[10:12] win 1
[10:25] hi
[10:26] Any chance someone can assign this bug? - https://bugs.launchpad.net/cloud-images/+bug/1569237
[10:26] Launchpad bug 1569237 in cloud-images "vagrant xenial box is not provided with vagrant/vagrant username and password" [Undecided,New]
[10:27] AlecTaylor: o/
[10:27] AlecTaylor: rbasak posted a comment with a pointer at IRC logs this morning; did you have a chance to read through that?
[10:28] AlecTaylor: The TL;DR is that we have two different classes of users for our Ubuntu box: Ubuntu users who happen to use Vagrant, and Vagrant users who happen to use Ubuntu. Finding a way to make both parties happy has proved to be challenging, to say the least.
[10:28] Odd_Bloke: Nope, just scrolled through my logs, I must've been logged out when he replied
[10:29] AlecTaylor: It was a bug comment, rather than a comment in IRC. :)
[10:29] AlecTaylor: We regularly observe that clouds that ask us to change the default user from ubuntu to something else get push back from Ubuntu users, because they expect the ubuntu user to be present.
[10:29] Odd_Bloke: Quick question: what's the default Xenial password?
[10:29] So switching from ubuntu->vagrant just alienates a different section of our userbase.
[10:30] Yeah was thinking two builds or something
[10:30] * AlecTaylor just surprised himself, `vagrant ssh` just worked :O
[10:30] That was failing earlier today
[10:31] Hmm let me try again
[10:31] I haven't observed problems with `vagrant ssh` when they've been reported before.
[10:31] s/observed/reproduced/
[10:31] Which has obviously made fixing them... difficult. ^_^
[10:33] Odd_Bloke: I was just reading through `man ssh`, looking for a way to quiet the password auth
[10:33] Anyway `ssh -i ~/tmp/1ed4f71347864691b097db406f555b6a/.vagrant/machines/default/virtualbox/private_key ubuntu@127.0.0.1` is prompting me for a password
[10:34] But `~/tmp/1ed4f71347864691b097db406f555b6a$ vagrant ssh` works
[10:34] What am I missing?
[10:35] AlecTaylor: @127.0.0.1 would be your host not the Vagrant guest, right?
[10:35] Ahh silly me
[10:35] Yeah was just thinking that
[10:36] Hmm, I know I can find it with `ip addr` or `ifconfig`, but is there a `vagrant` command for it, like `vagrant ssh-config`?
[10:36] ssh -i .vagrant/machines/default/virtualbox/private_key ubuntu@127.0.0.1 -p 2222 # WFM
[10:38] Thanks
[10:38] AlecTaylor: FWIW, I worked that out by doing `vagrant ssh --debug` and seeing this line: INFO ssh: Invoking SSH: ssh ["ubuntu@127.0.0.1", "-p", "2222", "-o", "Compression=yes", "-o", "DSAAuthentication=yes", "-o", "LogLevel=FATAL", "-o", "IdentitiesOnly=yes", "-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null", "-i", "/home/daniel/.vagrant/machines/default/virtualbox/private_key"]
[10:38] Ahh neat
[10:39] I'm not sure if there's a better way to work out that port number.
[10:39] vagrant port
[10:39] Presumably if you have more than one Vagrant machine running it's not the same for all of them?
[10:39] I'm already parsing all the output in Python so that's fine, so thanks
=== chmurifree is now known as chmuri
[11:40] Hi
[11:42] I have a doubt related to ubuntu server reboot
[11:42] I have a node (ubuntu 14.04.03 LTS) which is part of a cassandra, zookeeper and rabbitmq cluster. I want to reboot this node. What are the precautions I should take before restarting this node? Since it is part of a cluster, the data should be automatically backed up by the other nodes in the cluster. But I am afraid to restart it directly. Please suggest some ideas.
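
As a rough sketch of the kind of pre-reboot checks meant here: the commands below are the standard Cassandra, ZooKeeper and RabbitMQ CLIs, but the localhost addresses, default ZooKeeper port 2181 and Ubuntu service names are assumptions about a stock install, not anything confirmed in this discussion. The general idea is to verify the remaining nodes hold the replicas and a quorum before taking this one down.

    # Cassandra: confirm the rest of the ring is healthy, then flush and stop cleanly
    nodetool status            # every other node should show "UN" (Up/Normal)
    nodetool drain             # flush memtables and stop accepting writes on this node
    sudo service cassandra stop

    # ZooKeeper: make sure the ensemble still answers without relying on this node
    echo ruok | nc localhost 2181    # a healthy server answers "imok"
    echo stat | nc localhost 2181    # shows whether this node is leader or follower

    # RabbitMQ: check the other cluster members are running, then leave gracefully
    sudo rabbitmqctl cluster_status
    sudo rabbitmqctl stop_app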
[11:46] saju_m: you're welcome to ask that question here, but it sounds like that's more of a cassandra, zookeeper and rabbitmq question. You might get a better answer asking in their community help areas.
[11:47] zookeeper
[11:48] rbasak, thanks, let me check
[11:49] apache
=== haasn` is now known as haasn
[12:45] zul, today is non-client library freeze for ocata
[12:46] fyi
[12:46] coreycb: yeah i know...that's *all* im going to be doing today fyi
[12:47] zul, sounds good
[12:48] good morning btw
=== jdstrand_ is now known as jdstrand
[13:34] jamespage: are you familiar with the packages python-networking-l2gw and neutron-l2gateway-agent
[13:34] looks like to use this neutron feature you need to add more tables to the database
[13:34] zioproto, rings some bells
[13:34] I installed the ubuntu packages in Mitaka but the alembic migration fails
[13:35] zioproto, ah right now I remember
[13:35] I opened a bug
[13:35] * jamespage thinks a bit harder
[13:35] https://bugs.launchpad.net/networking-l2gw/+bug/1657747
[13:35] Launchpad bug 1657747 in networking-l2gw "Alembic migration l2gateway_models fails when creating tables" [Undecided,New]
[13:35] what kind of testing does this stuff get when packaged for ubuntu?
[13:35] I should expect the alembic migrations to get through?
[13:37] is this some code a vendor packaged just to do some PoC, or is it something worth trying in production as far as you know?
[13:39] zioproto, I think it was put in archive to support vmware-nsx
[13:39] but that's all a bit of a mess as well atm
[13:39] these things all fall outside of the core neutron governance so quality and release alignment can be a bit variable
[13:40] sounds like a plan to abandon the thing
[13:42] zioproto, I remember that at least at release it worked (I have my fingerprints in the changelog after all)
[13:42] its possible we've had some level of drift against mitaka point releases
[13:45] it looks like a foreign key check problem
[13:45] I cant create a table
[13:45] but even disabling the foreign key checks to make a test would not let me create the table
[13:45] this specific alembic migration did not change a lot in time
[13:45] zioproto, what db backend are you using?
[13:45] according to the git repo
[13:45] mysql
[13:46] the migrations are working ok for me in a mitaka deployment I had up for testing
[13:46] so maybe because this is a db
[13:46] that has been upgrading since icehouse
[13:46] could be that is different from a fresh mitaka db
[13:46] zioproto, well that is more than likely
[13:46] I wonder whether its a networks.id mismatch on type
[13:47] varchar(36)
[13:47] | id | varchar(36) | NO | PRI | NULL | |
[13:47] this is the output of 'describe networks;'
[13:49] zioproto, no that matches ok
[13:49] hmm
[13:49] puzzling
[13:49] well you raised the bug in the right place - lets see if it gets some attention
[13:50] zioproto, fwiw I only packaged it because it was a dep for vmware-nsx; the testing we've done with nsx does not include any l2gw stuff
[13:50] great
[13:50] is there any way to get good debug information from mysql?
[13:50] telling why the query fails?
[13:50] might be something in the mysql error log maybe?
[13:51] I will check
[13:53] bingo
[13:53] the command
[13:53] SHOW ENGINE INNODB STATUS;
[13:53] gives some good info
[13:54] http://paste.openstack.org/show/595618/
[13:54] maybe the key sentence is
[13:54] such columns in old tables
[13:54] cannot be referenced by such columns in new tables.
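
The paste itself isn't reproduced in the log, but the "key sentence" quoted above is InnoDB's standard complaint that a foreign key can only reference a column whose character set and collation match its own. A minimal way to compare collations across the schema, assuming the database is called neutron as in a stock deployment:

    # Default collation of the neutron database itself
    mysql -e "SELECT DEFAULT_COLLATION_NAME FROM information_schema.SCHEMATA WHERE SCHEMA_NAME='neutron';"

    # Collation of every existing table; mixed values here are what breaks
    # foreign keys created by a later migration
    mysql -e "SELECT TABLE_NAME, TABLE_COLLATION FROM information_schema.TABLES WHERE TABLE_SCHEMA='neutron';"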
[13:57] the l2gateway tables are created with the wrong collation
[13:57] utf8_general_ci
[13:57] instead of utf8_unicode_ci
[13:57] I am not an expert
[13:57] I dont know if this is a problem with the query
[14:01] can I pass to neutron-db-manage the collation value it should use?
[14:02] I have this problem https://bugzilla.redhat.com/show_bug.cgi?id=1320243
[14:02] bugzilla.redhat.com bug 1320243 in mariadb-galera "Change OpenStack db collation from utf_unicode_ci to utf8_general_ci" [Medium,New]
[14:12] jamespage: if you do show table status; on your mysql neutron db do you see all the tables with the same collation?
[14:14] zioproto, mine are all utf8_general_ci
[14:16] zioproto, is it possible that you switched the default collation between original install and now?
[14:18] FIXED it !!!
[14:18] alter database neutron collate utf8_unicode_ci;
[14:18] and I deleted the tables created by the half-run alembic migration
[14:20] thanks for the help
=== skeezix-hf is now known as Ofir
=== Ofir is now known as skeezix-hf
=== skeezix-hf is now known as Ofir
=== Ofir is now known as skeezix-hf
[15:10] Cant figure out in ubuntu where to configure openvswitch to start the db with a --remote
[15:10] I need to run it like
[15:10] ovsdb-server --remote ptcp:6632:10.225.0.27
[15:10] do I really have to hack the init script?
[15:14] coreycb: im not a debian developer
[15:14] coreycb: jamespage is
[15:14] zul, k
[15:14] jamespage, any chance you could upload 0.158 of ubuntu-dev-tools to debian?
[15:17] coreycb: doing debtcollector
[15:18] zul, ok i'll start from the bottom of the list and let you know if i get to #20
[15:19] coreycb: ok i got like a factory line going on right now
=== JanC_ is now known as JanC
[15:38] coreycb: fyi oslo.messaging is ftbfs for me right now
[15:50] er
[16:23] sarnold: so, now that I fixed the build issues, on to the merge xD
=== ratliff_ is now known as ratliff
[17:00] coreycb/jamespage: oslo.middleware needs webob fix as well
[17:01] zul, sigh.. can you add to the bug?
[17:01] yeah
[18:04] zul, everything from 20->36 are uploaded for ocata (minus mox3. not sure we need it)
[18:06] coreycb: ok working on 10 - 20
[19:12] coreycb: ok all libraries either building locally, been uploaded, in the archive, or need more prodding
[19:12] zul, awesome
[21:58] coreycb, zul: I know mwhahaha already told you but the latest OpenStack package update for ocata broke us a lot
[21:58] do you run CI on the packages? We can't even spawn a VM anymore
[22:23] What init service does Ubuntu Server use?
[22:26] LambdaComplex: which version?
[22:26] EmilienM, it's probably because of webob
[22:27] EmilienM, https://bugs.launchpad.net/ubuntu/+source/python-oslo.middleware/+bug/1657452
[22:27] Launchpad bug 1657452 in OpenStack Identity (keystone) "Incompatibility with python-webob 1.7.0" [Medium,In progress]
[22:27] EmilienM, i could use support if you want to help push on that, if in fact that's what you're hitting.
[22:28] tarpman: 16.04.1 LTS
[22:28] LambdaComplex: systemd
[22:29] EmilienM, we do test, there are just so many moving pieces during the dev cycle that are getting auto-backported etc. ie. dependencies like webob that are not really openstack that get synced from debian.
[22:29] EmilienM, so we test one day and the next day webob is at 1.7.0
[22:29] tarpman: Exclusively? I talked to someone who said something about it being some combination of systemd and upstart, but I'm wondering if he was mistaken
[22:30] LambdaComplex: past versions used upstart. ubuntu desktop might still use upstart for some session management stuff - not sure. phone I think still does
[22:30] LambdaComplex: server should be exclusively systemd at this point AFAIK
[22:31] coreycb: yes we hit that
[22:31] mwhahaha: ^ fyi
[22:31] tarpman +1, systemd on system level, upstart for user level AFAIK
[22:31] coreycb: glance is unable to find the image, and we got this webob error
[22:32] OerHeks: But Ubuntu Server is exclusively systemd?
[22:32] LambdaComplex: if you install from a server CD and then "apt-get install ubuntu-desktop", what do you call the result
[22:33] tarpman: ...The Ship of Theseus? :D
[22:33] EmilienM, that sounds like it. sigmavirus is fixing glance via a separate bug: https://bugs.launchpad.net/ubuntu/+source/glance/+bug/1657459
[22:33] Launchpad bug 1657459 in glance (Ubuntu) "WebOb>=1.2.3 requirement for Glance will lead to 0 bytes backing image files on OpenStack Newton, although the image file sent to the python client does not have 0 bytes" [High,Triaged]
[22:34] EmilienM, it doesn't look like anyone's working on nova though :(
[22:34] EmilienM, there's a thread on the openstack-dev ML
[23:10] well, check the manifests; if it does not have upstart in the manifest then no, it does not have upstart
[23:11] also apt-cache rdepends will tell you what reverse depends on it
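
For reference, a quick sketch of how one might verify this on a given machine; nothing here is specific to the systems discussed above, and the commands are standard ps/dpkg/apt-cache usage:

    # What is actually running as PID 1 right now?
    ps -p 1 -o comm=                     # prints "systemd" on a 16.04 server, "init" on upstart-based releases

    # Is the upstart package even installed?
    dpkg -l upstart

    # And what installed packages still depend on it
    apt-cache rdepends --installed upstart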