[00:18] <mdeslaur> nacc: ah, interesting...I'll take a look....they're the ones that usually give us the patches
[00:18] <mdeslaur> nacc: which CVE is this? did you take the regression patch into consideration?
[00:19] <nacc> mdeslaur: yeah it's the other one
[00:19] <nacc> debian/patches/CVE-2016-2513.patch
[00:19] <nacc> fancy!
[00:20] <nacc> mdeslaur: the patch in our package doesn't match https://github.com/django/django/commit/67b46ba7016da2d259c1ecc7d666d11f5e1cfaab
[00:21] <nacc> mdeslaur: oh i'm really sorry!
[00:21] <nacc> mdeslaur: i was looking at the master commit, not the stable-1.8.x commit
[00:21] <nacc> https://github.com/django/django/commit/f4e6e02f7713a6924d16540be279909ff4091eb6
[00:21] <nacc> still a different sha, though?
[00:21] <mdeslaur> ah! ok
[00:22] <mdeslaur> well, a different sha happens often when we get prerelease patches
[00:22] <nacc> makes sense
[00:22] <nacc> i will try and verify the upstream patch is identical to what we took
[00:23] <mdeslaur> I just looked at the diff between the two, there are a few doc changes, but nothing important
[00:23] <nacc> mdeslaur: ok, thanks!
[00:23] <nacc> and sorry again for the noise!
[00:24] <mdeslaur> nacc: np! I'd rather we double check than to have a broken patch we didn't notice, so thanks!
[00:24] <nacc> mdeslaur: np, good side-effect of our merge process :)
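The verification step nacc describes (checking that a packaged patch matches the upstream commit, ignoring doc churn) can be scripted. A minimal sketch: the heredoc below stands in for the real files (in practice you would compare debian/patches/CVE-2016-2513.patch against the stable-branch commit, which GitHub serves as a plain patch when `.patch` is appended to the commit URL); `filterdiff` is from the patchutils package.

```shell
#!/bin/sh
# Compare only the diff hunks of two patches so that differing git shas,
# headers, or doc-only changes don't cause a spurious mismatch.
# The patch content below is a placeholder, not the real CVE fix.
cat > /tmp/packaged.patch <<'EOF'
--- a/django/utils/http.py
+++ b/django/utils/http.py
@@ -1,3 +1,3 @@
-old line
+new line
 context
EOF
cp /tmp/packaged.patch /tmp/upstream.patch

# filterdiff --clean strips everything except the hunks themselves;
# fall back to a plain copy if patchutils is not installed.
if command -v filterdiff >/dev/null 2>&1; then
    filterdiff --clean /tmp/upstream.patch > /tmp/a
    filterdiff --clean /tmp/packaged.patch > /tmp/b
else
    cp /tmp/upstream.patch /tmp/a
    cp /tmp/packaged.patch /tmp/b
fi
diff -u /tmp/a /tmp/b && echo "code hunks identical"
```

This mirrors what mdeslaur did by hand at [00:23]: diff the two and confirm the remaining differences are doc-only.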
[04:58] <cpaelzer> good morning
[05:25] <dnl> hi, can somebody please close https://bugs.launchpad.net/ubuntu/+source/lzlib/+bug/598691
[05:25] <dnl> this has been fixed years ago already
[05:26] <cpaelzer> dnl: sure, I'll look at it - thanks for reporting
[05:27] <dnl> great, thanks :)
[05:35] <dnl> cpaelzer: thanks
[06:10] <pitti> Good morning
[06:25] <tsimonq2> o/ pitti, how are you? :)
[06:28] <pitti> slangasek: nova/s390x holds up too many things; I went back to force-badtesting it, but this time only for the current version in y; so it won't apply to new nova uploads
[06:28] <pitti> (just a FYI)
[07:58] <flexiondotorg> pitti, If you are able can you cast an eye over this SRU please - https://bugs.launchpad.net/ubuntu-mate/+bug/1581168
[07:59] <flexiondotorg> I've completed the testing and tagged appropriately.
[08:06] <pitti> flexiondotorg: we normally let SRUs mature for 7 days before we release them; are you absolutely sure that this does not break anything?
[08:11] <flexiondotorg> pitti, Yes, I am. I've been running that package for a couple of months now.
[08:11] <pitti> okay
[08:11] <flexiondotorg> And some of the fixes originated from the Ubuntu MATE themes and are already SRUd in the Ubuntu themes ;-)
[08:11] <pitti> released
[08:12] <flexiondotorg> Many thanks!
[10:20] <bluesabre> Hello everyone! Seeking sponsorship to -proposed for this xserver-xorg-video-intel xenial SRU, https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-intel/+bug/1568604
[10:25] <rbasak> tjaalton: ^
[10:26] <rbasak> I think I've seen the mouse cursor disappearing too, FWIW, so it would be nice to have this fixed. A wholesale new upstream snapshot in a package that is critical for maybe 50% of Ubuntu users doesn't exactly seem "Regression potential here seems minimal." though.
[10:28] <tseliot> rbasak, bluesabre: I suspect tjaalton is on holiday. Can you bisect the driver to see what commit solves the problem, please? That would make the SRU much more self-contained.
[10:29] <bluesabre> tseliot: Yes, I'll work with others to try to find the fix(es)
[10:30] <tseliot> bluesabre: when you're done with that, I'll sponsor the upload. Thanks
[11:15] <cjwatson> All new ARM builds (armel, armhf, arm64) created as of 10:59 UTC today will be dispatched to arm64 VMs on scalingstack.
[11:16] <cjwatson> Just in case this causes any issues (hopefully not).
[11:21] <flexiondotorg> rbasak, Can you cast an eye over https://bugs.launchpad.net/ubuntu/+bug/1602270 please
[11:21] <flexiondotorg> rbasak, Here is the new .dsc - https://launchpad.net/~ubuntu-mate-dev/+archive/ubuntu/crazy-mate/+files/mate-hud_16.10.0-1~yakkety1.2.dsc
[11:24] <cjwatson> Also this means building armhf on 4.2 kernels rather than 3.2.
[14:44] <jtaylor> hm the xenial binutils update broke linux perf
[14:45] <jtaylor> though maybe that will fix itself when the new linux comes out of proposed
[14:45] <jtaylor> how did binutils make it out of proposed without a complete transition?
[14:45] <jtaylor> in particular the libbfd soname change
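The breakage jtaylor describes is the classic soname-transition failure: perf links against a versioned libbfd soname, so when the binutils update ships a new soname without rebuilding reverse dependencies, the dynamic linker can no longer resolve the old one. A sketch of how to inspect the sonames a binary expects (`/bin/sh` stands in here for the real perf binary; the perf path in the comment is illustrative):

```shell
# ldd lists the shared-library sonames a binary was linked against.
ldd /bin/sh
# For the actual case in the log, something like:
#   ldd /usr/lib/linux-tools/*/perf | grep bfd
# would show perf expecting a libbfd-<old-version>.so that the binutils
# update removed, which is why a rebuilt linux package fixes it.
```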
[14:46] <smoser> pitti, around?
[14:48] <smoser> pitti, i'd really appreciate your thoughts at https://bugs.launchpad.net/juju-core/+bug/1602192 . systemd's -.mount job (mount /) is behaving oddly sometimes.
[14:54] <pitti> smoser: hey
[14:54] <pitti> smoser: queueing (team meeting in a few minutes)
[14:55] <smoser> thanks
[14:55] <smoser> pitti, you want me to try to get you a system to look at ?
[14:55] <smoser> my script fairly easily reproduces and cleans up after itself. you just need lxd
[14:56] <pitti> smoser: reproduction script sounds perfect
[14:57] <pitti> smoser: is zfs relevant at all?
[14:57] <pitti> I have lxd set up on my laptop (but on my normal btrfs file system)
[15:00] <smoser> zfs possibly is relevant.
[15:00] <smoser> i can get you an instance where i set it up if you want.
[15:00] <pitti> smoser: I started the script on my laptop
[15:00] <smoser> yeah, i ran to 134 on my laptop or something
[15:00] <pitti> if it doesn't reproduce that way, I'll try a vm with a zfs pool
[15:00] <smoser> i think it ended up exhausting IP addresses on the range.
[15:00] <pitti> smoser: sure, if that's easy for you that can never hurt
[15:01] <smoser> do you know if you have access to server stack ?
[15:01] <pitti> smoser: oh, you mean it doesn't create them serially, but all 134 were running in parallel?
[15:01] <smoser> thats the easiest thing for me
[15:01] <pitti> smoser: I've heard about a lot of *Stack, but not this one; example IP?
[15:01] <smoser>  10.245.162.60
[15:01] <smoser> see if you can reach that over vpn
[15:01] <pitti> semiosis: yes, I do
[15:01] <pitti> err, smoser
[15:02] <smoser> i can try on canonistack if you can't get there, just everything slow on canonistack
[15:02] <pitti> smoser: no, seems fine
[15:02] <smoser> k. i'll get you in there then
[15:07] <pitti> x-013 failed to boot. keeping x-013.
[15:08] <pitti> ● -.mount                       loaded failed failed /
[15:08] <pitti> smoser: so, yep, can reproduce, unrelated to zfs
[15:27] <slangasek> pitti: nova> ack
[15:39] <pitti> smoser: I left some initial notes in the bug
[15:39] <pitti> needs some research
[15:43] <smoser> pitti, thanks
[15:48] <pitti> Jul 14 15:47:42 x-013 systemd[1]: inotify_init1() failed: Too many open files
[15:48] <pitti> smoser: haha
[15:49] <pitti> smoser: curiously with plain LXC it *also* fails in the 13th container
[15:51] <GunnarHj> pitti: Yakkety isn't open for translation yet. Is that intentional?
[15:53] <pitti> GunnarHj: only in the sense of "known", not "desirable"
[15:54] <smoser> pitti, 13 is an unlucky number.
[15:54] <pitti> for sure
[15:54] <smoser> it is very odd that 13 is so common
[15:54] <pitti> but I have some handles on that
[15:55] <pitti> smoser: you mean it fails on the 13th for other people too?
[15:55] <pitti> if we have a "1024 open files" limit somewhere and every container opens some 80 files, then it would be quite plausible
[15:56] <pitti> anyway, testing the other stuff first, the inotify errors might be a red herring
[15:56] <smoser> it did once fail for me on 13th.
[15:56] <smoser> but locally i made it to like 123
[15:56] <smoser> or something.
[15:56] <GunnarHj> pitti: Are there any obstacles, or is it just about finding the time to do it?
[15:56] <smoser> but yeah, open file handles could be possible.
[15:57] <smoser> the original bug opener said 13 and i have definitely seen 13
[15:57] <pitti> GunnarHj: TBH I'm not very familiar with the process; that's still somewhere between wgrant and dpm
[15:58] <GunnarHj> pitti: Ok, then I'd better ping them about it. Thanks!
[15:59] <smoser> pitti, rharper suggests libvirt service file does something to stop limits on processes and such
[16:04] <smoser> rharper, /lib/systemd/system/lxd.service has similar to /lib/systemd/system/libvirt-bin.service
[16:06] <rharper> smoser: ok, there is a LimitNOFILE=65535 that can be set
[16:10] <smoser> yeah, lxd.service has that at infinity
[16:10]  * smoser laughs at pinging someone by saying that.
[16:11] <rharper> haha
[16:13] <teward> heh
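The "1024 open files" hypothesis from [15:55] is straightforward to check: each process's effective limits are visible in /proc, and systemd units raise the ceiling with `LimitNOFILE=` (lxd.service sets it to infinity, per the log). A quick sketch:

```shell
# Show the open-file limit the current process actually received.
grep -i 'open files' /proc/self/limits
# The equivalent query for a unit would be:
#   systemctl show lxd.service -p LimitNOFILE
# At a soft limit of 1024, ~80 descriptors per container makes failure
# around the 13th container plausible (13 * 80 = 1040 > 1024).
```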
[16:17] <jderose> tyhicks: much thanks for getting the fixed ecryptfs-utils out so quickly! i'm very happy this is making it onto the 16.04.1 ISO
[16:18] <tyhicks> jderose: and a big thanks to you for the patch :)
[16:18] <jderose> tyhicks: well, i should have followed up long ago when i first encountered this, just got too busy with other stuff. better late than never though :)
[16:21] <nacc> rbasak: so it's not so trivial as adding the needs-root restriction, as only one specific test needs root (and other tests fail if they have root)
[16:22] <rbasak> nacc: :-(
[16:22] <rbasak> nacc: there's an example in the juju-core dep8 test of how to drop root, if that helps.
[16:22] <nacc> rbasak: ok, i'll take a look
[16:22] <rbasak> (IIRC there were a couple of gotchas)
[16:22] <rbasak> I'm not sure what the current state of the Juju packaging is. It should be in juju-core in Trusty I think. If not, Wily.
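The split nacc needs can be expressed directly in autopkgtest's debian/tests/control: restrictions apply per stanza, so only the stanza containing the root-only test declares `needs-root`, while the others keep running unprivileged. A sketch with hypothetical test names:

```
Tests: root-required-test
Depends: @
Restrictions: needs-root

Tests: unprivileged-tests
Depends: @
```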
[16:44] <GunnarHj> rbasak: Hi Robie, any news on the language packageset?
[16:49] <phillw> Yes, I know I'm not allowed on here and will immediately leave. But, this is an error ..
[16:49] <phillw> 2 not fully installed or removed.
[16:49] <phillw> After this operation, 0 B of additional disk space will be used.
[16:49] <phillw> Setting up mysql-server-5.7 (5.7.13-0ubuntu3) ...
[16:49] <phillw> Renaming removed key_buffer and myisam-recover options (if present)
[16:49] <phillw> sed: can't read /etc/mysql/my.cnf.migrated: No such file or directory
[16:49] <phillw> dpkg: error processing package mysql-server-5.7 (--configure):
[16:49] <phillw>  subprocess installed post-installation script returned error exit status 2
[16:49] <phillw> dpkg: dependency problems prevent configuration of mysql-server:
[16:49] <phillw>  mysql-server depends on mysql-server-5.7; however:
[16:49] <phillw>   Package mysql-server-5.7 is not configured yet.
[16:49] <phillw> dpkg: error processing package mysql-server (--configure):
[16:49] <phillw>  dependency problems - leaving unconfigured
[16:49] <phillw> No apport report written because the error message indicates it's a follow-up error from a previous failure.
[16:51] <nacc> strange.
[16:51] <nacc> that seems to be a 16.10 issue ( rbasak --^ )
[16:52] <rbasak> He's gone
[16:52] <rbasak> That was an oversight. It's now fixed in ubuntu4.
[16:52] <rbasak> (probably not in the release pocket yet)
[16:53] <nacc> rbasak: yeah, the 'strange.' was more directed at the showing up to pastebomb the channel and then to leave
[16:53] <rbasak> Ah
[17:37] <SpamapS> Hey old friends. I am having issues downloading from cloud-images.ubuntu.com ... anybody know a place to ping the admins on IRC?
[17:41] <nacc> smoser: --^ did you say it was having issues?
[17:43] <SpamapS> from traceroutes around the net.. looks to be saturated
[17:43] <SpamapS> 16  SOURCE-MANA.edge5.London1.Level3.net (2001:1900:5:2:2::131a)  164.646 ms  166.219 ms  164.621 ms
[17:43] <SpamapS> 17  cloud-images-ubuntu-com.sawo.canonical.com (2001:67c:1360:8001:ffff:ffff:ffff:fffe)  530.27 ms  584.155 ms  667.539 ms
[17:44] <nacc> SpamapS: yeah, that's what i've heard, it's just bogged down right now (not 100% on it)
[17:44] <nacc> SpamapS: i believe the right folks have been notified
[17:45] <SpamapS> I wonder if that's due to the fact that it's hosting all the vagrant boxes now
[17:45] <SpamapS> so many laptops :)
[17:49] <smoser> SpamapS, i opened an rt.
[17:50] <smoser> yeah, i think its just saturated.
[17:52] <SpamapS> smoser: k, thanks
[17:57] <smoser> SpamapS, can you easily check from europe
[17:57] <smoser> i dont have a system there that i can test easily
[17:57] <smoser> i wondered if its just the link over the ocean
[18:07] <SpamapS> smoser: yeah I can actually spin up a vm in london.. hang on
[18:09] <smoser> from lcy01 (same datacenter) i get 40M/s
[18:09] <smoser> :)
[18:10] <SpamapS> I don't think softlayer's london DC is the same one.. but it might be
[18:12] <SpamapS> oh here, I have Amsterdam too
[18:12]  * SpamapS spins that up
[18:23] <SpamapS> smoser: from a London VM...
[18:23] <SpamapS> 12  canonical-3.edge1.lon003.pnap.net (212.118.242.74)  9.896 ms  8.517 ms  7.827 ms
[18:23] <SpamapS> 13  cloud-images-ubuntu-com.sawo.canonical.com (91.189.88.141)  7.915 ms  9.270 ms *
[18:23] <SpamapS> so yeah, probably just the edge router that's saturated
[18:23]  * SpamapS trying AMS
[18:23] <smoser> wget ?
[18:23] <smoser> wget http://cloud-images.ubuntu.com/daily/server/xenial/current/xenial-server-cloudimg-amd64-root.tar.xz -O /dev/null
[18:24] <SpamapS> smoser: same, 40MB/s
[18:24] <SpamapS> it's possible that's the same DC
[18:24] <SpamapS> actually, funny story
[18:24] <SpamapS> it bounces through amsterdam
[18:24] <SpamapS> so not same DC
[18:24] <SpamapS> http://paste.ubuntu.com/19393128/
[18:27] <smoser> its that little pond that sits between us and london
[18:27] <smoser> wonder if its related to brexit
[18:27] <SpamapS> haha
[18:27] <SpamapS> London exits the EU, and the internet
[18:28] <smoser> SpamapS, i'm getting 6M/s now here.
[18:28] <SpamapS> 100%[
[18:28] <smoser> definitely improved.
[18:29] <SpamapS> smoser: oh actually yes
[18:29] <SpamapS> mine sped up and finished
[18:29] <SpamapS> overall it was 144kB/s.. but that's 1 hour of 15kB/s averaged in
[18:40] <slangasek> jamespage: hi, looking at ceph in the NEW queue... why are we generating -dbg packages in the archive for this?
[18:42] <slangasek> jamespage: also, some lintian errors; not sure if these are regressions, but they're problematic: E: ceph-mon: package-installs-python-bytecode usr/lib/python2.7/dist-packages/ceph_rest_api.pyo
[18:50] <slangasek> jamespage: have checked, and the python errors are a regression.  not a blocker for NEW because it doesn't impact your binary splitting, but a pretty bad bug that ought to be fixed
[18:51] <slangasek> jamespage: and accepted, including the -dbg packages, which we really do not need any more of in the archive
[18:52] <sarnold> heh aren't those things a gig each?
[18:53] <sarnold> my local mirror has nine gigs of ceph *dbg* packages.. it would be nice to get that back :)
[18:55]  * smoser left mirror long ago due to such things. caching proxy now just keeps what it needs.
[18:57] <slangasek> caching proxy always has the tradeoff that a cache miss is slow
[19:28] <jamespage> slangasek, ack - thanks for the feedback
[19:29] <jamespage> I'd not spotted the pyo
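The lintian error slangasek quotes means compiled Python bytecode was shipped in the binary package. Spotting it before lintian does is a one-liner over the package contents; the sketch below uses a scratch directory with placeholder files (against a real .deb you would use something like `dpkg -c ceph-mon_*.deb | grep -E '\.py[co]$'`):

```shell
# Placeholder package tree; only the .py source should ship,
# bytecode is generated at install time by the maintainer scripts.
mkdir -p /tmp/pkgdemo/usr/lib/python2.7/dist-packages
touch /tmp/pkgdemo/usr/lib/python2.7/dist-packages/ceph_rest_api.py
touch /tmp/pkgdemo/usr/lib/python2.7/dist-packages/ceph_rest_api.pyo
# Any hits here would be lintian package-installs-python-bytecode errors.
find /tmp/pkgdemo -name '*.py[co]'
```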
[20:42] <rbasak> GunnarHj: sorry. I'll try and sort it out tomorrow as a priority.
[22:59] <GunnarHj> rbasak: Great. No urgency, really, just wondered.