[00:18] nacc: ah, interesting...I'll take a look....they're the ones that usually give us the patches
[00:18] nacc: which CVE is this? did you take the regression patch into consideration?
[00:19] mdeslaur: yeah it's the other one
[00:19] debian/patches/CVE-2016-2513.patch
[00:19] The password hasher in contrib/auth/hashers.py in Django before 1.8.10 and 1.9.x before 1.9.3 allows remote attackers to enumerate users via a timing attack involving login requests. (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-2513)
[00:19] fancy!
[00:20] mdeslaur: the patch in our package doesn't match https://github.com/django/django/commit/67b46ba7016da2d259c1ecc7d666d11f5e1cfaab
[00:21] mdeslaur: oh i'm really sorry!
[00:21] mdeslaur: i was looking at the master commit, not the stable-1.8.x commit
[00:21] https://github.com/django/django/commit/f4e6e02f7713a6924d16540be279909ff4091eb6
[00:21] still a different sha, though?
[00:21] ah! ok
[00:22] well, a different sha happens often when we get prerelease patches
[00:22] makes sense
[00:22] i will try and verify the upstream patch is identical to what we took
[00:23] I just looked at the diff between the two, there are a few doc changes, but nothing important
[00:23] mdeslaur: ok, thanks!
[00:23] and sorry again for the noise!
=== _salem is now known as salem_
[00:24] nacc: np! I'd rather we double check than to have a broken patch we didn't notice, so thanks!
[00:24] mdeslaur: np, good side-effect of our merge process :)
=== hypera1r is now known as hyperair
=== salem_ is now known as _salem
[04:58] good morning
[05:25] hi, can somebody please close https://bugs.launchpad.net/ubuntu/+source/lzlib/+bug/598691
[05:25] Launchpad bug 598691 in lzlib (Ubuntu) "Incorrect symbolic link" [Undecided,New]
[05:25] this has been fixed years ago already
[05:26] dnl: sure, I'll look at it - thanks for reporting
[05:27] great, thanks :)
[05:35] cpaelzer: thanks
[06:10] Good morning
[06:25] o/ pitti, how are you? :)
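[Editor's note on the CVE-2016-2513 discussion above: the upstream fix makes a login attempt cost roughly the same time whether or not the username exists, so response timing no longer leaks which accounts are real. A minimal sketch of that idea follows; this is not Django's actual code — the user table, hasher parameters, and function names are invented for illustration.]

```python
import hashlib
import hmac
import os

def _hash(password: str, salt: bytes) -> bytes:
    # PBKDF2 stand-in for Django's configurable password hasher.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 10_000)

# Toy user store: username -> (salt, stored hash).
USERS = {"alice": (b"salt1234", _hash("secret", b"salt1234"))}

# Precomputed dummy credentials, hashed when the user is unknown.
_DUMMY_SALT = os.urandom(8)

def authenticate(username: str, password: str) -> bool:
    record = USERS.get(username)
    if record is None:
        # Run the hasher anyway, so a nonexistent user costs about the
        # same time as a wrong password -- the essence of the fix.
        _hash(password, _DUMMY_SALT)
        return False
    salt, stored = record
    # Constant-time comparison of the digests.
    return hmac.compare_digest(_hash(password, salt), stored)
```

Without the dummy-hash branch, a request for an unknown user returns almost instantly, which is exactly the timing signal the CVE describes.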
[06:28] slangasek: nova/s390x holds up too many things; I went back to force-badtesting it, but this time only for the current version in y; so it won't apply to new nova uploads
[06:28] (just a FYI)
[07:58] pitti, If you are able can you cast an eye over this SRU please - https://bugs.launchpad.net/ubuntu-mate/+bug/1581168
[07:59] Launchpad bug 1581168 in ubuntu-mate "SRU: GTK3 scrollbars in Radiant-MATE not styled like GTK2" [High,Fix committed]
[07:59] I've completed the testing and tagged appropriately.
[08:06] flexiondotorg: we normally let SRUs mature for 7 days before we release them; are you absolutely sure that this does not break anything?
[08:11] pitti, Yes, I am. I've been running those packages for a couple of months now.
[08:11] okay
[08:11] And some of the fixes originated from Ubuntu MATE and are already SRUd in the Ubuntu themes ;-)
[08:11] released
[08:12] Many thanks!
[10:20] Hello everyone! Seeking sponsorship to -proposed for this xserver-xorg-video-intel xenial SRU, https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-intel/+bug/1568604
[10:20] Launchpad bug 1568604 in X.Org X server "Mouse cursor lost when unlocking with Intel graphics" [Medium,Confirmed]
[10:25] tjaalton: ^
[10:26] I think I've seen the mouse cursor disappearing too, FWIW, so it would be nice to have this fixed. A wholesale new upstream snapshot in a package that is critical for maybe 50% of Ubuntu users doesn't exactly seem "Regression potential here seems minimal." though.
[10:28] rbasak, bluesabre: I suspect tjaalton is on holiday. Can you bisect the driver to see what commit solves the problem, please? That would make the SRU much more self-contained.
[10:29] tseliot: Yes, I'll work with others to try to find the fix(es)
[10:30] bluesabre: when you're done with that, I'll sponsor the upload. Thanks
[11:15] All new ARM builds (armel, armhf, arm64) created as of 10:59 UTC today will be dispatched to arm64 VMs on scalingstack.
[11:16] Just in case this causes any issues (hopefully not).
[11:21] rbasak, Can you cast an eye over https://bugs.launchpad.net/ubuntu/+bug/1602270 please
[11:21] Launchpad bug 1602270 in Ubuntu "[needs-packaging] mate-hud" [Wishlist,New]
[11:21] rbasak, Here is the new .dsc - https://launchpad.net/~ubuntu-mate-dev/+archive/ubuntu/crazy-mate/+files/mate-hud_16.10.0-1~yakkety1.2.dsc
[11:24] Also this means building armhf on 4.2 kernels rather than 3.2.
=== hikiko is now known as hikiko|ln
=== hikiko|ln is now known as hikiko
=== _salem is now known as salem_
[14:44] hm the xenial binutils update broke linux perf
[14:45] though maybe that will fix itself when the new linux comes out of proposed
[14:45] how did binutils make it out of proposed without a complete transition?
[14:45] in particular the libbfd soname change
[14:46] pitti, around?
[14:48] pitti, i'd really appreciate your thoughts at https://bugs.launchpad.net/juju-core/+bug/1602192 . systemd's -.mount job (mount /) is behaving oddly sometimes.
[14:48] Launchpad bug 1602192 in juju-core "deploy 30 nodes on lxd, machines never leave pending" [Critical,In progress]
[14:54] smoser: hey
[14:54] smoser: queueing (team meeting in a few minutes)
[14:55] thanks
[14:55] pitti, you want me to try to get you a system to look at ?
[14:55] my script fairly easily reproduces and cleans up after itself. you just need lxd
[14:56] smoser: reproduction script sounds perfect
[14:57] smoser: is zfs relevant at all?
[14:57] I have lxd set up on my laptop (but on my normal btrfs file system)
[15:00] zfs possibly is relevant.
[15:00] i can get you an instance where i set it up if you want.
[15:00] smoser: I started the script on my laptop
[15:00] yeah, i ran to 134 on my laptop or something
[15:00] if it doesn't reproduce that way, I'll try a vm with a zfs pool
[15:00] i think it ended up exhausting IP addresses on the range.
[15:00] smoser: sure, if that's easy for you that can never hurt
[15:01] do you know if you have access to server stack ?
[15:01] smoser: oh, you mean it doesn't create them serially, but all 134 were running in parallel?
[15:01] that's the easiest thing for me
[15:01] smoser: I've heard about a lot of *Stack, but not this one; example IP?
[15:01] 10.245.162.60
[15:01] see if you can reach that over vpn
[15:01] semiosis: yes, I do
[15:01] err, smoser
[15:02] i can try on canonistack if you can't get there, just everything slow on canonistack
[15:02] smoser: no, seems fine
[15:02] k. i'll get you in there then
[15:07] x-013 failed to boot. keeping x-013.
[15:08] ● -.mount loaded failed failed /
[15:08] smoser: so, yep, can reproduce, unrelated to zfs
[15:27] pitti: nova> ack
[15:39] smoser: I left some initial notes in the bug
[15:39] needs some research
[15:43] pitti, thanks
[15:48] Jul 14 15:47:42 x-013 systemd[1]: inotify_init1() failed: Too many open files
[15:48] smoser: haha
[15:49] smoser: curiously with plain LXC it *also* fails in the 13th container
[15:51] pitti: Yakkety isn't open for translation yet. Is that intentional?
[15:53] GunnarHj: only in the sense of "known", not "desirable"
[15:54] pitti, 13 is an unlucky number.
[15:54] for sure
[15:54] it is very odd that 13 is so common
[15:54] but I have some handles on that
[15:55] smoser: you mean it fails on the 13th for other people too?
[15:55] if we have a "1024 open files" limit somewhere and every container opens some 80 files, then it would be quite plausible
[15:56] anyway, testing the other stuff first, the inotify errors might be a red herring
[15:56] it did once fail for me on 13th.
[15:56] but locally i made it to like 123
[15:56] or something.
[15:56] pitti: Are there any obstacles, or is it just about finding the time to do it?
[15:56] but yeah, open file handles could be possible.
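[Editor's note: the inotify_init1() failure above is typically not the file-descriptor ulimit but the kernel's per-user inotify-instance cap, which all containers running under the same uid share; each systemd instance consumes a few inotify instances, so a few dozen containers can exhaust the default. A small sketch for inspecting the relevant knobs — the helper name is mine, and it assumes a Linux /proc; it was not part of the actual debugging session.]

```python
from pathlib import Path

def inotify_limits() -> dict:
    """Read the kernel-wide inotify limits from /proc/sys/fs/inotify.
    Returns None for a knob if the path is absent (e.g. non-Linux)."""
    base = Path("/proc/sys/fs/inotify")
    knobs = ("max_user_instances", "max_user_watches", "max_queued_events")
    return {k: int((base / k).read_text()) if (base / k).exists() else None
            for k in knobs}

print(inotify_limits())
```

Raising fs.inotify.max_user_instances via sysctl is one common workaround when many containers share a host.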
[15:57] the original bug opener said 13 and i have definitely seen 13
[15:57] GunnarHj: TBH I'm not very familiar with the process; that's still somewhere between wgrant and dpm
[15:58] pitti: Ok, then I'd better ping them about it. Thanks!
[15:59] pitti, rharper suggests the libvirt service file does something to stop limits on processes and such
[16:04] rharper, /lib/systemd/system/lxd.service has similar to /lib/systemd/system/libvirt-bin.service
[16:06] smoser: ok, there is a LimitNOFILE=65535 that can be set
[16:10] yeah, lxd.service has that at infinity
[16:10] * smoser laughs at pinging someone by saying that.
[16:11] haha
[16:13] heh
[16:17] tyhicks: much thanks for getting the fixed ecryptfs-utils out so quickly! i'm very happy this is making it onto the 16.04.1 ISO
[16:18] jderose: and a big thanks to you for the patch :)
[16:18] tyhicks: well, i should have followed up long ago when i first encountered this, just got too busy with other stuff. better late than never though :)
[16:21] rbasak: so it's not so trivial as adding the needs-root restriction, as only one specific test needs root (and other tests fail if they have root)
[16:22] nacc: :-(
[16:22] nacc: there's an example in the juju-core dep8 test of how to drop root, if that helps.
[16:22] rbasak: ok, i'll take a look
[16:22] (IIRC there were a couple of gotchas)
[16:22] I'm not sure what the current state of the Juju packaging is. It should be in juju-core in Trusty I think. If not, Wily.
[16:44] rbasak: Hi Robie, any news on the language packageset?
[16:49] Yes, I know I'm not allowed on here and will immediately leave. But, this is an error ..
[16:49] 2 not fully installed or removed.
[16:49] After this operation, 0 B of additional disk space will be used.
[16:49] Setting up mysql-server-5.7 (5.7.13-0ubuntu3) ...
[16:49] Renaming removed key_buffer and myisam-recover options (if present)
[16:49] sed: can't read /etc/mysql/my.cnf.migrated: No such file or directory
[16:49] dpkg: error processing package mysql-server-5.7 (--configure):
[16:49] subprocess installed post-installation script returned error exit status 2
[16:49] dpkg: dependency problems prevent configuration of mysql-server:
[16:49] mysql-server depends on mysql-server-5.7; however:
[16:49] Package mysql-server-5.7 is not configured yet.
[16:49] dpkg: error processing package mysql-server (--configure):
[16:49] dependency problems - leaving unconfigured
[16:49] No apport report written because the error message indicates it's a follow-up error from a previous failure.
[16:51] strange.
[16:51] that seems to be a 16.10 issue ( rbasak --^ )
[16:52] He's gone
[16:52] That was an oversight. It's now fixed in ubuntu4.
[16:52] (probably not in the release pocket yet)
[16:53] rbasak: yeah, the 'strange.' was more directed at the showing up to pastebomb the channel and then leave
[16:53] Ah
[17:37] Hey old friends. I am having issues downloading from cloud-images.ubuntu.com ... anybody know a place to ping the admins on IRC?
[17:41] smoser: --^ did you say it was having issues?
[17:43] from traceroutes around the net.. looks to be saturated
[17:43] 16 SOURCE-MANA.edge5.London1.Level3.net (2001:1900:5:2:2::131a) 164.646 ms 166.219 ms 164.621 ms
[17:43] 17 cloud-images-ubuntu-com.sawo.canonical.com (2001:67c:1360:8001:ffff:ffff:ffff:fffe) 530.27 ms 584.155 ms 667.539 ms
[17:44] SpamapS: yeah, that's what i've heard, it's just bogged down right now (not 100% on it)
[17:44] SpamapS: i believe the right folks have been notified
[17:45] I wonder if that's due to the fact that it's hosting all the vagrant boxes now
[17:45] so many laptops :)
[17:49] SpamapS, i opened an rt.
[17:50] yeah, i think its just saturated.
[17:52] smoser: k, thanks
[17:57] SpamapS, can you easily check from europe
[17:57] i dont have a system there that i can test easily
[17:57] i wondered if its just the link over the ocean
[18:07] smoser: yeah I can actually spin up a vm in london.. hang on
[18:09] from lcy01 (same datacenter) i get 40M/s
[18:09] :)
[18:10] I don't think softlayer's london DC is the same one.. but it might be
[18:12] oh here, I have Amsterdam too
[18:12] * SpamapS spins that up
[18:23] smoser: from a London VM...
[18:23] 12 canonical-3.edge1.lon003.pnap.net (212.118.242.74) 9.896 ms 8.517 ms 7.827 ms
[18:23] 13 cloud-images-ubuntu-com.sawo.canonical.com (91.189.88.141) 7.915 ms 9.270 ms *
[18:23] so yeah, probably just the edge router that's saturated
[18:23] * SpamapS trying AMS
[18:23] wget ?
[18:23] wget http://cloud-images.ubuntu.com/daily/server/xenial/current/xenial-server-cloudimg-amd64-root.tar.xz -O /dev/null
[18:24] smoser: same, 40MB/s
[18:24] it's possible that's the same DC
[18:24] actually, funny story
[18:24] it bounces through amsterdam
[18:24] so not same DC
[18:24] http://paste.ubuntu.com/19393128/
[18:27] its that little pond that sits between us and london
[18:27] wonder if its related to brexit
[18:27] haha
[18:27] London exits the EU, and the internet
[18:28] SpamapS, i'm getting 6M/s now here.
[18:28] 100%[==========================================================================================================================================================================>] 147,791,640 27.9MB/s in 6.5s
[18:28] definitely improved.
[18:29] smoser: oh actually yes
[18:29] mine sped up and finished
[18:29] overall it was 144kB/s.. but that's 1 hour of 15kB/s averaged in
[18:40] jamespage: hi, looking at ceph in the NEW queue... why are we generating -dbg packages in the archive for this?
[18:42] jamespage: also, some lintian errors; not sure if these are regressions, but they're problematic: E: ceph-mon: package-installs-python-bytecode usr/lib/python2.7/dist-packages/ceph_rest_api.pyo
=== salem_ is now known as _salem
[18:50] jamespage: have checked, and the python errors are a regression. not a blocker for NEW because it doesn't impact your binary splitting, but a pretty bad bug that ought to be fixed
[18:51] jamespage: and accepted, including the -dbg packages, which we really do not need any more of in the archive
[18:52] heh aren't those things a gig each?
[18:53] my local mirror has nine gigs of ceph *dbg* packages.. it would be nice to get that back :)
[18:55] * smoser left mirror long ago due to such things. caching proxy now just keeps what it needs.
[18:57] caching proxy always has the tradeoff that a cache miss is slow
=== cmagina is now known as cmagina-errand
[19:28] slangasek, ack - thanks for the feedback
[19:29] I'd not spotted the pyo
=== _salem is now known as salem_
=== cmagina-errand is now known as cmagina
=== salem_ is now known as _salem
=== _salem is now known as salem_
=== salem_ is now known as _salem
=== _salem is now known as salem_
=== salem_ is now known as _salem
=== _salem is now known as salem_
[20:42] GunnarHj: sorry. I'll try and sort it out tomorrow as a priority.
=== _salem is now known as salem_
=== salem_ is now known as _salem
[22:59] rbasak: Great. No urgency, really, just wondered.