[01:41] <SpamapS> marcoceppi: FYI, a few of us are headed to karaoke in Korea Town later this evening.. :)
[01:41] <SpamapS> marcoceppi: Cafe Brass Monkey. You can pass that along if you want.
[02:38] <Bluefoxicy> I have no idea what i'd file for a bug for this
[02:38] <Bluefoxicy> Install Monodevelop, create new ASP.NET MVC project, it doesn't compile out of the box.
[02:39] <Bluefoxicy> Requires System.Web.Mvc 3.0, which comes installed; this requires a ton of 3.0 libmono stuff, which doesn't come installed.  4 hours of futzing with nuget packages later, it still won't work.  :|
[02:39] <Bluefoxicy> at the very least, a new, blank project should work out of the box.
[07:44] <pitti> Good morning
[07:59] <mitya57> spineau, will do now
[08:03] <spineau> mitya57: Good morning
[08:03] <mitya57> Good morning!
[08:04] <mitya57> spineau, also, I'll take the liberty to change the ugly .install.in file to a normal install file using dh-exec
[08:04] <mitya57> (if you don't mind)
[08:04] <spineau> mitya57: oh sure, if there's a more compliant way to do the same thing
[08:05] <mitya57> I'd say nicer rather than more compliant
[08:07] <mitya57> spineau, and also, set -e as a separate line in debian/rules didn't make any sense :)
[08:09] <spineau> mitya57: indeed, I'm reading how dh-exec works
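For context, a dh-exec install file is an ordinary debhelper install file made executable with a dh-exec shebang, so substitution variables are expanded at build time. A minimal hypothetical sketch (the paths here are illustrative, not the actual pyotherside file):

```
#!/usr/bin/dh-exec
# debian/<package>.install -- the file must be executable, and the
# package must Build-Depend on dh-exec for the expansion to happen.
usr/lib/${DEB_HOST_MULTIARCH}/qt5/qml
```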
[08:21] <mitya57> spineau, uploaded to both Debian and Ubuntu
[08:21] <spineau> mitya57: thanks again
[08:21] <mitya57> Yw!
[08:47] <flexiondotorg> Can I request that today's patch pilots take a look at sponsoring some Ubuntu MATE requests that we would like landed for 16.04 Alpha 2?
[08:49] <flexiondotorg> These are in the queue for Ubuntu MATE: https://bugs.launchpad.net/ubuntu/+source/ubuntu-mate-artwork/+bug/1535927 https://bugs.launchpad.net/ubuntu/+source/ubuntu-mate-settings/+bug/1535922 https://bugs.launchpad.net/ubuntu/+source/ubuntu-mate-meta/+bug/1535910
[09:32] <Odd_Bloke> apw: Quick kernel SRU/lifecycle question; when we see an SRU for a release kernel, will there also have been an SRU release for the LTS kernels?
[09:36] <Mirv> flexiondotorg: looking and probably sponsoring the first two, I don't know what needs to be updated in the meta package
[09:36] <flexiondotorg> Mirv, That's great.
[09:36] <flexiondotorg> Mirv, Usually someone from foundations updates the meta package.
[09:38] <Laney> Mirv: download the source package, run ./update, debuild, upload
[09:39] <Mirv> that sounds easy
[09:39] <Mirv> ah, it just runs germinate update
[09:39] <flexiondotorg> Mirv, Yep, that's all that is needed :-)
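Laney's recipe above, spelled out as a hedged sketch (requires ubuntu-dev-tools and upload rights; commands shown for reference, not run here):

```
#   pull-lp-source ubuntu-mate-meta xenial
#   cd ubuntu-mate-meta-*/
#   ./update          # runs germinate against the current seeds
#   dch -i            # record the refresh in debian/changelog
#   debuild -S        # build the new source package
#   dput ubuntu ../ubuntu-mate-meta_*_source.changes
```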
[09:46] <Unit193> Mirv: Thanks!
[09:49] <Mirv> Unit193: you're welcome!
[10:06] <gj|work> I'm going to set up an LXC container to run a desktop based on Trusty. It nearly works out of the box, but there is a problem with the user session management: if I log in -- either via the GUI or via ssh -- there is no session listed by 'loginctl'. Because of this I can't, for example, perform actions requiring authorisation via the desktop. How can I debug (or solve) this?
[10:13] <xnox> doko, thank you for fixing gcc on arm64
[10:22] <ginggs_> doko, yeah thanks! i fired off rebuilds for a couple of things earlier
[10:44] <cyphermox> good morning!
[10:48] <mitya57> spineau, ignore the rejected mail please, I've sorted it out with a ftp master
[10:52] <spineau> mitya57: ok, nice, thank you
[11:07] <flexiondotorg> Mirv, Thanks for your sponsoring :-)
[11:15] <spineau> cyphermox: hello, do you know who can do the universe->main migration for pyotherside? I'd like to unblock the xenial builds
[11:16] <spineau> cyphermox: mitya57 sponsored a new version this morning that passed all the tests. The package is ready for a copy
[11:16] <cyphermox> spineau: ask in #ubuntu-release :)
[11:16] <spineau> cyphermox: ok thanks
[12:13] <Mirv> spineau_afk: you'll need to go through this process https://wiki.ubuntu.com/MainInclusionProcess
[12:13] <Mirv> spineau_afk: ah, I only didn't find a bug because it was already handled, great :)
[12:33] <dasjoe> cjwatson: re: grub2 + zfs: thank you!
[12:35] <cjwatson> dasjoe: np, sorry it took a while
[12:41] <dasjoe> cjwatson: no worries, I'm happy the patches will be in 16.04 so I'll be able to upgrade my servers' rpools :)
[13:09] <LocutusOfBorg> xnox, I just uploaded re2c which should already have the big-endian fix, can you please "syncpackage -s costamagnagianfranco -V 0.16-1 -f re2c" when possible?
[13:10] <spineau> Mirv: indeed, Colin took care of it while I was lunching :)
[13:47] <Odd_Bloke> pitti: Hello! I'm debugging a problem we're seeing in some (partner-specific) cloud images, and have a couple of questions: Was network interface renaming enabled in wily?  And what are the possible (sensible ^_^) ways that it could be disabled?
[13:48] <Odd_Bloke> pitti: (A little background: two wily images in different clouds, one is renaming (and therefore breaking), the other is not renaming; I'm trying to work out why renaming is happening in one and not the other)
[14:44] <pitti> Odd_Bloke: we switched to ifnames in wily, yes; /usr/share/doc/udev/README.Debian.gz shows two ways to disable that (one kernel command line based, one config file based)
[14:45] <pitti> Odd_Bloke: was this an upgrade? we don't change the naming scheme on upgrades; if there's an existing 70-network-names.rules, that'll continue to work
[14:45] <pitti> mid-term that doesn't solve the problem, though, and we should stop hardcoding "eth0" in images (that leads to actual problems)
[14:47] <Odd_Bloke> pitti: No, this is on new instances; and, yeah, we need to work out how we're going to get rid of eth0 stuff.
[14:52] <rharper> Odd_Bloke: one possibility for the different behavior is the type of nic being used; at some point systemd didn't recognize virtio devices as "stable" and would not attempt to apply naming rules (ens3), but things like the emulated e1000 (qemu's default nic) would get renamed
[14:52] <rharper> Odd_Bloke: at some version systemd did start including virtio in the nic renaming
[14:54] <Odd_Bloke> rharper: Aha, really useful note; thanks!
[14:56] <rharper> Odd_Bloke: https://bugs.launchpad.net/ubuntu/+bug/1483457  may have some other useful bits of info as well
[15:00] <rharper> rbasak: I've got a new strongswan package merge;  it still needs the upgrade path changes but I would like you to take a look at what I've got so far, if that's ok;
[15:05] <mitya57> Laney, new keyring uploaded (to Debian)
[15:05] <mitya57> Looking at autopilot-legacy now
[15:10] <Laney> mitya57: ssssssssweeeeeeeeeeetttttttttt
[15:15] <kickinz1> rbasak, ping
[15:20] <mitya57> Laney, https://github.com/sphinx-doc/sphinx/pull/2261 fixes the first issue, but now I get another one
[15:21] <mitya57> Ah, no, that's because of missing build-dep, so after applying that PR it works
[15:22] <mitya57> Laney, I won't add it in Debian until upstream ACKs it, but feel free to cherry-pick that into Ubuntu in the mean time
[15:29] <elopio> doko: ping. I was told you could give us a hand here: https://bugs.launchpad.net/ubuntu/+source/make-dfsg/+bug/1536727
[15:48] <rbasak> slangasek, tdaitx: what's the status of the squid3 merge, please? It's been months! Is this going to get done imminently, or do I need to take it back to make sure it gets done in time for feature freeze?
[16:10] <slangasek> rbasak: I think you should take it back, I don't think we're going to have
[16:10] <slangasek> rbasak: time to get to it
[16:10] <slangasek> rbasak: sorry!
[16:13] <rbasak> slangasek: OK, thanks.
[16:13] <rbasak> tdaitx: please note I'm taking the squid3 merge back.
[16:18] <Laney> mitya57: thanks
[16:18] <Laney> what missing build-dep?
[16:18] <Laney> that's basically the same patch that I haxed up locally for testing and would have PRed if you hadn't responded :P
[16:22] <pitti> kirkland: heh, on snappy (or any r/o root) you don't have much choice about /tmp being a tmpfs :)
[16:23] <pitti> kirkland: interesting point/data about /tmp being slow on cloud deployments, I wasn't aware that this is that bad
[16:23] <kirkland> pitti: well, usually the storage in the cloud is remote to the machine hosting the vm
[16:24] <kirkland> pitti: it's typically a huge disk array or san
[16:24] <pitti> RLY
[16:24] <kirkland> pitti: sure, "EBS" is exactly that
[16:24] <kirkland> pitti: same with OpenStack
[16:24] <pitti> kirkland: so, more tmpfs then :)
[16:24] <kirkland> pitti: azure's I/O is absolutely horribly embarrassingly bad
[16:24] <kirkland> pitti: almost unusable
[16:25] <pitti> kirkland: wow, how do people use DBs and high-load webapps there?
[16:25] <kirkland> pitti: they usually pay extra for expensive ssd-backed instances
[16:25] <kirkland> pitti: or instances that have *local* ssd storage, in an ephemeral volume
[16:25] <kirkland> pitti: however, that volume is usually not protected against failure
[16:25] <kirkland> pitti: it's "ephemeral"
[16:26] <rbasak> rharper: sure, I'll review. Not sure I'll get a chance today though. Is your tree in Launchpad?
[16:26] <Odd_Bloke> (Whilst cursing the CTO who chose Azure ^_^)
[16:26] <rharper> rbasak: yes, it's updated
[16:26] <rbasak> Thanks
[16:26] <rharper> the "merge" branch is origin/new/debian_copy_in_old/debian
[16:26] <rbasak> kirkland: if it's particularly needed for azure, could we use vendor-data there to enable it on a per-cloud basis where it makes sense?
[16:26] <pitti> kirkland: so you said "[threshold] will be 3 GB", so that's planned/done now?
[16:27] <rbasak> kirkland: a more gradual switch like that would be less scary
[16:27] <rharper> and I've placed a build of the current merge in my ppa (  ppa:raharper/merges )
[16:27] <rbasak> rharper: ack
[16:27] <rharper> rbasak: I'm preparing a note to ubuntu-devel for review and comments;  I have some remaining todos, specifically the upgrade path
[16:27] <rbasak> rharper: sounds good!
[16:28] <rbasak> rharper: BTW, I'm gaining some experience reviewing a bunch of people picking up my workflow at once.
[16:28] <kirkland> pitti: well, "would be" according to the rfc/proposal
[16:28] <rharper> rbasak:  =)
[16:28] <rbasak> rharper: I'm starting to think an intermediate review of the "logical" stage might be a good idea. It sounds like you're well past that stage this time though.
[16:29] <rharper> rbasak: I think the logical stage is a *great* place for review
[16:29] <rharper> that's exactly where experience can help with how much to break up or leave
[16:29] <rharper> I am past that but it's definitely the right place for sanity check before rebase
[16:29] <rbasak> rharper: right, but also because if there's any confusion it can be cleared up before rebasing on new/debian to avoid wasted effort.
[16:29] <rharper> =)
[16:29] <rharper> yes
[16:30] <bdmurray> pitti: I did get some work done on the email functionality but it is incomplete / untested.
[16:30] <rharper> I have to say there were many times that I thought I had lost it all but having everything in git with tags/branches made recovering *much* easier
[16:30]  * rbasak wonders if he can unblock others by getting others doing this to sanity check each other
[16:30] <rharper> rbasak:  so thanks! =)
[16:30] <rbasak> rharper: np :)
[16:31] <rharper> rbasak: personally the logical stage is very easy to review with the changelog, so I think it makes a great place to do peer review
[16:31] <rbasak> Note that a tag is effectively a backup in git, unless you destroy your git repo in some more direct way. You can always reset a branch back to a tag.
[16:31] <rharper> yep
[16:31] <pitti> bdmurray: oh, great! btw, I realized that the place we discussed yesterday might not be the best -- we need to remember that we already sent the mail, rather than sending it every time we run britney
[16:31] <rbasak> Also, even if you forget to tag, the reflog can help undo as well; it's just a bit more effort to figure out the exact point you want to restore to.
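rbasak's point can be demonstrated in a scratch repo: a tag is a backup pointer you can reset back to, and the reflog records the old position even without one (paths and messages here are illustrative):

```shell
set -e
# Scratch repo demonstrating recovery from a tag, and the reflog as a fallback.
dir=$(mktemp -d) && cd "$dir"
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name Demo
echo one > file && git add file && git commit -qm "first"
git tag pre-rebase                  # "backup" before risky history rewriting
echo two > file && git commit -qam "second"
git reset -q --hard pre-rebase      # restore the branch from the tag
cat file                            # back to the "first" state: prints "one"
git reflog | head -n 2              # the reflog still records where we were
```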
[16:32] <rharper> I also backed up my git/gitwd at various stages as well =)
[16:32] <pitti> bdmurray: so shuffling this stuff around and sending the mail when downloading a new result would be better
[16:35] <bdmurray> pitti: where does downloading a new result happen?
[16:36] <pitti> bdmurray: "def fetch_one_result" in autopkgtest.py
[16:36] <pitti> bdmurray: that's the only place that adds to self.test_results, the other parts just read it (or init it from the results.cache file)
[16:39] <tsimonq2> hi, I am trying to use backportpackage for the sake of learning how to use it, and I keep getting backportpackage: Error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
[16:39] <tsimonq2> my OpenPGP key is in LP
[16:40] <tsimonq2> so idk what the problem is
[16:40] <cjwatson> that has nothing to do with your key - for some reason your system is unable to verify LP's SSL certificate
[16:40] <tsimonq2> cjwatson: how do I diagnose this?
[16:41] <cjwatson> do you have full output?
[16:41] <cjwatson> including the command you're running
[16:41] <tsimonq2> backportpackage -u ppa:tsimonq2/backports -s xenial -d precise ccid
[16:41] <tsimonq2> backportpackage: Error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
[16:41] <cjwatson> any HTTP proxy in use in your environment?
[16:42] <tsimonq2> nope
[16:43] <cjwatson> what release is the host system?
[16:43] <tsimonq2> xenial
[16:44] <cjwatson> ok, I would suggest trying with something simpler such as lp-shell (in the lptools package), and probably drill down with things like strace from there
[16:45] <tsimonq2> cjwatson: so I installed lptools, what do you suggest I do now?
[16:46] <cjwatson> see if you can reproduce the same thing with lp-shell, and if so, drill down with things like strace
[16:47] <cjwatson> don't have time to advise further right now sorry
[16:47] <tsimonq2> all ok
[16:47] <tsimonq2> *ahh
[16:47] <tsimonq2> thanks
[16:47] <cjwatson> the LP API itself is working fine
[16:48] <tsimonq2> httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
[16:48] <tsimonq2> same error
[16:49]  * tsimonq2 uses the Python shell and imports the launchpadlib library
[16:50] <cjwatson> lp-shell is basically just that
[16:52] <tsimonq2> hmm http://pastebin.ubuntu.com/14598694/
[16:52] <tsimonq2> same error
[16:54] <cjwatson> right, the possibilities are basically that you don't have the right system certificates installed, or that there's some network corruption between you and launchpad.net (in which I include both random data corruption and "transparent" proxies), or that somebody is trying to man-in-the-middle your HTTPS connections
[16:55] <cjwatson> perhaps visiting https://api.launchpad.net/ in a browser will be more informative (or perhaps not)
[16:55] <cjwatson> browsers have tools to report on certificates, as does e.g. the openssl command-line tool
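As a self-contained way to practice the openssl side of that, one can inspect a throwaway self-signed certificate (no network needed; the CN and file paths are arbitrary):

```shell
set -e
# Create a throwaway self-signed certificate to inspect.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=api.example.test" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null
# Print the fields you'd check when debugging a verify failure:
openssl x509 -in /tmp/demo-cert.pem -noout -subject -issuer -dates
# Against a live server you would fetch the chain with s_client, e.g.:
#   openssl s_client -connect api.launchpad.net:443 \
#       -servername api.launchpad.net </dev/null
```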
[17:03] <tsimonq2> cjwatson: what page should I look at for an example?
[17:04] <cjwatson> tsimonq2: unimportant
[17:05] <tsimonq2> cjwatson: what am I looking for? this is a normal output...
[17:07] <cjwatson> tsimonq2: browsers typically let you look at the security certificates in use for any given page, and you could use that to help work out why Python apparently isn't picking up the same certificates
[17:07] <cjwatson> it's a debugging exercise and I don't know exactly what the problem is, so I cannot walk you through it
[17:08] <tsimonq2> cjwatson: I'm not by any means literate in this, what am I supposed to look for?
[17:08] <cjwatson> please could somebody else help tsimonq2?
[17:08] <cjwatson> like I say I don't have the time at the moment
[17:08] <xnox> tsimonq2, I find it very odd that you have launchpadlib installed globally from packaging, yet httplib2 in /usr/local
[17:08] <xnox> tsimonq2, I would expect you to use the httplib2 from /usr/lib, i.e. the one provided by your system.
[17:09] <cjwatson> ah good point, pretty sure the system httplib2 carries a patch to use system certificates
[17:09] <tsimonq2> xnox: so what do I need to do?
[17:10] <xnox> tsimonq2, why do you have python packages in /usr/local? it's quite a bad idea. as that affects everything on the system. Either use virtualenv, or per user python.
[17:10] <tsimonq2> thanks for your help cjwatson :)
[17:10] <tsimonq2> xnox: I didn't know that, how do I fix it? :)
[17:10] <xnox> tsimonq2, i would try $ sudo mkdir -p /var/tmp/backup; sudo mv /usr/local/lib/python2.7 /var/tmp/backup/usr-local-python2.7
[17:10] <cjwatson> I would instead try sudo pip uninstall httplib2
[17:10] <cjwatson> though it may take out other stuff you've locally installed there
[17:11] <xnox> tsimonq2, assuming you do know what you have in /usr/local....
[17:11] <xnox> right.
[17:11] <cjwatson> but that will have the useful effect of telling you what you need to fix to use a virtualenv instead ...
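The virtualenv alternative xnox and cjwatson are pointing at can be sketched like this: pip-installed modules stay inside the environment instead of polluting /usr/local (assumes python3-venv is available; the env path is arbitrary):

```shell
set -e
# Create an isolated environment; nothing here touches the system Python.
python3 -m venv /tmp/lp-env
# Anything installed with this pip lands inside /tmp/lp-env only, e.g.:
#   /tmp/lp-env/bin/pip install launchpadlib
/tmp/lp-env/bin/python -c "import sys; print(sys.prefix)"   # prints /tmp/lp-env
```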
[17:12] <xnox> cjwatson, btw - I dream of killing /usr/local support by default. there are more people who break their systems with it than those who really know how to use it. and most people don't need /usr/local at all, given the fancy-pants deployment tools of the modern day.
[17:12] <tsimonq2> ooh ooh ooh thanks xnox, that fixed it :)
[17:13] <xnox> tsimonq2, well we don't know what we broke =) and why you have things installed in /usr/local. if you run something and a python thing is missing, try installing it with apt first, rather than pip.
[17:13] <tsimonq2> ahh ok
[17:13] <tsimonq2> and now it should be in my ppa, yay :) https://launchpad.net/~tsimonq2/+archive/ubuntu/backports
[17:47] <kickinz1> rbasak, ntp logical delta against old/ubuntu is: logical/1_4.2.6.p5+dfsg-3ubuntu9
[17:47] <kickinz1> rbasak, new merge proposal is: logical/4.2.8p4+dfsg-3ubuntu1
[18:10] <kickinz1> rbasak, hold-on
[19:10] <dannf> cjwatson: fyi : https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1533009/comments/28
[19:11] <dannf> cjwatson: i saw your grub upload, just noting that because it'll hit GCC6 as well, but likely needing a different flag
[19:35] <mitya57> Laney: thanks for the sphinx upload! And btw the PR is now merged upstream.
[19:55] <cjwatson> dannf: phcoder implemented the relocations, so I can probably drop it once I've tested that that's enough
[19:55] <cjwatson> (which I haven't yet)
[20:42] <RAOF> What would people feel about linking Mir using the gold linker?
[21:05] <coreycb`> mitya57, is sphinx working ok for you?  I'm getting a 'no such file or directory' error.
[21:06] <coreycb`> I'm not sure if it's on my end or something with sphinx
[21:18] <coreycb`> nevermind, I think it's on my end