[01:55] <geomyidae_> What initiates the build process though? Like, for mono, there are dozens of individual debs created. Is there a master manifest that instructs for what packages to build?
[02:20] <tarpman> geomyidae_: http://en.wikipedia.org/wiki/Debian_build_toolchain
[02:58] <geomyidae_> tarpman: now that looks like what I've been after
[06:00] <pitti> infinity: I didn't actually upload util-linux, hang on; but yes, the previous times I merged, which is generally preferable
[06:04] <pitti> infinity: oh ok, I read on now, seems all settled?
[06:07] <pitti> infinity: but some of the other debian fixes look worthwhile as well; not sure if we still want to squeeze them into utopic, though
[06:08] <pitti> (mostly for lack of time to merge and test, not because they look scary)
[07:14] <dholbach> good morning
[08:28] <doko> pitti, real issue? https://jenkins.qa.ubuntu.com/view/Utopic/view/AutoPkgTest/job/utopic-adt-apport/lastBuild/?
[08:33] <pitti> doko: nope, apport-kde test flakiness; retried
[08:34] <doko> pitti: binutils amd64 too please
[08:34] <pitti> whoa, the world just exploded -- a gazillion adt failures
[08:35] <pitti> jibel: did you just update the adt jobs? "adt--amd64-cloud.img" -- that looks wrong
[08:35] <pitti> this caused the failure flood
[08:36] <jibel> pitti, no, phone phone phone at the moment
[08:36] <jibel> pitti, ⟫ distro-info --devel
[08:36] <jibel> ubuntu-distro-info: Distribution data outdated.
[08:37] <cjwatson> upgrade to the distro-info-data in utopic
[08:37] <cjwatson> the dates were corrected
[08:37] <jibel> pitti, for distro info the release of utopic is today
[08:37] <cjwatson> jibel: ^-
[08:37] <jibel> was
[08:37] <pitti> ah, so that's it -- I thought we would explicitly set the release?
[08:38] <cjwatson> we should anyway, otherwise the jobs will fail again on release day
[08:38] <pitti> well no, I don't want to use distro-info at all -- we know that the job is for utopic, we shouldn't rely on it to assume that we want to run it in the current devel series
[08:38] <cjwatson> Yeah
[08:39] <pitti> RELEASE=$(distro-info -d || distro-info -s)
[08:39] <pitti> oh, seems we just need to fix that in the local config
[08:41] <pitti> jibel: so ./jenkins/run-autopkgtest is supposed to get called with $RELEASE set -- where does that call distro-info?
[08:41] <dholbach> a new distro-info-data was uploaded to Debian yesterday IIRC
[08:41] <dholbach> maybe we still need to sync it?
[08:42] <dholbach> ah no, it's synced already, 0.22
[08:42] <pitti> jibel: and the jenkins job has export RELEASE=utopic
[08:42] <pitti> so I don't understand how distro-info is related here
[08:42] <jibel> pitti, I'll check, I'm pretty sure we fixed that last release
[08:43] <pitti> or why it only affects amd64, but not i386
[08:44] <Laney> dholbach: maybe SRU though
[08:45] <dholbach> hum, did we do SRUs of it before?
[08:45] <jibel> pitti, which job failed on amd64 but not i386? apport failed on i386 but not amd64
[08:46] <Laney> yes, see https://launchpad.net/ubuntu/+source/distro-info-data/0.8ubuntu0.6
[08:47] <pitti> jibel: http://d-jenkins.ubuntu-ci:8080/view/Utopic/view/AutoPkgTest/job/utopic-adt-vtk6/ for example
[08:47] <pitti> jibel: apport just was a race
[08:47] <seb128> bdmurray, slangasek, https://bugs.launchpad.net/ubuntu/+source/glib2.0/+bug/1381804/comments/5
[08:47] <pitti> $ WORKSPACE=`pwd`/workspace ARCH=amd64 RELEASE=utopic PACKAGE=libpng auto-package-testing/jenkins/run-autopkgtest
[08:47] <jibel> pitti, no apport failed for the same reason
[08:47] <jibel> pitti, qemu-img: /run/shm/adt--i386-cloud.img.overlay-1413448420.0155885: Could not open '/home/auto-package-testing/cache/disks/adt--i386-cloud.img': Could not open '/home/auto-package-testing/cache/disks/adt--i386-cloud.img': No such file or directory: No such file or directory
[08:47] <pitti> jibel: ^ if I run that it succeeds
[08:48] <pitti> jibel: yes, so $RELEASE isn't set; but the jenkins XML config does set it, and it's also in the console log
[08:51] <jibel> pitti, it should be fine now, only alderamin was affected.
[08:51] <pitti> jibel: oh, what changed/did you change? I just ran this on alderamin
[08:52] <pitti> jibel: ok, I'll go through and retry the failures
[08:52] <jibel> pitti, RELEASE=$(distro-info -d||distro-info -s) in /home/auto-package-testing/.adtrc
[08:53] <pitti> jibel: oh, I see
[08:53] <pitti> jibel: cheers!
[08:53] <pitti> what a trap
[08:54] <jibel> pitti, actually this line could be removed completely since RELEASE is defined in the env
[08:55] <jibel> but .adtrc overrides it
[08:56] <pitti> jibel: yeah, this is probably a leftover from the time when devs were still running that locally
[08:56] <pitti> but in the CI lab we always want to explicitly specify the release, arch, etc.
[08:57] <pitti> jibel: anyway, thanks for your help! sorry for the distraction
[08:59] <jibel> pitti, a leftover, or the machine was restored from a not-quite-up-to-date backup.
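The trap jibel found -- an rc file unconditionally clobbering a RELEASE already exported by the Jenkins job -- can be avoided with a default-only assignment. A minimal sketch; `detect_release` is a hypothetical stand-in for `distro-info -d || distro-info -s`, and the series names are placeholders:

```shell
#!/bin/sh
# Stand-in for: distro-info -d 2>/dev/null || distro-info -s
detect_release() { echo "devel-series"; }

pick_release() {
    # Fall back to detection only when the caller did not export RELEASE.
    echo "${RELEASE:-$(detect_release)}"
}

RELEASE=utopic
pick_release      # prints "utopic": the explicit environment value wins
unset RELEASE
pick_release      # prints "devel-series": the fallback kicks in
```

With `RELEASE=$(...)` as found in .adtrc, the env value is thrown away; `${RELEASE:-...}` keeps it.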
[09:31] <dholbach> @pilot in
[09:33] <rbasak> cjwatson: I'd like to upload jpds' fix for strongswan in the same manner as python-greenlet yesterday. 5.1.2-0ubuntu3 to replace 5.1.2-0ubuntu2 in utopic to fix rebuild FTBFS, but bypassing 5.1.3-0ubuntu1 in utopic-proposed which introduces a new FTBFS.
[09:33] <rbasak> If this is acceptable, please could you remove 5.1.3-0ubuntu1 from utopic-proposed?
[09:39] <cjwatson> rbasak: is this a fix cherry-picked from 5.1.3-0ubuntu1, or a further thing?
[09:40] <rbasak> I'm pretty sure it's cherry-picked, but let me check.
[09:40] <rbasak> (it's just an additional build dep)
[09:41] <cjwatson> rbasak: ok
[09:41] <cjwatson> jpds: heads-up that the above is happening
[09:41] <rbasak> -  gperf, libcap-dev [linux-any], dh-autoreconf
[09:41] <rbasak> +  gperf, libcap-dev [linux-any], libgcrypt20-dev | libgcrypt11-dev, dh-autoreconf
[09:41] <rbasak> That's all
[09:41] <cjwatson> rbasak: it's gone
[09:42] <rbasak> 5.1.3-0ubuntu1 has libgcrypt11-dev. So it's not quite exactly the same. I'm not sure why, but both seem acceptable to me.
[09:42] <rbasak> Thanks!
[09:44] <cjwatson> Check the linkage of the rest of its dependency chain, I guess.
[09:44] <cjwatson> Probably a good idea to avoid both libgcrypt versions being linked into the one process.
[09:44] <rbasak> rmadison says libgcrypt20-dev is in main, but check-mir seems to disagree
[09:46] <cjwatson> Believe rmadison
[09:47] <cjwatson> Anyway, I don't care what's in main vs. universe for this, it's the dependency chain of that package that matters
[09:47] <cjwatson> Since both libgcrypt{11,20}-dev are in main
[09:47] <rbasak> So I did a grep across all the generated binary Depends and Recommends lines
[09:48] <rbasak> I only see libgcrypt20 there
[09:48] <rbasak> Does that sufficiently check what you're after?
[09:50] <rbasak> strongswan-plugin-gcrypt was built in Trusty against libgcrypt11. This changes it to libgcrypt20.
[09:50] <rbasak> That's the only change to the binary deps AFAICS.
[09:53] <cjwatson> rbasak: Check for libgnutls too
[09:54] <cjwatson> If libgnutls26 is there, that pulls in libgcrypt11
[09:54] <cjwatson> rbasak: Anyway, sounds reasonable to try building it against libgcrypt20
[09:55] <cjwatson> Assuming it actually works :)
[09:56] <rbasak> No mention of gnutls in the dependencies at all, so we're good. Thanks, I'll upload.
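The check rbasak describes -- grepping the generated binary dependencies for anything that could drag a second libgcrypt (or a gnutls that pulls one in) into the same process -- might look roughly like this. The control stanza here is fabricated for illustration; the real check runs over the build's generated binary packages:

```shell
#!/bin/sh
# Fabricated control stanza standing in for the generated binary packages.
tmp=$(mktemp -d)
cat > "$tmp/control" <<'EOF'
Package: strongswan-plugin-gcrypt
Depends: libgcrypt20 (>= 1.6.1), libstrongswan (= 5.1.2-0ubuntu3)
EOF

# Flag anything that could link libgcrypt11 into the same process.
if grep -Eq 'libgcrypt11|gnutls' "$tmp/control"; then
    echo "conflict risk"
else
    echo "clean"    # only libgcrypt20 present
fi
rm -rf "$tmp"
```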
[09:58] <doko> apw, ogasawara: could you have a look at https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1381973 and see if this can be fixed in the precise kernel?
[10:09] <Laney> ev: I'm already preparing to upload whoopsie, no need to do that
[10:09] <Laney> just wanted to test build on armhf
[10:12] <rbasak> barry: if you have time, please could you take a look at bug 1381564? Needs a version bump. Looks like we could do it in Debian and sync over if we're quick.
[10:12] <rbasak> (I can't upload)
[10:13] <rbasak> http://sourceforge.net/p/pyparsing/code/commit_browser lists changes between 2.0.2 and 2.0.3 (Javascript required)
[10:13] <rbasak> Looks like they're all suitable bugfixes.
[10:14] <rbasak> And the test suite is updated, etc.
[10:14] <apw> doko, where can i see the FTBFS log
[10:15] <doko> apw, ohh, not anymore :-/  just upload the 1.5.9-5 to a non-virtualized ppa
[10:16] <doko> apw, ahh, wait, build logs are here: https://launchpad.net/ubuntu/+source/keyutils/1.5.9-5
[10:19] <infinity> doko: Worth noting that those didn't *all* fail because of the precise kernel. :/
[10:19] <infinity> doko: ppc64el was trusty.
[10:20] <infinity> doko: And so was powerpc.
[10:20] <doko> infinity, the build succeeded on the powerpc porter boxes
[10:21] <infinity> doko: Curious...
[10:21] <apw> bah ... it thinks we've released
[10:21] <apw> pull-lp-source: Warning: Distribution data outdated. Please check for an update for distro-info-data. See /usr/share/doc/distro-info-data/README.Debian for details.
[10:22] <infinity> apw: dist-upgrade, already fixed.
[10:22] <apw> ta
[10:22] <infinity> doko: I wonder what's responsible for making /proc/keys exist.
[10:23] <infinity> doko: If it's on a porter but not a buildd, this may not be a kernel issue.
[10:23] <infinity> Oh.
[10:24] <apw> signed kernel perhaps ?
[10:24] <infinity> No, it just was fixed in 3.13.0-35 (which is what the porter is running), and the buildd was 3.13.0-33
[10:24] <infinity> https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1344405
[10:24] <apw> heh, well that was easy
[10:25] <infinity> apw: Well, that fixed trusty.  Would still be nice to have it in precise too.
[10:25] <infinity> Assuming CONFIG_KEYS_DEBUG_PROC_KEYS was a thing on 3.2
[10:25] <infinity> I suspect it was, cause I think this testsuite passes on Debian's 3.2-based buildds.
[10:26] <apw> infinity, it is a thing back there, and is not on
[10:26] <apw> debian.master/config/config.common.ubuntu:# CONFIG_KEYS_DEBUG_PROC_KEYS is not set
[10:26] <apw> i guess i'll open that bug again against precise
[10:26] <infinity> apw: Yeahp, would be nice to flip it on.  Ta.
[10:28] <infinity> I have some pretty low opinions of userspace things unconditionally assuming esoteric kernel config options, but this seems like one that a few bits look for now.
[10:29] <infinity> doko: Thanks for the porter/buildd disparity hint.  Made the real issue pop right out. :)
[10:30] <sarnold_> .. and it does seem like the reason for keyutils to exist :)
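The disparity infinity tracked down -- /proc/keys present on the porter's 3.13.0-35 kernel but absent on the buildd's 3.13.0-33 -- is easy to probe for. A sketch of the check the keyutils testsuite effectively assumes:

```shell
#!/bin/sh
# Does the booted kernel expose /proc/keys at all?
if [ -r /proc/keys ]; then
    echo "/proc/keys present"
else
    # On Ubuntu kernels the governing option is CONFIG_KEYS_DEBUG_PROC_KEYS;
    # /boot/config-$(uname -r) records whether it was set at build time.
    echo "/proc/keys missing"
fi
```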
[10:33] <doko> infinity, cjwatson: gptsync and refit are removed in unstable. superseded by refind. will try to fix the ftbfs for refit
[10:39] <doko> seb128, Laney: ping on the ido ftbfs ... http://people.ubuntuwire.org/~wgrant/rebuild-ftbfs-test/test-rebuild-20140914-utopic.html
[11:04] <dholbach> happyaron, are you taking care of bug 1374949?
[11:05] <bluesabre> good morning dholbach, and thanks for the uploads :)
[11:05] <dholbach> bluesabre, anytime
[11:08] <dholbach> *kkc: ok nevermind, looks like they're already synced
[11:16] <LocutusOfBorg1> dholbach, it is having some troubles, seems that two developers synced it at the same time
[11:16] <LocutusOfBorg1> https://launchpad.net/ubuntu/+source/libkkc/0.3.4-1
[11:16] <LocutusOfBorg1> look
[11:17] <dholbach> hum
[11:17] <dholbach> I'm not sure what that means
[11:19] <LocutusOfBorg1> discussed in #-release
> Who accepted libkkc an hour or two ago?  It would be slightly helpful to know exactly how you did it
> (Since we ended up with two sets of simultaneous builds for it, which isn't supposed to be able to happen)
[12:03] <doko> tedg, please could you have a look at https://bugs.launchpad.net/ubuntu/+source/ido/+bug/1382020 ?
[12:29] <ev> Laney: cheers
[12:32] <happyaron> dholbach: yes I'm on it
[12:49] <mitya57> dholbach, hi, if you are piloting, can you please look at my metacity MP?
[13:05] <barry> rbasak: looking!
[13:05] <tedg> doko, Sure
[13:39] <barry> rbasak: https://bugs.launchpad.net/ubuntu/+source/pyparsing/+bug/1381564/comments/4
[13:40] <doko> tedg, thanks! are you going to upload?
[13:40] <rbasak> barry: thank you!
[13:40] <rbasak> barry: are we actually in final freeze yet?
[13:40] <barry> rbasak: yw!
[13:41] <barry> rbasak: according to the topic in #u-release, not yet
[13:41] <doko> barry, then time to fix bzr ;)
[13:41] <barry> doko: ah
[13:42] <doko> barry lacks a bit of enthusiasm ;-P
[13:43] <barry> doko: yeah, sadly.  if there was a patch already, maybe. but gosh, i have nfc about those failures and i haven't hacked in bzr in a long while.  not sure i could even get a fix for that before ff
[13:44]  * barry wonders if there are any bzr hackers left
[13:44] <pitti> mvo: just FYI, I'm looking into the systemd failed test which currently holds back your new util-linux
[13:44] <mvo> pitti: thanks
[13:52] <infinity> xnox: Around?
[13:52] <infinity> pitti: Thanks, that systemd test failure looks pretty bizarrely broken.
[13:53] <tedg> doko, Another bug popped up after that one with lcov. Going to make sure Jenkins is happy.
[13:53] <pitti> infinity: yeah, not sure what changed there recently; after rebooting with systemd-sysv, autopkgtest's helper init.d script fails to start
[13:57] <dholbach> @pilot out
[14:05] <mitya57> dholbach: thanks a lot!
[14:05] <dholbach> anytime
[14:06] <pitti> infinity: oh crap, I suck; I found the problem; that'll require another autopkgtest upload into utopic
[14:06] <pitti> infinity: not to solve it on CI (we use the git checkout for that), but to fix broken local adt VMs
[14:08] <infinity> pitti: Upload away. :)
[14:09] <pitti> infinity: I just missed dinstall, so the usual debian -> sync route will mean I can sync tomorrow morning; is that enough?
[14:09] <pitti> infinity: otherwise I can upload a fake sync to utopic
[14:09] <infinity> pitti: Nah, tomorrow is fine.
[14:09] <pitti> *nod*
[14:12] <doko> barry, please don't close bugs which are not yet fixed (pyparsing)
[14:15] <bdmurray> seb128: I saw - thanks for getting someone to look at it.
[14:15] <seb128> bdmurray, hey, yw!
[14:17] <doko> seb128, ping on https://bugs.launchpad.net/ubuntu/+source/unity-scope-manpages/+bug/1382011  or can you suggest some responsible person/team?
[14:18] <seb128> doko, try asking thostr
[14:18] <doko> ok
[14:19] <rcrit> I'm new to Ubuntu, mostly working on *cough* Fedora in the past. I'm seeing some different behavior of openssl than what I saw in Fedora, related to system certificates
[14:20] <rcrit> for example, this is failing to validate the certs for me: openssl s_client -host www.google.com -port 443
[14:20] <rcrit> the same on Fedora results in a validated connection
[14:20] <rcrit> I'd like to be able to use a common, shared set of CA certificates. I see a bunch in /etc/ssl/certs but they don't seem to be used automatically
[14:21] <sarnold_> rcrit: you have to specify the certs you want to trust: openssl s_client -CApath /etc/ssl/certs/ -host www.google.com -port 443
[14:21] <rcrit> shouldn't that be the default?
[14:25] <rcrit> well, maybe I'll open a bug and pursue it there, thanks.
[14:27] <rcrit> hmm, curl gets this right.
[14:29] <rbasak> Most stuff does use it IME. I think it's just s_client that doesn't.
[14:30] <rbasak> (I'm sure there are exceptions though, and in principle I agree with fixing those)
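The default rcrit ran into can be seen without touching the network: verifying a self-signed certificate fails until the verifier is explicitly handed a trust store. A throwaway demonstration (all files are temp files; the cert acts as its own CA):

```shell
#!/bin/sh
tmp=$(mktemp -d)
# Throwaway self-signed certificate acting as its own CA.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
    -keyout "$tmp/key.pem" -out "$tmp/ca.pem" 2>/dev/null

openssl verify "$tmp/ca.pem" || true                 # fails: no trust store supplied
openssl verify -CAfile "$tmp/ca.pem" "$tmp/ca.pem"   # succeeds: CA supplied explicitly
rm -rf "$tmp"
```

Against a live server the equivalent is pointing at the system store, as in sarnold_'s `-CApath /etc/ssl/certs/` invocation above.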
[14:36] <slangasek> seb128: thanks!  glad it was a straightforward fix, sorry for having to make you guys look at it for us
[14:36] <seb128> slangasek, no worry, yw!
[14:48] <pitti> mvo, infinity: adt VM building fixed, rolled out to CI, systemd succeeds again, util-linux blocked; I'll sync autopkgtest 3.6 tomorrow morning
[14:59] <seb128> bdmurray, pitti, Saviq: I hit some unity8-dash segfaults on the phone but apport fails to collect a dump, so no retrace/gdb possible, is that a known issue? do you have any clue why that might be the case?
[14:59] <pitti> seb128: what does /var/log/apport.log say?
[14:59] <pitti> (might not have enough memory for the core dump)
[15:00] <seb128> pitti, http://paste.ubuntu.com/8574366/
[15:01] <pitti> seb128: hm, so that doesn't say anything
[15:02] <pitti> seb128: ah, it doesn't log if the core dump gets too large
[15:08] <seb128> pitti, what's the limit, is it easy to tweak?
[15:08] <pitti> seb128: 3/4 of available RAM size (otherwise you'd easily run into OOM)
[15:08] <pitti> seb128: but might be ok for merely uploading it
[15:10] <seb128> shrug
[15:10] <seb128> unity-dash using 637M?!
[15:10] <seb128> pitti, thanks
[15:10] <pitti> yeah :(
[15:11] <pitti> seb128: ask jibel how much fun he has with the OOM killer due to unity taking so much RAM
[15:13] <pitti> seb128: you can try: /usr/lib/python3/dist-packages/problem_report.py, line 373; change it to
[15:13] <pitti>                         if False and size > limit:
[15:13] <pitti> (or delete the entire if)
[15:14] <pitti> seb128: or, what's probably better: edit /usr/share/apport/apport line 339; that specifies the limit as usable_ram() * 3 / 4
[15:15] <seb128> pitti, going to try that, danke
[15:15] <seb128> Saviq, that ^ means we actually get unity-dash segfault issues not showing on e.u.c
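The limit pitti quotes (usable_ram() * 3 / 4, in /usr/share/apport/apport) can be approximated from /proc/meminfo. This is only a rough shell rendering of the Python line he points at, not apport's exact usable_ram(), which subtracts some reserved memory:

```shell
#!/bin/sh
# MemTotal is reported in kB; 3/4 of it is an upper bound on apport's limit.
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
limit_kb=$((ram_kb * 3 / 4))
echo "core dumps above roughly ${limit_kb} kB would be skipped"
```

On a ~1 GB phone that works out to ~750 MB, which a 637 MB dash plus running apps can plausibly exceed once free RAM is counted.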
[15:17] <tseliot> doko: I've just filed the two MIRs, as requested: LP: #1382091 and LP: #1382086
[15:18] <doko> tseliot, thanks, will try to look at these tomorrow, leaving soonish today
[15:18] <tseliot> doko: thanks
[15:18] <doko> tseliot, is a team subscribed to these packages?
[15:18] <tseliot> doko: yes, ubuntu-mir
[15:19] <tseliot> doko: wait, to the packages?
[15:19] <doko> tseliot, yes
[15:19] <Saviq> seb128, that looks crazy, have you got a lot of music on the phone for example?
[15:19] <seb128> Saviq, no, I've none
[15:19] <seb128> no video, no music
[15:20] <tseliot> doko: the package is synced from debian, so I assume not?
[15:20] <seb128> Saviq, I just killed it, didn't use the phone, it's using 525M
[15:20] <Saviq> seb128, what do you use to measure?
[15:21] <seb128> Saviq, top
[15:21] <seb128> that might not be right
[15:21] <Saviq> smemstat -p `pidof unity8-dash`
[15:22] <seb128> Saviq, bottom line is that apport bails out from collecting the dump, which means no bt/retracing/e.u.c
[15:22] <doko> tseliot, no, on https://launchpad.net/ubuntu/+source/ocl-icd  "subscribe to ..."
[15:22] <seb128>  10405     0.0 B    45.0 M    48.9 M    68.9 M phablet    unity8-dash
[15:22] <seb128> Saviq, ^ hum
[15:22] <Saviq> seb128, yeah, that's more like it
[15:22] <Saviq> seb128, and how much free mem you got?
[15:22]  * Saviq kills dash to see
[15:22] <doko> tseliot, and here: https://launchpad.net/ubuntu/+source/khronos-opencl-headers
[15:23] <ogra_> on a freshly booted 110 install with G+ and dekko open my dash uses 112M in idle
[15:23] <tseliot> doko: right, nobody is subscribed
[15:23] <seb128> Saviq, playing a bit with the click store
[15:23] <seb128>  10405     0.0 B   117.8 M   122.1 M   143.4 M phablet    unity8-dash
[15:23] <doko> tseliot, so please subscribe your team you usually use for such things
[15:23] <seb128> KiB Mem:    983764 total,   752348 used,   231416 free,    20912 buffers
[15:23] <seb128> Saviq, ^
[15:24] <Saviq> seb128, that's expected, it doesn't unload all the images
[15:24] <Saviq> seb128, not straight away, that is
[15:24] <seb128> k
[15:24] <seb128> so I don't know if apport measures the memory wrong
[15:24] <seb128> or if there is some other issue
[15:24] <seb128> but I got like 5 segfault today, and not had a dump
[15:24] <Saviq> seb128, it collected fine after SIGABRT here
[15:24]  * Saviq still got 300MB free
[15:25] <tseliot> doko: I'll subscribe the ubuntu-x team for now
[15:25] <seb128> Saviq, let me sig11 it
[15:25] <Saviq> seb128, huh
[15:25] <Saviq> seb128, I had a .crash file and whoopsie seems to have deleted it?!
[15:25] <seb128> Saviq, weird
[15:25] <seb128> mines are still there
[15:25] <seb128> they just do 90k and don't include a dump
[15:27] <Saviq> seb128, right, that does suggest core collection ran out of mem
[15:31] <mvo> jibel: hi, do we still run precise->trusty->utopic upgrade tests currently? I wonder if bug #1381570 would have been triggered here
[15:34] <jibel> mvo, Hi, no. We are running P->T and T->U but not P->T->U
[15:34] <Saviq> seb128, but mine seems fine @12M... but HUH, all the crashes seem to go away without being uploaded ¿?
[15:35] <seb128> Saviq, well, maybe the way I hammered on the click store earlier to reproduce those segfaults made the number rise a bit, and with some apps running I could have been like unity8 = 170M and not enough free RAM
[15:36] <Saviq> seb128, yeah, might be, and yeah, we know we need to do better @ mem management
[15:38] <seb128> Saviq, can't do everything in one cycle, my main concern there was more that if most users hit those cases, then e.u.c doesn't tell us the real story
[15:39] <mvo> jibel: aha, ok, that explain why this was not found via autotesting, thanks
[16:47] <GunnarHj> kirkland: ping?
[21:11] <roadmr> hello folks, the files for utopic netboot are incomplete, after loading pxelinux.0 it complains about some missing .c32 files which I had to go fetch from the syslinux-common package. Would it be reasonable to expect the netboot installer stuff to work out-of-the-box? or is there a document somewhere explaining this procedure?
[21:18] <goodwill> roadmr: what is utopic?
[21:18] <roadmr> goodwill: Utopic Unicorn, the development version of Ubuntu, soon to be released as 14.10
[21:30] <goodwill> ah
[22:31] <paran> !regression-alert
[22:31] <bdmurray> paran: where?
[22:33] <paran> bug #1376966. SRU for ubuntu-drivers-common broke X for me and others.
[22:33] <paran> Ubuntu wiki said to report SRU regressions here on IRC :-)
[22:37] <bdmurray> paran: looking, thanks
[22:40] <paran> bdmurray: very short summary, gpu-manager looks in wrong files with a bad regex, and incorrectly think the nvidia kernel module is blacklisted. I included a patch.
[22:41] <bdmurray> paran: Yes, I see. Thanks for that.
[22:43] <bdmurray> paran: I've updated the bug and assigned it to a developer.
[22:50] <paran> bdmurray: sounds good, thanks.