[00:58] <cjwatson> xnox: Almost anything is more straightforward than that
[01:22] <wgrant> xnox: Oh, nice, WSGIProxy is exactly what I thought we'd have to implement.
[03:40] <pitti> Good morning
[03:41] <Unit193> Howdy, pitti.
[03:41] <RAOF> pitti: Aloha!
[03:42] <pitti> RAOF: ah, I forgot to check again yesterday
[03:42] <pitti> RAOF: so, it built fine
[03:42] <pitti> RAOF: it took over 2 hours, and at some point it just dropped off my brain
[03:42] <pitti> RAOF: do you want the debs/log/etc., or is "it builds" enough?
[03:42] <RAOF> It builds is good for me.
[03:43] <RAOF> Let me push a tag for you to upload.
[03:54] <RAOF> pitti: debian/1.2.1-1 should be good to go to unstable. Let's get this transition on the road!
[03:54] <pitti> RAOF: ah, soname bump?
[03:55] <pitti> RAOF: hm, gbp-pull gives me nothing new
[03:55] <RAOF> To git+ssh://git.debian.org/git/collab-maint/colord
[03:55] <RAOF>    3045b6b..3719f1b  master -> master
[03:55] <RAOF>  * [new tag]         debian/1.2.1-1 -> debian/1.2.1-1
[03:56] <pitti> ok, so I was just blind, or gbp-pull didn't show me the new tag
[03:56] <RAOF> pitti: Yeah, a bunch of deprecated symbols got dropped in the 1.2 series, making it libcolord2 et al.
[04:02] <pitti> RAOF: quite impressive that on my box creating a profile takes ~ 20 seconds where on mips it takes about half an hour; I wasn't aware that these mips boxes were *that* slow..
[04:04] <RAOF> It turns out that you can do a lot of computation with billions of transistors!
[04:08] <infinity> pitti: Which mips box is that?
[04:08] <pitti> infinity: gabrielli.debian.org (the porter box)
[04:08] <infinity> pitti: The Cavium machine is pretty peppy, except for crappy I/O.
[04:08] <pitti> ah, I stopped at the first "mips porterbox" on the db.d.o. page
[04:08] <infinity> pitti: And gabrielli is the machine I was thinking of.
[04:08] <infinity> pitti: Maybe I only think it's peppy, cause I build with -j16
[04:09] <pitti> infinity: I built with -j8, but doesn't help much with colord as it doesn't parallelize the calculation
[04:09] <infinity> The individual cores certainly aren't screamers (they're not meant to be, it's essentially a switch in a desktop case)
[04:09] <pitti> it has 16 cores
[04:09] <RAOF> infinity: Yeah, profile generation is explicitly serial, as each profile takes over a GiB of memory to compute.
[04:09] <pitti> infinity: yeah, I used to have one of these d-link linux routers, they were quite nice to hack on
[04:09] <infinity> RAOF: Ow.
[04:10] <pitti> RAOF: uploaded
[04:10] <RAOF> So the relevant performance indicator is IPC :)
[04:10] <RAOF> Times clockspeed, obviously.
[04:10] <RAOF> pitti: Much ta.
[04:11] <infinity> pitti: I'd love someone to talk me into doing an Ubuntu MIPS port some day, but we'd need a whole lot of gabrielli-sized machines to make it go, I suspect.
[04:11] <infinity> At least, to provide the same SLA people are currently used to in LP and whine about when it degrades. :P
[04:11] <RAOF> Virtualise it on prodstack! :P
[04:12] <pitti> the internet of ubuntus!
[04:13]  * RAOF loves how his copyright line doesn't fit in 80 characters
[04:20] <infinity> RAOF: Get a shorter name.
[04:49] <Noskcaj> Can someone please review https://code.launchpad.net/~noskcaj/libgweather/3.12.2
[05:00] <Noskcaj> cjwatson, Mind if I take the unzip merge?
[06:12] <dpm> hi wgrant, pitti is currently working on generating utopic langpacks for the phone, but as per the schedule we set up in LP, the next export is not due until Wednesday next week. In order not to have to wait until then, would it be possible to run a one-off export of translations for utopic today?
[06:13] <dpm> We've done this in the past, e.g. when we needed exports outside of the crontab for a release, but I don't know the procedure for requesting these nowadays
[06:15] <pitti> hey dpm
[06:16] <dpm> morning pitti :)
[06:16] <pitti> dpm: I refined the script to calculate source packages shipped on touch now, just committed to langpack-o-matic; now I'll work on generating langpacks based on that
[06:16] <dpm> excellent!
[06:16] <pitti> dpm: I think we don't do the -base/update split for touch
[06:17] <pitti> it makes little sense with image based updates
[06:17] <dpm> yes, I agree
[06:17] <dpm> pitti, do we have to change all universe packages in the list to be imported to LP? Can we determine beforehand which ones we need to import translations for?
[06:18] <pitti> dpm: not necessary for building langpacks; they'll just contain the bits from main and with X-Use-Ubuntu-Langpacks (or whatever that magic key was)
[06:19] <pitti> dpm: I suppose we could download all binaries from that pastebin, check if they have anything in /usr/share/locale; all that have .mo files are candidates for importing
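A sketch of that filter, purely illustrative: the package-to-file-list mapping below is a stand-in (in practice it would come from running `dpkg-deb -c` on the binaries downloaded from the pastebin), and the helper name is made up.

```python
import re

# Flag binary packages that ship gettext catalogs under /usr/share/locale
# and are therefore candidates for langpack import/stripping. The mapping
# below is a stand-in for real "dpkg-deb -c" output.
MO_RE = re.compile(r"^/usr/share/locale/[^/]+/LC_MESSAGES/.+\.mo$")

def langpack_candidates(pkg_files):
    """Return packages whose file lists contain .mo catalogs."""
    return sorted(pkg for pkg, files in pkg_files.items()
                  if any(MO_RE.match(f) for f in files))

example = {
    "gdebi": ["/usr/bin/gdebi",
              "/usr/share/locale/de/LC_MESSAGES/gdebi.mo"],
    "libfoo1": ["/usr/lib/libfoo.so.1"],
}
print(langpack_candidates(example))  # ['gdebi']
```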
[06:22] <dpm> pitti, yes, but ultimately to have meaningful langpacks, we'll need the X-Use-Ubuntu-Langpacks (can't remember the exact name, either :) for their .mo files to be in the export to be fed to langpack-o-matic. So at some point, we'll either have to mark all the universe ones with X-Use-Ubuntu-Langpacks or do an initial filtering (e.g. with your latest suggestion) and only mark those that make sense to be imported
[06:23] <pitti> right
[06:23] <pitti> dpm: but then we should also strip them
[06:23] <pitti> otherwise langpacks wouldn't have much effect
[06:24] <pitti> (only for new languages)
[06:24] <dpm> pitti, yes, but that's what X-Use-Ubuntu-Langpacks does already, IIRC - it strips them, imports the translations and then LP exports those
[06:25] <dpm> or were you suggesting something else?
[06:25] <pitti> dpm: ah right, just checked; I thought we also had something like "export translation tarballs, but don't strip"
[06:26] <pitti> but I think we export tarballs for all builds
[06:26] <dpm> pitti, ah, no, I don't think we have a "don't strip" option
[06:27] <dpm> pitti, what do you mean by "all builds" in this context?
[06:28] <pitti> dpm: I mean, I think we export _translations.tar.gz for all universe packages, we just don't strip them or export them from LP
[06:29] <dpm> pitti, ah, I didn't know that. Where are these universe translations tarballs exported to?
[06:30] <pitti> dpm: to LP translations, as usual
[06:32] <dpm> oh, you mean they're generated as part of the package build, right? So you're saying you could use that to feed to langpack-o-matic instead of the big weekly Launchpad export?
[06:33] <pitti> dpm: no, not at all; I'm saying we merely need to mark these packages as X-Use-Ubuntu-Langpacks:, and it should be all good then
[06:33] <dpm> ok, gotcha
[06:34] <pitti> dpm: so your web app shows that we have some 20 languages with coverage >= 70%; that might be a good set to start with?
[06:35] <dpm> pitti, yeah, once we get this up and running, I think I'd like to set the percentage to 80 or even 90%, but right now, 70% should be good to initially get some more languages in. Can langpack-o-matic do this calculation and filtering based on coverage %?
[06:36] <pitti> dpm: not at the moment, but I'll teach it
[06:36] <dpm> :)
[06:36] <pitti> dpm: not that simple to get the absolute number of translatable strings though, so it might need some handholding
[06:37] <pitti> dpm: i. e. if I just look at a set of .po files, then I don't see the ones which don't have any translations at all
[06:37] <pitti> dpm: I suppose your web app compares with the .pots ?
[06:39] <dpm> pitti, yes, I use the .pots to get the total, but currently all apps in that list are checked out to a local branch to calculate the stats, so I have access to the .pot files. However, langpack-o-matic could have some other options if it cannot access the .pot files directly
[06:40] <pitti> dpm: I need to check whether the LP exports contain the pots
[06:40] <dpm> - Read the template stats from the daily LP translations data dump in JSON at http://people.canonical.com/~dpm/data/ubuntu-l10n/
[06:41] <dpm> - Use the LP Translations API - It's not complete, but the templates part is available. I can't recall if it does more than just listing template names per source package, though
[06:41] <pitti> dpm: ah, nice
[06:41] <pitti> dpm: i. e. in http://people.canonical.com/~dpm/data/ubuntu-l10n/trusty-potemplate-stats.json
[06:41] <dpm> pitti, exactly
[06:41] <pitti> dpm: that would mean that accounts-service has 10 translatable strings?
[06:42] <dpm> I've got an RT open to get the utopic stats
[06:42] <pitti> and apt 624
[06:42] <dpm> pitti, indeed
[06:42] <pitti> ok
[06:42] <pitti> dpm: so the stats will be skewed a bit as we ship e. g. apt and gdb on the phone although they aren't "visible"
[06:43] <pitti> dpm: implementing that will take some time, perhaps we start with a fixed list of language codes
[06:43] <pitti> based on your web app
[06:43] <pitti> it won't change every other week anyway
[06:43] <dpm> +1
[06:44] <wgrant> dpm, pitti: Export is running.
[06:44] <pitti> wgrant: cheers
[06:45] <dpm> awesome, thanks wgrant
[06:45] <debfx> ScottK, Laney: do we still need to enforce that trusty->precise backports end up in saucy too? is precise->saucy a supported upgrade path?
[06:46] <dpm> pitti, I'd use only the totals for the packages that we're considering as part of the UI. I've run my stats webapp for source packages to calculate the distro translation coverage in the past, and I use the "priority" field to determine if it's a template whose translations are visible. E.g. anything with a priority <= 100 I'm not counting towards the total number of strings
[06:47] <dpm> and those priorities are set in LP, generally I go through the templates when we open translations and set them all up and tweak them afterwards if needed
[06:48] <dpm> pitti, a third option would be for me to add an API to my web stats to return the language coverage for each language
[06:48] <dpm> s/language/translation/ coverage
[06:56] <pitti> dpm: downloading a single .json file sounds fine to me
[06:58] <dholbach> good morning
[06:58] <dpm> pitti, yes, whatever works best for you. I've also checked that you can get the string count from the LP API, so just to confirm, this is also an option you could use. https://launchpad.net/+apidoc/devel.html#translation_template
[06:58] <dpm> "message_count"
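Summing those counts is straightforward; a sketch, with the JSON layout (template name, LP priority, message_count) assumed from the conversation rather than taken from the real dump under people.canonical.com/~dpm/data/ubuntu-l10n/:

```python
import json

def total_strings(stats, min_priority=100):
    """Sum message_count over templates, skipping the ones dpm marks as
    invisible (priority <= 100)."""
    return sum(t["message_count"] for t in stats
               if t.get("priority", 0) > min_priority)

# Assumed structure, with the two string counts mentioned in the chat
stats = json.loads("""[
  {"template": "accounts-service", "priority": 500, "message_count": 10},
  {"template": "apt",              "priority": 500, "message_count": 624},
  {"template": "hidden-helper",    "priority": 50,  "message_count": 99}
]""")
print(total_strings(stats))  # 634
```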
[08:16] <mitya57> Mirv: Good to know that!
[08:16] <Laney> debfx: nope, indeed not
[09:00] <cjwatson> Noskcaj: Would prefer to do it myself, thanks.
[09:00] <Noskcaj> ok. I'll cancel my merge proposal then
[09:01] <cjwatson> In general I don't need help with my merges unless explicitly requested
[09:02] <Noskcaj> ok
[09:03] <infinity> doko: Did you see my binutils ping?  You dropped my delta from http://launchpadlibrarian.net/173790298/binutils_2.24.51.20140425-0ubuntu1_2.24.51.20140425-0ubuntu2.diff.gz
[09:04] <infinity> doko: Which is why the lintian testsuite is failing again.
[09:04] <infinity> doko: You may also have dropped it from Debian, I didn't check.
[09:05] <doko> infinity, ouch. will reapply it
[09:06] <infinity> doko: Ta.
[09:07] <infinity> doko: Should probably add an autopkgtest that does the same thing as the lintian test (attempt to link -lm to a simple test program using ld directly) to make sure it doesn't regress.
[09:10] <ogra_> xnox, seeing your mail exchange with janimo ... didn't we say the non-RTM devices will keep loop mounting? so for these a proper image name definitely makes sense
[11:19] <pitti> infinity, cjwatson: do you know, do we still need pre-depends: for packages using xz compression? (it's the default now, but I'm also wondering about precise)
[11:19] <pitti> ${misc:Pre-Depends} ought to take care of that?
[11:20] <cjwatson> I think it was lucid-to-precise updates that needed that
[11:21] <pitti> ok, thanks; these days dpkg doesn't generate any misc:Pre-Depends, but I guess it can't hurt to add it anyways
[11:21] <pitti> context: langpack builds (so far they've been using bzip2, but that's now marked as deprecated)
[11:21] <cjwatson> dpkg wouldn't anyway, it was debhelper
[11:22] <cjwatson> I think it dropped them a little while back
[11:22] <cjwatson> But yeah, for anything at all recent there's no need to keep them now
[11:22] <pitti> ack, thanks
[11:22] <cjwatson> Maybe for precise it's simplest to leave them as bz2
[11:24] <pitti> I just tested it under precise, seems to work just fine
[11:24] <pitti> build/install/lintian/etc.
[11:24] <infinity> lucid didn't support xz.
[11:24] <infinity> So the pre-depends is needed on packages in precise.
[11:24] <infinity> It's not needed in trusty.
[11:25] <infinity> pitti: ^
[11:25] <pitti> infinity: thanks
[11:28] <cjwatson> Right, even for packages in -updates, because otherwise we break lucid-to-precise upgrades which will likely have -updates enabled during the upgrade.
[11:28] <cjwatson> If dh_installdeb or whatever it was DTRT in precise then ${misc:Pre-Depends} would be simple enough.
[11:29] <pitti> no, it doesn't seem to do that, it just sets it to empty
[11:30] <pitti> so I bumped the existing manual pre-dep now
[11:30] <cjwatson> I guess it can't know since the compression method is set in options to dh_builddeb, which is too late to do anything to the control file.
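For reference, the manual pre-dependency being discussed guards against unpacking on a dpkg that predates xz support (data.tar.xz handling arrived in dpkg 1.15.6), so a precise-targeted langpack would carry something like:

```
# debian/control fragment — keeps lucid-to-precise upgrades from trying
# to unpack an xz data member with a too-old dpkg
Pre-Depends: dpkg (>= 1.15.6)
```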
[11:35] <dpm_> pitti, it seems that the langpack export was pretty quick this time. We've now got the first full langpack available at: https://translations.launchpad.net/ubuntu/utopic/+language-packs :)
[11:35] <pitti> dpm_: oh, wow! that's just the *perfect* timing
[11:35] <dpm_> :)
[11:35] <pitti> dpm_: I just started downloading the latest trusty langpack for testing my recent langpack-o-matic stuff on real data :)
[11:36] <dpm_> aha!
[11:36] <pitti> (so far I've just written test cases with the mini-tarballs in tests/)
[11:36] <pitti> dpm_: cool, so utopic langpack time, too :)
[11:36] <dpm_> \o/
[11:37] <pitti> wgrant: and yes, that was mighty fast -- whatever you did, rock!
[11:42] <pitti> ouch -- the whole unpack and import of the full translation tarball takes ~ 15 s on my machine, ~ 2 h on germanium; yay tmpfs..
[11:44] <pitti> dpm_: http://paste.ubuntu.com/7594256/
[11:45] <pitti> ogra_: ^ FYI: first iteration of -touch langpack is ~ 800 kB for German (which should have fairly good coverage), whereas the regular langpack is 3.6 (not including -gnome)
[11:45] <pitti> 3.6 MB, I mean
[11:45] <ogra_> wheee !!!!
[11:45]  * ogra_ hugs pitti 
[11:46] <pitti> ogra_: and dpm determined we'll need some 20 or so (all languages with >= 70% coverage)
[11:46] <ogra_> well, as many as we can ship ...
[11:47] <ogra_> but that will reduce our size massively ... awesome news
[11:49] <pitti> ogra_: there's still room for optimization here -- e. g. this ships translations for gdb, apt, etc. (see http://paste.ubuntu.com/7594256/)
[11:49] <pitti> and OTOH it doesn't yet have many apps (we need to mark those in universe which we want to get included in langpacks and stripped)
[11:49] <pitti> but it's a first ballpark figure
[11:49] <pitti> anyway, time for lunch :)
[11:49] <dpm_> pitti, that's awesome, thanks! I think that's good for the initial ones, but do you think we could do some further filtering in the next iteration? I see things such as colord, gdebi, libgweather-locations and others that we could probably drop.
[11:49] <ogra_> ah, well, we want to get the devices in the hands of developers ...
[11:49] <ogra_> i think it makes sense to keep that stuff in for now
[11:50] <ogra_> (gdb, apt etc)
[11:50] <pitti> dpm_: yes, absolutely
[11:50] <mlankhorst> chrisccoulson: I've added a hack to the bug report in case you want to use nouveau, but I think it's too ugly to even propose it as a sru :P
[11:52] <dpm_> cool. pitti, also, could you add 'ca' to the list of langpacks generated? It's the locale I'm testing on (apart from zh_CN), but I'll make sure it gets to 70% before the stats are updated tomorrow :)
[11:52] <pitti> dpm_: I currently built all, I still need to add a --languages feature
[11:52] <dpm_> ah, perfect
[11:52] <pitti> or implement --min-coverage etc.
[11:55] <chrisccoulson> mlankhorst, oh, it doesn't support gl from more than one thread. that kinda sucks :/
[11:56] <mlankhorst> chrisccoulson: the hack I wrote probably works though :P
[11:57] <mlankhorst> but yeah I'd have to make things thread-safe in a cleaner way
[12:04] <chrisccoulson> mlankhorst, I also need to implement the missing bits to make chromium's GPU blacklist work in oxide, so that we could disable the GL compositor and webgl
[12:05] <chrisccoulson> that would mean the only thread accessing gl would be the rendering thread for the qml scenegraph
[12:05] <mlankhorst> chrisccoulson: the hack makes nouveau thread safe enough not to crash
[12:05] <chrisccoulson> but that's a fair bit of work and I was hoping not to do it yet :/
[12:05] <chrisccoulson> yeah, the issue is that it's not me with the crash (I don't have nvidia hardware)
[12:06] <mlankhorst> Laney: can you test?
[12:06] <Laney> test what?
[12:06] <mlankhorst> Laney: hack from https://bugs.launchpad.net/nouveau/+bug/1309044
[12:09] <mlankhorst> but it requires libdrm 2.4.54 first, just grab it from debian or utopic
[12:09] <Laney> i'm on utopic already
[12:09] <Laney> do you have a package?
[12:09] <mlankhorst> no, just the diff, let me build one..
[12:09] <Laney> it's okay
[12:09] <Laney> assuming it applies
[12:10] <Laney> i thought you could reproduce this yourself though
[12:10] <mlankhorst> yeah :p
[12:10] <mlankhorst> but no idea if this breaks other things or not
[12:12] <Laney> mlankhorst: erm what is this a patch to? :(
[12:13] <mlankhorst> mesa
[12:13] <mlankhorst> but I'm upgrading my chroot, just give me a bit
[12:14] <Laney> it's okay, building now
[12:18] <pete-woods> sil2100: hi, I've done the testing for the HUD stuff in silo #9. I don't think the landing folks want to hit the publish button, though, because the status still makes it look like the build failed
[12:22] <Laney> mlankhorst: trying ...
[12:24] <chrisccoulson> mlankhorst, is there an easy, reliable way to detect that nouveau is being used?
[12:24] <mlankhorst> yes, but I would rather not add blacklists
[12:25] <sil2100> pete-woods: hey! Let me take a look at that, maybe it needs a small push from our side
[12:26] <chrisccoulson> mlankhorst, as a stop-gap, making the webbrowser-app abort is probably better than the whole machine locking up :)
[12:26] <mlankhorst> chrisccoulson: though in that case I would prefer only adding certain nouveau versions, not blacklist newer nouveau entirely
[12:27] <Laney> mlankhorst: I had some weird flickering when I logged in
[12:27] <Laney> WHAT HAVE YOU DONE
[12:28] <mlankhorst> no idea
[12:28] <Laney> actually I have it all the time
[12:28] <mlankhorst> yeah I've noticed, no idea about it yet :/
[12:28] <Laney> haha
[12:28] <Laney> why did you get me to run it if you knew?
[12:28] <mlankhorst> might be that flush I removed on swapbuffers, but hey i told you it was hacky :P
[12:28] <Laney> oh god
[12:28]  * Laney babbles
[12:40] <pete-woods> sil2100: thanks :)
[13:05] <jdstrand> jamesh: hey, for bug #1303962, does this policy look reasonable:
[13:05] <jdstrand> # Allow communications with mediascanner2
[13:05] <jdstrand> dbus (send)
[13:05] <jdstrand>      bus=session
[13:05] <jdstrand>      path=/com/canonical/MediaScanner2{,/**},
[13:05] <ScottK> debfx: I'd say no.
[13:06] <jdstrand> the {,/**} specifies an alternation such that both path=/com/canonical/MediaScanner2 and path=/com/canonical/MediaScanner2/<something else> matches
[13:12] <pitti> ogra_: TBH, repeating where things changed in the changelog seems pretty obsolete to me; after all, that's all in the individual commits?
[13:12] <pitti> ogra_: of course, the commit logs need to be useful; i. e. explain *why* something was done, not just *how*
[13:13] <ogra_> pitti, I just want something descriptive so that one of the landers can understand the change
[13:13] <pitti> ogra_: yeah, summaries still help to find affected packages, of course; but I think having all the details in debian/changelog is too noisy
[13:13] <ogra_> it's fine to have a caption referring to the feature across the ten packages it spans
[13:13] <pitti> ogra_: or did you actually mean bzr changelogs?
[13:14] <pitti> if these suck, then yes, big fail
[13:14] <ogra_> pitti, I mean debian changelogs ...
[13:14] <pitti> (but they always implicitly have the "where")
[13:14] <ogra_> well, bzr changelogs are usually not much better
[13:14] <ogra_> the debian changelog is assembled from them in CI
[13:15] <pitti> ogra_: ah right, with CI train debian changelog == MP "commit changelog", meh
[13:15] <ogra_> pitti, my mail is more towards the upstream devs that aren't familiar with packaging ... I haven't seen malformed changelogs from experienced packagers yet
[13:16] <ogra_> if your MP commit message only says "hey, new feature added !!!! i rock" ... that's not helpful
[13:16] <pitti> ogra_: heh, I don't think it's inherently related to packaging; more like "did that developer ever have to go through the pain of researching commits from some years ago" :)
[13:16] <pitti> yes
[13:16] <pitti> so in git terms, the one-line summary should be in the debian/changelog, the detailed 20-line description shouldn't
[13:17] <pitti> but right, we don't have the detailed 20-line description with our approach (at most in the individual commits of each merge)
[13:19] <ogra_> well, even a *proper* one liner would help
[13:20] <ogra_> most of the time its not even that
[13:25] <jamesh> jdstrand: currently everything is hanging off a single object path, so the /** bit isn't strictly necessary.  You could also lock it down by the interface name too if you want.
[13:26] <jamesh> jdstrand: it looks like that policy would work though.
[13:32] <jdstrand> jamesh: I'll lock it down more like you said. if we need to refine, it is easy enough to do
[13:32] <jdstrand> jamesh: thanks!
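Tightened by interface as jamesh suggests, the rule might read as follows — the interface name here is a guess for illustration, not taken from the mediascanner2 source:

```
# Allow communications with mediascanner2, also restricted by interface
# (com.canonical.MediaScanner2* is an assumed name; check the service's
# D-Bus introspection data before using it)
dbus (send)
     bus=session
     path=/com/canonical/MediaScanner2{,/**}
     interface=com.canonical.MediaScanner2*,
```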
[13:35] <shadeslayer> pitti: wut https://jenkins.qa.ubuntu.com/job/utopic-adt-kapptemplate/3/console
[13:36] <mlankhorst> Laney: so no crashes then?
[13:36] <Laney> mlankhorst: it didn't crash
[13:36] <mlankhorst> :D
[13:36] <Laney> but
[13:36] <Laney> I couldn't really interact with it
[13:36] <mlankhorst> why?
[13:36] <Laney> because it was flickering so much
[13:37] <mlankhorst> ai
[13:37] <mlankhorst> ok so needs more thought then
[13:50] <pitti> shadeslayer: yep, will restart; we had to reboot the ppc64el boxes, and jenkins, being the pain that it is, vomits on that for the x86 boxes too
[13:50] <shadeslayer> heh
[13:51] <pitti> this affected some 5 other tests too, all restarted now
[13:51] <Laney> is that why I got a failure for a test which actually passed?
[13:51] <pitti> le sigh, *want jenkins go away*
[13:51] <pitti> Laney: yes
[13:52] <Laney> nod
[14:14] <xnox> pitti: not only do I get the shutdown hang, a basic lxc container fails to boot as well *sigh*
[14:15] <pitti> xnox: darn; the code looks rather correct to me, so I guess it's hanging someplace else now? :-(
[14:16] <xnox> pitti: let me go the traditional route and upload remaining fixes for tasks & try concurrent boot without changing sysv-init at all.
[14:16] <xnox> because that's supposed to work (or at least did, in a limited sense)
[14:17] <xnox> and the fact that startpar is not producing any logs anywhere is annoying. (and even its stdout/stderr is slurped up by the rc script)
[14:23] <pitti> dpm_: actually, with http://people.canonical.com/~dpm/data/ubuntu-l10n/utopic-potemplate-stats.log the percentage cutoff is not that hard to do; how reliable/up to date are these json files in general?
[14:24] <pitti> dpm_: with that file I'll add the numbers of all templates that we have in the package list, and with msgfmt --statistics I can add the number of translated strings, the rest is simple division and comparison
[14:25] <dpm_> pitti, they're pretty good, and they are updated once a day. As an alternative, you can use the LP API to get instant stats if we need it to be more frequent.
[14:25] <pitti> dpm_: nah, that's totally fine
[14:25] <pitti> thanks
[14:26] <dpm_> pitti, that sounds good to me. If you're looking for a pure python solution, you might want to use polib instead of the cli gettext tools
[14:26] <bdmurray> mvo: bug 1309447 failed the verification for the SRU. I posted the traceback I received.
[14:27] <pitti> dpm_: not installed on macquarie, I guess I'll just keep the existing msgfmt call (it's already calling that anyway for normalization)
[14:27] <dpm_> ok, sounds fine, then
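The "simple division and comparison" pitti describes could look roughly like this — a sketch, with invented helper names; msgfmt's statistics line is parsed with a plain regex:

```python
import re
import subprocess

# Translated counts come from "msgfmt --statistics", which prints e.g.
# "630 translated messages, 4 fuzzy translations, 2 untranslated messages."
# on stderr; totals come from the template stats dump.
STATS_RE = re.compile(r"(\d+) translated")

def translated_count(po_path):
    """Return the number of translated strings in a .po file."""
    stderr = subprocess.run(
        ["msgfmt", "--statistics", "-o", "/dev/null", po_path],
        capture_output=True, text=True).stderr
    match = STATS_RE.search(stderr)
    return int(match.group(1)) if match else 0

def meets_cutoff(translated, total, min_coverage=70):
    """The division-and-comparison step against the template total."""
    return total > 0 and 100.0 * translated / total >= min_coverage
```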
[14:28] <mvo> bdmurray: thanks, let me have a look
[14:44] <mlankhorst> pmcgowan: oh btw, i checked and bug https://bugs.launchpad.net/checkbox-gui/+bug/1318584 is not a regression from trusty
[14:45] <shadeslayer> pitti: https://jenkins.qa.ubuntu.com/job/utopic-adt-kate/ < doesn't seem to have actually run the tests?
[14:45] <shadeslayer> https://jenkins.qa.ubuntu.com/job/utopic-adt-kate/ARCH=i386,label=adt/2/console
[14:45] <pmcgowan> mlankhorst, hmm? meaning it's busted on earlier stuff?
[14:46] <pitti> shadeslayer: it could certainly do with some more verbosity..
[14:46] <shadeslayer> kbuildsycoca4 running... < kdeinit starts, but then nothing
[14:46] <mlankhorst> pmcgowan: yeah, trusty crashes in the same way, and I diffed the qplatformscreen.cpp file against 5.2.1, didn't find any changes
[14:46] <shadeslayer> klauncher: Exiting on signal 1 < that's from shutdown, kdeinit4_shutdown I think
[14:47] <shadeslayer> pitti: btw locally my hook seems to run them
[14:47] <pitti> shadeslayer: perhaps this is missing some xvfb-run etc. magic?
[14:47] <shadeslayer> http://paste.ubuntu.com/7595244/
[14:47] <pmcgowan> mlankhorst, I assume the concern is trusty since thats what is used for preinstalls
[14:49] <mlankhorst> it looked to me like the concern was it being a regression that prevented 5.3.0 from landing..
[14:51] <pitti> mkdir failed: : No such file or directory
[14:51] <pitti> trying to create local folder /home/shadeslayer: Permission denied
[14:51] <pitti> shadeslayer: ^ something seems wrong in your hook?
[14:52] <mlankhorst> pmcgowan: from the qplatformscreen.cpp documentation
[14:52] <mlankhorst>     Reimplement in subclass to return the pixel geometry of the screen
[14:52] <mlankhorst> \fn QRect QPlatformScreen::geometry() const = 0
[14:52] <shadeslayer> pitti: http://paste.ubuntu.com/7595266/ < afaict that comes from kdeinit
[14:53] <pmcgowan> mlankhorst, no the concern aiui is validating trusty images with the test tool
[14:54] <pitti> shadeslayer: ah, this probably still keeps your $HOME although it runs as user pbuilder
[14:54] <shadeslayer> most likely
[14:54] <pitti> shadeslayer: btw, adt-run has a -s/--shell-on-fail option which you might want to use
[14:55] <pitti> shadeslayer: ah no, it's currently broken with --output-dir, so not yet :/ (and --tmp-dir is obsolete)
[14:55] <shadeslayer> pitti: the hook gives me a shell on fail :)
[14:55] <shadeslayer> see lines 31-33
[14:55] <pitti> shadeslayer: anyway, I suppose this needs to be checked in schroot or qemu to see what's going on
[14:55] <pitti> shadeslayer: right, that's what I was looking at
[14:56] <mlankhorst> pmcgowan: hm might be found with valgrind..
[14:57] <mlankhorst> if object was already destroyed
[14:57] <shadeslayer> uf :P
[14:57] <shadeslayer> seems like so much work
[14:59] <pitti> shadeslayer: well, what is this test supposed to do?
[14:59] <pitti> shadeslayer: the essence is that it calls dh_auto_test
[14:59] <shadeslayer> yep
[14:59] <pitti> shadeslayer: you really want to do that during build instead
[15:00] <pitti> that doesn't test the installed packages at all
[15:00] <pitti> and also explains why nothing is being tested in CI as there is no built tree
[15:01] <pitti> shadeslayer: (it doesn't have build-needed); but you really shouldn't run tests against the built tree in an autopkgtest anyway :)
[15:01] <pitti> shadeslayer: so that same approach in debian/rules override_dh_auto_test seems just fine
[15:01] <pitti> (and dropping the autopkgtest, or replacing it with something that actually tests the installed package)
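A minimal replacement along the lines pitti suggests — exercising the installed package rather than the source tree. The test name and invocation are illustrative, not the actual kate packaging:

```
# debian/tests/control
Tests: smoke
Depends: kate, xvfb, xauth
Restrictions: allow-stderr

# debian/tests/smoke
#!/bin/sh
set -e
# runs against the *installed* binary; no built tree required
xvfb-run -a kate --version
```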
[15:01] <shadeslayer> so I wonder why debian does it this way
[15:02] <pitti> shadeslayer: I know, it's a rampant error which apparently keeps spreading via copy & paste :(
[15:02] <pitti> shadeslayer: may I point to https://lists.debian.org/debian-devel-announce/2014/05/msg00004.html ?
[15:03]  * shadeslayer looks
[15:05] <shadeslayer> pitti: cheers, I'll collaborate with debian
[15:05] <pitti> shadeslayer: many thanks!
[15:29] <xnox> stgraber: https://launchpadlibrarian.net/177016056/buildlog.txt.gz this looks odd (during recipe build now)
[15:30] <stgraber> xnox: hmm, yeah, that's weird... I did put a breaks + replace in there didn't I?
[15:30] <xnox> stgraber: you did, but it's unpacking a weird upstart at the same time.
[15:31] <stgraber> xnox: ah, yeah, I guess that may break if using the upstart from the PPA maybe, if it has a version higher than the break but still has 01-upstart-lsb
[15:32] <xnox> stgraber: right, let me fix that. However, that shouldn't have happened as we use latest packaging.... which has the right commit dropping the hook.
[15:32] <cjwatson> Yeah, it'll always dist-upgrade from the archive in question first
[15:34] <xnox> right, last build of upstart into ppa for utopic used packaging revision 1562 and thus still has the hook, which now prevents building new upstart into ppa via recipe.
[15:35] <xnox> will delete upstart from PPA, check the recipe and re-trigger the build.
[15:37] <pitti> dpm_: so I implemented the "count number of translated strings" in langpack-o-matic now; turns out that we ought to remove quite a number of really poor langpacks for regular Ubuntu, too: http://paste.ubuntu.com/7595543/
[15:37] <pitti> dpm_: even with a really low cutoff like 5% we can get rid of a lot of noise (where there are only 5 strings translated in the whole Ubuntu..)
[15:38] <xnox> stgraber: cjwatson: ha, recipe uses trusty packaging instead of utopic.
[15:39] <xnox> which i think happens on each distro-series branch. lp:ubuntu/upstart recipe branches become lp:ubuntu/(devel-1)/upstart
[15:39] <dpm> pitti, that'd work for me if that helps managing language packs. I'm not too worried about it, as for me the priority would be to get more languages in the CD rather than reducing the number of langpacks in the archive. But if it helps removing cruft, wfm, yes
[15:40] <pitti> dpm: yeah, I implemented it for the 70% threshold for the touch packs, but it has that nice side effect :)
[15:40] <pitti> if we reduce the buildd load and archive overhead from 800 to 600 packages that's already something
[15:41] <dpm> :)
[15:42] <cjwatson> pitti: it's much more manageable with cross-architecture builder pooling now, anyway
[15:42] <pitti> cjwatson: right, but it's also Packages.gz size etc.
[15:42] <cjwatson> so we can build 16 language packs at a time in parallel and *blam*
[15:43] <cjwatson> Yeah
[15:43] <pitti> certainly not a huge saving, but we'll get that for free
[15:44] <pitti> and with that, good bye everyone! see you on Tuesday (swap day tomorrow, Pentecost on Monday)
[15:51] <bdmurray> mvo: do you have any plans to upload aptdaemon? bug 1324833
[15:53] <mvo> bdmurray: I had no plans, but I can do it tomorrow
[16:18] <hallyn> BenC: sorry on bug 1321365, i'll be testing a 1.2.5 candidate with that fix today
[16:19] <BenC> hallyn: Thanks. I’m working on testing libvirt fully. Trying 1.2.5 right now (if you have a wip src package, please let me try it too)
[16:19] <BenC> hallyn: I’ll let you know if I run into any other issues
[16:19] <BenC> hallyn: But please backport that fix to trusty too
[16:19] <hallyn> BenC: zul will push it to ppa:zulcss/libvirt-testing momentarily
[16:20] <hallyn> BenC: yes, will do. I was going to wait for 1.2.5 but at this point forget that. lemme check other pending SRUs
[16:20] <hallyn> yeah that's the only one i have listed for trusty
[16:20] <hallyn> say is saucy eol yet?
[16:25] <zul> BenC:  just doing a local test build first
[16:43] <hallyn> stgraber: hey, if you're around, would you mind looking at the libvirt pkg I pushed to trusty-proposed? it's urgent for Ben, a 2-line fix; it's not in utopic yet only because we're still testing the huge and painful 1.2.5 merge there.
[16:44] <hallyn> stgraber: and BenC is blocked without it
[16:44]  * hallyn shoulda just pushed the two-line fix originally, instead of trying to save the build farm some effort :)
[16:45] <stgraber> hallyn: ok, I'll take a look
[16:46] <stgraber> hallyn: minor nitpicking, why did you use ubuntu13.1.1 instead of ubuntu13.2? :)
[16:47] <hallyn> stgraber: because ubuntu13.1 is still in utopic for a day <shrug>  I bikeshedded for 2 mins then figured this seemed safest
[16:47] <hallyn> happy to change it back :)
[16:47] <hallyn> will look neater in the end
[16:48] <stgraber> hallyn: nah, I'll accept it as is, the version is not wrong, it's just unusual ;)
[16:49] <stgraber> hallyn: accepted
[16:50] <hallyn> stgraber: thanks
[17:24] <BenC> error: Failed to create domain from vir1.xml
[17:24] <BenC> error: internal error: cannot load AppArmor profile 'libvirt-ffcc0d49-cf2d-48f7-bb59-fb60731b3c59'
[17:24] <BenC> hallyn: Any ideas on that?
[17:24] <BenC> This is using the 1.2.5 in zul’s  ppa
[17:24] <BenC> zul: ^^
[17:26] <zul> BenC:  no thats new to me
[17:28] <BenC> zul: This references an lp bug http://stackoverflow.com/questions/12069297/create-virtual-machine-using-libvirt-error-related-to-apparmor
[17:29] <BenC> zul: The workaround doesn’t apply to my xml though. It’s already raw
[17:29] <zul> BenC:  that version didn't have the device-tree fix since I haven't uploaded it to my PPA yet; hallyn is uploading a new fix
[17:30] <BenC> zul: I have that fixed locally, so that’s not the issue
[17:30] <zul> BenC:  then its another issue
[17:33] <hallyn> BenC: i've pushed the fix against 1.2.2 to both utopic and trusty-proposed.  you can grab the binaries from trusty-proposed
[17:33] <BenC> hallyn: Thanks
[17:46] <BenC> hallyn: Thanks
[17:46] <BenC> zul: No messages from apparmor in syslog when this happens
[17:46] <BenC> I even disabled apparmor and it still errors the same
[17:47] <hallyn> BenC: you get this with the 1.2.2 version as well?
[17:47] <BenC> hallyn: I can’t use 1.2.2 because it doesn’t configure the pci correctly, but no, I didn’t see this error there
[17:48] <BenC> It showed that it successfully loaded/removed the apparmor profiles for the guest too
[17:53] <hallyn> what patch do you need to configure the PCI correctly?
[18:01] <bdmurray> pitti: have you seen the updated comments in bug 1259829 regarding some other models?
[18:07] <BenC> zul: If I set security_driver = "none" in libvirt/qemu.conf, and teardown apparmor, I can start vms with no issue
[18:07] <jamespage> gaughen, btw firefly is in -updates now
[18:08] <BenC> zul: When that error happens, virt-aa-helper isn’t even being called, so it fails even before that point in the code
[18:08] <zul> hallyn: ^^^
[18:14] <BenC> zul, hallyn: Something in virDomainDefFormatInternal() is erroring out (and it isn’t printing an error to syslog, so that limits the possibilities)
[18:22] <gaughen> jamespage, excellent
[18:23] <BenC> zul, hallyn: Here’s a FAQ: Not having a <uuid></uuid> in the xml causes that error
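In other words, per BenC's finding, the domain definition needs an explicit <uuid> element to avoid that error with this 1.2.5 build; a minimal skeleton (all values illustrative):

```xml
<domain type='kvm'>
  <name>vir1</name>
  <!-- omitting this element triggered the AppArmor profile error -->
  <uuid>ffcc0d49-cf2d-48f7-bb59-fb60731b3c59</uuid>
  <memory unit='MiB'>512</memory>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
</domain>
```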
[18:26]  * hallyn mumbles something about xml
[19:25] <Unit193> henrix: Hello, are you alive there?