[00:41] <astraljava> SpamapS: All the recent talk about concussions surely isn't for nothing. But not saying anything about you, specifically. ;)
[01:03] <YokoZar> Merry christmas slangasek :) https://bugs.launchpad.net/ubuntu/+source/defoma/+bug/905055
[01:04] <slangasek> YokoZar: er, there's no reason those packages need to be marked Multi-Arch: foreign for this
[01:05] <slangasek> *only* the immediate dependency of your package does
[01:05] <slangasek> does need to be, I mean
[01:05] <YokoZar> slangasek: I may have confused the issue from reading the M-A spec then :/
[01:06] <YokoZar> Yeah, this bit: https://wiki.ubuntu.com/MultiarchSpec#Dependencies_involving_Architecture:_all_packages  -- I thought that meant the transitive dependencies would matter too
[01:07] <YokoZar> slangasek: but thanks for the correction then
[01:07] <slangasek> "foreign-architecture package" here is the binary-only i386 package you're working on
[01:08] <slangasek> to satisfy that package's dependency on ttf-mscorefonts-installer, ttf-mscorefonts-installer needs to be marked Multi-Arch: foreign
[01:09] <slangasek> that's all
[01:11] <YokoZar> ok so it's nice it doesn't recurse
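[editor's note: a minimal debian/control sketch of slangasek's point above — only the *immediate* dependency of the foreign-architecture package needs Multi-Arch: foreign, and the marking does not need to recurse. The i386 package name is hypothetical; ttf-mscorefonts-installer is the dependency named in the discussion]

```
# The binary-only i386 package being worked on (name hypothetical):
Package: some-i386-app
Architecture: i386
Depends: ttf-mscorefonts-installer

# Its immediate dependency, which must be satisfiable across architectures.
# Nothing that ttf-mscorefonts-installer itself depends on needs changing.
Package: ttf-mscorefonts-installer
Architecture: all
Multi-Arch: foreign
```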
[07:30] <pitti> good morning
[07:30] <pitti> infinity: I had to wait for the others to time out, which took like 12 hours
[07:31] <pitti> infinity: but I finally debugged/fixed it last night, so I'll just do another -proposed upload
[07:45] <micahg> hi pitti, sorry, I'm still waiting for a final release tag, chrisccoulson told me there are l10n differences between beta and release
[07:50] <pitti> micahg: ack; I can start the langpack build tomorrow morning, no problem
[07:51] <micahg> pitti: last release the tag didn't come until midnight UTC Sat morning
[07:51] <micahg> ah, is that what you meant?
[07:51] <pitti> yes, I meant I can start it on saturday morning
[07:53] <micahg> pitti: ok, well, there's a caveat, if I don't get it before around 21:00 UTC, I won't be able to upload until midnight Sun morning UTC
[07:53] <pitti> micahg: if we want to speed it up, I could also temporarily hack the code to have a hardcoded language list
[07:53] <pitti> micahg: if you can just send me the list of firefox-l10n-* packs that will be built, that's sufficient
[07:54] <micahg> pitti: I don't really need to inherently rush as long as it's ready sometime next week
[07:54] <pitti> then I can do the actual work today, and then on Sunday just hit the "upload" button
[07:54] <pitti> micahg: ok
[07:54] <micahg> pitti: ok, let me see if I have that already
[08:00] <micahg> pitti: I think this is it: http://bazaar.launchpad.net/~mozillateam/firefox/firefox.maverick/view/head:/debian/config/locales.all
[08:05] <pitti> micahg: perfect, thanks!
[08:10] <dholbach> good morning
[08:11]  * soren is very confused this morning
[08:11] <soren> http://paste.ubuntu.com/771954/  <--- Why does it try to apply those patches twice?
[08:12] <micahg> soren: quilt rule + source format 3?
[08:13] <soren> micahg: Nope.
[08:13] <soren> http://paste.ubuntu.com/771955/
[08:13] <soren> It also seems that it's dpkg-source doing it both times.
[08:16] <soren> micahg: Oh..
[08:16] <soren> I think I may know why.
[08:18] <soren> micahg: Apparently the orig.tar.gz accidentally had the patches applied. Weird.
[08:19] <micahg> soren: ah, I think I've seen that before
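[editor's note: a self-contained sketch of how one might confirm soren's diagnosis that an orig tarball accidentally shipped with its patches applied — a patch that applies cleanly in *reverse* against the shipped tree was already applied. All file names here are invented for the demonstration]

```shell
# Demo (all files hypothetical): build a patch, then "ship" a tree that
# already has the patch baked in, and detect that with a reverse dry-run.
set -e
dir=$(mktemp -d); cd "$dir"
printf 'old line\n' > orig.txt
printf 'new line\n' > fixed.txt
diff -u orig.txt fixed.txt > fix.patch || true   # diff exits 1 when files differ
cp fixed.txt shipped.txt                         # the "orig.tar.gz" content

# A successful reverse dry-run means the patch was already applied upstream.
if patch --dry-run -s -R shipped.txt < fix.patch >/dev/null 2>&1; then
    echo "patch already applied in shipped source"
fi
```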
[08:44] <pitti> apw, smb`: FYI, -5 kernel binNEWed, ready for -meta upload
[08:44] <pitti> I'll update seeds and d-i
[09:09] <pitti> micahg: oh, I figure it's the same list for lucid?
[09:10] <micahg> pitti: should be I think
[09:23] <pitti> cjwatson: do you want to move d-i to 3.2.0-5, or want me to? (already updated the seeds, this time also including s-kernel-common)
[09:24] <cjwatson> pitti: go ahead
[09:24] <pitti> cjwatson: waiting for the metapackage to land, though
[09:24] <cjwatson> you don't need to wait for that before doing d-i
[09:25] <pitti> cjwatson: wouldn't that make alternate builds uninstallable?
[09:26] <cjwatson> pitti: no
[09:26] <pitti> because d-i builds against -5, but the alternate image has -4 in /pool ?
[09:26] <cjwatson> Who cares
[09:26] <pitti> ok
[09:26] <cjwatson> Worst case it installs an older kernel than it ran with
[09:27] <cjwatson> But the ABIs don't need to be in sync there
[09:41] <jml> Hi
[09:41] <jml> I was in the middle of reporting a crash bug, when about a third of the way through a lengthy upload I got this error:
[09:41] <jml> 'Cannot connect to crash database, please check your Internet connection.
[09:41] <jml> <urlopen error [Errno 32] Broken pipe>'
[09:42] <jml> Clicking OK then cancels the upload
[09:43] <jml> is this a bug? where should I report it?
[09:43] <pitti> jml: how big is that report?
[09:43] <pitti> jml: LP has an unfortunate habit of resetting the connection once you try to upload more than 50 or 100 MB
[09:43] <jml> pitti: it said 33MB
[09:44] <jml> pitti: but I have a lousy internet connection.
[09:48] <jml> also, now I don't know how to actually report this crasher bug in Qt Creator :)
[10:17] <dholbach> @pilot in
[10:27]  * pitti hugs dholbach
[10:27] <buxy> pitti: 10 minutes before your mail I sent an updated dpkg patch
[10:27] <buxy> did you saw it?
[10:27] <pitti> buxy: right, I noticed too late, sorry
[10:28] <pitti> buxy: many thanks, you rock
[10:28] <pitti> buxy: I updated the bug, too; new dpkg uploaded, and I'm unable to break it now
[10:28] <pitti> tried with different packages, installing in one or two steps, multiple times, etc.
[10:28] <buxy> ah ok, LP mails tend to lag a bit apparently
[10:29] <buxy> (at least LP mails through the PTS)
[10:29] <pitti> buxy: they lag for about 5 minutes, so that you can do several bug state changes without triggering one mail for each
[10:29] <pitti> buxy: but it was mostly my lag, I just saw Steve's response in my mailbox and then re-uploaded the workaround
[10:29] <pitti> then saw you'rs
[10:29] <pitti> your's (argh typing is hard)
[10:58] <rbasak> I'm trying to do an upgrade test from oneiric->precise, but do-release-upgrade is failing with various "Encountered a section with no Package: header" errors. Is this expected?
[11:02] <pitti> rbasak: not expected, in fact we also get it in the automatic dist-upgrader
[11:02] <pitti> ... tester
[11:02] <pitti> no idea what's causing it, though
[11:03] <pitti> apparently that only started happening a few days ago
[11:03]  * ajmitch gets that in pbuilder, it's always to do with TranslationIndex
[11:03] <ajmitch> so I disabled apt fetching translations to get around it
[11:03] <pitti> jibel: ^ FYI
[11:14] <rbasak> thanks pitti, ajmitch. Is it possible to use that workaround in do-release-upgrade? I tried Acquire::Languages { "none"; }; in apt.conf but that doesn't seem to have any effect
[11:18] <ajmitch> rbasak: unsure, I did have to delete the translation files in /var/lib/apt/lists as well - afaik apt tries to fetch languages that it already has, ignoring the config option in those cases
[11:18] <ajmitch> it was a bit of an ugly hack I was only using for the chroot, I don't think it's suitable for upgrade testing
[11:19] <rbasak> ajmitch: yeah, doesn't seem to work. I tried clearing out /var/lib/apt/lists but it still seems to be fetching TranslationIndex files if that's a suitable indication? Perhaps do-release-upgrade causes apt to ignore some of its config? It was always a bit of a mysterious black box to me
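[editor's note: for reference, the global form of the workaround ajmitch describes. The config file path is illustrative; deleting the already-fetched translation lists is the step that apt's config option alone does not cover]

```
# /etc/apt/apt.conf.d/99no-translations  (path illustrative)
Acquire::Languages "none";

# Then clear the cached translation indexes, which apt otherwise keeps using:
#   sudo rm -f /var/lib/apt/lists/*Translation*
#   sudo apt-get update
```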
[11:20] <dholbach> seb128, I 'ate the desktop team branches
[11:20] <dholbach> just saying :)
[11:20] <seb128> dholbach, *hug* ;-)
[11:23] <seb128> dholbach, you hate things that are light to checkout and easy to use? ;-)
[11:24] <dholbach> not quite how I'd put it :)
[11:26] <rbasak> OK, it seems dist-upgrade is sufficient for my testing needs today, so my workaround was just to run dist-upgrade instead of do-release-upgrade ignoring the apt-get update warnings before it
[11:28] <seb128> dholbach, you are just old and grumpy :p
[11:36] <pitti> perl patch for broken doc-base trigger, take III *sigh*
[11:36] <pitti> jibel: ^ that's for bug 902553, hopefully *really* fixed now
[12:08] <dholbach> can somebody reject https://code.launchpad.net/~snicksie/ubuntu/precise/libgda4/fix-for-typo/+merge/86025 for me?
[12:08] <dholbach> (fix forwarded to upstream instead)
[12:09] <Snicksie> ah, sorry dholbach, didn't know it had to be somewhere else :$
[12:09] <dholbach> hey Snicksie
[12:09] <dholbach> thanks for your work on this
[12:09] <pitti> dholbach: done
[12:09] <dholbach> thanks pitti
[12:09] <dholbach> Snicksie, no worries - good work on the fix - I hope the fix can get into upstream soon
[12:09] <dholbach> and then we'll get it for free :)
[12:10] <Snicksie> where should I put it next time? :)
[12:10] <dholbach> (I linked to the upstream bug in the merge proposal)
[12:10] <dholbach> Snicksie, if it's a fix we should immediately get into Ubuntu it's totally fine to file a merge proposal just like you did
[12:11] <Snicksie> okay, and if its just a typo? :)
[12:11] <dholbach> in any case it makes sense that if you modify code (and not just the packaging in ./debian/), you send the fix upstream
[12:11] <dholbach> Snicksie, in that case we usually send it to upstream (the software authors) and get the fix for free with the next version update
[12:11] <Snicksie> okay, should I send it upstream myself next time or...? :)
[12:12] <dholbach> I wouldn't want to impose it on you, we're grateful for all the fixes we get - but if you have the time and don't mind doing it, then that's cool
[12:13] <dholbach> and if you run into any issues, feel free to just ask in here, or #ubuntu-motu
[12:13] <Snicksie> will do :)
[12:14] <Snicksie> thanks for explaining :)
[12:14] <dholbach> no worries :)
[12:27] <dholbach> ev, do you think you can have a look at the patch in bug 897933 and apply it upstream (the merge proposal gives me "bzr: ERROR: None 0.2 was not found in <PristineTarSource at....")?
[12:28] <debfx> could we lower the required dpkg Pre-Depends for xz-compressed packages to 1.15.6~? at least 2 Debian packages use that version.
[12:30] <dholbach> oh, 0.2.1 in precise seems to have fixed the issue - it seems the fix just needs to go upstream as well
[12:31] <Snicksie> dholbach, sorry for the incorrect email... seems like I had a small mistake in my bashrc. thanks for notifying :)
[12:31] <dholbach> :-)
[12:36] <apw> i wonder if someone could stroke the new linux-ti-omap4 binaries, then i can get the meta up
[12:39] <cjwatson> debfx: sure, file a Launchpad bug please and I'll take care of it
[12:40] <cjwatson> (unless somebody beats me to it)
[12:40] <cjwatson> debfx: preferably include an example package name (for QA)
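[editor's note: for context, the Pre-Depends in question is the one a package compressed with xz declares so that dpkg is new enough to unpack it. Lowering the requirement to 1.15.6~ would mean accepting stanzas like this (package name hypothetical):]

```
Package: example-xz-package
Pre-Depends: dpkg (>= 1.15.6~)
```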
[12:47] <jml> "Your computer does not have enough free memory to automatically analyse the problem and send a report to the developers."!
[13:04] <philpem> Hi all. I'm trying to build a CVS release of Gutenprint as a .deb package. I'm having a few problems with this, mainly with paths... can anyone help out?
[13:06] <philpem> These are the last few lines from my buildlog: http://paste.ubuntu.com/772169/
[13:07] <philpem> It seems to be looking for '../../../src/xml/xmli18n-tmp.h' from a CWD of 'gutenprint-5.2.7cvs20111216/debian/build/po'. This file is actually in the build directory -- gutenprint-5.2.7cvs20111216/debian/build/src/xml/xmli18n-tmp.h
[13:09] <debfx> cjwatson: thanks, I've filed bug #905322
[13:10] <philpem> basically, I want a version of gutenprint which works with the Canon CP-800 mini photo printer... the current ubuntu release does not, but CVS does.
[13:14] <pitti> apw: done
[13:25] <apw> pitti, thanks :)
[13:29] <l3on> Laney, ping
[13:30] <Laney> hello
[13:30] <l3on> hey :).. I saw your answer at bug #905304
[13:30] <l3on> problem is ruby-rack does not exist in oneiric
[13:30] <l3on> what do you suggest ?
[13:31] <l3on> I would introduce a transitional package... what do you think ?
[13:31] <Laney> it was renamed from libruby-rack
[13:31] <l3on> Laney, → http://anonscm.debian.org/gitweb/?p=pkg-ruby-extras/ruby-rack.git;a=commitdiff;h=885afe575fb0b04505c64f908dd364180cbd5bb4
[13:32] <Laney> which we do have in oneiric
[13:32] <l3on> yep, you 're right :(
[13:32] <Laney> so somehow fix that or the depending (broken) package
[13:33] <Laney> i mean librack-ruby, not libruby-rack
[13:33] <tumbleweed> oh, I missed that it was a rename, sorry l3on
[13:34] <l3on> tumbleweed, np :) I'm here to learn :P
[13:34] <Laney> and at any rate it is always fishy to fix bugs via backports
[13:36] <tumbleweed> I was thinking of it as a new package, not a bug fix
[13:37] <Laney> but the purpose of it is to fix an uninstallability in the main archive
[13:38] <Laney> anyway, no harm done
[13:40] <l3on> Laney, I'm trying to build ruby-sinatra depending on librack-ruby instead of rack-ruby
[13:40] <l3on> we'll see
[13:41] <Laney> cool, thanks for your work
[13:47] <l3on> Laney, ok, seems to work fine.. installs and runs
[13:48] <l3on> is it a -proposed ?
[13:48] <l3on> +ruby-sinatra (1.2.6-1ubuntu1) oneiric; urgency=low
[14:07] <l3on> No, it does not work properly because:
[14:07] <l3on> $ apt-file search /usr/lib/ruby/vendor_ruby/rack
[14:07] <l3on> returns nothing in oneiric
[14:12] <ManDay> Hello, may someone give a one-line summary of how the LiveCD makes itself persistent through the casper-rw partition? Is it just a matter of rsyncing shell scripts that synchronize the FS upon shutdown or is it something more complicated like a sort of Union-FS?
[14:16] <cjwatson> ManDay: it's a union filesystem, specifically (at the moment) overlayfs
[14:17] <ManDay> Ah okay
[14:17] <ManDay> Thank you
[14:17] <cjwatson> you're welcome
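[editor's note: an illustrative (not exact) overlay mount in the spirit of what casper sets up — the read-only squashfs as the lower layer and the casper-rw partition as the writable upper layer. Paths are hypothetical, and the option syntax shown is the later mainline overlayfs one; the 2011-era module took only lowerdir/upperdir and no workdir]

```
# Requires root; all paths hypothetical.
mount -t overlay overlay \
    -o lowerdir=/cdrom/filesystem,upperdir=/casper-rw/upper,workdir=/casper-rw/work \
    /target
```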
[14:32] <mterry> stgraber, hello!  I have a work item to talk to you about whether ARB still uses /opt/extras.ubuntu.com/?  Last I heard it did, but just confirming for Quickly support
[14:34] <stgraber> mterry: yes, we still use /opt/extras.ubuntu.com
[14:34] <mterry> stgraber, cool, thanks
[14:35] <mterry> stgraber, and it's still useful to the ARB for quickly to create a package that puts things there (i.e. you don't have some other preferred solution to help developers with that)?
[14:47] <dholbach> broder, I had a look at the atkmm multiarch update - it seems debian/compat needs to be updated to 9. I can add that if you like
[14:47] <dholbach> (still reviewing it)
[14:50] <stgraber> mterry: we have some scripts to force a package to put everything in /opt/extras.ubuntu.com but we still prefer if the source package we receive is right to begin with
[14:50] <mterry> stgraber, awesome, OK
[15:19] <dholbach> @pilot out
[15:20] <cjwatson> debfx: OK, branch for that up for review.  It won't be deployed until at least Monday now, though
[15:21] <cjwatson> assuming that we can still manage further deployments this year
[15:34] <seb128> could somebody set https://code.launchpad.net/~jconti/ubuntu/oneiric/webkit/fix_doc_path/+merge/85054 to merge?
[15:34] <seb128> it was uploaded but the merge request targeted oneiric rather than oneiric-proposed
[15:49] <pitti> seb128: done
[15:49] <seb128> pitti, danke
[15:49] <seb128> (how come you have access to that and not me? ;-)
[15:49] <pitti> seb128: presumably through ~techboard as the owner of ubuntu branches or something like that
[15:49] <seb128> (or said differently, where do I need to apply to be able to do it?)
[15:49] <seb128> ok
[15:49] <pitti> and yes, it's a bug
[15:50] <cjwatson> it's a bug but maybe you could be added to ~ubuntu-branches or something as a workaround?
[15:50] <cjwatson> actually, why don't I do that.  TB people, any objections?
[15:50] <seb128> would that make me receive emails for every merge request? ;-)
[15:50] <cjwatson> yes
[15:51] <seb128> so please don't
[15:51] <cjwatson> I delete a lot of mail
[15:51] <cjwatson> OK
[15:51] <seb128> I will maybe ask you after the holidays, but I'm away starting tonight and I don't want to subscribe to some new spamming while i'm not around to set filters etc if needed ;-)
[15:51] <cjwatson> actually, I think I'll deactivate myself from that team; I'm already a member via techboard, and that way all the mail should land in techboard's moderation trap rather than my inbox where I don't want it
[15:52] <seb128> could we subscribe like ubuntu-dev or something to do?
[15:52] <seb128> to *it*
[15:52] <cjwatson> maybe but it would need to have a contact address that discarded mail
[15:53] <cjwatson> (and I'd prefer if the UDD folks signed off on something like that)
[15:53] <seb128> ok, seems like ~ubuntu-core-dev would be a good fit
[15:53] <seb128> contact email is ubuntu-core-review@luc
[15:57] <pitti> cjwatson: right, tb@ has tons of merge proposals, always fun to listadmin them away
[16:02] <Laney> ubuntu-dev would be good
[16:10] <apw> pitti, any idea where upstream udev repos are, the links we have in the package point to dead web pages (since the kernel.org debacle)
[16:11] <pitti> apw: http://git.kernel.org/?p=linux/hotplug/udev.git;a=summary works quite fine?
[16:11] <pitti> apw: I also commit to it  every now and then
[16:11] <apw> i have some fixes i want to upstream, so thought i'd better base on that
[16:29] <pitti> cjwatson: of course ti-omap got an abi bump an hour after I uploaded d-i; guess I'll upload another one?
[16:31] <cjwatson> pitti: if you like
[16:31] <cjwatson> version numbers are cheap ... ish
[16:31] <pitti> huge NBS and current images and all that
[16:32] <Daviey> How many people will be upset if they cannot use d-i with ti-omap until the next upload happens to happen?
[16:32] <pitti> Daviey: well, it's an update I can do during the meeting and it eases my mind to see http://people.canonical.com/~ubuntu-archive/nbs.html get smaller again :)
[16:33] <Daviey> lol, ok. :)
[16:34] <pitti> cjwatson: cheap> at ubuntu94 now -- soon we'll need another byte for it!
[16:35] <pitti> Daviey: at least I like to do that for the "normal" kernel -- we then get timely feedback through the QA autotests
[16:35] <cjwatson> I would have merged a while back but upstream moved to git; while I've managed to rewrite most of the other d-i component branches on git imports, that one has defeated me so far
[16:35] <cjwatson> not urgent, though
[16:35] <pitti> Daviey: now we don't have that (yet?) for arm images, but I think it's still better to not drag it for too long
[16:36] <pitti> cjwatson: (I was just kidding)
[16:36] <cjwatson> yeah :)
[16:37] <Daviey> pitti: yet is correct :)
[16:37] <pitti> Daviey: oh, are there concrete plans to auto-test them?
[16:48] <Daviey> pitti: not concrete, but a /want/.
[16:48] <pitti> Daviey: ah, ok; well, I /want/ a whole lot of things :)
[16:48] <Daviey> The trouble is, the current testing is tied to libvirt and using ISO's.
[16:49] <Daviey> we could port it to use qemu arm virtualisation, so still use libvirt.. but the ISO requirement refactoring might require some love.
[16:49] <pitti> I guess virtualization in ARM land is still a bit on the experimental/nonexistent side?
[16:49] <pitti> lxc perhaps?
[16:49] <pitti> ah no, that's not for image testing, just running test suites
[16:49] <Daviey> which is a bandwidth issue.
[16:49] <ogra_> lxc should be good
[16:50] <Daviey> pitti: what tests are you interested in?
[16:51] <pitti> Daviey: they are preinstalled, so no installer testing, but e. g. the OEM setup tests apply, as well as simply "does that thing boot into a desktop"
[16:51] <Daviey> well i'm leaning towards server testing :)
[16:51] <pitti> jibel and I were also discussing opening all /usr/share/applications/*.desktop files and seeing if the program starts or immediately crashes
[16:52] <pitti> Daviey: right; anything other than 0 helps :)
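[editor's note: a toy sketch of the .desktop smoke-test idea pitti mentions above. Instead of actually launching each program, it just checks that the Exec= binary resolves, as a cheap stand-in for "does it start or immediately crash". The function name and approach are invented for illustration]

```shell
# Toy sketch (function name invented): read the Exec= line of each
# .desktop file in a directory and check the command's binary resolves.
check_desktop_files() {
    dir="$1"
    for f in "$dir"/*.desktop; do
        [ -e "$f" ] || continue
        # First Exec= line, with %f/%u-style field codes stripped
        exec_line=$(sed -n 's/^Exec=//p' "$f" | head -n1 | sed 's/ %[a-zA-Z]//g')
        bin=${exec_line%% *}
        if command -v "$bin" >/dev/null 2>&1; then
            echo "OK $f"
        else
            echo "MISSING $bin ($f)"
        fi
    done
}
```

A real harness would go on to run the command under a timeout and watch for a crash, but the parsing step is the same.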
[16:56] <Daviey> pitti: I'm still wondering if the server-iso testing method makes sense here, adapted to suck in the tarball and boot that in qemu.
[16:57] <Daviey> Gets around hardware limitations, and is a pretty well tested formula.
[16:58] <Daviey> jamespage: thoughts? ^^
[17:00] <SpamapS> hm..
[17:00]  * jamespage thinks hard
[17:00] <SpamapS> so the mdadm debian maintainer has forcibly turned off -Werror .. suggesting that it's bad because a toolchain update would break the build
[17:00] <jamespage> yikes
[17:00] <SpamapS> thats sort of backward..
[17:01] <jamespage> Daviey, pitti: I agree that virtual arm testing is not a starter ATM
[17:01] <SpamapS> I think I'll disable that patch during the merge. :-P
[17:02] <pitti> good night everyone, have a nice weekend!
[17:02] <SpamapS> pitti: cheers!
[17:02] <pitti> micahg: lucid langpacks prepared, maverick langpacks are being created; so I'll check for a ping from you over the weekend when to upload them
[17:02] <Daviey> pitti: o/
[17:03] <jamespage> Daviey: for server we could setup something with hardware
[17:03] <jamespage> that uses network preseed installs
[17:03] <Daviey> jamespage: you don't think it is worth pushing, or won't work currently?
[17:03] <jamespage> Daviey: TBH my mind is a bit blown and I've not had time to think about it too hard
[17:04] <jamespage> Daviey: it might work OK for the image testing I guess
[17:04] <Daviey> jamespage: ah, ok - probably best not blow your mind on a Friday evening :)
[17:04] <jamespage> Daviey: well at least I get the weekend to recover it :-)
[17:05] <Daviey> The mess alone, would take all weekend to clean up.
[17:05] <jamespage> lol
[17:09] <juliank> SpamapS: Yes, -Werror is not recommended, as it can break with a new GCC simply because the new GCC adds one more warning.
[17:09] <doko> gnome broken to install :-/
[17:09] <juliank> Software releases should never be built with -Werror
[17:12] <juliank> You don't want to break a build because a new GCC version suddenly thinks you forgot to initialize a variable.
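[editor's note: a minimal demonstration of the failure mode juliank describes — the same code builds clean today but fails under -Werror as soon as a stricter warning set flags something. The "new compiler adds a warning" step is simulated here with -Wall and an unused variable; requires gcc]

```shell
# Simulate "new compiler version adds a warning": same file, stricter flags.
cat > demo.c <<'EOF'
int main(void) { int unused = 0; return 0; }
EOF

gcc -c demo.c -o demo.o && echo "plain build ok"

# -Wall enables -Wunused-variable, and -Werror promotes it to an error.
if ! gcc -c -Wall -Werror demo.c -o demo2.o 2>/dev/null; then
    echo "-Werror build fails on the same code"
fi
```

This is why distributions prefer targeted promotions like -Werror=format-security (mentioned later in the log) over a blanket -Werror.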
[17:13] <broder> dholbach: ah, sorry for not being explicit about that. debian/compat => 9 isn't necessary for non dh(1) packages; for multiarch stuff it only affects dh_auto_configure
[17:14] <broder> slangasek: ^ is there anything i'm missing there? would you mind if i updated the wiki to not bump debian/compat on classic debhelper and cdbs packages?
[17:15] <dholbach> broder, thanks for letting me know - rereading debhelper(7) what you say makes sense :)
[17:15] <broder> dholbach: i didn't bump it to minimize the diff, but it's a noop
[17:16] <dholbach> broder, if the fix lands in Debian, we should be able to just sync and adopt whatever the debian maintainers decided on having
[17:16] <dholbach> thanks broder for your work on them
[17:16] <broder> thanks for sponsoring :)
[17:16] <dholbach> de nada
[17:17] <cjwatson> SpamapS: I agree with juliank - use -Werror when developing but it simply doesn't scale to 10000 packages in a distribution which some poor sod has to try to fix in bulk
[17:17] <SpamapS> Ah I thought we had it on by default or something.
[17:17] <cjwatson> we do not
[17:18]  * SpamapS makes a note to wait 10 minutes for the espresso to kick in before thinking.
[17:18] <doko> we do have -Werror=format-security only
[17:18] <SpamapS> Right thats the one I was thinking of
[17:18]  * SpamapS turns patch back on. :)
[17:19] <juliank> Enabling -Werror for implicit declarations might make sense as well, though, if not already using C99
[17:20] <slangasek> broder: maybe you should check with the cdbs maintainer, which is who put that there :)
[17:21] <broder> slangasek: ...really? i don't think cdbs even acknowledges the existence of compat 9 yet
[17:21] <slangasek> cdbs shouldn't in general touch or care about debian/compat
[17:21] <slangasek> so maybe it's cut'n'waste
[17:22] <broder> it does at least know about it for its auto-generated build-dep feature
[17:22] <broder> and to switch between dh_clean -k and dh_prep as appropriate
[17:22] <broder> and how dh_strip works
[17:22] <broder> but that appears to be it
[17:23] <micahg> pitti: thanks, will let you know when stuff is ready
[17:23] <broder> slangasek: i'll go ahead and edit the wiki, then
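[editor's note: for context on the compat discussion above — the compat-9 behaviour broder refers to is dh_auto_configure passing multiarch directories to configure. A classic (non-dh) debhelper package can get the same effect by hand, roughly like this sketch of a debian/rules fragment]

```
# debian/rules fragment (sketch): pass the multiarch libdir explicitly,
# which is what compat 9 makes dh_auto_configure do on its own.
DEB_HOST_MULTIARCH ?= $(shell dpkg-architecture -qDEB_HOST_MULTIARCH)

config.status:
	./configure --prefix=/usr --libdir=/usr/lib/$(DEB_HOST_MULTIARCH)
```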
[17:57] <onkarshinde> Can any of the core developers please give back cups in oneiric-proposed on powerpc?
[18:00] <micahg> onkarshinde: I already told you that won't work, you need a new upload
[18:00] <micahg> we lost armel also in that build
[18:01] <infinity> micahg: Eh?  Why wouldn't giving it back work?
[18:01] <onkarshinde> micahg: I am talking about cups, not gnome-shell.
[18:01] <micahg> infinity: already in -updates :)
[18:01] <infinity> (I mean, assuming the ghostscript issue is worked out)
[18:01] <micahg> onkarshinde: ah, same issue though :)
[18:01] <infinity> micahg: Yes, so?
[18:01] <micahg> infinity: I thought you can't publish the same source record twice
[18:02] <infinity> micahg: ...?
[18:02] <onkarshinde> Oh. I didn't know that if some binaries are moved to updates you can't give back those who FTBFS.
[18:02] <infinity> micahg: I'm not sure what you mean by that.
[18:02] <micahg> infinity: once a source has been published to a pocket, you can't recopy new binaries (at least that's what I was told before)
[18:02] <infinity> micahg: It's published in both proposed and updates.  And it can certainly still be built and the new binaries copied.
[18:03] <micahg> infinity: I was told that's not possible yet
[18:03] <infinity> micahg: We couldn't re-build and re-copy the i396 binaries (and wouldn't want to), but there's no technical reason the ppc/arm ones can't build.
[18:03] <infinity> micahg: Unless someone broke something when I wasn't looking, it used to be possible...
[18:04] <Daviey> infinity: I've never seen an i396 build succeed TBH
[18:04] <infinity> Daviey: Typing is hard.
[18:04] <Daviey> :)
[18:05] <micahg> infinity: I've had a few conversations with wgrant about this situation WRT copying from a native PPA to -security or -proposed, is -proposed to -updates different?
[18:05] <infinity> micahg: I could be misremembering soyuz brokenness.  That's also possible.
[18:07] <micahg> infinity: it needs a rebuild anyways or different archs will be built against different versions of libgs9
[18:08] <infinity> I'm okay with "soyuz is broken", but the libgs9 argument is meaningless.
[18:08] <infinity> If building against different versions breaks things, then we have HUGE problems with how we develop, well, the entire distribution.
[18:09] <micahg> I guess that might not be a problem in this case
[18:09] <infinity> It better not be. :P
[18:10] <infinity> If libgs9 broke ABI without an SOVER bump, we have slightly bigger concerns.
[18:10] <infinity> (Which I'm sure it didn't, just sayin'....)
[18:12] <micahg> infinity: eh, I guess 1 for 2 isn't too bad this "early" :)
[18:13] <infinity> micahg: So, the more curious question, if both these builds failed due to obvious archive skew, why was the copy to -updates done without retrying them first? :/
[18:13] <micahg> infinity: indeed, was thinking the same thing
[18:14] <micahg> infinity: are you retrying the powerpc build just to see if it works?
[18:14] <infinity> Kinda curious what Soyuz will do. :P
[18:15] <infinity> I can actually create build records for that source in -updates, which would work around the issue as-described.
[18:15] <micahg> I'm wondering if in-archive is different
[18:15] <infinity> But the whole thing sounds just plain wrong.
[18:15] <micahg> since it's the same shared pool
[18:16] <onkarshinde> While we are discussing this, I just built cups in an oneiric chroot on my machine.
[18:16] <onkarshinde> If you want I can try hplip as well.
[18:16]  * micahg wonders why it was allowed to get this broken
[18:17] <infinity> Dunno.  And I can't find any obvious indication of who did the copy.
[18:17] <slangasek> audit trails are for sissypanst
[18:17] <slangasek> ts
[18:18] <infinity> Apparentyl.
[18:18] <infinity> That's hard to type on purpoes.
[18:18] <slangasek> I know, rigth?
[18:18] <micahg> do the SRU copy scripts check for arch skew?
[18:18] <infinity> micahg: No, though they do tell you what they plan to do before you commit.
[18:18] <infinity> micahg: But at that point, it's too late, if someone's decided "ports don't matter".
[18:19] <infinity> micahg: But I'd like to think people aren't doing that.  Security certainly don't.
[18:19] <micahg> infinity: our copy scripts warn if you're missing an arch :)
[18:20] <onkarshinde> if 'ports don't matter', specifically powerpc then all the packages on this port should be in universe. So that people like me can take care of it. :-)
[18:20] <infinity> onkarshinde: Ports matter.
[18:20] <micahg> onkarshinde: and the archive doesn't work like that :)
[18:21] <onkarshinde> I know. Just kidding. :-)
[18:21] <infinity> micahg: Actually...
[18:21] <infinity> micahg: Per-arch overrides are a fantastic soyuz misfeature.
[18:21] <micahg> infinity: the binaries could be, but we build from source, so I guess a better way to put it is Ubuntu doesn't work like that
[18:22] <infinity> Well, yes.
[18:23] <infinity> Either way.  This sort of thing annoys me.  I can understand people punting on terribly painful porting bugs in an SRU, but not "giving back builds is hard".
[18:23] <infinity> Or, perhaps, just "counting to four is hard".  I dunno. ;)
[18:24] <micahg> maybe SpamapS can add a safeguard to check for that in the SRU scripts (we have that in our unembargo script)
[18:24] <infinity> There are fancy scripts other than copy-package.py on ftpmaster?
[18:24] <infinity> Am I living in the past again?
[18:25] <onkarshinde> Me leaving now friends. Will bug again tomorrow for more give backs.
[18:25] <micahg> infinity: I have no idea, I just know he's been tweaking stuff to warn about possible issues
[18:25] <micahg> onkarshinde: thanks
[18:25] <slangasek> infinity: there's an sru-accept script in ubuntu-archive-tools?
[18:26] <slangasek> but I guess micahg's referring to the security-specific ones
[18:26] <micahg> infinity: maybe it can warn on the report page that it's not ready to be copied and why
[18:26] <infinity> slangasek: Ahh, never used it.  Though I haven't been on the SRU team for years.
[18:26] <james_w> great work cjwatson, thanks
[18:28] <cjwatson> I doubt I'll ever personally reclaim the time spent
[18:28] <cjwatson> but maybe collectively we will :)
[18:28] <cjwatson> sru-release too
[18:28] <cjwatson> infinity,slangasek: ^-
[18:29] <slangasek> right, that's the one that times out on kernels :)
[18:29] <cjwatson> heh, yeah
[18:29] <cjwatson> that's because it's using syncSource
[18:33] <SpamapS> micahg: can you summarize what I might be protecting against? The backscroll is dizzying
[18:33] <infinity> SpamapS: proposed->updates copies when not all arches are built.
[18:33] <SpamapS> That should be easy enough to build into sru-release
[18:34] <micahg> SpamapS: I'd suggest warning on the SRU report about it as well
[18:34] <infinity> SpamapS: It needs to be an annoying, flashing, over-the-top, Mardi Gras warning that tells people that they're Very Bad People for not looking at the failure logs.
[18:35] <infinity> SpamapS: Instead of a simple "You're about to shaft some users of !x86, do you care? [N/y]"
[18:35] <SpamapS> micahg: yes, thats a good plan.. no "green light" until all arches are built
[18:35] <SpamapS> infinity: we can make it just stop, dead.
[18:36] <SpamapS> I'm not against adding --ignore-unbuilt-arches for the urgent case
[18:36] <infinity> SpamapS: The problem with that is that it's sometimes valid to release without all arches (say, something that was never built correctly on armel).
[18:36] <micahg> SpamapS: well, if it's not a regression (i.e. the arch didn't build before), then it could be green, sometimes it's worth overriding though, i'd suggest a warning next to it, maybe yellow vs green or red
[18:36] <infinity> SpamapS: And if there's a force flag, people just add that to their workflow.
[18:36] <infinity> (Oh, how I wish the above weren't true)
[18:37] <SpamapS> infinity: in this case, there are 3 - 5 of us, all of which can be held to a lot higher standard than "people".. we're at least "SRU people"
[18:37] <infinity> But look at how often people type "rm -rf" instead of "rm -r" (when the latter would clearly work for 99% of your recursive deletion needs)
[18:38] <micahg> SpamapS: well, hplip and cups managed to migrate from -proposed to -updates w/out armel or powerpc
[18:38] <infinity> SpamapS: Well, yes.  I do hold you to a higher standard, which is kinda why I wonder that warnings from tools are even necessary to avoid this sort of thing. :P
[18:38] <infinity> SpamapS: But, meh.  Mistakes happen.  Mitigating them is nice.
[18:38] <SpamapS> micahg: right, this seems serious enough and simple enough that it should be solved now, before we forget this atrocity. ;)
[18:39] <SpamapS> I like to think that the tools should not be expected to enforce the standards, but just to remind us of our high standards
[18:39] <SpamapS> which is why the tool also puts up a big warning "HEY THERE IS ALREADY AN UNAPPROVED UPLOAD IN PROPOSED"
[18:40] <SpamapS> It reduces the steps necessary to do the job.. I don't have to go check, I just expect that the warning will tell me when I need to do that step.
[18:40] <infinity> I suspect ubuntu-archive-tools needs to grow a dependency on cowsay for all our warnings.
[18:41] <micahg> SpamapS: that's the problem with these checks though, they train us to rely on the tools
[18:41] <SpamapS> me resists the urge to run /exec -o cowsay +1 in #ubuntu-devel
[18:41]  * SpamapS also resisted the urge to type /me
[18:41] <SpamapS> micahg: um, if we cannot rely on deterministic tools , what can we rely on?
[18:42] <SpamapS> micahg: I mean, I'm asking the tool to do, via the API, the same check I'd do manually via the web interface...
[18:43] <micahg> SpamapS: well, it shouldn't be an excuse not to think about these things at all, i.e. you know what steps you need to take, the tool is helping you streamline
[18:43] <infinity> The tool could ask a remote machine to take a picture of the web UI via a mindstorms-controlled camera, and then email you the resulting jpeg.
[18:43] <SpamapS> infinity: christmas holiday project selected, thank you
[18:44] <infinity> Use some pattern detection software to find the little "failed" icons, and circle them in crayon.
[18:44]  * SpamapS starts working on connecting a kinect to his mindstorms
[18:45] <SpamapS> micahg: who is making excuses? I can use my browser and my eyes, or a script. Honestly, I trust the script to be more consistent than me. :)
[18:46] <SpamapS> There's another problem which is that the verification-done is never arch specific
[18:46] <SpamapS> I suppose we just wave our hands over that..
[18:47] <Daviey> The tool should make it harder to go against the recommendation IMO.
[18:47] <Daviey> I trust tools more than my own eyes, lointian catches MUCH more stuff that i ever would.
[18:47] <Daviey> dput checking that *.changes are signed before tey upload... sure i can ork around it, but it's noce to have the check.
[18:48] <SpamapS> Daviey: less IRC, more whiskey. ;) srrsly, shouldn't you actually take the holiday you're on? ;-)
[18:49]  * SpamapS just went over his emoticon quota for the day, #$!@
[18:49] <micahg> SpamapS: I'm saying that you shouldn't be replaced with a cron job that checks what the script checks and copies if all the tests pass; there should be some human sanity check that happens as well
[18:50] <micahg> SpamapS: that human sanity check is understanding the process and verifying that the requirements are met (even with the help of the scripts)
[18:50] <infinity> Daviey: lointian?  Do I even want to know what checks that's performing?
[18:50] <slangasek> I: codpiece-not-recommended
[18:51] <SpamapS> micahg: of course. So you agree then, the release script should tell me what arches are, and aren't, built, and if they're not all built, it should error out. :) likewise, the report should show the same level of built/unbuilt. :)
[18:52] <micahg> Daviey: sure, I'm not saying not to use the tools :), just saying that you shouldn't blindly trust lintian either (that's why we have overrides, scripts can be wrong)
[18:52] <micahg> SpamapS: oh, definitely, but I think you still need to be aware of the check :)
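[The guard being discussed above could be sketched as a small shell helper. This is a hypothetical illustration, not the actual sru-release code: the function name is invented, and in a real tool both architecture lists would come from the Launchpad API rather than being passed in by hand.]

```shell
# Sketch: refuse to copy proposed->updates when any expected
# architecture has no successful build. Hypothetical helper; the
# arch lists would really be fetched from Launchpad.
check_arches() {
    local expected="$1"   # space-separated, e.g. "amd64 i386 armel powerpc"
    local built="$2"      # arches whose builds actually succeeded
    local missing=""
    for arch in $expected; do
        case " $built " in
            *" $arch "*) ;;                   # a successful build exists
            *) missing="$missing $arch" ;;    # no build on this arch
        esac
    done
    if [ -n "$missing" ]; then
        echo "ERROR: missing builds for:$missing" >&2
        return 1
    fi
    return 0
}
```

[An --ignore-unbuilt-arches style flag, as floated above, would then be an explicit branch around this check rather than the default path, which is where infinity's concern about force flags creeping into workflows comes in.]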
[18:54] <raphink> Is it normal that /etc/ld.so.conf.d/zz_i386-biarch-compat.conf as distributed by libc6-i386 doesn't contain /usr/lib32/mesa ?
[18:54] <slangasek> yes
[18:55] <slangasek> just as /usr/lib/mesa isn't put on the path by libc6
[18:55] <raphink> google-earth failed to load for me
[18:55] <Daviey> *glug*
[18:55] <raphink> but it works after I add /usr/lib32/mesa to the path
[18:55] <raphink> or is it another inclusion path that's missing?
[18:56] <raphink> for example, it couldn't find libGL.so.1
[18:57] <slangasek> install libgl1-mesa-dri:i386 instead?
[18:57] <raphink> well, ia32-libs already provides it in /usr/lib32/mesa
[18:57] <raphink> and it works
[18:57] <raphink> so why should we install a package from another arch (unless that's the new way and ia32-libs is not supposed to be used anymore?)
[18:59] <micahg> raphink: https://lists.ubuntu.com/archives/ubuntu-devel/2011-October/034279.html
[19:00] <raphink> ok
[19:00] <raphink> so that's the way in precise, thanks
[19:00] <raphink> the machine I couldn't get it to work on runs oneiric, and I hadn't considered this might have changed in precise
[19:07] <slangasek> raphink: ia32-libs has *never* had correct handling of libGL; it's always worked only for one libGL implementation at a time because it didn't implement the alternatives handling used for the native libs, and it's not the responsibility of libc6-i386 to fix this
[19:08] <slangasek> it needs to be fixed by having the packages you're installing to get 32-bit libGL do the same alternatives handling as the native ones... which we address by having you actually install the 32-bit libGL packages
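[For the route slangasek describes, the precise-era commands look roughly like this. This is a hedged sketch: package names are as they existed around precise, i386 is often already enabled as a foreign architecture on Ubuntu amd64 installs, and on oneiric ia32-libs was still the mechanism.]

```shell
# Enable i386 as a foreign architecture if it isn't already
# (dpkg 1.16.2+; on older dpkg this was a dpkg.cfg.d setting):
sudo dpkg --add-architecture i386
sudo apt-get update

# Install the real 32-bit GL stack instead of the ia32-libs copies,
# so the normal libGL alternatives handling applies to 32-bit too:
sudo apt-get install libgl1-mesa-glx:i386 libgl1-mesa-dri:i386
```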
[21:37] <arges> Hello. Is there an easy way to 'apt-get source' a package from an older version of ubuntu. i'm running oneiric, but want to get the sources to a lucid package. i'm guessing I should be using a separate chroot? thanks
[21:37] <broder> arges: pull-lp-source from ubuntu-dev-tools
[21:38] <broder> there's also chdist in devscripts, but it's harder to set up
[21:38] <arges> broder, cool thanks. downloading the ubuntu-dev-tools now
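[broder's suggestion in practice: pull-lp-source takes a package name and an optional release (or version); with no release it fetches from the current development series, so no chroot is needed. The package name here is just an example.]

```shell
# pull-lp-source is in ubuntu-dev-tools:
sudo apt-get install ubuntu-dev-tools

# Fetch and unpack the lucid version of a source package into the
# current directory, regardless of what release this machine runs:
pull-lp-source hello lucid
```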