[00:01] <xnox> slangasek: it does ignore stuff that is unknown / not bzr add'ed. Running in merge-upstream mode? These are the two cases I know when it ignores upstream changes.
[00:01] <slangasek> [BUILDDEB]
[00:01] <slangasek> split = True
[00:01] <slangasek> xnox: thanks for the hint
[00:01] <xnox> ha. so it does have them, in the tarball =)
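The snippet slangasek pasted is bzr-builddeb's split-mode switch; a minimal sketch of where it lives (in-tree config file per bzr-builddeb's documentation):

```
# debian/bzr-builddeb.conf (sketch): split mode means the branch holds the
# upstream source plus debian/, and bzr-builddeb generates the upstream
# tarball itself by stripping debian/ out -- which is why the "ignored"
# changes do end up in the tarball.
[BUILDDEB]
split = True
```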
[05:27] <pitti> Good morning
[05:50] <SpamapS> hrm, mysqld is segfaulting during the php test suite
[06:12] <RAOF> Any bzr-git maestros in here?
[06:14] <SpamapS> shoot.. that's not a segfault.. it's hitting an ASSERT.. whoops.. gotta turn those back off
[06:26] <infinity> RAOF: Oh hai.
[06:26] <infinity> RAOF: I'm not a bzr-git guy at all, but I totally need to hijack you to tell you that my laptop hates you.
[06:26] <infinity> RAOF: I've been getting those GPU lockup dialogs (after long freezes) several times a day.
[06:27] <infinity> RAOF: Do those actually go anywhere that gets read? :)
[06:27] <RAOF> If you send them up they'll end up on the X team's workqueue.
[06:27]  * pitti is suffering from bug 1081009, that might be yours?
[06:27] <infinity> I've sent maybe 10 or so.
[06:27] <pitti> https://bugs.freedesktop.org/show_bug.cgi?id=55984 is seeing some action
[06:28] <infinity> pitti: The symptoms definitely sound like mine.
[06:28] <pitti> although I'm not quite sure whether this was triaged right, Arrandale != Ironlake
[06:28] <pitti> and it's definitely unnerving; I'm running the quantal kernel
[06:29] <infinity> I'm running raring kernel and userspace.
[06:29] <pitti> I'm running raring userspace, but with this lockup the raring kernel is by and large useless for me
[06:29] <infinity> It only started for me a few days ago, after some dist-upgradery.
[06:29] <pitti> I need to reboot every hour or so, losing the last piece of work
[06:29] <pitti> for me it precisely started with the introduction of 3.7
[06:29] <infinity> pitti: I don't need to reboot, I find VT switching out and back fixes it.
[06:29] <pitti> lucky you
[06:29] <pitti> not for me
[06:30] <infinity> Heh.
[06:30] <pitti> not even shutting down lightdm and restarting
[06:30] <infinity> So, almost the same. :P
[06:30] <infinity> But I'm on different hardware.
[06:30] <pitti> infinity: yeah, so I think yours is the fd.o one
[06:30] <pitti> but that actually also talks about two different issues, and the "other" one seems to be mine
[06:31] <infinity> I don't speak codenames, so I don't know what an Ironlake is.
[06:31] <infinity> But it's a Sandybridge CPU with Intel HD whatever graphics.
[06:31] <jk-> it's an iron. in a lake.
[06:31] <infinity> jk-: You're so remarkably helpful.
[06:31] <pitti> infinity: yeah, so sandybridge is the chipset generation after mine (arrandale)
[06:31] <RAOF> infinity: Your *Sandybridge* is seeing problems? My Sandybridge, obviously, is absolutely stable.
[06:31] <pitti> infinity: I'm not quite sure about the ordering of ironlake
[06:32] <pitti> and I thought all those X freezes and pipeline underruns were a thing of the distant past :(
[06:32]  * pitti still remembers the long hours of testing various watermark heuristic patches
[06:32] <hyperair> RAOF: my *sandybridge* is seeing problems too.
[06:33] <pitti> it's like every other raring user has this problem except the X.org team :)
[06:33] <hyperair> RAOF: but i suspect it's got a lot to do with a stupid bios.
[06:33] <infinity> The funny thing is, I switched from discrete graphics to Intel on advice that the drivers were more solid. :P
[06:33] <infinity> Though, this is actually one of those silly hybrids, I could switch it in the BIOS and go nvidia for a while.
[06:33] <RAOF>  http://www.bryceharrington.org/Arsenal/ubuntu-x-swat/Reports/totals-raring-workqueue is what's on the X team's easy workqueue.
[06:33] <pitti> RAOF: last time I looked this was actually the #1 raring problem on errors.u.c.
[06:33] <RAOF> infinity: Is https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-intel/+bug/1041790 what you're seeing?
[06:33] <pitti> infinity: they had been for years really
[06:34] <hyperair> infinity: they are more solid though. just not completely foolproof.
[06:34] <infinity> RAOF: I don't remember what error codes I was getting.  Is that saved anywhere?
[06:34] <pitti> you should have some files in /var/crash/
[06:34] <pitti> I get proper dumps, anyway
[06:34] <infinity> Oh yes, I have tons of them.
[06:35] <infinity> RAOF: Yep, that's the one.
[06:36] <RAOF> So, you could probably be all switch-to-SNA to get around the problem.
[06:36] <infinity> (base)adconrad@cthulhu:/var/crash$ sudo grep 0b140001 * | wc -l
[06:36] <infinity> 3272
[06:36] <SpamapS> *doh*
[06:37] <infinity> RAOF: Speak slowly, I'm a toolchain guy.  What's an SNA?
[06:37] <RAOF> infinity: It's the shiny new acceleration architecture for Intel.
[06:37] <pitti> is it really spelled out like that? :)
[06:37] <infinity> Though, the comments in that bug about SNA sucking differently don't instill confidence.
[06:38] <RAOF> Oh, yeah.
[06:38] <infinity> I might just hit the BIOS and turn the nvidia battery-eater back on, and switch to nouveau for a while.
[06:38] <RAOF> A fine choice!
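RAOF's switch-to-SNA suggestion boils down to one driver option; a hedged sketch (file name illustrative, option per the intel driver's man page):

```
# /etc/X11/xorg.conf.d/20-intel.conf (sketch): opt into the SNA
# acceleration architecture instead of the default UXA
Section "Device"
	Identifier "Intel Graphics"
	Driver "intel"
	Option "AccelMethod" "sna"
EndSection
```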
[06:38]  * pitti sobs a little; I haven't had X problems since karmic or so
[06:39]  * infinity hasn't had X problems since he used to live with daniels.
[06:39] <infinity> And back then, the X problems were forced on me. :P
[06:39] <infinity> "Here, try this" ... "ARGH!"
[06:44] <infinity> RAOF: So, I'm not one to generally jump on the "omg, revert" bandwagon, but perhaps reverting to the Q version for now might be sane?
[06:44] <infinity> RAOF: Or does that present ABI problems that mean reverting a whole stack of crap?
[06:44] <infinity> Or API problems, rather, ABI's no big deal, we'd be rebuilding.
[06:45] <infinity> Or is it actually a kernel bug, and reverting the X driver won't make a lick of difference?
[06:49] <pitti> the X drivers didn't actually change in raring that much
[06:49] <pitti> so reverting the bits in the kernel ought to work, given that raring userspace works just fine on a quantal kernel
[06:50] <pitti> and given how everyone complains about this, it might not be the worst idea
[06:51] <infinity> pitti: I suspect "reverting the kernel" is a bit more troublesome, unless someone's already isolated a small and self-contained (set of) commit(s) to revert?
[06:51] <infinity> pitti: If so, find a kernel team member in your timezone (say, smb) and, for the love of god, make it happen. :)
[06:52] <pitti> it might actually be easier/possible to DKMSify quantal's i915?
[06:52] <pitti> at least back then I built the i915 module out of an otherwise unbuilt kernel tree
[06:52] <pitti> not sure how many dependencies it grew by now
[07:46]  * smb tries not to look too awake yet
[07:54]  * xnox my sandybridge locks up. TTY7 -> TTY1 -> TTY7 unfreezes it usually and then I get the popup "we detected it froze. Did you need a hard reset?"
[07:54] <pitti> xnox: was discussed some two hours ago already
[07:54] <infinity> xnox: Yeah, that's exactly how it works for me.  pitti's not so lucky, VT switching doesn't fix him.
[07:54] <pitti> xnox: bug 1041790
[07:55] <xnox> I see =)
[07:55]  * xnox just woke up and reading backscroll.
[07:55] <xnox> pitti: thanks for merging adt wrapper. Is it "deployed" to jenkins as well?
[07:55] <pitti> xnox: yes
[07:56] <xnox> pitti: awesome, thanks.
[08:00] <dholbach> good morning
[08:05] <xnox> morning =)
[08:05] <tjaalton> i've now upgraded my snb laptop to raring as well, so if there are issues with the kernel they'll get sorted out ;)
[08:07] <tjaalton> pitti: oops, did I mix arrandale and ironlake..
[08:11] <didrocks> indicator-messages | 12.10.5-0ubuntu2 |        raring | armel
[08:11] <didrocks> is that normal that armel is still listed in rmadison? ^
[08:11] <infinity> didrocks: Yes, we'll clean it up.
[08:11] <infinity> didrocks: ftpmaster still has the index files (but not the debs) for armel, that's all.
[08:12] <infinity> didrocks: And rmadison works off Packages/Sources files.
[08:12]  * xnox wonders if there are lonely armel raring boxes trying to upgrade.....
[08:12] <didrocks> infinity: ok, thanks for confirming :)
[08:24] <dholbach> does anyone know anything new about bug 965371? somewhere in the bug it says this was fixed in quantal, but on my quantal server I still see the same problem with pylplib
[08:32] <infinity> dholbach: Somewhere between mdeslaur and cjwatson you may find someone who knows the state of that mess.
[08:33] <infinity> dholbach: It seems that any way we try to fix it, we just break another weird corner case in the process.
[08:34] <dholbach> I'll go through the entire comments in the bug again - maybe I find a workaround for mine :/
[08:40] <brendand> dholbach, hey! i think the patch on https://bugs.launchpad.net/ubuntu/+source/hplip/+bug/1069324 looks good, but i don't have any rights - so how can i help out?
[08:41] <dholbach> tkamppeter__, ^ do you think you could help out with this?
[08:41] <dholbach> brendand, tkamppeter__ is our local printing expert :)
[08:53] <didrocks> infinity: cjwatson: backlogging on yesterday's discussion
[08:54] <didrocks> infinity: cjwatson: slangasek: so the automatic upload is checking the current version
[08:55] <didrocks> like last night, it blocked because of an upload out of trunk
[08:55] <didrocks> so the archive is authoritative
[08:55] <didrocks> see https://jenkins.qa.ubuntu.com/view/cu2d/view/Indicators%20Head/job/cu2d-indicators-head-1.1prepare-appmenu-gtk/
[08:55] <didrocks> and particularly the artefact: https://jenkins.qa.ubuntu.com/view/cu2d/view/Indicators%20Head/job/cu2d-indicators-head-1.1prepare-appmenu-gtk/lastSuccessfulBuild/artifact/upload_out_of_trunk_appmenu-gtk_12.10.3daily12.11.28-0ubuntu2.xml
[08:56] <didrocks> the issue there is that there was a distro patch (inline apparently?) that wasn't submitted/committed upstream
[08:56] <didrocks> and when doing the bootstrap, this patch didn't show up
[08:57] <didrocks> so it's only an issue with bootstrapping, we missed it apparently (cyphermox did the bootstrap and as there was no debian/patches…)
[09:16] <didrocks> so, just a heads-up, this was merged only in the packaging branch, not upstream. Then it seems that cyphermox bzr merged the packaging branch upstream, but then did a bzr revert debian/ (which is fine for new files as we don't want to have configure in the upstream repo, but not for modified ones)…
[09:16] <didrocks> so human error for the bootstrap, as it can happen for a manual merge and inline patch
[09:34] <infinity> dholbach: So, I know you just sent an email shaming us all into patch piloting more vigorously, but I was thinking I'd spend the better part of my piloting day tomorrow clearing out the SRU queues instead.  We're about 80 uploads deep and climbing, and I suspect my time would be better spent there, before piloting even more patches that will land in upload queues that are stagnating. :P
[09:34] <dholbach> infinity, works for me
[09:34] <dholbach> I don't want to shame anyone
[09:35] <dholbach> it's just that we need to do it and it will be good for us :)
[09:35] <diwic> infinity, +1
[09:35] <infinity> mvo: Any reason you don't rev VERSION in softwarecenter/version.py when you tag/release?
[09:35] <infinity> mvo: My local copy is at 5.5.1.1 (because I revved it), but bzr trunk is still sitting at 5.3.7 :P
[09:36] <tjaalton> pitti: ok I checked, the chipset is arrandale, but the "graphics media hub" is ironlake, so the upstream bug is the correct one for this case I think
[09:36] <pitti> tjaalton: ah, thanks
[09:36] <tjaalton> pitti: so you could try i915.i915_enable_rc6=0 with it
[09:36] <tjaalton> the raring kernel
[09:36] <infinity> dholbach: I'll still check in with the bot and respond to people pinging on IRC for specific help, but yeah, I think my time will be better spent trying to reduce the upload queues so that piloted SRU patches aren't a month behind after they're uploaded. :)
[09:37]  * dholbach hugs infinity
[09:37] <pitti> tjaalton: $ sudo cat /sys/module/i915/parameters/i915_enable_rc6
[09:37] <pitti> -1
[09:37] <pitti> tjaalton: does that mean "enabled"? (I thought rc6 doesn't work with my chipset)
[09:37] <pitti> tjaalton: but I'll try it
[09:38] <tjaalton> hmm don't know what that value means. with the quantal kernel it was disabled for sure
[09:38] <tjaalton> then turned on in 3.7(?) and later disabled again
[09:38] <pitti> ah right -- that is the quantal kernel
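tjaalton's workaround is a kernel command-line parameter; one hedged way to test it persistently (standard GRUB configuration, run update-grub afterwards):

```
# /etc/default/grub (sketch): disable RC6 power saving in i915, the
# suspected trigger for the Ironlake hangs discussed above
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i915.i915_enable_rc6=0"
```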
[09:38] <tjaalton> I'll check if it's already in the next rc
[09:39] <xnox> pitti: /usr/lib/python2.7/dist-packages/bzrlib/plugins/dbus/activity.py:122: PyGIDeprecationWarning: MainLoop is deprecated; use GLib.MainLoop instead
[09:39] <pitti> xnox: yep; it needs GObject.MainLoop →  GLib.MainLoop
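The rename pitti points at is mechanical; a sketch of what the plugin fix would look like (illustrative, not the actual bzr-dbus commit):

```diff
-from gi.repository import GObject
-loop = GObject.MainLoop()
+from gi.repository import GLib
+loop = GLib.MainLoop()
```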
[09:40] <mvo> infinity: no (good) reason, no, it's autogenerated on build but bzr-buildpackage builds outside of the tree, so no good reason
[09:40] <mvo> infinity: thanks for your build fix btw :)
[09:40] <xnox> pitti: ack. It will annoy me enough to eventually make me upload a fix =)
[09:42] <tjaalton> pitti: looks like it's not in 3.7 yet, pinging upstream
[09:42] <pitti> tjaalton: where "it's" == ?
[09:42] <tjaalton> pitti: ah, the patch to revert rc6 for ironlake :)
[09:42] <pitti> ah
[09:43] <tjaalton> disables it again
[09:43] <pitti> tjaalton: so that should fix my crashes, but not infinity's and xnox' hangs?
[09:43] <tjaalton> but the kernel option I gave does the same when you have a chance to test it
[09:43] <tjaalton> pitti: yeah, probably doesn't change it for them
[09:43] <tjaalton> err, definitely doesn't
[09:43] <pitti> yep, I will; but I'm deep in a debugging session in a VM, so it'll be an hour or two until I can reboot
[09:43] <tjaalton> but probably fixes it for you :)
[09:43] <pitti> thanks for the hint!
[09:44] <tjaalton> sure thing, take your time and don't do anything critical in case it blows up on you :)
[09:45] <tjaalton> about the other hang, we've been discussing whether to use sna by default..
[09:46] <tjaalton> I'll push -intel 2.20.14 to see if it fixes the issues some folks on the bug were seeing with sna
[09:46] <infinity> tjaalton: Is that my sandbridge bug?
[09:46] <tjaalton> infinity: yes
[09:46] <infinity> sandy, too.
[09:46] <pitti> tjaalton: it seems to trigger when I'm putting the machine under high load, such as creating VMs
[09:47] <infinity> tjaalton: Awesome.  I shall upgrade and let you know if I stop seeing hangs over a day or two.
[09:47] <pitti> tjaalton: so I'll exercise it that way
[09:47] <infinity> tjaalton: I've been getting them every hour or two over the last few days, so shouldn't take long to confirm.
[09:47] <tjaalton> infinity: ok, I'll prepare & upload in a bit
[09:47] <tjaalton> right, I hope to see them myself on my t420s
[09:47] <infinity> This is also a T420s, so your odds are good.
[09:48] <tjaalton> great! :)
[09:48] <tjaalton> pitti: yeah that'd make sense
[09:48] <tjaalton> infinity: btw, which bios version do you have?
[09:49] <infinity> tjaalton: Happened with both 1.31 and 1.35 (I upgraded to see if things would improve)
[09:50] <tjaalton> infinity: I'm still on 1.30, all attempts to boot the update cd have failed so far.. was thinking if you happen to force pcie_aspm to save a watt
[09:50] <tjaalton> here it causes issues with the drive
[09:50] <infinity> tjaalton: Do you have your BIOS set to UEFI only?  The update CD will only boot in Legacy.  I found that out after an hour of head->desk.
[09:51] <tjaalton> oh
[09:51] <tjaalton> I'm not sure actually..
[09:52] <tjaalton> seems to be using legacy only, so it's not that
[09:53] <cking> tjaalton, do you mean ALPM for the drive issue or ASPM?
[09:53] <tjaalton> cking: the one you suggested :)
[09:53] <tjaalton> can't remember which one it was
[09:54] <cking> tjaalton, ALPM for the HDD, in which case yes, it may give issues
[09:54] <tjaalton> so my machine seemed to work fine for a while, then I started getting i/o-errors
[09:55] <tjaalton> that's why I've been trying to update the bios to see if it would help
[09:55] <cking> tjaalton, it may be more of a controller/drive combo issue
[09:56] <infinity> Hrm, I have controller/drive issues on my T420s, though not leading to errors or corruption.
[09:56] <infinity> It's just abnormally slow.
[09:56] <infinity> Compared to my older T61 with a similar (but older) drive.
[09:56] <tjaalton> yeah, I have an ocz agility 3 ssd on it
[09:57] <infinity> cking: Do you have any clever ideas on diagnosing "my disk performance seems to be crap"?
[09:57] <cking> infinity, what kind of disk performance issues are you seeing?
[09:57] <infinity> I just opted to put 16G of RAM in it and do everything in tmpfses.
[09:58] <infinity> cking: Just... Slow.  Heavy reads and writes are both lousy.  So, startup of large applications, or long dpkg runs.
[09:58] <infinity> cking: Updating a chroot takes longer on my laptop than it does on my Panda with a USB disk attached.
[09:58] <infinity> cking: And it's pretty clearly disk I/O, cause it's blinding fast in tmpfses.
[09:59] <cking> infinity, we are in the process of looking at a bunch of I/O performance issues
[09:59] <tjaalton> infinity: which fs?
[09:59] <infinity> tjaalton: ext4
[09:59] <cking> but it's taking forever to tease out
[09:59] <pitti> hasn't it been crap for many years already?
[09:59] <pitti> I can't remember a time when doing an rsync or copying large files hasn't brought my system to a crawl
[10:00] <infinity> cking: To be clear, it also doesn't seem to be vfs or filesystem, since my old T61, with a very similar setup, is much, much faster.
[10:00] <pitti> (load goes to 5 or 10, and everything feels like tar)
[10:00] <infinity> cking: Both are 2.5in 7200 RPM drives, though I'd expect the newer/denser one in the new laptop to be a smidge faster, rather than a ton slower.
[10:01] <infinity> (I haven't tried swapping drives yet, I may do that at some point as a data point, but I'm assuming it's the controller, or the driver for said controller)
[10:04] <cking> infinity, well swapping drives is the first step to sanity checking this
[10:04] <infinity> cking: *nod* ... Agreed.  Just haven't found the time to take a screwdriver to both machines.
[10:05] <infinity> Though, I suppose once I do, I have a spare 2.5in SATA drive floating around that I can plug into a random ARM board.
[10:05] <infinity> That's a win, right? :P
[10:05] <cking> :-)
[10:06]  * apw idly wonders if command queueing is enabled on the controller
[10:06] <infinity> I have a plethora of rather large 2.5in PATA drives kicking around, I wonder if adaptors are cheap enough to be worth driving to the store.
[10:06] <infinity> apw: Don't we enable that by default on anything built after, like, 2002?
[10:06] <apw> oh we should indd
[10:06] <infinity> (And how can I find out?  hdparm?)
[10:06] <apw> indeed, but you never know
[10:06] <ogra-cb> hdparm
[10:07] <infinity> What's the "show me everything you know about this drive" flag?
[10:09] <infinity> /dev/sda:
[10:09] <infinity>  queue_depth   = 31
[10:09] <infinity> I assume that also means it's enabled?
[10:09] <cjwatson> didrocks: OK, so can you fix it so that the same sanity check is applied to bootstrapping?  I'm sure we'll be doing plenty more bootstrapping, and it would be helpful for it to be robust.
[10:11] <didrocks> cjwatson: yeah, well, we always have modified files (because of autogenerated changelog), so what I'm doing now is to go over all the indicator stack and bzr branch <before inlining> && bzr merge packaging -> look at what files were modified
[10:11] <ogra-cb> infinity, mine has "*	Native Command Queueing (NCQ)" in the capabilities list
[10:11] <cjwatson> Special-casing the changelog would be fine, of course
[10:11] <didrocks> cjwatson: until now, I only spotted one project where the tests were commented downstream, so we definitely don't want to pick that :)
[10:11] <didrocks> changelog, news and so on
[10:12] <didrocks> cjwatson: I'm documenting the bootstrapping procedure with this
[10:12] <ogra-cb> err, commands/features that is
[10:12] <didrocks> and will write a checker
[10:12] <cjwatson> thank you
[10:12] <infinity> ogra-cb: Ahh, in -I?  Yeah, mine too.
[10:12] <didrocks> thanks, sorry for this oversight (but at least, the daemon picked it once you uploaded the fix)
[10:13] <ogra-cb> right in -I
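The capability check infinity and ogra-cb settled on can be reproduced with two commands (sketch; device name illustrative):

```
# Check whether NCQ / command queueing is supported and active:
$ sudo hdparm -I /dev/sda | grep -i -A1 queue   # capabilities list shows
                                                # "Native Command Queueing (NCQ)"
$ cat /sys/block/sda/device/queue_depth         # values > 1 mean queueing is on
```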
[10:13] <didrocks> my fault, I should have picked it during reviewing cyphermox's bootstrap merge
[10:13] <didrocks> (but it didn't appear on the diff of course as it was reverted)
[10:13] <cjwatson> well, it should still be autochecked immediately before copy, to minimise the race condition
[10:14] <didrocks> cjwatson: I'm checking the version, if the version lies in the vcs? what do you mean?
[10:15] <didrocks> the issue there was that a version claiming in the vcs to be that version wasn't exactly that (and can't be, on a bootstrap, because of all those autogenerated files we don't have anymore)
[10:22] <cjwatson> didrocks: You must not trust the VCS for this purpose - you *must* double-check against the archive
[10:22] <cjwatson> The VCS is what we intend to ship to users, but the archive is what we *are* shipping
[10:23] <cjwatson> I'm much less concerned about differences in autogenerated files than I am in you checking that the version in the changelog is what you expect it to be
[10:24] <didrocks> cjwatson: I'm trying to think of a way of doing that automatically, like apt-get source the previous version, rebuilding the previous vcs version (in addition to the current one) and diffing? ignoring some autogenerated/modified file
[10:24] <cjwatson> This is analogous (though obviously not identical) to auto-sync checking for an "ubuntu" substring in versions and refusing to ever overwrite
[10:25] <cjwatson> The problem at hand isn't that the contents aren't what you expect - it's that there's a whole new upload in the archive you don't know about
[10:25] <cjwatson> You can spot that with a simple version check
[10:25] <didrocks> what do you mean? I do a version check, see the files above ^
[10:25] <cjwatson> Err - then how did last night's breakage happen?
[10:25] <didrocks> ok, let me explain again :)
[10:26] <didrocks> So, I'm doing a version check twice
[10:26] <didrocks> one at the very beginning of the process and one at the end
[10:26] <didrocks> if the vcs doesn't have in debian/changelog the latest version published into distro, the component is ignored
[10:27] <didrocks> for instance, after you uploaded the -0ubuntu2 yesterday evening, last night's run published: https://jenkins.qa.ubuntu.com/view/cu2d/view/Indicators%20Head/job/cu2d-indicators-head-1.1prepare-appmenu-gtk/lastSuccessfulBuild/artifact/upload_out_of_trunk_appmenu-gtk_12.10.3daily12.11.28-0ubuntu2.xml
[10:27] <didrocks> which marked the appmenu-gtk job as unstable https://jenkins.qa.ubuntu.com/view/cu2d/view/Indicators%20Head/job/cu2d-indicators-head-1.1prepare-appmenu-gtk/
[10:27] <didrocks> this is to avoid this overwrite
[10:27]  * cjwatson looks at the changelogs
[10:27] <cjwatson> Oh, so this was actually a human mismerge?
[10:27] <didrocks> so what happens here is because of the bootstrap
[10:27] <didrocks> right
[10:28] <cjwatson> I understand now
[10:28] <didrocks> the vcs claimed to be at that version
[10:28] <didrocks> when some local modifications weren't committed
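The guard didrocks describes (ignore a component when the archive's latest version is missing from the VCS changelog) can be sketched in a few lines; the function names and the simple regex are illustrative, not cu2d's actual code:

```python
import re

# Matches debian/changelog entry headers: "package (version) series; urgency=..."
ENTRY_RE = re.compile(r'^\S+ \(([^)]+)\) [^;]+;', re.M)

def changelog_versions(changelog_text):
    """Return every version recorded in a debian/changelog, newest first."""
    return ENTRY_RE.findall(changelog_text)

def safe_to_publish(changelog_text, latest_archive_version):
    # The archive is authoritative: if its latest published version does not
    # appear in the VCS changelog, someone uploaded out of trunk and the
    # automatic upload must not overwrite it.
    return latest_archive_version in changelog_versions(changelog_text)
```

Run once at the start of the process and once immediately before the upload, as described above, to narrow the race window.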
[10:28] <cjwatson> In that case I don't see this as a failure of the autopackaging/autouploading tools
[10:28] <cjwatson> A human committer screwed up
[10:28] <didrocks> yeah, like when we merge from debian for instance
[10:28] <didrocks> it's a similar case of failure
[10:28] <infinity> A human committer screwed up, but a human uploader would have noticed, I'd like to think.  Maybe not.
[10:29] <cjwatson> Right.  I thought that the problem was that the upload had been entirely disregarded
[10:29] <infinity> Cause upload time is (traditionally) when you diff against the previous archive version to see if you buggered it.
[10:29] <cjwatson> infinity: Mismerges happen and reach the archive rather a lot
[10:29] <infinity> cjwatson: Sure, they do.  Not arguing that they don't.
[10:29] <seb128> infinity, if you mismerge you probably mis-dput as well
[10:29] <didrocks> cjwatson: ah, not at all
[10:29] <OdyX> mdeslaur, pitti : sent a summary to the debian bug for the cupsd privilege escalation. I suspect mdeslaur's solution + hindering HTTP POST might be the only solution we have…
[10:29] <infinity> I'm still just very wary of the idea that sufficiently complex machinery can replace a real person giving a pass/fail on tagging/rolling a release.
[10:30] <seb128> infinity, those are not releases, they are daily snapshots
[10:30] <didrocks> and this kind of thing will only happen at bootstrap (or if a merge backported from the distro isn't properly done)
[10:30] <tkamppeter> dholbach, no problem, I can apply the patch. I have to make an HPLIP upload to Raring anyway as there is a new release.
[10:30] <didrocks> but I guess that an upload on distro will be noticed and the diff should be low enough so that the reviewer spot it
[10:30] <cjwatson> seb128: terminology
[10:30] <infinity> seb128: They're daily snapshots being released to users.  That's a release.
[10:31]  * pitti remembers more than one buggered mis-merge upload that was done manually :(
[10:31] <dholbach> brendand, ^
[10:31] <dholbach> thanks tkamppeter
[10:31] <tkamppeter> pitti, I have a problem with setting permissions on a file of the cups-filters package.
[10:31] <didrocks> cjwatson: so the indicator stack is cleaned, I'm checking the last one, oif (but the other 25 projects enabled are cleaned; the only guilty one was appmenu-gtk)
[10:31] <pitti> (i. e. reading debdiffs before upload is a great habit!)
[10:32] <pitti> tkamppeter: what are you trying to do?
[10:32] <pitti> OdyX: thanks
[10:32] <infinity> pitti: Yes, one I keep trying to get people to do. :P
[10:33] <seb128> infinity, well, most uploaders probably trust the content of the packaging vcses as well and don't bother doing a debdiff and reading it before building/pushing ... so it's not much different, the checking is just done at commit time and not upload time
[10:34] <tkamppeter> pitti, in debian/rules, in the binary-post-install/cups-filters:: section I do "chmod 700 debian/$(cdbs_curpkg)/usr/lib/cups/backend/serial" and in the resulting package /usr/lib/cups/backend/serial has still standard 755 permissions.
[10:34] <pitti> seb128: I was thinking of the cases where a new upstream version was done in UDD with only bumping debian/changelog, but not bzr merge-import (i. e. diff.gz reverted the newer version to the older one again)
[10:34] <infinity> seb128: I'd argue that those people are wrong, and codifying that as best practise is also wrong. :P
[10:34] <pitti> the autogenerated debian/patches/<version>.patch doesn't help, of course
[10:34] <xnox> seb128: I was once caught out by ubiquity (it embeds other source packages on upload, but not in the VCS), so these days I do `pull-lp-source $pkg` and compare the debdiff of what i am about to upload.
[10:34] <pitti> tkamppeter: I bet that's done before dh_fixperms runs
[10:35] <seb128> pitti, right, I'm just saying that for normal day to day work (like doing a new version update in ubuntu), people tend to work, bzr diff, review the diff, upload, build, test, dput (without doing a debdiff)
[10:35] <pitti> tkamppeter: so you need to tell dh_fixperms to -Xserial
[10:35] <OdyX> tkamppeter: move away from cdbs … :) . That and dh_fixperms indeed.
[10:35] <infinity> I do a lot of development in VCSes, but because the archive and source packages are authoritative, I always debdiff prev.dsc new.dsc before uploading.
[10:35] <seb128> xnox, you have more discipline than me then ;-)
[10:35] <cjwatson> seb128: "people" - I *never* do that
[10:35] <infinity> seb128: Which "people" are these?
[10:35]  * pitti always checks debdiffs
[10:35] <tkamppeter> pitti, formerly, in the cups package it worked, was dh_fixperms only a recent addition?
[10:35] <pitti> OdyX: not really cdbs specific :)
[10:35] <infinity> seb128: And can we get them (you?) to stop?
[10:35] <cjwatson> I always always always debdiff before upload and tell people I sponsor to do the same
[10:36] <cjwatson> It's saved me from innumerable mistakes
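The routine cjwatson and pitti describe is two commands; a hedged sketch (package name and versions illustrative):

```
# Diff what you are about to upload against what the archive actually ships:
$ pull-lp-source hello raring                    # fetch the archive's source
$ debdiff hello_2.8-1.dsc hello_2.8-1ubuntu1.dsc | less
```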
[10:36] <tkamppeter> pitti, how can I suppress dh_fixperms or make an exception for the mentioned file.
[10:36] <xnox> seb128: you do check debdiff when sponsoring? so why not do the same for your own uploads?!
[10:36] <infinity> It saves me from a lot of large mistakes, it also saves me from introducing annoying cruft here and there.
[10:36] <pitti> tkamppeter: so with cdbs it's DH_FIXPERMS_ARGS=-Xserial
[10:36] <pitti> tkamppeter: with dh7 you override it as normal and supply the argument
[10:36] <cjwatson> Even when I'm doing mass rebuild-only uploads and the like, the homemade scripts I use for that present me with a debdiff before I say yes
[10:36] <pitti> tkamppeter: no, dh_fixperms has existed for ages
[10:37] <pitti> tkamppeter: as I said; -X is a general debhelper option to ignore a file
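pitti's two answers each correspond to a one-line change; a sketch of the dh7 form in debian/rules (path taken from tkamppeter's chmod above; the override pattern is standard debhelper):

```make
# debian/rules (dh7 sketch): stop dh_fixperms from resetting the serial
# backend's mode, then tighten it ourselves
override_dh_fixperms:
	dh_fixperms -Xserial
	chmod 700 debian/cups-filters/usr/lib/cups/backend/serial
```

With cdbs, the equivalent is the single line DH_FIXPERMS_ARGS := -Xserial in debian/rules.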
[10:37] <seb128> xnox, because I read the diff before commiting to the vcs
[10:37] <pitti> infinity: (debian/patches/debian_changes *cough*)
[10:37] <xnox> seb128: ok. but that is racy, until we build out of VCS without uploading source packages.
[10:37] <infinity> The number of times I've commited a sane diff to a VCS then proceeded to produce a slightly insane source package is rather large.
[10:38] <seb128> infinity, well, if you know you can't get stuff done right :p
[10:38] <xnox> most of the time it is fine, until it isn't. =)))))
[10:38]  * seb128 hides
[10:38] <seb128> (joking)
[10:38] <infinity> seb128: says the man who's repeatedly dropped patches that were in the archive? :P
[10:38] <seb128> xnox, well, it's also that double checking takes time and sometime you have ETOOMANYTHINGS todo
[10:39] <infinity> A debdiff would easily show you "hey, I remember changing this thing, but I sure didn't drop fix_arm_again_argh.patch, I wonder what that's about."
[10:40] <seb128> if we want "debdiff and ack before upload" to be standard maybe we should have dput to do the debdiff and let you say y/n...
[10:40] <cjwatson> The checks are hardcoded into my fingers, and I still seem to get lots of uploads done, so ;-)
[10:40] <infinity> cjwatson: Pfft, you're pretty underrepresented every release in tumbleweed's pie chart of doom.
[10:41] <seb128> well, it's just that I never felt a strong need to get the diff a second way after checking it using the vcs
[10:41] <seb128> but maybe I'm wrong
[10:41] <seb128> in practice it didn't bite me too much so far so I never felt the need to change that
[10:41] <cjwatson> I honestly think you are, based on my experience of the two disagreeing from time to time
[10:41] <infinity> seb128: But the VCS diff != the package diff, especially if you have auto-generated files you don't check in, and also because the archive may have changes your VCS doesn't.
[10:42] <infinity> seb128: And if you think it hasn't bitten you, maybe the people who've had to unrevert things you've reverted in the past haven't yelled loudly enough. :)
[10:42] <cjwatson> I'm not saying I read every line of the debdiff, but I do skim it and check for files I wasn't expecting and the like
[10:42] <seb128> well I do debdiff in non trivial updates
[10:42] <seb128> or in merges
[10:42]  * xnox was bitten by not checking debdiff, don't want to be there again
[10:43] <pitti> yeah, usually the kind of errors that you make is not in the fine details, but more like "debian/control was autogenerated and I forgot to change control.in", or "debian/patches/ added/dropped an unexpected one"
[10:43] <seb128> infinity, if you didn't bother commiting your change to the vcs you have a blame for that as well
[10:43] <cjwatson> It's your responsibility to double-check that you aren't reverting stuff by accident.
[10:43] <infinity> seb128: No.  I really don't.  The VCS isn't authoritative, the VCS isn't authoritative and, also, the VCS isn't authoritative.
[10:43] <cjwatson> This goes for all uploaders.
[10:43] <pitti> infinity: (but you do cause pain to people who actually use it)
[10:44] <seb128> infinity, that's called "let's not bother to do my change properly and let's create work for others", not being a good citizen either...
[10:44] <infinity> seb128: I do try to check in more and more these days (though, the number of packages in the archive that have a VCS-* that I can't commit to is irksome), but that doesn't change that the archive is authoritative.
[10:44] <seb128> infinity, the archive being authoritative is not a valid reason to not bother doing the change properly and get them in the vcs if there is one
[10:44] <infinity> seb128: I could turn your statement right around for you, when your failure to merge the archive changes means people need to re-fix the same things.  *shrug*
[10:45] <cjwatson> seb128: The difference is that when infinity gets this wrong it doesn't undo your work.
[10:45] <cjwatson> When you get this wrong it undoes other people's work.
[10:45] <cjwatson> That's clearly worse.
[10:45] <infinity> seb128: The VCS being your preferred workflow is not a valid reason to not diff against the archive to avoid reverts. QED.
[10:46] <tjaalton> infinity: -intel 2.20.14 uploaded. I'll give sna a go as well
[10:46] <seb128> cjwatson, well, I'm not trying to rank it, I'm just saying that ignoring the Vcs and letting others deal with the work of finding the diff and getting it included is not correct either
[10:46] <infinity> (This would all be solved if we built directly from VCS tags, but we don't.  And as long as we don't, people need to stop pretending their preferred workflow is also "the only way to upload correctly")
[10:46] <pitti> xnox: metacity on openjdk> I think there was some talk about MIRing something like twm instead
[10:46] <infinity> tjaalton: <3
[10:46] <cjwatson> Sure.  But it is not justification for pretending such things don't exist
[10:46] <pitti> xnox: I thought openjdk just needed _a_ WM, not necessarily one as complex as metacity
[10:46] <seb128> cjwatson, oh, I don't, and I agree it's the responsibility of the uploader to check that no archive change gets reverted
[10:46] <infinity> seb128: Hey, I tried to commit to software-center earlier today, and couldn't.  This sort of thing frustrates me. :P
[10:47] <infinity> seb128: (mvo added me to the team, right after merging my changes from the archive, though.  So, he did it right, and now I can do it his brand of right)
[10:47] <seb128> cjwatson, I just disagree on the fact that the archive being authoritative should be a right to bypass the Vcs and a way to dump the work on other
[10:47] <cjwatson> seb128: We're not saying it should
[10:47] <xnox> pitti: I have now finished downloading the source package. It can use metacity or twm. And it seems like it is used in the check target.
[10:47] <seb128> infinity's position sounds like "anyone should be able to just upload to the archive and not have to bother if the source is maintained in a Vcs"
[10:48] <seb128> but maybe I read him wrongly
[10:48] <cjwatson> Although I would say that you don't have a lot of experience of trying to do work across the whole archive, and running into the huge number of misconfigured VCSes
[10:48] <seb128> sorry if that's the case
[10:48] <xnox> pitti: i wonder if it is sufficient to move that bit to DEP-8 and not have either metacity or twm in main.
[10:48] <cjwatson> infinity and I both do
[10:48] <pitti> xnox: twm is in universe, but putting it into main if we can drop metacity from main sounds like a good trade
[10:48] <xnox> pitti: the check won't run on non x86 platforms though =(
[10:48] <cjwatson> And that surely colours our outlook
[10:48] <infinity> seb128: I don't personally just upload willy-nilly without checking for a VCS-* field.  On the other hand, your statement is more or less true.  Archive uploads are the One True Source for the packages, like it or not.
[10:49] <seb128> cjwatson, well, if the Vcs is misconfigured it's the responsibility of whoever is handling the Vcs and the package indeed
[10:49] <infinity> seb128: So, my position isn't as caustic as you make it out, I still try to find the maintainer's preferred methods.
[10:49] <xnox> seb128: which doesn't help, if the core-dev doesn't have write access to it.
[10:49] <cjwatson> seb128: Sure.  But when you're uploading 500 packages and run into 100 of these ...
[10:49] <cjwatson> (Or whatever)
[10:49] <infinity> (And yeah, it annoys me to no end when someone bitches me out for not committing to a VCS I can't commit to)
[10:50] <infinity> Also, I won't commit to a VCS-* for a 100-package rebuild run or something.
[10:50] <seb128> xnox, a vcs for a package in the archive should allow commit from the people who can upload
[10:50] <infinity> (Though, I also don't care if those no-change changelogs get lost)
[10:50] <cjwatson> seb128: hahahahaha.  I would love that to be close to true
[10:50] <pitti> infinity: if the only uploaded diff is a ubuntu1.1 no-change rebuild, I admittedly often don't bother grabbing the diff and committing it, I just overwrite it
[10:50] <pitti> (as recently seen with some py3.3 rebuilds)
[10:50] <seb128> cjwatson, I said "should" ;-)
[10:50] <diwic> cjwatson, infinity, from a different-colored outlook; if you don't have commit access to the vcs, why don't you ask for sponsorship from the person who has?
[10:51] <xnox> seb128: ps branches? =)
[10:51] <infinity> seb128: Actually, I don't wildly care if it's committable IF I also know that the people who maintain the package will integrate uploads.
[10:51] <cjwatson> diwic: When trying to complete a transition involving 500 packages?
[10:51] <cjwatson> Sure, if you don't mind it taking 5x longer
[10:51] <infinity> seb128: (For instance, not all of core-dev can commit to debian-installer, but all the people who upload debian-installer are also debdiff freaks who will notice and merge)
[10:51] <seb128> anyway
[10:51] <seb128> I don't think we have disagreements there
[10:52] <infinity> diwic: Yeah, that's not going to happen.
[10:52] <cjwatson> infinity: Actually that's not a good example, it's lp:~ubuntu-core-dev/debian-installer/ubuntu
[10:52] <cjwatson> infinity: But for ubiquity, yes
[10:52] <cjwatson> (Which is a bug)
[10:52] <infinity> diwic: Except in rare cases of large packages (firefox/tbird, libreoffice, eglibc, gcc, etc) where I don't want to step on toes, asking permission to upload something I can upload is a horrible waste of effort.
[10:53] <cjwatson> diwic: If I notice that there's a VCS involved, and the change is non-trivial, then in general I will make an effort to offer up a branch for merging or some such
[10:53] <infinity> cjwatson: Oh, did it move somewhere in the last N years and I never paid attention, just checked out the new location?
[10:53]  * xnox off to a meeting in a glass cage =)
[10:53] <cjwatson> diwic: But usually, by the time a committer notices, I've finished all the rest of the work and long since moved on
[10:53] <infinity> diwic: And in all those cases, I tend to ask to be added to the team, rather than ask for sponsorship, but ymmv.
[10:53] <cjwatson> Interactions between humans are the slowest thing in the project
[10:54] <cjwatson> infinity: I don't think it was ever somewhere else, but you could well be mixing it up with another package
[10:54] <infinity> cjwatson: I could well be.  I've long since forgotten and stopped caring about what ~ubuntu-installer buys me and what it doesn't. :P
[10:55] <cjwatson> ubiquity is the problem child I know about.  I need to check whether bugmail configuration is now sane enough such that I can add ubuntu-core-dev to ubuntu-installer without spamming the world.
[10:55] <infinity> cjwatson: But the point still stands, I don't mind a package's VCS having a tighter control group than "core-dev", they act as a bastion of code review and release management.  But the key is that that group needs to be okay with merging out-of-band uploads instead of being whiney. :)
[10:56] <diwic> cjwatson, infinity, so sure, it buys you time, but you're putting the time on somebody else, so the time is not /saved/, it has just moved to somebody else. That might be fair, since we're short on people with your knowledge, but I just want to point that out.
[10:56] <infinity> cjwatson: My complaint comes in when someone says "all releases must happen from our VCS" and "also, you can't commit to it".
[10:57] <cjwatson> diwic: I'm not trying to justify it and say that it's OK.
[10:57] <didrocks> infinity: did anyone tell you that?
[10:57] <infinity> diwic: If building from VCSes was as unified and sane as building from source packages, this would be a very different conversation, to be fair.
[10:58] <mvo> infinity: you could simply push the changes into a separate (owned by you) branch. but +1 on that the workflow is not ideal
[10:58] <cjwatson> diwic: I'm saying that I'm not prepared to wait for a sponsor in the case where our documented community procedures say I can upload; not that I'm not prepared to offer a branch for merging.
[10:58] <cjwatson> But that certainly takes substantially more time than just uploading.
[10:58] <mvo> infinity: hm, this reads a bit harsh, I didn't mean it that way :)
[10:58] <infinity> didrocks: It's certainly happened in the past.  Or, rather, I've uploaded a package with a VCS I can't commit to, and later been told off for not instead submitting a merge proposal or something.  Same general effect.
[10:59] <infinity> mvo: Hahaha.  You always merge my archive uploads anyway, you're my hero in this debate.
[10:59] <didrocks> infinity: that's why we added that to all projects following daily-building (http://bazaar.launchpad.net/~unity-team/unity/trunk/view/head:/debian/control#L48)
[10:59] <didrocks> infinity: not sure how to make it more visible though
[10:59] <infinity> mvo: Plus, you added me to your commit group within hours of my asking. :P
[10:59] <mvo> if the UDD branch would simply allow merging the upload into trunk ...
[10:59] <didrocks> so it's like "the daily upload system will notice/stop and we'll get informed and clean that ourself"
[10:59] <mvo> infinity: haha, indeed, I would have added you within minutes if I wasn't sleeping
[10:59] <didrocks> (which is what happened once the fixed appmenu-gtk was uploaded)
[11:00] <infinity> didrocks: Yeah, that's perfectly suitable (not necessarily all the automation that I'm still having a hard time coming to terms with, but the private-group-with-open-uploads policy)
[11:00] <infinity> At least, that works for me.
[11:01] <infinity> One less VCS I need to commit to. :P
[11:01] <didrocks> :)
[11:01] <cjwatson> It seems more work for you guys than just adding ubuntu-core-dev to unity-team, though.  At least if bugmail configuration and the like is suitable for that.
[11:01] <didrocks> cjwatson: it's a long term plan/long discussion with PS. I'll spare you that :)
[11:01] <tkamppeter> pitti, I added "DEB_DH_FIXPERMS_ARGS := -Xusr/lib/cups/backend" as it was in the cups package, thank you anyway for the hints.
[11:02] <infinity> Some day, I'll commandeer a bunch of tools guys and re-do the Maemo build-from-tags workflow and we can all stop having this "which copy of the source is the authoritative one?" debate.
[11:02] <didrocks> (and yeah, a lot of spams because of all the bugs, but I think we'll sort something out in the end)
[11:03] <didrocks> cjwatson: ok, FYI, finished checking all the 25 projects bootstrapped and appmenu-gtk was the only guilty one
[11:03] <didrocks> now, let me make that clear in the bootstrap procedure that it's something to check
[11:04] <diwic> cjwatson, I don't exactly know what notification mechanisms there are when uploading things to the archive, but can we do something on that side to notify the vcs owner that a non-vcs-based upload was done?
[11:05] <infinity> diwic: In most cases, the "VCS owner" is "ubuntu-dev" or "ubuntu-core-dev", so, uhm, hell no.
[11:05] <infinity> diwic: Package subscriptions, however, are a long time Soyuz wishlist bug.
[11:05] <cjwatson> UDD was supposed to solve this - in the case where it's a UDD branch, the importer submits a merge proposal
[11:05] <cjwatson> But the UDD importer has enough other problems that it's tough to rely on it
[11:06] <diwic> infinity, what is the recommended way of me finding out that an p
[11:06] <diwic> oops
[11:06] <diwic> infinity, what is the recommended way of me finding out that an upload of pulseaudio has been done?
[11:06] <infinity> Honestly, it doesn't seem like that much effort to just grab the latest source and debdiff before you upload.
[11:06] <diwic> and I need to merge back the result into the vcs
[11:07] <cjwatson> diwic: 'pull-lp-source -d pulseaudio' and check that it's the version you expect
[11:07] <infinity> If no one uploaded between your last and your current, apt-get source is a no-op (if you still have your old one), and you want to debdiff before upload ANYWAY, since the person introducing cruft may well be you. :P
[11:08] <infinity> I've certainly broken my own sources between my upload and my upload before.
[11:08] <infinity> I'm pretty much a jerk to myself that way.
[11:08] <cjwatson> pull-lp-source has made lots of things massively easier for me.
[11:08] <diwic> ok
[11:08] <infinity> Yeah, I just recently have started trying to train my fingers to use pull-lp-source and pull-debian-source.
[11:09] <infinity> Especially as a replacement for surfing old versions on lp.net/ubuntu/+source and snapshot.debian.org
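The pre-upload habit infinity and cjwatson describe (fetch the archive's current source, then debdiff before uploading) might look like this as a helper. The package name is just an example, and both tools are guarded since `pull-lp-source` (ubuntu-dev-tools) and `debdiff` (devscripts) may not be installed; the `preupload_diff` name itself is invented:

```shell
#!/bin/sh
# Sketch of the pre-upload check discussed above. Assumes it runs in a
# directory containing your candidate .dsc; the archive .dsc is
# downloaded next to it.
preupload_diff() {
    pkg=$1
    for tool in pull-lp-source debdiff; do
        if ! command -v "$tool" >/dev/null 2>&1; then
            echo "skipping: $tool not installed"
            return 0
        fi
    done
    pull-lp-source -d "$pkg"        # -d: download only, don't unpack
    set -- "${pkg}"_*.dsc
    if [ "$#" -lt 2 ]; then
        echo "need both the archive .dsc and your candidate .dsc"
        return 0
    fi
    # Glob order is lexical, not version order: check which .dsc is
    # which before trusting the direction of the diff.
    debdiff "$1" "$2"
}
```

An empty or fully expected diff is what you want to see; dropped patches or reverted hunks you don't remember making are the red flags.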
[11:09] <didrocks> cjwatson: https://wiki.ubuntu.com/InlinePackaging?action=diff&rev2=11&rev1=10 FYI
[11:11] <infinity> cjwatson: Oh hey, instead of this scintillating conversation about various heads impacting random flat surfaces...
[11:12] <infinity> cjwatson: Did you ever look into the coreutils testsuite hang on powerpc?  (It's factor(1) hanging on certain input, I haven't gotten much deeper into it than that)
[11:13] <infinity> cjwatson: Annoyingly, due to the britney hack for sulfur's sadness, it migrated despite the FTBFS.  Which, I suppose, isn't world-ending, but irksome.
[11:14] <cjwatson> infinity: No, I got as far as filing RT#57703 ...
[11:14] <cjwatson> infinity: Don't suppose you have a box I could debug it on?
[11:14] <infinity> cjwatson: I do indeed.
[11:14] <infinity> cjwatson: You can even have root.
[11:15] <infinity> (Cause I'm too lazy to set up schroot on it right now)
[11:15] <dholbach> can somebody please reject https://code.launchpad.net/~scarneiro/ubuntu/raring/adns/fix-for-ignored-make-clean-errors/+merge/136558 and https://code.launchpad.net/~scarneiro/ubuntu/raring/dictclient/fix-for-ignore-make-clean-errors/+merge/136563?
[11:18] <pitti> dholbach: erledigt
[11:20] <dholbach> danke pitti :)
[11:21] <tkamppeter> pitti, can you upload cups-filters from BZR to Debian and Ubuntu Raring? I have released 1.0.15 fixing some bugs.
[11:22] <OdyX> tkamppeter: we should merge the debian-wheezy to include the copyright fixes.
[11:22] <pitti> tkamppeter: i. e. re-name your 1.0.25-0ubuntu1 version in bzr to -1 and experimental?
[11:22] <tkamppeter> pitti, will do.
[11:23] <pitti> tkamppeter: I am, just wanted to confirm that this is correct
[11:23] <pitti> tkamppeter: ok, doing
[11:24] <tkamppeter> pitti, done.
[11:24] <pitti> hmm
[11:24]  * pitti aborts build, pulls, and does again then
[11:26] <pitti> tkamppeter: uploaded to experimental; we can sync it in half a day or so when it got imported
[11:26] <pitti> (and on a related note, yay for 5 times faster upload bandwidth)
[11:27] <tkamppeter> pitti, OK, thanks.
[11:28] <pitti> can someone please remind me what the magic $http_proxy was on the porter boxes to get outside?
[11:29] <pitti> ah, found it, but I get a "403 forbidden" with it
[11:29] <pitti> so, tarball upload it is
[11:34] <OdyX> pitti: we should make tkamppeter a DM
[11:39] <OdyX> tkamppeter: is there some useful stuff to backport to cups-filters' 1.0.18 ?
[12:03] <tkamppeter> OdyX, there are lots of bug fixes on pdftops and texttopdf (debian/changelog entries with references to bug reports). Each of them is fixed by rather small changes in the upstream code (see upstream BZR). These are worth backporting. The libqpdf switchover of pdftopdf is a bigger change which you should not backport.
[12:04] <OdyX> tkamppeter: hrm yes will try.
[12:05] <OdyX> tkamppeter: could you find what was breaking the test-suite?
[12:15] <tkamppeter> OdyX, no, it seems that for some unknown reason CUPS does not remove the job control files when the exception path PS->pstops->PS printer is allowed. I cannot imagine why. AFAIK it should be CUPS' responsibility to remove these files.
[12:18] <OdyX> tkamppeter: printing to stderr, wrong return value, don't know.
[12:31] <mdeslaur> infinity: per comment in bug 1084054, could you kill the vlc in -proposed, please?
[13:19] <israeldahl> I have been trying to figure out just exactly how to push a new version of a package into the repos (to be reviewed) so it can be uploaded.  Anyone know / have a good resource to read?
[13:21] <israeldahl> Do I just make a separate branch?
[13:23] <tumbleweed> israeldahl: yes. http://developer.ubuntu.com/packaging/html/udd-merging.html#merging-a-new-upstream-version
[13:24] <israeldahl> awesome, thank you!
[13:27] <bdrung> does someone know why gettext fails to build (unmet dependencies)? default-jdk depends on default-jre (= 1:1.7-43ubuntu3) and openjdk-7-jdk (>= 7~u3-2.1), but they are not going to be installed?
[13:28] <bdrung> doko_: ^
[13:28] <israeldahl> tumbleweed can i use git instead of http?
[13:30] <cjwatson> bdrung: huh?  I sbuilt it locally about an hour ago and it was fine
[13:30] <cjwatson> bdrung: (working on a merge so please leave it alone)
[13:30] <cjwatson> oh damnit you already uploaded
[13:30] <cjwatson> bdrung: just leave it please.  I'll sort it out
[13:30] <bdrung> cjwatson: yes (it built fine in pbuilder)
[13:31] <bdrung> cjwatson: thanks. i am happy to leave it to you.
[13:32] <tumbleweed> israeldahl: not entirely sure what you mean there. But we need a source tarball. Ideally one the upstream provides, although sometimes one has no choice but to generate one from their git repository
[13:33] <israeldahl> Ok, just wondering.  I can use a local tarball though.  thanks
[13:33] <apw> @pilot in
[13:33] <apw> bah stupid bot
[13:33] <ogra_> on vacation
[13:34] <apw> that bot never works for me ... ever
[13:34] <cjwatson> bdrung: Hmm, OK, it's a problem between ca-certificates and ca-certificates-java, but I'm going to need to rebootstrap ca-certificates-java somehow to fix it
[13:35] <cjwatson> (it can't build for the same reason)
[13:36] <seb128> hallyn, hey, thanks for the qemu fixes, the current binary seems to give a working spice on i386 ;-)
[13:36] <seb128> hallyn, 	"dh_link -pqemu-kvm-spice usr/bin/qemu-system-i386-spice usr/bin/kvm-spice" is buggy though
[13:37] <seb128> hallyn, there is no "qemu-system-i386-spice" binary in the deb, only a "qemu-i386-spice" and "qemu-spice"
[13:44] <cjwatson> i386 builders on manual while I rebootstrap ca-certificates-java
[13:50] <hallyn> seb128: then perhaps one more ppa upload before going to archive :)
[13:50] <seb128> hallyn, well, just fixing the symlink should be easy enough to fix with the archive upload ;-)
[13:52] <hallyn> ok you've convinced me
[13:55] <hallyn> seb128: how weird though, why no qemu-system-i386-spice?  huh...
[13:56] <hallyn> yeah no those are not the same
[13:56] <seb128> hallyn, I don't know, good question ... what's the difference with -system?
[13:56] <hallyn> the non-system one is qemu-user, different thing.  drat.
[13:56] <hallyn> back to the build rules
[13:58] <hallyn> oh duh.  i see.  my stupid bad
[14:00] <hallyn> really there is no reason for the qemu-user-spice binaries.  I think I'll pull them from the package
[14:01] <hallyn> they're not even linked against libspice.
[14:01] <ogra-cb> and what exactly would they do anyway
[14:02] <vibhav> hmm, the sponsoring queue is HUGE. Let's see if I can help
[14:02] <vibhav> Does anybody know what to do with: https://bugs.launchpad.net/ubuntu/+source/gnome-screensaver/+bug/952771 ?
[14:03] <vibhav> It looks more like a feature request to me, should I tell the reporter that it will be fixed in raring?
[14:03] <hallyn> ogra-cb: right
[14:07] <mdeslaur> OdyX: hi! I've got comments about your cups git tree: 1- you should probably add PidFile to the list of warnings, 2- you should remove the VCS tag from cups-files.conf too
[14:09] <OdyX> mdeslaur: the first is committed already, but not pushed. good point for the second.
[14:09] <mdeslaur> OdyX: cool
[14:10] <OdyX> mdeslaur: you confirm that these are only warnings, right ? Are they considered as configuration stanzas or just discarded + warning ?
[14:11] <mdeslaur> OdyX: yeah, so as I said in the bug, I'm probably going to push the config file split in Ubuntu stable releases, but without conf file changes...it's not a really elegant thing to do, but we try not to have any conf file prompts with security updates
[14:11] <mdeslaur> OdyX: they are just discarded and logged
[14:11] <OdyX> mdeslaur: ah okay, good.
[14:11] <OdyX> mdeslaur: how do you avoid cupsd.conf prompt when SystemGroup is removed ?
[14:11] <vibhav> Is there any guide on understanding diff3 conflict markers?
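vibhav's question didn't get an answer in-channel; for reference, here's a self-contained demo (file contents invented) of the markers `diff3 -m` produces for a conflict: `<<<<<<<` opens your version, `|||||||` introduces the common ancestor, `=======` separates it from the other version, and `>>>>>>>` closes it, each marker line naming the file it came from.

```shell
#!/bin/sh
# Make three throwaway files where "mine" and "yours" both edit the
# same line of "base", then merge them to provoke a conflict.
dir=$(mktemp -d)
printf 'greeting from my branch\n'   > "$dir/mine"
printf 'greeting\n'                  > "$dir/base"
printf 'greeting from your branch\n' > "$dir/yours"
# diff3 -m exits non-zero when there are conflicts, hence the || true.
diff3 -m "$dir/mine" "$dir/base" "$dir/yours" || true
rm -r "$dir"
```

This is the same marker scheme bzr and git reuse for their merge conflicts, so it transfers directly.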
[14:12] <mdeslaur> OdyX: I'm not going to remove SystemGroup, and I'm going to remove the warning about it
[14:12] <OdyX> mdeslaur: you'll get a prompt as soon as an admin modified the conffile through the webadmin, no ?
[14:12] <OdyX> mdeslaur: ah, yeah, good point.
[14:12] <mdeslaur> OdyX: it will be left there, which is kind of ugly, but harmless
[14:12] <mdeslaur> since it's the only one we shipped by default in the conf file, it's not so bad
[14:13] <mdeslaur> and once they upgrade to a newer release, then the conf file cleanup will happen
[14:13] <OdyX> mdeslaur: sure. But it gets in the way when you get the configuration prompt for another reason.
[14:13] <FourDollars> cjwatson: ping
[14:14] <mdeslaur> OdyX: well, in theory, we shouldn't be getting any configuration prompt
[14:14] <cjwatson> FourDollars: yes?
[14:14] <mdeslaur> (for the stable releases)
[14:15] <FourDollars> cjwatson: ubuntu-meta needs some patch for linux-headers-generic-lts-quantal .
[14:15] <cjwatson> I know, I was holding off on deciding about that
[14:15] <FourDollars> cjwatson: ubuntu-meta on precise.
[14:16] <FourDollars> cjwatson: I see. Thanks.
[14:16] <cjwatson> You may note my germinate upload which was aimed at doing something about that
[14:16] <cjwatson> (well, supporting it)
[14:17] <FourDollars> OK. There is something I did not follow.
[14:17] <OdyX> mdeslaur: ah, you mean that you'll avoid the configuration prompt by not changing cupsd.conf, right ?
[14:17] <OdyX> aka shipping the same one. Good idea. I had a hard time parsing your idea
[14:18] <mdeslaur> OdyX: yes, that's what I meant
[14:18] <OdyX> mdeslaur: good. That said, I noticed Wheezy ships a cupsd.conf with a bloody cvs tag, I hope Ubuntu's doesn't.
[14:19] <mdeslaur> OdyX: there's one in cupsd.conf.default, but none in cupsd.conf
[14:19] <OdyX> mdeslaur: nice.
[14:19] <mdeslaur> (at least, on quantal...haven't checked the older releases yet)
[14:19] <FourDollars> cjwatson: Have you also patched ubuntu-meta of precise for linux-headers-generic-lts-quantal ?
[14:20] <OdyX> mdeslaur: I'll try to get the 1.5.3 (precise) version done later today (or tomorrow).
[14:20] <FourDollars> cjwatson: I didn't see newer ubuntu-desktop in precise-proposed.
[14:20] <OdyX> but as it will be for our next stable, I'll get this SystemGroup thing dropped.
[14:21] <cjwatson> FourDollars: I haven't uploaded it yet
[14:21] <cjwatson> FourDollars: It's not urgent compared to all the other SB work
[14:21] <cjwatson> FourDollars: But I know about it and I'll sort it out, don't worry
[14:21] <FourDollars> cjwatson: So I will upload it eventually, right?
[14:22] <FourDollars> cjwatson: So you will upload it eventually, right?
[14:22] <cjwatson> FourDollars: That's what I said, yes
[14:22] <FourDollars> cjwatson: Got it. I just make sure you have noticed that. Thanks.
[14:24] <FourDollars> s/make sure/want to make sure/
[14:24] <mdeslaur> OdyX: oh, one more thing...the upstream security patch drops UseNetworkDefault from the html documentation, but we still have that option in one of our other patches, so I added it back in
[14:27] <OdyX> mdeslaur: do you have commit rights ?
[14:27] <mdeslaur> OdyX: no
[14:27] <OdyX> mdeslaur: I'd be happy to have you commit these in our experimental repository directly, or handle that in branches there.
[14:27] <OdyX> mdeslaur: alioth account ?
[14:28] <OdyX> it facilitates merging and diffing.
[14:28] <mdeslaur> OdyX: I don't have a alioth account, sorry
[14:31] <OdyX> mdeslaur: no problem. It's git afterall.
[14:31] <mdeslaur> OdyX: oh, yeah, I'd have to learn git too :P
[14:31]  * mdeslaur cringes
[14:31] <OdyX> mdeslaur: eh, yeah. Yaknow we're in 2012 right ? :)
[14:32] <mdeslaur> OdyX: yeah, that's why I use bzr! :)
[14:32] <mdeslaur> hehe :)
[14:36]  * cjwatson belatedly remembers to put the i386 builders back on auto
[14:58] <OdyX> mdeslaur: regarding your "2- you should remove the VCS tag from cups-files.conf too": that's done already. cups-files.conf is in KEEP
[14:59] <cjwatson> doko_: Do you have a current VCS for Ubuntu binutils?  I don't really want to upload it for a single change
[15:00] <mdeslaur> OdyX: oh, hrm...it didn't seem to work for me...ok, thanks, I'll take a look on my end
[15:00] <mdeslaur> OdyX: sorry for the false alarm
[15:04] <OdyX> mdeslaur: np. More eyes don't hurt.
[15:20] <micahg> xnox: with the loss of metacity, does that mean that 3D cards will be needed for installing Ubuntu Desktop?
[15:21] <ogra_> compiz alone doesnt need much, llvmpipe should work snappy with it even on slow CPUs
[15:21] <micahg> xnox: I guess it's already a requirement of sorts, so nevermind, E_NEEDSOMECAFFEINE
[15:21] <pitti> micahg: not quite; that was done with dropping unity-2d last cycle already
[15:21] <ogra_> and yeah, you are installing ubuntu-desktop which definitely requires GL
[15:21] <xnox> micahg: 3D cards are not needed, as we have llvmpipe.
[15:22] <pitti> xnox: (which doesn't really get you that far, but oh well)
[15:22] <pitti> so yes, you do need a 3D card
[15:22] <pitti> or a really fast CPU
[15:22] <ogra_> pitti, it's fine for plain GL stuff, as long as there is no excessive compositing
[15:22] <xnox> pitti: what do you mean? /me ran installer in the dog slow VM.
[15:23] <xnox> pitti ogra_ micahg: note that for the installer I only enable a single compiz plugin (decor) and hope to have fast texture rendering.
[15:23] <ogra_> compiz as a WM is really low demanding if you dont have any fancy effects in use
[15:23] <pitti> xnox: well, sure, but it's not really a joy to use unity with all its effects there
[15:23] <xnox> (although there aren't that many textures in the installer)
[15:23] <pitti> or on an arm machine
[15:24] <pitti> ogra_: yeah, that doesn't seem to apply to unity as a whole though :)
[15:24] <ogra_> right, on arm devices where we dont have GL we dont provide ubuntu-desktop :)
[15:24]  * xnox doesn't even allow resizing or moving windows =)))) win \0/
[15:27] <didrocks> micahg: no need to CC me btw, I'm already subscribed to ubuntu-devel
[15:30] <micahg> didrocks: sorry
[16:04] <doko> cjwatson, just a personal one. which change do you mean?
[16:04] <cjwatson> doko: s/gettext:any/gettext/ in debian/control
[16:04] <cjwatson> I see pitti uploaded something following your most recent upload, if you didn't already know about it
[16:05] <doko> seen that, and integrated for the next upload
[16:05] <cjwatson> doko: cf. coreutils and most of the other stuff I uploaded today
[16:05] <cjwatson> doko: thanks
[16:13] <seb128> micahg, xpathselect is a new source...
[16:13] <seb128> micahg, I'm not sure what your "The only thing updated was debian/copyright." means
[16:13] <micahg> seb128: yeah, I know, I got trigger happy with E-Mail today, see followup
[16:14] <micahg> Launchpad failed
[16:14] <seb128> micahg, ok...
[17:06] <slangasek> didrocks: bootstrap> ah, alright, thanks for the explanation
[17:06] <didrocks> yw :)
[17:06] <didrocks> sorry for missing that in the review
[17:08]  * apw has an ocaml package which is using ocamlfind, and that is producing paths from another /build presumably from a library -- any idea how to debug such a thing?
[17:09] <barry> stgraber: i guess we leave it up to jodh to do the final merge?
[17:11] <stgraber> barry: yep, I don't think I have commit rights to upstart's trunk, so I need jodh for that
[17:17] <Laney> hmm
[17:20] <Laney> what's made $world uninstallable in r-proposed?
[17:21] <cjwatson> example?
[17:21] <Laney> I didn't phrase that quite correctly
[17:21] <Laney> https://launchpadlibrarian.net/124428233/buildlog_ubuntu-raring-i386.libcanberra_0.30-0ubuntu1_FAILEDTOBUILD.txt.gz
[17:22] <cjwatson> hm
[17:22] <seb128> gettext fallout I guess
[17:22] <cjwatson> component-mismatches, fixing
[17:28] <seb128> cjwatson, " debhelper : Depends: po-debconf but it is not going to be installed" ... that's likely the same issue you just fixed?
[17:28] <hallyn> seb128: ok, i think the pkg is good now, will push soon (qemu-linaro)
[17:28] <seb128> hallyn, great, thanks a lot for the effort to enable spice on i386 ;-)
[17:28] <Laney> seb128: that's in my build log too - I'm guessing they were all the same root cause
[17:29] <seb128> Laney, oh, right, I read the first line about dh-translations before ;-)
[17:34] <cjwatson> seb128: Yes
[17:34] <seb128> cjwatson, thanks
[17:34] <cjwatson> I'll do a mass give-back of stuff affected by uninstallability a bit later
[17:34] <cjwatson> modulo EOD soon
[17:38] <hallyn> seb128: i don't have upload rights.  do you mind grabbing the ppa6.dsc, removing the ppa6 from changelog, and pushing?
[17:38] <seb128> hallyn, doing it
[17:40] <hallyn> seb128: thanks
[17:56] <xnox> stgraber: mark as merged https://code.launchpad.net/~jamesodhunt/ubuntu/precise/libnih/bugs-740390+1062202/+merge/130504 already in precise-proposed
[17:57] <stgraber> xnox: done
[17:58]  * Laney is glad we have this task to keep the TB busy
[18:00] <didrocks> Laney: I'm sure they value this added karma :p
[18:01] <xnox> Laney: well is it TB or just pitti & stgraber ?! =)
[18:01] <stgraber> anyone on the TB is an owner of ~ubuntu-branches and can do it
[18:01] <Laney> the real power brokers
[19:51] <infinity> mdeslaur: Done.
[20:05] <rickspencer3> dang
[20:06] <infinity> I agree.
[20:11] <infinity> @pilot in
[20:28] <stokachu> cjwatson: is it possible to do both raid + luks encryption during preseed?
[20:49] <mikeit> HI
[20:49] <mikeit> hi
[21:16] <hallyn> zul: i'm going to add to qemu-kvm.conf in raring an optional auto-mount of hugepages fs.  do you think i should do so in the default /var/lib/hugetlbfs/group/kvm, or a shorter /run/hugetlbfs/kvm ?
[21:17] <zul> hallyn: i really don't have an opinion
[21:18] <hallyn> slangasek: ^ i assume there are LSB type considerations, among others....
[21:21] <hallyn> zul: there are two reasons to pick a custom location:
[21:21] <hallyn> zul: 1. it's long to type out 'kvm -mem-path /var/lib/hugetlbfs/group/kvm/page-xxxxxxx' :)  (but that's not that good a reason)
[21:22] <hallyn> zul: 2. libvirt is going to need to know the path, so we might want to pick our own so we always know where it is
[21:23] <zul> i think the /run/hugetlbfs/kvm might be a good choice, but that looks wrong; again, i have no opinion :)
[21:23] <slangasek> hallyn: I don't think it matters either way under the FHS / LSB.  It would be nice if such things were standardized, but wishes and horses
[21:24] <hallyn> zul: the problem with /var/lib/hugetlbfs/group/kvm/ is that the final pathname is dependent upon the supported hugepage sizes
[21:24] <slangasek> hallyn: this is somewhat comparable to the kernel's rpc_pipefs filesystem, which Debian mounts under /var/lib and I've moved to /run - but the only reason I've moved it is because of bootstrapping problems for /var itself
[21:24] <hallyn> actually that might be a wishlist bug worth filing against hugeadm
[21:24] <slangasek> which I don't think apply here
[21:24] <hallyn> it should have a /pagesize-default directory
[21:24] <hallyn> slangasek: no, i'd do it here like this because otherwise libvirt will have to do it itself anyway,
[21:24] <hallyn> so that it can hardcode a path in /etc/libvirt/qemu.conf
[21:25] <slangasek> hallyn: right, there should be an agreed convention on where it should live; I'm just opining that /var/lib vs. /run doesn't matter much AFAICS
[21:27] <hallyn> slangasek: for a new mount i wouldn't care where it goes, my question is more about whether i can make my own mount or whether i should use hugeadm --create-group-mounts=kvm
[21:27] <slangasek> oh
[21:27] <hallyn> slangasek: the problem with the latter is that the resulting path, /var/lib/hugetlbfs/group/kvm/pagesize-2097152/,
[21:27] <slangasek> I have no informed opinion on that question :)
[21:27] <hallyn> 1. is long, and 2. is not predictable across arches
[21:28] <hallyn> slangasek: ok :)
[21:29] <hallyn> slangasek: thanks
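[Editor's note: the custom mount hallyn is weighing against `hugeadm --create-group-mounts=kvm` could look something like the fstab sketch below. This is an illustration of the option discussed above, not what was actually shipped; the path, mode, and gid are assumptions. Note that hugetlbfs parses `gid=` as a numeric group id, not a group name.]

```
# /etc/fstab sketch: a fixed, short hugetlbfs mount for qemu-kvm,
# using the /run/hugetlbfs/kvm path floated in the discussion.
# gid=108 stands in for the numeric gid of the kvm group on the system.
hugetlbfs  /run/hugetlbfs/kvm  hugetlbfs  mode=1770,gid=108  0  0
```

With a fixed path like this, libvirt could hardcode it in /etc/libvirt/qemu.conf, and `kvm -mem-path /run/hugetlbfs/kvm` stays short and arch-independent, which were the two motivations given.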
[21:44] <stokachu> infinity: would you mind taking a peek at bug 1068199?
[21:44] <infinity> stokachu: doesn't that one already have a comment from me?
[21:44]  * infinity looks.
[21:45] <infinity> Oh, no.  It doesn't.  I'm thinking of another lucid bug, perhaps.
[21:45] <stokachu> :P
[21:46] <stokachu> cyphermox: if you get a chance could you check up on the status of bug 967091
[21:46] <infinity> stokachu: Do various things need rebuilding against those new headers to make it all work?
[21:47] <stokachu> infinity: the package to make use of this has been rebuilt and is in -backports i believe
[21:47] <infinity> stokachu: Sure, but wouldn't it need to be rebuilt against proper glibc headers? :P
[21:48] <stokachu> ah.. well..ermm.. im not sure
[21:48] <infinity> stokachu: Unless they statically defined the constant in their own source, which would then mean I don't have to.
[21:48] <stokachu> i would have to do some digging on that
[21:48] <stokachu> arges: ^ do you know?
[21:49] <arges> looking
[21:49] <stokachu> cyphermox: scratch that wrong person
[21:50] <arges> stokachu, I think it's a static thing... although we could probably test this, right?
[21:51] <stokachu> mdeslaur: bug 967091; could you take a look at this status when you get a chance?
[21:51] <infinity> arges: See, if it's just a static define, one could re-backport libhugetlbfs with the constant set, and be done with it.  I could build a quick test package for you right now.
[21:51] <infinity> stokachu: ^
[21:51] <stokachu> arges: what was the other package in question again??
[21:51]  * infinity does this.
[21:51] <arges> infinity, so I remember the person who was affected by this could install the .deb from a newer release and it worked
[21:52] <stokachu> oh yea thats right
[21:52] <arges> infinity, but if they installed the version built in the lucid chroot (the lucid backported version) it did not work
[21:52] <arges> even though they were the same exact versions of libhugetlbfs
[21:55] <arges> stokachu, another easy experiment would be just to install the newer eglibc version (with the proper MAP_HUGE define) in lucid and see if it works. but I feel like I've already done that
[21:56] <infinity> arges / stokachu: New package incoming.
[21:59] <stokachu> ok
[22:01] <infinity> arges / stokachu: http://people.canonical.com/~adconrad/hugetlb/
[22:02] <infinity> If that package behaves correctly (it's built on lucid), then I think we should upload that.  I'll keep the eglibc bug queued as well, since I need to do a lucid upload for other reasons.
[22:02] <stokachu> ok ill get this sent out for testing
[22:02] <infinity> Also, a review of the debdiff there would be appreciated. :P
[22:03] <stokachu> thanks i just quickly looked at the diff and it looks straightforward and should hopefully work
[22:03] <darkxst> Sarvatt, https://code.launchpad.net/~darkxst/ppa-purge/lp706774/+merge/137061
[22:05] <stokachu> infinity: lastly, this one isn't an sru but could you check to see if there is a precise upload for this? bug 1004775
[22:05] <stokachu> if it just needs an SRU justification written out i can do that, but i didn't know if it required sponsorship
[22:06] <infinity> stokachu: There's nothing uploaded for it in the precise queue, no.
[22:07] <infinity> stokachu: https://launchpad.net/ubuntu/precise/+queue?queue_state=1 for future helping yourself to said information. :)
[22:07] <stokachu> ah sweet /me bookmarks
[22:07] <infinity> cyphermox: Care to poke at 1004775 with stokachu for precise?
[22:07] <arges> bookmarked
[22:07] <cyphermox> infinity: ok
[22:08] <infinity> stokachu / arges: You can ignore the ? fluff there, the URL to remember is just /ubuntu/$dist/+queue
[22:08] <cyphermox> this one basically needs work in dnsmasq and nm
[22:08] <infinity> stokachu / arges: The drop-down lets you get at unapproved, new, etc.
[22:09] <stokachu> cool yea this is very helpful thanks
[22:10] <stokachu> cyphermox: is that different from what was done in quantal?
[22:10] <infinity> stokachu: Anyhow, I'm not actively monitoring that bug, so if you get that hugetlb backport tested for me and it's all good, give me the go-ahead and I'll sign and upload it.
[22:11] <infinity> Err, and add a bug reference to the changelog. :P
[22:11] <stokachu> infinity: will do - ive already sent it off to be tested
[22:11]  * infinity adds the bug ref now, so he doesn't forget.
[22:14] <cyphermox> stokachu: not especially different, but one of the patches that really makes a difference is really not simple
[22:15] <stokachu> cyphermox: ah ok, well if you could keep that bug on your radar for when you get some time to dig more into it
[22:17] <stokachu> cyphermox: and as always if you need me to do the trivial stuff so you can concentrate on the patch just let me know
[22:19] <stokachu> infinity: could you point me to someone who could answer a question about debian preseed setting up RAID+LUKS? Or if you know whether that's even possible. I can do one or the other but not both
[22:20] <infinity> stokachu: You want xnox.
[22:20] <infinity> stokachu: Maybe.
[22:20] <stokachu> xnox :D
[22:21] <slangasek> xnox would certainly know the answer
[22:22] <stokachu> ok cool ill try to catch up with him tomorrow during his local time
[22:22] <cyphermox> stokachu: here's what I'll do
[22:23] <cyphermox> just putting food in the oven so I can eat at some point and I'll poke at it now and until it's ready to upload, hopefully sometime before tomorrow
[22:24] <stokachu> cyphermox: awesome, really appreciate this
[22:32] <cyphermox> stokachu, there's already an SRU in progress in precise, so if you want to help, independently verifying https://bugs.launchpad.net/bugs/995165 would help a huge amount
[22:32] <cyphermox> stokachu: however, it requires ipv6 connectivity while you do the install, and installing with -proposed enabled
[22:35] <stokachu> cyphermox: ok ill see what i can do in the lab and update the case
[22:44] <cjwatson> stokachu: I'm actually not sure.  I suspect not without hacking - things are not as nestable as they should be. :-(
[22:46] <stokachu> cjwatson: ok cool we werent sure it was possible so ill relay that to them
[22:47] <stokachu> cjwatson: you think maybe we should consider a feature request for something like this?
[22:47] <stokachu> for future planning
[22:49] <mdeslaur> stokachu: status?
[22:50] <stokachu> mdeslaur: your last comment #193 indicated it was still waiting on sru -- was curious if youve gotten any feedback on it yet or if I should try to work with SRU team
[22:51] <stokachu> its still in unapproved queue https://launchpad.net/ubuntu/precise/+queue?queue_state=1&queue_text=libvdpau
[22:51] <mdeslaur> stokachu: I have not gotten any feedback from the sru team, no
[22:51] <mdeslaur> stokachu: could you try pinging someone from the SRU team? if there's an issue, let me know.
[22:52] <stokachu> mdeslaur: sure thing just didnt want to step on any toes if you were actively doing that -- i can see about getting it reviewed
[22:52] <stokachu> thanks for the response
[22:52] <stokachu> infinity: you up for one more? :)
[22:52] <cjwatson> stokachu: Sure
[22:53] <stokachu> bug 967091
[22:53] <cjwatson> stokachu: Though not quite sure when it'd fit - we really need to redesign the whole recipe format IMO :-/
[22:53] <stokachu> cjwatson: ok ill talk with mattrae and see if we want to file a request for that
[22:54] <mdeslaur> stokachu: np, thanks for doing a follow-up on it
[22:54] <stokachu> mdeslaur: my pleasure
[23:02] <infinity> stokachu: Maaaaybe.
[23:02] <stokachu> infinity: hopefully this one is just a click of a button :D
[23:02] <stokachu> aol style
[23:03] <infinity> stokachu: Sure.  Tell me all about it while I'm out smoking.
[23:04] <stokachu> infinity: pretty straightforward, users are seeing a blue tint on flash videos when running nvidia-current/-updates
[23:05] <stokachu> the fix affects libvdpau's behaviour for flash
[23:06] <stokachu> infinity: i haven't dug through the source to tell you exactly what the patch is doing but maybe mdeslaur could shed some light on it
[23:07] <mdeslaur> infinity: binary flash inverts two color channels when using vdpau acceleration. Newer vdpau versions detect when flash is using it, and re-inverts the two color channels
[23:07] <stokachu> but i can tell you blue faces dont show up anymore :)
[23:07] <mdeslaur> infinity: so flash videos don't have blue faces :)
[23:08] <mdeslaur> it's a dirty hacky workaround, but there is no hope of adobe fixing the issue in flash itself
[23:08] <mdeslaur> and it's included in quantal+ and in debian's vdpau for a while now
[23:08] <infinity> stokachu: So, what's the button that needs pushing?  Is this something I need to review in the queue, or release from proposed?
[23:09] <stokachu> its in the unapproved queue atm
[23:09] <verievied> hi all it's paddy drunk as hell
[23:10] <stokachu> https://launchpad.net/ubuntu/precise/+queue?queue_state=1&queue_text=libvdpau
[23:10] <infinity> stokachu: Check, having a look.
[23:14] <verievied> ttfn
[23:16] <infinity> stokachu / mdeslaur: Accepterificated.
[23:16] <mdeslaur> hehe
[23:17] <mdeslaur> infinity: thanks
[23:17] <stokachu> infinity: awesomeeee
[23:17] <stokachu> mdeslaur: ill get this tested and verified in the next few days to get it wrapped up
[23:17] <mdeslaur> I'm going to miss all the andorian porn
[23:17] <stokachu> lol
[23:19] <stokachu> ok thanks all :D im off to a family movie night with fairly odd christmas woooot