[00:01] slangasek: it does ignore stuff that is unknown / not bzr add'ed. Running in merge-upstream mode? These are the two cases I know when it ignores upstream changes. [00:01] [BUILDDEB] [00:01] split = True [00:01] xnox: thanks for the hint [00:01] ha. so it does have them, in the tarball =) === cpg is now known as cpg|away === mspencer_ is now known as mdspencer === fisted is now known as Guest43294 === jk-- is now known as jk- === RAOF_ is now known as RAOF === fisted_ is now known as fisted === fisted is now known as Guest21723 === patr|ck_ is now known as patr|ck === cpg|away is now known as cpg === shadeslayer_ is now known as shadeslayer [05:27] Good morning [05:50] hrm, mysqld is segfaulting during the php test suite === Ursinha_ is now known as Ursinha === rsalveti_ is now known as rsalveti === chilicuil_away is now known as chilicuil [06:12] Any bzr-git maestros in here? [06:14] shoot.. thats not a segfault.. its hitting an ASSERT.. whoops.. gotta turn those back off [06:26] RAOF: Oh hai. [06:26] RAOF: I'm not a bzr-git guy at all, but I totally need to hijack you to tell you that my laptop hates you. [06:26] RAOF: I've been getting those GPU lockup dialogs (after long freezes) several times a day. [06:27] RAOF: Do those actually go anywhere that gets read? :) [06:27] If you send them up they'll end up on the X team's workqueue. [06:27] * pitti is suffering from bug 1081009, that might be your's? [06:27] Launchpad bug 1081009 in xserver-xorg-video-intel (Ubuntu) "[arrandale] GPU lockup IPEHR: 0x02000022" [High,Incomplete] https://launchpad.net/bugs/1081009 [06:27] I've sent maybe 10 or so. [06:27] https://bugs.freedesktop.org/show_bug.cgi?id=55984 is seeing some action [06:28] Freedesktop bug 55984 in DRM/Intel "[ilk regression] gpu hangs on ironlake with 3.6 + -next + -fixes code" [Normal,Needinfo] [06:28] pitti: The symptoms definitely sound like mine. [06:28] although I'm not quite sure whether this was triaged right, Arrandale != Ironlake [06:28] and it's definitively unnerving; I'm running the quantal kernel [06:29] I'm running raring kernel and userspace. [06:29] I'm running raring userspace, but with this lockup the raring kernel is by and large useless for me [06:29] It only started for me a few days ago, after some dist-upgradery. [06:29] I need to reboot every hour or so, losing the last piece of work [06:29] for me it precisely started with the introduction of 3.77 [06:29] 3.7 [06:29] pitti: I don't need to reboot, I find VT switching out and back fixes it. [06:29] lucky you [06:29] not for me [06:30] Heh. [06:30] not even shutting down lightdm and restarting [06:30] So, almost the same. :P [06:30] But I'm on different hardware. [06:30] infinity: yeah, so I think you'rs is the fd.o one [06:30] but that actually also talks about two different issues, and the "other" one seems to be mine [06:31] I don't speak codenames, so I don't know what an Ironlake is. [06:31] But it's a Sandybridge CPU with Intel HD whatever graphics. [06:31] it's an iron. in a lake. [06:31] jk-: You're so remarkably helpful. [06:31] infinity: yeah, so sandybridge is the chipset generation after mine (arrandale) [06:31] infinity: Your *Sandybridge* is seeing problems? My Sandybridge, obviously, is absolutely stable. 
[06:31] infinity: I'm not quite sure about the ordering of ironlake [06:32] and I thought all those X freezes and pipeline underruns were a thing of the distant past :( [06:32] * pitti still remembers the long hours of testing various watermark heuristic patches [06:32] RAOF: my *sandybridge* is seeing problems too. [06:33] it's like every other raring user has this problem except the X.org team :) [06:33] RAOF: but i suspect it's got a lot to do with a stupid bios. [06:33] The funny thing is, I switched from discrete graphics to Intel on advice that the drivers were more solid. :P [06:33] Though, this is actually one of those silly hybrids, I could switch it in the BIOS and go nvidia for a while. [06:33] http://www.bryceharrington.org/Arsenal/ubuntu-x-swat/Reports/totals-raring-workqueue is what's on the X team's easy workqueue. [06:33] RAOF: last time I looked this was actually be the #1 raring problem on errors.u.c. [06:33] infinity: Is https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-intel/+bug/1041790 what you're seeing? [06:33] Launchpad bug 1041790 in xserver-xorg-video-intel (Ubuntu) "[sandybridge-m-gt2] GPU lockup IPEHR: 0x0b160001 IPEHR: 0x0b140001" [Medium,Confirmed] [06:33] infinity: they had been for years really [06:34] infinity: they are more solid though. just not completely foolproof. [06:34] RAOF: I don't remember what error codes I was getting. Is that saved anywhere? [06:34] you should have some files in /var/crash/ [06:34] I get proper dumps, anyway [06:34] Oh yes, I have tons of them. [06:35] RAOF: Yep, that's the one. [06:36] So, you could probably be all switch-to-SNA to get around the problem. [06:36] (base)adconrad@cthulhu:/var/crash$ sudo grep 0b140001 * | wc -l [06:36] 3272 [06:36] *doh* [06:37] RAOF: Speak slowly, I'm a toolchain guy. What's an SNA? [06:37] infinity: It's the shiny new acceleration architecture for Intel. [06:37] is it really spelled out like that? :) [06:37] Though, the comments in that bug about SNA sucking differently don't instill confidence. [06:38] Oh, yeah. [06:38] I might just hit the BIOS and turn the nvidia battery-eater back on, and switch to nouveau for a while. [06:38] A fine choice! [06:38] * pitti sobs a little; I haven't had X problems since karmic or so [06:39] * infinity hasn't had X problems since he used to live with daniels. [06:39] And back then, the X problems were forced on me. :P [06:39] "Here, try this" ... "ARGH!" [06:44] RAOF: So, I'm not one to generally jump on the "omg, revert" bandwagon, but perhaps reverting to the Q version for now might be sane? [06:44] RAOF: Or does that present ABI problems that mean reverting a whole stack of crap? [06:44] Or API problems, rather, ABI's no big deal, we'd be rebuilding. [06:45] Or is it actually a kernel bug, and reverting the X driver won't make a lick of difference? [06:49] the X drivers didn't actually change in raring that much [06:49] so reverting the bits in the kernel ought to work, given that raring userspace works just fine on a quantla kernel [06:50] and given how everyone complains about this, it might not be the worst idea [06:51] pitti: I suspect "reverting the kernel" is a bit more troublesome, unless someone's already isolated a small and self-contained (set of) commit(s) to revert? [06:51] pitti: If so, find a kernel team member in your timezone (say, smb) and, for the love of god, make it happen. :) [06:52] it might actually be easier/possible to DKMSify quantal's i915? 
[06:52] at least back then I built the i915 module out of an otherwise unbuilt kernel tree [06:52] not sure how many dependencies it grew by now === smb` is now known as smb [07:46] * smb tries not to look too awake yet [07:54] * xnox my sandybridge locks up. TTY7 -> TTY1 -> TTY7 unfreezes it usually and then I get the popup "we detected it froze. Did you need a hard reset?" [07:54] xnox: was discussed some two hours ago already [07:54] xnox: Yeah, that's exactly how it works for me. pitti's not so lucky, VT switching doesn't fix him. [07:54] xnox: bug 1041790 [07:54] Launchpad bug 1041790 in xserver-xorg-video-intel (Ubuntu) "[sandybridge-m-gt2] GPU lockup IPEHR: 0x0b160001 IPEHR: 0x0b140001" [Medium,Confirmed] https://launchpad.net/bugs/1041790 [07:55] I see =) [07:55] * xnox just woke up and reading backscroll. [07:55] pitti: thanks for merging adt wrapper. Is it "deployed" to jenkins as well? [07:55] xnox: yes [07:56] pitti: awesome, thanks. [08:00] good morning [08:05] morning =) [08:05] i've now upgraded my snb laptop to raring as well, so if there are issues with the kernel they'll get sorted out ;) [08:07] pitti: oops, did I mix arrandale and ironlake.. [08:11] indicator-messages | 12.10.5-0ubuntu2 | raring | armel [08:11] is that normal that armel is still listed in rmadison? ^ [08:11] didrocks: Yes, we'll clean it up. [08:11] didrocks: ftpmaster still has the index files (but not the debs) for armel, that's all. [08:12] didrocks: And rmadison works off Packages/Sources files. [08:12] * xnox wonders if there are lonely armel raring boxes trying to upgrade..... [08:12] infinity: ok, thanks for confirming :) [08:24] does anyone know anything new about bug 965371? somewhere in the bug it says this was fixed in quantal, but on my quantal server I still see the same problem with pylplib [08:24] Launchpad bug 965371 in openssl (Ubuntu Precise) "HTTPS requests fail on sites which immediately close the connection if TLS 1.1 negotiation is attempted, on Ubuntu 12.04" [Medium,Triaged] https://launchpad.net/bugs/965371 [08:32] dholbach: Somewhere between mdeslaur and cjwatson you may find someone who knows the state of that mess. [08:33] dholbach: It seems that any way we try to fix it, we just break another weird corner case in the process. [08:34] I'll go through the entire comments in the bug again - maybe I find a workaround for mine :/ [08:40] dholbach, hey! i think the patch on https://bugs.launchpad.net/ubuntu/+source/hplip/+bug/1069324 looks good, but i don't have any rights - so how can i help out? [08:40] Launchpad bug 1069324 in hplip (Ubuntu) "diagnose_queues.py crashed with NameError in su_sudo(): global name 'utils' is not defined" [Medium,Triaged] [08:41] tkamppeter__, ^ do you think you could help out with this? [08:41] brendand, tkamppeter__ is our local printing expert :) [08:53] infinity: cjwatson: backlogging on yesterday's discussion [08:54] infinity: cjwatson: slangasek: so the automatic upload is checking the current version [08:55] like this night, it blocked because of upload out of trunk [08:55] so the archive is authorative [08:55] see https://jenkins.qa.ubuntu.com/view/cu2d/view/Indicators%20Head/job/cu2d-indicators-head-1.1prepare-appmenu-gtk/ [08:55] and particularly the artefact: https://jenkins.qa.ubuntu.com/view/cu2d/view/Indicators%20Head/job/cu2d-indicators-head-1.1prepare-appmenu-gtk/lastSuccessfulBuild/artifact/upload_out_of_trunk_appmenu-gtk_12.10.3daily12.11.28-0ubuntu2.xml [08:56] the issue there is that there was a distro patch (inline apparently?) 
that wasn't submitted/committed upstream [08:56] and when doing the bootstrap, this patch didn't show up [08:57] so it's only an issue with bootstrapping, we missed it apparently (cyphermox did the bootstrap and as there was no debian/patches…) === mcclurmc_away is now known as mcclurmc [09:16] so, just a head's up, this was merged only in the packaging branch, not upstream. Then it seems that cyphermox bzr merge the packaging branch upstream, but then, did a bzr revert debian/ (which is fine for new files as we don't want to have configure in the upstream repo, but not for modified one)… [09:16] so human error for the bootstrap, as it can happen for a manual merge and inline patch === henrix_ is now known as henrix === Tonio_ is now known as Tonio_aw [09:34] dholbach: So, I know you just sent an email shaming us all into patch piloting mor vigorously, but I was thinking I'd spend the better part of my piloting day tomorrow clearing out the SRU queues instead. We're about 80 uploads deep and climbing, and I suspect my time would be better spent there, before piloting even more patches that will land in upload queues that are stagnating. :P [09:34] infinity, works for me [09:34] I don't want to shame anyone [09:35] it's just that we need to do it and it will be good for us :) [09:35] infinity, +1 [09:35] mvo: Any reason you don't rev VERSION in softwarecenter/version.py when you tag/release? [09:35] mvo: My local copy is at 5.5.1.1 (because I revved it), but bzr trunk is still sitting at 5.3.7 :P [09:36] pitti: ok I checked, the chipset is arrandale, but the "graphics media hub" is ironlake, so the upstream bug is the correct one for this case I think [09:36] tjaalton: ah, thanks [09:36] pitti: so you could try i915.i915_enable_rc6=0 with it [09:36] the raring kernel [09:36] dholbach: I'll still check in with the bot and respond to people pinging on IRC for specific help, but yeah, I think my time will be better spent trying to reduce the upload queues so that puloted SRU patches aren't a month behind after they're uploaded. :) [09:37] * dholbach hugs infinity [09:37] s/puloted/piloted/ [09:37] tjaalton: $ sudo cat /sys/module/i915/parameters/i915_enable_rc6 [09:37] -1 [09:37] tjaalton: does that mean "enabled"? (I thought rc6 doesn't work with my chipset) [09:37] tjaalton: but I'll try it [09:38] hmm don't know what that value means. with the quantal kernel it was disabled for sure [09:38] then turned on in 3.7(?) and later disabled again [09:38] ah right -- that is the quantal kernel [09:38] I'll check if it's already in the next rc [09:39] pitti: /usr/lib/python2.7/dist-packages/bzrlib/plugins/dbus/activity.py:122: PyGIDeprecationWarning: MainLoop is deprecated; use GLib.MainLoop instead [09:39] xnox: yep; it needs GObject.MainLoop → GLib.MainLoop [09:40] infinity: no (good) reason, no, its autogenerated on build but bzr-buildpackage builds outside of the tree, so no good reason [09:40] infinity: thanks for your build fix btw :) [09:40] pitti: ack. It will annoy me enough to eventually make me upload a fix =) [09:42] pitti: looks like it's not in 3.7 yet, pinging upstream [09:42] tjaalton: where "it's" == ? [09:42] pitti: ah, the patch to revert rc6 for ironlake :) [09:42] ah [09:43] disables it again [09:43] tjaalton: so that should fix my crashes, but not infinity's and xnox' hangs? 
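For reference, a sketch of one way to try the i915.i915_enable_rc6=0 option suggested above, either checking the current value or making the change persistent; the /etc/default/grub route is the standard Ubuntu mechanism and is an assumption here, not something spelled out in the conversation:

    # check what the running kernel is using (the sysfs path quoted above)
    cat /sys/module/i915/parameters/i915_enable_rc6

    # make it persistent: add the option to GRUB_CMDLINE_LINUX_DEFAULT
    # in /etc/default/grub, e.g.
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i915.i915_enable_rc6=0"
    # then regenerate the boot configuration and reboot
    sudo update-grub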
[09:43] but the kernel option I gave does the same when you have a chance to test it [09:43] pitti: yeah, probably doesn't change it for them [09:43] err, definitely doesn't [09:43] yep, I will; but I'm deep in a debugging session in a VM, so it'll be an hour or two until I can reboot [09:43] but probably fixes it for you :) [09:43] thanks for the hint! [09:44] sure thing, take your time and don't do anything critical in case it blows up on you :) [09:45] about the other hang, we've been discussing whether to use sna by default.. [09:46] I'll push -intel 2.20.14 to see if it fixes the issues some folks on the bug were seeing with sna [09:46] tjaalton: Is that my sandbridge bug? [09:46] infinity: yes [09:46] sandy, too. [09:46] tjaalton: it seems to trigger when I'm putting the machine under high load, such as creating VMs [09:47] tjaalton: Awesome. I shall upgrade and let you know if I stop seeing hangs over a day or two. [09:47] tjaalton: so I'll exercise it that way [09:47] tjaalton: I've been getting them every hour or two over the last few days, so shouldn't take long to confirm. [09:47] infinity: ok, I'll prepare & upload in a bit [09:47] right, I hope to see them myself on my t420s [09:47] This is also a T420s, so your odds are good. [09:48] great! :) [09:48] pitti: yeah that'd make sense [09:48] infinity: btw, which bios version do you have? [09:49] tjaalton: Happened with both 1.31 and 1.35 (I upgraded to see if things would improve) [09:50] infinity: I'm still on 1.30, all attempts to boot the update cd have failed so far.. was thinking if you happen to force pcie_aspm to save a watt [09:50] here it causes issues with the drive [09:50] tjaalton: Do you have your BIOS set to UEFI only? The update CD will only boot in Legacy. I found that out after an hour of head->desk. [09:51] oh === Steven_ is now known as [HacknamStyle]St [09:51] I'm not sure actually.. === [HacknamStyle]St is now known as [HS]Steven [09:52] seems to be using legacy only, so it's not that [09:53] tjaalton, do you mean ALPM for the drive issue or ASPM? [09:53] cking: the one you suggested :) [09:53] can't remember which one it was [09:54] tjaalton, ALPM for the HDD, in which case yes, it may give issues [09:54] so my machine seemed to work fine for a while, then I started getting i/o-errors [09:55] that's why I've been trying to update the bios to see if it would help [09:55] tjaalton, it may be more of a controller/drive combo issue [09:56] Hrm, I have controller/drive issues on my T420s, though not leading to errors or corruption. [09:56] It's just abnormally slow. [09:56] Compared to my older T61 with a similar (but older) drive. [09:56] yeah, I have an ocz agility 3 ssd on it [09:57] cking: Do you have any clever ideas on diagnosing "my disk performance seems to be crap"? [09:57] infinity, what kind of disk performance issues are you seeing? [09:57] I just opted to put 16G of RAM in it and do everything in tmpfses. [09:58] cking: Just... Slow. Heavy reads and writes are both lousy. So, startup of large applications, or long dpkg runs. [09:58] cking: Updating a chroot takes longer on my laptop than it does on my Panda with a USB disk attached. [09:58] cking: And it's pretty clearly disk I/O, cause it's blinding fast in tmpfses. [09:59] infinity, we are in the process of looking at a bunch of I/O performance issues [09:59] infinity: which fs? [09:59] tjaalton: ext4 [09:59] but it's taking forever to tease out [09:59] hasn't it been crap for many years already? 
[09:59] I can't remember a time when doing an rsync or copying large files hasn't brought my system to a crawl [10:00] cking: To be clear, it also doesn't seem to be vfs or filesystem, since my old T61, with a very similar setup, is much, much faster. [10:00] (load goes to 5 or 10, and everything feels like tar) [10:00] cking: Both are 2.5in 7200 RPM drives, though I'd expect the newer/denser one in the new laptop to be a smidge faster, rather than a ton slower. [10:01] (I haven't tried swapping drives yet, I may do that at some point as a data point, but I'm assuming it's the controller, or the driver for said controller) [10:04] infinity, well swapping drives is the first step to sanity checking thus [10:04] cking: *nod* ... Agreed. Just haven't found the time to take a screwdriver to both machines. [10:05] Though, I suppose once I do, I have a spare 2.5in SATA drive floating around that I can plug into a random ARM board. [10:05] That's a win, right? :P [10:05] :-) [10:06] * apw idly wonders if command queueing is enabled on the controller [10:06] I have a plethora of rather large 2.5in PATA drives kicking around, I wonder if adaptors are cheap enough to be worth driving to the store. [10:06] apw: Don't we enable that by default on anything built after, like, 2002? [10:06] oh we should indd [10:06] (And how can I find out? hdparm?) [10:06] indeed, but you never know [10:06] hdparm [10:07] What's the "show me everything you know about this drive" flag? [10:09] /dev/sda: [10:09] queue_depth = 31 [10:09] I assume that also means it's enabled? [10:09] didrocks: OK, so can you fix it so that the same sanity check is applied to bootstrapping? I'm sure we'll be doing plenty more bootstrapping, and it would be helpful for it to be robust. [10:11] cjwatson: yeah, well, we always have modified files (because of autogenerated changelog), so what I'm doing now is to go over all the indicator stack and bzr branch && bzr merge packaging -> look at what files were modified [10:11] infinity, mine has "* Native Command Queueing (NCQ)" in the capabilities list [10:11] Special-casing the changelog would be fine, of course [10:11] cjwatson: until now, I only spotted one project where the tests were commented downstream, so we definitively don't want to poick that :) [10:11] changelog, news and so on [10:12] cjwatson: I'm documenting the bootstrapping procedure with this [10:12] err, commands/fatures that is [10:12] and will write a checker [10:12] thank you [10:12] ogra-cb: Ahh, in -I? Yeah, mine too. [10:12] thanks, sorry for this oversight (but at least, the daemon picked it once you uploaded the fix) [10:13] right in -I [10:13] my fault, I should have picked it during reviewing cyphermox's bootstrap merge [10:13] (but it didn't appear on the diff of course as it was reverted) [10:13] well, it should still be autochecked immediately before copy, to minimise the race condition [10:14] cjwatson: I'm checking the version, if the version lies in the vcs? what do you mean? 
[10:15] the issue there was that a version claimed to be that version in the vcs wasn't exactly (and can't be on a bootstrap because of all those autogenerated files we don't have anymore) [10:22] didrocks: You must not trust the VCS for this purpose - you *must* double-check against the archive [10:22] The VCS is what we intend to ship to users, but the archive is what we *are* shipping [10:23] I'm much less concerned about differences in autogenerated files than I am in you checking that the version in the changelog is what you expect it to be [10:24] cjwatson: I'm trying to think of a way of doing that automatically, like apt-get source the previous version, rebuilding the previous vcs version (in addition to the current one) and diffing? ignoring some autogenerated/modified file [10:24] This is analogous (though obviously not identical) to auto-sync checking for an "ubuntu" substring in versions and refusing to ever overwrite [10:25] The problem at hand isn't that the contents aren't what you expect - it's that there's a whole new upload in the archive you don't know about [10:25] You can spot that with a simple version check [10:25] what do you mean? I do a version check, see the files above ^ [10:25] Err - then how did last night's breakage happen? [10:25] ok, let me explain again :) [10:26] So, I'm doing a version check twice [10:26] one at the very beginning of the process and one at the end [10:26] if the vcs doesn't have in debian/changelog the latest version published into distro, the component is ignored [10:27] for instance, after you uploaded the -0ubuntu2 yesterday evening, this night run published: https://jenkins.qa.ubuntu.com/view/cu2d/view/Indicators%20Head/job/cu2d-indicators-head-1.1prepare-appmenu-gtk/lastSuccessfulBuild/artifact/upload_out_of_trunk_appmenu-gtk_12.10.3daily12.11.28-0ubuntu2.xml [10:27] which marked the appemnu-gtk job as instable https://jenkins.qa.ubuntu.com/view/cu2d/view/Indicators%20Head/job/cu2d-indicators-head-1.1prepare-appmenu-gtk/ [10:27] this is to avoid this overwrite [10:27] unstable* [10:27] * cjwatson looks at the changelogs [10:27] Oh, so this was actually a human mismerge? [10:27] so what happens here is because of the bootstrap [10:27] right [10:28] I understand now [10:28] the vcs claimed to be at that version [10:28] when some local modification weren't committed [10:28] In that case I don't see this as a failure of the autopackaging/autouploading tools [10:28] A human committer screwed up [10:28] yeah, like when we merge from debian for instance === tkamppeter__ is now known as tkamppeter [10:28] it's a similar case of failure [10:28] A human committer screwed up, but a human uploader would have noticed, I'd like to think. Maybe not. [10:29] Right. I thought that the problem was that the upload had been entirely disregarded [10:29] Cause upload time is (traditionally) when you diff against the previous archive version to see if you buggered it. [10:29] infinity: Mismerges happen and reach the archive rather a lot [10:29] cjwatson: Sure, they do. Not arguing that they don't. [10:29] infinity, if you mismerge you probably mis-dput as well [10:29] cjwatson: ah, not at all [10:29] mdeslaur, pitti : sent a summary to the debian bug for the cupsd privilege escalation. I suspect mdeslaur's solution + hindering HTTP POST might be the only solution we have… [10:29] I'm still just very wary of the idea that sufficiently complex machinery can replace a real person giving a pass/fail on tagging/rolling a release. 
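A minimal sketch of the version check being described, comparing the top of debian/changelog in the branch against what the archive currently publishes; the package name and series are taken from the example above, and the real tooling checks whether the published version appears anywhere in the changelog rather than only in the top entry:

    vcs_ver=$(dpkg-parsechangelog | sed -n 's/^Version: //p')
    archive_ver=$(rmadison -s raring appmenu-gtk | awk -F'|' '{gsub(/ /, "", $2); print $2; exit}')

    if [ "$vcs_ver" != "$archive_ver" ]; then
        echo "branch has $vcs_ver but the archive has $archive_ver; refusing to upload"
        exit 1
    fi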
[10:30] infinity, those are not releases, they are daily snapshots [10:30] and this kind of this will only happen at bootstrap, (or if a merge from distro backported isn't properly done) [10:30] dholbach, no problem, I can apply the patch. I have to make an HPLIP upload to Raring anyway as there is a new release. [10:30] but I guess that an upload on distro will be noticed and the diff should be low enough so that the reviewer spot it [10:30] seb128: terminology [10:30] seb128: They're daily snapshots being released to users. That's a release. [10:31] * pitti remembers more than one buggered mis-merge upload that was done manually :( [10:31] brendand, ^ [10:31] thanks tkamppeter [10:31] pitti, I have a problem with setting permissions on a file of the cups-filters package. [10:31] cjwatson: so indicator stack is cleaned, I'm checking the last one, oif (but the other 25 projects enabled are cleaned, the only guilty was appmenu-gtk) [10:31] (i. e. reading debdiffs before upload is a great habit!) [10:32] tkamppeter: what are you trying to do? [10:32] OdyX: thanks [10:32] pitti: Yes, one I keep trying to get people to do. :P [10:33] infinity, well, most uploaders probably trust the content of the packaging vcses as well and don't bother a debdiff and read it before building/pushing ... so it's not much different, the checking is just done at commit time and not upload time === Tonio_aw is now known as Tonio_ [10:34] pitti, in debian/rules, in the binary-post-install/cups-filters:: section I do "chmod 700 debian/$(cdbs_curpkg)/usr/lib/cups/backend/serial" and in the resulting package /usr/lib/cups/backend/serial has still standard 755 permissions. [10:34] seb128: I was thinking of the cases where a new upstream version was done in UDD with only bumping debian/changelog, but not bzr merge-import (i. e. diff.gz reverted the newer version to the older one again) [10:34] seb128: I'd argue that those people are wrong, and codifying that as best practise is also wrong. :P [10:34] the autogenerated debian/patches/.patch doesn't help, of course [10:34] seb128: I was once caught out by ubiquity (it embeds other source packages on upload, but not in the VCS), so these days I do `pull-lp-source $pkg` and compare the debdiff of what i am about to upload. [10:34] tkamppeter: I bet that's done before dh_fixperms runs [10:35] pitti, right, I'm just saying that for normal day to day work (like doing a new version update in ubuntu), people tend to work, bzr diff, review the diff, upload, build, test, dput (without doing a debdiff) [10:35] tkamppeter: so you need to tell dh_fixperms to -Xserial [10:35] tkamppeter: move away from cdbs … :) . That and dh_fixperms indeed. [10:35] I do a lot of development in VCSes, but because the archive and source packages are authoritative, I always debdiff prev.dsc new.dsc before uploading. [10:35] xnox, you have more discipline than me then ;-) [10:35] seb128: "people" - I *never* do that [10:35] seb128: Which "people" are these? [10:35] * pitti always checks debdiffs [10:35] pitti, formerly, in the cups package it worked, was dh_fixperms only a recent addition? [10:35] OdyX: not really cdbs specific :) [10:35] seb128: And can we get them (you?) to stop? [10:35] I always always always debdiff before upload and tell people I sponsor to do the same [10:36] It's saved me from innumerable mistakes [10:36] pitti, how can I suppress dh_fixperms or make an exception for the mentioned file. [10:36] seb128: you do check debdiff when sponsoring? 
so why not do the same for your own uploads?! [10:36] It saves me from a lot of large mistakes, it also saves me from introducing annoying cruft here and there. [10:36] tkamppeter: so with cdbs it's DH_FIXPERMS_ARGS=-Xserial [10:36] tkamppeter: with dh7 you override it as normal and supply the argument [10:36] Even when I'm doing mass rebuild-only uploads and the like, the homemade scripts I use for that present me with a debdiff before I say yes [10:36] tkamppeter: no, dh_fixperms has existed for ages [10:37] tkamppeter: as I said; -X is a general debhelper option to ignore a file [10:37] xnox, because I read the diff before commiting to the vcs [10:37] infinity: (debian/patches/debian_changes *cough*) [10:37] seb128: ok. but that is racy, until we build out of VCS without uploading source packages. [10:37] The number of times I've commited a sane diff to a VCS then proceeded to produce a slightly insane source package is rather large. [10:38] infinity, well, if you know you can't get stuff done right :p [10:38] most of the time it is fine, until it isn't. =))))) [10:38] * seb128 hides [10:38] (joking) [10:38] seb128: says the man who's repeatedly dropped patches that were in the archive? :P [10:38] xnox, well, it's also that double checking takes time and sometime you have ETOOMANYTHINGS todo [10:39] A debdiff would easily show you "hey, I remember changing this thing, but I sure didn't drop fix_arm_again_argh.patch, I wonder what that's about." [10:40] if we want "debdiff and ack before upload" to be standard maybe we should have dput to do the debdiff and let you say y/n... [10:40] The checks are hardcoded into my fingers, and I still seem to get lots of uploads done, so ;-) [10:40] cjwatson: Pfft, you're pretty underrepresented every release in tumbleweed's pie chart of doom. [10:41] well, it's just that I never felt a strong need to get the diff a second way after checking it using the vcs [10:41] but maybe I'm wrong [10:41] in practice it didn't bite me too much so far so I never felt the need to change that [10:41] I honestly think you are, based on my experience of the two disagreeing from time to time [10:41] seb128: But the VCS diff != the package diff, especially if you have auto-generated files you don't check in, and also because the archive may have changes your VCS doesn't. [10:42] seb128: And if you think it hasn't bitten you, maybe the people who've had to unrevert things you've reverted in the past haven't yelled loudly enough. :) [10:42] I'm not saying I read every line of the debdiff, but I do skim it and check for files I wasn't expecting and the like [10:42] well I do debdiff in non trivial updates [10:42] or in merges [10:42] * xnox was bitten by not checking debdiff, don't want to be there again [10:43] yeah, usually the kind of errors that you make is not in the fine details, but more like "debian/control was autogenerated and I forgot to change control.in", or "debian/patches/ added/dropped an unexpected one" [10:43] infinity, if you didn't bother commiting your change to the vcs you have a blame for that as well [10:43] It's your responsibility to double-check that you aren't reverting stuff by accident. [10:43] seb128: No. I really don't. The VCS isn't authoritative, the VCS isn't authoritative and, also, the VCS isn't authoritative. [10:43] This goes for all uploaders. 
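Going back to the dh_fixperms question from earlier in the hour, a sketch of the exclusion in both styles mentioned; the path is the serial backend and the cups-filters package directory from that example:

    # cdbs (debian/rules): pass the exclusion through the per-command variable
    DEB_DH_FIXPERMS_ARGS := -Xusr/lib/cups/backend/serial

    # dh(7) style: override the command, exclude the file, and set the mode
    # (recipe lines must be indented with a tab)
    override_dh_fixperms:
    	dh_fixperms -Xusr/lib/cups/backend/serial
    	chmod 700 debian/cups-filters/usr/lib/cups/backend/serial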
[10:43] infinity: (but you do cause pain to people who actually use it) [10:44] infinity, that's called "let's not bother to do my change properly and let's create work for others", not being a good citizen either... [10:44] seb128: I do try to check in more and more these days (though, the number of packages in the archive that have a VCS-* that I can't commit to is irksome), but that doesn't change that the archive is authoritative. [10:44] infinity, the archive being authoritative is not a valid reason to not bother doing the change properly and get them in the vcs if there is one [10:44] seb128: I could turn your statement right around for you, when your failure to merge the archive changes means people need to re-fix the same things. *shrug* [10:45] seb128: The difference is that when infinity gets this wrong it doesn't undo your work. [10:45] When you get this wrong it undoes other people's work. [10:45] That's clearly worse. [10:45] seb128: The VCS being your preferred workflow is not a valid reason to not diff against the archive to avoid reverts. QED. [10:46] infinity: -intel 2.20.14 uploaded. I'll give sna a go as well [10:46] cjwatson, well, I'm not trying to rank it, I'm just saying that ignoring the Vcs and let others deal with the work of founding the diff and getting it including is not correct either [10:46] (This would all be solved if we built directly from VCS tags, but we don't. And as long as we don't, people need to stop pretending their preferred workflow is also "the only way to upload correctly") [10:46] xnox: metacity on openjdk> I think there was some talk about MIRing something like twm instead [10:46] tjaalton: <3 [10:46] Sure. But it is not justification for pretending such things don't exist [10:46] xnox: I though openjdk just needed _a_ WM, not necessarily one as complex as metacity [10:46] cjwatson, oh, I don't, and I agree it's the responsability of the uploader to check that no archive change gets reverted [10:46] seb128: Hey, I tried to commit to software-center earlier today, and couldn't. This sort of thing frustrates me. :P [10:47] seb128: (mvo added me to the team, right after merging my changes from the archive, though. So, he did it right, and now I can do it his brand of right) [10:47] cjwatson, I just disagree on the fact that the archive being authoritative should be a right to bypass the Vcs and a way to dump the work on other [10:47] seb128: We're not saying it should [10:47] pitti: I have now finished downloading the source package. It can use metacity or twm. And it seems like it is used in the check target. [10:47] infinity's position sounds like "anyone should be able to just upload to the archive and not have to bother if the source is maintained in a Vcs" [10:48] but maybe I read him wrongly [10:48] Although I would say that you don't have a lot of experience of trying to do work across the whole archive, and running into the huge number of misconfigured VCSes [10:48] sorry if that's the case [10:48] pitti: i wonder if it is sufficient to move that bit to DEP-8 and not have either metacity nor twm in main. [10:48] infinity and I both do [10:48] xnox: twm is in universe, but putting it into main if we can drop metacity from main sounds like a good trade [10:48] pitti: the check won't run on non x86 platforms though =( [10:48] And that surely colours our outlook [10:48] seb128: I don't personally just upload willy-nilly without checking for a VCS-* field. On the other hand, your statement is more or less true. 
Archive uploads are the One True Source for the packages, like it or not. [10:49] cjwatson, well, if the Vcs is misconfigured it's the responsability of whoever is handling the Vcs and the package indeed [10:49] seb128: So, my position isn't as caustic as you make it out, I still try to find the maintainer's preferred methods. [10:49] seb128: which doesn't help, if the core-dev doesn't have write access to it. [10:49] seb128: Sure. But when you're uploading 500 packages and run into 100 of these ... [10:49] (Or whatever) [10:49] (And yeah, it annoys me to no end when someone bitches me out for not committing to a VCS I can't commit to) [10:50] Also, I won't commit to a VCS-* for a 100-package rebuild run or something. [10:50] xnox, a vcs for a package in the archive should allow commit from the people who can upload [10:50] (Though, I also don't care if those no-change changelogs get lost) [10:50] seb128: hahahahaha. I would love that to be close to true [10:50] infinity: if the only uploaded diff is a ubuntu1.1 no-change rebuild, I admittedly often don't bother grabbing the diff and committing it, I just overwrite it [10:50] (as recently seen with some py3.3 rebuilds) [10:50] cjwatson, I said "should" ;-) [10:50] cjwatson, infinity, from a diffent colored outlook; if you don't have commit access to the vcs; why don't you ask for sponsorship from the person who has? [10:51] seb128: ps branches? =) [10:51] seb128: Actually, I don't wildly care if it's commitable IF I also know that the people who maintain the package will integrate uploads. [10:51] diwic: When trying to complete a transition involving 500 packages? [10:51] Sure, if you don't mind it taking 5x longer [10:51] seb128: (For instance, not all of core-dev can commit to debian-installer, but all the people who upload debian-installer are also dediff freaks who will notice and merge) [10:51] anyway [10:51] I don't think we have disagreements there [10:52] diwic: Yeah, that's not going to happen. [10:52] infinity: Actually that's not a good example, it's lp:~ubuntu-core-dev/debian-installer/ubuntu [10:52] infinity: But for ubiquity, yes [10:52] (Which is a bug) [10:52] diwic: Except in rare cases of large packages (firefox/tbird, libreoffice, eglibc, gcc, etc) where I don't want to step on toes, asking permission to upload something I can upload is a horrible waste of effort. [10:53] diwic: If I notice that there's a VCS involved, and the change is non-trivial, then in general I will make an effort to offer up a branch for merging or some such [10:53] cjwatson: Oh, did it move somewhere in the last N years and I never paid attention, just checked out the new location? [10:53] * xnox off to a meeting in a glass cage =) [10:53] diwic: But usually, by the time a committer notices, I've finished all the rest of the work and long since moved on [10:53] diwic: And in all those cases, I tend to ask to be added to the team, rather than ask for sponsorship, but ymmv. [10:53] Interactions between humans are the slowest thing in the project [10:54] infinity: I don't think it was ever somewhere else, but you could well be mixing it up with another package [10:54] cjwatson: I could well be. I've long since forgotten and stopped caring about what ~ubuntu-installer buys me and what it doesn't. :P [10:55] ubiquity is the problem child I know about. I need to check whether bugmail configuration is now sane enough such that I can add ubuntu-core-dev to ubuntu-installer without spamming the world. 
[10:55] cjwatson: But the point still stands, I don't mind a package's VCS having a tighter control group than "core-dev", they act as a bastion of code review and release management. But the key is that that group needs to be okay with merging out-of-band uploads instead of being whiney. :) [10:56] cjwatson, infinity, so sure it buys you time, but you're putting the time on somebody else, so the time is not /saved/, it has just moved to somebody else. That might be fair; since we're short on people with your knowledge, but just want to point that out. [10:56] cjwatson: My complaint comes in when someone says "all releases must happen from our VCS" and "also, you can't commit to it". [10:57] diwic: I'm not trying to justify it and say that it's OK. [10:57] infinity: did anyone told that? [10:57] diwic: If building from VCSes was as unified and sane as buildin from source packages, this would be a very different conversation, to be fair. [10:58] infinity: you could simply push the changes into a seperated (owned by you) branch. but +1 on that the workflow is not ideal [10:58] diwic: I'm saying that I'm not prepared to wait for a sponsor in the case where our documented community procedures say I can upload; not that I'm not prepared to offer a branch for merging. [10:58] But that certainly takes substantially more time than just uploading. [10:58] infinity: hm, this reads a bit harsh, not meant this way :) [10:58] didrocks: It's certainly happened in the past. Or, rather, I've uploaded a package with a VCS I can't commit to, and later been told off for not instead submitting a merge proposal or something. Same general effect. [10:59] mvo: Hahaha. You always merge my archive uploads anyway, you're my hero in this debate. [10:59] infinity: that's why we added that to all projects following daily-building (http://bazaar.launchpad.net/~unity-team/unity/trunk/view/head:/debian/control#L48) [10:59] infinity: not sure to make is more visible though [10:59] mvo: Plus, you added me to your commit group within hours of my asking. :P [10:59] if the UDD branch would simply allow to merge the upload into trunk ... [10:59] so it's like "the daily upload system will notice/stop and we'll get informed and clean that ourself" [10:59] infinity: haha, indeed, I would have added you within minutes if I wasn't sleeping [10:59] (which is what happened once the fixed appmenu-gtk was uploaded) [11:00] didrocks: Yeah, that's perfectly suitable (not necessarily all the automation that I'm still having a hard time coming to terms with, but the private group but open upload policy) [11:00] At least, that works for me. [11:01] One less VCS I need to commit to. :P [11:01] :) [11:01] It seems more work for you guys than just adding ubuntu-core-dev to unity-team, though. At least if bugmail configuration and the like is suitable for that. [11:01] cjwatson: it's a long term plan/long discussion with PS. I'll spare you that :) [11:01] pitti, I added "DEB_DH_FIXPERMS_ARGS := -Xusr/lib/cups/backend" as it was in the cups package, thank you anyway for the hints. [11:02] Some day, I'll commandeer a bunch of tools guys and re-do the Maemo build-from-tags workflow and we can all stop having this "which copy of the source is the authoritative one?" debate. 
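A minimal sketch of the debdiff-before-dput habit being advocated above; the package name, series, and versions are placeholders, and the paths depend on where pull-lp-source and debuild drop their files:

    # fetch what the archive currently ships (download only, no unpack)
    pull-lp-source -d pulseaudio raring

    # build the source package you are about to upload
    debuild -S -us -uc

    # review the diff the archive will actually receive, not just the VCS diff
    debdiff pulseaudio_1.0-0ubuntu1.dsc pulseaudio_1.0-0ubuntu2.dsc | less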
[11:02] (and yeah, a lot of spams because of all the bugs, but I think we'll sort something out in the end) [11:03] cjwatson: ok, FYI, finished to check all the 25 projects bootstrapped and appmenu-gtk was the only guilty one [11:03] now, let me make that clear in the bootstrap procedure that it's something to check [11:04] cjwatson, I don't exactly know what notification mechanisms there are when uploading things in the archive, but can we do something on that side to notify the vcs owner that a non-vcs based upload was done or something? [11:05] diwic: In most cases, the "VCS owner" is "ubuntu-dev" or "ubuntu-core-dev", so, uhm, hell no. [11:05] diwic: Package subscriptions, however, are a long time Soyuz wishlist bug. [11:05] UDD was supposed to solve this - in the case where it's a UDD branch, the importer submits a merge proposal [11:05] But the UDD importer has enough other problems that it's tough to rely on it [11:06] infinity, what is the recommended way of me finding out that an p [11:06] oops [11:06] infinity, what is the recommended way of me finding out that an upload of pulseaudio has been done? [11:06] Honestly, it doesn't seem like that much effort to just grab the latest source and debdiff before you upload. [11:06] and I need to merge back the result into the vcs [11:07] diwic: 'pull-lp-source -d pulseaudio' and check that it's the version you expect [11:07] If no one uploaded between your last and your current, apt-get source is a no-op (if you still have your old one), and you want to debdiff before upload ANYWAY, since the person introducing cruft may well be you. :P [11:08] I've certainly broken my own sources between my upload and my upload before. [11:08] I'm pretty much a jerk to myself that way. [11:08] pull-lp-source has made lots of things massively easier for me. [11:08] ok [11:08] Yeah, I just recently have started trying to train my fingers to use pull-lp-source and pull-debian-source. [11:09] Especially as a replacement for surfing old versions on lp.net/ubuntu/+source and snapshot.debian.ogr [11:09] cjwatson: https://wiki.ubuntu.com/InlinePackaging?action=diff&rev2=11&rev1=10 FYI [11:11] cjwatson: Oh hey, instead of this scintillating conversation about various heads impacting random flat surfaces... [11:12] cjwatson: Did you ever look into the coreutils testsuite hang on powerpc? (It's factor(1) hanging on certain input, I haven't gotten much deeper into it than that) [11:13] cjwatson: Annoyingly, due to the britney hack for sulfur's sadness, it migrated despite the FTBFS. Which, I suppose, isn't world-ending, but irksome. [11:14] infinity: No, I got as far as filing RT#57703 ... [11:14] infinity: Don't suppose you have a box I could debug it on? [11:14] cjwatson: I do indeed. [11:14] cjwatson: You can even have root. [11:15] (Cause I'm too lazy to setup up schroot on it right now) [11:15] can somebody please reject https://code.launchpad.net/~scarneiro/ubuntu/raring/adns/fix-for-ignored-make-clean-errors/+merge/136558 and https://code.launchpad.net/~scarneiro/ubuntu/raring/dictclient/fix-for-ignore-make-clean-errors/+merge/136563? [11:18] dholbach: erledigt [11:20] danke pitti :) [11:21] pitti, can you upload cups-filters from BZR to Debian and Ubuntu Raring? I have released 1.0.15 fixing some bugs. [11:22] tkamppeter: we should merge the debian-wheezy to include the copyright fixes. [11:22] tkamppeter: i. e. re-name your 1.0.25-0ubuntu1 version in bzr to -1 and experimental? [11:22] pitti, will do. 
[11:23] tkamppeter: I am, just wanted to confirm that this is correct [11:23] tkamppeter: ok, doing [11:24] pitti, done. [11:24] hmm [11:24] * pitti aborts build, pulls, and does again then [11:26] tkamppeter: uploaded to experimental; we can sync it in half a day or so when it got imported [11:26] (and on a related note, yay for 5 times faster upload bandwidth) [11:27] pitti, OK, thanks. [11:28] can someone please remind me what the magic $http_proxy was on the porter boxes to get outside? [11:29] ah, found it, but I get a "403 forbidden" with it [11:29] so, tarball upload it is === _salem is now known as salem_ [11:34] pitti: we should make tkampetter a DM [11:39] tkamppeter: is there some useful stuff to backport to cups-filters' 1.0.18 ? === lenios__ is now known as lenios === Tonio_ is now known as Tonio_aw [12:03] OdyX, there are lots of bug fixes on pdftops and texttopdf (debian/changelog entries with references to bug reports). Each of them are fixed by rather small changes in the upstream code (see upstream BZR). These a worth backporting. The libqpdf switchover of pdftopdf is a bigger change which you should not backport. [12:04] tkamppeter: hrm yes will try. [12:05] tkamppeter: could you find what was breaking the test-suite? [12:15] OdyX, no, it seems that for some unknown reason CUPS does not remove the job control files when the exception path PS->pstops->PS printer is allowed. I cannot imagine why. AFAIK it should be in CUPS' responsability to remove these files. [12:18] tkamppeter: printing to stderr, wrong return value, don't know, [12:31] infinity: per comment in bug 1084054, could you kill the vlc in -proposed, please? [12:31] Launchpad bug 1084054 in vlc (Ubuntu Oneiric) "Denial of service via crafted PNG file" [Undecided,Confirmed] https://launchpad.net/bugs/1084054 === cpg is now known as cpg|away [13:19] I have been trying to figure out just exactly how to push a new version of a package into the repos (to be reviewed) so it can be uploaded. Anyone know / have a good resource to read? [13:21] Do I just make a separate branch? [13:23] israeldahl: yes. http://developer.ubuntu.com/packaging/html/udd-merging.html#merging-a-new-upstream-version [13:24] awesome, thank you! [13:27] does someone know why gettext fails to build (unmet dependencies)? default-jdk depends on default-jre (= 1:1.7-43ubuntu3) and openjdk-7-jdk (>= 7~u3-2.1), but they are not going to be installed? [13:28] doko_: ^ [13:28] tumbleweed can i use git instead of http? [13:30] bdrung: huh? I sbuilt it locally about an hour ago and it was fine [13:30] bdrung: (working on a merge so please leave it alone) [13:30] oh damnit you already uploaded [13:30] bdrung: just leave it please. I'll sort it out [13:30] cjwatson: yes (it built fine in pbuilder) [13:31] cjwatson: thanks. i am happy to leave it to you. [13:32] israeldahl: not entirely sure what you mean there. But we need a source tarball. Ideally one the upstream provides, although sometimes one has no choice but to generate one from their git repository [13:33] Ok, just wondering. I can use a local tarball though. thanks [13:33] @pilot in [13:33] bah stupid bot [13:33] on vacation [13:34] that bot never works for me ... 
ever [13:34] bdrung: Hmm, OK, it's a problem between ca-certificates and ca-certificates-java, but I'm going to need to rebootstrap ca-certificates-java somehow to fix it [13:35] (it can't build for the same reason) === Zic is now known as Guest73291 [13:36] hallyn, hey, thanks for the qemu fixes, the current binary seems to give a working spice on i386 ;-) [13:36] hallyn, "dh_link -pqemu-kvm-spice usr/bin/qemu-system-i386-spice usr/bin/kvm-spice" is buggy though [13:37] hallyn, there is no "qemu-system-i386-spice" binary in the deb, only a "qemu-i386-spice" and "qemu-spice" === dpm_ is now known as dpm === Guest73291 is now known as Zic [13:44] i386 builders on manual while I rebootstrap ca-certificates-java === Tonio_aw is now known as Tonio_ [13:50] seb128: then perhaps one more ppa upload before going to archive :) [13:50] hallyn, well, just fixing the symlink should be easy enough to fix with the archive upload ;-) [13:52] ok you've convinced me === yofel_ is now known as yofel [13:55] seb128: how weird though, why no qemu-system-i386-spice? huh... [13:56] yeah no those are not the same [13:56] hallyn, I don't know, good question ... what's the difference with -system? [13:56] the non-system one is qemu-user, different thing. drat. [13:56] back to the build rules [13:58] oh duh. i see. my stupid bad [14:00] really there is no reason for the qemu-user-spice binaries. I think I"ll pull them from the package [14:01] they're not even linked against libspice. [14:01] and what exactly would they do anyway [14:02] hmm, the sponsoring queue is HUGE. Lets see if I can help [14:02] Does anybody know what to do with: https://bugs.launchpad.net/ubuntu/+source/gnome-screensaver/+bug/952771 ? [14:02] Launchpad bug 952771 in gnome-screensaver (Ubuntu) "Gnome Screensaver should handle expired password tokens" [Undecided,Confirmed] [14:03] It looks more like a feature request for me, should I tell the reporter that it will be fixed in raring? [14:03] ogra-cb: right [14:07] OdyX: hi! I've got comments about your cups git tree: 1- you should probably add PidFile to the list of warnings, 2- you should remove the VCS tag from cups-files.conf too [14:09] mdeslaur: the first is committed already, but not pushed. good point for the second. [14:09] OdyX: cool [14:10] mdeslaur: you confirm that these are only warnings, right ? Are they considered as configuration stanzas or just discarded + warning ? === salem_ is now known as _salem [14:11] OdyX: yeah, so as I said in the bug, I'm probably going to push the config file split in Ubuntu stable releases, but without conf file changes...it's not a really elegant thing to do, but we try not to have any conf file prompts with security updates [14:11] OdyX: they are just discarded and logged [14:11] mdeslaur: ah okay, good. [14:11] mdeslaur: how do you avoid cupsd.conf prompt when SystemGroup is removed ? [14:11] IS there any guide on understanding diff3 conflict markers? [14:12] OdyX: I'm not going to remove SystemGroup, and I'm going to remove the warning about it [14:12] mdeslaur: you'll get a prompt as soon as an admin modified the conffile through the webadmin, no ? [14:12] mdeslaur: ah, yeah, good point. [14:12] OdyX: it will be left there, which is kind of ugly, but harmless [14:12] since it,s the only one we shipped by default in the conf file, it's not so bad [14:13] and once they upgrade to a newer release, then the conf file cleanup will happen [14:13] mdeslaur: sure. But it gets in the way when you get the configuration prompt for another reason. 
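As an aside on the unanswered diff3 question above: three-way conflict markers generally take this shape, where the middle section holds the common-ancestor text and only appears when the tool is asked for diff3/base output; the labels vary by tool (bzr, for instance, marks the two sides TREE and MERGE-SOURCE) and the content lines here are made up:

    <<<<<<< ours
    the line as changed on your side
    ||||||| base
    the line as it was in the common ancestor
    =======
    the line as changed on the side being merged in
    >>>>>>> theirs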
[14:13] cjwatson: ping [14:14] OdyX: well, in theory, we shouldn't be getting any configuration prompt [14:14] FourDollars: yes? [14:14] (for the stable releases) === _salem is now known as salem_ [14:15] cjwatson: ubuntu-meta needs some patch for linux-headers-generic-lts-quantal . [14:15] I know, I was holding off on deciding about that [14:15] cjwatson: ubuntu-meta on precise. [14:16] cjwatson: I see. Thanks. [14:16] You may note my germinate upload which was aimed at doing something about that [14:16] (well, supporting it) [14:17] OK. There is some thing I did not follow. [14:17] mdeslaur: ah, you mean that you'll avoid the configuration prompt by not changing cupsd.conf, right ? [14:17] aka shipping the same one. Good idea. I had hard time parsing your idea === chilicuil is now known as chilicuil_away [14:18] OdyX: yes, that's what I meant [14:18] mdeslaur: good. That said, I noticed Wheezy ships a cupsd.conf with a bloody cvs tag, I hope Ubuntu's doesn't. [14:19] OdyX: there's one in cupsd.conf.default, but none in cupsd.conf [14:19] mdeslaur: nice. [14:19] (at least, on quantal...haven't checked the older releases yet) [14:19] cjwatson: Have you also patched ubuntu-meta of precise for linux-headers-generic-lts-quantal ? === 20WABM4XS is now known as jussi [14:20] mdeslaur: I'll try to get the 1.5.3 (precise) version done later today (or tomorrow). [14:20] cjwatson: I didn't see newer ubuntu-desktop in precise-proposed. === jussi is now known as jussi01 [14:20] but as it will be for our next stable, I'll get this SystemGroup thing dropped. === jussi01 is now known as jussi [14:21] FourDollars: I haven't uploaded it yet [14:21] FourDollars: It's not urgent compared to all the other SB work [14:21] FourDollars: But I know about it and I'll sort it out, don't worry [14:21] cjwatson: So I will upload it eventually, right? [14:22] cjwatson: So you will upload it eventually, right? [14:22] FourDollars: That's what I said, yes [14:22] cjwatson: Got it. I just make sure you have noticed that. Thanks. [14:24] s/make sure/want to make sure/ [14:24] OdyX: oh, one more thing...the upstream security patch drops UseNetworkDefault from the html documentation, but we still have that option in one of our other patches, so I added it back in [14:27] mdeslaur: do you have commit rights ? [14:27] OdyX: no [14:27] mdeslaur: I'd be happy to have you commit these in our experimental repository directly, or handle that in branches there. [14:27] mdeslaur: alioth account ? [14:28] it facilitates merging and diffing. [14:28] OdyX: I don't have a alioth account, sorry [14:31] mdeslaur: no problem. It's git afterall. [14:31] OdyX: oh, yeah, I'd have to learn git too :P [14:31] * mdeslaur cringes [14:31] mdeslaur: eh, yeah. Yaknow we're in 2012 right ? :) [14:32] OdyX: yeah, that's why I use bzr! :) [14:32] hehe :) [14:36] * cjwatson belatedly remembers to put the i386 builders back on auto === cr3 is now known as Guest93140 [14:58] mdeslaur: regading your "2- you should remove the VCS tag from cups-files.conf too [14:58] " that's done already. cups-files.conf is in KEEP [14:59] doko_: Do you have a current VCS for Ubuntu binutils? I don't really want to upload it for a single change [15:00] OdyX: oh, hrm...it didn't seem to work for me...ok, thanks, I'll take a look on my end [15:00] OdyX: sorry for the false alarm [15:04] mdeslaur: np. More eyes don't hurt. === Tonio_ is now known as Tonio_aw [15:20] xnox: with the loss of metacity, does that mean that 3D cards will be needed for installing Ubuntu Desktop? 
[15:21] compiz alone doesnt need much, llvmpipe should work snappy with it even on slow CPUs [15:21] xnox: I guess it's already a requirement of sorts, so nevermind, E_NEEDSOMECAFFEINE [15:21] micahg: not quite; that was done with dropping unity-2d last cycle already [15:21] and yeah, you are installing ubuntu-desktop which definitely requires GL [15:21] micahg: 3D cards are not needed, as we have llvmpipe. [15:22] xnox: (which doesn't really get you that far, but oh well) [15:22] so yes, you do need a 3D card [15:22] or a really fast CPU [15:22] pitti, its finbe for plain GL stuff, as long as there is no excessive compositing [15:22] pitti: what do you mean? /me ran installer in the dog slow VM. [15:23] pitti ogra_ micahg: note that for the installer I only enable a single compiz plugin (decor) and hope to have fast texture rendering. [15:23] compiz as a WM is really low demanding if you dont have any fancy effects in use [15:23] xnox: well, sure, but it's not really a joy to use unity with all its effects there [15:23] (although there aren't that many textures in the installer) [15:23] or on an arm machine [15:24] ogra_: yeah, that doesn't seem to apply to unity as a whole though :) [15:24] right, on arm devices where we dont have GL we dont provide ubuntu-desktop :) [15:24] * xnox doesn't even let to resize or move windows =)))) win \0/ [15:27] micahg: no need to CC me btw, I'm already subscribed to ubuntu-devel [15:30] didrocks: sorry === doko_ is now known as doko [16:04] cjwatson, just a personal one. which change do you mean? [16:04] doko: s/gettext:any/gettext/ in debian/control [16:04] I see pitti uploaded something following your most recent upload, if you didn't already know about it [16:05] seen that, and integrated for the next upload [16:05] doko: cf. coreutils and most of the other stuff I uploaded today [16:05] doko: thanks [16:13] micahg, xpathselect is a new source... [16:13] micahg, I'm not sure what your "The only thing updated was debian/copyright." means [16:13] seb128: yeah, I know, I got trigger happy with E-Mail today, see followup [16:14] Launchpad failed [16:14] micahg, ok... === Tonio_aw is now known as Tonio_ === deryck is now known as deryck[lunch] === Tonio_ is now known as Tonio_aw [17:06] didrocks: bootstrap> ah, alright, thanks for the explanation [17:06] yw :) [17:06] sorry for missing that in the review [17:08] * apw has an ocaml package which is using ocamlfind, and that is producing paths from another /build presumably from a library -- any idea how to debug such a thing? [17:09] stgraber: i guess we leave it up to jodh to do the final merge? [17:11] barry: yep, I don't think I have commit rights to upstart's trunk, so I need jodh for that [17:17] hmm [17:20] what's made $world uninstallable in r-proposed? [17:21] example? [17:21] I didn't phrase that quite correctly [17:21] https://launchpadlibrarian.net/124428233/buildlog_ubuntu-raring-i386.libcanberra_0.30-0ubuntu1_FAILEDTOBUILD.txt.gz [17:22] hm === tyhicks` is now known as tyhicks [17:22] gettext fallout I guess [17:22] component-mismatches, fixing [17:28] cjwatson, " debhelper : Depends: po-debconf but it is not going to be installed" ... that's likely the same issue you just fixed? 
[17:28] seb128: ok, i think the pkg is good now, will push soon (qemu-linaro) [17:28] hallyn, great, thanks a lot for the efforts to enable spice on i386 ;-) [17:28] seb128: that's in my build log too - I'm guessing they were all the same root cause [17:29] Laney, oh, right, I read the first line about dh-translations before ;-) [17:34] seb128: Yes [17:34] cjwatson, thanks [17:34] I'll do a mass give-back of stuff affected by uninstallability a bit later [17:34] modulo EOD soon [17:38] seb128: i don't have upload rights. do you mind grabbing the ppa6.dsc, removing the ppa6 from changelog, and pushing? [17:38] hallyn, doing it [17:40] seb128: thanks === Ursinha is now known as Ursinha-afk [17:56] stokachu: mark as merged https://code.launchpad.net/~jamesodhunt/ubuntu/precise/libnih/bugs-740390+1062202/+merge/130504 already in precise-proposed [17:57] xnox: done [17:58] * Laney is glad we have this task to keep the TB busy [18:00] Laney: I'm sure they value this added karma :p [18:01] Laney: well is it TB or just pitti & stgraber ?! =) [18:01] anyone on the TB is an owner of ~ubuntu-branches and can do it [18:01] the real power brokers === deryck[lunch] is now known as deryck === Ursinha-afk is now known as Ursinha === mcclurmc is now known as mcclurmc_away === mcclurmc_away is now known as mcclurmc === mcclurmc is now known as mcclurmc_away [19:51] mdeslaur: Done. [20:05] dang [20:06] I agree. [20:11] @pilot in === udevbot changed the topic of #ubuntu-devel to: Ubuntu 12.10 released | Archive: Open | Dev' of Ubuntu (not support or app devel) | build failures -> http://qa.ubuntuwire.com/ftbfs/ | #ubuntu for support and discussion of hardy -> quantal | #ubuntu-app-devel for app development on Ubuntu http://wiki.ubuntu.com/UbuntuDevelopment | See #ubuntu-bugs for http://bit.ly/lv8soi | Patch Pilots: infinity [20:28] cjwatson: is it possible to do both raid + luks encryption during preseed? [20:49] HI [20:49] hi [21:16] zul: i'm going to add to qemu-kvm.conf in raring an optional auto-mount of hugepages fs. do you think i should do so in the default /var/lib/hugetlbfs/group/kvm, or a shorter /run/hugetlbfs/kvm ? [21:17] hallyn: i really dont have an opinion [21:18] slangasek: ^ i assume there are LSB type considerations, among others.... [21:21] zul: there are two reasons to pick a custom location: [21:21] zul: 1. it's long to type out 'kvm -mempath /var/lib/hugetlbfs/group/kvm/page-xxxxxxx' :) (but that's not that good a reason) [21:22] zul: 2. libvirt is going to need to know the path, so we might want to pick our own so we always know where it is [21:23] i think the /run/hugetlbfs/kvm might be a good choice but that looks wrong; again, i have no opinion :) [21:23] hallyn: I don't think it matters either way under the FHS / LSB.
It would be nice if such things were standardized, but wishes and horses [21:24] zul: the problem with /var/lib/hugetlbfs/group/kvm/ is that the final pathname is dependent upon the supported hugepage sizes [21:24] hallyn: this is somewhat comparable to the kernel's rpc_pipefs filesystem, which Debian mounts under /var/lib and I've moved to /run - but the only reason I've moved it is because of bootstrapping problems for /var itself [21:24] actually that might be a wishlist bug worth filing against hugeadm [21:24] which I don't think apply here [21:24] it should have a /pagesize-default directory === Guest46508 is now known as dpb___ [21:24] slangasek: no, i'd do it here like this because otherwise libvirt will have to do it itself anyway, [21:24] so that it can hardcode a path in /etc/libvirt/qemu.conf [21:25] hallyn: right, there should be an agreed convention on where it should live; I'm just opining that /var/lib vs. /run doesn't matter much AFAICS === salem_ is now known as _salem [21:27] slangasek: for a new mount i wouldn't care where it goes, my question is more about whether i can make my own mount or whether i should use hugeadm --create-group-mounts=kvm [21:27] oh [21:27] slangasek: the problem with the latter is that the resulting path, /var/lib/hugetlbfs/group/kvm/pagesize-2097152/, [21:27] I have no informed opinion on that question :) [21:27] 1. is long, and 2. is not predictable across arches [21:28] slangasek: ok :) [21:29] slangasek: thanks [21:44] infinity: would you mind taking a peek at bug 1068199? [21:44] Launchpad bug 1068199 in eglibc (Ubuntu Lucid) "please add support for MAP_HUGETLB in eglibc for Lucid" [High,In progress] https://launchpad.net/bugs/1068199 [21:44] stokachu: doesn't that one already have a comment from me? [21:44] * infinity looks. [21:45] Oh, no. It doesn't. I'm thinking of another lucid bug, perhaps. [21:45] :P [21:46] cyphermox: if you get a chance could you check up on the status of bug 967091 [21:46] Launchpad bug 967091 in libvdpau (Ubuntu Precise) "Wrong tint in flash when it uses video acceleration" [High,Confirmed] https://launchpad.net/bugs/967091 [21:46] stokachu: Do various things need rebuilding against those new headers to make it all work? [21:47] infinity: the package to make use of this has been rebuilt and is in -backports i believe [21:47] stokachu: Sure, but wouldn't it need to be rebuilt against proper glibc headers? :P [21:48] ah.. well..ermm.. im not sure [21:48] stokachu: Unless they statically defined the constant in their own source, which would then mean I don't have to. [21:48] i would have to do some digging on that [21:48] arges: ^ do you know? [21:49] looking [21:49] cyphermox: scratch that, wrong person [21:50] stokachu, I think its a static thing... although we could probably test this right? [21:51] mdeslaur: bug 967091; could you take a look at this status when you get a chance? [21:51] Launchpad bug 967091 in libvdpau (Ubuntu Precise) "Wrong tint in flash when it uses video acceleration" [High,Confirmed] https://launchpad.net/bugs/967091 [21:51] arges: See, if it's just a static define, one could re-backport libhugetlbfs with the constant set, and be done with it. I could build a quick test package for you right now. [21:51] stokachu: ^ [21:51] arges: what was the other package in question again?? [21:51] * infinity does this.
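[ed. note: a minimal sketch of the two hugepage-mount options hallyn and slangasek weigh above. The group/permission handling is an assumption made for illustration, not what qemu-kvm.conf actually ended up shipping.]

    # Option A (sketch): a short, predictable mount point under /run
    KVM_GID=$(getent group kvm | cut -d: -f3)
    mkdir -p /run/hugetlbfs/kvm
    mount -t hugetlbfs -o mode=1770,gid="$KVM_GID" hugetlbfs-kvm /run/hugetlbfs/kvm

    # Option B: let libhugetlbfs manage the mounts; the final path embeds the
    # page size (e.g. .../pagesize-2097152 on x86), so it varies across arches
    hugeadm --create-group-mounts=kvm
    ls -d /var/lib/hugetlbfs/group/kvm/pagesize-*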
[21:51] infinity, so I remember the person who was affected by this could install the .deb from a newer release and it worked [21:52] oh yea thats right [21:52] infinity, but if they installed the version built in the lucid chroot (the lucid backported version) it did not work [21:52] even though they were the same exact versions of libhugetlbfs [21:55] stokachu, another easy experiment would be just to install the newer eglibc version (with the proper MAP_HUGE define) in lucid and see if it works. but I feel like I've already done that [21:56] arges / stokachu: New package incoming. [21:59] ok [22:01] arges / stokachu: http://people.canonical.com/~adconrad/hugetlb/ [22:02] If that package behaves correctly (It's built on lucid), then I think we should upload that. I'll keep the eglibc bug queued as well, for it I need to do a lucid upload for other reasons. [22:02] ok ill get this sent out for testing [22:02] Also, a review of the debdiff there would be appreciated. :P [22:03] thanks i just quickly looked at the diff and it looks straightforward and should hopefully work [22:03] Sarvatt, https://code.launchpad.net/~darkxst/ppa-purge/lp706774/+merge/137061 [22:05] infinity: lastly, this one isn't an sru but could you check to see if there is a precise upload for this? bug 1004775 [22:05] Launchpad bug 1004775 in network-manager (Ubuntu Precise) "NetworkManager restarts dnsmasq and adds host route on every IPv6 route lookup" [High,In progress] https://launchpad.net/bugs/1004775 [22:05] if it just needs sru written out i can do that but didnt know if it required sponsorship [22:06] stokachu: There's nothing uploaded for it in the precise queue, no. [22:07] stokachu: https://launchpad.net/ubuntu/precise/+queue?queue_state=1 for future helping yourself to said information. :) [22:07] ah sweet /me bookmarks [22:07] cyphermox: Care to poke at 1004775 with stokachu for precise? [22:07] bookmarked [22:07] infinity: ok [22:08] stokachu / arges: You can ignore the ? fluff there, the URL to remember is just /ubuntu/$dist/+queue [22:08] this one basically needs work in dnsmasq and nm [22:08] stokachu / arges: The drop-down lets you get at unapproved, new, etc. [22:09] cool yea this is very helpful thanks [22:10] cyphermox: is that different from what was done in quantal? [22:10] stokachu: Anyhow, I'm not actively monitoring that bug, so if you get that hugetlb backport tested for me and it's all good, give me the go-ahead and I'll sign and upload it. [22:11] Err, and add a bug reference to the changelog. :P [22:11] infinity: will do - ive already sent it off to be tested [22:11] * infinity adds the bug ref now, so he doesn't forget. [22:14] stokachu: not especially different, but one of the patches that really makes a difference is really not simple [22:15] cyphermox: ah ok, well if you could keep that bug on your radar for when you get some time to dig more into it [22:17] cyphermox: and as always if you need me to do the trivial stuff so you can concentrate on the patch just let me know [22:19] infinity: could you point me to someone who could answer a question about debian preseed with setting up RAID+LUKS? Or if you know if thats even possible. I can do one or the other but not both [22:20] stokachu: You want xnox. [22:20] stokachu: Maybe.
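[ed. note: a quick way to check which case infinity describes above, i.e. whether the glibc headers on lucid already provide MAP_HUGETLB or whether a backported libhugetlbfs has to carry the constant itself. The header paths and the 0x40000 value are the usual x86 ones and are given here as assumptions, not verified against the lucid packages.]

    # does the installed glibc define MAP_HUGETLB?
    grep -Rn MAP_HUGETLB /usr/include/bits/mman.h /usr/include/*/bits/mman.h 2>/dev/null

    # if not, a backport can define the constant statically in its own source,
    # roughly (C fragment shown as a comment; the value is architecture-dependent):
    #   #ifndef MAP_HUGETLB
    #   #define MAP_HUGETLB 0x40000
    #   #endif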
[22:20] xnox :D [22:21] xnox would certainly know the answer [22:22] ok cool ill try to catch up with him tomorrow during his local time [22:22] stokachu: here's what I'll do [22:23] just putting food in the oven so I can eat at some point and I'll poke at it now and until it's ready to upload, hopefully sometime before tomorrow [22:24] cyphermox: awesome, really appreciate this [22:32] stokachu, there's already an SRU in progress in precise, so if you want to help, actually independently verifying https://bugs.launchpad.net/bugs/995165 would help a huge amount [22:32] Launchpad bug 995165 in network-manager (Ubuntu Precise) "IPv4 connectivity broken after installing from ubuntu-12.04-alternate-amd64.iso" [High,Fix committed] [22:32] stokachu: however, it requires ipv6 connectivity while you do the install, and installing with -proposed enabled [22:35] cyphermox: ok ill see what i can do in the lab and update the case [22:44] stokachu: I'm actually not sure. I suspect not without hacking - things are not as nestable as they should be. :-( [22:46] cjwatson: ok cool we werent sure it was possible so ill relay that to them [22:47] cjwatson: you think maybe we should consider a feature request for something like this? [22:47] for future planning === cpg|away is now known as cpg [22:49] stokachu: status? [22:50] mdeslaur: your last comment #193 indicated it was still waiting on sru -- was curious if youve gotten any feedback on it yet or if I should try to work with the SRU team [22:51] it's still in the unapproved queue https://launchpad.net/ubuntu/precise/+queue?queue_state=1&queue_text=libvdpau [22:51] stokachu: I have not gotten any feedback from the sru team, no [22:51] stokachu: could you try pinging someone from the SRU team? if there's an issue, let me know. [22:52] mdeslaur: sure thing just didnt want to step on any toes if you were actively doing that -- i can see about getting it reviewed [22:52] thanks for the response [22:52] infinity: you up for one more? :) [22:52] stokachu: Sure [22:53] bug 967091 [22:53] Launchpad bug 967091 in libvdpau (Ubuntu Precise) "Wrong tint in flash when it uses video acceleration" [High,Confirmed] https://launchpad.net/bugs/967091 [22:53] stokachu: Though not quite sure when it'd fit - we really need to redesign the whole recipe format IMO :-/ [22:53] cjwatson: ok ill talk with mattrae and see if we want to file a request for that === cpg is now known as cpg|away [22:54] stokachu: np, thanks for doing a follow-up on it [22:54] mdeslaur: my pleasure [23:02] stokachu: Maaaaybe. [23:02] infinity: hopefully this one is just a click of a button :D [23:02] aol style [23:03] stokachu: Sure. Tell me all about it while I'm out smoking. [23:04] infinity: pretty straightforward, users seeing a blue tint on flash videos running nvidia-current/-updates [23:05] the fix affects libvdpau's behaviour for flash [23:06] infinity: i haven't dug through the source to tell you exactly what the patch is doing but maybe mdeslaur could shed some light on it [23:07] infinity: binary flash inverts two color channels when using vdpau acceleration.
Newer vdpau versions detect when flash is using it, and re-invert the two color channels [23:07] but i can tell you blue faces dont show up anymore :) [23:07] infinity: so flash videos don't have blue faces :) [23:08] it's a dirty hacky workaround, but there is no hope of adobe fixing the issue in flash itself [23:08] and it's been included in quantal+ and in debian's vdpau for a while now [23:08] stokachu: So, what's the button that needs pushing? Is this something I need to review in the queue, or release from proposed? [23:09] its in the unapproved queue atm [23:09] hi all it's paddy drunk as hell [23:10] https://launchpad.net/ubuntu/precise/+queue?queue_state=1&queue_text=libvdpau [23:10] stokachu: Check, having a look. [23:14] ttfn [23:16] stokachu / mdeslaur: Accepterificated. [23:16] hehe [23:17] infinity: thanks [23:17] infinity: awesomeeee [23:17] mdeslaur: ill get this tested and verified in the next few days to get it wrapped up [23:17] I'm going to miss all the andorian porn [23:17] lol [23:19] ok thanks all :D im off to a family movie night with fairly odd christmas woooot === cpg|away is now known as cpg
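[ed. note: the verification stokachu signs up for above would typically look something like the steps below; the queue URL follows the /ubuntu/$dist/+queue pattern infinity gave earlier, and installing from precise-proposed for the test is an assumption about the usual SRU workflow, not something stated in the channel.]

    # watch the libvdpau upload move out of the precise unapproved queue:
    #   https://launchpad.net/ubuntu/precise/+queue?queue_state=1&queue_text=libvdpau
    # once it is accepted into precise-proposed, install it and confirm the version:
    sudo apt-get install -t precise-proposed libvdpau1
    apt-cache policy libvdpau1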