[00:02] <RAOF> jordi: The pkg-config madness was, from memory, because configure would pick up on the native-arch pkg-config files for the biarch build - even when there weren't actually any biarch libs for those, which would cause linking to fail.
[00:21] <jordi> RAOF: hm, the pulse plugin, which is probably the most interesting one, wasn't built in the 64-bit version
[00:29] <RAOF> jordi: Is libpulse in amd64-libs?  If so, the magic *should* pick it up.
[00:31]  * RAOF actually looks at the filelist of amd64-libs
[00:32] <NCommander> hey TheMuso
[00:32] <RAOF> jordi: If http://packages.debian.org/sid/i386/amd64-libs/filelist is accurate, then it's not much of a surprise that the pulse plugin wasn't built!
[00:44] <TheMuso> Hey NCommander .
[00:44] <NCommander> TheMuso, how goes it?
[00:44] <TheMuso> NCommander: Well thanks. Yourself?
[00:44]  * NCommander is determining how print-architecture works in dpkg
[00:44] <NCommander> TheMuso, not bad. I have dpkg running on Gentoo ;-)
[00:53] <TheMuso> haha
[00:53] <NCommander> and now APT
[00:53] <NCommander> :-)
[00:54] <ajmitch> I wouldn't think they'd be hard to run on gentoo
[00:54] <NCommander> You'd be surprised
[00:54] <NCommander> There is a circular dependency
[00:54] <NCommander> dpkg depends on itself to determine build information
[00:54] <NCommander> (else dpkg --print-architecture and a whole lot of other things don't work right)
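The circular dependency NCommander describes can be sketched in shell: a bootstrap script can't ask dpkg for the build architecture before dpkg itself works, so it needs a fallback. This is an illustrative sketch, not the actual bootstrap tooling; the uname-to-Debian-arch mapping shown is an assumption.

```shell
# Sketch: determine the Debian architecture, falling back to mapping
# the kernel's machine name when dpkg is not yet functional.
arch=$(dpkg --print-architecture 2>/dev/null) || arch=""
if [ -z "$arch" ]; then
    case "$(uname -m)" in
        x86_64)  arch=amd64 ;;   # 64-bit x86
        i?86)    arch=i386  ;;   # any 32-bit x86 variant
        *)       arch=$(uname -m) ;;
    esac
fi
echo "$arch"
```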
[00:55] <ajmitch> so yay, you just installed dpkg & apt, now you have another debian-like system
[01:08] <NCommander> ajmitch, sorta, I'm rebootstrapping amd64
[01:09] <StevenK> NCommander: We did that already so you don't have to.
[01:10] <NCommander> StevenK, with everything as position independent executables ;-)?
[01:11] <zul> ok why?
[01:12] <StevenK> zul: Because NCommander has a lot of spare time, apparently.
[01:12] <zul> StevenK: well good for him then
[01:12] <NCommander> Having the archive exist as PIE executables will allow us to enable address space randomization
[01:13] <NCommander> Or in other words, it would be near impossible to execute a stack smash or return-to-libc attack
[01:21] <RAOF> NCommander: What's the performance impact on ia32 like?
[01:22] <NCommander> Bad
[01:22] <NCommander> 10-15% I think
[01:22] <NCommander> And no real improvement to security
[01:22] <NCommander> Due to the small address space
[01:22] <NCommander> Hence why ASR is only sane on 64-bit architectures
[01:22] <RAOF> The address space is still _pretty_ big, isn't it?
[01:22] <NCommander> It's been proven it can be brute-forced extremely quickly
[01:22] <NCommander> Something like 10 hours
[01:23] <RAOF> Fair enough.
[02:25] <mrooney> bryce: what is an EPR, re the fglrx r3xx bug?
[02:26] <bryce> Engineering Problem Reports
[02:28] <mrooney> bryce: hm, I see, so where is the report that it corresponds to?
[02:35] <bryce> mrooney: it is in AMD's internal bug tracker
[04:54] <dcolish> Any update-manager developers?
[04:54] <ScottK> Almost certainly sleeping.
[04:56] <dcolish> ah, well I wanted to try and understand some of the reasoning behind recent changes to update-manager and why it's reconfiguring my xorg. Can't seem to find anything clear on launchpad
[04:56] <wgrant> dcolish: You mean its commenting-out of inputdevices?
[04:57] <dcolish> yup, I mean I understand that HAL is supposed to be doing that work with the new autoconfig, but why when I make local changes does it not honor those?
[04:57] <wgrant> Because you should be using fdi files.
[04:58] <wgrant> See https://wiki.ubuntu.com/X/Config/Input
[04:58] <wgrant> Specifically https://wiki.ubuntu.com/X/Config/Input#hal
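For reference, the fdi override mechanism wgrant points at boils down to dropping an XML policy file under /etc/hal/fdi/policy/. A minimal sketch, where the file name and the XkbLayout value are illustrative assumptions:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- e.g. /etc/hal/fdi/policy/x11-input.fdi (file name illustrative) -->
<deviceinfo version="0.2">
  <device>
    <match key="info.capabilities" contains="input.keys">
      <!-- hal merges input.x11_options.* keys into the X server's
           input configuration for matching devices -->
      <merge key="input.x11_options.XkbLayout" type="string">us</merge>
    </match>
  </device>
</deviceinfo>
```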
[05:00] <dcolish> I see, maybe a link in xorg.conf would be helpful for overrides. I might never have found that link, and I've been doing a bunch of support help on the xubuntu channel
[05:00] <dcolish> A link to that page
[05:01] <wgrant> That's a good point.
[05:01] <wgrant> But mvo doesn't seem to be around yet.
[05:02] <dcolish> Let me see if I can find the project in launchpad. I'll try and add a note there
[07:00] <pitti> Good morning
[07:01] <Hobbsee> pitti!!!!!
[07:01] <pitti> infinity: erm, isn't it exactly the other way around? SRUs are always more urgent than jaunty? at least at this time, right after a release?
[07:02]  * Hobbsee throws rather hot gummy bears at pitti
[07:02] <pitti> infinity: last time I discussed that with cprov, I asked for SRU > devel, hm
[07:02] <pitti> Hobbsee: yummy! but wouldn't they melt?
[07:02] <Hobbsee> pitti: not sure.  somewhat, i think
[07:03] <soren> They do on pizza.
[07:04] <Hobbsee> you've tried this, i take it?
[07:04] <soren> I may have :)
[07:04] <StevenK> I've had gummy bears in ice cream
[07:05] <pitti> tkamppeter: doesn't seem to fix it for everyone? but an improvement in any case, so please commit it to bzr; once people in the bug are happy enough, I'll upload an SRU
[07:05] <StevenK> Morning pitti
[07:06] <soren> StevenK: In Denmark you can get ice cream with gummy bears preinstalled.
[07:06] <StevenK> soren: As you can here
[07:06] <soren> Well... Not ice cream, actually. Ice lollies.
[07:07] <StevenK> Well, I go to a shop that sells ice cream with a choice of stuff in it, and they mix it in front of you
[07:07] <soren> Neat.
[07:07] <StevenK> (Like chocolate biscuits, cookie dough, fruit pieces...)
[07:07] <soren> I've tried that once, but I'm quite sure they'd grind it up first. That doesn't really work well with gummy bears, I think.
[07:07] <StevenK> The gummy bears just get mixed in and go hard
[07:08] <StevenK> Mango ice cream with mango pieces, gummy bears and M&Ms
[07:09] <pitti> soren: I forgot, is that Häagen Dasz from Denmark, too? gooood stuff
[07:11] <wgrant> StevenK: That sounds rather awesome.
[07:11] <StevenK> wgrant: It's all kinds of awesome
[07:11] <StevenK> I might have to hit up a local to see if San Francisco has something similar
[07:11] <wgrant> Heh.
[07:12] <wgrant> Yay, UDS.
[07:20] <soren> pitti: What do you mean "too"?
[07:21] <soren> pitti: And no, Häagen Dasz isn't Danish, I'm afraid. It's good stuff, though :)
[07:21] <pitti> soren: as in "ice cream without gummy bears, but other good ingredients"
[07:23] <soren> pitti: Ah, I thought I missed someone talking about something else that was from Denmark. :)
[07:23] <soren> Gah...
[07:24]  * soren changes wifi driver
[07:24] <wgrant> What? How can something with äa in it not be Danish?
[07:24] <soren> For starters because we don't have ä in our alphabet :p
[07:25] <wgrant> Blah!
[07:25] <soren> We have a-z+æøå. No ä.
[07:27]  * StevenK is trying to remember which European languages use ä
[07:27] <wgrant> German is all I know.
[07:27] <wgrant> Probably Norwegian or similar.
[07:28] <StevenK> Finnish does
[07:28] <soren> Swedish and Norwegian.
[07:28] <liw> StevenK, Swedish, too
[07:29]  * soren is suddenly not so sure about Norwegian.
[07:29] <wgrant> $STEREOTYPICAL_EUROPEAN_LANGUAGE
[07:30] <soren> No, not Norwegian. My bad.
[07:31] <liw> StevenK, http://en.wikipedia.org/wiki/%C3%84
[07:31] <StevenK> Ah.
[07:32] <StevenK> German's use is different.
[07:32]  * wgrant just thought to go to Wikipedia too.
[07:32] <wgrant> But I didn't have a compose key set.
[07:32] <StevenK> In Finnish and Swedish, it's a separate letter, and in German it's an accent
[07:32] <wgrant> And was too braindead to consider copying it from the IRC window.
[07:34] <soren> StevenK: Are you sure about that?
[07:34] <soren> My German classes were in another millennium, but I seem to remember that it's a completely separate letter.
[07:35] <liw> soren, mine were only 20 years ago, and I remember it being a variant of a, for all practical porpoises
[07:35] <persia> soren, Wikipedia claims it's just "ae", and not actually 'æ'
[07:35] <tjaalton> hmm candy talk.. must bring some Turkish Pepper to UDS ;)
[07:36] <slangasek> persia: in reference to German?
[07:36] <wgrant> tjaalton: Turkish Pepper?
[07:36] <persia> slangasek, Yes.
[07:36] <tjaalton> wgrant: http://en.wikipedia.org/wiki/Tyrkisk_Peber
[07:36] <slangasek> yes, umlaut in German is a stylized e
[07:37] <soren> tjaalton: Ah, yes! Yet another Danish invention :)
[07:38] <wgrant> tjaalton: So it's Finnish and Danish, rather than Turkish? Strange names.
[07:38] <tjaalton> soren: hehe, yes :) But we ate the company, literally
[07:39] <tjaalton> (Linus's blog also mentions salmiac)
[07:39] <soren> tjaalton: Apparently. I didn't even know that :)
[07:40] <soren> I wonder when that happened..
[07:41] <soren> 1996, it seems.
[07:42] <tjaalton> soren: stores had them in big plastic jars, and since the candy is hygroscopic, they'd get stuck together. When I was a kid my mom got those jars cheaply from a store, and we then tried to break the massive blob with spoons :)
[07:42] <tjaalton> hmm, blob is not right..
[07:42] <wgrant> I was fairly amused that the Wikipedia page actually mentions that they're hygroscopic.
[07:43] <tjaalton> lump is better
[07:43] <wgrant> Blob could be right.
[07:43] <wgrant> But lump is probably better, yes.
[07:44] <tjaalton> the jar was maybe 40cm high, so my arm barely reached the bottom :)
[07:44] <soren> Heheh :)
[07:44] <tjaalton> (25y ago)
[07:46] <jordi> NCommander: I think there's a static dpkg build just for that kind of bootstrapping
[07:46] <jordi> could be wrong though
[07:46] <NCommander> Was it a bad thing that when I thought of jar, I thought of Java
[07:47] <NCommander> jordi, it's more of a toolchain issue, there are circular dependencies between glibc and gcc :-)
[07:48] <NCommander> I could PROBABLY get away with just rebootstrapping the glibc packages and the toolchain
[07:48] <NCommander> But I want to make sure everything gets built with pie, and I wanted to learn how to do a bootstrap for future reference
[08:36] <tkamppeter> pitti, I have already uploaded the new pstopdf to bzr yesterday (see my mail). It should at least fix a part of the bugs. My tests show at least that the bugs of mis-centered printouts and the problem of PostScript printers (Kyocera, Xerox, ...) making pstopdf crash are fixed. The garbage when printing with SpliX is perhaps SpliX itself.
[08:37] <pitti> tkamppeter: I'm just walking through the bugs
[08:37] <pitti> tkamppeter: some say that the new filter doesn't fix it
[08:37] <pitti> tkamppeter: so for those we either need more fixes, or remove the bug references from the changelog
[08:37] <pitti> tkamppeter: I think we should wait for more feedback for a couple of days, and if we have a clear "fixes it" or "doesn't fix", do an upload
[08:37] <pitti> WDYT?
[08:38] <tkamppeter> pitti, I did not get any feedback. The reporter of bug 293883 is travelling ...
[08:39] <pitti> tkamppeter: I saw quite a lot of feedback in the bugs...
[08:40] <tkamppeter> bug 293832 is without answer
[08:41] <pitti> tkamppeter: once you are content with a particular bug, please reassign it to cups and subscribe ubuntu-sru
[08:43] <tkamppeter> pitti, bug 293832 seems really not to be covered, but it is also fixable in pstopdf: you get it right by sending a PDF directly to CUPS, as then no pstopdf is involved, and you get it wrong if you send from evince, as evince sends the job in PostScript. Here I need your help as you have the actual printer and can reproduce the bug.
[08:43] <pitti> tkamppeter: I can reproduce bug 292690
[08:44] <pitti> tkamppeter: the page position is actually fine for me, I just get the garbage
[08:47] <tkamppeter> bug 289759 is fixed according to my tests and also according to one user, but there are still problems reported by another user
[08:48] <pitti> tkamppeter: ok; if the followup is an unrelated bug: as I said, please subscribe ubuntu-sru once you think that a particular bug is fixed by the patch
[08:58] <tkamppeter> pitti, the one who complained that it still does not work for him probably did not do the download correctly; his error_log shows a syntax error in pstopdf
[09:02] <tkamppeter> pitti, can you test bug 292690 with my new filter? Thanks.
[09:03] <pitti> tkamppeter: okay, let me walk over and test it there
[09:04] <pitti> tkamppeter: btw, I thought gtkprint would spit out PDF now instead of PS?
[09:04] <pitti> i. e. why does evince send out ps still?
[09:06] <seb128> pitti: hey
[09:06] <seb128> pitti: do you know if firefox3 and openoffice.org use gtkprint? the new comments on bug #248902 seem rather a cupsys issue no?
[09:14] <tkamppeter> pitti, unfortunately, some GTK/GNOME apps do not use the function of GTK Print for which I made the patch. They do the PostScript generation by themselves or with another library (something older, Cairo, ...).
[09:23] <pitti> tkamppeter: 292690 updated; unfortunately it doesn't fix it :(
[09:23] <pitti> seb128: ffox looks like it's using gtkprint, the dialog is exactly the same
[09:23] <pitti> seb128: OO.o still has its own thing
[09:24] <pitti> looking at the bug
[09:24] <pitti> seb128: hm, it says it works fine with lpr...
[09:24] <seb128> pitti: thanks
[09:24] <seb128> pitti: the recent comments
[09:24] <seb128> pitti: I think they hijacked the bug for a different issue
[09:25] <seb128> users tend to do that and that's annoying; when they don't know, they should not guess and should just open a new bug and mention the other one in the description
[09:26] <pitti> seb128: according to the log, they are likely suffering from the same pstopdf problems that tkamppeter is just working on
[09:27] <pitti> but also, seems they are trying to print to an SMB printer, which fails to log in
[09:27] <tkamppeter> pitti, so you get the garbage of bug 292690 also with the "gdi" driver? Not only with SpliX?
[09:28] <pitti> tkamppeter: right
[09:28] <pitti> tkamppeter: ISTR that splix didn't even work at all with pstopdf from intrepid final; shall I test that case again?
[09:29] <tkamppeter> pitti, yes.
[09:29] <pitti> ok, brb
[09:33] <pitti> tkamppeter: ok, nevermind; same result with splix+old pstopdf
[09:40] <pitti> tkamppeter: hm, if the bug is in the pstopdf filter, I wonder why it isn't reproducible when printing into a ps file
[09:43] <tkamppeter> pitti, it looks like it is the pstopdf filter, according to your comment: FYI, this happens if I print a PDF from evince. If I print it with "lp foo.pdf", the output is fine, so that's a temporary workaround.
[09:44] <tkamppeter> If you do "lp foo.pdf" CUPS gets a PDF file and it goes directly through pdftopdf->pdftoraster->rastertoqpdl
[09:44] <pitti> tkamppeter: if I remove that filter, cups should use the normal ps workflow, and it should work again?
[09:44] <tkamppeter> pitti, yes
[09:44] <tkamppeter> If you print from evince, the filter chain is pstopdf->pdftopdf->pdftoraster->rastertoqpdl
[09:45] <tkamppeter> pitti, it looks like for some files pstopdf does not use the correct page size for the conversion. This depends on certain conditions and needs to be investigated.
[09:46] <tkamppeter> Can you print your PDF file with evince into a PostScript file and then run
[09:47] <tkamppeter> cupsfilter -m application/vnd.cups-pdf -p /etc/cups/ppd/<yoursamsungqueue>.ppd evince.ps > output.pdf
[09:49] <pitti> mvo, soren: anyone who could test ubuntu-vm-builder in hardy-proposed? It stacked a plethora of fixes, but no feedback for them
[09:49] <mvo> pitti: I think I did the last sru so I'm probably not a good candidate
[09:49] <mvo> but I would definitely welcome that
[09:53] <tkamppeter> pitti, I have also found a bug in evince which can have an influence. Independent of your localization and /etc/papersize, the "Print Setup" of evince is set to "US Letter". You can set it to "A4", but this does not get saved. When you close evince and open it again it is back to "US Letter". This is a possible cause for outputting bad PostScript.
[09:55] <seb128> that's http://bugzilla.gnome.org/show_bug.cgi?id=525185
[09:56] <seb128> or bug #224882
[09:57] <tkamppeter> pitti, when I take a PDF file in A4 and simply print it with evince into a PostScript file I get a Letter-sized PostScript where the original content is shrunk by around 2 cm.
[10:00] <directhex> whereas i can't even print in evince
[10:01] <seb128> directhex: why not?
[10:01] <directhex> problem with brother printers, i forget the bug #
[10:01] <pitti> mvo: well, I don't mind that; if you know how the package works and use the actual -proposed .deb, please do it
[10:02] <pitti> tkamppeter: I'll try the test now
[10:02] <tkamppeter> pitti, evince prints only in the correct size if I manually set the paper size in "Page Setup".
[10:03] <tkamppeter> pitti, doing it this way should probably lead you to a good printout, but there is still a bug somewhere in the printing chain, as even if the page size is wrong the printer driver should not fill blank space with garbage.
[10:07] <pitti> tkamppeter: I tried the cupsfilter -m application/vnd.cups-pdf ... test, and it produces correct PDF
[10:07] <pitti> tkamppeter: I do get
[10:07] <pitti> ERROR: No %%BoundingBox: comment in header!
[10:07] <pitti> ERROR: No %%Pages: comment in header!
[10:07] <pitti> tkamppeter: that's with intrepid's original pstopdf
[10:09] <pitti> tkamppeter: bug updated (also tested with new pstopdf)
[10:16] <sbeattie> pitti: any idea why the apparmor package for intrepid-proposed is still listed as needs building? https://launchpad.net/ubuntu/+source/apparmor/2.3+1289-0ubuntu4.1
[10:16] <tkamppeter> pitti, as the garbage does not appear up to the post-pdftopdf state, it is produced by the driver. It seems that the input file has Letter dimensions somewhere inside, but the drawing is A4-sized. This perhaps makes the driver set the printer to Letter and then send an A4 bitmap, which leaves a small stripe of memory in the printer uninitialized, which leads to the garbage.
[10:16] <pitti> sbeattie: infinity told me that apparently soyuz got the build priorities the wrong way around, and first builds the thousands of jaunty packages, and then SRUs
[10:16] <pitti> sbeattie: let me bump the priority
[10:16] <sbeattie> pitti: thanks.
[10:17] <pitti> tkamppeter: that makes sense
[10:18] <tkamppeter> pitti, so take the output file of your last comment (/tmp/out.pdf, using the new pstopdf) and search it for the numbers 612 and 792. Which lines contain these numbers?
[10:19] <pitti> tkamppeter: not contained at all
[10:19] <pitti> tkamppeter: I can attach the file to the bug, if it helps you?
[10:21] <tkamppeter> pitti, please do so.
[10:22] <pitti> tkamppeter: done
[10:32] <tkamppeter> pitti, thanks.
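The 612/792 check tkamppeter suggested works because PDF page sizes are stated in PostScript points: US Letter is 612x792 and A4 is 595x842, so those numbers in a MediaBox reveal which size the converter chose. A sketch of the grep; the one-object PDF fragment here is fabricated purely to show the pattern, a real /tmp/out.pdf would be searched the same way:

```shell
# Fabricated PDF fragment for illustration (%% escapes a literal % in printf).
printf '%%PDF-1.4\n1 0 obj << /Type /Page /MediaBox [0 0 612 792] >> endobj\n' > sample.pdf
# 612 792 in the MediaBox would mean the page was emitted as US Letter.
grep -a 'MediaBox' sample.pdf
```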
[10:34] <tkamppeter> pitti, probably the PDF file has no completely white background but transparent areas at the borders (I do not know any program to visualize that). Here it could happen that the driver renders garbage or leaves the printer with uninitialized memory.
[10:38] <infinity> pitti: Not positive on when the rationale was laid down, but I suspect -proposed had the lower priority from back before we had the copy-from-proposed-to-updates workflow.
[10:38] <infinity> pitti: So proposed really was a low prio queue at that point (sort of a "yeah, test this junk if you get around to it" thing)
[10:39] <infinity> pitti: I still argue that devel breakage is often far more urgent than anything else though (ie: we upload something, find out it's HORRIBLY BROKEN, and want a fix in the next hour)
[10:40] <infinity> pitti: If anything in a released series is that broken, sure, we can special-case it, but there shouldn't be any world-ending, causes-machines-to-stop-booting stuff in released dists.
[10:42] <infinity> pitti: Classically, the queue orders on Debian buildds (or, the ones I ran) were "security > devel > proposed"
[10:43] <infinity> pitti: But we've usually treated "proposed" as "nice-to-have updates and bugfixes, but not world-endingly critical", I thought.
[10:44] <pitti> infinity: in confcall, will respond later
[10:44] <liw> seb128, how can I attach a crash report if apport doesn't even notice the crash?
[10:45] <seb128> liw: did you enable apport and start it as explained in the comment?
[10:45] <liw> seb128, apport has been enabled all along
[10:45] <seb128> liw: it's not running by default on stable versions
[10:45] <seb128> liw: are you sure?
[10:45] <liw> I checked...
[10:45] <liw> yes, I am sure
[10:45] <seb128> so talk to pitti about why apport is not working after his phone call
[10:46] <seb128> or you have a .apport-something configuration saying to ignore those crashes?
[10:46] <liw> I have not
[10:46] <seb128> do you have anything in /var/crash?
[10:46] <liw> /var/crash is empty
[10:46] <seb128> ok, so talk to pitti about why apport is not working for you
[10:47] <seb128> usually that's because it's not running, but if you did enable it, ran apport start, and still get the issue, that's an apport bug
[10:47] <liw> seb128, so you're not even going to care about .xsession-errors output?
[10:47] <seb128> no, we need a debug stacktrace for a crash
[10:47] <seb128> you can install all the dbg packages and use gdb if you really want
[10:48] <seb128> that's extra work for you and it'll not have automatic duplicate checking etc.; it'd be much nicer if you could try to get apport working
[10:49] <seb128> you are sure it's crashing btw? not exiting or aborting because apport catches only crashes
[10:49] <seb128> try running evolution under gdb and wait for the next crash
[10:50] <liw> seb128, "crash" is not a technical term with a clear definition: I use it here to indicate that evolution goes away without my asking it to quit
[10:51] <seb128> crash == segfault usually on bug trackers
[10:51] <liw> I don't think that is a universal definition
[10:51] <liw> but message understood, there's no point in reporting evolution bugs manually
[10:52] <liw> only via apport
[10:52] <seb128> not really but apport makes the job much easier for everybody
[10:52] <brrt> simple question about releases: is ubuntu-8.10-rc really different from ubuntu-8.10-release?
[10:52] <Hobbsee> brrt: yes
[10:53] <brrt> how much?
[10:53] <brrt> because I have got an iso from rc and if there isn't significant difference I just install rc
[10:53] <seb128> liw: you can read http://wiki.ubuntu.com/DebuggingProgramCrash and get a stacktrace using gdb
[10:53] <brrt> if there is I first download the release
[10:53] <seb128> liw: that should at least tell you if it's crashing or exiting, which is useful information
[10:53] <Hobbsee> brrt: this is a question for #ubuntu, but you can just do an upgrade after you install the rc, and you'll get to the final.
[10:54] <seb128> liw: the bug right now has not enough details to be useful
[10:54] <brrt> ok thanks
[10:54] <tkamppeter> pitti, can you try out whether you can get rid of the garbage when running the pstopdf conversion with other Ghostscript parameters? Edit PS2PS_OPTIONS and PS2PDF_OPTIONS in pstopdf.
[10:54] <liw> x-+
[10:54] <tkamppeter> pitti, doc about the options you find here: http://ghostscript.com/doc/current/Ps2pdf.htm
[11:09] <tkamppeter> pitti, I have uploaded a pstopdf with some parameters changed to the bug
[11:14] <pitti> infinity: depends, I think; one week after intrepid, proposed is more important; one week before jaunty release, devel is more important
[11:17] <persia> infinity, I suspect that autosync is a special case, as when 1500 packages get uploaded all at once (or more when Debian is less frozen), that can block critical bugfixes.
[11:18] <Mithrandir> proposed will generally be much smaller than devel, won't it?
[07:19] <infinity> persia: Well, I won't disagree that autosync time is a "special" time.
[11:20] <persia> Mithrandir, Almost assuredly.
[11:20] <infinity> pitti: You have the power to rescore proposed.  I'd suggest, honestly, that we just manually fiddle with proposed scores during autosync hell.
[11:20] <pitti> infinity: yes, that's what I'm doing; I'm fine with that
[11:22] <NCommander> infinity, I assume score values can only be calculated between archive sections, and not by releases?
[11:34] <pitti> tkamppeter: can you please upload that to the bug report? I have to finish my conf call, and then leave for two hours
[11:34] <pitti> tkamppeter: I'll test that in the afternoon then
[11:34] <pitti> liw: was it a python or SIGSEGV-like crash?
[11:34] <pitti> liw: in the latter case, you should have something in /var/log/apport.log
[11:36] <tkamppeter> pitti, I have already uploaded it.
[11:47] <liw> pitti, there is nothing in apport.log related to this (only to epiphany and synergyc)
[11:47] <Kano> hi, do you have autodetection of MacBooks for the special keyboard layout?
[11:47] <liw> pitti, afaik evolution is not implemented in python, so presumably not that, either
[11:52] <pitti> liw: does this work: bash -c 'kill -SEGV $$'
[11:53] <liw> pitti, yes, yes, apport works, before and after evolution crashes
[11:53] <pitti> liw: hm; if it crashes with an assertion, apport.log should have at least a note about it
[11:54] <pitti> liw: if you open evolution and kill -SEGV it manually, do you get an apport.log snippet?
[11:56] <liw> pitti, yes
[11:56] <liw> apport (pid 8382) Thu Nov  6 13:54:58 2008: called for pid 7418, signal 11
[11:56] <liw> apport (pid 8382) Thu Nov  6 13:54:58 2008: executable: /usr/bin/evolution (command line "evolution --component=mail")
[11:56] <liw> modinfo: could not open cdrom: No such device
[11:57] <pitti> liw: hm, I have no idea then; even if the core dump gets too large, apport.log has something
[11:58] <liw> pitti, there is no core dump
[11:58] <pitti> liw: maybe it didn't actually crash with a coredump, but through normal exit() or whatever
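pitti's "crash vs. normal exit()" distinction is visible at the shell level: a process killed by SIGSEGV yields exit status 128 + 11 = 139 to its parent, while a clean exit() does not, and apport only fires on the signal case. A small sketch:

```shell
# Sketch: distinguishing death-by-signal from a normal exit.
sh -c 'exit 0'
echo "clean exit status: $?"        # → clean exit status: 0
sh -c 'kill -SEGV $$' 2>/dev/null
echo "segfault status: $?"          # → segfault status: 139
```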
[12:00] <pitti> I'm off for two hours, bbl
[12:54] <gammy> G'day. In https://help.ubuntu.com/8.04/serverguide/C/user-management.html#where-is-root it does not mention that crontabs will stop functioning if the root account is locked.
[12:58] <joaopinto> gammy, which crontabs? user's crontabs do not depend on root's account
[12:58] <gammy> joaopinto: crontabs run by root. /etc/cron.d/ stuff for example.
[12:58] <joaopinto> for system crontabs, /etc/cron.d works just fine
[12:58] <gammy> Lies.
[12:58] <gammy> if I lock the root account, CRON just spits out "Authentication error" and never runs them.
[12:59] <gammy> This caused quite a lot of problems for me :P
[12:59] <gammy> Er, this is regarding ubuntu-server of course.
[12:59] <persia> gammy, That's a known regression with intrepid, and only happens if the root account has been unlocked and relocked.  I believe there is a fix underway, which should be available soon.
[12:59] <gammy> (As the documentation says)
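For context on the locking persia describes: `passwd -l` just prefixes the password hash field (field 2 of the /etc/shadow entry) with `!`, which is the state the affected cron/PAM stack then rejects for root. Simulated here on a fabricated shadow line, since inspecting the real file needs root:

```shell
# Sketch: what "locking" an account changes in its shadow entry.
# The entry below is fabricated; field 2 is the password hash.
entry='root:$6$abc$fakehash:14549:0:99999:7:::'
locked=$(echo "$entry" | awk -F: -v OFS=: '{ $2 = "!" $2; print }')
echo "$locked"                      # → root:!$6$abc$fakehash:14549:0:99999:7:::
```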
[13:00] <gammy> persia: Might you know where I can find information regarding this bug?
[13:01] <joaopinto> gammy, next time please do not call me a lier, I am using 8.04, the documentation you are pointing is for 8.04, and I do not have such a problem
[13:02] <gammy> "liar"
[13:02] <gammy> :|
[13:02] <gammy> And that was a joke.
[13:02] <gammy> Actually it's odd. I thought I joined #ubuntu-doc, not -devel
[13:03] <persia> gammy, I don't find it now, but I was sure I saw a bug with that description.  You might ask if anyone in #ubuntu-bugs remembers which one.
[13:03] <gammy> persia: Also - I don't have intrepid installed. Perhaps that *is* the issue in and of itself?
[13:04] <persia> Oh, if you're using hardy, then I have no guess why you should encounter that at all, as I don't remember seeing a report about that against hardy.  You might still ask in #ubuntu-bugs.
[13:05] <persia> Or maybe someone in #ubuntu can help you troubleshoot.
[13:05] <gammy> Oh. Wait. intrepid is the name of 8.10..
[13:05]  * gammy slaps forehead
[13:06] <gammy> I thought it was the name of some daemon :D
[13:06] <gammy> persia: Yes. this is indeed intrepid. So what you said is likely true
[13:09] <gammy> Quick silly question: What is a Sigfile? Cron now whines about not finding it.
[13:14] <gammy> Well, thank you for the assistance. Have a nice day!
[14:31] <pitti> tkamppeter: you rock!
[14:33] <pitti> tkamppeter: bug 292690 updated
[14:51] <tkamppeter> pitti, so the garbage actually goes away with my last approach of pstopdf (the third one in this bug report)? This means the bug is fixed?
[14:51] <tkamppeter> pitti, can you then do one other check?
[14:52] <tkamppeter> pitti, I have added "-dHaveTransparency=false" to the PS2PDF_OPTIONS in pstopdf. Can you test whether this is really needed by simply removing it (or setting it to true) and try again?
[14:54] <Kalidarn> hi, in relation to this bug http://bugs.kde.org/show_bug.cgi?id=167767 apparently fixed in the kubuntu distribution
[14:55] <Kalidarn> I'm wondering what svn trunk patch is being used in the KDE sources to fix it.
[14:55] <Kalidarn> nobody ever replied to my request to know if it was possible to backport the patch from svn trunk to kde 4.1
[14:55] <Kalidarn> (apparently the bug is fixed in svn trunk) ... it's KDE 4.1.3 (and yes, I don't use Kubuntu) but what I'm trying to work out is what patch you guys are applying.
[14:56] <Kalidarn> obviously it hasn't been applied to the 4.1 branch upstream
[14:57] <Riddell> hi Kalidarn
[14:58] <Riddell> Kalidarn: not sure off the top of my head, let me look
[14:58] <Kalidarn> thanks
[14:58] <Kalidarn> ;P cos this bug annoys the shit out of me ;)
[14:59] <Kalidarn> Christoph Cullmann 2008-08-18 20:17:22  Seems to work with current /trunk, please retest
[14:59] <Kalidarn> Dotan Cohen 2008-10-09 15:11:02  I can confirm that this is fixed in the KDE provided with Kubuntu 8.10. Thanks.
[14:59] <Kalidarn> that was back in the days of KDE 4.1, and I've been hoping it would be ported back to 4.1.X (i.e. 4.1.2 or 4.1.3) but it looks like it has not.
[15:00] <Kalidarn> unless of course dotan is using a KDE build from svn trunk but I doubt that
[15:00] <Kalidarn> because I doubt KDE would be 'providing an untagged build of kde with kubuntu 8.10' like he says
[15:01] <Riddell> we've no patches to kate in kde4libs
[15:02] <Kalidarn> :(
[15:02] <Kalidarn> hmmm....
[15:02] <Kalidarn> what build of KDE do you have atm?
[15:02] <Kalidarn> or version
[15:02] <Riddell> 4.1.3
[15:02] <Kalidarn> ya can u test that bug to see if it exists
[15:02] <Kalidarn> its really easy to test
[15:03] <Kalidarn> and type "kwrite blahblah"
[15:03] <stdin> https://bugs.launchpad.net/ubuntu/+source/kdesdk/+bug/259772
[15:03] <Kalidarn> should give u a message about no file being called that.. and you should notice that no text shows up when you type
[15:03] <Riddell> Kalidarn: I can't recreate the problem
[15:03] <stdin> is it that one?
[15:03] <Kalidarn> right it must be fixed then
[15:03] <Kalidarn> looking at that
[15:03] <Kalidarn> sounds like it
[15:04] <Kalidarn> >When I open "compile" then try to type stuff. The text is invisible.
[15:04] <Kalidarn> yar thats exactly what i just said ;)
[15:04] <Riddell> we've no patches to kwrite in kdebase
[15:04] <Riddell> mysterious
[15:05] <Kalidarn> that bug has a link to a duplicate of the one i pasted of upstream
[15:05] <Kalidarn> so where the hell did it get fixed. because I've obviously got an unpatched copy :P
[15:06] <Kalidarn> as im using archlinux (kdemod) atm
[15:06] <Riddell> might be an idea to see if it affects debian
[15:06] <Kalidarn> yer possibly.
[15:06] <Kalidarn> what's their dev channel?
[15:07] <Kalidarn> ah found it it's #debian-devel
[15:07] <Riddell> Kalidarn: #debian-qt-kde on oftc
[15:08] <Kalidarn> [01:37:48]--- Topic for ##debian-qt-kde is This unofficial channel is available as needed. Thank you for using freenode!
[15:08] <tkamppeter> pitti, did you answer my last message? My connection got stuck and I had to reconnect.
[15:08] <Kalidarn> and it has one user... chanserv ;)
[15:08] <Kalidarn> im sure chanserv will be lots of help muahahahaha
[15:08] <stdin> Kalidarn: on oftc
[15:08] <Kalidarn> oh ;P
[15:08] <Kalidarn> mm
[15:09] <Kalidarn> sorry im tired i missed that :P
[15:09] <stdin> irc.oftc.net
[15:29] <Skiessi> !info lmms
[15:29] <Skiessi> "2008-10-31: LMMS 0.4.0 has been released!"
[15:30] <Skiessi> is there some scheduled time for upgrading the packages? or could it be done now?
[15:32] <Riddell> Skiessi: now is the best time
[15:33] <Kalidarn> Riddell, apparently fixed in debian
[15:33] <Kalidarn> too
[15:33] <directhex> although another ":(" @ things being updated in ubuntu but not debian
[15:37] <Keybuk> pitti: how do I make debcheckout use bzr+ssh?
[15:37] <jcristau> Keybuk: debcheckout -a iirc
[15:40] <NCommander> jdong, ping
[15:40] <Riddell> Kalidarn: I don't see anything obvious in Qt either that would help it
[15:41] <Riddell> so not sure I'm afraid
[15:41] <Kalidarn> mm
[15:41] <pitti> Keybuk: right, -a should DTRT
[15:41] <jdong> NCommander: sup?
[15:42] <pitti> tkamppeter: will do the test now, I have to reboot that other computer with the printer
[15:47] <Keybuk> cjwatson: debconf merge redone in bzr
[15:47] <Keybuk> it'd be really nice to have some way of saying if there's a usual upstream cvs as well
[15:47] <Keybuk> XS-Upstream-Vcs-Bzr: ?:p
[15:50] <cjwatson> Keybuk: XS-Debian-Vcs-*
[15:50] <Keybuk> that contains an svn url
[15:50] <Keybuk> obviously lp:debconf works rather nicely
[15:50] <cjwatson> yes ...?
[15:50] <cjwatson> oh, you said "usual upstream cvs"
[15:50] <cjwatson> I didn't realise you meant bzr
[15:50] <Keybuk> usual one you merge from I mean
[15:51] <cjwatson> hmm
[15:51] <Keybuk> I guess we should really be adding a XS-Debian-Vcs-Bzr: in there as well along with our Vcs-Bzr
[15:51] <pitti> tkamppeter: setting it to true works as well; noted in the bug report
[15:51] <cjwatson> sort of seems like lp:<project> should always DTRT if it exists
[15:51] <Keybuk> is it XS-Debian- or XS-Original- ? :p
[15:51] <cjwatson> should be XS-Debian, I've already changed almost all the branches I care about from XS-Original to XS-Debian after being reminded by the discussion yesterday
[15:52] <cjwatson> I have an appropriate diff for debconf in my working copy but didn't want to commit it until you'd sorted out your merge
[15:52] <Keybuk> pushed
[15:53] <cjwatson> committed XS-{Original,Debian} fix
[15:53] <cjwatson> thanks
[15:53] <Keybuk> actually, from doing that, I can see why you generally like using bzr that way
[15:53] <tkamppeter> pitti, thanks. Can you also remove it and see whether pstopdf works without Garbage on the Samsung?
[15:53] <Keybuk> the merge is easy, since it's just "bzr merge"
[15:54] <pitti> tkamppeter: argh, just shut that machine down again :)
[15:54] <cjwatson> I do agree that it's a real mess when not all the files you need are in revision control
[15:54] <pitti> tkamppeter: so true isn't the default either?
[15:54] <pitti> tkamppeter: okay, will do
[15:54] <cjwatson> I was thinking of redoing ubiquity/oem-config so that 'debian/rules update' isn't necessary after checking out
[15:54] <Keybuk> yeah, just trying to think a way of sorting that out atm
[15:54] <cjwatson> and just commit all the d-i/source/ stuff
[15:54] <Keybuk> would be nice to move udev and upstart to bzr packaging
[15:54] <Keybuk> but they're both difficult
[15:54] <cjwatson> it would give us a more easily reconstructable historical record
[15:55] <Keybuk> upstart is maintained in bzr upstream, but without the autogenerated files
[15:55] <cjwatson> udev due to git upstream?
[15:55] <Keybuk> so I'd need an intermediate "tarball" branch
[15:55] <Keybuk> and right
[15:55] <Keybuk> udev uses git
[15:55] <cjwatson> git upstream is actually handleable
[15:55] <cjwatson> it just takes a little creativity
[15:55] <Keybuk> stick a .git and .bzr in the same directory
[15:55] <Keybuk> remember to type "git" when updating from upstream
[15:55] <cjwatson> didn't mean that :-)
[15:55] <Keybuk> and "bzr" when committing ;)
[15:55] <cjwatson> git fast-export | bzr fast-import -
[15:55] <Keybuk> oh, I don't know that one?
[15:56] <cjwatson> lp:~bzr/bzr-fastimport/fastimport.dev
[15:56] <cjwatson> it's one of the category of tools that you often have to hack a bit to make it do what you want; the flip side is that it isn't a black box at all so you *can* hack it to deal with weird special cases
[15:56] <Keybuk> what do you do?
[15:56] <Keybuk> git checkout in one directory
[15:57] <Keybuk> maintain a bzr import alongside, and run that to keep it up to date?
[15:57] <cjwatson> right
[15:57] <Keybuk> then a third packaging directory alongside that?
[15:57] <cjwatson> my import script for debian-policy looks like this:
[15:57] <cjwatson> #! /bin/sh
[15:57] <cjwatson> [ -d .bzr ] || bzr init-repo .
[15:57] <cjwatson> (cd ~/src/packages/debian-policy/git/policy && git fast-export --signed-tags=strip master --tags) | bzr fast-import -
[15:57] <cjwatson> that creates lp:~kamion/debian-policy/master
[15:58] <cjwatson> and then lp:~ubuntu-core-dev/debian-policy/ubuntu is a branch from that
[15:58] <cjwatson> this is strictly inferior to git imports in Launchpad, and it's possible that I might have to rebase or something when those turn up
[15:58] <cjwatson> but I can probably cope with that
[16:00] <cjwatson> I actually found bzr fast-import when I was trying to do some very, very custom one-time imports
[16:00] <cjwatson> I have a long-term project to migrate all my private Debian svn packaging branches over to bzr branches that are true branches of upstream
[16:00] <cjwatson> and I want to keep all the history intact
[16:01] <pitti> tkamppeter: tested, bug updated
[16:01] <cjwatson> bzr fast-import is hackable enough to let me say "OK, so this revision here, I actually want it to be a merge from that revision of that branch over there as well as including the patch content it claims"
[16:01] <Keybuk> why init-repo ?
[16:02] <cjwatson> bzr fast-import really likes to have a repository - I think it's because it can potentially import multiple branches
[16:02] <cjwatson> the short answer is "because it broke when it didn't" :-)
[16:02] <Keybuk> bzr: ERROR: exceptions.KeyError: None
[16:02] <Keybuk> :-/
[16:02] <cjwatson> what's the URL for udev git?
[16:03] <Keybuk> git://git.kernel.org/pub/scm/linux/hotplug/udev.git
[16:03] <cjwatson> git imports are something like number two on the LP code priority list, so this should get sorted out soon even if we can't get bzr fast-import working
[16:04] <Keybuk> hmm, actually, git fast-export is erroring
[16:04] <Keybuk> maybe that's what's choking it
[16:05] <cjwatson> sometimes need to tweak the options there
[16:05] <Keybuk> seems to be --tags ?
[16:05] <Keybuk> what does that do
[16:05] <cjwatson> note that the output is a text stream - in extremis it can make sense to sed the bugger :)
[16:06] <cjwatson> git help rev-parse
[16:06] <cjwatson>        --tags
[16:06] <cjwatson>            Show tag refs found in $GIT_DIR/refs/tags.
[16:06] <cjwatson> you can leave that out reasonably enough
[16:07] <Keybuk> yeah, without that it seems to work nicely
[16:07] <Keybuk> it's done 1,000 commits so far
[16:07] <cjwatson> do check the output pretty carefully; I suggest diffing at some specific revisions or something
[16:07] <cjwatson> and check things like exec bits that won't show up in diff
[16:08] <Keybuk> meh, tracebacked again :-/
[16:08] <cjwatson> I have a few crazy patches lying around that deal with complicated directory structure rearrangements
[16:09] <cjwatson> but they slow it down a lot and I'm not entirely confident in them
[16:09] <Keybuk> I'll have to debug that a bit more later
[16:09] <cjwatson> anyway, feel free to file it in the category of "neat idea, needs polish" :)
[16:09] <Keybuk> this cycle I want to drop all of our patches to udev
[16:10] <cjwatson> hmm, yeah, breaks for me too
[16:10] <cjwatson> bzr: ERROR: parent_id {etcudevslackware-20081106160805-210n85iwk7nvgmkv-1477} not in inventory
[16:10] <Keybuk> (of which there are only really rules ones remaining)
[16:10] <cjwatson> even with my crazy patches
[16:10] <tkamppeter> Thanks, pitti, uploaded into the BZR and to upstream (OpenPrinting). I have also updated changelog. Can you now upload it to Debian and Jaunty? Then we will request an Intrepid SRU for the pstopdf fix (and perhaps also to your AppArmor config fix).
[16:10] <pitti> tkamppeter: yep, I planned to SRU the apparmor one, too
[16:10] <cjwatson> Keybuk: I have once done the trick of having a single directory in two revision control systems at once ...
[16:11] <pitti> tkamppeter: are all other referenced bugs confirmed fixed with the new script?
[16:11] <Keybuk> cjwatson: the nice thing is you just do "git pull;bzr commit"
[16:11] <cjwatson> a previous company had a CVS repository that you could check out as ~/.vim/, with some handy bits and pieces
[16:11] <Keybuk> the bad thing is you don't really end up with a useful bzr repository as a result
[16:11] <cjwatson> I had my home directory in svn, including ~/.vim/
[16:11] <cjwatson> so I just made it be both at once and didn't necessarily add all the files on either side
[16:14] <liw> mvo, does python-apt handle translated package descriptions?
[16:15] <pitti> tkamppeter: hm, did you push?
[16:16] <timrc> Does apt try to install packages with no inter-dependencies in parallel or is package installation done linearly regardless of relationship (or lack there-of) ?
[16:17] <wasabi> Linearly.
[16:17] <wasabi> Odd race conditions could exist otherwise in scripts invoked by the individual packages.
[16:17] <pitti> tkamppeter: e. g. bug 282186 isn't tested yet; mind if I drop the bug reference and instead we ask the reporter to test with the new version?
[16:17] <wasabi> timrc: Why that question?
[16:18] <timrc> wasabi: I was just curious, I kind of like the idea of efficiency... but I see your point
[16:18] <wasabi> Well, there are some things which can be done to speed up dpkg.
[16:18] <wasabi> Some of which I've played with. Which is why I find your question interesting.
[16:18] <wasabi> The largest problem with it is the sync IO that it ends up doing.
[16:18] <tkamppeter> pitti, confirmed as fixed are: bug 292690, bug 289759.
[16:19] <jdong> wasabi: it's hardly ever the unpack phase IMO that can benefit from parallelization
[16:19] <wasabi> jdong: it is.
[16:19] <pitti> tkamppeter: yep, just walking through them
[16:19] <tkamppeter> bug 282186 I could reproduce and with my fixed pstopdf it is fixed.
[16:19] <jdong> wasabi: look at postinst/configure phases
[16:19] <wasabi> But not the kind of parallelization you think of.
[16:19] <jdong> wasabi: it's annoying when update-initramfs is spending 30 seconds compressing modules, then scrollkeeper spends another 10s pegging CPU , etc
[16:19] <pitti> tkamppeter: and e. g. 293832 is confirmed not fixed
[16:19] <wasabi> Unpacking causes many many many IO blocks. That is, when unpacking a file, it stats for an existing file, then extracts, then stats, then extracts. Each stat waits for IO.
[16:19] <pitti> tkamppeter: can you please bzr push?
[16:20] <tkamppeter> bug 293832 is not fixed, it seems that that one is not caused by pstopdf.
[16:20] <jdong> wasabi: I'm not saying unpacking isn't a big time waster, I'm saying that parallelization probably will benefit the postinst state a lot
[16:20] <wasabi> jdong: No, really. You can speed up an apt run by about two times by making a little Pre script that issues async requests to stat every file involved.
[16:20] <wasabi> Done it.
[16:20] <jdong> but as mentioned, the race conditions involved are really annoying
[16:20] <timrc> wasabi: that's really interesting, actually
[16:20] <tkamppeter> pitti, sorry, I have forgotten that. Now I have done the "bzr push".
[16:21] <jdong> wasabi: I am sure statting all the files en masse will help
[16:21] <wasabi> Yup, it helps a *lot*
[16:21] <jdong> but that's not really parallelism
[16:21] <pitti> tkamppeter: thanks; I'll drop the unconfirmed patches and prepare the sid/jaunty upload now
[16:21] <wasabi> No, it's not.
[16:21] <jdong> in fact that's quite the opposite
[16:21] <jdong> that's avoiding parallelism due to seek/wait times :)
[16:21] <wasabi> Well, it's parallelizing IO, in a way.
[16:21] <jdong> no, it's deparallelizing IO :)
[16:21] <jdong> rather, reordering it intelligently
[16:21] <jdong> or optimistically.
[16:21] <wasabi> Yeah, you are right.
[16:21] <tkamppeter> pitti, the reporter of bug 293883 is travelling, so we do not know.
[16:22] <wasabi> Issue all the async stats, the kernel can reorder them, do them in fewer requests.
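A toy sketch of the pre-stat trick described above — the helper name is made up, and a Python thread pool stands in for the shell hack's forked stat(1) processes; it does nothing but prime the kernel's metadata caches:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def prestat(paths, workers=16):
    """Issue many stat() calls up front so the kernel can reorder and
    batch the metadata reads; later synchronous stats then hit warm
    caches.  Returns how many of the paths actually existed."""
    def probe(p):
        try:
            os.stat(p)
            return 1
        except OSError:
            return 0
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(probe, paths))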
[16:22] <tkamppeter> pitti, which unconfirmed patches?
[16:22] <pitti> tkamppeter: I know; we'll just wait
[16:22] <jdong> but between instaling things fast and installing them reliably, I think I know which one I prefer :)
[16:22] <wasabi> jdong: I think though it's a simple way to get some large speed increase out of dpkg.
[16:22] <pitti> tkamppeter: sorry, "unconfirmed bug refs"; but you did already
[16:22] <wasabi> Well, pre-statting does not cause unreliability.
[16:22] <jdong> wasabi: I think the main "brokenness" is stats are too expensive :)
[16:22] <timrc> To do parallelization would require you to walk dependency trees and, if you want to absolutely avoid configuration conflicts, walk the file lists too... and the return would be greatly diminished in terms of time and even the probability that you'd be able to install in parallel
[16:22] <wasabi> jdong: The main problem with any IO is that it's sync. It should be async.
[16:22] <jdong> wasabi: it also doesn't necessarily help a lot on filesystems with fast stats.
[16:22] <jdong> i.e. reiserfs
[16:23] <wasabi> Well, I don't think it's realistic to parallelize package installs.
[16:23] <wasabi> So, that's a no go.
[16:23] <pitti> tkamppeter: so for 282186 you could reproduce it yourself and it's fixed for you?
[16:23] <jdong> wasabi: for now I agree
[16:23] <wasabi> The effort to do that would be massive, and package maintainers would have a huge new thing to consider.
[16:24] <wasabi> With many many corner cases.
[16:24] <wasabi> That only exhibit on SMP machines.
[16:24] <jdong> wasabi: yeah, brings a WHOLE new set of QA challenges to the plate
[16:24] <tkamppeter> pitti, yes.
[16:24] <timrc> My other gripe with apt right now may actually be a personal problem... I find myself in situations where I'll absentmindedly try to install something from the console with aptitude while synaptic is open... it'd be great if there was an 'apt queue' which all these various apt front-ends would queue packages on... let the queue handle ordering and installation
[16:24] <jdong> I doubt all of our postinsts are parallel-safe
[16:24] <pitti> tkamppeter: cool, thanks; I modified the bug accordingly
[16:24] <wasabi> I doubt many of them are.
[16:24] <wasabi> And I doubt we have any sort of ability to vet them
[16:24] <jdong> timrc: that would be cool but difficult to implement
[16:24] <wasabi> timrc: PackageKit or whatever is supposed to solve that.
[16:25] <wasabi> Though I have no idea how Ubuntu views that.
[16:25] <timrc> wasabi: hm, I'll investigate... thanks for the tip :)
[16:25] <jdong> timrc: particuarly if you install package a, b,c, then queue d which conflicts b and a, etc etc
[16:25] <wasabi> Nor whether it's even sane software.
[16:25] <wasabi> Synaptic should not lock the database while open.
[16:25] <wasabi> That should be fixable.
[16:25] <tkamppeter> pitti, we have three bug confirmed to be fixed by my new pstopdf, and they are all of high impact, so I think we can do the SRU even without waiting for bug 293883
[16:25] <jdong> wasabi: I think it must in order to consistently build a package marking set to apply
[16:25] <pitti> tkamppeter: yes, I agree
[16:26] <wasabi> jdong: Can't it open the database only when it needs to?
[16:26] <pitti> tkamppeter: just saying that I don't like to auto-close 293883 until we got confirmation; but I'll ask there to test the intrepid SRU
[16:26] <timrc> jdong: why wouldn't you be able to detect that conflict with a queue implementation?
[16:26] <jdong> wasabi: how does it know it doesn't need to recalculate every previous marking every time you make a selection?
[16:26] <wasabi> jdong: It might. Can't it do that by reading the database once?
[16:26] <jdong> timrc: you would be able to, but that might abort previously queued installations in ways previous programs are not aware
[16:27] <jdong> timrc: and the actual set of packages to be installed or removed might change after you agreed to the install
[16:27] <wasabi> jdong: Launch, read. Operate on read information. Apply: Open, check to see if marked stuff is still sane, apply.
[16:27] <jdong> wasabi: what if it is no longer sane?
[16:27] <jdong> wasabi: undo all the user's markings and start over again?
[16:27] <wasabi> Naw, just make them right. :0
[16:27] <jdong> wasabi: I think locking the DB and preventing other package installs while the user is marking is entirely sane
[16:27] <jdong> wasabi: "make them right" might be a relative term :)
[16:28] <jdong> wasabi: if I install apache-mpm-prefork and a bunch of strict subdepends while the user marks apache-mpm-worker, have fun convincing the userbase what is the right thing to do :)
[16:29] <pitti> tkamppeter: btw, do you know about Mike's plans to adopt the PDF workflow/filters?
[16:29] <wasabi> Ahh well. timrc, it's pretty trivial to do the prestat thing.
[16:29] <wasabi> I had some code for it, I seem to have lost it.
[16:30] <timrc> wasabi: yeah, I like that idea
[16:30] <wasabi> pre-stat isn't how it should be done though.
[16:30] <wasabi> It's just an interesting way to test the idea.
[16:30] <jdong> read: hack.
[16:30] <wasabi> yeah.
[16:30] <jdong> :)
[16:30] <jdong> not saying we don't do the same anyway *cough readahead-list*
[16:30] <wasabi> What should happen is dpkg should take all the operations it intends to do, all the statting, and submit them using Real Async IO
[16:30] <wasabi> And then as those requests complete, issue new requests.
[16:31] <wasabi> The dpkg-prestat crap I did just spawns a bunch of forks of 'stat'
[16:31] <wasabi> So, bunch of useless processes.
[16:31] <wasabi> But it was still quite a speed improvement.
[16:31] <wasabi> Quick little shell script hack.
[16:31] <jdong> wasabi: probably building a statcache using some batch operation at the kernel/vfs level would be faster even
[16:31] <jdong> and much less thrashing around
[16:31] <wasabi> jdong: yeah. I know almost nothing about the kernel's IO layer, nor async APIs
[16:32] <jdong> probabilistic/markov stat associative readahead :)
[16:32] <jdong> lol kidding
[16:32] <wasabi> But the idea of an async API is you should just be able to call async_stat(file, cb) repeatedly.
[16:32] <wasabi> And the kernel should do its own reordering and smartness, and cb should be invoked as results filter in
[16:32] <wasabi> or yeah, some sort of batch submission.
[16:33] <wasabi> But I doubt we have that.
[16:33] <tkamppeter> pitti, Mike did not answer anything about his plans on http://www.cups.org/str.php?L2897
[16:33] <wasabi> All my little hack was doing was priming the page cache.
[16:33] <tkamppeter> CUPS bug 2897
[16:33] <jdong> wasabi: right, we could use a bunch of kernel-layer APIs for priming the page cache
[16:33] <wasabi> Well, i'd still not prime the page cache.
[16:33] <wasabi> As you can never guarantee the page cache size.
[16:34] <wasabi> So you could end up dumping too much, and defeat yourself.
[16:34] <wasabi> You just need an async queue that can be invoked as appropriate depending on the device's RAM, etc.
[16:34] <jdong> wasabi: not really, Windows prefetching mechanisms don't suffer from those issues.
[16:34] <wasabi> How not?
[16:34] <jdong> a smarter / more aware pagecaching system isn't entirely unreasonable
[16:34] <wasabi> If I, say, stat 1000 files, and the last 500 files throw out the first 500.
[16:34] <wasabi> Then I haven't helped.
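One way to avoid that self-defeating case — a warm-up that evicts its own earlier reads — is to cap the pre-read at an explicit byte budget. A sketch, with the function name and the budget policy invented for illustration:

```python
import os

def warm_cache(paths, budget_bytes):
    """Read file contents into the page cache, stopping before the byte
    budget is exceeded, so the warm-up can't evict itself.  Returns the
    list of paths that were actually warmed."""
    spent = 0
    warmed = []
    for p in paths:
        try:
            size = os.stat(p).st_size
        except OSError:
            continue            # missing file: nothing to warm
        if spent + size > budget_bytes:
            break               # budget exhausted; stop rather than thrash
        with open(p, "rb") as f:
            while f.read(1 << 20):  # read in 1 MiB chunks; data lands in page cache
                pass
        spent += size
        warmed.append(p)
    return warmed
```

How to pick the budget (some fraction of currently free memory, presumably) is exactly the hard part being debated here; the sketch just takes it as a parameter.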
[16:34] <jdong> wasabi: it builds "bundles" of "caches" in C:\Windows\Prefetch
[16:35] <wasabi> Oh. I don't think that's useful.
[16:35] <wasabi> That's for fixing crappy software.
[16:35] <jdong> wasabi: whether or not the extra IO overhead it generates is useful is up for debate
[16:35] <wasabi> That software should be using async APIs to begin with.
[16:35] <wasabi> And there would be no problem.
[16:35] <jdong> wasabi: async APIs doesn't solve disk seeking issues.
[16:35] <jdong> wasabi: i.e. starting up openoffice
[16:35] <jdong> wasabi: the second startup on Windows even after a coldboot is almost double the speed
[16:36] <wasabi> If you start up open office, and the first thing the code does is issue an async request for each bit of data it will need throughout the entire process, and then as those pieces arrive, it operates on them, then the problem solves itself.
[16:36] <pwnguin> if you do async writes, you better be prepared to handle async write failure
[16:36] <jdong> wasabi: not really, there's sequential dependencies on each component.
[16:36] <jdong> wasabi: loading an entire program by async storms is entirely nontrivial
[16:37] <wasabi> How deep?
[16:37] <jdong> wasabi: probably a linear chain
[16:37] <wasabi> And how are those dependencies expressed?
[16:37] <jdong> wasabi: probably by natural imperative flow of the startup procedure
[16:38] <wasabi> I dunno. I'd be curious to see it graphed.
[16:38] <jdong> wasabi: at any rate, all I'm pointing out is that the Windows prefetch layer can, without any modification to applications, often turn a random set of read operations into a single contiguous IO request
[16:38] <wasabi> Yeah. I get ya.
[16:38] <jdong> and it wouldn't be entirely unreasonable for us in Linux world to get something like that
[16:39] <wasabi> We've tried before.
[16:39] <wasabi> With various levels of success
[16:39] <jdong> wasabi: AFAIK the COW system in ZFS and friends already do similar things with being able to represent random writes as a single sequential write
[16:39] <wasabi> preload - adaptive readahead daemon
[16:39] <pwnguin> jdong: ever seen seekwatcher?
[16:39] <jdong> wasabi: preload is a pretty ugly userspace hack though
[16:40] <wasabi> yeah, it is.
[16:40] <jdong> wasabi: it could be done better :)
[16:40] <wasabi> So we were talking about dpkg.
[16:40] <jdong> pwnguin: no, heard of it
[16:40] <wasabi> Superfetch-like stuff doesn't really apply.
[16:40] <wasabi> As the IO semantics are not repetitive between runs in any way.
[16:40] <jdong> wasabi: it is a seek-heavy workload
[16:41] <wasabi> Sure, but there's nothing to record.
[16:41] <wasabi> You can't record somebody installing gnome, and then use it to usefully make installing eclipse faster.
[16:41] <jdong> well why are you seeking so much?
[16:41] <jdong> there's got to be a reason why your requests are seemingly random
[16:41] <wasabi> They're not, they're just ordered with waits in between.
[16:41] <jdong> and there must be a way to group that
[16:41] <pwnguin> jdong: ah; i think it'd be neat to chart out readahead
[16:41] <jdong> why are there waits in between?
[16:42] <wasabi> for each file { stat file; wait for stat; write new file; relink; }
[16:42] <jdong> wasabi: what if after the 5th file statted, the kernel has a probabilistic model of the next 500 stat requests from the process and begins to lookahead? :)
[16:42] <wasabi> The writes get deferred, but the stats must be finished before proceeding.
[16:42] <wasabi> jdong: Then that's awesome. but the only way to build that model is for dpkg to submit it.
[16:42] <jdong> wasabi: this can probably be done with a markov chain type model for each process
[16:43] <wasabi> for each file { submit stat request to kernel; callback to X }    X  { write new file; relink }
[16:43] <jdong> wasabi: generic implementation-agnostic probabilistic readahead at the kernel level :)
[16:43] <wasabi> Dude. it can't figure out what dpkg is about to do
[16:43] <wasabi> It has no way to know the files dpkg is about to work on.
[16:43] <jdong> wasabi: sure it can. from experience.
[16:43] <wasabi> What, reading the .deb files itself?
[16:43] <jdong> wasabi: the first set of writes will be expensive
[16:43] <jdong> wasabi: the next time you unpack the same app it will be inexpensive.
[16:43] <wasabi> Sure, but that doesn't happen that often.
[16:44] <jdong> we can prepopulate this cache per-install
[16:44] <wasabi> Oh. I don't think that's going to work at all.
[16:44] <wasabi> That's ungodly complicated.
[16:44] <pwnguin> http://oss.oracle.com/~mason/seekwatcher/
[16:44] <wasabi> Takes up space for a profile of each possible installed app?
[16:44] <jdong> wasabi: sure it is, but it's a properly engineered solutions with applications to EVERY app that is IO intensive.
[16:44] <wasabi> Somehow you need to have this huge profile daemon thing... dpkg still needs to somehow submit the profile to it.
[16:44] <jdong> wasabi: how much space do you think a list of blocks needs?
[16:45] <wasabi> They're not blocks.
[16:45] <wasabi> Each install moves the file.
[16:45] <pwnguin> is apt percieved to be slow?
[16:45] <jdong> pwnguin: some people do stare at their apt installs
[16:45] <jdong> for one reason or another.
[16:45] <wasabi> I don't think it's slow.
[16:45] <wasabi> I just think it can be much faster. :)
[16:45] <jdong> I can't think of ONE performance-critical usecase of apt
[16:45] <pwnguin> i mean, scrollkeeper
[16:45] <wasabi> Triggers removed the only complaint I had.
[16:45] <pwnguin> jdong: install?
[16:46] <jdong> pwnguin: we don't even use apt to install these days
[16:46] <pwnguin> dist-upgrade?
[16:46] <wasabi> I do. :0
[16:46] <jdong> except on the alternate
[16:46] <cjwatson> and server
[16:46] <wasabi> I've never used the Live CD installer.
[16:46] <wasabi> And I probably never will. heh.
[16:46] <dilinger> hi, how do i get myself off a bug notification list?
[16:46] <dilinger> https://bugs.launchpad.net/ubuntu/+source/nvidia-graphics-drivers-177/+bug/294527
[16:46] <liw> incidentally, wasn't installing from the live cd supposed to be faster than from the alternate?
[16:46] <jdong> has anyone ever tried doing upgrades on a tmpfs+ext3 unionfs? :D
[16:46] <jdong> liw: of course
[16:46] <dilinger> i was automatically added to that bug because i once did openafs stuff, but i don't work on it anymore.. and i see no way to unsubscribe myself
[16:47] <cjwatson> dilinger: https://bugs.launchpad.net/openafs/+subscribe maybe?
[16:47] <liw> because when I did ISO testing under kvm, I could start booting a live install, then start it installing, and then boot+install+test an alternate ISO in parallel
[16:47] <liw> might be a kvm quirk, of course
[16:48] <jdong> liw: the livecd is just a simple copy procedure for installing while the alternate cd uses dpkg -i on a bunch of debs
[16:48] <liw> jdong, yes, I know that
[16:48] <dilinger> cjwatson: that offers me the option to subscribe
[16:48] <jdong> liw: a VM also restricts your block device to a much smaller seek radius
[16:48] <cjwatson> dilinger: I don't know, then - I suggest asking #launchpad for advice
[16:48] <jdong> liw: and in fact on my main system I can fit an entire VM's blockdevice and installation image into pagecache
[16:48] <cjwatson> liw: how much memory did you give the live install?
[16:49] <jdong> and I could do a 2 minute live install from start to finish
[16:49] <liw> cjwatson, a gigabyte (same as the alternate)
[16:49] <cjwatson> liw: a gotcha is that kvm's default memory size is 128MB, which dooms the live CD to swapping a lot
[16:49] <cjwatson> liw: ok
[16:49] <liw> jdong, I could fit both kvm's, both isos, and both virtual disk images in RAM at once, so I doubt that was the problem
[16:50] <jdong> liw: the question is whether or not the OS was doing that :)
[16:50] <jdong> on a 1TB XFS volume it was quite apparent that was the case
[16:50] <jdong> pdflush disabled
[16:50] <jdong> (no I don't recommend that setup :D)
[16:52] <pitti> mvo: did you upload this? http://launchpadlibrarian.net/19389671/gnome-terminal_2.24.1.1-0ubuntu1_source.changes
[16:54] <pitti> mvo: in case you did, it's missing a bug ref in the changelog
[16:55] <pitti> mvo: ah, it was you
[16:57] <kirkland> pitti: hey, thanks for the raid backport review
[16:57] <kirkland> pitti: i have a couple of responses to your questions
[16:57] <pitti> kirkland: hi; sorry for being picky
[16:57] <mvo> pitti: oops, sorry :/
[16:57] <mvo> pitti: please reject
[16:57] <pitti> well, not sorry for being picky, but for the additional work it creates
[16:57] <kirkland> pitti: you want them here, via synchronous conversation, or back in the bug report
[16:57] <pitti> mvo: already done
[16:57] <kirkland> pitti: no worries at all!
[16:57] <kirkland> pitti: this stuff needed a critical eye
[16:57] <pitti> kirkland: I actually prefer bug report, for keeping a record
[16:58] <kirkland> pitti: this is by far the most complex backport i've ever done for Ubuntu ;-)
[16:58] <pitti> kirkland: but if you have questions which need discussion, IRC is fine
[16:58] <kirkland> pitti: well, two minor ones that i think would help to mention here
[16:58] <kirkland> pitti: i'll record the summary in the bug report
[16:58] <pitti> kirkland: yeah, and we actually don't backport features either, so the entire thing is an exception
[16:59] <kirkland> pitti: " - panic(): Why the chvt 1? Boot messages are usually on VT8, and this changes behaviour of an existing function."
[17:00] <kirkland> pitti: see https://edge.launchpad.net/ubuntu/+source/initramfs-tools, changelog for 0.92bubuntu12
[17:00] <kirkland> pitti: i added an inline comment to the shell in this next iteration
[17:01] <kirkland> pitti: there's a prompt that comes up, "Do you want to boot your degraded RAID?  [y/N]: ", which times out after 15 seconds
[17:02] <pitti> kirkland: chvt> yes, for the raid case; but my concern is that other initramfs-tools scripts might call panic(), too, and this changes behaviour for those
[17:03] <pitti> kirkland: maybe the chvt can be done right where the prompt for mdadm is done, instead of in existing code paths?
[17:03] <kirkland> pitti: that seems fair, i'll need to do some testing
[17:07] <kirkland> pitti: okay, the other one ....
[17:07] <kirkland> pitti: the code removed from panic()
[17:07] <pitti> mvo: I won't accept u-m right now; we already have two SRU uploads stacked in -proposed
[17:08] <pitti> mvo: the second one is verified, but none of the three bugs of the first
[17:08] <kirkland> pitti: this is due to the way we've separated the try_failure_hooks() and the panic() calls in scripts/local
[17:08] <kirkland> pitti: perhaps we're going to need a new panic() function
[17:08] <pitti> kirkland: right, that was an underlying problem, too, it changed the API
[17:08] <kirkland> pitti: just_panic()
[17:09] <kirkland> pitti: whereas the current one tries-failure-hooks-then-panics()
[17:14] <mvo> pitti: I agree with that plan, while the evms issue is annoying I think it's rare enough to justify that
[17:15] <pitti> mvo: I hope that the reporters or QA team can test the others soon; u-m fixes are quite urgent these days, when many people upgrade
[17:15] <mvo> pitti: absolutely, I'll talk to ubuntu-qa and ask if they can prioritize it
[17:15]  * pitti hugs mvo, thanks
[17:16] <pitti> jdstrand: in bug 271252 you tested the actual .debs from -proposed
[17:16] <pitti> ?
[17:16] <mvo> thanks pitti :)
[17:16] <jdstrand> pitti: yes-- which is why I waited til today to comment :)
[17:16]  * mvo needs to find out how to create an evms partition anyway before the verification for the one in the queue can begin
[17:17] <pitti> jdstrand: cool, thanks
[17:17] <tkamppeter> pitti, thanks for uploading the new CUPS package and doing the SRU request.
[17:17] <pitti> tkamppeter: no problem; thanks a lot to you for figuring out the fixes :)
[17:17] <jdstrand> pitti: np
[17:18] <pitti> tkamppeter: the SRU is uploaded and accepted, waiting for feedback now (well, still needs to build, but I prioritized the SRUs)
[17:22] <NCommander> oooh, new shiny icons
[17:25] <liw> mvo, does python-apt support package description translations?
[17:43] <kirkland> pitti: can you give this a once-over?  http://pastebin.ubuntu.com/68445/
[17:44] <kirkland> pitti: see if that addresses your concerns
[17:44] <kirkland> pitti: it's a _royal_ pain to test
[17:44] <kirkland> pitti: so it would help if you can at least spot check it for your concerns, and I'll go FVT it
[17:47] <mvo> liw: yes, out of the box
[17:47] <mvo> liw: so python -c 'import apt; print apt.Cache()["apt"].description' should DTRT
[17:47] <liw> mvo, ok, good, then I just haven't got them configured
[17:48] <mvo> liw: what locale do you use?
[17:48] <liw> mvo, I'm actually using summary and not description, but I assume that doesn't matter
[17:48] <liw> mvo, fi_FI.UTF-8
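The lookup mvo describes is locale-driven: for a locale like fi_FI.UTF-8, the full language tag is tried first, then the bare language code, then the untranslated English text. A toy sketch of that fallback order (not python-apt itself; the translation strings here are made up stand-ins for the archive's Translation-<lang> files):

```python
# Hypothetical translation data standing in for the archive's
# Translation-<lang> files; only the lookup order is the point.
TRANSLATIONS = {
    "apt": {
        "en": "Advanced front-end for dpkg",
        "fi": "Edistynyt dpkg:n edustaohjelma",  # made-up Finnish text
    },
}

def translated_description(package, locale_str):
    """Try the full tag ("fi_FI"), then the language ("fi"), then English."""
    lang = locale_str.split(".")[0]            # "fi_FI.UTF-8" -> "fi_FI"
    for candidate in (lang, lang.split("_")[0]):
        if candidate in TRANSLATIONS[package]:
            return TRANSLATIONS[package][candidate]
    return TRANSLATIONS[package]["en"]

print(translated_description("apt", "fi_FI.UTF-8"))  # Finnish entry
print(translated_description("apt", "de_DE.UTF-8"))  # falls back to English
```

With real python-apt, mvo's one-liner does the same thing end to end; if the output comes back in English, the Translation-fi package lists usually haven't been downloaded for the configured locale.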
[17:52] <mvo_> RainCT: hey! if you are interested in apturl, I did some code refactoring in http://bazaar.launchpad.net/~ubuntu-core-dev/apturl/ubuntu - it should be *much* nicer now
[17:55]  * RainCT pulls
[17:56] <ScottK> mvo_: Is it still going to hard depend on synaptic or will it be front end agnostic?
[17:56] <mvo_> RainCT: well, the code should be much nicer :)
[17:56] <mvo_> ScottK: I put some effort into it today to make it frontend agnostic
[17:56] <ScottK> mvo_: That's great news.  Glad to hear it.
[17:56] <mvo_> ScottK: it should be really easy now to write a qt frontend by just filling in the (few) needed UI functions
[17:57] <mvo_> ScottK: it was on my agenda for some time, I finally got around to doing it :)
[17:58] <mvo_> I think rgreening was interested in working on this
[17:58] <ScottK> mvo_: Yes.  That's my recollection too.  I've just pinged him on #kubuntu-devel.
[17:59] <rgreening> mvo: ping
[17:59] <mvo_> ah, hello rgreening
[17:59] <mvo_> rgreening: I did the refactoring we talked about the other day in apturl
[17:59] <mvo_> rgreening: its in the ubuntu branch (http://bazaar.launchpad.net/~ubuntu-core-dev/apturl/ubuntu/files)
[18:00] <mvo_> let me know if there is anything I can help with
[18:01] <mvo_> AptUrl/UI.py and AptUrl/gtk/GtkUI.py should be good starting points
[18:01] <rgreening> ty mvo_
[18:01] <mvo_> my pleasure
[18:02] <rgreening> mvo_ I added this to the Jaunty specs page so it won't get lost :)
[18:03] <RainCT> mvo_: there's already a get_dist function in AptUrl/Helpers.py :P
[18:03] <RainCT> mvo_: to call lsb_release -c -s
[18:04] <RainCT> mvo: but the changes look great :)
[18:05] <mvo> rgreening: cool
[18:05] <mvo> RainCT: oh, I thought I killed the duplicated one, will do that now (unless you beat me to it :)
[18:06] <RainCT> mvo: oh, seems like you have, my fault :)
[18:06]  * RainCT checked the diffs
[18:11] <RainCT> mvo: I've fixed two typos in my branch, btw
[18:11] <RainCT> (lp:~rainct/apturl/ubuntu)
[18:11] <mvo> excellent, thanks RainCT
[18:13] <gicmo> network .... slow ....
[18:13] <psycardis> Why is a jigdo download not available for the standard 8.10 install disc?
[18:13] <mvo> RainCT: merged
[18:16] <persia> psycardis, No benefit: most of that disc is a single file.
[18:16] <psycardis> Is that not the case with all of the other install discs?
[18:17] <persia> The alternate CDs have lots of individual files, rather than one big one.
[18:17] <persia> (these are files *inside* the ISO, the ISO itself is always one big file)
[18:31] <tedg> Okay, so if I want to SRU something into Hardy, does it have to be SRU into Intrepid first?
[18:36] <persia> tedg, Technically no, but if the bug is open in intrepid, it's standard practice to fix it there first.
[18:37] <persia> Essentially, users who haven't upgraded are expecting a higher degree of stability, and so the fix should first be presented to the most recent users for additional time to collect regression data.
[18:37] <persia> Exceptions are mostly bugs that affect hardy, but don't affect intrepid.
[18:38] <tedg> persia: Okay, so I should SRU it into Intrepid.  Wait a while, and then SRU it into Hardy (assuming it gets approved of course)
[18:41] <persia> tedg, That's the standard practice for an SRU that affects both intrepid and hardy.  If you think it's *really* critical, stick it in both -proposed queues at the same time.
[18:41] <psycardis> Sorry, had to step out. Even the non-alternate install variants of Ubuntu have jigdo links, with the exception of Ubuntu Desktop; the only one lacking it is the Ubuntu Desktop live CD
[18:43] <persia> psycardis, Hrm?  None of the LiveCDs benefit from jigdo.
[18:43]  * persia looks at releases.u.c
[18:43] <tedg> persia: Not really critical, but it affects large installations, so it should probably go into an LTS.  It isn't a crash so much as a usability issue.  bug 203217
[18:44] <persia> tedg, That's something that would probably benefit from some testing in intrepid before it hit hardy, unless the other FUSA changes render the code-path obsolete.
[18:45] <tedg> persia: Fortunately not this one.  It's in code untouched by the other changes.
[18:45] <persia> tedg, Remember to fix it in jaunty first, as otherwise chances of approval are slim :)
[18:48] <persia> psycardis, I've just double-checked.  There's no jigdo for any of the live images (Ubuntu Desktop, Xubuntu Desktop, Kubuntu Desktop, Ubuntu UMPC, Ubuntu MID).  Everything else is an alternate CD.
[18:48] <persia> Where do you see a non-alternative jigdo link?
[18:49] <psycardis> http://releases.ubuntu.com/8.10/ubuntu-8.10-server-i386.jigdo
[18:49] <persia> That's an alternate CD.
[18:50] <psycardis> Yeah, my bad. I just noticed that. It's a shame it's not an option.
[18:51] <psycardis> Although, now that I'm reading about how the live cd is built i see why...
[18:52] <psycardis> Thank you.
[18:54] <persia> Yeah.  Just no point to jigdo for that.
[18:54] <persia> The torrent works fairly well though ...
[18:54] <cjwatson> right, jigdo works by creating a template with "holes" that can be filled in by byte-for-byte copies of .debs downloaded from more local mirrors
[18:55] <cjwatson> live CDs have very few such possible slots, because mostly the .debs are uncompressed and then recompressed as a single giant filesystem
[18:55] <cjwatson> in principle I'm sure it's possible to construct something *like* jigdo that would do the job, but it would really be completely different
[19:09] <infinity> cjwatson: It would be far more CPU intensive, client-side, since it would need to unpack debs after downloading them, to try to shove the raw files in place.
[19:09] <infinity> cjwatson: Doesn't seem practical, really, though it might be a fun exercise for someone bored enough to do it.
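The "template with holes" idea cjwatson describes can be sketched in a few lines (a conceptual toy, not the real jigdo format): byte ranges of the image that are verbatim copies of known files become holes recorded in the template, and the client re-fills them from files it downloads from a nearby mirror.

```python
# Toy illustration of the jigdo "template with holes" idea
# (not the real jigdo format): ranges of the image that match
# known files byte-for-byte are blanked out of the template and
# recorded as holes; everything else stays literal.

def make_template(image, known_files):
    """Return (template, holes) where holes = [(offset, name, length)]."""
    holes = []
    template = bytearray(image)
    for name, data in known_files.items():
        off = image.find(data)
        if off != -1:  # file appears byte-for-byte inside the image
            holes.append((off, name, len(data)))
            template[off:off + len(data)] = b"\0" * len(data)
    return bytes(template), holes

def reassemble(template, holes, known_files):
    """Client side: fetch the named files and fill the holes back in."""
    image = bytearray(template)
    for off, name, length in holes:
        image[off:off + length] = known_files[name]
    return bytes(image)

# A fake "ISO" containing two embedded .deb payloads plus glue bytes.
iso = b"HEADER" + b"deb-one-contents" + b"PAD" + b"deb-two-bytes" + b"END"
debs = {"one.deb": b"deb-one-contents", "two.deb": b"deb-two-bytes"}
tmpl, holes = make_template(iso, debs)
assert reassemble(tmpl, holes, debs) == iso
```

This also shows why a live CD defeats the scheme: its .debs are unpacked and recompressed into one big filesystem image, so almost no byte range of the ISO matches any file on a mirror, and the "template" would be nearly the whole disc.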
[20:35]  * quadrispro is away: Away
[20:36] <ion_> Le sigh
[20:36] <Mithrandir> quadrispro: please turn off public away.
[20:36] <pwnguin> Mithrandir: you know much about openGL ES?
[20:38] <Mithrandir> pwnguin: I don't know much about opengl at all; why?
[20:38] <pwnguin> ah, i thought you were into compiz. nevermind then ;)
[20:39] <quadrispro> sorry Mithrandir
[20:40] <mklebel> will RandR 1.3 be in the alpha 9.04 release?
[20:42] <pwnguin> when is randr 1.3 scheduled to be in a release?
[20:42] <pwnguin> (from xorg)
[20:42] <mklebel> yes pwnguin
[20:43] <pwnguin> mklebel: when will xorg release xrandr 1.3? now?
[20:43] <mklebel> oh it's not even out yet?
[20:44] <pwnguin> i dont know
[20:44] <pwnguin> xorg versioning is insane
[20:44] <mklebel> i guess ill check out #xorg-devel then
[20:45] <mklebel> pwnguin, yes it is.
[22:05] <TheMuso> psusi: When you work out the necessary fixes for those dmraid bugs, let me know. I have another dmraid bug I need to fix, and I'll throw them together in one SRU.
[22:08] <psusi> TheMuso: k
[23:14] <wasabi> Anybody running an ARM port that I don't know about? :)
[23:25] <pwnguin> wasabi: do you know about the arm port?
[23:26] <pwnguin> Hasty
[23:27] <pwnguin> http://mojo.handhelds.org/distributions
[23:28] <persia> The intrepid equivalent will probably be a month or two, based on past performance.
[23:28] <pwnguin> Icy imp is slated for q4 2008
[23:29] <persia> Right.  A month or two :)
[23:29] <pwnguin> just in time for my pandora :)
[23:30] <genii> Is there anywhere to learn about incorporating CUDA into ubuntu apps?
[23:30] <pwnguin> the GPU compiler?
[23:31] <genii> pwnguin: Yes, like with an NVidia Tesla or so
[23:32] <genii> I understand it doesn't use gcc
[23:33] <pwnguin> http://www.nvidia.com/object/cuda_develop.html
[23:34] <pwnguin> but basically, if the compiler's nonfree it'll never be integrated
[23:35] <genii> Hm
[23:35] <genii> pwnguin: Thanks anyhow :)
[23:35] <pwnguin> and I'm not sure how much performance benefit there is for the average program
[23:35] <pwnguin> for video playback, maybe
[23:36] <pwnguin> but I can't see how it'd make your RSS reader any better off :)
[23:37] <pwnguin> oh, and being nvidia hardware specific is a strike against inclusion =/
[23:37] <genii> pwnguin: I want to use it on shading/lighting/physics calculations when rendering in POVRay
[23:37] <pwnguin> you're better off talking with the upstream projects about it, since they know the software they publish best
[23:37] <genii> Well, true there about the NVidia origin :)
[23:38] <pwnguin> it'd suck to install POVray and discover you can't use it because you have intel or amd
[23:38] <pwnguin> or even just an older geforce
[23:39] <genii> pwnguin: The Tesla is purely a dedicated GPU multithread card, no video output. You just use it to crunch numbers then out to whatever video card you want
[23:39] <genii> 240 cores
[23:39] <pwnguin> but you could always publish it personally if you just want to toy around; this channel is more focused on core ubuntu components
[23:40] <genii> pwnguin: Thanks. Now I'll need to remember how to work my ppa ... ;)
[23:41] <pwnguin> oh, i guess the problem there is build tools
[23:42] <pwnguin> you can't use a PPA if you have build deps outside of Ubuntu
[23:42] <genii> Hm
[23:42] <genii> You can't just link static binaries?
[23:42] <pwnguin> with what?
[23:42] <pwnguin> nvcc?
[23:43] <genii> Yes, build the CUDA-specific stuff on the local box with nvcc, then link it in with the regular gcc output
[23:44] <pwnguin> somehow that seems like it's against the terms of service
[23:44] <pwnguin> uploading object code as source package
[23:44] <genii> OK. I won't chance it then
[23:44] <pwnguin> you can always debuild locally
[23:45] <genii> True.
[23:46] <genii> OK I'll leave you be then and sort it out... thanks
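The split genii describes is ordinary separate compilation. A local-build sketch, assuming the CUDA toolkit is installed and that kernel.cu / main.c are the (illustrative) file names for the GPU and host translation units:

```shell
# Compile only the CUDA-specific translation unit with nvcc;
# nvcc emits a regular object file the system toolchain can link.
nvcc -c kernel.cu -o kernel.o

# Compile the rest of the program with gcc as usual.
gcc -c main.c -o main.o

# Link everything with gcc, pulling in the CUDA runtime library.
gcc main.o kernel.o -lcudart -o myapp
```

As pwnguin notes, this only works as a local debuild: shipping the pre-built kernel.o inside a source package to get around the missing nvcc build-dep would mean uploading object code as source.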