[00:04] <gribouille> hi
[00:04] <cody-somerville> Is the survey on the Ubuntu-devel-discuss mailing list legit?
[00:05] <gribouille> do you know if there is a problem with firefox+google toolbar on ubuntu ?
[00:07] <gribouille> hey, are you sleeping ?
[00:09] <cody-somerville> no...
[00:09] <mario_limonciell> gribouille, this isn't really the right place for that kind of discussion.  #ubuntu should be better for it.  To answer your question though, it's very possible that Google hasn't updated their Google Toolbar extension for Firefox3 (which is in Ubuntu 8.04)
[00:09] <cjwatson> also, bug reports are really better off on bugs.launchpad.net/ubuntu - they can be tracked properly that way
[00:09] <gribouille> mario_limonciell, the problem existed also with firefox 2
[00:10] <cjwatson> bug 228636
[00:10] <gribouille> https://bugs.launchpad.net/ubuntu/+source/firefox/+bug/135110
[00:10] <cjwatson> (not sure how useful that is though)
[00:11] <gribouille> my question is : why do you need to patch firefox so heavily ?
[00:11] <cjwatson> gribouille: by the way, waiting three minutes after asking your question and then asking if anyone is asleep is unpleasant behaviour on IRC - it's an asynchronous medium and conversations routinely take hours if one relevant party is away
[00:11] <gribouille> cjwatson, ok, sorry
[00:12] <gribouille> cjwatson, it is 1 A.M., so I can't wait for hours
[00:12] <cjwatson> use the bug tracking system, then
[00:13] <cjwatson> it's 1am where the firefox maintainer lives too
[00:13] <gribouille> cjwatson, the bug was already reported, but it doesn't seem to have been taken into account
[00:13] <cjwatson> well, in some ways it's a google toolbar problem, of course
[00:14] <cjwatson> we simply can't wait for *every* extension to upgrade, I'm afraid
[00:15] <gribouille> cjwatson, I don't think it is a google toolbar problem, since the problem appears only with the patched version of firefox kindly distributed by ubuntu
[00:15] <cjwatson> in that case it would be useful to narrow down which patch is responsible
[00:15] <cjwatson> there's a reason for each patch we carry, and FWIW our patches generally go through patch review by upstream
[00:16] <cjwatson> we don't just patch it for fun
[00:17] <gribouille> cjwatson, what do the patches add in functionality ?
[00:17] <cjwatson> that is an extremely broad question best answered by looking at the changelog
[00:18] <gribouille> where can I find it ?
[00:18] <cjwatson> /usr/share/doc/xulrunner-1.9/changelog.Debian.gz, /usr/share/doc/firefox-3.0/changelog.Debian.gz
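[The changelogs cjwatson points at are gzip-compressed; for readers following along, a quick way to read them, assuming the packages are installed at the paths given above:]

```shell
# Page through the distro changelog of the installed firefox package
# (zless decompresses the .gz on the fly):
zless /usr/share/doc/firefox-3.0/changelog.Debian.gz

# Or dump the most recent entries non-interactively:
zcat /usr/share/doc/xulrunner-1.9/changelog.Debian.gz | head -n 40
```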
[00:19] <emgent> heya people
[00:20] <nysin> Generally, how long after a bug gets marked as fixed should it appear back in the intrepid repositories? In particular, there's one 11 hours old that hasn't appeared at packages.ubuntu.com
[00:20] <gribouille> cjwatson, will I have problems if I use the version of firefox from mozilla ?
[00:20] <Keybuk> nysin: between 3 hours and 6 months
[00:21] <cjwatson> gribouille: no idea, up to you
[00:21] <nysin> heh.
[00:21] <cjwatson> nysin: which bug?
[00:21] <gribouille> nysin, but it can take 10 years too
[00:21] <nysin> https://bugs.launchpad.net/ubuntu/+source/scons/+bug/226783
[00:21] <cjwatson> gribouille: thanks for your helpful remarks, but please don't
[00:21] <nysin> and I'm looking at http://packages.ubuntu.com/intrepid/scons
[00:21] <Keybuk> https://edge.launchpad.net/ubuntu/intrepid/+source/scons/0.98.5-1ubuntu1
[00:21] <TheMuso> I've found packagesw.ubuntu.com to be behind.
[00:21] <nysin> ah
[00:21] <Keybuk> it's listed as built and published
[00:22] <TheMuso> pacages.ubuntu.com even
[00:22] <TheMuso> gah typing
[00:22] <cjwatson> nysin: yeah, that all seems to be in place
[00:22] <nysin> cjwatson: well my link still has it at 0.97
[00:22] <Keybuk> (from any bug, just click the "Overview" tab to see which versions are in which releases of Ubuntu)
[00:22] <mario_limonciell> nysin, you are usually better off looking at https://edge.launchpad.net/ubuntu/intrepid/+source/scons for more information on the version in different places
[00:22] <cjwatson> http://archive.ubuntu.com/ubuntu/pool/main/s/scons/
[00:22] <nysin> but Keybuk's is useful
[00:23] <cjwatson> nysin: ^- more accurate than packages.u.c
[00:23] <mario_limonciell> er well https://edge.launchpad.net/ubuntu/+source/scons
[00:23] <mario_limonciell> instead
[00:23] <cjwatson> launchpad.net -> what ought to be in the archive, archive.ubuntu.com -> what is in the archive, packages.ubuntu.com -> what was in the archive some hours ago
[00:23] <gribouille> cjwatson, do you think it is better to despise the users ?
[00:24] <nysin> cjwatson: ah, I did not know that, thanks.
[00:24] <cjwatson> gribouille: err, you're being completely irrelevant; the bug nysin is talking about is already fixed and in the archive and your comment was unhelpful
[00:24] <Keybuk> gribouille: sleeping for 8 hours a day is despising users?
[00:24] <gribouille> Keybuk, I'm not talking about that
[00:25] <gribouille> cjwatson, he's lucky
[00:25] <Keybuk> gribouille: it is unclear to me what you are talking about
[00:25] <nysin> no I'm not
[00:25] <nysin> I've been watching this bug for some days now
[00:25] <nysin> I subscribed to it to find out when it was fixed
[00:25] <nysin> and then when it was I looked for the fix
[00:25] <cjwatson> gribouille: it is a reality of life anywhere in the software world that not all bugs get fixed, I'm afraid; we are willing to work with people who are willing to help us
[00:25] <nysin> stop assuming
[00:25] <nysin> please
[00:25] <nysin> my timing here is not accidental
[00:26] <gribouille> cjwatson, we don't ask you to fix all the bugs, but at least, please, fix the bugs you introduced yourselves
[00:26] <Keybuk> gribouille: bug# ?
[00:27] <cjwatson> gribouille: it's an interaction between our changes and non-free code that we cannot see; it's unclear, as far as I can see, what's going on
[00:27] <gribouille> Keybuk, https://bugs.launchpad.net/ubuntu/+source/firefox/+bug/135110
[00:27] <cjwatson> gribouille: but, as I said, somebody who's willing to put the work in to figure out which patch is responsible would help greatly
[00:27] <cjwatson> gribouille: shouting at us on IRC achieves nothing
[00:28] <gribouille> cjwatson, can't you understand that users can become angry ?
[00:28] <cjwatson> gribouille: yes, I can. That doesn't mean we have to tolerate people being uncivil to us here.
[00:28] <Keybuk> gribouille: several people seem to be affected by that bug, and are posting useful information
[00:28] <Keybuk> it's only a matter of time before the right piece of information is found by somebody affected by it, and posted to the bug
[00:28] <Keybuk> at which point, a developer may be able to act on it
[00:29] <Keybuk> of course
[00:29] <Keybuk> I'm not sure _why_ that is marked as a security vulnerability
[00:29] <cody-somerville> Keybuk, probably to get developer attention
[00:29] <Keybuk> cody-somerville: ironically, that has the exact opposite effect :)
[00:29] <kees> it's such a tempting checkbox
[00:30] <Keybuk> security-related bugs cannot be seen by most developers
[00:30] <Keybuk> so marking it as security means your bug will be hidden to everyone :p
[00:30] <kees> Keybuk: "private" can't.
[00:30] <Keybuk> kees: oh, did they break that link now?
[00:30] <gribouille> Keybuk, what "piece of information" are you waiting for ? can't you find it yourselves ?
[00:30] <Keybuk> gribouille: no, we can't find bugs ourselves
[00:30] <kees> Keybuk: yeah, it defaults to private+security and prompts you to please make it public if it's not really private
[00:31] <gribouille> Keybuk, but what information do you need ?
[00:31] <cjwatson> gribouille: please be aware that we have an enormous amount to do, and the fact that we cannot give every bug all the attention it deserves does not mean that we are lazy
[00:31] <Keybuk> gribouille: in almost all cases of annoying bugs, the developers are not affected by it
[00:31] <Keybuk> the developers are likely entirely unable to replicate it
[00:31] <kees> 135110 is not a security bug anymore.  ;)
[00:31] <Keybuk> it simply works for us
[00:31] <Keybuk> so we rely on our users, who _are_ affected by the bug and who _can_ replicate it, to provide as much information as they can
[00:31] <Keybuk> and to actively help out to detect the bug
[00:32] <Keybuk> as to what information it needs, an exact description of how to replicate the bug on any system -- ideally with lots of detail about what causes it
[00:32] <Keybuk> a patch would be nice too ;-)
[00:32] <cjwatson> it is now the third time that I've said that, ideally, somebody affected by the bug would figure out which Ubuntu patch is responsible for introducing it
[00:32] <gribouille> Keybuk, strange, because the bug happens constantly on my computer. I can't believe that it is not the case with you
[00:33] <cjwatson> i.e. bisect the set of Ubuntu patches, building with and without various ones, until it is isolated
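[The bisection cjwatson describes could look roughly like the sketch below. This is illustrative only: it assumes the source package keeps its Ubuntu patches in a quilt-style debian/patches/series file, which may not match the firefox packaging of the time, and the line range in the sed command is a made-up example.]

```shell
# Hypothetical patch bisection: disable half of the distro patch series,
# rebuild, retest the bug, and halve the remaining range each round.
apt-get source firefox-3.0
cd firefox-3.0-*/

# Comment out the second half of the patch list (range is illustrative;
# adjust it on each bisection step, "better with, or without?").
sed -i '11,$ s/^/#/' debian/patches/series

debuild -us -uc   # rebuild, install the result, then retest the bug
```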
[00:33] <Keybuk> gribouille: that is why you fail (to get it fixed :p)
[00:33] <cody-somerville> gribouille, I don't think Keybuk uses that software.
[00:34] <gribouille> Keybuk, did you even try to reproduce the bug ?
[00:34] <Keybuk> gribouille: yes
[00:34] <cjwatson> gribouille: we have many, many other bugs
[00:34] <Keybuk> for the sake of interest, I installed GTB while we've been talking
[00:34] <Keybuk> and it works for me
[00:35] <ogra> but you dont use gutsy :)
[00:35] <ogra> or ff 2
[00:35] <Keybuk> oh, I tried hardy ;)
[00:35] <cjwatson> it has also been asserted that it fails on hardy
[00:35] <gribouille> Keybuk, are you saying that no ubuntu developer has ever managed to reproduce the bug ?
[00:35] <cjwatson> that said, gribouille has not been clear
[00:35] <ogra> oh
[00:35] <Keybuk> gribouille: I'm saying that there is no instance of that bug being confirmed by a developer
[00:35] <cjwatson> gribouille: do you believe that every Ubuntu developer attempts to reproduce all tens of thousands of open Ubuntu bugs, separately? :-)
[00:36] <gribouille> cjwatson, surely not
[00:37] <cjwatson> gribouille: we have to work in priority order, and bugs that affect the standard Ubuntu installation without extensions tend to receive higher priority because more users encounter those
[00:37] <gribouille> for your information, the bug affects ff2 on gutsy as well as ff3 on hardy
[00:37] <cjwatson> Firefox 3 on hardy for *some people*
[00:37] <cjwatson> but, it is clear, not all - like all the hardest bugs
[00:37] <Keybuk> gribouille: does it affect vanilla upstream ff3 on hardy?
[00:37] <kirkland> does anyone know anything about libpam-script ?  it doesn't seem to be packaged for Debian or Ubuntu....
[00:37] <gribouille> Keybuk, I haven't tried
[00:38] <Keybuk> gribouille: trying that would be useful, and reporting in the bug
[00:38] <Keybuk> if it does work, then there is a problem in the ubuntu specific packaging
[00:38] <Keybuk> then please try bisecting through the ubuntu patches and changes until you find the one which breaks it
[00:38] <slangasek> kirkland: I know that the idea makes me slightly nauseous ;)
[00:38] <Keybuk> _that_ would be amazingly useful
[00:39] <kirkland> slangasek: :-)  has it been proposed and shot down previously?
[00:39] <gribouille> Keybuk, the people who reported the bug said there was no problem with the version of ff distributed by mozilla
[00:39] <slangasek> kirkland: not that I'm aware of
[00:39] <Keybuk> gribouille: so the next step is to bisect until you find what about the ubuntu version causes it
[00:39] <Keybuk> like they do at the opticians
[00:39] <Keybuk> "better with, or without?"
[00:39] <Keybuk> (change lens)
[00:39] <kirkland> slangasek: okay, thanks.  i'll look into putting the bits i need into pam_ecryptfs.so instead.
[00:40] <gribouille> Keybuk, I'm not an ubuntu developer
[00:40] <Keybuk> "better with, or without?"
[00:40] <Keybuk> gribouille: it's a great way to become one ;)
[00:40] <gribouille> Keybuk, sure, I don't have anything else to do
[00:40] <cody-somerville> gribouille, good, we have a lot to do! :)
[00:40] <Keybuk> thanks, that would be really helpful
[00:40] <Keybuk> put as much detail into what you find into the bug
[00:41] <cody-somerville> gribouille, If you need help, let us know.
[00:41] <cjwatson> gribouille: the more that this sort of thing can be done by folks who are directly affected by and bothered by a bug, the more time Ubuntu developers can spend on fixing bugs that already have all the necessary information in them
[00:41] <cjwatson> we're not short of those
[00:41] <cjwatson> and they're the ones that can *only* be addressed by developers
[00:42] <gribouille> I don't say I don't want to help, but I can't spend hours a day trying to find the cause of certain bugs
[00:42] <Keybuk> gribouille: yet you expect other people to do it for the bugs that affect you?
[00:42] <Keybuk> most ubuntu developers do this in their spare time, remember
[00:43] <gribouille> Keybuk, I don't have the necessary tools to fix bugs
[00:44] <Keybuk> gribouille: we're not asking you to _fix_ it, just provide enough information for a developer to be able to, if not replicate it, understand it
[00:44] <Keybuk> and thus the developer to fix it
[00:44] <cjwatson> gribouille: it's OK not to be able to help - many people fall into that category - but we do ask that people who aren't able to help themselves don't come into our development coordination channel and have a go at us for not fixing something
[00:44] <Keybuk> (and you have, in the Ubuntu system, all the tools that the developers have :p)
[00:45] <nxvl> cjwatson: thanks!
[00:45] <cjwatson> nxvl: hope it's less work for you that way ;-)
[00:46] <nxvl> yep
[00:46] <nxvl> i thought it was just not needed, and since i didn't get any response from upstream i thought i should, thanks a lot!
[00:47] <gribouille> cjwatson, I came here because I had an urgent need to understand something
[00:47] <cjwatson> perhaps we can start again, then
[00:48] <cjwatson> what do you still need to understand that has not been answered so far?
[00:48] <gribouille> cjwatson, now, I know better how you work
[00:51] <gribouille> do you fix the bugs yourselves or do you just forward the bug reports to the authors of the software ?
[00:51] <cjwatson> a bit of both
[00:51] <gribouille> what is the % of bugs you fix yourselves ?
[00:51] <cjwatson> I don't think we have the materials to answer that question even approximately
[00:53] <cjwatson> we fix many problems ourselves, and often forward those fixes where appropriate; in other cases we forward the report; in some of those cases we work with upstream to figure out a fix. It depends on the importance of the software to Ubuntu, how actively it's maintained in Ubuntu, how much else the relevant maintainer has to do, how experienced the relevant maintainer is in that area of the software in question, sometimes ...
[00:54] <cjwatson> ... whether the maintainer judges that it would be too disruptive to attempt a fix in a distribution-specific patch, and probably all kinds of other reasons I've forgotten
[00:54] <cjwatson> and there are, unfortunately, some bugs that don't get the attention they should, because there are simply more reports coming in than we have the manpower to handle at present
[00:54] <cjwatson> (I suspect most popular projects are in this position, and we're always looking for ways to improve it)
[01:07] <gribouille> I can't even run firefox in gdb :-(
[01:14] <cody-somerville> evand, What would cause errors like this?
[01:14] <cody-somerville> https://bugs.edge.launchpad.net/ubuntu/+bug/157260
[01:15] <BenC> pitti: ping
[01:16] <gribouille> is a version of firefox with debugging symbols available ?
[01:17] <cody-somerville> gribouille, Are you gutsy or hardy?
[01:17] <gribouille> hardy
[01:18] <cjwatson> grabbing and installing the matching-version packages from http://ddebs.ubuntu.com/pool/main/x/xulrunner-1.9/ and http://ddebs.ubuntu.com/pool/main/f/firefox-3.0/ should help
[01:18] <cjwatson> (detached symbols)
[01:19] <cody-somerville> gribouille, Add deb http://ddebs.ubuntu.com/ hardy main multiverse restricted universe and install the -dbgsym counterparts.
[01:20] <cjwatson> see https://wiki.ubuntu.com/MozillaTeam/Bugs
[01:20] <cjwatson> detailed directions there in the "Crashes" section
[01:26] <gribouille> sudo apt-get install xulrunner-1.9-dbgsym : The following packages have unmet dependencies:
[01:26] <gribouille>   xulrunner-1.9-dbgsym: Depends: xulrunner-1.9 (= 1.9~b5+nobinonly-0ubuntu3) but 1.9+nobinonly-0ubuntu0.8.04.1 is to be installed
[01:26] <gribouille> E: Broken packages
[01:28] <cjwatson> gribouille: looks like you didn't include the hardy-updates line - see https://wiki.ubuntu.com/MozillaTeam/Bugs
[01:28] <cjwatson> (hardy-updates has xulrunner-1.9-dbgsym 1.9+nobinonly-0ubuntu0.8.04.1
[01:28] <cjwatson> )
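[Putting cjwatson's correction together with cody-somerville's earlier line, the setup would be along these lines. This is a sketch of the commands implied by the conversation, not a tested recipe; the dedicated sources.list.d filename is an assumption, and the component list is taken from the exchange above.]

```shell
# Add the ddebs archive including the hardy-updates line that was
# missing above, then refresh and install the symbol packages.
echo 'deb http://ddebs.ubuntu.com/ hardy main restricted universe multiverse' \
  | sudo tee /etc/apt/sources.list.d/ddebs.list
echo 'deb http://ddebs.ubuntu.com/ hardy-updates main restricted universe multiverse' \
  | sudo tee -a /etc/apt/sources.list.d/ddebs.list
sudo apt-get update
sudo apt-get install firefox-3.0-dbgsym xulrunner-1.9-dbgsym
```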
[01:37] <gribouille> I installed firefox-3.0-dbgsym xulrunner-1.9-dbgsym libgtk2.0-0-dbg libnss3-0d-dbgsym libnspr4-0d-dbg libpango1.0-0-dbg libcairo2-dbg libc6-dbg, but gdb /usr/lib/firefox-3.0/firefox still says there are no debugging symbols
[01:39] <cody-somerville> gribouille, can you pastebin the output?
[01:39] <cody-somerville> !pastebin
[01:41] <gribouille> bash: /usr/lib/debug/usr/lib/firefox-3.0/firefox: cannot execute binary file
[01:41] <gribouille> the file is -rwxr-xr-x 1 root root 1672 2008-06-10 17:05 /usr/lib/debug/usr/lib/firefox-3.0/firefox
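[The "cannot execute binary file" error above comes from running the detached symbol file directly; the files under /usr/lib/debug are data for gdb, not programs. A sketch of the intended workflow, assuming the dbgsym packages are installed as discussed (note the 1672-byte size above suggests that particular path is a wrapper script rather than the actual binary):]

```shell
# Run the real program under gdb; by default gdb searches
# /usr/lib/debug (its debug-file-directory) for detached symbols.
gdb /usr/lib/firefox-3.0/firefox

# If symbols still don't load, point gdb at the directory explicitly:
#   (gdb) set debug-file-directory /usr/lib/debug
#   (gdb) run
```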
[01:59] <Amaranth> tjaalton: with http://www.realistanew.com/random/xen.patch the 173.14.09 nvidia module will build against the 2.6.26-1 kernel
[02:05] <cody-somerville> I think we may have a rather big regression on our hands.
[02:05] <Amaranth> cody-somerville: ?
[02:05] <cody-somerville> https://bugs.edge.launchpad.net/ubuntu/+bug/221657
[02:05] <cody-somerville> I'm starting to see reports the issue also occurs with gnome
[02:06] <cody-somerville> bryce, ^^
[02:08] <bryce> cody-somerville: url for the gnome report(s)?
[02:09] <cody-somerville> https://bugs.edge.launchpad.net/ubuntu/+bug/221657/comments/7
[02:32] <TheMuso> Hrm. Trying to use ubuntu-vm-builder to build a hardy kvm image, yet it seems to fail when attempting to set up grub. Anybody had anything similar? http://www.pastebin.ca/1050720
[02:41] <cody-somerville> Alrighty folks.
[02:42] <cody-somerville> Wish me luck. Hopefully I can solve bug #232364 with this next test.
[03:07] <lamont> pitti: requestsync does silly things... see bug 241144 for an example
[03:08]  * lamont read through the merge list going "I touched that?"
[03:31] <cody-somerville> bryce, ping
[03:31] <cody-somerville> bryce, [pid  7877] ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, 0xbfd17a18) = -1 ENOTTY (Inappropriate ioctl for device)
[03:31] <cody-somerville> [pid  7877] select(21, [20], NULL, [20], NULL
[03:31] <cody-somerville> :)
[03:39] <Keybuk> [pid  7877] select(21, [20], [20], NULL, NULL) = 1 (out [20])
[03:39] <Keybuk> [pid  7877] writev(20, [{"\225\0\2\0\1\0\0\0", 8}], 1) = 8
[03:39] <Keybuk> [pid  7877] select(21, [20], [], NULL, NULL) = 1 (in [20])
[03:39] <Keybuk> [pid  7877] read(20, "\1\1\6\0\0\0\0\0\1\0\0\0H\217\357\277}s\33\10\230\3378"..., 4096) = 32
[03:39] <Keybuk> [pid  7877] read(20, 0x8056f3c, 4096)   = -1 EAGAIN (Resource temporarily unavailable)
[03:40] <Keybuk> [pid  7877] select(21, [20], NULL, [20], NULL
[03:40] <Keybuk> --
[03:40] <Keybuk> so that's where it's hanging
[03:56] <toaster_fun> How do I create a wiki page?
[03:56] <toaster_fun> on http://wiki.ubuntu.com
[03:59] <Keybuk> actually, cody's ioctl is quite interesting after all ;)
[04:04] <ion> how is intrepid running
[04:06] <ion_> Hello, other ion. :-)
[04:10]  * slangasek uses a mass spectrometer to tell the two apart
[04:11] <ion_> I stole an electron from him, thus we both became ions.
[04:12] <wgrant> ion_: You should be ïon, and he should be ıon!
[04:12] <Keybuk> what happens if we split you apart?
[04:16] <wgrant> How often does changelogs.u.c update?
[04:16] <n8k99> Keybuk: depends on what you use to split them
[04:16] <Keybuk> LHC, of course
[04:16] <persia> wgrant: I've heard 4-hourly, but I may be mistaken.  mvo knows for sure.
[04:16] <ion_> LHC ♥
[04:18] <n8k99> heh
[04:18] <wgrant> Is there any reason to not run it right after the publisher?
[04:19] <persia> wgrant: To avoid the "changelogs cannot be found" issue?
[04:20] <wgrant> persia: Indeed.
[04:24] <persia> wgrant: bug #40058
[04:24] <wgrant> persia: Thanks
[05:52] <sbeattie> slangasek: I assume it's known that the 8.04.1 daily builds have been failing to show up since the 20080617 build?
[05:53] <TheMuso> sbeattie: That sounds correct to me.
[05:53] <TheMuso> sbeattie: As in, thats only yesterday for me at least.
[05:54] <sbeattie> TheMuso: there's a 20080617.1, 20080618 and 20080618.1 directory that are all unpopulated.
[05:55] <TheMuso> sbeattie: Oh right haven't looked that deeply.
[05:58] <iwkse> hi all, i got a crash by the installer, ubuntu hardy, http://pastebin.ca/1050814. Anybody can help me understand what's going on in it?
[06:09] <slangasek> sbeattie: yes; unfortunately I didn't know this before the QA meeting this morning, sorry
[06:10] <sbeattie> slangasek: no problem, just wanted to be sure you were aware. I should be pulling isos from there more frequently.
[06:28] <Hobbsee> [15:26] * Hobbsee tries to remember which version of ubuntu first writes to ntfs by default.
[06:29] <Hobbsee> [15:26] <Hobbsee> er, out of the box, not by default.
[06:32] <ScottK> Gutsy I think.
[06:32]  * calc wonders if the glib timestamp file manager bug is ever going to get fixed in hardy :-\
[06:32] <TheMuso> Hobbsee: I'd say gutsy as well.
[06:32]  * calc doesn't like using the command line for organizing files
[06:34] <Hobbsee> oh, it was gutsy?
[06:34] <Hobbsee> nice.
[06:50] <pitti> Good morning
[06:50] <pitti> BenC: pong
[06:52] <pitti> lamont: that's a known limitation; for versions which were never in Debian you have to specify the "base version" (branch point); see manpage
[07:19] <pitti> StevenK: you use postgresql a bit, don't you?
[07:20] <StevenK> pitti: Certainly do
[07:20] <pitti> StevenK: would you mind giving 8.3.3 a try, in hardy-proposed? I need someone else than just me to verify bug 238587
[07:20] <pitti> StevenK: (or, rather, any *-proposed)
[07:22] <\sh> moins
[07:23] <fabbione> morning guys
[07:26] <pitti> hey fabbione
[07:27] <fabbione> yo yo pitti
[07:28] <StevenK> pitti: Bug updated about 8.3
[07:28] <StevenK> Updating. Move it, Launchpad
[07:28] <pitti> StevenK: oh, that was quick; thanks! you were already running -proposed?
[07:29] <StevenK> pitti: Nope. Creating chroots is easy.
[07:30] <pitti> that wasn't a very extensive testing then :)
[07:31] <StevenK> pitti: It runs, and imports a database dump I had lying around, and answers queries. What else do I want a database server to do? :-)
[07:31] <pitti> StevenK: ah, that's fine
[07:32] <dholbach> good morning
[07:40] <dholbach> does the new intrepid kernel work for anybody in KVM?
[07:40] <Hobbsee> StevenK: make you tea, coffee, and dinner on request.  duh.
[07:41] <StevenK> Hobbsee: No, that's how pitti thanks me for testing.
[07:41] <dholbach> also with the old kernel, does anybody else have funny mouse movement? as in mouse moving very very slowly just in the upper left 100*50 pixel area?
[07:41] <Hobbsee> StevenK: it'll be cold by the time it gets there.
[07:42] <lifeless> pitti: btw I haven't tested yet, my conversion took waaay longer than I expected
[07:52] <pitti> lifeless: conversion from 8.2 to 8.3?
[07:52] <lifeless> pitti: kernel version, suspend bug
[07:53] <lifeless> oh, no bzr repo stuff
[07:53] <pitti> oh, that
[07:53] <Amaranth> that reminds me, I don't know if it is the new nvidia driver or the new kernel but one of the two added at least 30 minutes to my battery life
[07:53]  * Amaranth is happy
[07:54] <pwnguin> Amaranth: checked powertop?
[07:55] <Amaranth> pwnguin: same wakeups but I think I spend a little more time in C3
[07:56] <Amaranth> only problem I have with the 2.6.26 kernel is I can no longer use my keyboard to wake up my laptop if I suspend it with the button instead of closing the lid
[07:56] <Amaranth> i have to push the power button
[07:56] <Amaranth> otherwise I'm impressed at how well everything still works
[07:56] <pwnguin> heh
[07:56] <Amaranth> well, except that it seems to support Xen DomU by default now which means I had to make a patch for the nvidia driver
[07:57] <pwnguin> when i updated yesterday, i think i missed the nvidia binary
[07:57]  * pwnguin hopes the dust settles on drm2 soon
[07:57] <Amaranth> luckily the debian guys already had to do the same thing so I just updated their patch to the latest driver
[07:58] <Amaranth> pwnguin: you could try http://ubuntuforums.org/showthread.php?t=833633
[07:59] <pwnguin> i havent looked at it much yet
[08:00] <wgrant> Amaranth: Are you using 2.6.26 from the kernel-team PPA?
[08:00] <pwnguin> 2.6.26 hit intrepid
[08:00] <Amaranth> I thought it hit 4 days ago
[08:00] <Amaranth> i dunno, the mirror I was using seemed to die
[08:00] <pwnguin> Amaranth: besides which, "Design charge: 50Wh. Last full charge: 18.5Wh"
[08:01] <Amaranth> was thinking "where did all the updates go?" then "oh crap, 571 updates"
[08:01] <Amaranth> pwnguin: forget where that info is
[08:01] <pwnguin> click on the battery
[08:02] <Amaranth> ah
[08:02] <Amaranth> both are 97.7 Wh
[08:02] <pwnguin> my laptop's nearing two years of age
[08:02] <Amaranth> and this battery is 2 years old, I'm impressed
[08:02] <Amaranth> my other battery for it is completely shot because I forgot to use it for awhile
[08:04] <pwnguin> i wonder if the cold killed it
[08:04] <pwnguin> i used to leave my laptop on the floor. I'd wake up and it wouldn't boot
[08:05] <pwnguin> or rather, it'd boot but only if you sat at grub for a while or restarted
[08:05] <Amaranth> ooh i never leave mine on on the floor, get crap from the carpet in it and it overheats
[08:06] <pwnguin> it was off
[08:06] <Amaranth> weird
[08:06] <pwnguin> the floor is concrete with carpet over it
[08:06] <pwnguin> it probably got cold enough to screw it
[08:07] <Amaranth> that would have to be incredibly cold
[08:07] <Amaranth> like around 0
[08:07] <pwnguin> then there was the time someone decided toshiba_acpi didnt need to be around
[08:07] <pwnguin> on my scale, 10 is around 0
[08:08] <Amaranth> err, which scale are we talking?
[08:08] <Amaranth> Oh, I guess I should use F then
[08:08] <Amaranth> I meant 32
[08:09] <pwnguin> i wouldn't be surpised it it approached 40F or lower
[08:09] <pwnguin> winter is cold
[08:11] <pwnguin> i know there was a point where the whole house was at 45 when we lost power
[08:12] <pwnguin> that was a fun finals week
[08:12] <pwnguin> anyways, the cheapest way to add battery time for me is to replace the thing =(
[08:28] <seb128> pitti: hello
[08:29] <seb128> pitti: so about this lanman password things, there is several points to consider and I've no idea about the number concerned there
[08:29] <seb128> - how many setups requiring this weak authentication are still around
[08:29] <seb128> - how much of a security issue the option is, knowing that upstream, other distros and other ubuntu versions are still using it
[08:30] <pitti> AFAIUI, Vista already disables it completely, and XP/2000 don't configure it by default; and neither did we in any supported Ubuntu release?
[08:30] <seb128> - and if we prefer to annoy users for their own good
[08:30] <seb128> speaking about server or client side?
[08:30] <seb128> we didn't configure any server side to use that
[08:30] <seb128> but there is people who have NAS using samba 2 for example
[08:31] <pitti> admittedly CIFS passwords are used in much more controlled environments than SSH keys
[08:31] <seb128> and they used to be able to connect to those using gutsy
[08:31] <seb128> and that still work using, let's say fedora 9
[08:32] <seb128> ideally nautilus and gvfs should display clear errors about why they can't connect and what they can do to fix the issue
[08:32] <seb128> but that's not going to happen before 8.04.1
[08:33] <seb128> so either we declare that those are still broken in a confusing way
[08:33] <seb128> or enable the option until getting the dialog
[08:34] <pitti> so for judging the potential exposure, how many SAMBA servers are realistically accessible from the internet?
[08:34] <seb128> ideally none? ;-)
[08:34] <seb128> in really, no real clue
[08:35] <seb128> s/really/reality
[08:35] <pitti> if they pretty much all are confined to private home/corporate networks, I'm not too concerned about weak passwords, but for anything exposed to the web I am
[08:35] <seb128> right
[08:35] <seb128> do you know about any usecase for having a samba share available on the internet?
[08:35] <seb128> my understanding is that those are used to share things on local network usually
[08:36] <pitti> my gf's uni offers access from home, but solely through a VPN
[08:36] <wgrant> seb128: I have a usecase - stupidity. Most ISPs block SMB ports.
[08:36] <pitti> apparently (and rightfully) they don't trust CIFS passwords enough to offer a direct connection :)
[08:36] <seb128> ;-)
[08:37] <pitti> wgrant: that would be a good thing indeed
[08:37] <wgrant> (in .au at least, not sure about elsewhere)
[08:38] <StevenK> Silly weather applet. It's not a clear sky, it's raining!
[08:38] <persia> Is it not possible to warn the user, rather than enabling/disabling it?  Something like "Your password will be transmitted unencrypted: are you sure you want to send it?"
[08:38] <fabbione> wgrant: pretty much everywhere, but i can see scans on smb ports within the same ISP
[08:39] <persia> There's also wireless access to homes/offices to consider, but isn't this just about a client setting?
[08:39] <wgrant> StevenK: It's not raining here, but my clock applet says it is. I think we need to swap.
[08:40] <wgrant> What about enterprise LANs? Are they not a risk?
[08:40] <StevenK> wgrant: Is it clear sky? :-)
[08:40] <pitti> wgrant: admittedly the *actual* risk is misconfiguring the server
[08:40] <wgrant> StevenK: Too dark to tell.
[08:40] <persia> wgrant: Few true enterprise LANs permit LANMAN password shares
[08:40] <pitti> wgrant: this client-side setting will just prevent to connect to those
[08:41] <wgrant> Couldn't an attack on the network convince the client to send a LANMAN password, if it were enabled on the client?
[08:41] <wgrant> *attacker on the
[08:41] <persia> pitti: Hasn't the LANMAN trivial hash vulnerability been public for a decade or so?
[08:41] <seb128> persia: such changes are not possible before 8.04.1 no
[08:41] <persia> seb128: Ah.  Too bad.
[08:42] <seb128> persia: either we keep the current way, ie things not working and not giving any indication of why, or we authorize the lanman authorization again until having the interface changes
[08:42] <seb128> persia: 8.04.1 is frozen so now is not the right time to start writing new code and doing user interface changes
[08:43] <persia> seb128: Well then.  I'm in favour of permitting lanman authorisation.  It's bad, but it's been known bad for so long that it's really a server issue.
[08:43] <persia> (unless wgrant is correct that someone can spoof samba to pass a LANMAN authentication token without user intervention)
[08:44] <seb128> we don't speak about server side change
[08:45] <seb128> and that's a smb.conf option, people who want to use it to do something will not be stopped by the default value
[08:45] <persia> seb128: Right :)  I should have said "a server administrator issue"
[08:45] <wgrant> Wouldn't an attacker simply have to get a victim to attempt to connect to his machine, then request LANMAN? ARP poisoning makes that nice and easy.
[08:45] <pitti> yeah, it's really more like a "fix your broken server now, dammit!" measure
[08:45] <pitti> well, and prevent your password from being intercepted, of course
[08:46] <pitti> wgrant: exactly; that's the MITM scenario
[08:46] <pitti> and the primary reason why clients shouldn't accept LANMAN any more
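The client-side setting under discussion is a smb.conf option; a minimal sketch of what disabling it looks like (the file layout is standard Samba, but the exact defaults per Ubuntu release are an assumption):

```ini
# /etc/samba/smb.conf -- client-side hardening discussed above
[global]
    ; refuse to send the weak LANMAN hash to servers (or MITM attackers)
    ; that request it
    client lanman auth = no
    ; optionally also refuse sending plaintext passwords
    client plaintext auth = no
```

As seb128 notes below, people who deliberately need LANMAN for an old NAS can flip this back themselves; the debate is only about the default.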
[08:46] <seb128> do we have any idea on how many samba 2 NAS, servers and w9x configs are still around?
[08:46] <pitti> so at *some point* we have to stop doing that
[08:46] <wgrant> So it's a question of breaking a few outdated things, or letting everybody's passwords be trivially intercepted.
[08:46] <wgrant> Right?
[08:46] <seb128> well, if we had a dialog explaining why the connection doesn't work I would be fine stopping it now
[08:47] <seb128> wgrant: "everybody"?!
[08:47] <persia> pitti: I agree, but I think interface is the way to solve it: without an interface change, the user just experiences the device as broken.  Considering that many NAS boxes come with LANMAN auth (null-password) by default at the shop...
[08:47] <seb128> wgrant: why would a correctly configured server expose passwords?
[08:47] <pitti> seb128: because you can do a MITM attack and ask the client to use LANMAN
[08:48] <pitti> regardless of what the server is or how it is configured
[08:48] <seb128> pitti: well, that's not everybody, that's broken server configs
[08:48] <pitti> seb128: no, it's not dependent on the server config
[08:48] <seb128> pitti: you still need somebody trying to connect
[08:48] <persia> pitti: It's more a social engineering attack than a MITM attack.  A user would want to connect to a new resource.
[08:48] <pitti> right, with changing DNS, or just telling someone "use this"
[08:48] <seb128> pitti: if I try to connect to a vista box on my local network why would lanman be used?
[08:49] <pitti> persia: true; DNS poisoning is not a very common scenario in commercial environments, I guess
[08:49] <seb128> right, you have to change DNS and get people to "use this"
[08:49] <pitti> persia: (DNS poisoning would make it a true MITM)
[08:49] <pitti> seb128: s/and/or/
[08:50] <seb128> people who try to connect to random servers that strangers tell them to try can always be screwed in some way
[08:50] <pitti> seb128: but given SMB's auto-announcement of servers and shares, this is not that unlikely
[08:50] <seb128> and I don't think DNS changes are easy to do
[08:50] <persia> pitti: Ah.  Right.  wildcards.
[08:50] <pitti> you don't need DNS changes
[08:50] <pitti> they are helpful, but not required at all
[08:51] <pitti> just set up a LANMAN share on your own box, and wait for people to connect and thus give you their password
[08:51] <wgrant> ARP poisoning isn't hard in most circumstances, either.
[08:51] <seb128> well, let's word it differently
[08:51] <seb128> lanman has been authorized for years and years and the world never broke because of it
[08:52] <persia> Well, lots of data was compromised.
[08:52] <seb128> is there any reason it should break between now and the time we get a nautilus interface to display a clear message?
[08:52] <wgrant> seb128: It has another 3 years to break it on Hardy...
[08:52] <wgrant> Or is a better fix coming later for Hardy?
[08:52] <seb128> wgrant: no, it hasn't
[08:52] <seb128> wgrant: as soon as we get the nautilus and gvfs change we will switch the samba default again
[08:52]  * Amaranth cries at banshee using 10% CPU to play a song
[08:52] <seb128> wgrant: it's just too near to 8.04.1 to do the interface changes now
[08:52] <wgrant> Amaranth: Is the UI also horridly slow for you?
[08:53] <wgrant> seb128: Makes sense. I never read the whole bug, I must admit.
[08:53] <Amaranth> wgrant: nope, and rhythmbox uses that much cpu too
[08:53] <Amaranth> and everything else that uses gstreamer to play an mp3 file
[08:53] <seb128> Amaranth: and gst-launch?
[08:53] <seb128> Amaranth: what plugin do you use?
[08:53] <Amaranth> i have no idea
[08:53] <seb128> is the fluendo plugin installed?
[08:53] <Amaranth> probably
[08:53] <seb128> trying uninstalling it
[08:53] <seb128> s/trying/try
[08:54] <Amaranth> it sucks that bad?
[08:54] <seb128> I don't use it but I think some users had cpu usage issues when using it
[08:54] <Amaranth> neat, banshee crashed when I removed the plugin :P
[08:55] <Amaranth> still 10-12%
[08:55] <emgent> morning
[08:55] <Amaranth> iirc I've always had this
[08:56] <pitti> seb128: that's a much better and convincing argument
[08:56] <Amaranth> no one else has gstreamer-based apps using outrageous CPU time playing an mp3?
[08:56] <Amaranth> (or just about any other media)
[08:56] <seb128> 12459 seb128    20   0  182m  46m  22m S    2  2.3   0:02.29 rhythmbox
[08:57] <seb128> 12459 seb128    20   0  183m  48m  22m S    2  2.4   0:03.03 rhythmbox
[08:57] <seb128> no
[08:57] <Amaranth> what cpu?
[08:57] <seb128> pitti: which one?
[08:57] <seb128> Amaranth: the "2" column
[08:57] <Amaranth> i know
[08:57] <Amaranth> i have 2ghz core duo
[08:57] <seb128> ah
[08:58] <pitti> seb128: the "we had allowed it for so long, so we can allow it just a little longer"
[08:58] <seb128> Amaranth: E6750
[08:58] <seb128> duo core intel
[08:58] <Amaranth> shouldn't be that different
[08:59] <seb128> pitti: well, really I have no idea how many basic users have a NAS they can't use due to that, for example, so it's not easy to argue either way
[08:59] <seb128> that would really depend on this number
[09:00] <seb128> if the reply is "almost none" I don't think it's worth enabling; if the reply is "quite some" I think we should authorize it for 8.04.1, work on the nautilus change and get that to hardy-updates quickly after 8.04.1
[09:00] <seb128> pitti: now it's your and slangasek's call ;-)
[09:16] <brt> hi
[09:36] <saispo> hi
[09:36] <saispo> BenC: ping ? :)
[09:42] <saispo> any kernel maintainer in this room ? :)
[09:44] <RAOF> saispo: You're probably after #ubuntu-kernel
[09:44] <saispo> yes, maybe too :)
[09:49] <MacSlow> Where can I see what's current in debian regarding packages and used upstream-versions?
[09:50] <persia> MacSlow: http://qa.ubuntuwire.com/multidistrotools/all.html has them (in comparison to Ubuntu)
[09:50] <persia> drop the all.html if you want a more focused list (there are several from which to choose)
[09:55] <RAOF> My, subversion packaging is complicated.
[09:56] <persia> RAOF: Depends on how one uses tags...
[09:56] <RAOF> No, I mean the packaging for svn is complicated.
[09:57] <persia> heh
[09:57] <RAOF> But it (A) FTBFS and (B) causes svn-buildpackage to be uninstallable, care of the perl 5.10 transition, and that lack is blocking me from working on gnome-do packaging :)
[09:57] <RAOF> And since no one seems to be doing the merge, I'm looking at it.
[09:58]  * persia gives RAOF a bucket, a katana, and a yak :)
[09:59]  * RAOF wonders if a yak has enough blood to placate the dark gods of SVN.
[10:04] <RAOF> Oh, crap.  Debian partially adopted some of our changes, but in a way apparently incompatible with some of our other changes.
[10:05] <seb128> RAOF: I asked doko about the subversion merge yesterday and he was looking at it
[10:06] <seb128> RAOF: you might want to ask him when he will be around
[10:06] <MacSlow> Keybuk, hey there
[10:06] <mvo> didn't he say he will not look at it before monday or something?
[10:06] <mvo> because he is traveling?
[10:06] <MacSlow> Keybuk, for todays meeting I just added another discussion point
[10:06] <RAOF> It's big and evil and complicated.
[10:06] <MacSlow> Keybuk, I hope that's ok... it'll be mainly me asking to get a more recent version of clutter in intrepid
[10:07] <RAOF> I'd be very happy to leave it to doko :)
[10:07] <seb128> maybe somebody should just do a rebuild for the perl transition now and let doko do the merge later
[10:08] <rockyrock> hi guys
[10:08] <rockyrock> I need a help
[10:09] <rockyrock> Does any USB external DialUp modem work on Ubuntu?
[10:10] <james_w> RAOF: bzr-svn 0.4.10 + bzr-builddeb 0.95 should work as a replacement for svn-buildpackage if you can't get anything working. If you want more details then just ask.
[10:10] <RAOF> subversion currently fails to build, at least it did for me; we can't just rebuild it.
[10:10] <RAOF> james_w: Thanks.  I actually have a fallback; my not-intrepid install.  That'll work fine. :)
[10:18] <pwnguin> Amaranth: ok so updating didn't fix it. why's it called a xen patch though?
[10:18] <Amaranth> pwnguin: because the problem is that the kernel includes xen
[10:18] <Amaranth> the patch makes the driver ignore this
[10:18] <pwnguin> i thought KVM was the new hotness
[10:18] <Amaranth> xen is more 'enterprise', i guess
[10:19] <pwnguin> boy im sure glad people don't read press releases
[10:28] <cjwatson> dholbach: the intrepid kernel fails for me in kvm too; I brought it up on #ubuntu-kernel a few days ago
[10:28] <cjwatson> dholbach: I've been falling back to qemu
[10:29] <dholbach> cjwatson: thanks for letting me know - does X and the mouse pointer work OK for you (with the old kernel)?
[10:32] <cjwatson> dholbach: that's been OK for me
[10:32] <dholbach> ok, thanks
[10:32] <dholbach> I'll ask soren about it once he's back again :)
[10:33] <lifeless> RAOF: subversion FTBFS? how elegant
[10:39] <pitti> tkamppeter: are you aware of your three assigned merges on http://merges.ubuntu.com/main.html?
[10:40] <pitti> tkamppeter: please let me know if some of them can be just synced (which I suppose, since you tend to just apply fixes upstream)
[10:55] <cjwatson> Riddell: should I disable kubuntu-kde4 CD builds?
[10:55] <cjwatson> now that you've merged it
[11:24] <Riddell> cjwatson: yes
[11:36] <cjwatson> Riddell: done
[11:53] <asac> siretart: do you have iwlXXXX ?
[11:54] <james_w> Where's it documented that you can put stuff at the end of debian/changelog, I don't see any mention in policy.
[11:55] <pitti> james_w: you shouldn't really; what do you want to do?
[11:55] <james_w> e.g. "Local variables:\nmode: debian-changelog\nEnd:"
[11:55] <james_w> pitti: I don't want to, and I wish other people wouldn't :-)
[11:56] <pitti> right
[11:56] <pitti> people who add that should fix their editor
[11:56] <james_w> I'm trying to parse them, and these lines don't fit the specified format, but are apparently allowed.
[11:56] <pitti> james_w: they are not really allowed, but do exist nevertheless
[11:56] <pitti> james_w: btw, do you use dpkg-parsechangelog, or did you implement it yourself?
[11:56] <james_w> is there any mention of a standard format anywhere that you know of?
[11:56] <Keybuk> james_w: you need to always handle unparseable junk at the end of changelog
[11:57] <Keybuk> since the changelog format itself has changed over time
[11:57] <pitti> james_w: but for very old packages you'll also find the "old-style" changelogs, which don't match the current syntax either
[11:57] <Keybuk> so you may hit entries that don't parse, but were valid according to policy at the time
[11:58] <james_w> yup, I'm just wary of allowing "too much" at the moment, as not getting information about a version could do bad things to my algorithms.
[11:58] <emgent> \sh: is leonov available ?
[11:58] <james_w> I didn't know about --all to dpkg-parsechangelog, so that may be a safer way to go.
[11:59] <james_w> however, the output of that isn't ideal.
[11:59] <pitti> james_w: indeed I was pretty surprised that you "changed" to dpkg-source recently; in general I think it's safer to use the standard dpkg-deb, dpkg-source, and other dpkg tools instead of reimplementing parsers
[11:59] <Mirv> cjwatson: I've tried to reach Jeroen with no luck wrt bug 144741 - the translations for 8.04.1 would be there in Rosetta for most languages, but they've not been fetched.
[11:59] <james_w> pitti: yeah, I just didn't think you could get the two parts out separately until recently.
[12:00] <pitti> james_w: well, it transforms it into reasonable RFC822; Python should have a parser for that in the MIMETools?
[12:01] <james_w> pitti: yup, but I need the list of versions, which still requires parsing text out of one of the fields.
[12:02] <pitti> james_w: oh, I see what you mean
[12:02] <james_w> I could have a loop with --count and --offset it appears.
[12:02] <james_w> I wonder what happens when I hit the end.
[12:02] <james_w> empty output, no error, apparently.
[12:03] <pitti> right
[12:03] <pitti> so you'd loop until you get an empty output
[12:03] <pitti> that's quite expensive, of course
[12:03] <pitti> (subprocess.call()'ing a Perl program dozens of times)
[12:04] <pitti> if that was perl, you could use the library (libparse-debianchangelog-perl)
[12:04] <pitti> james_w: do you just need the versions? or anything else, too?
[12:05] <cjwatson> Mirv: please talk with evand
[12:05] <cjwatson> he did the last translation update in ubiquity
[12:05] <cjwatson> Mirv: if the strings look OK in Rosetta, it's not Jeroen's problem
[12:05] <pitti> james_w: in the former case, a simple re.finditer() might be much easier and more efficient
[12:06] <james_w> pitti: I need the whole list of versions, but everything for the most recent version.
[12:06] <james_w> pitti: I'm the author of changelog.py in python-debian, which is what I use, so it would be good to make that more robust.
[12:06] <pitti> james_w: I don't think you need to worry too much about old-style changelogs; they were abolished many, many years before Ubuntu even existed and created forks
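pitti's re.finditer() suggestion for pulling the version list out of a changelog might look like the minimal sketch below. The regex and function name are invented for illustration; they are not the actual code from python-debian's changelog.py.

```python
import re

# Match a changelog stanza header such as:
#   hello (2.4-1ubuntu1) intrepid; urgency=low
# and capture the version between the parentheses.
HEADER = re.compile(r'^\S+ \(([^)]+)\) [^;]+; urgency=\S+', re.MULTILINE)

def changelog_versions(text):
    """Return all versions found in stanza headers, newest first."""
    return [m.group(1) for m in HEADER.finditer(text)]

sample = """\
hello (2.4-1ubuntu1) intrepid; urgency=low

  * Merge from Debian.

 -- Someone <someone@example.com>  Mon, 16 Jun 2008 12:00:00 +0100

hello (2.4-1) unstable; urgency=low

  * New upstream release.

 -- Someone Else <else@example.com>  Sun, 01 Jun 2008 12:00:00 +0100
"""

print(changelog_versions(sample))  # → ['2.4-1ubuntu1', '2.4-1']
```

Because it only looks at stanza headers, unparseable trailing junk (editor "Local variables" blocks, old-style stanzas) is simply skipped rather than raising an error, which matches Keybuk's advice above.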
[12:07] <pitti> oh, nice! wasn't aware of that package
[12:08] <siretart> asac: yes, I use iwl3945 on my laptop
[12:08] <MacSlow> what make target also removes autom4te.cache again?
[12:09] <asac> siretart: not sure if it's 4965 only, but with nm 0.7 (aka no hacks) the essid frequently doesn't get set and stays empty
[12:09] <asac> not sure if its never set or just bounces back quickly yet
[12:09] <asac> NM sends ssid to wpasupplicant though
[12:10] <siretart> I have this phenomenon as well with the NM from hardy from time to time
[12:10] <siretart> sometimes it is sufficient to reload the iwl module, yesterday I had to remove the complete configuration entry from NM.
[12:11] <siretart> NM is still a very black box for me :/
[12:12] <\sh> emgent: as preview source sure
[12:13] <emgent> \sh: launchpad branch available ?
[12:13] <\sh> emgent: launchpad.net/leonov :)
[12:13] <emgent> ok thanks :)
[12:13] <\sh> emgent: on http://leonov.tv/ there are all infos as well including the links to lp
[12:14] <emgent> cool :)
[12:14] <asac> siretart: i always assumed that this issue was due to the missing scan_capa patch ... but i have the latest modules from rtg and still see this
[12:15] <asac> siretart: looks like a race ... what kind of race could that be?
[12:15] <siretart> asac: a race in the firmware?
[12:16] <Mirv> cjwatson: oh ok, jeroen was only about getting stuff to rosetta
[12:16] <Mirv> evand: could you do debian-installer translation update from Rosetta, to have 8.04.1 partitioning screen fixed? the strings are there and translated to 20+ languages.
[12:20] <cjwatson> Mirv: please note that it was updated at the deadline, though
[12:20] <cjwatson> r2686 in ubiquity trunk
[12:20] <cjwatson> Mirv: translators may simply have missed the boat
[12:21] <cjwatson> unless it was mis-updated somehow
[12:31] <asac> siretart: please test the disable_hw_scan=1 option for your module
[12:34] <asac> siretart: for iwl4965 it appears to fix the problem for me \o/
[12:40] <siretart> asac: what does 'disable_hw_scan' do? (yes, I'll tell my girlfriend to do that)
[12:44] <asac> siretart: it does what it suggests :) ... it scans from software
[12:44]  * ogra wildly guesses it also diables the hardware scan at the same time :)
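To make asac's workaround persistent across reboots, one would typically drop a module option into modprobe's configuration; a sketch (the filename is an arbitrary choice, and whether 3945 needs the same option is unconfirmed in this conversation):

```
# /etc/modprobe.d/iwl4965.conf -- force software scanning, per the
# discussion above; applies the next time the module is loaded
options iwl4965 disable_hw_scan=1
```

For a one-off test without editing files: `sudo modprobe -r iwl4965 && sudo modprobe iwl4965 disable_hw_scan=1`.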
[12:46] <ogra> who is on sync duty today ?
[12:47]  * ogra needs cbflib which is not in ubuntu yet to build rasmol
[12:47] <siretart> asac: with 'software' meaning processed by the linux kernel on the host cpu instead of the broadcom cpu by the firmware?
[12:48] <Mirv> cjwatson: I feel it was mis-updated somehow. I did notice the changelog entry that translations had been updated, but they weren't actually in the UI.
[12:52] <Mirv> evand: ^
[12:52] <Mirv> evand: at least as an example fi.po wasn't updated even though the translation was updated two weeks before the bzr commit on 2008-06-02
[12:53] <asac> siretart: good question. the driver code suggests that it doesn't do anything except refuse to scan if there is already a HW_SCAN running :(
[12:53] <asac> but i guess i miss the right code
[12:54] <asac> siretart: maybe its a bogus option?
[12:56] <asac> siretart: http://paste.ubuntu.com/21369/
[12:56] <asac> thats the only place where its actually used
[12:56] <asac> doesn't look like it's really something that contributes to anything except strangeness ;)
[12:57] <cjwatson> evand: I'm wondering if you got caught out by the fact that Rosetta exports started to unpack into multiple directories?
[12:57] <cjwatson> ogra: hmm, not showing up in new-source
[12:57] <ogra> strange
[12:58] <cjwatson> ogra: https://launchpad.net/ubuntu/+source/cbflib
[12:58] <ogra> http://packages.debian.org/source/sid/cbflib has it though
[12:58] <cjwatson> ogra: it's in Ubuntu
[12:58] <ogra> oh, right, i didnt look on LP
[12:58] <cjwatson> built and everything
[12:58] <cjwatson>    libcbf0 |  0.7.9.1-1 | intrepid/universe | amd64, i386
[12:59] <ogra> oh sigh, i knew that the "cmpc has to stay on hardy" thing would bite me
[12:59] <ogra> it's indeed not in hardy so i can search for ages on my laptop :P
[12:59] <ogra> (and my chroot only has main enabled on purpose)
[13:00] <ogra> cjwatson, gracias ... sorry for beng stupid today
[13:00] <ogra> *being even
[13:03] <pitti> mvo: hm, I just tried packagekit, this still seems to be pretty buggy :/
[13:04] <pitti> mvo: Spawn of helper '/usr/share/PackageKit/helpers/apt/resolve.py ~installed pmount' failed
[13:04] <pitti> mvo: and similar error messages for update, etc.
[13:04] <mvo> pitti: hrm, ok. I check that out
[13:04] <pitti> too bad, I thought it would already work nowadays
[13:04] <pitti> mvo: oh, that didn't mean "fix it now", just for discussing, and whether you tried it out as well already
[13:05] <pitti> hah, /usr/share/PackageKit/helpers/apt/resolve.py doesn't exist :)
[13:09] <iwkse> hi almost at the end of the installation (ubuntu hurdy), ubiquity fails. http://pastebin.ca/1050814. Any hints?
[13:09] <pitti> ah, package_id != package name; pkcon  install 'pmount;0.9.16-4;amd64;available' at least doesn't throw an error (but does not do anything either)
[13:09] <Hobbsee> evand: ^
[13:10] <Tm_T> Hobbsee: hi, new hat? looks nice ;)
[13:11] <cjwatson> iwkse: need to get /var/log/syslog
[13:11] <Hobbsee> Tm_T: thanks ;)
[13:12] <iwkse> cjwatson, let me check
[13:13] <iwkse> cjwatson, is there anything else i can check? there seems to be nothing interesting in /var/log/syslog
[13:14] <iwkse> i'll paste it though
[13:15] <cjwatson> iwkse: interestingness is a matter of opinion ;-)
[13:15] <iwkse> cjwatson, of course:) i'll paste it so we can see if our opinion matches
[13:16] <iwkse> cjwatson, http://pastebin.ca/1051000
[13:17] <cjwatson> iwkse: err, that doesn't look like the installer has been run in that session
[13:18] <iwkse> cjwatson, oh, probably it's been overwritten
[13:18] <cjwatson> iwkse: by what?
[13:18] <mvo> pitti: the o.2.x branch is better, but not uploaded yet
[13:18] <iwkse> cjwatson, no idea, i got the log from the installer in a vm
[13:18] <pitti> mvo: right, I am just downloading 0.2.2 and want to install it locally
[13:18] <iwkse> cjwatson, i'm sure it's this one
[13:19] <cjwatson> iwkse: well, the evidence has been erased. Can you reproduce the error and fetch /var/log/syslog immediately after the installer fails?
[13:19] <cjwatson> iwkse: oh, though check /var/log/syslog.0 in case it got rotated
[13:20] <iwkse> cjwatson, let me see
[13:21] <iwkse> cjwatson, bingo, it's in syslog.0
[13:24] <cjwatson> iwkse: ...?
[13:24] <iwkse> cjwatson, moment i'm pasting it
[13:24] <iwkse> cjwatson, it seems it's a fault of mine, some problems in gconf
[13:26] <iwkse> cjwatson, http://pastebin.ca/1051008
[13:26] <iwkse> i think i did some rubbish in gconf
[13:27] <cjwatson> iwkse: yeah, looks like it, though I'm going to change user-setup upstream to not worry if update-gconf-defaults fails
[13:27] <cjwatson> but you'll want to fix that anyway
[13:27] <iwkse> cjwatson, yes, it's good to know I have such bad things in gconf
[13:28] <iwkse> cjwatson, i cry for the old good text configuration files :-(
[13:28] <iwkse> thanks for the help cjwatson, you helped me
[13:30] <Amaranth> stupid "GPE storm"
[13:32] <pitti> mvo: ok, will take me a little longer; I need to download and install intrepid and try it there (0.2.x doesn't work on hardy, too old PK and such)
[13:32] <cjwatson> installing intrepid may still be a little optimistic :)
[13:33] <cjwatson> though you can upgrade, I guess
[13:33] <pitti> cjwatson: I thought the current alternates should be (mostly) installable? so they aren't?
[13:33] <cjwatson> oh, blast, I stopped the publisher this morning and then totally forgot what I was doing
[13:34] <pitti> cjwatson: d-i isn't built against 2.6.26 or something?
[13:34] <cjwatson> it is, I just haven't tested past CD detection
[13:34]  * pitti apologizes for sounding stupid, I'm not really up to date wrt. intrepid
[13:35] <pitti> cjwatson: well, then I'll give you some further results soon :)
[13:35] <cjwatson> the boot loader is also a bit hosed
[13:35] <cjwatson> it should be usable, it'll just look odd
[13:35] <cjwatson> yesterday, I got as far as CD detection and then had to rebuild d-i due to some kernel udeb changes
[13:35] <cjwatson> haven't tested the new version yet
[13:40] <cjwatson> publisher re-enabled; I fixed up hardy-proposed/restricted/debian-installer/ (I hope), and copied the installer images from hardy-proposed to hardy-updates to match the debian-installer source package
[13:57] <pitti> seb128: ok, I think you convinced me about smb LANMAN; so we should reenable it for .1, create the better UI, and then disable it in all supported releases and add the new UI in a security update; ok for you?
[13:57] <seb128> pitti: exactly what I suggest yes ;-)
[13:57] <seb128> slangasek: ^
[13:59] <pitti> slangasek: so, Seb and I discussed this at length this morning (some other folks participated, too); they finally convinced me, mainly because exposure of smb shares is usually limited to reasonably trusted environments, and not to the internet
[14:02] <evand> Mirv, cjwatson: OK, I'll look into it.  Is it too late for updating translations in 8.04.1?
[14:03] <cjwatson> it's difficult since ubiquity is currently waiting in -proposed for verification
[14:10] <mvo> hey doko! do you mind if I take some of your merges?
[14:11] <doko> mvo: not at all. was looking at subversion and wondering why the debian maintainers thinks its better to build things twice ...
[14:12] <seb128> doko: what was this libdb thing you told me about yesterday?
[14:14] <cjwatson> I thought I'd fixed up libdb the other day
[14:14] <doko> ohh, nevermind, was confused by some b-d's requiring both 4.6 and 4.7 -dev packages
[14:14] <cjwatson> (I fixed a bunch of uninstallable stuff over the weekend, db was deeply embroiled in that)
[14:17] <pitti> cjwatson: dhcp in the  installer didn't work for me, and the gfxboot menu is a bit strange, but at least it's installing
[14:20] <cjwatson> pitti: dhcp: bug 241295
[14:20] <cjwatson> pitti: I think I have gfxboot nearly fixed now
[14:20] <gribouille> still trying to find what's wrong with firefox+google toolbar, but it isn't easy without a working version of firefox with debugging symbols
[14:20] <pitti> cjwatson: I'm crossing fingers that the kernel will be alright
[14:21] <pitti> cjwatson: want a bug for "asks question about scanning another CD"?
[14:22] <cjwatson> yes please
[14:22] <cjwatson> I hadn't got that far yet
[14:22] <gribouille> bash: /usr/lib/debug/usr/lib/firefox-3.0/firefox: cannot execute binary file. is it normal ?
[14:23] <Chipzz> gribouille: you're not supposed to execute that file
[14:23] <Chipzz> just start firefox the way you normally would
[14:25] <pitti> cjwatson: bug 241304
[14:26] <pitti> cjwatson: I can attach the installer log after installation is finished
[14:26] <cjwatson> probably not needed
[14:28] <pitti> cjwatson: ah, dang, failed; xserver-xorg-video-ati is not installable
[14:28] <lifeless> so with WINE reaching 1.0, can it run windows viruses yet ?
[14:28] <pitti> ^ bryce, tjaalton: -ati depends on -r128 and -mach64, which are not installable
[14:31] <gribouille> 1759 stack frames for firefox ! what's the term for such software ? bloatware ?
[14:37] <cjwatson> pitti: should be fixed soon; I promoted xserver-xorg-video-{r128,mach64} as obvious split-outs
[14:39] <cjwatson> pitti: need to wait for nautilus to build though
[14:40] <pitti> cjwatson: thanks for fixing; well, ATM I just went on in manual mode and now use apt-get install ubuntu-desktop
[14:40] <bigon> infinity, are you around?
[14:51] <bXi> hello
[14:51] <bXi> whats the best place to "whine" about outdated libraries?
[14:52] <dholbach> bXi: file a bug report on the package and tag it as 'upgrade'
[14:53] <bXi> dholbach: i do this on launchpad right?
[14:53] <dholbach> right
[14:53] <bXi> ok
[14:53]  * bXi looks for the pacakge
[14:53] <dholbach> perfect
[14:53] <bXi> -typo
[14:54] <Keybuk> pitti: you wanted to know about prefetch
[14:55] <pitti> Keybuk: your opinion about it seems to have deteriorated dramatically since UDS? :)
[14:55] <pitti> james_w: so you have a s3kr1t branch on top of PackageKit 0.2.2 which makes apt backend truly work? (I haven't tried 0.2.2 upstream yet, still installing hardy)
[14:55] <Keybuk> let me grab a drink, and I'll pull up an arm chair while you stoke the fire ... :p
[14:56] <pitti> Keybuk: getting out the poker chips then
[14:57] <BenC> pitti: hey, I thought there was something in apport to detect if /var/crash/vmcore existed, to signal that there was a kernel crash
[14:57] <pitti> hi
[14:57] <james_w> pitti: I think my patch is in 0.2.2, but I'm not sure.
[14:57] <pitti> BenC: not ATM; the last time we spoke about it we designed the interface for /usr/share/apport/kernel_hook, which is there for ages
[14:58] <BenC> pitti: right, but that just helps to gather info right?
[14:58] <pitti> BenC: right, it currently doesn't have any support for cores
[14:58] <BenC> pitti: what would it take for apport to trigger on /var/crash/vmcore existing, and take action to report a bug on the kernel?
[14:59] <BenC> and ask that the core be included in the bug report
[14:59] <pitti> BenC: we only have the existence of the file as a trigger? i. e. not something like core_pattern for userspace crashes which could call a binary on an oops?
[14:59] <pitti> BenC: or should we just check it on boot?
[14:59] <BenC> pitti: there's actually two triggers vmcore, and vmcore.log (the former may not exist if makedumpfile failed for some reason, but there was still a crash)
[14:59] <gribouille> where can I find firefox source code online ?
[14:59] <pitti> (that would be easy, we can do it in the init script)
[14:59] <BenC> pitti: check on boot
[15:00] <BenC> pitti: vmcore.log will always exist after a kernel crash, vmcore will most times
[15:00] <joaopinto> gribouille, apt-get source firefox will get you the source code for the ubuntu package
[15:00] <pitti> BenC: ah, that's easy then, no inotify watching required
[15:00] <pitti> BenC: could you please open a bug against apport and put in all the paths and the semantics?
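The boot-time check BenC and pitti agree on could be sketched like this. This is an illustrative outline only, not apport's actual code; the function name is invented, and the paths follow the semantics stated above (vmcore.log always exists after a kernel crash, vmcore usually does but may be missing if makedumpfile failed):

```python
import os

CRASH_DIR = '/var/crash'

def check_kernel_crash(crash_dir=CRASH_DIR):
    """Return the evidence files of a previous kernel crash, or None.

    Called once at boot (e.g. from an init script); the caller would
    then hand the files to the kernel hook to file a bug report.
    """
    log = os.path.join(crash_dir, 'vmcore.log')
    core = os.path.join(crash_dir, 'vmcore')
    if not os.path.exists(log):
        return None              # no crash recorded
    found = [log]
    if os.path.exists(core):
        found.append(core)       # full core dump is there too
    return found
```

Checking once at boot is enough here, which is why pitti notes that no inotify watching is required.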
[15:00] <lool> cjwatson: d-i ftbfsed on lpia http://launchpadlibrarian.net/15175531/buildlog_ubuntu-hardy-lpia.debian-installer_20070308ubuntu40.3_FAILEDTOBUILD.txt.gz
[15:00] <pitti> easier to track for me
[15:00] <cjwatson> lool: d-i isn't used on lpia
[15:00] <lool> cjwatson: (I'm looking at hardy-updates -> ume archive promotions)
[15:01] <pitti> BenC: I should be able to do that relatively quickly; still busy with hardy.1, but that will be done soon
[15:01] <cjwatson> lool: so I've never cared about build failures there
[15:02] <pitti> BenC: please also have a look at /usr/share/apport/kernel_hook for the other information it collects; anything that's obsolete or anythign that should be added?
[15:02] <BenC> pitti: ok, thanks
[15:02] <BenC> pitti: I checked it, looked good
[15:02] <pitti> BenC: (aside from the obvious source package renaming, etc.)
[15:02] <lool> cjwatson: Ok; I'd like to fix it still, we might use d-i bits in intrepid at least; I'm curious about the dpkg barks and I would guess something is weird if gcc is missing
[15:02] <ogra> cjwatson, well, we should start considering using d-i on lpia imho
[15:03] <lool> Also, it shows up in my promotion report, but I could avoid it
[15:03] <ogra> proper preseeding and the ability for using oem mode is really a good idea i think
[15:04] <lool> I guess I'll grab it for intrepid and see if it fails to build then see how to port it
[15:04] <lool> I misread the hardy ftbfs at first, I thought it was just missing bdep
[15:04] <cjwatson> lool: happy to do so for intrepid if you so wish
[15:05] <cjwatson> lool: it's almost certainly just that there's no configuration for lpia; the d-i build system is extremely architecture-specific
[15:05] <ogra> d-i is supressed on lpia in the linux-image source
[15:05] <cjwatson> the gcc: not found message is in the base system, I think
[15:05] <cjwatson> and happens for everything, so don't worry about it
[15:05] <ogra> there shouldn't be any udeb build attempts even
[15:06] <cjwatson> yes, kernel udebs would need to be laid out too
[15:07] <ogra> which indeed also means lpia will start to need an abicheck (which in turn makes life hard on PPAs)
[15:07] <lool> cjwatson: Ok
[15:08] <cjwatson> ogra: udebs don't significantly increase the need for an abi check
[15:08] <pitti> BenC: oh, please assign that bug to me (apport kernel crash)
[15:08] <cjwatson> I mean, they do a bit, but far less so than e.g. lrm
[15:08] <ogra> oh, i thought they were one of the reasons for having it
[15:08] <ogra> ah
[15:08] <cjwatson> quite a minor reason, really
[15:09] <ogra> or more iportant on lpia lum :)
[15:09] <ogra> *important
[15:12] <gribouille> I still don't understand why it is better to use the version of firefox that is distributed by ubuntu instead of the original one
[15:16] <cgregan> liw: ping
[15:17] <Keybuk> pitti: right, so, prefetch
[15:17] <Keybuk> I think that the idea of doing this in the kernel is the right one
[15:17] <Keybuk> that in the kernel, we intercept requests for disk blocks and record them is absolutely correct
[15:17] <Keybuk> and that when a binary is exec()d, we automatically load the blocks it requires
[15:18] <Keybuk> much better than the readahead approach of doing the watch by inotify, and the fetching by a userspace tool
[15:19] <pitti> right, so that didn't change so far
[15:20] <ogra> will prefetch work with stacked filesystems (unlike readahead with unionfs) ?
[15:20] <afflux> pitti: you are the main apport developer, right? apport strips "k" from function calls / similar, because my username is "k".
[15:20] <pitti> ogra: it's all about blocks on block devices, below the VFS layer AFAIUI
[15:20] <Keybuk> so one would assume that the way prefetch works is this:
[15:20] <Keybuk> - when a process requests a block, it's recorded
[15:20] <pitti> afflux: hm, I thought I fixed that in hardy-updates
[15:20] <Keybuk> - and tracked back to its pid, and thus the executable on disk
[15:20] <afflux> I'm on intrepid
[15:21] <Keybuk> - and a table of executable -> block is updated (and visible in /proc or similar)
[15:21] <afflux> pitti: see the function calls in bug 241320
[15:21] <Keybuk> when an executable is loaded, the table is read to know which blocks to fetch
[15:21] <pitti> afflux: hm, can you please file a bug with an example crash file?
[15:21] <afflux> woops, made public now.
[15:22] <pitti> Keybuk: right, that would be straightforward; maybe an inode (renames/moves), but that's quite similar
[15:22] <Keybuk> ok
[15:22] <Keybuk> we might want to do grouping of apps into stages
[15:22] <Keybuk> so we prefetch boot at once
[15:22] <Keybuk> we might do that by writing "boot" to a stage file, and the stage -> app association is recorded
[15:23] <Keybuk> when we change a stage, we prefetch all of the blocks for all of the apps
[15:23] <Keybuk> if we were clever, we might just do that with cgroups - use a cgroup property to specify the stage for an app, and do it that way
[15:23] <BenC> pitti: ok
[15:23] <Keybuk> but that last bit is a bit of a "obviously it doesn't work like that now"
[15:23] <pitti> Keybuk: so that'd be even more efficient than fetching per-binary?
[15:24] <pitti> Keybuk: ah, I see what you mean, I think
[15:24] <BenC> pitti: bug #241322 created and assigned to you
[15:24] <pitti> Keybuk: so unlike readahead (which slurps in everything we need for boot), you still get interruptions with prefetch, since it only fetches whenever you actually start a process? that sounds weird
[15:24] <pitti> BenC: thanks
[15:24] <Keybuk> pitti: oh, it's worse than that ;)
[15:25] <Keybuk> I was describing how it _should_ work
[15:25] <Keybuk> now, this is how it works:
[15:25] <Keybuk> when you exec() an app, it remembers the executable name, and starts a timer (30s)
[15:25] <pitti> it would be much better if the blocks were already in the cache even before the app is started
[15:25] <Keybuk> all block accesses within that time are associated to the app
[15:25] <Keybuk> if you start two apps, then all block accesses will be for the second app
[15:26] <pitti> BenC: thanks, looks fine
[15:26] <Keybuk> stage is done by overriding the timer, and just associating all block accesses with the magic "stage"
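The timer behaviour Keybuk describes is easy to model (the function names here are invented, not from the patch): every block access is credited to whichever app was exec()d most recently and is still inside its 30-second window, which is why starting two apps close together mashes them:

```c
#include <assert.h>
#include <string.h>

#define WINDOW 30  /* seconds: the timer Keybuk mentions */

/* Toy model: only the *last* exec()'d executable is remembered. */
static char current_exe[64];
static long current_start = -1000;

void on_exec(const char *exe, long now) {
    strncpy(current_exe, exe, sizeof current_exe - 1);
    current_exe[sizeof current_exe - 1] = '\0';
    current_start = now;
}

/* Which app does a block access at time `now` get attributed to? */
const char *attribute_block(long now) {
    if (now - current_start <= WINDOW)
        return current_exe;  /* all accesses credited to the last exec */
    return "";               /* outside any window: unattributed */
}
```

So if openoffice starts at t=0 and firefox at t=10, openoffice's block reads at t=15 are recorded as belonging to firefox.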
[15:27] <pitti> so that would be the step which brings the actual speedup (the grouping and loading everything in advance)
[15:27] <Keybuk> except it doesn't
[15:28] <Keybuk> since everything just gets mashed
[15:28] <pitti> Keybuk: hm, I always assumed that the kernel would merely do the collection of data; the actual prefetching had to be done in userspace, based on a list of blocks you collected earlier
[15:28] <pitti> Keybuk: well, mashing for a group should be fine, and is even intended, so that you can sort all the blocks you need to be read with peak performance, no?
[15:29] <Keybuk> pitti: it'd work if the grouping wasn't "by time"
[15:29] <Keybuk> consider:
[15:29] <Keybuk> start openoffice, then firefox
[15:29] <Keybuk> all of openoffice's blocks will now be read every time you start firefox
[15:29] <pitti> Keybuk: but only if that 'read for prefetching' is done automatically by exec(), right?
[15:30] <Keybuk> which is desirable
[15:30] <pitti> hm, I'm not sure about that
[15:30] <Keybuk> done right, it makes openoffice start really fast the second and onwards time ;)
[15:30] <Keybuk> consider the udev case on boot
[15:30] <Keybuk> it lasts a long time
[15:30] <pitti> if you merely look at boot speed, I'd think it would be better to have a small program that gets a sorted list of blocks, slurps them all in, and then starts the boot sequence
[15:30] <Keybuk> in fact, it does stuff pretty much through each stage of the boot
[15:30] <Keybuk> so you end up prefetching it three times, for each stage
[15:31] <Keybuk> pitti: that's what readahead does
[15:31] <pitti> Keybuk: exactly
[15:31] <pitti> Keybuk: but this time with autogenerated, auto-updated data
[15:31] <Keybuk> prefetch doesn't really help here
[15:31] <pitti> I just fail to see how "fetch the blocks on exec()" can be much more efficient, since that happens on exec() anyway?
[15:32] <Chipzz> gribouille: you didn't actually read what I told you, did you?
[15:32] <Keybuk> pitti: it's more, fetch blocks inside the kernel
[15:32] <Keybuk> you'd fetch on exec() for programs after boot
[15:32] <pitti> Keybuk: so the problem is that it cannot tell apart which block belongs to which process
[15:32] <gribouille> Chipzz, about what
[15:32] <Keybuk> during boot, you'd fetch on the start of a stage
[15:32] <pitti> bummer
[15:32] <Chipzz> 15:22 < gribouille> bash: /usr/lib/debug/usr/lib/firefox-3.0/firefox: cannot execute binary file. is it normal ?
[15:32] <Chipzz> 15:23 < Chipzz> gribouille: you're not supposed to execute that file
[15:32] <Chipzz> 15:23 < Chipzz> just start firefox the way you normally would
[15:32] <Keybuk> echo boot > /proc/prefetch/stage - causes everything "Boot" to be fetched
[15:32] <Keybuk> that's fine
[15:32] <gribouille> Chipzz, of course I did
[15:32] <Keybuk> it's how it decides what was "boot" and what was "gui" that's the problem
[15:33] <Chipzz> then that should work
[15:33] <gribouille> Chipzz, of course it did.
[15:33] <pitti> Keybuk: hm, but if it integrates the gdm bits into the block ordering/prefetching, that can only benefit us?
[15:33] <Keybuk> pitti: I don't follow?
[15:33] <Chipzz> gribouille: if you want the source, apt-get source firefox I think
[15:33] <pitti> Keybuk: I mean, we do want to prefetch gdm, libgtk, etc. as well
[15:34] <Keybuk> pitti: we do
[15:34] <Keybuk> but we don't want to _over_fetch_
[15:34] <Chipzz> which will extract the source in the current directory
[15:34] <Keybuk> the danger with things like prefetch is that they can be too enthusiastic
[15:34] <gribouille> Chipzz, I know that
[15:34] <Keybuk> so actually spend longer fetching things than it would have taken to just load them anyway
[15:34] <pitti> Keybuk: you mean if the user does autologin and immediately starts OO.o while still measuring boot
[15:34] <Chipzz> then why are you asking where to get the source? :p
[15:35] <Keybuk> pitti: actually, I just mean that the overlap between boot and gui in Ubuntu (we start gdm while stuff is still going on) kills us ;)
[15:35] <Keybuk> we end up fetching things twice
[15:35] <pitti> Keybuk: why twice?
[15:35] <Keybuk> gdm loads, and fetches its pages by itself
[15:35] <Keybuk> a little later, prefetch goes "oh, and load gdm"
[15:35] <Keybuk> often half way through the login after gdm got paged back out again
[15:35] <gribouille> Chipzz, don't bother, I've found the solution to my problem
[15:36] <Keybuk> it's the "fetch by time" stuff that's the issue
[15:36] <pitti> Keybuk: but in the end we want to read it all (boot+gdm+maybe session) in one efficient big block, since the net time will be still faster, even if parts of the boot will happen later due to the large prefetch
[15:36] <Keybuk> pitti: nope
[15:36] <Keybuk> we want to read the really essential bits first
[15:36] <Keybuk> then, while udev is spinning and HAL is waking up
[15:36] <Keybuk> we read in more
[15:36] <Keybuk> hammer the disk while we're blocked in other things
[15:37] <Keybuk> if you prefetch the entire boot into memory at the start, you'll actually usually overflow the memory ;)
[15:37] <pitti> ok, good point, if you have little memory
[15:37] <Keybuk> not even then
[15:38] <Keybuk> I think prefetch would be better if it were process tree based
[15:38] <pitti> yeah
[15:38] <Keybuk> ie. if we prefetch gdm (either because it was exec()'d or because we did it manually)
[15:38] <pitti> I wasn't aware that it is only time-based
[15:38] <Keybuk> then it prefetches the entire login tree until we changed something
[15:38] <Keybuk> we might for example have essential boot, system daemons, gdm, user login
[15:39] <Keybuk> and do those by the pid that starts them (or group of pids)
[15:42] <pitti> hm, so once again we don't have a good solution implemented :/
[15:45] <pitti> eww, the vmmouse driver in intrepid seems to be totally broken
[15:45] <Keybuk> pitti: no
[15:45] <Keybuk> I've gone through the prefetch code, and I don't think it would be hard to make it behave well
[15:46] <Keybuk> after all, it's basically just vm block dump ;)
[15:46] <Keybuk> but there are several things on the critical path before I can do kernel patches <g>
[15:52] <pitti> james_w: joy; 0.2.2 does not even build
[15:53] <james_w> pitti: oh dear.
[15:54] <james_w> I was using a snapshot from just before 0.2.2, so I don't know what changed.
[15:57] <james_w> pitti: I can make my source package available if you like.
[15:57] <pitti> james_w: which configure arguments did you use?
[15:58]  * pitti used --enable-apt --with-default-backend=apt --enable-tests
[15:59] <james_w> I don't think you want --enable-tests
[16:00] <james_w> pitti: --enable-apt --disable-dummy --with-default-backend=apt --with-security-framework=polkit
[16:01] <pitti> james_w: for me it crashes in pk-main.c, over some invalid g_set_error()
[16:01] <pitti> but it uses a lot of constants and macros, it's not just a trivial typo
[16:01] <pitti> james_w: will try that, thanks
[16:03] <pitti> james_w: hm, same problem
[16:03] <pitti> james_w: might be a gcc-4.3-ism
[16:04] <pitti> oooh
[16:04] <pitti> james_w: indeed, that looks like a -Wformat-security issue
[16:04] <pitti> and it uses warnings-as-errors
[16:05] <pitti> it's all kees' fault :-P
[16:05]  * pitti hugs kees
[16:05] <james_w> pitti: lp:~james-w/packagekit/debian-packagekit-0.2.x.jamesw
[16:06] <pitti> james_w: hah, I fixed it; it really was -Wformat-security
[16:06] <pitti> I'll send that fix to upstream
[16:06] <pitti> james_w: thanks anyway
[16:07] <james_w> well, I hadn't fixed that, so thank you.
[16:07] <seb128> hum, archive.ubuntu.com still outdated
[16:07] <seb128> is the mirror sync not running?
[16:09] <liw> cgregan, pong
[16:09] <pitti> james_w: did "make test" work for you ? it fails on "get distro ID" for me
[16:10] <james_w> pitti: never tried it, sorry, I was just trying a few manual tests.
[16:10] <james_w> what's the failure?
[16:10] <pitti> james_w: ok, thanks
[16:10] <pitti> well, it just says that
[16:10] <pitti> haven't looked into details
[16:17] <cgregan> liw: I had a mentor question for you. Heno picked it up in your absence.
[16:20] <liw> cgregan, ok, cool
[16:22] <Twigathy> https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/221613 <-- failbug :(
[16:29] <lamont> 92 MB is not a diff. kthx
[16:29] <ogra> so you dont touch diffs under 100M ?
[16:32] <laga> lamont: unzip it?
[16:33] <lamont> if the diff is that large, it's time to tar it up as openoffice.org_2.4.1+ubuntu1.orig.tar.gz.  really.
[16:33] <lamont> unless that's 92 MB of _new_ stuff
[16:33] <Hobbsee> it's probably new spaghetti code, for your enjoyment.
[16:34] <lamont> it's Oo.o...  enjoyment is not possible.
[16:34] <seb128> lamont: better to upload 90meg than 360
[16:34] <Hobbsee> lamont: i was thinking of the sadistic type.  or masochistic.  whichever it is.
[16:40] <wasabi> So would anybody be opposed to something like preventing smb/winbind from stopping BEFORE an upgrade, and only making it restart in the postinst or a trigger or something?
[16:40] <wasabi> When winbind is the provider of your NSS passwd table, stopping it for 3 minutes sort of sucks.
[16:44] <mathiaz> wasabi: that seems reasonable - did you file a bug ?
[16:45] <mathiaz> wasabi: slangasek may also have an opinion on this issue (^^)
[16:48] <pitti> james_w: *sigh* I built with correct sysconfdir(/etc) and prefix(/usr) now, and yet it says "launch helper exited with unknown return code 1"
[16:50] <james_w> pitti: do you have /usr/lib/packagekit/aptDBUSBackend.py ?
[16:51] <pitti> james_w: I have an /usr/lib/packagekit-backend/libpk_backend_apt.so
[16:51] <james_w> you need both of them.
[16:52] <pitti> it didn't even install a directory /usr/lib/packagekit/
[16:52] <pitti> james_w: ok, I'll copy it manually
[16:54] <pitti> didn't help, though
[16:54] <james_w> pitti: running "packagekitd --verbose" often gives good clues, if you haven't found it already.
[16:54] <lamont> generally speaking, having daemon's stay alive across upgrades is a good thing.
[16:55] <pitti> james_w: ah
[16:55] <pitti> james_w: I just used pkmon so far, but that isn't very helpful
[16:56] <mvo> pitti: you may need to build with the "apt2" backend - not sure if they have merged already
[16:56] <mvo> (sorry, was distracted and only just looked at irc)
[16:56] <pitti> mvo: that option is still in 0.2.1, but not in 0.2.2 any more
[16:56] <james_w> the rename has been done in 0.2.2 I think
[16:56] <pitti> I think the DBUS apt backend is the only one in 0.2.2 now
[16:56] <mvo> aha, cool
[16:56] <mvo> even better
[16:57] <pitti> hm, nothing really enlightening
[16:58] <pitti> a-haa
[16:58] <pitti> /etc/init.d/dbus reload did the trick
[16:58] <pitti> odd, I thought that bug was fixed now
[17:00] <pitti> so 'search' and 'refresh' work, and 'install' gives me a nice Python backtrace about a missing function in packagekitd --verbose
[17:00] <pitti> james_w: ah, seems that "make install" put aptDBUSBackend.py into /usr/libexec/
[17:04] <pitti> ugh, there are so many bugs in aptDBUSBackend.py that this can hardly have been tested on any Debian so far
[17:05] <mvo> pitti: its in development (glatzor was working on it) and PK changes really fast
[17:07] <pitti> ok, I think there is something major missing in doResolve(), which isn't just a typo
[17:12] <pitti> yay, I got doResolve() working mostly \o/
[17:12] <pitti> get-details works now
[17:14] <james_w> pitti: "get-details package_id" or "get-details package_name" ?
[17:14] <pitti> james_w: "get-details pmount" works now
[17:14] <pitti> "install pmount" doesn't
[17:14] <james_w> ah cool, nice work.
[17:14] <pitti> but right, I need to specify an ID there, don't I?
[17:14] <james_w> you used to, but I think it's supposed to do package names now.
[17:15] <pitti> hm, if I disable the _is_package_visible() test, "install pmount" works
[17:15] <james_w> I've never worked out what is missing to allow that.
[17:15] <calc> anyone looking into fixing bug 185311 ?
[17:15] <pitti> so apparently there are some filters in action
[17:15] <calc> its causing OOo to crash a LOT
[17:18] <pitti> james_w: right; "install package_id" works
[17:19] <pitti> james_w: and "remove package_name", too
[17:19] <pitti> james_w: so, I guess that's "good enough"
[17:19] <bdmurray> If I find a bug requiring sponsorship the right thing to do is subscribe the appropriate team correct?
[17:20] <james_w> pitti: yeah, I don't think anyone's going to actually use the tools directly.
[17:20] <iwkse> cjwatson, ping?
[17:20] <calc> bryce: ping
[17:20] <pitti> james_w: right, I wasn't either, but they are great for testing the backend
[17:20] <james_w> I'm not even sure we should have gnome-packagekit in the archive for a while.
[17:20] <cjwatson> iwkse: pong
[17:22] <iwkse> cjwatson, according to this http://pastebin.ca/1051008, it seems i can't find the file since it's a temp file :\ looks like a panel schema file though but i couldn't see such errors in the panel schema file.
[17:22] <cjwatson> iwkse: are you modifying this CD?
[17:22] <iwkse> cjwatson, yeah
[17:22] <cjwatson> it's rather hard for me to say then ...
[17:22] <iwkse> cjwatson, changing the panel objects
[17:28] <pitti> james_w: ok, I'll send the two patches to upstream now; bug reports will work, I assume?
[17:28] <james_w> I assume so, I've done patches to the mailing list previously.
[17:30] <pabix> Hello, where is it possible to leave suggestions?
[17:31] <Keybuk> pabix: brainstorm.ubuntu.com
[17:31] <iwkse> cjwatson, actually gconf in this step is doing update-gconf-defaults?
[17:31] <pabix> Keybuk, thanks
[17:31] <cjwatson> iwkse: user-setup is calling update-gconf-defaults
[17:31] <iwkse> cjwatson, i see, if i call directly update-gconf-defaults i don't get errors
[17:32] <cjwatson> it's not doing anything unusual
[17:32] <iwkse> i see
[17:32] <cjwatson> remember that it's calling it chrooted to /target
[17:32] <cjwatson> so 'sudo chroot /target update-gconf-defaults' if you want to try to reproduce it
[17:32] <iwkse> ah ok, thanks
[17:36] <pitti> kees: what's the compiler switch for enabling format string warnings? -Wformat-security, something like that?
[17:37] <pitti> ah, I think that's it
[17:48] <calc> bryce: ping?
[17:53] <RiotingPacifist> i need to manually install some source code for a ubuntu package, where do i put it so that compilers know it's there?
[17:58] <jordi> are there any daily CDs of hardy with the latest packages accepted in main?
[17:58] <jordi> ie, a test cd of 8.04.1?
[18:00] <persia> RiotingPacifist: It very much depends on the package.  Take a look at debian/rules to determine how patches are applied for a given package.
[18:01] <RiotingPacifist> thx
[18:01] <persia> jordi: http://cdimage.ubuntu.com/hardy/daily/ is likely the closest (but perhaps not quite what you seek)
[18:02] <pabix> Keybuk, proposal done!
[18:09] <kshah> thanks for fixing my eSATA problem with the last update
[18:21] <slangasek> wasabi, mathiaz: I'm not opposed to keeping winbind running until the postinst on upgrades, as long as we're idempotent and handle all the maintainer script cases :)
[18:21] <slangasek> pitti: yes, samba SRU being prepared now
[18:22] <kees> pitti: yeah, -Wformat-security (which requires -Wformat also)
[18:22] <kees> pitti: so, technically,  -Wformat -Wformat-security
[18:22] <kees> pitti: https://wiki.ubuntu.com/CompilerFlags
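The class of bug those flags catch is passing untrusted text directly as a printf-style format string. This minimal sketch (the helper name is invented; it is not the actual PackageKit fix pitti made) shows the safe form — compiling the commented-out variant with `gcc -Wall -Wformat -Wformat-security -Werror` fails, which is how the build above broke:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical logging helper. Treat the message as data, never as a
 * format string, so '%' sequences in user input are copied literally. */
void log_msg(char *dst, size_t n, const char *msg) {
    /* snprintf(dst, n, msg);     <- warns: format not a string literal */
    snprintf(dst, n, "%s", msg);  /* safe: msg is an argument, not a format */
}
```

With the unsafe form, a message containing `%s` or `%n` would be interpreted as a conversion, reading or writing through whatever happens to be on the stack.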
[18:25] <pitti> kees: thanks
[18:26] <kees> pitti: why? reporting upstream bugs?
[18:26] <pitti> kees: yes, freedesktop bug 16431
[18:30] <kees> pitti: yeah, I've been plucking FORTIFY_SOURCE patches out of Fedora and sending them to upstreams.
[18:31] <mvo> james_w: we need to ask glatzor to add more administrators to the packagekit team :)
[18:38] <persia> sistpoty, nixternal: congratulations !
[18:38] <tseliot> mvo: do you have experience with PolicyKit?
[18:58] <zul> slangasek: bug #180493 those two bugs look similar to me since the network has been disabled but nmbd dies
[18:59] <cody-somerville> nixternal, congratz. :)
[18:59] <slangasek> zul: sure; I just think it makes more sense then to merge the bugs instead of listing two bug numbers for the same issue in the changelog?
[19:00] <slangasek> zul: a minor point - dropping the security patch is a bit more of a problem :)
[19:00] <zul> slangasek: sure no problem
[19:00] <zul> slangasek: yeah I didn't realize that I did that
[19:01] <zul> it can wait until after 8.04.1
[19:02]  * calc pings bryce until he looks at irc again ;-P
[19:03] <kirkland> slangasek: hey, question about bug https://bugs.edge.launchpad.net/ubuntu/+source/pam/+bug/64064
[19:04] <ion_> Meh, .local/bin :-)
[19:04] <slangasek> kirkland: <whine>
[19:05] <calc> hmm it sounds like 185311 was fixed already but just fixed in debian recently if i am reading the changelog correctly, so why am i getting all these locking bugs still :-\
[19:05] <kirkland> slangasek: :-?
[19:05] <slangasek> kirkland: what's your question? :)
[19:05] <kirkland> slangasek: i'm guessing that since that bug is really old, the one-line fix i submitted is probably not acceptable as is
[19:06] <kirkland> slangasek: i guess i'm asking you if that bug is really a "won't fix"
[19:09] <slangasek> kirkland: well, that won't get you tilde expansion within the environment variable; not sure whether that's required, or by which shells
[19:10] <slangasek> pitti: samba SRU in the queue for you
[19:10] <kirkland> slangasek: understood that the tilde will be written to /etc/environment ...  my shells (dash/bash) are okay with it and it works as expected
[19:11] <slangasek> I think it ought to be checked with some other common interactive shells, like zsh and maybe ksh
[19:12] <kirkland> slangasek: i'll go test.....
[19:12] <zul> its libdb4.7 in intrepid now isnt it? so sources hardcoded to use db4.6 have to be updated?
[19:13] <slangasek> kirkland: otherwise, I guess I can't see any reason to treat it as wontfix; it will slow down tab completion if $HOME is on NFS though
[19:13] <kirkland> slangasek: yes, i think kees raised that (or a similar) concern
[19:13] <kirkland> slangasek: I can add some logic to test that, i suppose, if you think it's necessary
[19:14] <slangasek> kirkland: no, I think that would be overengineered at that point :)
[19:15] <mathiaz> zul: yes - better to depend on libdb-dev though
[19:15] <zul> k
[19:15] <slangasek> zul: hrm, I don't know that there's been any discussion of systematically moving to db4.7 for intrepid; fwiw, Debian is entering freeze for lenny soon and will most likely not see updates to db4.7, which means pretty much all such updates will be a divergence
[19:15] <slangasek> mathiaz: no, libdb-dev is an abomination :P
[19:16] <slangasek> if you build-depend on libdb-dev, you have no guarantee that a random rebuild won't render your program incompatible with your on-disk data
[19:16] <mathiaz> slangasek: when I merged bogofilter, I saw that : Build-Depends: libdb-dev (>= 4.6.19-1)
[19:17] <pitti> well, admittedly the db format hasn't changed in ages, and most packages don't have on-disk transactions (those should depend on db4.x-dev)
[19:17] <slangasek> mathiaz: bogofilter has the same maintainer as libdb itself
[19:17] <mathiaz> slangasek: hm.. bad example then
[19:17] <slangasek> :)
[19:18] <mathiaz> pitti: when you did the libdb4.6 migration in hardy, what did you use ? libdb4.6-dev or libdb-dev ?
[19:18] <zul> it doesnt look like libdb-4.6 is available now though?
[19:19] <pitti> mathiaz: hm, I'm not actually sure any more; I think libdb-dev for the pacakges without transactions
[19:19] <pitti> zul: it's 4.7 now
[19:19] <pitti> db is a PITA :/
[19:19] <slangasek> anyway, any change in db version requires a sourceful upload in Ubuntu, I don't see the value in creating a delta to use libdb-dev
[19:19] <slangasek> db4.6 is still available, incl. libdb4.6-dev
[19:19] <mathiaz> pitti: ok
[19:20] <slangasek> libdb4.7-dev is also available :)
[19:20] <mathiaz> slangasek: so the policy in debian is to use libdb4.X-dev ?
[19:20] <slangasek> there's no policy
[19:20] <slangasek> I'm just saying that libdb-dev is an abomination :)
[19:21] <slangasek> if it had been named libdb<version number of on-disk format>-dev, that would have made sense
[19:22] <pitti> slangasek: samba accepted
[19:22] <zul> so should php5 be using libd4.6-dev instead of libdb-dev?
[19:22] <zul> ergh libdb4.6-dev
[19:23] <pitti> zul: php5 doesn't use on-disk transactions, so this would be ok with libdb-dev
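The convention slangasek is arguing for looks like this in a package's debian/control (an illustrative fragment, not from any of the packages discussed): pin the db version explicitly so that a no-change rebuild cannot silently move the package to a new on-disk format.

```
Source: examplepkg
# Pin the on-disk format: a rebuild can't silently switch db versions.
Build-Depends: debhelper (>= 5), libdb4.6-dev

# The alternative slangasek objects to, which tracks whatever db
# version is current in the archive:
# Build-Depends: debhelper (>= 5), libdb-dev
```

As pitti notes, the unversioned form is tolerable for packages that use db purely as a cache and never rely on on-disk transactions.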
[19:29] <EagleScreen> hello, can anyone help me?? i need help to validate my GPG key on Launchpad, i am reading this howto: https://help.ubuntu.com/community/GnuPrivacyGuardHowto and it gives this other link for validating on Launchpad: https://help.ubuntu.com/community/https%3a//launchpad.net/%7e%3cusername%3e/+editpgpkeys but i think the link is not working properly
[19:31] <tormod> EagleScreen: -> #ubuntu-doc - seems like a typo
[20:04] <LaserJock> siretart: good email
[20:12] <nixternal> persia and cody-somerville: thanks!
[20:12] <LaserJock> nixternal: congrats!!
[20:13] <nixternal> thanks
[20:13] <nixternal> nothing like walking 1/2 mile for food, then walking 1/2 mile back after eating...I am hungry again :)
[20:13] <LaserJock> nixternal: now get to work!!! *whip*
[20:13] <nixternal> I am at work
[20:13] <nixternal> tired now after eating :)
[20:13] <LaserJock> no, *real* work
[20:14] <LaserJock> the *buntu kind
[20:14] <nixternal> I would kill to do that kind right about now
[20:15] <nixternal> anyone good with Kickstart and Anaconda?
[20:15] <cjwatson> only in that I stared at it enough to reimplement Kickstart for Ubuntu about three years ago
[20:15] <cjwatson> I'm not sure this will actually help you
[20:17] <nixternal> hehe, I wish our appliance used Ubuntu, then I would just use FAI
[20:17] <nixternal> oh well, I will just have to use the hackish way and place the tune2fs -m 0 in a %post script using Python
[20:29]  * slangasek shakes his fist at seb128's retreating form... oh sure, upload and run then :)
[20:32] <kirkland> slangasek: okay, appending ~/bin to /etc/environment seems to work for (bash, dash, ksh, ash), but not for (zsh, csh, fish, tcsh, es, rc, sash)
[20:34] <slangasek> seb128: hmm, the upstream comment in bug #207072 is wrong, I tested this patch in a Kerberos environment and it works correctly
[20:35] <slangasek> seb128: so either he's commenting on a previous version of the patch, or we're getting a disconnect somewhere else
[20:35] <seb128> slangasek: which one? yours or the new upstream version?
[20:35] <slangasek> seb128: I tested my patch in an ADS env
[20:35] <slangasek> kirkland: right, about the spread I was expecting :-)
[20:35] <seb128> slangasek: could you try the current upstream version (that's the one I uploaded to intrepid)?
[20:37] <seb128> slangasek: note that upstream is a redhat guy and I think they are using samba 3.2.0 pre versions if that makes a difference
[20:38] <seb128> slangasek: I'm also not sure what would be the right behaviour for anonymous against authenticated logins
[20:38] <slangasek> seb128: hum, I'm looking over the patch right now, and it looks pretty darn similar to mine.. so I think upstream's comments are about the original patch :)
[20:39] <slangasek> seb128: this bug report exists specifically because anonymous connections are made without giving the user a chance to provide a username/password instead; so like I said, if we continue to try anonymous connections before asking for auth, we aren't fixing this bug
[20:39] <seb128> the issue is that you don't want to get a password prompt for every click you do on a local network which requires no authentication
[20:40] <slangasek> true; could users save settings to the keyring, in that case?
[20:40] <seb128> what setting?
[20:40] <slangasek> the "anonymousness" setting :)
[20:41] <seb128> they somewhat solved this issue in gnomevfs I think, I need to look how
[20:41] <seb128> but my gut feeling is that default should be anonymous
[20:41] <seb128> and you should be able to specify an user name in the uri or using a menu item in case that's required
[20:42] <slangasek> er, for hardy, the difference in default is the difference between "users with unpassworded resources have to click past the auth dialog, and that's annoying" and "users who need to authenticate to get a browse list can't use the GUI browser at all"
[20:45] <seb128> slangasek: did you look at the difference between your patch version and the new upstream one?
[20:45] <slangasek> seb128: still grabbing it
[20:45] <seb128> is that only the order between password and anonymous login?
[20:45] <seb128> one thing to consider is that "upstream" is a new upstream guy
[20:45]  * slangasek nods
[20:45] <seb128> the gvfs maintainer is on holidays currently
[20:46] <seb128> and the one who is writing the patch started looking at gvfs and nautilus some weeks ago but I don't think he knows the code really well yet
[20:47] <slangasek> just so I can keep this all straight - what's the name of this new upstream guy?
[20:47] <seb128> Tomas Bzatek
[20:47] <slangasek> ok
[20:47] <seb128> he's the one who attached the new patch version and commented
[20:48] <seb128> he's not really upstream for gvfs but working for redhat and looking at gvfs and nautilus for them and the most active contributor currently
[20:48] <seb128> technically Christian Kellner is the maintainer
[20:49] <seb128> and Alexander Larsson is the original maintainer who is on holidays
[20:49] <slangasek> yes, the main difference between the patches is prioritizing anonymous before password auth
[20:50] <seb128> ok, so I think I'll argue with you than we should get things working first and fix annoyances then
[20:50] <seb128> s/argue/agree
[20:50] <slangasek> ok
[20:50] <slangasek> he also adds an smbc_getdents() check... let me see what the semantics are
[20:50] <slangasek> because that might mitigate somewhat
[20:50] <slangasek> mm, no, it doesn't
[20:50] <slangasek> (I was hoping his check might detect the case of zero share entries, but it doesn't)
[20:51] <seb128> is there still a slot to get your change in 8.04.1?
[20:51] <seb128> I though that was the idea
[20:51] <seb128> 0 share = try login
[20:51] <slangasek> well, that's not what his code does :)
[20:51] <bryce> what do the different colors on the MoM page indicate?  browsing through the MoM source they seem to be related to some sort of prioritization?
[20:51] <seb128> ok, so let's use your version
[20:51] <slangasek> ok
[20:51] <seb128> should I talk to pitti about getting that accepted tomorrow morning
[20:51] <slangasek> we can still get this into 8.04.1, yes; we still have pulseaudio hanging over our heads
[20:52] <slangasek> ( :/ )
[20:53] <seb128> or will you accept it to hardy-proposed if I upload it tonight?
[20:53] <seb128> anyway tonight or tomorrow morning will not make a big difference
[20:53] <kirkland> bryce: i think it has to do with how old or how long it's been since the last merge
[20:53] <seb128> I'll prepare the upload and let see who accepted it then
[20:57] <slangasek> seb128: yes, if you want to get it done tonight I'll accept it; I wasn't going to suggest that, you're allowed to sleep instead and do it in the morning :)
[20:59] <seb128> slangasek: that's alright, it's not late yet ;-)
[20:59] <bryce> kirkland: yeah I also thought it was age related, which is why I was surprised to see it listed as a priority thing
[21:00] <slangasek> seb128: oh - ok, reading closer, the smbc_getdents() part would mitigate
[21:00] <slangasek> seb128: so it would address the AD case, it would not address the "per-user samba share" case
[21:00] <slangasek> so it's up to you which you think is more appropriate, I'll accept either one
[21:02] <seb128> well, I think if the upstream one mitigates I would accept it on the basis that setups allowing to list different shares anonymously or using login are not too frequent, but I'm not sure this assumption is right
[21:02] <seb128> I've no real idea on what setups are used in real world
[21:03] <slangasek> one specific real-world case is the commented-out example in our smb.conf for user home directory shares :)
[21:03] <seb128> I mean that seems to be a good compromise if we don't annoy desktop users and allow to browse ad domains
[21:03] <slangasek> yes, I think it's a reasonable compromise
[21:07] <seb128> ok, so let's try this one for now then
[21:07] <seb128> I've already uploaded an updated gvfs to intrepid
[21:07] <seb128> let me know if you can give a try to the patch in your ad setup before I upload to hardy
[21:07] <seb128> just to make sure it works for somebody ;-)
[21:16] <slangasek> seb128: I only have my laptop set up for AD and it's still running hardy, and the AD setup is through the work VPN, so I'm afraid I'll have to test after it's uploaded
[21:16] <seb128> slangasek: alright
[21:35] <mathiaz> Keybuk: I've modified mom to include the section (updated, outstanding or new) in the status file (tomerge-*). See https://code.launchpad.net/~mathiaz/merge-o-matic/section-in-status-file
[21:35] <mathiaz> Keybuk: what do you think about it ?
[21:40] <hwilde> when I run "top" what is meant by the "buffers" in the memory display?
[21:40] <hwilde> Mem: 507664k total, 475780k used, 31884k free, 92116k buffers
[21:42] <hwilde> don't feel bad, nobody else knows either... :/
[21:47] <Keybuk> hwilde: socket buffers, memory for devices, stuff like that
[21:47] <hwilde> right but what is the interpretation of this?  is that memory in use, or is it free
[21:48] <Keybuk> in use
[21:48] <Keybuk> well
[21:48] <Keybuk> available
[21:48] <hwilde> yeah exactly
[21:48] <Keybuk> it's the amount of memory currently taken up by buffer structures
[21:48] <Keybuk> not the amount of memory in actual use
[21:48] <hwilde> lol
[21:49] <Keybuk> should the system run low on memory, the kernel will return much of that if it's not actually being used
[21:49] <Keybuk> likewise for cached
[21:49] <Keybuk> cached is the amount of memory currently taken up by the page cache
[21:49] <Keybuk> since any of that can be returned, since it's just a copy of what's on disk, it's in use
[21:49] <Keybuk> but available
[21:50] <Keybuk> so we tend to say that the unused memory is free + buffers + cached
[21:50] <Keybuk> err, available memory
[21:50] <Keybuk> 100% of the memory of the system should be in use at all times
[21:50] <Keybuk> anything in free is wasted memory
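Keybuk's rule of thumb, applied to the top line hwilde pasted above, is just a sum (the cached figure isn't shown in that paste, so it's passed as 0 here):

```c
#include <assert.h>

/* Approximate "available" memory per the rule of thumb discussed:
 * free + buffers + cached, all in kB as top and /proc/meminfo report.
 * Buffers and page cache can be reclaimed by the kernel on demand,
 * so they count as available even though top lists them as "used". */
long available_kb(long free_kb, long buffers_kb, long cached_kb) {
    return free_kb + buffers_kb + cached_kb;
}
```

So the machine in the paste has at least 31884 + 92116 = 124000 kB effectively available, rather than the 31884 kB that "free" alone suggests.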
[21:53] <pwnguin> well, a bit less than 100 but a close value is optimal
[21:56] <hwilde> Keybuk, thnx
[21:56] <hwilde> found a decent article  http://www.faqs.org/docs/linux_admin/buffer-cache.html
[21:57] <pwnguin> hwilde: written by our very own liw
[21:59] <hwilde> hehe
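Keybuk's rule of thumb above (available memory = free + buffers + cached) can be checked against hwilde's own "top" output. A minimal sketch, summing only the "free" and "buffers" figures since the pasted line doesn't show "cached" - the helper name `available_kb` is just for illustration:

```shell
# Sum the "free" and "buffers" figures out of a top-style Mem: line.
# (Keybuk's full rule also adds "cached", which this line omits.)
available_kb() {
    echo "$1" | awk -F'[ ,]+' '{
        for (i = 1; i < NF; i++)
            if ($(i+1) == "free" || $(i+1) == "buffers")
                sum += $i
        print sum
    }'
}
available_kb "Mem: 507664k total, 475780k used, 31884k free, 92116k buffers"
```

On a live system the same figures come from /proc/meminfo (MemFree, Buffers, Cached), which is where top reads them anyway.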
[22:16] <geser> cjwatson: who grants exceptions for new merges after DIF? I guess for packages in main it's ubuntu-release and motu-release for packages in universe.
[22:17] <cjwatson> sounds right
[22:18] <LaserJock> serious?
[22:18] <LaserJock> I don't believe we've done that in the past have we?
[22:20] <norsetto> LaserJock: iirc, we did it after feature freeze in motu-release
[22:20] <LaserJock> norsetto: right
[22:20] <LaserJock> DIF and FF is quite a bit different though
[22:21] <ScottK> So I can package and upload a new upstream version directly until FF, but I need some kind of approval to sync a new revision from Debian?
[22:21] <ScottK> That seems rather putting the cart before the horse.
[22:23] <james_w> pitti: thanks!
[22:23] <geser> ScottK: only if you didn't merge it till DIF
[22:23] <ScottK> What if there's a new revision?
[22:25] <persia> For the hardy cycle, DIF exceptions were granted by MOTU and core-dev.  Do we really want to change that?
[22:25] <ScottK> Right.  If a developer thought it was appropriate they did it.
[22:25] <ScottK> It's not really clear how that's a freeze, but whatever.
[22:26] <persia> And presumably, we each, as developers, respect the release goals and will ask any relevant people who may be affected.  For deep leaf universe, that's almost nobody.  For deep core platform, that's a lot of people, and probably some meetings.
[22:27] <persia> ScottK: Freeze in the sense that it should only be done for a reason, rather than just because it's there.
[22:27] <geser> cjwatson: does it also apply for new merges where the first new Debian upload after hardy happened after DIF? or only for "old" unprocessed merges?
[22:27] <persia> The idea is to provide some stability so that we can work on integration and bugfixing.
[22:27] <ScottK> Yes, but this is a matter of judgement for the developer.
[22:28] <ScottK> I don't think we should ever be uploading stuff 'because it was there'.
[22:28] <persia> Well, a developer within the development community, sure.
[22:28] <persia> That's what we tend to do pre-DIF.  Lots of stuff comes from Debian that breaks things, and that's fine
[22:28] <ScottK> But that's equally true for sponsorship requests.  Just because someone asks doesn't mean we have to upload it.
[22:29] <persia> Absolutely, although we ought look and provide feedback as to why we won't upload it if we don't.
[22:30] <ScottK> Once again, no different before/after DIF.
[22:30] <persia> The difference is in focus.
[22:30] <persia> Pre-DIF we ought try to get up-to-date with everything out there.  Post-DIF we ought integrate what we have and shape it into an integrated distribution.
[22:31] <ScottK> Right, but I still don't see it as a freeze/exception process.  Just doing what we are supposed to be doing.
[22:32] <persia> I prefer to call it a freeze for two reasons: 1) it's historically been called a freeze, and 2) it helps encourage people to think about it.
[22:33] <persia> I especially don't like it when someone starts a library transition a couple weeks before feature-freeze: it's a bunch of NBS work when we are otherwise busy.
[22:33] <ScottK> Freeze is fine, it's the 'needs an exception' bit that I think is overkill.
[22:33] <persia> ScottK: I detailed the exception process for Hardy, and received Ack from the release manager (and yourself).  It's not onerous.
[22:33] <geser> so we are allowed to request syncs from Debian after DIF but the merge needs an exception?
[22:33] <ScottK> persia: I didn't much like it then either.
[22:34] <persia> geser: sync is also an exception, but as ScottK points out, it's not hard to get one.
[22:34] <slangasek> sbeattie: fixed hardy daily CDs are available again, with right-sizing
[22:34] <ScottK> geser: And apparently since you're a developer you can self grant it.
[22:34] <geser> persia: do syncs also need an exception from *-release after DIF?
[22:35] <persia> geser: I'm not convinced exceptions ought be granted by *-release, but if *-release says so, I'm willing to try it.  I think it's a lot of work for them for not so much gain.
[22:36] <ScottK> persia: I'm arguing against the entire concept that there is anything to except.
[22:36] <geser> persia: the announcement says sync requests are ok, only new merges need an exception
[22:36] <geser> but I still don't understand the rationale for this
[22:36]  * persia hasn't liked any of the DIF announcement mails that have ever been sent.
[22:36] <DktrKranz> so, what's the main difference? I'm a developer, I can upload since I can self-grant an exception myself? weird...
[22:36] <geser> a sync can break as much as a new merge
[22:37] <ScottK> geser: Yes.  You can upload a new package without an exception.  You can request an unmodified package be synced without an exception.  We just make it special if a package has been modified in Ubuntu and Debian did something with it.
[22:37] <ScottK> It's complete nonsense.
[22:37] <persia> More so, really, as a merge tends to involve more careful thought on the part of the developer.
[22:37] <nixternal> actually, a sync can break more than a new merge, as I found out by someone taking smb4k and asking for a sync instead of a merge thinking the patches were there, and somehow it got through
[22:37] <nixternal> and just recently someone cried about their /etc/sudoers getting messed up :)
[22:37] <nixternal> muhahahaha
[22:38] <persia> (for the fourth different time that the bug was raised)
[22:38] <nixternal> 4th? probably getting close to 10th
[22:38] <ScottK> I'd argue that syncing over an Ubuntu diff is much riskier than updating a merge.
[22:38] <nixternal> I will email upstream and be like "wth dude, how many sudoers files you gonna screw before we pull your app"
[22:38] <persia> nixternal: Really?  I thought the smb4k sudoers bug had only been fixed four times.
[22:39] <nixternal> well someone messaged me in -motu yesterday about it
[22:39] <nixternal> wgrant actually did
[22:39] <persia> ScottK: I'd agree with you.  Personally, I think people ought think twice before making any non-bugfix upload after DIF.
[22:40] <ScottK> So why special case one class of upload that is not particularly risky compared to others?
[22:40] <seb128> persia: that's early to think that much before uploading
[22:40] <persia> seb128: Is it?  Why then do we turn off the autoimporter?
[22:40] <ScottK> persia: If you said DIF/FF I'd agree.
[22:40] <DktrKranz> persia, most merges done after DIF will close bug, so I can see a need for some exception
[22:40] <nixternal> are we getting ready to freeze? I haven't even had time to check the schedule
[22:40] <ScottK> persia: So we have control over what happens, not because we're done with features.
[22:41] <DktrKranz> *can't
[22:41] <persia> ScottK: See, I'm not happy about our historical quality.  I think we ought fix more of our bugs.
[22:41] <geser> nixternal: DIF is on 26 June
[22:41] <seb128> persia: to avoid disruptive changes, ie pulling changes that require a transition for example
[22:41] <persia> nixternal: DIF is about a week away.
[22:41] <ScottK> persia: So propose FF = DIF for Intrepid + 1
[22:41] <nixternal> argh
[22:41]  * ScottK wonders where this DIF exception process is documented?
[22:41] <nixternal> need to figure out what is left, ScottK have you heard anything? I thought I saw Riddell say the other day on the K* side we are good
[22:42] <ScottK> Dunno.  I did python-kde3 last night.
[22:42] <persia> seb128: Right.  So, if we're avoiding disruptive changes, why is it early to think twice about uploads?  I think that pre-DIF, it's not so important to worry about integration, but post-DIF, one should think more about how the upload affects other packages, not just the one being uploaded.
[22:42] <nixternal> we need to find us a good little documenter to start reworking our wiki pages
[22:42] <nixternal> they are a mess
[22:42]  * nixternal knows the fingers are getting ready to be pointed at him
[22:42] <seb128> persia: being conservative on random app changes before feature freeze slows down upstream fixes rather than bringing stability
[22:42] <ajmitch> nixternal: correct
[22:43] <nixternal> hrmm, that isn't a bad idea...I should put everything on Development/Policies/* and make it all nice and easy to read
[22:43] <persia> seb128: That's not been my experience, but I tend to work in deep edge leaf territory, where upstream might have one release every 5 years.
[22:43] <ScottK> nixternal: ryanakca is in charge of the web site.  Doesn't that mean he's going to fix it all.
[22:43] <seb128> persia: right, but there is a difference between being careful and disruptive changes and being careful about any non-bug fix uploads
[22:43] <nixternal> but moinmoin sucks for making things easy to read :)
[22:43] <seb128> s/and/on
[22:43] <nixternal> ryanakca: ya, get to fixing it all!
[22:43] <persia> seb128: OK.  I'll grant that.  If you have a good upstream, it's a lot safer to pull new versions, etc.
[22:44] <calc> btw vmware 6.5 is cool :)
[22:44] <ryanakca> nixternal: fix what about the website?
[22:44] <norsetto> nixternal: congrats!
[22:44] <nixternal> ScottK: lucky for us, the K side is a bit more relaxed...we are already packaging broken software, so if we break something, it is easy to just blame upstream :P
[22:44] <nixternal> norsetto: thanks!
[22:44] <seb128> persia: ie for GNOME I don't bother about stability until feature freeze, granted that GNOME is a special upstream yes
[22:45] <persia> seb128: For GNOME, I think that's a relatively conservative position.
[22:45] <seb128> persia: but I'm happy to upload new version for random applications when there is a request from upstream or users until feature freeze without reading the diff in details
[22:45] <seb128> quick look on the Changelog or NEWS is usually enough
[22:45] <persia> seb128: I think that's dangerous.  Some of those may have knock-on effects that break 10s of packages in universe.
[22:46] <seb128> well, I say random applications
[22:46] <seb128> not libs
[22:46] <seb128> ie, I'm happy to update to a new gimp version
[22:46] <seb128> or a new inkscape
[22:46] <sbeattie> slangasek: thank you!
[22:46] <persia> seb128: See, gimp would be one of those cases I'd be careful.  It has about 20 rdepends.
[22:46] <slangasek> sbeattie: now to get the other flavors all built for testing...
[22:47] <sbeattie> Heh, yeah. Good luck with that. :-)
[22:47] <seb128> persia: well, we still have 4 months before intrepid, that's early to start wasting energy
[22:48] <seb128> persia: I assume that the breakage cases will cost less energy to fix than checking every upload we do in the next weeks
[22:48] <seb128> we don't have breakages that often
[22:48] <seb128> but we upload a lot
[22:48] <persia> seb128: I guess.  I consider it a waste of energy to have to investigate whether 31 packages FTBFS as a result of someone wanting nicer gradients in gimp, unless there is some particular goal involved in the gimp update.
[22:49] <persia> (and, yes, gimp tends to be one of the better upstreams, and may not be a perfect example)
[22:49] <seb128> right, but for one breakage case you will get 30 updates which bring something without any breakage
[22:49] <persia> seb128: Sure.  For true leaf apps, I don't see a problem.
[22:49] <seb128> it's a matter of how you want to spend energy
[22:49] <persia> I just think a developer should investigate, and try to avoid any transitions without a good reason.
[22:49] <seb128> rather in a conservative way and slow down changes
[22:50] <seb128> or rather go for changes trusting debian and upstream and fixing issues then
[22:50] <seb128> we usually have time to stabilize after feature freeze anyway
[22:50] <seb128> so I don't think there is a real need to be careful now
[22:51] <seb128> I already tried to argue for not freezing universe syncs so early previous cycle
[22:51] <persia> Well, except that every release since Breezy has crashed my computer once a day or so until the next dev release starts.  I'm not confident we do well with integration post-feature-freeze.
[22:51] <seb128> because motus don't have the manpower to fix everything anyway there
[22:51] <seb128> and the syncs usually bring rather good things than breakages
[22:52] <persia> seb128: For universe, I think I'd be happier with a later DIF, but I don't see the point of ignoring DIF just because we haven't figured out how to do a delayed DIF for universe.
[22:52] <seb128> and that would avoid the energy wasted filing hundreds of sync requests every week and having archive admins process them
[22:52] <slangasek> persia: er, my plain English parsing of that doesn't make any sense at all to me; you're saying that the stable releases don't become stable for you until the next devel release is opened...?
[22:52] <MacSlow> hm... what is the cause of LP refusing to copy the PPA-package of clutter 0.7~svn20080619-0ubuntu2 (from Intrepid) to Hardy?
[22:52] <EagleScreen> I am learning to use pbuilder; what happens if I am running Debian lenny but I want to build a package for Hardy?
[22:52] <persia> slangasek: Basically.  From about Final Freeze to about two weeks after archive open, I can expect my computer to crash hard every day.  This has been true since Dapper.
[22:52] <MacSlow> it complains that "same version already has published binaries in the destination archive"
[22:53] <slangasek> persia: wow.  what hardware is this?  maybe we should get one to use as a canary :)
[22:53] <MacSlow> this worked without a problem for clutter-cairo - 0.7~svn20080619-0ubuntu1
[22:53] <persia> slangasek: AMD 4400+/nVidia nForce4/nVidia 6800
[22:53] <MacSlow> does LP not like the 0ubuntu2 there being requested to an earlier release?
[22:54] <slangasek> MacSlow: well, if so, LP is right to not like it
[22:54] <MacSlow> persia, is that the thing you mentioned earlier this afternoon?!
[22:54] <slangasek> MacSlow: because copying binaries to a previous release isn't guaranteed to give you satisfied deps
[22:54] <slangasek> I can't speak to whether this is what LP is /doing/, but it's a reason why you shouldn't do it even if it did work :)
[22:55] <MacSlow> slangasek, but it did not complain for clutter-cairo... that's what made me assume it would work for clutter too
[22:55] <persia> MacSlow: Indeed it is.
[22:55] <slangasek> maybe LP got smarter in between
[22:55] <MacSlow> so better do Hardy -> Intrepid then I guess?!
[22:55] <slangasek> yes
[22:55] <MacSlow> *sigh* ok
[22:55] <LaserJock> there are certainly examples where people at least need to look at dependencies they may be breaking before uploading, early in the release cycle
[22:55] <MacSlow> thanks folks
[22:56] <persia> seb128: While I tend to file lots of sync requests, I agree it's a waste of time for the archive admin to process them, and anxiously await being able to sync to anything to which one can upload.
[22:56] <seb128> right, orthogonal issue though
[22:56] <seb128> it takes a lot of energy for you too
[22:57] <EagleScreen> I think I must use: sudo pbuilder update --distribution DIST-NAME --override-config
[22:57] <LaserJock> I've had libraries synced from Debian experimental to satisfy deps on Main apps without talking with or asking other people about the other deps
[22:57] <LaserJock> it's not great to do
[22:57] <EagleScreen> but in addition, must I change repositories to hardy?
[22:57] <persia> Sure, but I only sync maybe 1 in 5 updates, as most don't offer sufficient improvement to be worth possible disruption.
[22:57] <ScottK> EagleScreen: #ubuntu-motu is a better channel for such questions.
[22:58] <EagleScreen> thanks Scottk
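EagleScreen's question does have a standard answer: pbuilder can build for a foreign distribution if the chroot is created against that distribution's archive. A hedged sketch follows - the command is only printed, not executed (it needs root, network, and the hardy debootstrap script), and the mirror URL, component list, and keyring path are assumptions rather than values confirmed in this log:

```shell
# Print (rather than run) the pbuilder invocation a Debian host would
# need to create a hardy build chroot.  The --mirror, --components and
# --keyring values below are assumed, not verified.
hardy_pbuilder_create() {
    echo "sudo pbuilder create --distribution hardy" \
         "--mirror http://archive.ubuntu.com/ubuntu" \
         "--components 'main universe'" \
         "--debootstrapopts --keyring=/usr/share/keyrings/ubuntu-archive-keyring.gpg"
}
hardy_pbuilder_create
```

After the chroot exists, EagleScreen's own guess is roughly right: `sudo pbuilder update --distribution hardy --override-config` refreshes it against the same archive, so the host's Debian sources.list does not need to change.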
[23:08] <mardi_soir> hello
[23:08] <mardi_soir> i think i have a bug
[23:08] <mardi_soir> i know one issue
[23:08] <persia> mardi_soir: You'll want to file it in LP.  If you have coordination questions, #ubuntu-bugs is probably the right channel for discussion.
[23:09] <mardi_soir> sorry
[23:14] <cjwatson> LaserJock: I copied the message from Steve's hardy DIF message, so I don't believe I was introducing anything new
[23:15] <persia> cjwatson: You weren't.  Steve received complaints about his hardy DIF message.  I think the issue is that we've not well defined what the "Freeze" part of DIF means, or put any appropriate exception process on the wiki.
[23:16] <cjwatson> I don't feel especially strongly about what the exception process is, but I do think it's important to have some meaningful encouragement that merges do actually need to happen by that point, otherwise they'll just slip and slip
[23:16] <cjwatson> and I think developers *should* think hard about merging something late in the cycle
[23:17] <cjwatson> complicated merges can easily interfere with feature development
[23:17] <cjwatson> I certainly don't think we should erect huge piles of red tape
[23:17] <cjwatson> it's too early for that
[23:17] <persia> cjwatson: Unfortunately, we seem to have lost the forum to take that decision.
[23:17] <persia> There used to be a development-team meeting, but I haven't seen one in several months (just team meetings).
[23:17] <persia> A lot gets determined at MOTU Meetings, but there's a sense that these decisions don't affect core-dev, which leads to strife and confusion.
[23:18] <cjwatson> they became impractical
[23:18] <cjwatson> forum> TB
[23:18] <cjwatson> surely?
[23:18] <persia> I guess.  I don't like to bother the TB unless we, as developers, can't reach consensus.
[23:19] <cjwatson> I'm afraid I had forgotten that it was contentious, anyway; I was just conscious that DIF was approaching and something needed to be announced
[23:19] <persia> cjwatson: Unless you prefer, I'll email TB about the confusion (now over two cycles), and ask to use a TB meeting as a forum to determine the appropriate policy to apply for DIF for general publication.
[23:20] <cjwatson> that seems very appropriate - whatever gets decided should be documented on the corresponding wiki page
[23:20] <MacSlow> trying to upload a source-package with dput I get a "connection refused" error atm
[23:20] <persia> And it's good you announced it :)  I don't think you ought feel any guilt over the mail, despite my disagreement with the content.
[23:20] <cjwatson> since I went and looked there first (https://help.ubuntu.com/community/Debian/ImportFreeze) and it didn't really speak to this at all
[23:20] <cjwatson> MacSlow: Soyuz is down for some hours for an upgrade
[23:20] <MacSlow> it worked just a few minutes ago... any idea what to l...
[23:21] <persia> cjwatson: OK.  Do you want to take it, as one of the people coordinating release (and the author of the mail), or shall I take it as an objector?
[23:21] <MacSlow> cjwatson, ah... ok thx
[23:21] <cjwatson> (since mthaddon hasn't ...)
[23:21] <mthaddon> cjwatson, thx
[23:21] <cjwatson> persia: please go ahead, (a) it's 2300+, (b) I'm not sure I understand the objections coherently (I realise there are some but haven't really absorbed them)
[23:22] <persia> cjwatson: OK (although it's 31:21 here).
[23:22] <cjwatson> mthaddon: is that timeline still accurate given the publisher delay on drescher?
[23:22] <cjwatson> persia: I like your notation
[23:22] <mthaddon> elmo, ^ downtime still looking accurate?
[23:23] <MacSlow> until tomorrow then
[23:23]  * cjwatson is massively looking forward to ditching the backported germinate I had been maintaining for drescher
[23:23] <elmo> cjwatson/mthaddon: yes
[23:26] <seb128> slangasek: alright, I uploaded gvfs, glib and evolution updates that would be nice to have for 8.04.1
[23:28] <seb128> slangasek: the gvfs update is the patch we spoke about; glib fixes nautilus changing timestamps on copy, which some users really complained about; and evolution fixes meeting invitations not being displayed correctly - that had been fixed in hardy-updates but the patch was dropped by mistake during a version update, so it would be nice to have it again now ;-)
[23:29] <ScottK> persia: Perhaps we should collaborate since I think I object too, but in the opposite direction so we can have a coherent presentation of the concerns.
[23:29] <cjwatson> my bad
[23:30] <jdub> anyone working on the kernel freeze (possibly related to wireless) issue with hardy? i have a useful petri dish of varying hardware presenting the symptom.
[23:31] <calc> either novell has a lot more bandwidth than we do for release day or not many people are downloading opensuse 11.0
[23:31] <persia> ScottK: Sure.  I'm still researching historical announcements and referents, and haven't yet determined how to add to the TB agenda.  I believe it's likely to be a wiki page for presentations of thought that is linked from an Agenda item.  I was planning to send a mail to all known interested parties (having participated in the recent discussion or in the ML threads for any of the freeze announcements) pointing at the wiki page.  Does that work for you, or do you want something closer?
[23:32] <ScottK> That's fine as long as we have consensus around the wiki page before we ask the TB.
[23:32]  * ScottK heads out.
[23:32] <persia> ScottK: I don't think we can achieve consensus on the wiki page, but I do believe we can present a balanced set of opinions for TB review prior to the meeting.
[23:33] <persia> I think the next meeting is in about two weeks, which ought be enough time for interested parties to add to the page.
[23:33] <ScottK> persia: Consensus about what the argument is about.  Not about what the solution is.
[23:33] <persia> I'm not even sure we can all agree on that, besides the central problem that the impact of DIF on developers is ill-defined.
[23:37] <ScottK> persia: I'm after 2nd order agreement.  See http://pastebin.kubuntu-de.org/269 but I'd settle for 3rd.
[23:37]  * ScottK really heads out.
[23:40] <persia> ScottK: Sounds reasonable (and thanks for the reference).  I think we can reach about 2.5, but I don't want to wait too long, as I'd rather see this resolved so it doesn't happen again (especially as we'll already be in freeze by the next TB meeting)
[23:47] <mathiaz> james_w: I'm playing with bzr branches from mysql on LP
[23:47] <mathiaz> james_w: I'm looking into importing all the patches we have in debian/patches/
[23:47] <mathiaz> james_w: do you have a tool that helps doing this ?
[23:47] <mathiaz> james_w: or suggestions on how to do this properly ?
[23:49] <james_w> mathiaz: hi. I don't have anything specifically for that, but it shouldn't be too hard to cook up.
[23:49] <james_w> as a start you could just iterate them and use "bzr patch", provided by bzrtools, to apply them and then commit.
[23:50] <mathiaz> james_w: hm - do you think it's worth creating one branch for each patch ?
[23:50] <mathiaz> james_w: I'd like to see how we can submit patches to upstream
[23:50] <james_w> mathiaz: that's more about how you want to work I think
[23:51] <james_w> mathiaz: one thing that might interest you is "bzr-loom"
[23:51] <james_w> it's designed exactly to help manage a stack of patches on upstream.
[23:51] <mathiaz> james_w: right - well - I don't really know for now :) that's the whole point of this exercise
[23:51] <james_w> of course :-)
[23:51] <mathiaz> james_w: currently I'm branching lp:mysql-server/5.0
[23:52] <mathiaz> james_w: and then I have all the patches applied in debian/
[23:52] <mathiaz> james_w: so I could apply one patch, commit etc, and finish by adding the debian/ directory.
[23:53] <mathiaz> james_w: how would this work if I want to submit my patches to upstream ?
[23:53] <james_w> yep that would work, but it loses the separation as you move forward
[23:53] <james_w> and yes, it makes it difficult to extract them to send upstream.
[23:53] <mathiaz> james_w: right - so I thought about another workflow, which is to create one branch for each patch
[23:53] <james_w> bzr-loom allows you to keep them separate
[23:53] <mathiaz> james_w: and then merge all of them in one ubuntu branch
[23:54] <mathiaz> james_w: ok - and with bzr-loom I can submit only one branch to upstream ?
[23:54] <james_w> that's kind of what bzr-loom does, you get a branch for each patch, but they are ordered like the patches in debian/patches/
[23:54] <james_w> you can get a diff of one patch with "bzr diff -r thread:" I believe.
[23:54] <mathiaz> james_w: right - but I'm still dealing with diff then
[23:55] <james_w> ah, of course, sorry.
[23:55] <mathiaz> james_w: I'd like to be able to use the submit for merging feature in lp
[23:55] <james_w> it's easy if you want to submit the patches from bottom to top. If you want to reorder them it is more difficult.
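The simple loop james_w suggested earlier ("iterate them and use bzr patch ... then commit") can be sketched as below. It only prints the commands so it can be read without a branch to hand - drop the `echo` prefixes to run it for real, which additionally assumes bzrtools is installed (for `bzr patch`) and that the files in debian/patches/ apply cleanly in lexical order (packages using a series file may order them differently):

```shell
# For each patch in a debian/patches/-style directory, print the bzr
# commands that would apply it as its own commit.  Remove the "echo"
# prefixes to actually execute them inside a bzr branch.
apply_patches() {
    for p in "$1"/*; do
        [ -f "$p" ] || continue
        echo "bzr patch $p"
        echo "bzr commit -m \"Apply $(basename "$p")\""
    done
}
apply_patches debian/patches
```

As the discussion notes, this flattens the patches into linear history, which makes extracting a single one for upstream awkward - that is the gap bzr-loom is meant to fill, keeping one thread per patch.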
[23:56] <james_w> lifeless: hi, mathiaz is looking at mysql packaging and the handling of debian/patches/ after today's announcement.
[23:56] <lifeless> hi
[23:56] <lifeless> I'm looking at a FootLong bacon and egg with extra egg in about 2 minutes
[23:56] <lifeless> if you can wait I'd be happy to chat after that :)
[23:56] <mathiaz> lifeless: wfm
[23:57] <james_w> lifeless: I'm explaining loom, but as I haven't used it much I'm not sure how easy/feasible it is to extract a branch from the middle of the loom to propose for merging to trunk.
[23:57] <james_w> lifeless: sure.
[23:57] <lifeless> james_w: send -r thread:lower..thread:higher
[23:58] <mathiaz> lifeless: ok - but that creates a diff IIUC
[23:58] <mathiaz> lifeless: I'd like to use lp to propose for merging to mysql team
[23:58] <lifeless> mathiaz: no it creates a cherrypick merge request with all needed history
[23:59] <lifeless> mathiaz: #launchpad please, this is about lp not bzr then
[23:59] <lifeless> -> food back soon