[00:12]  * ScottK would just like to be able to say I want UTC after confessing to living in North America.
[01:24] <cjwatson> Keybuk: not true, d-i has the same knowledge - purely a matter of being able to get away with more questions
[01:24] <cjwatson> it's unfortunately not as silly a question as you might like it to be
[01:24] <cjwatson> once again sanity meets the real world
[01:25] <Keybuk> well, yes
[01:25] <Keybuk> ubiquity can assume things about the world
[01:25] <cjwatson> [reed]: FYI Ben Laurie (one of openssl upstream) has confirmed that it's OK to leave the second MD_Update call commented out (comment 38 on his blog post that IIRC you referred to earlier)
[01:25] <Keybuk> d-i is for experts
[01:25] <[reed]> cjwatson: yep, I saw. Thanks!
[01:25] <cjwatson> but it is not correct to say that d-i doesn't know whether you have a Windows partition
[01:25] <[reed]> cjwatson: will you be at UDS?
[01:25] <cjwatson> it does
[01:25] <cjwatson> [reed]: yep
[01:26] <[reed]> cjwatson: ah, ok. I'll see you there then.
[01:26] <[reed]> :)
[01:27] <Keybuk> cjwatson: noted
[01:41] <slangasek> well yes, d-i by all rights should be able to figure out whether your clock is UTC or not, for all but a narrow band of users, by also comparing it to ntp; but the implementation is lacking currently :)
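The heuristic slangasek sketches (compare the hardware clock against trusted NTP time and see whether the offset looks like a timezone) could look roughly like this; the function name and thresholds are illustrative, not d-i's actual code:

```python
from datetime import datetime, timedelta

def guess_rtc_mode(rtc_time: datetime, ntp_utc: datetime) -> str:
    """Guess whether the hardware clock stores UTC or local time.

    rtc_time: naive datetime as read from the RTC.
    ntp_utc:  trusted UTC time, e.g. obtained via NTP.
    """
    offset = (rtc_time - ntp_utc).total_seconds()
    if abs(offset) < 120:               # within plausible drift of UTC
        return "UTC"
    # Real timezone offsets are multiples of 15 minutes, within +/-14 hours.
    quarters = round(offset / 900)
    if quarters != 0 and abs(quarters) <= 56 and abs(offset - quarters * 900) < 120:
        return "localtime"
    return "unknown"
```

The "narrow band of users" slangasek excepts are those whose local time currently equals UTC: for them the two cases are indistinguishable by this comparison.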
[01:45] <Keybuk> cjwatson: late night debugging?
[01:45] <cjwatson> sleep failure
[01:45] <Keybuk> I've just learned that Upstart crashes on the openvz variant of our kernel
[01:45] <Keybuk> due to an idiotic patch in the openvz set that un-POSIXes mmap() behaviour
[01:45] <Keybuk> that was fun to debug
[01:45] <Keybuk> oh yes
[01:45] <StevenK> Keybuk: But you aren't bitter in the slightest? :-)
[01:46] <Keybuk> I'm very bitter
[01:46] <Keybuk> but that's unrelated ;)
[01:46] <StevenK> Yes, I was being ironic :-)
[01:52] <Keybuk> cjwatson: I often find that after a few failures to sleep, I get to sleep fine
[01:52] <Keybuk> and then fail to resume
[01:52] <Keybuk> :)
[01:52] <cjwatson> heh
[01:53] <cjwatson> * Keybuk is now known as Linux
[03:04] <evand> haha
[06:18] <dholbach> good morning
[06:19] <ion_> Hi
[06:19] <dholbach> hi ion_
[06:19] <ion_> What's up?
[06:20] <dholbach> I'm just waking up :-)
[06:22] <ion_> I managed to "fix" my sleep cycle by skipping one night's sleep with the third try. :-)
[06:22] <dholbach> wow
[06:25] <fabbione> morning guys
[06:25]  * fabbione gets ready to release cluster 2.99.01
[06:55] <pitti> Good morning
[06:55]  * bryce waves
[06:55] <ion_> Hi
[06:58] <pitti> james_w: any luck with PK?
[06:58] <james_w> didn't get a chance to play with it yesterday, sorry.
[07:00] <pitti> james_w: no need to be, I was just curious
[07:09] <ion_> PK?
[07:13] <james_w> ion_: PolicyKit
[07:14] <james_w> or PackageKit I suppose now, but the conversation works for either :-)
[07:14] <slangasek> Phenyl Kit-onuria
[07:17] <\sh> whoever is to blame for the ssl crap...this guy really gives me a hell of a start to the day
[07:17] <desrt_> ya.  that's a fun one to wake up to.... ugh.
[07:17] <pitti> tkamppeter_: "- Forth generation of Foomatic
[07:18] <pitti> tkamppeter_: nice typo!
[07:19] <\sh> desrt, yeah...means upgrading a lot of hardy installs for me now...which are all frozen :( anyways...tests were done already yesterday night...so it's only work
[07:19] <desrt> this is some scary stuff
[07:19]  * desrt is upgrading servers like crazy
[07:19] <desrt> all of my keys seem to be in the 'compromised' list :(
[07:20] <pitti> desrt: hah, I had only two, but I updated them all anyway (ssl and ssh)
[07:20]  * desrt has ssh keys scattered all over the freakin' place :/
[07:20] <Mithrandir> this is when I'm happy my primary SSH key these days is an RSA key living on a smart card.
[07:20] <pitti> the more intense fun was to explain all users on my server why I ripped their SSH keys and how to rebuild them
[07:21] <desrt> hmm
[07:21] <desrt> dapper hasn't gotten updated packages yet, it seems
[07:21] <RAOF> Dapper doesn't need them, right?
[07:21] <RAOF> It's pre-breakage.
[07:22] <pitti> desrt: not affected
[07:22] <desrt> it needs it for the blacklisting of keys generated by borked users, though
[07:22] <\sh> oh hooray...
[07:22] <\sh> soren, ifenslave-2.6 update won't break my systems using it, right? ,)
[07:22] <desrt> or atleast needs ssh-vulnkey so that i can know which keys are affected....
[07:22] <pitti> kees, jdstrand: ^ is a dapper version of the -blacklist packages planned?
[07:23] <cjwatson> yes, it is
[07:23] <cjwatson> I just haven't quite got around to doing a dapper backport of the openssh stuff
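The blacklist mechanism under discussion boils down to a set-membership test on key fingerprints. A toy sketch of the idea (the helper names and the use of MD5 fingerprints are illustrative; this is not openssh-blacklist's real file format):

```python
import hashlib

def fingerprint(pubkey_blob: bytes) -> str:
    # OpenSSH of that era displayed MD5 fingerprints of the raw key blob.
    return hashlib.md5(pubkey_blob).hexdigest()

def is_blacklisted(pubkey_blob: bytes, bad_fingerprints: set) -> bool:
    """What ssh-vulnkey does conceptually: look the key's fingerprint up in a
    precomputed list of every key the broken PRNG could have produced."""
    return fingerprint(pubkey_blob) in bad_fingerprints
```

The precomputation is feasible at all only because the vulnerable keyspace is tiny; for properly generated keys no such list could exist.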
[07:25] <YokoZar> pitti: could you please push this to -updates now that it's been tested?  https://bugs.launchpad.net/bugs/224042
[07:25] <desrt> so if something is in the 'compromised' list does that mean that somewhere someone has a copy of my private key?
[07:26] <desrt> like, that key was actually generated....
[07:26] <Mithrandir> desrt: it was generated from a list of 64k possible keys.
[07:26] <desrt> that's really really bad, eh?
[07:26] <Mithrandir> (* key length, etc, but still)
[07:26] <slangasek> which means that, yes, multiple someones have a copy of your private key
[07:26] <pitti> desrt: no, it's really really, REALLY bad
[07:26] <winjer> yes
[07:27] <winjer> it's terrifying
[07:27] <desrt> i hate cryptography
[07:27] <Mithrandir> crypto is good.
[07:27] <winjer> rm /home/*/.ssh/authorized_keys
[07:27] <jamesh> desrt: it means that your key can log you into more systems than you know
[07:27] <desrt> you read all of these textbooks about how secure it is
[07:27] <desrt> and then something like this happens
[07:27] <jamesh> which makes it more useful
[07:27] <pitti> desrt: mathematically it is
[07:27] <Mithrandir> jamesh: I guess that's one way to look at it..
[07:27] <desrt> pitti; nod.  someone always screws up the implementation, though :p
[07:27] <pitti> desrt: implementation adds about 20 new dimensions to the model which can't be covered by maths :/
[07:28] <Mithrandir> desrt: most of the textbooks I've read about it are about how hard it is to get it right and how the devil is in the (implementation) details.
[07:28] <pitti> it also adds a new meaning to "open"ssh, "open"vpn, "open"ssl, etc. -- now even more open!
[07:28]  * desrt can't even imagine how something like this goes unnoticed
[07:28] <jamesh> alternatively, you can use a special authorized_keys file with 64k entries that removes the need to exchange keys to get secure logins
[07:28] <desrt> pitti; srsly.
[07:28] <pitti> desrt: well, 64K is still more keys than you'd just spot by comparing fingerprints occasionally
[07:28] <desrt> jamesh; i guess that won't work after the upgrade
[07:28] <\sh> desrt, security is overrated ,->
[07:29] <desrt> jamesh; just another way ubuntu is working to restrict my options
[07:29] <Mithrandir> desrt: people don't do code reviews.  This is quite bad, especially on security-sensitive software.
[07:29] <jamesh> I would have thought the openssl developers would want to keep their code in a state where you can run memory debuggers on it though.
[07:29] <cjwatson> desrt: it did get noticed ... eventually
[07:29] <liw> desrt, debian and ubuntu both have about a _billion_ lines of source code, and potentially every one of those lines is a security problem... it's easy to miss things (but the beauty of free software is that it's also _possible_ to find things before the bad guys)
[07:30] <jamesh> I mean, memory errors in that code aren't particularly good for security either ...
[07:31]  * desrt finds an odd id_rsa.keystore file that didn't used to get created
[07:32] <cjwatson> seahorse, I suspect ...
[07:32] <desrt> tricky.
[07:32]  * desrt nukes everyone's authorized_keys files
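The numbers thrown around above follow from the bug itself: with the entropy-mixing call commented out, the OpenSSL PRNG was seeded by essentially nothing but the process ID, so every generated key is a pure function of the PID. A toy model of that (a hash stands in for actual key generation):

```python
import hashlib

def weak_key(pid: int, key_bits: int = 2048) -> bytes:
    # Toy model of the Debian OpenSSL bug: the "random" stream, and hence
    # the key, depends only on the process ID and the key type/size.
    return hashlib.sha256(f"{key_bits}:{pid}".encode()).digest()

# Linux PIDs default to 1..32768, so for one key type there are at most
# 32768 distinct keys; an attacker simply generates them all and tries each.
keyspace = {weak_key(pid) for pid in range(1, 32769)}
assert len(keyspace) == 32768
```

The "64k" figure in the channel lumps together variants (key length, architecture word size, as Mithrandir hints); either way the keyspace is small enough to enumerate completely, which is why blacklists of every possible bad key could be shipped.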
[07:39] <soren> \sh: You would have to have taken very special care to make it so.
[07:39] <soren> \sh: In other words: I can't imagine it would break anything, no.
[07:40] <\sh> soren, thx...I just wanted to be sure, that it doesn't try to "restart" any of the bondings...which could lead to unforeseen experiences ;)
[07:42] <soren> \sh: No, it doesn't do anything like that in the maintainer scripts.
[07:44] <dholbach> bryce: does bug 226156 look OK to you?
[07:45] <dholbach> I mean the patch in there - it'd be nice to get it in, so intrepid X would work :)
[07:47] <bryce> dholbach: looking
[07:55] <bryce> dholbach: the patch looks fine, but I'm curious how it fixes the issue
[07:56] <dholbach> bryce: unfortunately sistpoty is not around to ask the question
[07:57] <dholbach> doko, slangasek: ^ do you have an idea about why the patch in bug 226156 would fix the issue?
[07:58] <slangasek> dholbach: looking
[07:59] <bryce> dholbach: so it looks like it's being built with LDFLAGS="-Wl,-Bsymbolic-functions", and setting to LDFLAGS="" makes it work
[07:59] <slangasek> dholbach: ah, cripes
[07:59] <slangasek> right, this is being done to override -Wl,-Bsymbolic-functions
[07:59] <dholbach> I'm happy with whatever fixes X in intrepid, so I don't have to keep the old libxfont1 :)
[07:59] <bryce> I'm not familiar enough with linker options - what do those two do?
[07:59] <ogra> asac, i think there are only one or two FF extensions, and the idea of willow-ng has actually been to make them obsolete by using a bayesian spam filter at system level for web content filtering ... giving willow the opportunity to force FF into a proxy setting an unprivileged user can't change is more interesting here than extensions imho (though an extension would probably be a fine additional thing)
[08:00] <slangasek> bryce: -Wl,-Bsymbolic-functions is a single option; its purpose is to optimize the start-time symbol resolution at the expense of some "correctness", by causing any references to symbols available within the lib itself to be bound at build time
[08:01] <bryce> slangasek: ah; is it a new addition?  is there risk in turning it off?
[08:01] <slangasek> bryce: there's no risk at all in turning it off, AFAIK it's entirely a performance thing
[08:01] <asac> ogra: well. i think that users would love a complete solution that is easy to setup and use
[08:02] <dholbach> slangasek: could it be a binutils bug?
[08:02] <asac> ogra: not only for schools, but also for parents i guess. not sure where i hit my ethical barrier though ;)
[08:02] <slangasek> dholbach: ... unlikely
[08:02] <slangasek> this option only affects the linker stage
[08:02] <ogra> asac, right, willowng is designed to run as a localhost service as well as a network wide content filter
[08:03] <ogra> depending on your setup
[08:03] <maswan> hm, no new openssh for dapper with the blacklisting etc?
[08:03] <asac> ogra: how is willowng managed?
[08:03] <asac> ogra: do we need a frontend?
[08:03] <ogra> its a bayesian filter as well as a white/blacklist tool for domains
[08:03] <ogra> the backend listens on a port like any other proxy
[08:03] <cjwatson> maswan: on the list, just wasn't as urgent
[08:04] <maswan> cjwatson: thanks, and good.
[08:04] <asac> ogra: btw, the general config mechanism should have improved in ffox 3/xul 1.9 ... you should be able to specify your own file in /etc/ now
[08:04] <ogra> the frontend communicates through dbus to the backend, there are gtk and web frontends
[08:04] <asac> ogra: if its bayesian ... how well does it work? how many cycles does it need?
[08:05] <asac> how much training do we need? can we provide default data for profiles?
[08:05] <\sh> hmmm..does anyone know if a wintv (haupauge) DVB-T USB Stick (Nova T Lite) works on linux? ;)
[08:06] <ogra> asac, i have to look deeper into the code, but we'll have the author at UDS ;)
[08:06] <ogra> asac, Amaranth wrote it as SoC
[08:06] <bryce> dholbach: ok the patch sounds fine to me - do you want to go ahead and upload it?
[08:06] <dholbach> bryce: ok, can do
[08:06] <Amaranth> it's a neat idea, just needs someone to actually do it :)
[08:07] <ogra> Amaranth, thats my plan for intrepid :)
[08:07] <Amaranth> I setup the basic design for how to do it and a bit of a proof-of-concept for the code and no one ever finished it :/
[08:07] <asac> ogra: wrote what? willowng?
[08:07] <asac> ok
[08:07] <ogra> asac, right
[08:07] <Amaranth> python makes the proxy part really easy
[08:14]  * \sh 's doomed again...new version of this dvb-t stick which is not supported by kernel :(
[08:14] <RAOF> Hah!  DVB + USB = pain!
[08:15] <\sh> RAOF, usb id :7050 works ;) but I have, you guessed it, :7070 ;)
[08:15] <RAOF> See?  Pain. :P
[08:15] <\sh> RAOF, I'll fix my pain...want to see the EM during work on my linux...so it has to work somehow
[08:15] <RAOF> Hax0r the USB id into the driver!
[08:16] <\sh> RAOF, yes..this evening when I have time to play with the joys of kernel hacking ;->
[08:16] <RAOF> Or, for more reasonable suggestions, linuxtv.org.
[08:16] <RAOF> If you're lucky, you just need to add that USB id to the list of things the driver will drive.
[08:17] <RAOF> But the linuxtv drivers are often ahead of the mainline drivers.
[08:18] <RAOF> Actually, when I say 'often', I mean 'always', since the mainline drivers are merged from a linuxtv hg tree every now and then
[08:18] <\sh> RAOF, yeah found it...needs some tweaking ;)
[08:19] <RAOF> The driver, or the linuxtv mercurial tree?
[08:23] <dholbach> bryce: uploaded
[08:45] <YokoZar> Is lzma the default compression format in intrepid yet?
[08:46] <slangasek> no, nor will it be
[08:46] <liw> slangasek, why? performance reasons?
[08:47] <Amaranth> it only makes sense for a few packages
[08:47] <slangasek> yes - lzma's space savings only outweigh the performance issues for the largest of packages
[08:47] <Amaranth> otherwise the performance loss is not worth it
[08:47] <Amaranth> performance and memory
[08:48] <YokoZar> Is lzma really more efficient percentage-wise the larger packages get?
[08:48] <YokoZar> I would think it depended on the kind of data in the package
[08:50] <cjwatson> YokoZar: it does depend on the kind of data, but tends to be correlated with size
[08:52] <YokoZar> I guess a related question is how lzma varies in terms of decompression speed too.  If the poorly compressed packages still decompress quickly, then it's not so bad
[08:59] <cjwatson> YokoZar: its main problem is substantial memory use on decompression
[08:59] <cjwatson> YokoZar: so very slow on smaller-RAM machines
[09:00] <pwnguin> interesting
[09:01] <pwnguin> cjwatson: what about adopting an rsync-able compression?
[09:01]  * cjwatson <- not a dpkg developer
[09:01] <ion_> That probably would be helpful with zsync as well.
[09:01] <cjwatson> well, not really
[09:03] <pwnguin> from what i've read, you can modify gzip for something like 3 percent
[09:03] <pwnguin> 3 percent bigger archives and it's suddenly rsyncable
[09:05] <pwnguin> but yea, lzma uses a lot of memory / history
[09:06] <pwnguin> not something you'd be happy about on say an nslu2
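The size-versus-overhead trade-off discussed above is easy to see with Python's stdlib bindings (illustrative data, not real .debs): lzma wins decisively on big redundant input, but its fixed overhead makes it a loss on small input, and its dictionary makes decompression memory-hungry.

```python
import lzma
import zlib

# Large, redundant input: lzma's bigger window and modelling win clearly.
big = b"the quick brown fox jumps over the lazy dog\n" * 20000
gz_big = zlib.compress(big, 9)
xz_big = lzma.compress(big, preset=9)
assert len(xz_big) < len(gz_big)

# Tiny input: lzma's container/model overhead dominates and it loses.
small = b"hello world"
assert len(lzma.compress(small)) > len(zlib.compress(small, 9))

# Decompression memory scales with the dictionary size chosen at compression
# time (tens of MiB at high presets), which is the pain point for small-RAM
# machines like the NSLU2 mentioned above.
```

This is why a blanket switch to lzma made no sense: only the largest packages gain enough on size to pay for the slower, memory-hungrier decompression.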
[09:17] <pitti> doko: gcc-defaults hardy-proposed upload does not have a bug# in the changelog; can you please add it and upload again?
[09:17] <doko> mehh, there is no bug report, I can't close the soyuz bug ...
[09:17] <dholbach> geser: could it be that libgnome2-canvas-perl libgnome2-perl libgnome2-vfs-perl libnet-dbus-perl still need rebuilds?
[09:18] <pitti> doko: "I can't close the soyuz bug"?
[09:20] <slangasek> you can open a gcc-defaults/Ubuntu task for the soyuz bug
[09:20] <doko> pitti: https://bugs.edge.launchpad.net/soyuz/+bug/227184
[09:21] <cprov> doko: err, why would you want to close the soyuz task of the bug ? the fix is not yet merged in RF ;)
[09:21] <pitti> doko: please forgive my ignorance, but what does it have to do with the gcc-defaults dependency fix?
[09:23] <slangasek> pitti: it's only because of the soyuz bug that the current gcc-defaults package was accepted at all
[09:23] <pitti> ah
[09:24] <doko> pitti: please reject, will upload with a bug number
[09:24] <pitti> doko: done already, thank you!
[09:25] <cprov> pitti: right, in 1.2.5 builds generating binaries already known (published) will become failed-to-upload and the uploader will receive a build-failure-notification.
[09:34] <geser> dholbach: Hi, might be. I didn't check them all yet. I looked yesterday at the *-perl FTBFS and started looking what needs to be done to resolve them.
[09:35] <dholbach> geser: good work! :)
[09:43] <soren> I thought bzip2'ed orig.tar's were kosher now?
[10:21] <sdh> openvpn vulnkey stuff in hardy but not gutsy yet... right?
[10:22] <sdh> erk
[10:23] <sdh> if your openvpn key is vulnerable, it doesn't survive dist-upgrade and you can't get it up again
[10:23] <sdh> which could be bad if you rely on your openvpn link
[10:31] <siretart> http://www.ubuntu.com/usn/usn-612-4 seems broken
[10:34] <soren> siretart: How so?
[10:34] <siretart> ah. fixed now
[11:02] <tkamppeter> pitti, hi
[11:06] <emgent> http://thc.emanuele-gentili.com/~emgent/Pitti.jpg
[11:06] <emgent> pitti palace in Florence (italy) :)
[11:06] <james_w> when openssl restarts does it open a temporary connection on another port? https://lists.ubuntu.com/archives/ubuntu-uk/2008-May/012957.html
[11:06] <pitti> emgent: heh; believe it or not, I was there already :)
[11:07] <ion_> :-)
[11:07] <pitti> emgent: hm, actually that was the real Pallazzo Pitti
[11:07] <emgent> pitti: ehehe yes
[11:07] <sdh> ugh..... lost openvpn boxes during upgrade :(
[11:07] <emgent> pitti: http://it.wikipedia.org/wiki/Palazzo_Pitti
[11:07] <emgent> :)
[11:21] <tkamppeter> pitti, I have a problem: I have uploaded 14 Brother driver packages. 13 got accepted and 1 rejected. The rejected one is brother-cups-wrapper-common_1.0.0-10-0ubuntu4 and the error message in the notification mail is "Not permitted to upload to the RELEASE pocket in a series in the 'CURRENT' state.". What is broken in that package?
[11:22] <soren> You tried to upload it to hardy instead of intrepid.
[11:22] <soren> tkamppeter: ^
[11:23] <pitti> tkamppeter: what soren said
[11:25] <tkamppeter> pitti, soren, thanks. Then Saivann forgot to switch the changelog entry of this one package to intrepid.
[11:30] <tkamppeter> pitti, I have corrected the debian/changelog now and re-uploaded the package. So it should arrive soon.
[11:32] <tkamppeter> pitti, soren, thanks. The package got accepted.
[11:41] <soren> tkamppeter: You're welcome.
[12:00] <doko> munckfish: you should be able to use the system gcc to build the kernel, no need to use ppu-gcc
[12:01] <munckfish> doko: hi
[12:02] <munckfish> yes I've been doing that on my PS3
[12:02] <munckfish> but I think a couple of other folks have been trying to cross compile on x86 (or whatever)
[12:02] <munckfish> do you mean this is even possible with standard GCC on x86?
[12:06] <munckfish> doko: I have to go out shortly will you be around later? There's a couple of things I need to ask you.
[12:06] <munckfish> either that or I can mail you
[12:06] <doko> sure
[12:07] <doko> I'm online
[12:07] <munckfish> ok speak later then
[12:30] <geser> dholbach: 3 new debdiffs on bug #230016 to sponsor
[12:36] <dholbach> geser: uploaded
[12:40] <geser> dholbach: thanks
[12:44] <elpargo> hi, I was wondering if anyone cared that introducing FF3 broke people's firebug, by making official the exact FF3 beta version that broke firebug.
[12:48] <cjwatson> elpargo: seems more important to fix it ...
[12:49] <cjwatson> looks like a new version of Firebug fixes it? e.g. http://blog.slaven.net.au/archives/2008/03/11/new-version-of-firebug-works-in-firefox-3-beta/
[12:49] <elpargo> cjwatson, sry but I don't agree. that's why it's still in beta so the plugins will catch up.
[12:50] <cjwatson> elpargo: you don't agree that we should fix it?
[12:50] <cjwatson> (e.g. by incorporating newer versions of the plugins that support Firefox 3)
[12:51] <elpargo> cjwatson, yes the irony is that firebug 1.1b12 works in FF3b4 but not in FF3b5+ and b5 is what ubuntu packaged.
[12:51] <cjwatson> elpargo: we knew that we were going to have to be shipping a beta; as a result we will be putting effort into filing off the rough edges there for 8.04.1
[12:51] <geser> elpargo: I've here firebug 1.2~b21+svn573-0ubuntu1 and it works with FF3b5
[12:52] <cjwatson> the version geser quotes is what's in hardy
[12:52] <ion_> How about when Firefox 3 final is released and there is a number of extensions that are still not ported to it. Should a distribution delay switching the default browser even then?
[12:53] <elpargo> cjwatson, actually I think it's a bureaucratic thing; if it didn't have ff3 you wouldn't be able to put it in for at least 6 months.
[12:53] <cjwatson> elpargo: you are not being particularly helpful
[12:53] <elpargo> geser, according to firebug people 1.2 is alpha, that's even worst.
[12:53] <elpargo> worse*
[12:54] <cjwatson> elpargo: switching from firefox 2 to 3 in a stable release was clearly completely unacceptable, not because of "bureaucracy" but because it would break people's expectations of a stable system
[12:54] <elpargo> cjwatson, well I just don't like to get broken software.
[12:54] <cjwatson> so we had to do so during the development cycle
[12:55] <elpargo> cjwatson, yea that's why I think it was rushed in. causing issues like this one.
[12:55] <elpargo> I also see a weird thing with flash + ff3 which I can't really reproduce.
[12:56] <cjwatson> elpargo: the alternative was for security support for Firefox 2 to lapse very probably in the middle of the 8.04 stable maintenance period, forcing us to upgrade to Firefox 3 anyway
[12:56] <cjwatson> elpargo: we did actually think about this
[12:56] <cjwatson> Firefox security support has been a problem in this regard in the past, hence our unusual handling of it
[12:57] <frandavid100> hiya
[12:58] <elpargo> I see. I guess it's my fault for upgrading.
[12:58] <cjwatson> I didn't say that
[12:58] <frandavid100> Can you guys take a look at that ML message? https://lists.ubuntu.com/archives/ubuntu-devel-discuss/2008-May/004207.html
[12:58] <cjwatson> but please be patient while we are caught between a rock and a hard place and do our best
[12:59] <elpargo> I did notice there is still a package for FF2, I guess I'm going to install that. Just wondering if it will crash with FF3 or should I deinstall this one first?
[12:59] <cjwatson> they ought to be coinstallable
[12:59] <elpargo> cjwatson, I know, I said it. I normally don't upgrade until at least 2 months after the thing is out, to get the .1 or .2
[12:59] <frandavid100> The reply was that there is no need for a separate /home because ubuntu installs now recognise and preserve the existing /home. but how does it work?
[12:59] <cjwatson> frandavid100: I'll reply to the mail you sent me
[13:00] <frandavid100> hi cjwatson
[13:00] <cjwatson> frandavid100: it works by removing all the bits of the filesystem that will actually cause problems for installation
[13:00] <cjwatson> file by file
[13:00] <frandavid100> I opened a thread at ubuntuforums and everybody seems pretty confused about that
[13:00] <cjwatson> so /usr gets removed, for example, but not /home
[13:00] <frandavid100> so it actually deletes everything but /home?
[13:00] <frandavid100> oh that's cool
[13:00] <cjwatson> that's not quite what I said :)
[13:01] <frandavid100> hm
[13:01] <cjwatson> it won't delete things like /data that it doesn't recognise either
[13:01] <cjwatson> it deletes only the things it knows about, and which are likely to conflict
[13:01] <cjwatson> the details are at https://wiki.ubuntu.com/UbiquityPreserveHome
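The removal logic cjwatson describes (delete only the top-level directories the installer knows it will recreate; leave /home and anything unrecognised alone) can be sketched like this. The directory set is an illustrative subset; the authoritative list lives in the UbiquityPreserveHome spec linked above:

```python
import shutil
from pathlib import Path

# Top-level directories a fresh install recreates, hence safe to clear.
# Illustrative subset; ubiquity keeps the authoritative list.
SYSTEM_DIRS = {"bin", "boot", "etc", "lib", "opt", "sbin", "srv", "usr", "var"}

def clear_for_reinstall(root: Path) -> list:
    """Remove only known system directories; /home, /data and other
    unrecognised trees survive untouched."""
    removed = []
    for entry in root.iterdir():
        if entry.is_dir() and entry.name in SYSTEM_DIRS:
            shutil.rmtree(entry)        # would conflict with the fresh install
            removed.append(entry.name)
    return sorted(removed)
```

The whitelist approach is what makes user-created directories like /data safe by default: anything not explicitly known to conflict is simply never touched.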
[13:01] <frandavid100> I'll take a look
[13:02] <elpargo> frandavid100, what you are suggesting is to create a home partition with 1 click?
[13:02] <cjwatson> elpargo: please read the whole thread first ...
[13:02] <frandavid100> elpargo: basically yeah
[13:02] <frandavid100> but there seems to be no need for it
[13:03] <frandavid100> I really am not very computer literate, so I couldn't say which solution is better
[13:04] <elpargo> I totally agree with you that's the recommended way to install linux. although making it automagic is very complex.
[13:05] <frandavid100> you mean, technically? or complex to the user?
[13:06] <elpargo> technically.
[13:07] <cjwatson> elpargo: technically, that is not the recommended way (although use of the passive voice makes that confusing; who's doing the recommending? In this case I mean recommended by me :-) )
[13:08] <elpargo> because it depends a lot on the use case. for example you have a university computer which has hundreds of users, then you need a huge /home. On the other hand you could have a tester computer that has tons of packages installed. in both cases the optimal solution is the opposite, so which is best?
[13:08] <cjwatson> elpargo: the reason is that a separate /home is a decision you have to commit to right at the start, precisely when you have the least knowledge of what sizes to choose, and changing that decision afterwards is extremely painful with current tools
[13:08] <cjwatson> I'm not interested in making the automatic defaults handle the case that's going to have a dedicated and knowledgeable system administrator involved; they can look after themselves
[13:09] <frandavid100> you both seem to agree on that
[13:09] <cjwatson> the point is that the whole system is much simpler when everything is on one file system
[13:09] <cjwatson> that involves no difficult up-front decisions and no later painful changes
[13:09] <frandavid100> cjwatson: in what cases can additional folders, not listed in the specification you linked, be created?
[13:09] <cjwatson> and the only problem with it is that preserving /home on reinstall is hard: so we fixed that
[13:10] <cjwatson> frandavid100: manually; it's pretty common for people to e.g. create /data or /space or whatever random name and dump stuff there, e.g. for shared files among lots of users
[13:10] <cjwatson> and really there'd be no justification for removing that since it won't conflict with anything
[13:10] <frandavid100> I think I get it
[13:10] <elpargo> cjwatson, that is the case for a desktop computer yes.
[13:10] <cjwatson> elpargo: right, which is what the defaults are aimed at
[13:11] <frandavid100> so there would be no junk automatically created by the system, without me knowing it's there right?
[13:11] <elpargo> also you are suggesting that I only have ubuntu and you are also suggesting that I'll stick with ubuntu forever.
[13:11] <cjwatson> just about everyone else will be less scared of partitioning
[13:11] <frandavid100> not unless I created it myself
[13:11] <cjwatson> elpargo: no, I'm really not
[13:11] <cjwatson> elpargo: if you are multi-booting different Linux distributions, you can look after yourself with partitioning, and are unlikely to want to use the defaults anyway
[13:12] <frandavid100> that's right... my proposal was intended for that use case anyway
[13:12] <elpargo> cjwatson, then how do you propose to run multiple linuxes with home in the same partition as linux?
[13:12] <cjwatson> elpargo: I don't. If you want to run multiple Linux distributions, partition manually.
[13:12] <wgrant> Are we waiting for 8.04.1 to fix -server ISOs for the SSH key issue?
[13:12] <cjwatson> elpargo: I don't think it's necessary or appropriate for the defaults to handle that rare case.
[13:12] <elpargo> cjwatson, well if you see the screenshots frandavid100 posted it's yet another option.
[13:13] <cjwatson> elpargo: which will confuse regular users who don't need it into thinking it's a good idea for them.
[13:13] <cjwatson> the only people who need this are the type of people who are quite capable of going straight into the manual partitioner and setting things up the way they like it.
[13:13] <cjwatson> and that's fine, that's what the manual partitioner's for
[13:13] <elpargo> OR which will make regular users think hey that's not such a bad idea, and hey look how easy can I set it up.
[13:13]  * ogra thinks the migration assistant was already the right idea ... just needs more love and attention
[13:14] <frandavid100> cjwatson: one last question: I guess it works either by telling the installer to use the whole drive, AND by using a specific partition?
[13:14] <cjwatson> elpargo: but I actively don't want that, because then in a year's time they'll run into the hard fact that resizing partitions is a pain.
[13:14] <elpargo> cjwatson, sry but I don't agree. I know many people that understand the concept but are scared of gparted.
[13:14] <Hobbsee> cjwatson: any luck on getting IS to drop postmaster@, etc yet?
[13:14] <cjwatson> elpargo: the installer doesn't use gparted
[13:14] <cjwatson> Hobbsee: not yet
[13:14] <cjwatson> frandavid100: yes
[13:15] <frandavid100> cool
[13:15] <frandavid100> you got me convinced then
[13:15] <frandavid100> thanks :)
[13:15] <elpargo> cjwatson, don't take it literally you know what I wanted to say.
[13:15] <frandavid100> gotta go for lunch, bye guys
[13:15] <frandavid100> have a nice day
[13:15] <Hobbsee> cjwatson: 720 mails in the moderation queue.  ouch.
[13:15] <cjwatson> elpargo: no, I didn't. I consider the installer's current partitioner an improvement over gparted (at least for the installation case) in a number of ways and so I felt the distinction was relevant.
[13:16] <cjwatson> frandavid100: oh, wait, one moment
[13:16] <cjwatson> frandavid100: no, if you tell it to use the whole drive, that's basically equivalent to deleting all the partitions and starting from scratch
[13:16] <cjwatson> (this could probably do with being better-documented)
[13:16] <cjwatson> frandavid100: this is if you say "mount this file system as / please"
[13:16] <cjwatson> ... this existing file system
[13:17]  * Hobbsee wonders if it's worth clearing it out
[13:17] <ion_> "Adding an option to the autopartitioner to enable all this functionality. Needs to meet the requirements: swap, ext3." How about people using swapfiles? Or no swap at all, e.g. on a box with solid-state "disk" and plenty of RAM.
[13:17] <elpargo> cjwatson, yea that's true it's better at installing. although I prefer cfdisk :)
[13:17] <cjwatson> swap files> open bug about it, I'd like it to happen but it involves some work
[13:17] <cjwatson> (not as easy as it might sound)
[13:17] <cjwatson> no swap> manual partitioning
[13:18] <ion_> Does picking manual partitioning still allow you install over an existing partition?
[13:18] <ion_> over the contents of, that is.
[13:18] <cjwatson> yes, in fact that's really the only way you can do it
[13:18] <ion_> Alright
[13:19] <ion_> For someone with enough Linux-fu to use swapfiles in the first place, doing a reinstallation preserving home and then making it use a swapfile isn't a problem.
[13:19] <elpargo> since we are in the subject it will be nice to convert the ntfs partition into reisers for some vendor lockin.
[13:20] <frandavid100> cjwatson: good thing I read that last message before leaving haha
[13:20] <cjwatson> frandavid100: I sent mail just in case
[13:20] <frandavid100> thanks
[13:20] <cjwatson> elpargo: *blink*
[13:21] <frandavid100> but that makes it seriously less useful doesn't it
[13:21] <cjwatson> I hope that's a joke ...
[13:21] <cjwatson> frandavid100: well, if you say use entire disk, that kind of suggests wiping everything out
[13:21] <frandavid100> gotta go
[13:21] <cjwatson> frandavid100: but it's true that it would be useful for autopartitioning to be able to use existing partitions rather than deleting and reconstructing them
[13:21] <frandavid100> catch ou later
[13:21] <elpargo> lol yes cjwatson it was a joke
[13:21] <lool> Hmm each time I remove/add my swap, Priority decreases in /proc/swaps... curious
[13:21] <cjwatson> improved documentation on the autopartitioning menu would help here
[13:22] <elpargo> yea that's scary.
[13:22] <lool> I guess it's meant to have different swaps have decreasing priorities when all are inserted at once, but the counter is reused
[13:29] <elpargo> and ff3 hangs....
[13:30] <elpargo> damn it even needed a -9
[13:30] <dholbach> elpargo: best to file a bug with the information listed on https://wiki.ubuntu.com/MozillaTeam/Bugs
[13:32] <elpargo> well the problem is that you specifically say not to post a problem with my profile.
[13:33] <elpargo> dholbach, ^
[13:34] <dholbach> elpargo: it helps to figure out a way of reproducing the issue
[13:35] <elpargo> cjwatson, well, even with ff2 I can't get firebug running. it's giving out an exception now.
[13:36] <elpargo> dholbach, thing is that firefox issues are 99% the plugins. it's a pain really and the only way I really know to fix it is not to run the beta version.
[13:37] <dholbach> elpargo: in that case file a bug on the plugin
[13:37] <cjwatson> we *have* to get the plugins working; going back to firefox-2 is *not* going to be an option for the lifetime of 8.04
[13:38] <elpargo> dholbach, yes but that means finding out which one is failing.
[13:38] <dholbach> elpargo: I fear that's the only way to come to clear idea of what the problem is
[13:39] <elpargo> cjwatson, yes I agree with that but that is what testing teams should do.
[13:39] <elpargo> dholbach, the thing is that the sys upgrade forced me to go to ff3 and now I can't run firebug.
[13:40] <dholbach> elpargo: please provide the information listed on the wiki page and the mozilla team will look into the issue
[13:40] <elpargo> dholbach, since the plugin is part of my profile the bug will be discarded.
[13:40] <ogra> elpargo, how do you test something that doesnt exist ?
[13:42] <ogra> if ff3 is final and the plugins get updated in 8.04.1 all the plugin packages in ubuntu will surely be tested to work with the ff version in 8.04.1
[13:42] <elpargo> ogra, what are you talking about? the bug report?
[13:42] <ogra> "<elpargo> cjwatson, yes I agree with that but that is what testing teams should do."
[13:42] <elpargo> ogra, exactly my point ff3 is not final, so it shouldn't go in.
[13:43] <cjwatson> I have answered that
[13:43] <ogra> cjwatson, i was just pointing to what i was referring to
[13:43] <cjwatson> it was an unpleasant trade-off between one problem you're seeing right now and one problem that you would have seen later if we hadn't taken this route
[13:43] <ogra> oh, you didnt mean me :)
[13:43] <cjwatson> I didn't, no
[13:44] <elpargo> ogra, there is a testing team, that's why people install ubuntu+1
[13:44] <cjwatson> it's not fair to say that the decision is wrong due to these problems you're seeing now (which are very real) while ignoring the problems that you would have encountered otherwise
[13:44] <ogra> elpargo, but no software to test beyond what exists atm
[13:44] <elpargo> cjwatson, I disagree; by the time FF3 is out of beta, most major plugins will be updated.
[13:44] <cjwatson> elpargo: normally that is true, but we were in an unfortunate bind and we decided that this is the best solution. I must ask you to stop endlessly rehashing it now.
[13:45] <cjwatson> I have answered your questions about why we did it and been as patient as I could
[13:45] <cjwatson> because I recognise that you are encountering real problems
[13:45] <cjwatson> but, despite the problems, I still feel the alternative would have been worse
[13:45] <elpargo> cjwatson, you're the ones going at it again. the thing i said (which seems to be forgotten) was that even with the ff2 package the firebug plugin isn't working.
[13:46] <cjwatson> we have time to update plugins in 8.04.1
[13:46] <cjwatson> elpargo: "<elpargo> ogra, exactly my point ff3 is not final, so it shouldn't go in."
[13:46] <cjwatson> that was belabouring a point I had already addressed at some considerable length
[13:47] <elpargo> cjwatson, the fact that you replied with an unsatisfactory answer doesn't mean I shall accept it. On the other hand ogra asked so I replied to him.
[13:48] <ogra> elpargo, i didnt ask about ff3 inclusion since i agree with the decision taken
[13:48] <cjwatson> elpargo: you may find it unsatisfactory, but nevertheless it is reality and furthermore is in the past. There is no point in you continuing to harangue #ubuntu-devel about it now because the decision has been taken and thoroughly committed to
[13:48] <ogra> i was questioning your statement about testing
[13:48] <cjwatson> elpargo: if you are willing to assist us with good-quality bug reports, then we can help you
[13:48] <elpargo> cjwatson, and I'm not. I'm trying to ask now why the ff2 is failing with the install.
[13:49] <cjwatson> the first step is to file a bug with the text of the exception
[13:49] <elpargo> and I'm asking here because here is where that was suggested to me.
[13:49] <cjwatson> (and note that our primary firefox maintainer isn't here today)
[13:49] <cjwatson> and we're asking you to use the bug tracking system
[13:50] <elpargo> cjwatson, don't you think that the bug needs to be confirmed before?
[13:50] <cjwatson> that way, it can be properly dealt with; bugs filed on IRC have a very high chance of being lost, particularly when the relevant maintainer is busy trying to repair his laptop
[13:50] <elpargo> no need for bureaucracy
[13:50] <cjwatson> I haven't noticed bugs typically being confirmed *before* being filed
[13:50] <ogra> heh
[13:51] <cjwatson> it is not bureaucracy; it is the simplest way to ensure that your bug is actually remembered about, rather than dropped on the floor
[13:51] <cjwatson> if you will not take advice on this, we can't help you
[13:51] <cjwatson> in practice as a software developer it is completely impossible to remember about all bugs that somebody mentioned on IRC when you weren't even around
[13:52] <elpargo> cjwatson, IMO the last resort in solving a problem is to file a bug. the thing I hate the most about bug reports is when the issue could be solved by just asking in IRC or a mailing list.
[13:52] <cjwatson> it certainly can't be answered at present because you haven't provided any details of the problem
[13:52] <cjwatson> you've just said it throws an exception
[13:52] <ogra> elpargo, and how should we know whats on your screen ?
[13:53] <cjwatson> a bug report is the most convenient way to record that data so that everyone can see
[13:53] <Hobbsee> ogra: become psychopathic.
[13:53] <cjwatson> most bug reports, fortunately, are filed without pages of discussion on #ubuntu-devel first, otherwise it would be impossible for us to get anything done
[13:53] <elpargo> ogra, nice statement really.
[13:54] <elpargo> I'm going to find out how to reproduce the issue on my own, since no one here wants to help, so I'll file you a precious bug.
[13:55] <ogra> thanks :)
[13:55]  * Hobbsee notes that no-one talking here is a firefox developer, either.
[13:55] <elpargo> but please note you will close it as won't-fix, as the link suggests that things that work with an empty profile will be closed.
[13:56] <elpargo> without not with
[13:56] <cjwatson> I don't think you are in a position to tell us what we will do with a properly-filed bug with full instructions on reproducing it
[13:57] <cjwatson> of course, if you can't give full instructions on reproducing it, that would be a practical problem, but I'm sure with a bit of effort you can construct that (e.g. "you need to install this extension first").
[13:57] <cjwatson> this is a channel for coordination among Ubuntu developers, not for helping people reproduce bugs. Generally we like to help people out when they're cooperative, but it isn't usually any fun to help hostile people
[13:58] <liw> http://www.debuggingrules.com/ -- that book can be quite handy
[13:59] <cjwatson> and it's entirely true that issues with Firefox 3 are a very major problem with 8.04 at the moment; I'd go so far as to say one of the top three problems
[14:00] <cjwatson> which is why we'd like to work with people who report problems, so that we can make sure that we've caught as much as possible
[14:00] <cjwatson> but we need you to work with us too, rather than arguing at every turn
[14:02] <Hobbsee> woot!  one non-spam mail!
[14:03] <geser> dholbach: 4 new debdiffs on bug #230016 to sponsor and can you also sponsor bug #230246?
[14:03] <Hobbsee> hmm, a few ubuntu-related, non-spam mails.
[14:04] <dholbach> geser: can do
[14:05] <geser> thanks
[14:06] <emgent> heya people
[14:07] <geser> Hobbsee: can I ask you for some give-backs?
[14:07] <Hobbsee> geser: yes, if they're in one line, without commas.
[14:07] <geser> Hobbsee: libnet-openid-consumer-perl libnet-openid-server-perl libmail-dkim-perl libmail-box-perl libkwiki-perl libkwiki-cache-perl libmail-verify-perl libdevel-lexalias-perl libexpect-simple-perl libdevel-gdb-perl libemail-valid-perl libgstreamer-perl libnet-dns-async-perl libnet-jabber-loudmouth-perl libnet-rblclient-perl libnet-socks-perl libnet-sip-perl libpoe-component-client-dns-perl
[14:07] <geser> libpoe-api-peek-perl libpoe-perl
[14:08] <ln-> that's two lines
[14:08] <dholbach> geser: done
[14:09] <dholbach> geser: thanks a lot for taking care of it
[14:10]  * Hobbsee O.O @ lp
[14:10] <ogra> openoffice@lp ?
[14:11] <dholbach> geser: could it be that libgnome2-canvas-perl and libgnome2-perl are still unhappy?
[14:12] <geser> dholbach: yes, but they need a fixed libgtk2-perl first
[14:12] <dholbach> OK
[14:12]  * dholbach hugs super-geser
[14:14] <Hobbsee> geser: launchpad's broken buildd.py
[14:15] <elpargo> cjwatson, contrary to your belief I am working with you; otherwise I wouldn't be here in the first place.
[14:15] <geser> Hobbsee: try de-activating the redirection to edge
[14:15] <Hobbsee> geser: done, the script still goes to edge anyway.
[14:16] <elpargo> Now the reason I said the bug will be ruled out is because your own guidelines say that if it's a profile issue it will be ignored.
[14:16] <geser> hmm, pitti somehow managed yesterday to process my give-backs
[14:17] <elpargo> which by the way I just confirmed it was, and now I've lost my profile because firefox is kind enough to store a ton of complex data structures in its .files and finding the error there is harder than just redoing all the config.
[14:18] <pitti> Hobbsee: no, it didn't break the script; we just lost our super-powahs on edge
[14:19] <Hobbsee> pitti: pity.
[14:19] <pitti> Hobbsee: right, disable edge redirection, regenerate cookie, then it'll work
[14:19] <Hobbsee> pitti: well, which does break the script, because it makes it not work.
[14:19] <Hobbsee> pitti: how do i regenerate the cookie?
[14:19] <pitti> cookies-sql2txt .mozilla/firefox/*/cookies.sqlite launchpad.net > .lpcookie
[14:19] <pitti> Hobbsee: with http://people.ubuntu.com/~kees/scripts/cookies-sql2txt
[14:19]  * pitti hugs kees
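[Editor's note: kees' cookies-sql2txt script is not reproduced in the log; the sketch below is a guess at what such a conversion does, assuming Firefox 3's moz_cookies schema and the old Netscape cookies.txt tab-separated format. Function name and column handling are illustrative only.]

```python
import sqlite3

def cookies_sql2txt(sqlite_path, domain_filter):
    """Rough sketch: dump cookies for one domain from Firefox 3's
    cookies.sqlite into Netscape cookies.txt lines (domain, host-only
    flag, path, secure flag, expiry, name, value, tab-separated)."""
    conn = sqlite3.connect(sqlite_path)
    rows = conn.execute(
        "SELECT host, path, isSecure, expiry, name, value FROM moz_cookies "
        "WHERE host LIKE ?", ("%" + domain_filter,))
    lines = []
    for host, path, secure, expiry, name, value in rows:
        flag = "TRUE" if host.startswith(".") else "FALSE"
        sec = "TRUE" if secure else "FALSE"
        lines.append("\t".join([host, flag, path, sec, str(expiry), name, value]))
    conn.close()
    return "\n".join(lines)
```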
[14:26] <asac> elpargo: there is no general rule or guideline that says that we ignore profile issues.
[14:26] <asac> elpargo: best to file a bug with the information listed on https://wiki.ubuntu.com/MozillaTeam/Bugs
[14:27] <elpargo> "Problems with corrupt profiles are not handled in bug reports: if your problem went away with a new profile, please do not report the problem as a bug."
[14:27] <hwilde> elpargo, there is a difference between borked user profiles, and application . files
[14:27] <elpargo> hwilde, for firefox?
[14:28] <hwilde> elpargo, did you file the bug report yet ?
[14:28] <elpargo> why will I if getting rid of the .mozilla/firefox dir fixed it.
[14:28] <asac> elpargo: well ... in fact it means that we cannot deal with profile bugs that we cannot reproduce
[14:29] <asac> elpargo: if there is a bug in firefox that corrupts profiles that can be reproduced we certainly want to hear about it.
[14:29] <Hobbsee> hum.  now i've broken it
[14:29] <elpargo> asac, but that's circular logic..
[14:31] <geser> Hobbsee: still fighting with LP about the give-backs?
[14:31] <Hobbsee> geser: yes
[14:31] <Hobbsee> ahh, looks like that's fixed it.
[14:31] <Hobbsee> dunno what was originally in cookies.txt though
[14:31] <geser> should I ask pitti instead?
[14:33] <Hobbsee> no, it's working now
[14:33] <geser> good and thanks
[14:35] <sistpoty|work> oh asac: for bug #229009, I'll fix up the css of revu soon... if that makes the table drawn correctly in ff3, are you still interested in this bug? (ff2 of gutsy here at work does render it correctly even though the css seems broken)
[14:37] <Hobbsee> geser: all done
[14:37] <asac> elpargo: feel free to help on bugs in #ubuntu-mozillateam and help us to address profile issues better by triaging those bugs - we certainly are open for innovation :)
[14:38] <asac> sistpoty|work: not sure. did you report it as broken website in Firefox Help menu?
[14:39] <asac> sistpoty|work: i think that would be a good start
[14:40] <sistpoty|work> asac: no... well, problem is that this page is dynamic, so I guess I'll make a static copy first... but I'll do that once I'm home :)
[14:40] <asac> kk
[14:40] <munckfish> doko: you there?
[14:59] <doko> munckfish: yes
[15:00] <munckfish> doko: could you explain what you meant earlier about not needing ppu-gcc - in what context don't we need it?
[15:00] <munckfish> I've been using standard gcc for everything on PS3 so far
[15:01] <munckfish> I think these guys that reported that bug may have been trying to cross compile to powerpc from x86
[15:01] <munckfish> Also, next question - you're the maintainer on a bunch of power/cell related packages have you got time to continue that or would you like the PS3 Port Team
[15:01] <munckfish> to take over responsibility for those
[15:02] <munckfish> ?
[15:02] <munckfish> The list I've compiled so far is here: https://wiki.ubuntu.com/UbuntuPS3/Packages
[15:02] <munckfish> could you let me know if there are any I've missed?
[15:02] <munckfish> Thanks
[15:03] <doko> munckfish: these packages are obsolete in intrepid; if you do want to take care of those in hardy, that would be fine
[15:04] <munckfish> obsolete?
[15:04] <munckfish> in what way?
[15:04] <munckfish> can everything they achieve be achieved using something else now?
[15:12] <emgent> norsetto: o/
[15:13] <norsetto> emgent: 0/
[15:21] <munckfish> doko: in what way are they obsolete? Can the features they provide be achieved with other packages now?
[15:22] <doko> munckfish: it's built from gcc-4.3 and binutils, plus we have a newlib package
[15:23] <arthur-> hi
[15:25] <munckfish> doko: so what about the spu stuff?
[15:25] <munckfish> doko: so surely we still need to bag that stuff from the SDK, right?
[15:27] <doko> munckfish: maybe ask arthur- what is planned, but what do you mean by "stuff from the SDK"?
[15:28] <munckfish> doko: ok. I've not spent much time understanding what is needed in the way of a toolchain for the PS3, cause I've been battling with Xorg and Kernel probs so far
[15:28] <munckfish> I am aware of IBMs SDK/toolchain for the Cell, I presumed these packages you created related to that
[15:28] <arthur-> munckfish: binutils-spu + g{cc,++,fortran}-spu + newlib-spu ?
[15:29] <munckfish> I noticed the upstream URL was Barcelona Supercomputer project
[15:29] <munckfish> arthur-: yep exactly
[15:29] <arthur-> doko: since intrepid, are the spu packages synced from sid?
[15:30] <doko> arthur-: yes, merged
[15:31] <arthur-> cool
[15:31] <arthur-> munckfish: then what is the problem? something does not work?
[15:32] <doko> arthur-: we need to build gcc-4.3-spu without the hardening and the ssp-as-default patches. so either we have to copy the src dir for the build (and unapply these patches), or build from a separate source
[15:32] <munckfish> arthur-: 2 things - first there is a bug raised about ppu-gcc not being able to compile the kernel. Second now my attention has turned to these packages I am about to find out if they need to be updated to a new upstream version
[15:33] <munckfish> arthur-: LP 180319
[15:34] <munckfish> arthur-: also this is how far I got finding out about these packages https://wiki.ubuntu.com/UbuntuPS3/Packages
[15:39] <arthur-> munckfish: there is no ppu support in debian sid, and for spu it has been merged in main gcc-4.3 and binutils packages
[15:40] <arthur-> I don't think ppu is upstream as spu, have to check
[15:42] <arthur-> munckfish: are you sure you need ppu at all for the kernel?
[15:42] <ogra> wow, lots of new abbreviations floating around here today :)
[15:44] <doko> munckfish: ppu is just an alias
[15:45] <_lemsx1_> why was the security update for openssl (0.9.8g-4ubuntu3.1) uploaded with "urgency=low" and now "high"?
[15:45] <_lemsx1_> s/now/not/g
[15:46] <_lemsx1_> the same fix on the debian package is set to "high" (as I think it should)
[15:46] <james_w> urgency has no meaning in Ubuntu so it makes no difference.
[15:46] <arthur-> doko: there is no possibility to update the patch to have no effect for spu targets?
[15:46] <_lemsx1_> james_w: ok
[15:47] <doko> arthur-: it will get more ugly
[15:54] <munckfish> arthur-: no I know I can compile for the ppu from PS3 using normal gcc. I didn't know if it was needed for cross-compiling to ppu from other platforms.
[15:56] <arthur-> doko: shouldn't we build spu packages on i386 and amd64 as well and remove old cell-* packages?
[15:56] <arthur-> (once fixed for gcc-4.3)
[15:58] <doko> arthur-: sure, the old cell packages should be removed in intrepid once the other packages are working. if you do want to provide the cross toolchain, then it would make more sense to build that from separate source, b-d on gcc-4.3-source. at least that's what we want for the other cross builds as well
[15:59] <munckfish> doko: what's "b-d"?
[15:59] <doko> build-depend
[15:59] <munckfish> ok
[16:05] <munckfish> arthur-, doko: ok so am I understanding this correctly - PPU is just a powerpc arch, so to cross-build for that we just need to pass -mppc or similar, right?
[16:05] <munckfish> but, for spe we need to have a patch in gcc or a special build of gcc?
[16:06] <munckfish> so we don't need IBM's sdk nor the one from Barcelona Uni, we just need the patch they contributed upstream in order to compile for SPEs?
[16:06] <doko> munckfish: you need two compilers, the native for the ppu (just a normal recent gcc), and a cross build targeting the spu units
[16:06] <munckfish> Ok, that's what I now understand.
[16:07] <munckfish> Therefore to move forward from here - normal gcc is fine
[16:07] <munckfish> we just need to create a cross build for spe
[16:07] <munckfish> what about the ppu-sysroot thing with all the cross libs we still need that right?
[16:07] <munckfish> for linking
[16:08] <munckfish> I am prepared to do whatever work is necessary here.
[16:09] <munckfish> I am not afraid to learn new things quickly. But I am not a core/motu - would you guys be able to offer some sponsorship for uploads etc?
[16:11] <munckfish> arthur-, doko ^ ?
[16:12] <arthur-> munckfish: I'm not a core/motu
[16:12] <munckfish> ok, so I am in similar company :)
[16:12] <doko> munckfish: the spu cross build is already there (install an intrepid chroot), the ppu-sysroot is not needed, unless you want to have cross build dev environment
[16:12] <munckfish> ok fine
[16:12] <doko> arthur-, munckfish: time to become one
[16:12] <doko> dholbach: new victims ^^^ ;-p
[16:13] <munckfish> doko: would love to
[16:13] <dholbach> :-)
[16:13] <arthur-> doko: me?
[16:13] <doko> arthur-: why not?
[16:14] <munckfish> I know there is a process for that - just hadn't bothered to read it yet cause I was concerned to get actual stuff happening. c - j - watson and the xorg team have been sponsoring uploads for me so far
[16:16] <doko> munckfish: it may be a good time at the beginning of the release cycle
[16:16] <munckfish> doko: ok I'll look into it
[16:16] <munckfish> th
[16:16] <munckfish> thx
[16:21] <arthur-> doko: I will be able to do toolchain uploads in Ubuntu then?
[16:25] <doko> arthur-: when built from separate sources, yes, but first you have to become a motu
[16:49]  * Seeker`` would like to do motu stuff at some point
[17:49] <juliank> Keybuk: Update on Bug #228226?
[18:36] <Keybuk> juliank: your quoting of policy doesn't actually answer your question ;)
[18:38] <Keybuk> well, doesn't absolve you from making an unnecessary change
[18:38] <desrt> Keybuk; the change was necessary!  it was causing valgrind warnings!
[18:39] <Keybuk> desrt: this isn't #debian-devel ?
[18:39] <desrt> Keybuk; what?  where am i?
[18:39]  * desrt rubs his eyes
[18:40] <juliank> Keybuk: The question was why it was changed. You said 'so S01mountkernfs.sh will always be run before S01readahead.' - the answer is: "The two-digit number mm is used to determine the order in which to run the scripts" - therefore it has to be S02 and not S01, as it needs a script from S01
[18:42] <Keybuk> juliank: no
[18:42] <Keybuk> policy doesn't say ONLY the two-digit number
[18:42] <Keybuk> it just says that the two digit number is used (along with other things)
[18:44] <juliank> Keybuk: OK, it was the wrong reason. The real reason is that Debian's mountkernfs is S02 (there is only S01glibc.sh in Debian)
[18:45] <juliank> Keybuk: Ubuntu does not need this change, but I see no reason why it could not be merged.
[18:46] <Keybuk> because it would change our boot order
[18:47] <juliank> Keybuk: The only other script in S02 is hostname.sh
[18:48] <Keybuk> which will also run before it
[18:48] <Keybuk> you're not going to justify to me any change to the boot ordering ;)
[18:48] <Keybuk> it's a deliberate difference in Ubuntu
[18:48] <Keybuk> it should not be dropped
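[Editor's note: the ordering rule Keybuk is pointing at — the two-digit number plus the script name, i.e. a plain lexicographic sort of the rcS.d link names — can be sketched as below. The link names are from the discussion; the sort itself is how sysv-rc effectively orders them.]

```python
# With both links at S01, mountkernfs.sh still runs before readahead,
# because after the identical "S01" prefix, 'm' sorts before 'r'.
links = ["S01readahead", "S02hostname.sh", "S01mountkernfs.sh"]
boot_order = sorted(links)
print(boot_order)
# ['S01mountkernfs.sh', 'S01readahead', 'S02hostname.sh']
```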
[18:50] <CarlFK> cjwatson: just hit #22301 with hardy - want to look at my box before i do the workaround?
[18:51] <CarlFK> bug #22301 "Install -- Raid setup cannot see all of my RAID partitions"
[18:53] <CarlFK> Bug #22301
[19:03] <Mirv> cjwatson: bug 144741 is btw waiting for input on who should do what to get the new partitioning strings into Rosetta for translation. those that were supposed to be translated for 8.04.1.
[19:18] <Keybuk> lots of timeouts from LP today
[19:20] <Keybuk> soren: do you know much about openvz?
[19:20] <\sh> doko: if you can remember your changes to ruby1.9 ... do we need them still in 1.9.0.1
[19:20] <\sh> ?
[19:43] <juliank> Switching to aufs as the default filesystem to workaround the disabled unionfs?
[20:23] <soren> Keybuk: Not really.
[20:24] <soren> Keybuk: I know that upstream handles the kernel side of things themselves (mostly). I don't know if that's useful for you in any way?
[20:24] <Keybuk> we build -openvz kernels
[20:24] <Keybuk> from our git tree
[20:24] <soren> Yes.
[20:24] <soren> Based on a patch from them.
[20:24] <Keybuk> which would be nice, if they didn't cause an assertion error in Upstart ;)
[20:25] <soren> Ah.
[20:25] <soren> What's the problem?
[20:25] <Keybuk> openvz, for no readily apparent reason, changes the behaviour of mmap() for zero length files
[20:26] <soren> Keybuk: That's.. unfortunate.
[20:26] <soren> I can't imagine why.
[20:26] <Keybuk> and since vi tends to make zero length swap files while you're editing jobs, you end up with no system ;)
[20:26] <soren> Erk..
[20:26] <Keybuk> the upstream kernel changed the behaviour of mmap() quite a while back
[20:27] <Keybuk> to match POSIX
[20:27] <Keybuk> I can't imagine why openvz have to revert that change for compatibility when the ordinary kernel doesn't
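[Editor's note: the POSIX rule Keybuk refers to is that mmap() with a zero length must fail with EINVAL. Python's mmap module performs its own check and surfaces the same constraint as a ValueError when you try to map an empty file — roughly the situation Upstart hit with vi's zero-length swap files. A minimal demonstration:]

```python
import mmap
import os
import tempfile

# Create an empty temporary file and try to map it.
fd, path = tempfile.mkstemp()
try:
    try:
        m = mmap.mmap(fd, 0)   # length 0 means "whole file"; the file is empty
    except ValueError as e:
        print("mapping refused:", e)
finally:
    os.close(fd)
    os.unlink(path)
```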
[20:28] <soren> Keybuk: Nor can I.
[20:28] <soren> Keybuk: BenC is the one who's been in contact with the OpenVZ guys. I've not really been involved at all.
[20:28] <Keybuk> yes, well, getting in touch with the kernel team is sometimes like talking to god
[20:29] <Keybuk> you're on your knees, chatting away, but no evidence of anyone on the other end :p
[20:29] <soren> :)
[20:29] <soren> Is it urgent right now?
[20:29] <Keybuk> sorry, that was harsh
[20:30] <Keybuk> religious people do sometimes have compelling arguments for god's existence
[20:30] <soren> I think I have the e-mail for the OpenVZ contact somewhere, but I'm a bit tied up right now.
[20:30] <soren> I've met the kernel team. They exist, too.
[20:30] <soren> s/, too//
[20:30] <soren> I have to run for half an hour..
[20:31] <wasabi> Keybuk: no they don't.
[20:31] <wasabi> =)
[20:40] <e-gandalf> asac: ping
[20:41] <asac> e-gandalf: hi
[20:41] <e-gandalf> hi asac
[20:41] <e-gandalf> so, I'll join you from Mon till Wed
[20:41] <e-gandalf> if you want to use me in relation to any specific Mozilla topic, please let me know
[20:41] <asac> e-gandalf: cool.
[20:42] <e-gandalf> my email is zbraniecki@mozilla.com
[20:42] <e-gandalf> in particular, I'm working on a series of localization related tools, and would love to meet launchpad people :)
[20:43] <asac> I'll write something up and search for you on monday. thanks!
[20:44] <e-gandalf> sure
[20:44] <cr3> how can I determine a release is LTS programmatically? dapper use to display "LTS" in the output of lsb_release -a, but not hardy
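[Editor's note: at the time there was no dedicated interface for this; the usual heuristic was to look for "LTS" in the release description, which is exactly what cr3 found unreliable on hardy (whose description lacked the tag at release). The function below is an illustrative sketch of that heuristic, not an official API.]

```python
def is_lts(description):
    """Heuristic LTS check on the string printed by `lsb_release -d`
    (or DISTRIB_DESCRIPTION in /etc/lsb-release)."""
    return "LTS" in description.split()

print(is_lts("Ubuntu 6.06 LTS"))   # True
print(is_lts("Ubuntu 8.04"))       # False, even though hardy is an LTS
```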
[21:05] <[reed]> e-gandalf: #ubuntu-mozillateam
[22:16] <sdh_> http://metasploit.com/users/hdm/tools/debian-openssl/
[22:39] <slangasek> sdh: you're aware that Linux pid_t is an 'int', which is a 32-bit type rather than a 16-bit type?
[22:43] <sdh> slangasek: i can read typedefs, yes
[22:43] <slangasek> sdh: oh; I gather you're not actually the author of that page?
[22:44] <sdh> slangasek: i am not HD Moore, no
[22:44] <slangasek> ok then :)
[22:44] <sdh> slangasek: :P
[22:44] <sdh> slangasek: afaik, there is somewhere a "max procid" which effectively makes it 16 bits
[22:44] <sdh> a cursory grep hasn't shown me where
[22:47] <sdh> slangasek: similarly, even though it's a (signed) int, process ids are positive
[22:47] <sdh> etc...
[22:47] <sdh> but like i say, i am just a messenger :)
[22:49] <lucas> kernel.pid_max = 32768 is probably what you are looking for
[22:49] <sdh> thanks :)
[22:49] <sdh> a sysctl, i assume?
[22:49] <lucas> yes
[22:50] <sdh> thanks
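[Editor's note: this is why the page sdh linked can treat the PID as roughly 16 bits despite pid_t being a 32-bit int: with the default kernel.pid_max of 32768 there are only 2**15 candidate PIDs, hence that many candidate keys to brute-force. A small check of that arithmetic:]

```python
import math

DEFAULT_PID_MAX = 32768            # sysctl kernel.pid_max default
entropy_bits = math.log2(DEFAULT_PID_MAX)
print(entropy_bits)                # 15.0 bits of "entropy" at most
```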
[22:51] <slangasek> right, I'm well aware of the actual limitations, I just supposed I might've been talking to the author of that page and might be able to send him on a wild goose chase for a bit ;)
[22:51] <sdh> ;>
[22:51] <lucas> ah :)
[22:58] <cjwatson> Mirv: jtv is the man who can sort that out, I think
[23:19] <theunixgeek> please, ubuntu devs, fix the copypaste bug!!! :(
[23:19] <ion_> Launchpad ID?
[23:20] <theunixgeek> what?
[23:21] <Nafallo> copypaste?
[23:21] <ogra> Nafallo, !!!!
[23:21] <ogra> Nafallo, just saw the news today
[23:21] <theunixgeek> Nafallo: yes
[23:21]  * ogra hugs Nafallo 
[23:22] <Nafallo> ogra: thanks :-)
[23:22] <ion_> news?
[23:22] <theunixgeek> Nafallo: in fedora, mac os x, and windows *, you can copy, close an app, and then paste elsewhere. not in ubuntu.
[23:22] <Nafallo> theunixgeek: let me try that.
[23:22] <Nafallo> nafallo@wizard:~$ test
[23:22] <Nafallo> theunixgeek: I can :-)
[23:22] <Seeker`> pasting works for me
[23:22] <theunixgeek> :S
[23:22] <theunixgeek> weird
[23:23] <theunixgeek> like, type something in gedit, close, and paste in firefox
[23:23] <theunixgeek> default install
[23:23] <Seeker`> I just typed "pasting works for me" in gedit, closed it, and pasted it in here
[23:24] <theunixgeek> weird
[23:24] <theunixgeek> I'll be back in 1/2 hour
[23:24] <theunixgeek> bye
[23:49] <bryce> ogasawara_: bug #228399 turns out to be a kernel panic.  Not sure it's triaged correctly for kernel stuff though - you might want to take a look
[23:51] <bryce> ogasawara_: the user's situation is a bit crackful - he restored a full system backup onto different hardware.  So don't know how viable it is for solving, but I'll let you judge.