[01:05] <MTeck-Linux> what code is it that requires initramfs support now in 9.10 that wasn't there in 9.04?
[01:07] <RAOF> MTeck-Linux: You mean 10.04 rather than 9.10? IIRC there was a "parallelise populating of root filesystem" patch for bootspeed that broke non-initramfs.
[01:07] <MTeck-Linux> ya..
[01:07] <MTeck-Linux> oh...
[01:09] <MTeck-Linux> RAOF: You have any idea if there's intention to have that fixed by release time?
[01:12] <RAOF> MTeck-Linux: It'd want to be fixed before sending upstream, but I don't believe that the kernel team cares about the non-initramfs case for Ubuntu.
[01:12] <RAOF> MTeck-Linux: I think there was discussion about this on the kernel-team@ mailing list, for what it's worth.
[01:13] <MTeck-Linux> RAOF: any idea how long ago? I could try to fix it - although it's not likely I have that skill
[01:17] <RAOF> MTeck-Linux: It'd be about a month ago, I think.
[01:17] <RAOF> I can't seem to find the exact mail.
[01:19] <MTeck-Linux> RAOF: thanks, I can look for it later too; once I get that other thing that's missing on my config I can see if I can figure out how to fix it :)
[12:02]  * lamont tries to remember - in general could one run a dapper userspace on a hardy kernel?
[12:40] <smb> lamont, With luck but with no guarantees
[12:41] <lamont> right
[12:41] <lamont> our test of PPC_CELL=n didn't pan out, so it's back to dapper
[12:42] <lamont> but it'd be nice to test a hardy kernel with tweaks from time to time to see if we can get to hardy
[12:43] <smb> The problem likely is with userspace, as the kernel between LTS releases is likely prone to some kernel-userspace ABI changes
[12:44] <smb> And actually I got the feeling we probably do not care enough to make much effort there as Dapper goes away soon if I did not remember the wrong year
[12:50] <smb> lamont, Ok, sorry I remembered the wrong year. Maybe more hoping than remembering. :-P
[13:01] <_ruben> my current build box has 2 cpus with ht (so 4 logical cpus), when compiling a kernel, what would be the "optimal" concurrency setting?
[13:02] <rtg> _ruben, by default the kernel build determines the number of CPUs and does a 'make -jX' accordingly
[13:02] <_ruben> rtg: ah ok, didnt know that
[13:03] <rtg> _ruben, you can override it by setting CONCURRENCY_LEVEL as an environment variable
[13:05] <_ruben> but i suppose the default should be "best" in general...
[13:05] <_ruben> i had my doubts between N and N+1 .. based on very vague memories from long long ago 
[13:06] <smb> _ruben, You might always try out what is best in your case, but generally the build is io bound in most cases
[13:06] <rtg> _ruben, it hardly makes much difference. you're usually I/O bound anyway
[13:06] <rtg> smb, stop reading my mind
[13:06]  * smb tries
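The default rtg describes can be mirrored by hand; a minimal sketch, assuming `nproc` (coreutils) is available — `CONCURRENCY_LEVEL` is the override variable rtg names:

```shell
# Sketch: one build job per logical CPU, the kernel build's default.
# CONCURRENCY_LEVEL is the environment override rtg mentions.
JOBS=$(nproc)
export CONCURRENCY_LEVEL="$JOBS"
echo "would run: make -j$CONCURRENCY_LEVEL"
```

With 2 HT CPUs as in _ruben's box, `nproc` reports 4 logical CPUs, so this would yield `make -j4`.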
[13:07] <smb> For i7 systems there have been suggestions to limit the number of cores and so allow the rest to go to boos mode
[13:08] <rtg> smb, what is boos mode?
[13:08] <smb> (fingers still a bit numb after doing a walk outside)
[13:08] <smb> On those if you do not use all cores the remaining go faster (the die temp is the same in that case)
[13:09] <rtg> smb,  not many i7's in the real world yet, are there?
[13:09] <smb> rtg, one in my room :)
[13:09] <rtg> smb, and 2 in my shop 
[13:12] <smb> So at least 3. And there was some coverage on those in one magazine I read. But yeah, probably a bit costly for normal usage.
[13:48] <_ruben> default build gives me 67% user, 8% sys, 25% idle .. disk util around 5% .. according to iostat
[13:52] <smb> I seem to remember the iowait statistics were the interesting ones. But must admit I have not really looked closely lately.
[13:55] <_ruben> smb: was looking at those just now .. between 0.00% and 0.30% .. software raid1 over 2 sata disks
[13:58] <smb> Sounds a bit like there would be room for trying a bigger -j value to see whether this would get your CPUs more saturated. I think in the end for us it was an ok approach to go just with the number of CPUs. You still can do more than one build in parallel...
[14:02] <_ruben> heh .. increase concurrency to 6 and iowait sky rockets :)
[14:04] <smb> hehe, ok, so the default is not too bad. :)
[14:05] <_ruben> apparently so :)
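The iowait figures _ruben quotes can also be read straight from `/proc/stat`, without iostat; a rough sketch (Linux-specific, cumulative ticks since boot rather than iostat's per-interval numbers — field order per proc(5): user, nice, system, idle, iowait, ...):

```shell
# Sketch: pull the aggregate CPU tick counters from the first line of /proc/stat.
# iowait is the 5th counter; a high share of total suggests an I/O-bound build.
read -r _ user nice sys idle iowait rest < /proc/stat
total=$((user + nice + sys + idle + iowait))
echo "iowait ticks: $iowait of $total"
```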
[14:08] <hashimi> Hi everyone
[14:09] <hashimi> I have a problem
[14:09] <hashimi> I want to change the boot splash image on my own Linux setup.
[14:09] <hashimi> But i don't know how to do it.
[14:09] <hashimi> could anyone help me?
[14:11] <smb> hashimi, Hopefully they did not send you here from there, but I think in #ubuntu-devel there might be more chance to find one with that knowledge
[14:11] <hashimi> Ok really thanks for the reply. I will go over there.
[14:11] <hashimi> Thanks smb
[14:12] <_ruben> btw .. currently i build kernel by calling debian/rules directly, is there a recommended/supported/whatever method that'd yield proper .dsc files and the like, so i can dput it to my own repo
[14:13] <smb> _ruben, debuild -B should yield the same results as the buildd's
[14:14] <smb> possibly needing -uc -us to skip signing the files, or overriding the signer's key
[14:14] <_ruben> smb: that'd probably build all flavours right? as im only interested in my own custom flavour 
[14:16] <smb> Right, in that case you would either need to modify things to remove flavours or do as you do now. But that creates the deb files only
[14:16] <smb> For the source package you can call "debuild -S -i -I" after a "fakeroot debian/rules clean"
[14:17] <smb> err
[14:18] <smb> "dpkg-buildpackage -S -i -I" or "debuild -S" I think, though I usually use the former
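Put together, smb's recipe for a source package might look like the following; a dry-run wrapper is used so the sequence can be read (and run) outside a kernel tree — drop the `echo` to execute for real. The repo name `myrepo` is a placeholder for your own dput target:

```shell
# Dry-run sketch of smb's source-package recipe; "run" only prints commands.
run() { echo "+ $*"; }
run fakeroot debian/rules clean      # clean the tree first
run debuild -S -i -I -uc -us         # unsigned source package (-uc -us)
run dput myrepo ../*_source.changes  # upload the .dsc/.changes to your repo
```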
[14:25] <_ruben> smb: ah ok, thanks for the tips
[14:44] <_ruben> crap .. used the wrong buildroot .. used a karmic one, while i need a kernel for hardy :p
[14:45] <_ruben> gives me a libc6 dependency error on install :)
[14:50] <smb> Doh, and oh btw. I think in Hardy the default -j might not (yet) be related to the number of CPUs
[14:52] <_ruben> good call
[14:52]  * _ruben hits ^C
[14:52] <_ruben> noticed a -j1 :)
[14:53] <_ruben> lets try concurrency level of 5
[16:55] <_ruben> weird .. my i386 backport of karmic's kernel (with custom patch) does work as expected .. but the amd64 backport fails to detect my lvm at boot :/
[16:55] <_ruben> oh well, that's something for next week to figure out
[17:09] <rackerhacker> jjohansen: were you able to remember where that extra kernel memory came from in linux-ec2 on amd64?
[17:12] <jjohansen> rackerhacker: no, I need to spend some time looking at the code and I just haven't managed to get to it yet
[17:13] <rackerhacker> jjohansen: that's okay, i figured i'd just check in ;)