/srv/irclogs.ubuntu.com/2010/01/08/#ubuntu-kernel.txt

[01:05] <MTeck-Linux> what code is it that requires initramfs support now in 9.10 that wasn't there in 9.04?
[01:07] <RAOF> MTeck-Linux: You mean 10.04 rather than 9.10? IIRC there was a "parallelise populating of root filesystem" patch for bootspeed that broke non-initramfs.
[01:07] <MTeck-Linux> ya..
[01:07] <MTeck-Linux> oh...
[01:09] <MTeck-Linux> RAOF: You have any idea if there's intention to have that fixed by release time?
[01:12] <RAOF> MTeck-Linux: It'd want to be fixed before sending upstream, but I don't believe that the kernel team cares about the non-initramfs case for Ubuntu.
[01:12] <RAOF> MTeck-Linux: I think there was discussion about this on the kernel-team@ mailing list, for what it's worth.
[01:13] <MTeck-Linux> RAOF: any idea how long ago? I could make an attempt to try to fix it - although it's not likely I have that skill
[01:17] <RAOF> MTeck-Linux: It'd be about a month ago, I think.
[01:17] <RAOF> I can't seem to find the exact mail.
[01:19] <MTeck-Linux> RAOF: thanks, I can look for it later too; once I get that other thing that's missing in my config I can see if I can figure out how to fix it :)
=== jMyles_ is now known as jMyles
=== Whoopie_ is now known as Whoopie
[12:02] * lamont tries to remember - in general, could one run a dapper userspace on a hardy kernel?
[12:40] <smb> lamont, With luck, but with no guarantees
[12:41] <lamont> right
[12:41] <lamont> our test of PPC_CELL=n didn't pan out, so it's back to dapper
[12:42] <lamont> but it'd be nice to test a hardy kernel with tweaks from time to time, to see if we can get to hardy
[12:43] <smb> The problem likely is with userspace, as the kernel between LTS releases is likely prone to some kernel-userspace ABI changes
[12:44] <smb> And actually I got the feeling we probably do not care enough to make much effort there, as Dapper goes away soon, if I did not remember the wrong year
[12:50] <smb> lamont, Ok, sorry, I remembered the wrong year. Maybe more hoping than remembering. :-P
[13:01] <_ruben> my current build box has 2 cpus with ht (so 4 logical cpus), when compiling a kernel, what would be the "optimal" concurrency setting?
[13:02] <rtg> _ruben, by default the kernel build determines the number of CPUs and does a 'make -jX' accordingly
[13:02] <_ruben> rtg: ah ok, didn't know that
[13:03] <rtg> _ruben, you can override it by setting CONCURRENCY_LEVEL as an environment variable
[13:05] <_ruben> but i suppose the default should be "best" in general...
[13:05] <_ruben> i had my doubts between N and N+1 .. based on very vague memories from long long ago
[13:06] <smb> _ruben, You might always try out what is best in your case, but generally the build is I/O bound
[13:06] <rtg> _ruben, it hardly makes much difference. you're usually I/O bound anyway
[13:06] <rtg> smb, stop reading my mind
[13:06] * smb tries
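[Editor's note: the CONCURRENCY_LEVEL override rtg describes can be sketched as below. This is a sketch, not from the log; the getconf call is an assumed portable way to count online CPUs, and the actual build command is left commented out.]

```shell
# Sketch: set CONCURRENCY_LEVEL to the online CPU count before the kernel build.
# The variable name is from the log; everything else here is illustrative.
CONCURRENCY_LEVEL=$(getconf _NPROCESSORS_ONLN)
export CONCURRENCY_LEVEL
echo "would build with make -j${CONCURRENCY_LEVEL}"
# fakeroot debian/rules binary-generic   # hypothetical flavour name
```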
[13:07] <smb> For i7 systems there have been suggestions to limit the number of cores and so allow the rest to go to boos mode
[13:08] <rtg> smb, what is boos mode?
[13:08] <smb> (fingers still a bit numb after doing a walk outside)
[13:08] <smb> On those, if you do not use all cores, the remaining ones go faster (the die temp is the same in that case)
[13:09] <rtg> smb, not many i7's in the real world yet, are there?
[13:09] <smb> rtg, one in my room :)
[13:09] <rtg> smb, and 2 in my shop
[13:12] <smb> So at least 3. And there was some coverage on those in one magazine I read. But yeah, probably a bit costly for normal usage.
[13:48] <_ruben> default build gives me 67% user, 8% sys, 25% idle .. disk util around 5% .. according to iostat
[13:52] <smb> I seem to remember the iowait statistics were the interesting ones. But I must admit I have not really looked closely lately.
[13:55] <_ruben> smb: was looking at those just now .. between 0.00% and 0.30% .. software raid1 over 2 sata disks
[13:58] <smb> Sounds a bit like there would be room for trying a bigger -j value to see whether this would get your cpus more saturated. I think in the end for us it was an ok approach to just go for the number of cpus. You still can do more than one build in parallel...
[14:02] <_ruben> heh .. increased concurrency to 6 and iowait skyrockets :)
[14:04] <smb> hehe, ok, so the default is not too bad. :)
[14:05] <_ruben> apparently so :)
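[Editor's note: the iowait counter discussed above can be read directly from /proc/stat on Linux - a sketch, not from the log. The fifth value on the "cpu" line is the cumulative iowait jiffies; iostat's %iowait is derived from the delta of this counter over its sampling interval.]

```shell
# Sketch: read cumulative CPU accounting from the first line of /proc/stat.
# Fields after "cpu" are user, nice, system, idle, iowait, ... in jiffies.
read -r cpu user nice system idle iowait rest < /proc/stat
total=$((user + nice + system + idle + iowait))
echo "iowait jiffies so far: $iowait (of $total counted here)"
```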
[14:08] <hashimi> Hi everyone
[14:09] <hashimi> I have a problem
[14:09] <hashimi> I want to change the boot splash image on my own Linux setup.
[14:09] <hashimi> But i don't know how to do it.
[14:09] <hashimi> could anyone help me?
[14:11] <smb> hashimi, Hopefully they did not send you here from there, but I think in #ubuntu-devel there might be more chance of finding someone with that knowledge
[14:11] <hashimi> Ok, really thanks for the reply. I will head over there.
[14:11] <hashimi> Thanks smb
[14:12] <_ruben> btw .. currently i build the kernel by calling debian/rules directly, is there a recommended/supported/whatever method that'd yield proper .dsc files and the like, so i can dput it to my own repo
[14:13] <smb> _ruben, debuild -B should yield the same results as the buildds
[14:14] <smb> possibly needing -uc -us to not sign the files by default, or overriding the signer's key
[14:14] <_ruben> smb: that'd probably build all flavours, right? as im only interested in my own custom flavour
[14:16] <smb> Right, in that case you would either need to modify things to remove flavours, or do as you do now. But that creates the deb files only
[14:16] <smb> For the source package you can call "debuild -S -i -I" after a "fakeroot debian/rules clean"
[14:17] <smb> err
[14:18] <smb> "dpkg-buildpackage -S -i -I" or "debuild -S" I think, though I usually use the former
[14:25] <_ruben> smb: ah ok, thanks for the tips
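[Editor's note: smb's two-step source-package recipe, written out as a sketch. The commands are assembled as strings and printed rather than run, since they need a real kernel source tree; combining -uc -us with -S follows his earlier remark about skipping signing.]

```shell
# Sketch of smb's source-package steps; printed, not executed, here.
clean_cmd="fakeroot debian/rules clean"          # regenerate packaging state first
build_cmd="dpkg-buildpackage -S -i -I -uc -us"   # -S: source only; -i/-I: exclude
                                                 # VCS cruft; -uc -us: don't sign
printf '%s\n%s\n' "$clean_cmd" "$build_cmd"
```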
[14:44] <_ruben> crap .. used the wrong buildroot .. used a karmic one, while i need a kernel for hardy :p
[14:45] <_ruben> gives me a libc6 dependency error on install :)
[14:50] <smb> Doh, and oh btw, I think in Hardy the default -j might not (yet) be related to the number of CPUs.
[14:52] <_ruben> good call
[14:52] * _ruben hits ^C
[14:52] <_ruben> noticed a -j1 :)
[14:53] <_ruben> let's try a concurrency level of 5
=== BenC2 is now known as BenC
=== yofel_ is now known as yofel
[16:55] <_ruben> weird .. my i386 backport of karmic's kernel (with custom patch) does work as expected .. but the amd64 backport fails to detect my lvm at boot :/
[16:55] <_ruben> oh well, that's something for next week to figure out
[17:09] <rackerhacker> jjohansen: were you able to remember where that extra kernel memory came from in linux-ec2 on amd64?
[17:12] <jjohansen> rackerhacker: no, I need to spend some time looking at the code and I just haven't managed to get to it yet
[17:13] <rackerhacker> jjohansen: that's okay, i figured i'd just check in ;)
=== asac_ is now known as asac

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!