[05:36] <allee> [17:21]  <allee> Sun Galaxy X4100: mpt* drivers in dapper.  One or two plain disks are detected during installation. When the 2 disks are configured as RAID1 in the LSI controller, the kernel fails to detect the disk
[05:36] <allee> [17:22]  <allee> scsi0 : ioc0: LSISAS1064, FwRev=01040000h, Ports=1, MaxQ=511, IRQ=169
[05:36] <allee> [17:22]  <allee> Is this a limitation of the driver or a bug?
[05:36] <allee> [17:23]  <allee> SLES9 has no problem with the RAID1 of the LSI controller.  And uses it as sda.
[05:36] <allee> [17:28]  <allee> SLES9 uses: Fusion MPT SAS Host driver 3.02.62sus
[05:36] <allee> [17:28]  <allee> ubuntu uses:
[05:36] <allee> [17:28]  <allee> Fusion MPT SAS Host driver 3.03.04
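A filter like the one below is what one would run on the installer console to see whether mptsas attached the RAID1 volume as a disk. The dmesg lines here are simulated sample text (only the scsi0 line comes from the log above), not captured X4100 output:

```shell
# Simulated dmesg excerpt piped through the filter; on a real machine this
# would be `dmesg | grep ...`. Sample lines below are assumptions.
printf '%s\n' \
  'scsi0 : ioc0: LSISAS1064, FwRev=01040000h, Ports=1, MaxQ=511, IRQ=169' \
  'scsi 0:1:0:0: Direct-Access     LSILOGIC Logical Volume' \
  'sd 0:1:0:0: Attached scsi disk sda' |
grep -i -e 'LSISAS' -e 'Attached scsi disk'
```

If the controller line appears but no "Attached scsi disk" line does, the driver saw the IOC but never exposed the RAID volume as a block device.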
[05:48] <svenl> hi guys ...
[05:49] <svenl> is it true that you guys dropped the mkvmlinuz support for the dapper linux-image kernel ? 
[05:49] <svenl> i heard a report claiming it was because "nobody uses oldworld machines anymore", which seems very clueless ...
[05:55] <mjg59> We've never supported oldworld machines
[05:55] <mkrufky> what is 'oldworld' ?
[05:55] <mjg59> mkrufky: Pre-imac and blue and white G3s
[05:58] <mjg59> When those were introduced, Apple went to a new Open Firmware version
[05:58] <mjg59> So they ended up being called "old world" and "new world" macs
[05:58] <svenl> mjg59: yeah, but the mkvmlinuz support patch was not there for oldworld, but for chrp machines, including IBM chrp and genesis's pegasos machine.
[05:58] <mkrufky> ah, okay
[05:59] <svenl> mjg59: and genesis was just added recently as a ubuntu partner, so i doubt that sabotaging the pegasos support this way is the right thing to do.
[05:59] <mjg59> svenl: If you are going to describe it as "sabotaging", then this conversation is over
[05:59] <svenl> mjg59: well.
[05:59] <fabbione> mjg59: ++
[05:59] <makx> thanks
[06:01] <svenl> mjg59: oh well, i will let my hierarchy handle this with the ubuntu hierarchy, but this is a repeat of what happened for the breezy release, so i am not overly sympathetic, especially since the ubuntu kernel guys are making noises about unification of the ubuntu kernel with the debian kernel on public forums and such.
[06:02] <mjg59> Drive-by svenning
[06:16] <fabbione> uh?=
[06:16] <fabbione> what noise?
[06:16] <fabbione> when?
[06:17] <fabbione> wth is he talking about?
[06:18] <mjg59> No idea
[06:18] <fabbione> neither do i
[06:20] <dilinger> sorry guys, that was probably my fault
[06:20] <mjg59> I don't think Sven can ever be anyone else's fault
[06:21] <dilinger> the fact that he's re-evaluating ubuntu-kernel, that is
[06:21] <dilinger> heh
[06:21] <mjg59> Have you kicked him off debian-kernel yet?
[06:22] <dilinger> no, i just angered him.  i intend to get him ejected from the project completely.
[06:22] <fabbione> dilinger: LOL
[06:27] <dilinger> fabbione: btw, you have any pointers to architecture/design of ocfs2?
[06:27] <fabbione> dilinger: the code?
[06:28] <fabbione> it's the "usual" clustered FS
[06:28] <fabbione> all transactions need to go through the DLM
[06:28] <fabbione> it's journaled like ext3 (same backend)
[06:29] <dilinger> fabbione: a coworker's evaluating cluster filesystems (pvfs2, gfs, ocfs2, etc)..  i was hoping for something to pass along for him to read
[06:29] <dilinger> since he can't seem to find details about ocfs2
[06:29] <fabbione> dilinger: you can check on oss.oracle.com
[06:29] <fabbione> the project is hosted there
[06:29] <fabbione> but the concept behind each clusterFS is the same
[06:30] <fabbione> and that's where the real power of FS comes frok
[06:30] <fabbione> from
[06:30] <fabbione> the Distributed Lock Manager
[06:30] <fabbione> performance of the FS is highly dependent on that
[06:31] <fabbione> also.. ocfs2 is the simplest one around.. at least that i have seen
[06:31] <fabbione> it's really basic clustering
[06:31] <fabbione> a more complete suite is the GFS/RH cluster
[06:33] <dilinger> fabbione: simplest in terms of implementation?  features?
[06:34] <fabbione> features
[06:35] <fabbione> in my experience:
[06:35] <fabbione> - ocfs2 is faster, but needs more manual tuning to handle the timeouts between nodes properly. It does "only" cluster FS
[06:36] <fabbione> - gfs/rh cluster suite is generally slower, needs no manual tuning, and offers a complete suite for clustering: shared IPs, failing over services, etc.
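The manual tuning fabbione mentions lives mostly in the O2CB heartbeat settings and /etc/ocfs2/cluster.conf. A two-node sketch, where the node names, addresses, and port are made-up example values:

```
# /etc/ocfs2/cluster.conf -- example values only
cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.0.10
        number = 0
        name = node0
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.0.11
        number = 1
        name = node1
        cluster = ocfs2
```

The inter-node timeout itself is typically raised via O2CB_HEARTBEAT_THRESHOLD in the o2cb defaults file (its path varies by distribution).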
[06:49] <dilinger> ok, thanks
[06:53] <BenC> damn, I missed sven and his FUD
[08:10] <torkel> dilinger: Lustre might be interesting for him to take a look at too; http://www.clusterfs.com/
[08:12] <dilinger> yep
[08:12] <dilinger> i've been following lustre development ever since the clusterfs people abandoned intermezzo
[08:13] <torkel> would be nice to see it in Ubuntu :-)
[08:13] <dilinger> yea
[08:13] <dilinger> i was going to package it for debian ages ago
[08:13] <dilinger> but decided not to based on their release policy
[08:14] <dilinger> they've changed it since, but i haven't had time to look at it again
[08:14] <dilinger> i may still package it
[08:14] <dilinger> i think mrvn took over my ITP
[08:14] <torkel> a co-worker did a quick-and-dirty package of it last week
[08:14] <torkel> not sure how far he got though
[08:15] <dilinger> i'd be interested in seeing it
[08:15] <dilinger> like i said, my coworker's evaluating different cluster filesystems; if we decide to go w/ lustre, i'll probably end up maintaining packages
[08:15] <dilinger> quick-and-dirty packages make it that much easier for him to try out
[08:17] <torkel> he is on vacation this week, but if you send me a mail (otherwise I will forget it) I can ask him to get in touch with you
[08:20] <jbailey> BenC: Around?
[08:29] <BenC> jbailey: yeah
[08:39] <jbailey> BenC: Are you packaging newer kernels at all for testing?
[08:40] <jbailey> I thought I saw some reports of some threading stuff fixed in newer kernels.  If you've already got a set built, I might poke my head into them.
[08:40] <jbailey> threading and signals.
[08:40] <BenC> jbailey: haven't had a chance to start merging to newer kernels yet
[08:40] <jbailey> No worries.  I'll try to remember how to build one myself for the test.
[08:41] <jbailey> None of it's a regression from previous releases, but on some arches there are weird nptl responses and such.
[08:41] <fabbione> jbailey: jumping from .15 to .16 is a big step due to all the mutex changes
[08:42] <fabbione> merging back is difficul
[08:42] <fabbione> +t
[08:42] <jbailey> fabbione: Right, I'm not expecting to backport anything, more just curious if these are the problems referred to.
[08:42] <jbailey> fabbione: I'm trying to figure out what the path to actually getting clean glibc results is.
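For the glibc side of that testing, one quick sanity check is asking getconf which thread library is actually in use; GNU_LIBPTHREAD_VERSION is a standard glibc getconf variable, though the exact version string printed will vary by system:

```shell
# Report the threading implementation glibc was built with (e.g. NPTL vs
# LinuxThreads); useful when chasing per-arch nptl oddities against a kernel.
getconf GNU_LIBPTHREAD_VERSION
```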
[11:17] <bronson> Anybody here able to boot in recovery mode?
[11:17] <bronson> Regular mode works fine.
[11:17] <bronson> Recovery mode kernel panics pretty early on.
[11:18] <bronson> Dunno when this started...  I don't go into recovery much.
[11:18] <bronson> Just wondering if this is a me-only thing, or are other people seeing it too?
[11:18] <bronson> (dapper, upgraded last night)