[12:36] <foo> According to http://www-128.ibm.com/developerworks/aix/library/au-satslowsys.html?ca=dgr-lnxw01QuickUnix, vmstat's first two columns, r and b, are the number of processes in runtime and the number of blocked processes waiting for I/O resources, respectively. However, the vmstat man page says: r) the number of processes waiting for run time, and b) the number of processes in uninterruptible sleep. I do think there is a difference. Which description is more accurate?
[12:39] <shawarma> b
[12:40] <foo> shawarma: The man page or the web page?
[12:40] <shawarma> r counts number of processes in a runnable state, not the number of processes actually running (which is severely limited by the number of processors).
[12:41] <shawarma> foo: Ah, my bad. I pick door number two. 
[12:41] <shawarma> foo: Also, uninterruptible sleep ~= blocked waiting for I/O resources.
[12:42] <shawarma> foo: so the man page is slightly more accurate.
[12:42] <foo> shawarma: oh, ok, so b = blocked processes waiting for I/O, not just blocked processes.
[12:42] <foo> shawarma: What confused me was that it was blocked and just dropped and ignored, or something
[12:43] <shawarma> foo: There's not much else you can wait for..
[12:45] <shawarma> foo: So "blocked waiting for I/O resources" is sort of redundant.
[12:46] <foo> Ah, I see
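[12:47] <editor's note> The distinction is easy to check directly. A minimal sketch that pulls the r and b columns out with awk; the vmstat output below is a canned sample, not from a real run:

```shell
# Sample vmstat output (illustrative values). On a live system, replace the
# variable with:  vmstat 1 2 | awk 'NR > 2 { ... }'
sample='procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  1      0 512340  80412 901234    0    0    10    20  150  300  5  2 90  3  0'

# r = runnable (waiting for run time), b = uninterruptible sleep (blocked on I/O)
echo "$sample" | awk 'NR > 2 { printf "runnable=%s blocked=%s\n", $1, $2 }'
# prints: runnable=2 blocked=1
```

With `vmstat 1 2`, the second sample reflects current activity rather than averages since boot.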
[04:17] <wiikki> Hellooooooooooooooooooooooooooooow
[04:17] <wiikki> I installed ubuntu server. How can I install a desktop? I want fluxbuntu / fluxbox
[04:17] <wiikki> i used apt-get install fluxbox
[04:17] <wiikki> what next
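[04:18] <editor's note> What wiikki is missing after `apt-get install fluxbox` is an X server and a session entry point. A hedged sketch with era-typical Ubuntu package names; it is dry-run by default, so nothing is touched until DRY_RUN=0:

```shell
# Dry-run wrapper: prints commands instead of running them unless DRY_RUN=0.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "would run: $*" || "$@"; }

run sudo apt-get install xorg fluxbox             # fluxbox ships no X server
run sh -c 'echo "exec startfluxbox" > ~/.xinitrc' # make startx launch fluxbox
run startx                                        # bring up X with fluxbox
```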
[06:29] <wikkii> Hello ?
[10:16] <shawarma> These are the last ~30 lines of dmesg on a server I have to deal with: http://pastebin.ca/432477    Can someone with OCFS2 experience tell me how fscked I am? 
[10:16] <shawarma> Is fsck.ocfs2 likely to save the day or make it even worse?
[10:17] <shawarma> The filesystem is stored on a drbd device shared between two servers.
[10:21] <shawarma> fabbione: You've used ocfs2 before, no? Got a sec? ^^
[10:21] <fabbione> shawarma: looking
[10:21] <shawarma> fabbione: *G* Excellent.
[10:22] <fabbione> shawarma: is that dapper? edgy? feisty?
[10:22] <shawarma> It's an Edgy server with a custom kernel.
[10:22] <fabbione> also drbd.. brrrrrrrr
[10:22] <fabbione> custom kernel?
[10:23] <fabbione> define custom kernel
[10:23] <shawarma> 2.6.21-rc4
[10:23] <fabbione> oh
[10:23] <shawarma> Vanilla 2.6.21-rc4, I think.
[10:23] <fabbione> you are on your own man :)
[10:23] <shawarma> Heh. :-)
[10:23] <shawarma> Well, the ocfs user space tools are those from Edgy. 
[10:23] <fabbione> .20 is still getting a lot of bug fixes (OCFS2) that are not in .21 yet
[10:23] <fabbione> makes no diff
[10:24] <fabbione> the bug fixes are in kernel
[10:24] <fabbione> tho you want a more recent userland for other reasons
[10:24] <fabbione> you can try to fsck but i don't guarantee you anything
[10:26] <shawarma> I'm not quite looking for guarantees at this point, but just a little something that would help my gut feeling about running that fsck. :-)
[10:27] <shawarma> Of course I don't have enough space available there to move all the data elsewhere as a backup... and there's no proper backup..
[10:27] <shawarma> Gah... clients.
[10:27] <shawarma> :-)
[10:28] <shawarma> And their usual admin is in the Caribbean sound asleep. Typical.
[10:28] <fabbione> shawarma: blame yourself for using untested kernels on unsupported block devices
[10:28] <fabbione> unmount the filesystem from all nodes
[10:28] <fabbione> make sure that drbd is in sync across nodes
[10:29] <fabbione> and then fsck
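[10:29] <editor's note> fabbione's recipe, written down as a checklist printer so nothing here touches the disk. The mountpoint and device names (/mnt/cluster, /dev/drbd0) are assumptions for illustration; the -n read-only pass comes from the fsck.ocfs2 man page:

```shell
# Prints the recovery steps in order; purely informational.
ocfs2_fsck_procedure() {
  echo "1. on EVERY node:  umount /mnt/cluster"
  echo "2. verify sync:    grep UpToDate/UpToDate /proc/drbd"
  echo "3. read-only pass: fsck.ocfs2 -n /dev/drbd0"
  echo "4. real repair:    fsck.ocfs2 /dev/drbd0"
}
ocfs2_fsck_procedure
```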
[10:29] <shawarma> fabbione: Oh, this time, it's not my fault, actually. :-) They managed this without my help. :-)
[10:29] <fabbione> shawarma: also.. upgrade the tools to the latest version to make sure fsck is new
[10:29] <fabbione> i recall some bug fixing there at some point
[10:29] <shawarma> fabbione: Are the Feisty versions up to date?
[10:29] <fabbione> shawarma: yes
[10:30] <fabbione> it's one release behind, but the new release from upstream has only minor things that you really don't care about at this point
[10:30] <shawarma> fabbione: Cool. I'll backport them from there then.
[10:30] <fabbione> yeah it should be easy enough to rebuild
[10:30] <shawarma> fabbione: Thanks for your help so far. Gotta run for about an hour.
[11:28] <shawarma> fabbione: Does fsck.ocfs2 at least tell you before it eats your cat^Hdata?
[11:28] <fabbione> shawarma: dunno.. i never had to use it
[11:28] <shawarma> fabbione: lucky. :-)
[11:28] <fabbione> shawarma: because i use sane SANs and sane kernels
[11:28] <Kamping_Kaiser> fsck has a simulation mode doesnt it?
[11:28] <shawarma> Kamping_Kaiser: Depends on the fsck, I suppose.
[11:29] <fabbione> as shawarma said
[11:29] <ivoks> urgh... funny stuff right in the morning; a guy disconnected two disks in a raid5 array (without shutting them down) :)
[11:29] <fabbione> shawarma: well.. man fsck.ocfs2
[11:29] <Kamping_Kaiser> i thought any fscks did
[11:29] <fabbione> Kamping_Kaiser: no.. it varies from implementation to implementation
[11:29] <fabbione> it's good sense to have it
[11:29] <Kamping_Kaiser> i think i'll be remembering that :)
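[11:30] <editor's note> To make Kamping_Kaiser's question concrete: dry-run support does vary, but for the checkers likely involved here it is the -n flag (open read-only, answer "no" to every proposed fix). A printed cheat-sheet; nothing below is executed against a device:

```shell
# Read-only / no-modify check modes, per filesystem checker:
for line in \
  "fsck.ext3 -n <device>    # e2fsck: open read-only, answer no to all fixes" \
  "fsck.ocfs2 -n <device>   # ocfs2 checker: report problems, change nothing" \
  "xfs_repair -n <device>   # XFS has no fsck proper; -n is its no-modify mode"
do echo "$line"; done
```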
[11:50] <shawarma> fabbione: Idea: I could perhaps stop the drbd replication and try fsck on the one disk and see if all goes well. If it nukes everything, I should be able to run off the other.
[11:50] <fabbione> shawarma: it's an option but i don't know how reliable drbd is.. last time i tried to use it, it blew up badly
[11:51] <fabbione> anyway lunch time
[11:51] <ivoks> all fabbione's talks end with 'lunch' :)
[11:52] <shawarma> fabbione: Yes, drbd is definitely the weakest link in that experiment.
[12:19] <shawarma> fabbione: Do you remember anything about drbd? Here's what I'm thinking about doing:
[12:19] <shawarma> On server A:
[12:20] <shawarma> drbdadm disconnect all (there's only that one drbd device)
[12:20] <shawarma> fsck.ocfs2 /dev/drbd0
[12:21] <fabbione> no i checked it only once a while ago to see if it was worth including in main
[12:21] <shawarma> If all goes well... I'm not quite sure what comes next. Probably log on to server B, run "drbdadm outdate all", go back to server A, and reconnect.
[12:21] <fabbione> and decided not to because it's bad
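[12:22] <editor's note> shawarma's plan, laid out end to end. Same dry-run guard as before so it can be read and tested safely; the resource name "all" works here only because there is a single drbd resource:

```shell
# Prints commands instead of running them unless DRY_RUN=0.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "would run: $*" || "$@"; }

# On server A: stop replication so server B keeps an untouched copy.
run drbdadm disconnect all
run fsck.ocfs2 /dev/drbd0        # the risky step; B is the fallback
# If the fsck succeeds, on server B mark its (now stale) copy outdated:
run drbdadm outdate all
# Back on server A, reconnect; A's checked data resyncs to B:
run drbdadm connect all
```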
[12:22] <shawarma> What would you have used? OpenAFS or something?
[12:23] <fabbione> shawarma: a real shared block device
[12:23] <fabbione> something like one of those cheap disk arrays
[12:23] <shawarma> Ah, right. And ocfs2?
[12:24] <fabbione> yes or gfs
[12:24] <fabbione> not gfs2
[12:24] <fabbione> it's not stable enough yet
[12:26] <shawarma> Ok.
[12:31] <shawarma> Ah, that's just frickin' typical. I finally conjure up the balls to shut down the web servers and unmount the ocfs2 fs, and it stops responding. It's probably in a kernel panic.
[12:32] <shawarma> And of course it's locked away in a hosting facility in Germany.
[12:34] <shawarma> I can't say I'm looking forward to unmounting the other one. This is really not my day.
[01:14] <shawarma> Pheew.. 
[04:38] <j1mc> woah . . . argonne national labs is running ubuntu server:  http://mirror.anl.gov/pub/centos/  (check out the note at the bottom of the page)
[05:09] <dragonriot> Ahh... Finally got it right.... Debian - didn't like it... Slackware - didn't like my RAID setup... Ubuntu Feisty - Easy as pie, and what's not to love... =)
[05:13] <dragonriot> lively bunch this morning
[06:55] <theacolyte> I'd mention that I just bought a car, but that that's OT :P
[06:55] <theacolyte> or we could talk about my excessive use of the word that above
[07:36] <dragonriot> when a server absolutely must have X installed on it, what is the recommended desktop environment?  GNOME, KDE, or XFCE?
[07:40] <theacolyte> well
[07:40] <theacolyte> xfce is lighter
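[07:41] <editor's note> Of the three, XFCE is indeed the lightest. A printed sketch of the usual Ubuntu commands; the package names are era-typical assumptions, not verified against a specific release:

```shell
# The two common routes to XFCE on a bare server install:
echo "sudo apt-get install xorg xfce4        # minimal XFCE"
echo "sudo apt-get install xubuntu-desktop   # full Xubuntu session"
```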
[09:48] <KurtKraut> How can I traceroute with UDP packets, like tcptraceroute does with TCP?
[09:59] <shawarma> KurtKraut: traceroute
[10:01] <mralphabet> KurtKraut: I believe traceroute uses UDP packets by default
[10:01] <KurtKraut> mralphabet, yes, you're right. I've checked it here. Thanks both shawarma and mralphabet 
[10:01] <shawarma> KurtKraut: any time
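[10:02] <editor's note> For future readers: on Linux, plain traceroute already probes with UDP by default, so it is the UDP counterpart of tcptraceroute out of the box. A printed cheat-sheet of the probe-type flags, as found in the modern Linux traceroute:

```shell
# traceroute probe types; UDP is the default, no flag needed:
for line in \
  "traceroute <host>        # UDP probes (default), dest ports 33434 and up" \
  "traceroute -I <host>     # ICMP echo probes instead" \
  "traceroute -T <host>     # TCP SYN probes, what tcptraceroute does"
do echo "$line"; done
```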