[02:50] <davekempe> hey anyone here played with lvm-snapshots much - I have a quick question about i/o memory usage with em
[02:51] <fabbione> davekempe: just ask...
[02:51] <fabbione> somebody might answer
[03:20] <davekempe> hey fabbione - thanks for all your hard work btw
[03:20] <fabbione> thanks
[03:20] <fabbione> davekempe: ask your question? :)
[03:20] <fabbione> i did play a bit with lvm snapshotting
[03:20] <davekempe> i am just having general troubles with performing lvm snapshots when a server is under heavy load. using breezy with xen 3.0.0 kernel and packages
[03:20] <fabbione> so i might be able to answer
[03:21] <davekempe> i get memory errors and lockups on removing the lvm snapshot
[03:21] <fabbione> we don't support xen sorry
[03:21] <davekempe> lvremove hangs indefinitely
[03:21] <davekempe> yeah i know
[03:21] <fabbione> there were a lot of bugfixes in dapper for snapshots
[03:21] <fabbione> you really want to try to upgrade
[03:22] <davekempe> yeah i upgraded one of my machines to dapper with the debian 2.6.16 kernel today and lvm snapshot seemed to behave better
[03:22] <davekempe> so thats basically it...
[03:22] <davekempe> I figured it was going to be upgrade to dapper - not that i mind
[03:22] <davekempe> just wanted a second opinion
[03:23] <fabbione> there was a bug where doing a lot of snapshots in sequence was giving problems
[03:23] <fabbione> that's how i got to test it
[03:23] <fabbione> high load and blablabla
[03:23] <davekempe> ahh
[03:23] <fabbione> we had to upgrade to lvm2 2.0.somethingmorenewthanwehad
[03:24] <davekempe> yeah this machine was rsyncing off an lvm snapshot and the next snapshot hung
[03:24] <davekempe> yeah 2.0.2 i think
[03:24] <davekempe> fixes it
[03:24] <fabbione> yeah something like that
[03:24] <fabbione> but .15 is good enough to do the job
[03:24] <fabbione> no need of .16
[03:38] <davekempe> yeah no xen for .15 though... but its cool. dapper it is :)
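For anyone following along, the snapshot-and-backup cycle davekempe describes looks roughly like this (volume group and LV names are made up for illustration; size the snapshot for the expected write rate on the origin while it exists):

```shell
# Create a 2 GB copy-on-write snapshot of /dev/vg0/data
# (vg0/data are hypothetical names; the snapshot only stores
# blocks that change on the origin after this point).
lvcreate --size 2G --snapshot --name data-snap /dev/vg0/data

# Back up from the frozen view, e.g. with rsync.
mount -o ro /dev/vg0/data-snap /mnt/snap
rsync -a /mnt/snap/ /backup/data/
umount /mnt/snap

# Drop the snapshot when done -- this (lvremove) is the step
# that hung under heavy load on the pre-dapper lvm2.
lvremove -f /dev/vg0/data-snap
```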
[02:38] <pab1> With VMware, I have no DNS name resolution in my guest.  The DNS servers are valid and I entered them manually.  Using bridged ethernet.  Anyone ever run into this or have any ideas?
[02:42] <vars> hey i need an ubuntu server
[02:42] <vars> here's the good news i'm an idiot
[02:55] <vars> hey
[02:55] <vars> beezly, how is this project coming?
[11:51] <beezly> vars: hi
[11:53] <beezly> slowly - right now, i'm just looking at what the impacts of running /etc over bzr are
[11:53] <beezly> i'd like to get dpkg to do a bzr commit after it makes changes, but i don't think there's a low-impact way of doing that - i suspect it needs changes to dpkg.
[12:01] <infinity> beezly: If you always use apt, you can do it in an apt post-run hook, but there's no sane way to do it in dpkg itself, no.
[12:01] <beezly> infinity: yeah - i thought that might be the case :/
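The apt post-run hook infinity mentions can be wired up with a `DPkg::Post-Invoke` stanza; the filename and commit message below are made up, and the `|| true` keeps a failed commit from aborting the apt run:

```text
// /etc/apt/apt.conf.d/90bzr-etc  (hypothetical filename)
DPkg::Post-Invoke { "cd /etc && bzr commit -q -m 'post-apt snapshot' || true"; };
```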
[03:52] <mgalvin> fabbione: ping?
[04:32] <fabbione> mgalvin: pong?
[04:45] <mgalvin> fabbione: hey
[04:46] <mgalvin> did you happen to get to read that email the other day?
[04:48] <fabbione> mgalvin: yes but i am quite busy with the release now
[04:48] <fabbione> also some of the info you are asking for is kind of company-only
[04:49] <mgalvin> fabbione: i understand, was just wondering about a few things since the meeting is coming up
[04:49] <fabbione> 1 and 2 i can't answer
[04:49] <fabbione> 3) yes, everything that is supported by a FC-HBA controller should work. I am using Emulex controllers here in my house with an old SAN system.
[04:50] <fabbione> 4) The suite is tested. i do the tests personally. got bug reports from other users and fixed them (positive reports from upstream too about our packaging).
[04:51] <fabbione> GFS is part of tests of 4
[04:51] <fabbione> OCFS2 is integrated into 4) and it works
[04:52] <fabbione> also there.. good test reports from upstream
[04:52] <fabbione> not just us
[04:52] <fabbione> let me slam this in the email
[04:52] <mgalvin> cool, that should be good enough for now... the CIO really just wants to hear someone else (in addition to me) say this stuff works
[04:53] <mgalvin> thanks a lot fabbione
[04:53] <fabbione> well i did the packaging..
[04:53] <fabbione> and tested them
[04:53] <fabbione> if that's enough....
[04:53] <mgalvin> when you refer to upstream, does that mean redhat?
[04:54] <fabbione> for the cluster suite and GFS I mean some of the upstream developers of the suite (ex sistina) that now work for RH
[04:55] <fabbione> for OCFS2 i mean a couple of developers that work on it
[04:55] <mgalvin> k cool
[04:55] <fabbione> but it's not like you will write to info@redhat.com and get this information
[04:55] <mgalvin> i know :)
[04:55] <fabbione> ok
[04:55] <fabbione> just so we understand each other
[04:55] <mgalvin> yup
[04:56] <mgalvin> hopefully jane will be able to get back to me before the meeting, the CIO is actually more interested in some success stories... but this info you are giving me certainly helps too :)
[04:57] <fabbione> just to make it clear.. i don't know the answer to 1) and 2)
[04:57] <fabbione> sometimes there are success stories that are not our customers
[04:58] <mgalvin> yea, i know, no prob, mdz seemed to think jane might be able to help us there, no worries
[04:58] <fabbione> ok
[04:58] <mgalvin> thanks again... i'll let you get back to work :)
[04:58] <fabbione> i am not worried
[04:58] <fabbione> your CIO is welcome to contact me for more tech info if he wants them
[04:58] <mgalvin> k cool, i will let him know you offered
[04:58] <fabbione> mgalvin: or show him my office: http://people.ubuntu.com/~fabbione/office/
[04:59] <fabbione> that's where the cluster suite is packaged and tested
[04:59] <fabbione> ;)
[04:59] <mgalvin> nice!!!
[04:59] <mgalvin> i want one
[10:12] <Burgwork> hey all
[10:12] <infinity> Hey Corey.
[10:12] <Burgwork> I am about to write you all a shiny new page for the website about why the ubuntu server is great
[10:12] <Burgwork> but I need some ideas
[10:13] <infinity> Can you ask me for great ideas tomorrow, when I've had some sleep (and am not inebriated)? :)
[10:13] <Burgwork> infinity, sure
[10:13] <Burgwork> https://wiki.ubuntu.com/Website/Desktop <-- something similar to this
[04:41] <BlankB> Is there a place that describes the differences in the linux-image.2.6.15.-23-XXX images? Like the difference between -server and -server-bigiron?
[04:49] <mgalvin> BlankB: there is a brief explanation at http://www.ubuntu.com/testing/dapperbeta#head-0267ca58bb4998011f8a1749714aa566d3fd918c
[04:52] <BlankB> mgalvin: Looking at that now...
[04:54] <BlankB> mgalvin: That is probably what I was looking for.
[04:54] <BlankB> I should look at the differences of the two kernel configs for them and see what the real differences are.
[06:22] <thefish> anyone got any opinions on which to use between xen and vmware-server?
[06:23] <beezly> thefish: imho, xen is more elegant, if you can do it. vmware-server is easy.
[06:24] <spike> thefish: considering vmware-server is a vmware playground released with the sole intention of not losing too much visibility against Xen's increasing popularity, Xen.
[06:24] <beezly> i would go with Xen too, assuming you can
[06:24] <thefish> mmm
[06:25] <thefish> i have only used vmware, and it's as easy as peeing into a well. tried xen ages ago and pulled a lot of hair, i guess it has got a bit easier though
[06:25] <spike> vmware-server is and always will be "beta". Xen is meant to be run in production
[06:25] <thefish> whats it like to install on ubuntu?
[06:25] <beezly> spike: is that true?
[06:25] <spike> beezly: it is
[06:25] <beezly> my understanding is that vmware server is to replace vmware GSX
[06:26] <thefish> they reckon on their site that Q2 will be final for vmware server, and they will start selling support and maintenance for it
[06:26] <thefish> beezly: thats what vmware.com says
[06:26] <beezly> thefish: that's my understanding too
[06:26] <thefish> i like the idea of xen though, a lot of good hackers working on it
[06:26] <thefish> but i dont want a server to go down and not have a clue about where to start
[06:27] <beezly> thefish: xen is architecturally much better. it has far lower overhead than vmware.
[06:27] <thefish> iirc xen vms are moveable to other hosts?
[06:27] <spike> speaking of overhead, I like openVZ
[06:27] <beezly> thefish: yes - whilst keeping them running
[06:27] <thefish> that is sexy
[06:27] <spike> thefish: as long as they are on the same subnet, yes
[06:27] <thefish> k
[06:27] <beezly> i'm not familiar with openvz
[06:27] <thefish> swsoft/plesk product no?
[06:28] <spike> beezly: well, when it comes to overhead, the point is you're not running a kernel per guest, which saves a lot
[06:29] <spike> but architecturally it's completely different, sw Vs hw virtualization
[06:29] <beezly> spike: ah, I see.. it's quite like Solaris Zones.
[06:29] <spike> yes, exactly
[06:31] <spike> about vmware-server, when it came out afaik it wasn't planned to replace GSX, things might have changed, yet I hardly believe it will ever be properly supported
[06:31] <thefish> apparently there are some nice new cpus coming out with much more support for virtualising on i386
[06:32] <beezly> thefish: that's true - both AMD and Intel have chips coming along (I think they are due this year) with instructions to support virtualisation.
[06:32] <beezly> i'm not too sure what the impact of that is though.
[06:32] <thefish> running windows in xen
[06:32] <thefish> without modification
[06:33] <spike> unmodified guests
[06:34] <beezly> i'm aware of that, but i'm unsure how it achieves that. I've not looked into it that much
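The live migration beezly and spike touch on is driven from the Xen 3.0 control tools; the guest name and destination host below are made up, and the destination's xend must accept relocations (configured in /etc/xen/xend-config.sxp):

```shell
# On the destination host, allow incoming relocations in
# /etc/xen/xend-config.sxp, then restart xend:
#   (xend-relocation-server yes)
#   (xend-relocation-port 8002)

# On the source host: move the running guest "webvm"
# (hypothetical name) to desthost while it keeps running.
xm migrate --live webvm desthost
```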
[01:02] <Hardtrac> hello
[01:02] <Hardtrac> can any1 help me?
[11:08] <kermit> http://releases.ubuntu.com/dapper/
[03:20] <shawarma> Just did an upgrade to Dapper on a server with software RAID.. I got a mail with subject: "FW: Debconf: Configuring mdadm -- Initialise the superblock if you reuse hard disks"
[03:20] <shawarma> Does that ring a bell to anyone?
[03:22] <shawarma> It says that if I'm using a RAID array from an earlier installation I should zero the superblock...
[03:22] <shawarma> I'm not sure what to make of that.
[04:12] <trs80> shawarma: it's talking about if you move hard disks used in raid between machines
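The debconf note trs80 explains concerns stale RAID metadata: a disk that was once part of an md array keeps its superblock, so a new machine may try to assemble the old array. A rough sketch (the device name is made up, and `--zero-superblock` destroys the RAID membership info, so examine first):

```shell
# Inspect whatever md superblock is still on the partition.
mdadm --examine /dev/sdb1

# If it belongs to an old array you no longer want
# auto-assembled, wipe the superblock. The data area of the
# partition is untouched; only the array metadata is erased.
mdadm --zero-superblock /dev/sdb1
```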
[10:45] <xerophyte> is there anything like Fedora Directory server for Ubuntu ???
[09:24] <crazy_penguin> hi all!
[09:25] <crazy_penguin> small question if i may. is the apache package for ubuntu preconfigured to a certain level or is it raw?
[09:39] <lionelp> crazy_penguin: like Debian package
[09:39] <lionelp> it serves localhost (/var/www)
[09:39] <lionelp> in most cases, it will need a little work from the sysadmin :)
[09:40] <crazy_penguin> lionelp: ok. thx:)
[11:57] <NobodHere> hey all...is this an OK place to ask about jumbo frame problems?  I have a feeling #ubuntu wouldn't be much help :-|
[12:28] <NobodHere_> anybody home?
[05:42] <gpd>  /var/run is mounted as varrun type in dapper
[05:42] <gpd> which causes ln /var/run/foo /var/spool/bar to fail
[05:42] <gpd> for a chroot postfix -> courier-authdaemon
[05:42] <gpd> no idea where to start on this one...
[05:46] <jsgotangco> its pretty new :)
[05:46] <gpd> not yet released - some might say :)
[05:47] <fabbione> gpd: how old is your installation?
[05:47] <fabbione> tmpfs on /var/run type tmpfs (rw)
[05:47] <fabbione> dapper updated as of today
[05:47] <fabbione> everything has been reverted to be tmpfs
[05:48] <jsgotangco> actually this would be the 2nd ubuntu-server release
[05:48] <jsgotangco> (officially)
[05:49] <gpd> let me check
[05:50] <gpd> varrun       tmpfs       57488        76     57412   1% /var/run
[05:50] <gpd> it's not so much the type that is the problem but the different device
[05:50] <gpd> and i am current with dapper dist-upgrade
[05:51] <fabbione> gpd: file a bug on launchpad, add infos, conffiles etc.
[05:51] <fabbione> assign it to adconrad@ubuntu.com
[05:51] <gpd> again - not sure if it is a bug or if i am just not doing it correctly
[05:51] <fabbione> gpd: ok, start filing a bug so that somebody will start looking at it
[05:52] <gpd> chroot postfix and courier-authdaemon normally talk via:
[05:52] <gpd>  /var/spool/postfix/var/run/courier/authdaemon/socket
[05:52] <fabbione> timelimit for any upload is tomorrow
[05:52] <gpd>  /var/run/courier/authdaemon/socket
[05:52] <fabbione> so you better file a bug or it will pass unseen
[05:52] <gpd> ok will do
[05:55] <gpd> what package?
[05:56] <gpd> initscripts or courier-foo
[05:57] <fabbione> hmm
[05:57] <fabbione> courier-foo
[06:03] <gpd> https://launchpad.net/distros/ubuntu/+bug/46858
[06:03] <gpd> my first bug report - probably useless :(
[07:52] <infinity> gpd: Changing that from a hardlink to a symlink should solve the problem.
[07:52] <infinity> gpd: I can't test that locally, though.
[07:53] <infinity> gpd: If I make fixed packages, can you test them for me before I upload them?
[07:53] <gpd> symlink won't work across a chroot :(
[07:55] <fabbione> gpd: please try what infinity asked
[07:55] <fabbione> if the symlink was working before, it will work later
[07:55] <fabbione> it's a matter of creating the proper one
[07:55] <gpd> no ln was working
[07:55] <gpd> not ln -s
[07:55] <gpd> very different
[07:55] <fabbione> i know the diff
[07:55] <infinity> No, he's rish.
[07:55] <infinity> right, to.
[07:55] <infinity> too,
[07:56] <fabbione> hmm
[07:56] <infinity> Argh.  Just woke up.
[07:56] <fabbione> oh well i need to get ready to fly to london
[07:56] <fabbione> but a -f might solve
[07:56] <fabbione> ln -f
[07:56] <infinity> No, dude.
[07:56] <infinity> A symlink can't work across chroots, and a hardlink can't be done across devices.
[07:56] <gpd> fabbione: if you are in a chroot you cannot see outside it
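infinity's constraint is worth unpacking: a hard link is a second directory entry for the same inode, so both names must live on one filesystem, while a symlink stores a path string that is resolved at open time relative to the process's root — which is why an absolute symlink dangles inside a chroot. A quick demonstration in a scratch directory (GNU `stat` assumed):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
echo hello > "$dir/orig"

# Hard link: two names for the same inode; only possible
# when both names sit on one filesystem.
ln "$dir/orig" "$dir/hard"
stat -c %i "$dir/orig"
stat -c %i "$dir/hard"     # same inode number as orig

# Symlink: stores the target *path*, resolved at open time --
# inside a chroot that absolute path points somewhere else.
ln -s "$dir/orig" "$dir/soft"
readlink "$dir/soft"       # prints the stored path string

rm -rf "$dir"
```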
[07:56] <fabbione> go scott!
[07:57] <infinity> No worries.  It's fixable.
[07:57] <fabbione> i am not worried
[07:57] <fabbione> gpd: right...
[07:57] <gpd> is /var/run mounted as varrun a recent idea?
[07:58] <gpd> I don't understand what it achieves?
[07:58] <gpd> is it a security thing?
[07:58] <infinity> gpd: mkdir -p /var/spool/courier/authdaemon/ ; ln /var/spool/courier/authdaemon/socket /var/spool/postfix/var/run/courier/authdaemon/socket ; ln -s /var/run/courier/authdaemon/socket /var/spool/courier/authdaemon/socket
[07:58] <infinity> gpd: If that makes it work, I'll just do that.
[07:58] <fabbione> what i don't understand is why gpd keeps having varrun fs
[07:59] <fabbione> it was changed back to tmpfs naming, wasn't it?
[07:59] <fabbione> oh well
[07:59] <fabbione> whatever..
[07:59] <infinity> fabbione: He doesn't.  varrun is the name of the mount.
[07:59] <infinity> varrun on /var/run type tmpfs (rw)
[07:59] <fabbione> bleah
[07:59] <gpd> varrun       tmpfs       57488        76     57412   1% /var/run
[08:00] <fabbione> let me tell you something
[08:00] <infinity> gpd: Can you try the above for me?
[08:00] <gpd> that was df -T
[08:00] <gpd> infinity: yes - one sec
[08:00] <fabbione> i am supposed to be the server project leader to drive infinity ... and given my management position, i am NOT supposed to understand shit :P
[08:00] <infinity> fabbione: *grin*
[08:00] <fabbione> infinity: it's all yours :) have fun ;)
[08:01] <fabbione> i need to fly to london
[08:01] <fabbione> cya later
[08:01] <infinity> gpd: /var/run on tmpfs makes sense for a varienty of reasons.  It's just a bit of a pain to have to transition everything, that's all.  And this corner case is a fun one we didn't spot.  That's all.
[08:01] <infinity> variety, too.  I really shouldn't even try typing right after I wake up.
[08:02] <ajmitch> hm, redhat decided to rewrite authconfig in python
[08:02] <ajmitch> how useful
[08:05] <gpd> infinity: gpd@www:~$ sudo  mkdir -p /var/spool/courier/authdaemon/
[08:05] <gpd> gpd@www:~$ sudo ln /var/spool/courier/authdaemon/socket /var/spool/postfix/var/run/courier/authdaemon/socket
[08:05] <gpd> ln: accessing `/var/spool/courier/authdaemon/socket': No such file or directory
[08:06] <gpd> the original socket is in: /var/run/courier/authdaemon/socket
[08:06] <infinity> Right, what do I need installed to test this locally? :)
[08:06] <infinity> Oh, the socket is in /var/run?
[08:06] <infinity> Right, I should have read more closely.
[08:07] <gpd> postfix, courier-authdaemon, rest of courier
[08:08] <infinity> What's responsible for doing the above linking magic?
[08:08] <infinity> Oh, a HOWTO... We don't ship it like this?
[08:08] <gpd> i had to add it manually to /etc/init.d/courier-authdaemon
[08:08] <gpd> correct
[08:09] <gpd> this is to allow courier to work with chroot postfix (which you do ship as)
[08:09] <infinity> Okay, I see in the howto, lots of ln magic.
[08:09] <gpd> i would probably not worry too much for the release
[08:09] <infinity> You'd probably get away with circumventing all of that (for courier, mysql, etc), by just bindmounting /var/run to /var/spool/postfix/var/run
[08:09] <gpd> i was just encouraged to post the bug
[08:10] <gpd> you might be right!
[08:13] <gpd> nope - didn't work
[08:13] <gpd> /var/run               57M   76K   57M   1% /var/spool/postfix/var/run
[08:13] <gpd> nevermind
[08:13] <gpd> have to go - thanks for the help
[08:13] <infinity> Nevermind, as in "nevermind, it did work", or "nevermind, it didn't"?
[08:14] <infinity> Certainly looks like it should work.
[08:14] <infinity> (And if so, I'd recommend you update that wiki to reflect that)
[07:06] <gpd> infinity: thanks for your help - I got your suggestion to work after a minor chmod on a directory :)
[07:06] <gpd> chmod 755 /var/spool/postfix/var/run/courier/authdaemon
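infinity's bind-mount idea from earlier in the thread generalizes beyond courier: rather than hard-linking each daemon's socket into the postfix chroot, the real /var/run can be made visible inside it. A sketch using the Ubuntu default paths discussed above (needs root; the fstab line is what makes it survive a reboot):

```shell
# Make the real /var/run visible inside the postfix chroot, so
# chrooted smtpd can reach /var/run/courier/authdaemon/socket.
mkdir -p /var/spool/postfix/var/run
mount --bind /var/run /var/spool/postfix/var/run

# Equivalent /etc/fstab entry to persist across reboots:
#   /var/run  /var/spool/postfix/var/run  none  bind  0  0
```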