[00:00] <jmedina> :S
[00:00] <FFForever> :D
[00:00] <jmedina> Im from the broadband generation :D
[00:00] <FFForever> same here
[00:01] <FFForever> but i am not at home at the moment so i am using my phone + usb cable + netzero :)
[00:01] <jmedina> I never used a modem for internet connection, only for fax machines
[00:02] <FFForever> i have never used a modem for faxing
[00:02] <FFForever> i use efax
[00:05] <jmedina> well, last time I used a fax was 5 years ago, when I configured those machines :D
[00:05] <jmedina> fax is evil!!!!!
[00:05] <FFForever> lol
[00:05] <FFForever> yeah sadly businesses live by them
[00:05] <billybigrigger> netzero is still around?
[00:05] <FFForever> a lot of stuff is moving to email but not everything
[00:05] <billybigrigger> is it still free?
[00:06] <FFForever> yeah
[00:06] <billybigrigger> right on
[00:07] <FFForever> http://www.netzero.net/start/landing.do?page=www/free/index
[00:08] <billybigrigger> hmm
[00:14] <orudie> so yeah
[00:15] <orudie> still cant find a way to connect with ssh key when using chroot in sshd_config
[00:25] <jmedina> orudie: have you contacted openssh people?
[00:25] <jmedina>   /j #openssh
[00:25] <jmedina> :)
[00:33] <orudie> im in there
[00:33] <orudie> they are not saying anything
[00:33] <orudie> this is really weird
[00:37] <pmatulis> orudie: what kind of errors are you getting when you try this?
[00:37] <orudie> pmatulis, the problem is that its not seeing ssh key
[00:40] <pmatulis> orudie: are you putting all the key location info in the chroot area of sshd_config?
[00:40] <orudie> pmatulis, that is exactly what I am trying to figure out, is where to put the authorized_keys for this user
[00:44] <pmatulis> orudie: put it in the chroot i suppose.  what i meant was, are you putting in settings like 'AuthorizedKeysFile' below 'Match'?
[00:45] <pmatulis> orudie: i'm going to try this tomorrow.  are you here often?
[00:47] <orudie> pmatulis, yes every day
[00:47] <pmatulis> orudie: i'll ping you
[00:48] <orudie> pmatulis, ok
[00:53] <Bullterd> Hey All
[00:53] <Bullterd> Ive just finished off my hosting cluster
[00:54] <Bullterd> ive setup rsync to sync the /etc/apache2 folder
[00:54] <Bullterd> how do i get apache2 to reload every so often so that it picks up the new configs ?
[01:19] <pmatulis> orudie: still there?
[01:20] <orudie> pmatulis, yup
[01:20] <pmatulis> orudie: i just got it to work at home.  nothing special done.  not sure where your problem is
[01:20] <orudie> pmatulis, with ssh key ?
[01:21] <pmatulis> sshd[23736]: Accepted publickey for chrooted_user from 192.168.3.101 port 31007 ssh2
[01:21] <orudie> you  have password login disabled ?
[01:21] <pmatulis> orudie: yup
[01:21] <orudie> ok, whats the path to your authorized_keys file ?
[01:22] <pmatulis> like i said, nothing special done.
[01:22] <orudie> can you tell me please ? i have been stuck on this
[01:23] <pmatulis> that file is in .ssh directory of the chroot directory, which also happens to be the user's home
[01:23] <twb> If it accepted the public key, then he got in.
[01:23] <twb> The problem could be that bash isn't installed in the chroot.
[01:23] <twb> (And bash is his default shell.)
[01:23] <pmatulis> i surely hope he set up the shell
[01:23] <twb> pmatulis: /home/foo is his chroot?
[01:24] <orudie> yes
[01:24] <twb> pmatulis: and he's running rsync with --rsh=ssh?
[01:24] <pmatulis> /home/chrooted_user (user is chrooted_user)
[01:24] <pmatulis> so /home/chrooted_user/.ssh/authorized_keys
[01:25] <pmatulis> maybe your chroot directory is not the home directory?
[01:25] <twb> What are you trying to do with this chrooted ssh session?
[01:25] <orudie> sftp
[01:25] <twb> Hmm.
[01:26] <pmatulis> orudie: anything else before i leave?
[01:26] <twb> I suggest you talk to #openssh about it, since I don't know if that's supposed to work, or what to do to debug it.
[01:31] <matthewmpp> Hi, I am new to servers.  I added a user to my server by typing: useradd -m username.  This created the user and the home directory.  Then I used: usermod -a -G admin,adm,group1,etc username. This added the new user to existing groups.  Next, I typed passwd newuser as root, which allowed me to set a password for the new user.  The problem I have is that when I login as the newuser everything...
[01:31] <matthewmpp> ...in front of "$" is missing. It should show something like username@hostcomputer:directory$.  Thanks in advance, any help would be appreciated. - MatthewMPP
[01:32] <matthewmpp> ping
[01:37] <oh_noes> Anyone installed Zend Optimizer?  In ./install.sh it's asking for apache httpd.  However apache2 doesnt have it
[01:37] <twb> matthewmpp: you should be using adduser, not useradd.
[01:37] <twb> matthewmpp: the former is a high-level wrapper that will handle most of the work.
[01:38] <twb> matthewmpp: the reason "everything in front of the $ is missing" is because that is the default behaviour for /bin/sh, which is the default shell.
[01:38] <twb> matthewmpp: only if you use adduser(8) will /etc/adduser.conf be used, and this is what sets the default shell to bash, and populates the new home directory with the contents of /etc/skel.
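A sketch of the difference twb is pointing at (the username here is made up; both commands need root):

```
# Low-level tool: creates the account but leaves the default shell as /bin/sh,
# which is why the prompt shows only "$"
useradd -m newuser

# High-level Debian/Ubuntu wrapper: reads /etc/adduser.conf, sets the shell
# to /bin/bash, copies /etc/skel into the new home, and prompts for a password
adduser newuser
```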
[01:38] <orudie> pmatulis, so the path is /home/chrooted_user/.ssh/authorized_keys , why doesnt it wanna work for me then ?
[01:40] <matthewmpp> twb: cool.
[01:40] <twb> orudie: did you read the log files?
[01:41] <matthewmpp> twb: what syntax do I use?  adduser -m newuser?
[01:41] <twb> matthewmpp: RTFM
[01:41] <orudie> twb, auth.log does not produce any new logs when i try to connect
[01:41] <twb> orudie: is sshd running?
[01:42] <pmatulis> orudie: maybe you have bad file permissions.
[01:42] <pmatulis> orudie: .ssh in particular should be 0700
[01:42] <twb> pmatulis: the log will tell you if that is the case.
[01:42] <orudie> you know what? i'll try to create a new user and start over , i think i messed with this particular user account way too much trying to figure this out
[01:42] <orudie> i'll let you know what happens
[01:43] <pmatulis> orudie: good idea
[01:43] <twb> oh_noes: sounds like your install.sh assumes RHEL; I suggest you talk to the Zend people about it.
[01:44] <pmatulis> orudie: also, make sure you can connect with password before going to key authentication
[01:46] <orudie> pmatulis, i tested on 2 boxes, one with password the other with ssh key
[01:46] <twb> as there are no entries in auth.log, there is something seriously wrong with your sshd service.  I would investigate that before trying to get the client side working.
[01:46] <orudie> pmatulis, the one with password worked like a charm , took me 2 seconds to set it up
[01:46] <pmatulis> orudie: ok
[01:46] <orudie> twb, you are wrong
[01:47] <orudie> twb, the other user with different ssh key works very well, its my company's box
[01:47] <pmatulis> orudie: i got a quick recipe for this if you're interested, you might be missing something small
[01:47] <orudie> ok
[01:47] <pmatulis> orudie: will msg
[01:48] <twb> orudie: if you are not seeing rejection notices in auth.log for failed login attempts, then either the service is not running, it is not writing to auth.log, or your client is not connecting to the ssh server.
[01:48] <twb> I suppose that could indicate a failure in a firewall or a misconfigured client.
[01:48] <orudie> twb, are you familiar with ssh keys ?
[01:48] <twb> orudie: yes.
[01:49] <twb> Reading the log files is *the* way to find out why your connection was rejected by ssh.  It deliberately does not provide any detailed information to the client.
[01:49] <orudie> twb, trust me there is nothing wrong with sshd
[01:50] <twb> With respect, you're in here asking for advice.  That's the advice I'm giving.
[01:52] <orudie> twb, hold on
[01:52] <orudie> ok
[01:52] <orudie> to begin, here is the copy paste from my sshd_config
[01:52] <orudie> http://pastebin.com/m87120f4
[01:53] <orudie> now i'm looking here http://www.debian-administration.org/articles/590
[01:53] <pmatulis> orudie: did you at least try to just ssh (not sftp)?
[01:54] <orudie> i will create user and add him to group sftponly
[01:54] <orudie> pmatulis, yeah man
[01:54] <orudie> i did
[01:54] <twb> I agree, I'd also get basic SSH working first.
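The Match-based chroot/sftp setup being discussed (as in the debian-administration.org article) looks roughly like this; the group name sftponly comes from the conversation, the directives are standard OpenSSH, and this is a sketch rather than orudie's actual config:

```
# /etc/ssh/sshd_config
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory %h            # chroot to the user's home directory
    ForceCommand internal-sftp    # no shell needed inside the chroot
    AllowTcpForwarding no
    X11Forwarding no
```

AuthorizedKeysFile stays at its default (.ssh/authorized_keys relative to the home directory), so the key lives at /home/chrooted_user/.ssh/authorized_keys as pmatulis says. Note that sshd refuses the login silently-looking failures if the chroot directory is not root-owned or is group/world-writable, which is worth checking alongside the auth.log advice above.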
[02:43] <orudie> pmatulis, around ?
[02:43] <twb> From #upstart, which is asleep:
[02:43] <twb> 11:42 <twb> I am looking at /etc/event.d/ on an Ubuntu Server 8.04 system.  Can someone explain why tty1 and tty2 differ in their start/stop parameters?  It looks like tty2 through 6 are only active for runlevels 2 and 3.
[02:58] <ha1331_> how can I prevent ssh session from terminating because of timeout?
[02:59] <ha1331_> I know I need to add soething to ssh_config, but what?
[03:04] <w3wsrmn> ha1331_: you could set ServerAliveInterval in ssh_config on your client, and/or ClientAliveInterval in sshd_config on the server
[03:05] <ha1331_> w3wsrmn: are the units for the value seconds?
[03:05] <w3wsrmn> ha1331_: yup
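For reference, the two settings w3wsrmn mentions, with values in seconds:

```
# Client side: ~/.ssh/config or /etc/ssh/ssh_config
Host *
    ServerAliveInterval 60    # send a keepalive probe after 60s of silence

# Server side: /etc/ssh/sshd_config
ClientAliveInterval 60
ClientAliveCountMax 3         # disconnect after 3 unanswered probes
```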
[03:06] <twb> ha1331_: I cheat and use -o BatchMode=yes
[03:07] <ha1331_> twb: what does that do?
[03:07] <twb> It enables TCP keepalives.
[03:07] <twb> As a side effect, I mean
[03:07] <twb> Typically if I want keepalives, it's because the connection is unattended, e.g. ssh -w
[03:08] <ha1331_> twb: that setting is applicable also for sshfs?
[03:08] <twb> sshfs should do it automatically IIRC.
[03:09] <ha1331_> IIRC?
[03:09] <twb> Perhaps you want -o reconnect
[03:10] <ha1331_> oh: IIRC = If I Recall/Remember Correctly
[03:11] <ha1331_> knew lol already
[03:11] <ha1331_> :)
[03:59] <FFForever> What is the best way to do a jailed shell
[04:04] <twb> FFForever: OpenVZ
[04:04] <FFForever> i am already on a vps :P
[04:05] <twb> Then stop.  You are done.
[04:06] <FFForever> i want to give users on my system a jailed shell
[04:06] <twb> Good luck with that.
[04:06] <FFForever> i know there is a way
[04:07] <twb> AFAIK there's no particularly secure way.
[04:09] <FFForever> there has to be a better way than to just give them a regular shell
[04:09] <twb> Well, yes, but basically what you end up doing is approximating a VPS system in userland, insecurely.
[04:10] <FFForever> but they will only have access to cp, mv, rm, uptime, nano, how can they destroy that?
[04:10] <twb> FFForever: if that's all they have access to, how will they log in?
[04:11] <FFForever> what do they need to login?
[04:11] <twb> FFForever: well, login(8) and sh(1).
[04:11] <FFForever> not bash
[04:11] <twb> And access to /dev/pts
[04:12] <FFForever> (8)?
[04:12] <twb> login is a chapter eight program.
[04:12] <twb> Oops, it's not
[04:12] <FFForever> what is a chapter program?.
[04:12] <twb> man man.
[04:49] <ixpl> hey
[04:50] <ixpl> i need to know if it is possible to run ettercap on my remote box via ssh
[04:50] <ixpl> i got some errors and just wondering if there's a workaround
[04:52] <ixpl> possible to run ettercap remotely via ssh?
[05:10] <matthewmpp> Hi, In ubuntu-server 9.04 is it okay to edit the fstab file manually?
[05:11] <matthewmpp> It does look like the standard config file I am used to.
[05:11] <matthewmpp> ping
[05:12] <matthewmpp> mistake: it does not look like the standard config file. :-(
[06:00] <jmarsden> matthewmpp: man 5 fstab  # describes its format
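Editing /etc/fstab by hand is normal on ubuntu-server; the format described in fstab(5) is six whitespace-separated fields per line (the UUID below is made up for illustration):

```
# <file system>                            <mount point>  <type>  <options>          <dump> <pass>
UUID=0a1b2c3d-0000-0000-0000-000000000000  /              ext3    errors=remount-ro  0      1
/dev/sdb1                                  /home          ext3    defaults           0      2
proc                                       /proc          proc    defaults           0      0
```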
[06:03] <FFForever> what is a good tutorial for quotas? also, what happens when a user runs out?
[06:10] <matthewmpp> yeah, i found an answer. thanks though.
[06:22] <FFForever> root@chr1831:~# edquota -u meklort -f PRGMRDISK1, edquota: Cannot stat() given mountpoint PRGMRDISK1: No such file or directory, any ideas?
[06:36] <TimReichhart> can anybody help me out is there anyway that I can hide port 8080 on url
[06:44] <twb> TimReichhart: "hide" it how?
[06:44] <TimReichhart> instead of going to mail.domain.com:8080/rc cant I just put it like domain.com/rc
[06:45] <TimReichhart> the webmail and webserver are on 2 different servers
[06:45] <twb> TimReichhart: that would involve putting a proxy webserver on port 80
[06:46] <twb> e.g. mod_proxy or mod_rewrite
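A minimal mod_proxy sketch of what twb suggests, assuming the webmail really is at mail.domain.com:8080 as TimReichhart says (enable the modules first with `a2enmod proxy proxy_http`):

```
# In the port-80 VirtualHost for domain.com
ProxyPass        /rc http://mail.domain.com:8080/rc
ProxyPassReverse /rc http://mail.domain.com:8080/rc
```

After reloading apache2, requests to domain.com/rc are forwarded to the webmail server and the port never appears in the URL.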
[06:46] <TimReichhart> ok
[06:46] <twb> FFForever: PRGMRDISK1 doesn't sound like a filesystem
[06:50] <ball> What tape backup software can I use with Ubuntu Server?
[06:50] <ball> tar?
[06:53] <twb> ball: tape is super yuk
[06:53] <twb> Unless you already have your tape drive and hardware, get a HDD or DVD solution instead.
[06:54] <TimReichhart> so twb can u show me what a mod_rewrite looks like
[06:54] <twb> TimReichhart: no.
[06:54] <TimReichhart> alright
[06:56] <ball> twb: it's already in place (and for many systems, DVD simply isn't large enough)
[06:57] <ball> the drive shows up as st0
[06:57] <ball> ...but my usual tar incantation doesn't work.
[06:57] <ball> I lack practice with Linux
[06:57] <twb> ball: right; you'd use multiple DVDs for each backup.
[06:58] <twb> But anyway, you have tape infrastructure already.
[06:58] <twb> I don't know much about the nasty details of tape, but I would start by looking at amanda (the "overkill" end of the spectrum) and tar (the "underkill" end of the spectrum).
[06:59]  * ball tries tar again
[06:59] <ball> ah, I needed the "-" for Linux
[07:00] <twb> Theoretically, TAPE=/dev/st0 tar c /etc/ or similar.
[07:00] <twb> Which "-"?
[07:00] <ball> "tar -tf /dev/st0"
[07:00] <ball> I come from a world where there is no - there.
[07:00] <twb> You shouldn't normally need the - there.
[07:01] <twb> Unless you have stuff before it, e.g. you can't say "tar cf /dev/st0 C /etc ppp" -- you have to say "tar cf /dev/st0 -C /etc ppp"
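The "-" confusion can be reproduced without a tape drive: GNU tar accepts both the old bundled form ("tar tf ...") and the dashed form ("tar -tf ..."), but option-like arguments after the file list do need the dash, as twb says. Paths below are throwaway:

```shell
# Build a tiny archive in /tmp instead of writing to /dev/st0
mkdir -p /tmp/tardemo/src
echo hello > /tmp/tardemo/src/file.txt

# Old-style and dashed invocations are equivalent in GNU tar
tar cf /tmp/tardemo/backup.tar -C /tmp/tardemo/src file.txt
tar tf /tmp/tardemo/backup.tar     # table of contents: file.txt
tar -tf /tmp/tardemo/backup.tar    # same listing, with the dash
```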
[07:03] <ball> I was trying *t*f, to get a table of contents.
[07:07] <twb> Yes, that should work.
[07:08] <twb> I don't know why it didn't.
[07:09] <ball> I'm just backing up some files now, will compare checksums after a restore.
[07:23] <twb> If you're making WORM-type backups, --lzma or -j might be a nice idea to save space, at the cost of extra CPU during the backup
[07:24] <ball> straight tar is fine
[07:26] <ball> Looks promising too.
[07:26] <ball> it was just the "-" that threw me.
[07:29] <twb> OK, cool.
[07:31] <ball> Hmm... seems like I have to keep power cycling the drive though.  That's not good.
[07:33] <twb> I'm afraid I can't help with that.
[07:35] <oh_noes> Anyone awake to help me with a mdadm RAID10 problem?
[07:36] <ball> damnit.
[07:52] <_ruben> oh_noes: not unless you give us some more details on the problem
[07:53] <oh_noes> I posted my problem here:  http://forums.overclockers.com.au/showthread.php?t=787262
[07:53] <oh_noes> forum should be open to the world
[07:53] <oh_noes> but basically, mdadm has dropped my md5 RAID10 volume and I have no idea what next steps to try
[07:54] <ball> time to reach for your backup tapes perhaps.
[07:59] <oh_noes> Why?  All 4 disks are live and sdd1 confirms they are healthy
[07:59] <oh_noes> but mdadm has dropped the disks
[07:59] <oh_noes> (maybe its just trying to prove why it doesnt belong in the enterprise space)
[07:59] <ball> could be.
[08:01] <_ruben> looks like all 4 are marked as spare
[08:03] <_ruben> and the 'fault removed' lines sound scary as well
[08:05] <ball> ouch.
[08:13] <twb> From what I've seen of OCAU weenies, I wouldn't trust them to do ANYTHING linux-related.
[08:14] <oh_noes> I don't really have a choice, I bum around on that forum so i might as well ask
[08:14] <twb> YMMV, but I tend to think of them as mainly being hardware weenies -- particularly Windows gaming hardware.
[08:14] <twb> Fair enough.
[08:14] <oh_noes> twb: you dont have a sec to see the state of my madm in that post?
[08:14] <twb> Incidentally, why are you using RAID10 instead of RAID5?  Are the disk pairs of different sizes?
[08:15] <twb> I make a point of not reading web forums, because they seem to have deliberately poor accessibility.
[08:16] <ball> twb: RAID1+0 may be lighter in terms of CPU load
[08:16] <twb> ball: I suppose...
[08:17] <ball> (slightly ;-)
[08:17] <twb> I'd have to think about the failure more for RAID1+0, but I'd be more scared of it than RAID5 or 6.
[08:17] <twb> Assuming by 0 you mean striping and not mere catenation
[08:18] <_ruben> raid10 is atleast as safe as raid5
[08:18] <ball> twb: usually it's taken to mean a stripe over mirrored pairs of disks.
[08:18] <_ruben> raid10 can sustain multiple diskfailures, as long as they're not part of the same raid1 set
[08:19] <_ruben> also raid5 has lousy write performance
[08:19] <twb> _ruben: OK, so it's kinda 1½ parity drives :-)
[08:19] <ssm> _ruben: unless you pay big $$$ for hardware that does raid5 for you, then it _may_ be fast.
[08:19] <_ruben> raid10 doesnt do parity
[08:20] <_ruben> raid5 will *never* be as fast as raid10
[08:20] <ssm> you don't need parity for raid1+0
[08:20] <_ruben> raid5 is fine for a fileserver or so .. but for db's or vm storage, you'd need raid10 to get a bit of decent performance
[08:20] <ball> ssm: that's what we did, and I rather wish we hadn't.
[08:21] <ssm> _ruben: on my EMC hardware, raid5 on 4+1 disk _is_ faster than raid10 on 4 disks.  On MD, it's not.
[08:21] <ssm> I don't like raid5 anyhow.   Stripe and mirror everything important
[08:22] <ball> I'm going to bed.
[08:22] <ssm> unless it's raid5 on ZFS, then you'll get rid of the possibility of the raid5 write hole.
[08:23] <_ruben> ssm: 4+1 ? thats a hotspare i assume?
[08:24] <_ruben> ssm: also, workload is a very important factor here
[08:24] <oh_noes> 4+1 in most SAN speak means 4 data 1 parity, or 5 disk RAID5
[08:24] <ssm> _ruben: 4+1 is one of the two raid5 combinations on EMC clariion, the other is 8+1.
[08:24] <oh_noes> RAID10 is typically faster for writes, RAID5 reads may beat it but with slower write performance
[08:24] <ssm> oh_noes: true
[08:24] <_ruben> if 4 data + 1 parity .. its an unfair comparison ..  4 versus 5 disks
[08:25] <ssm> oh_noes: unless you've got a good write cache, and a storage processor to layout the data to avoid disk seeks.
[08:26] <oh_noes> which, in our example (mdadm on sata) you don't have.
[08:26] <_ruben> must admit i havent been lucky enough to get my hands on a EMC/EQL/EVA/etc .. just various levels of poorman's sans
[08:26] <oh_noes> I needed write speed and performance over space, so RAID10 in my use is the obvious answer
[08:27] <oh_noes> but, why mdadm thought it would die, was not part of my assumptions
[08:28] <_ruben> oh_noes: have you tried anything to revive it? if so, what?
[08:28] <oh_noes> I havent tried anything.  I'm not familar with mdadm.
[08:28] <_ruben> odd
[08:28] <oh_noes> Thats my problem, i have no idea what to try next.
[08:28] <oh_noes> heck, I don't even understand mdadm --detail and I'm not sure what state it's in
[08:29] <_ruben> as i interpret it, the separate disks disagree on the state of the other disks
[08:29] <oh_noes> http://pastebin.com/m10018694
[08:29] <twb> I'd be nervous about a nine-way array with only one parity disk
[08:29] <oh_noes> thats the (non forum) output
[08:32] <_ruben> at this stage i'd be prepared to lose your data (and thus get the backups ready, if any), and try to rebuild the array, the data *might* not be lost
[08:32] <_ruben> s/to lose/to have lost/
[08:33] <jmarsden> If you really think all disks are 100% fine, you could try using mdadm --re-add to add devices back into the array... but I'm definitely *not* an expert on this, and unless you have good backups, at this point it looks like you need an expert :)
[08:34] <_ruben> jason^: re-add wont work i think, as they're currently all listed as being part of it already and marked as spare, atleast that's my interpretation of those (S)'s
[08:34] <_ruben> jmarsden: ^
[08:34] <_ruben> damn autocomplete
[08:34] <oh_noes> the part that I have found weird is, mdadm --detail /dev/md5 returns "mdadm: md device /dev/md5 does not appear to be active."
[08:35] <oh_noes> What does that mean?  it doesnt have enough active/online dev to bring it online?
[08:35] <ssm> _ruben: if you've got disk space somewhere else, you could try to dd your disks, and try to use mdadm to assemble the virtual disks
[08:35] <oh_noes> I'm trying to see a higher level 'what mdadm thinks' against all 4 disks... is it DEGRADED with 3 of the 4 disks down?
[08:35] <_ruben> ssm: indeed .. (though im not the one with the problem ;))
[08:36] <ssm> _ruben: ah, it's oh_noes :P
[08:36] <_ruben> oh_noes: it depends on which disk you ask that question .. mdadm's point of view is that is sees 4 spares (i think)
[08:36] <twb> oh_noes: /proc/mdstat?
[08:37] <oh_noes> twb: mdstat is at the bottom of that pastebin output
[08:37] <oh_noes> _ruben: where is it showing them as spares?
[08:39] <_ruben> md5 : inactive sdf1[3](S) sde1[2](S) sdd1[1](S) sdc1[0](S)
[08:39] <twb> oh_noes: the (S), I imagine
[08:39] <oh_noes> _ruben: I dont want to ask the disk, I want to ask mdadm..  Surely mdadm manages every IOP to ensure each dev gets the command and in the case of RAID10, ensures both dev's (the '0' part) acknowledge and return ok
[08:39] <_ruben> mdstat output
[08:39] <_ruben> mdadm's point of view is represented in /proc/mdstat
[08:45] <soren> That's not entirely accurate.
[08:45] <soren> /proc/mdstat is the kernel's point of view.
[08:53] <_ruben> got a point there :)
[08:54] <twb> "mdadm" is being used loosely to refer to the underlying md.ko or whatever, I think
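For the record, the usual next step when every member of an inactive array shows up as a spare is to stop the array and attempt a forced reassembly. This is a hedged sketch only (device names taken from oh_noes's paste); it can make things worse, so image the disks with dd first, as ssm suggests:

```
mdadm --stop /dev/md5
mdadm --examine /dev/sd[cdef]1    # compare per-disk event counts and states first
mdadm --assemble --force /dev/md5 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
```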
[09:12] <ghostlines> hi all
[09:13] <ghostlines> i was trying to run a script using sudo and it didn't run, I had to switch to root to get it to run
[09:13] <ghostlines> why is this?
[09:14] <ghostlines> it was a simple script from open-vpn http://openvpn.net/index.php/open-source/documentation/howto.html#pki
[09:28] <owh> Is anyone aware of a tool that will provide me with a web based UI into a maildir directory? I'm not really looking for an full IMAP webmail client, or installing sqwebmail with courier - the only functionality I really need is to view the message in a browser so the user can manually process the message in another web based process.
[09:29] <owh> Even a command-line tool that would render a message would do the trick.
[09:31] <twb> owh: mutt -f /path/to/maildir
[09:32] <twb> Or did you actually mean CLI when you said CLI? ;-)  People tend to include charcell GUIs in that list ;-)
[09:32] <twb> Strictly speaking, cat(1) will render a message in a maildir
[09:33] <owh> Well, if it was a CLI, then I'd hope to run the magic parser command and render it within a web-frame :)
[09:33] <owh> cat doesn't qualify as a parser :)
[09:33] <owh> Well, I suppose, technically it does, parsing bits and all :)
[09:33] <owh> I mean, make a maildir message human readable :)
[09:34] <owh> And with human, I mean, *not* a programmer like me -- think secretary.
[09:35] <twb> Chop off everything before the first \n\n sequence.
[09:35] <owh> Yeah, except that lots of this mail has multi-part crap in it with funky encodings and line wraps.
[09:36] <twb> owh: haha, then you need a mime demuxer
[09:36] <owh> Imagine I rewrote my question appropriately :)
[09:38] <owh> Oooh, mimedecode and mpack are ringing bells.
[09:38] <twb> what language are you writing in?
[09:38] <owh> php
[09:38] <owh> Yes, I could write it all from scratch - I'd rather not :)
[09:40] <owh> Just for the record, I'm trying very hard not to have to use php-mail-mimedecode and decode each message manually if I can avoid it.
[09:43] <twb> Sorry, I don't condone the use of PHP.
[09:43] <owh> That's ok, it's not on your server :)
[09:45] <owh> twb: It's not on mine either, but that's just semantics :)
[09:48] <_ruben> ghostlines: without looking at the url but judging from my memory, it involves sourcing a file with variables, and with sudo you get a temp shell (afaik), so the sourcing wouldnt do what you want
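_ruben's point can be demonstrated without openvpn: variables sourced in your own shell don't survive into the separate shell that sudo starts, so source and run inside one invocation. Here `env -i` stands in for sudo's scrubbed environment, and /tmp/vars_demo stands in for easy-rsa's "vars" file:

```shell
# A stand-in for the openvpn easy-rsa "vars" file
echo 'MYVAR=hello; export MYVAR' > /tmp/vars_demo

. /tmp/vars_demo
echo "$MYVAR"                                      # hello -- visible in this shell

env -i sh -c 'echo "${MYVAR:-unset}"'              # unset -- fresh environment, like sudo
env -i sh -c '. /tmp/vars_demo && echo "$MYVAR"'   # hello -- sourced in the same shell that runs the command
```

The same pattern works with sudo, e.g. `sudo sh -c '. ./vars && ./build-ca'`.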
[10:42] <BrixSat> is there any way to connect to a machine and administer like team viewer or log me in?
[10:49] <_ruben> ok .. this is nuts .. i can resolve an internal hostname using 'host', i can ping the corresponding ip, but i cant ping the hostname: it says it cant resolve it
[10:53] <ewook> dns-missmatch.
[10:53] <_ruben> hmm .. it doesnt even attempt to contact my dns server
[10:54] <ewook> check what dns-servers you have set it to use.
[10:55] <_ruben> $ host vn-t-mx04.mailtest001.local ; ping vn-t-mx04.mailtest001.local
[10:55] <_ruben> vn-t-mx04.mailtest001.local has address 10.0.64.134
[10:55] <_ruben> ping: unknown host vn-t-mx04.mailtest001.local
[10:55] <BrixSat> Failed to query Postfix config command to get the current value of parameter home_mailbox: /usr/sbin/postconf: fatal: open /etc/postfix/main.cf: No such file or directory
[10:57] <_ruben> hmm .. its not a local issue, other machines show the same .. lets check my dns server
[11:01] <_ruben> hmm .. the .local seems to be the issue here .. i see avahi and multicast traffic going on
[11:02] <BrixSat> is there any way to connect to a machine like team viewer or log me in, i need to bypass lots of router's and i cant port forward all?
[11:03] <_ruben> BrixSat: still dont have a clue what you're asking
[11:04] <BrixSat> :p
[11:04] <_ruben> stupid mdns stuff .. editing /etc/nsswitch.conf did the trick
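The change _ruben is describing: the stock hosts line in /etc/nsswitch.conf hands *.local names to mDNS and, because of [NOTFOUND=return], never falls through to real DNS. Removing the mdns entries (or just the [NOTFOUND=return]) lets the DNS server answer:

```
# /etc/nsswitch.conf -- before
hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4

# after
hosts: files dns
```

This is also why `host` worked all along: it queries the DNS server directly and ignores nsswitch, while ping goes through the resolver library.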
[11:05] <BrixSat> i used to have a machine running windows inside a huge network, and i used team viewer to administer it, now i have ubuntu server and i cant connect to it from the internet, cause it has at least 10 routers and im not the network administrator
[11:05] <BrixSat> got it?
[11:06] <_ruben> well, you'd need atleast a single port opened to it in order to be able to connect it .. and routers arent the problem, its most likely firewalls that are interfering
[11:09] <BrixSat> i have port 22 ssh
[11:09] <BrixSat> but how can i reach the machine from the outside world?
[11:09] <stanman1> hi, i'd like to run postfix as a relayhost for an exchange (sbs 2003) server, anyone done this before?
[11:10] <stanman1> or knows a tut
[11:23] <BrixSat> _ruben?
[11:50] <_ruben> stanman1: inbound or outbound?
[11:51] <_ruben> BrixSat: ask the network admin(s) to open up port 22
[11:51] <BrixSat> [_ruben] lool dont you think i have done that before? he wont open it!!
[11:52] <BrixSat> teamviewer did not need that and log me in was the same! no port opening on router
[11:55] <_ruben> teamviewer would need atleast one port to be open as well .. atleast to (for example, as i dont know that tool) a teamviewer server
[11:56] <_ruben> if no inbound connections are allowed, then its probably for good reason
[11:58] <_ruben> having the box initiate an outbound vpn connection to a known place *might* do the trick, assuming outbound isnt filtered
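One concrete form of that outbound trick is a reverse SSH tunnel from the boxed-in server to a machine you control; hostnames below are placeholders:

```
# Run on the server behind the routers (outbound port 22 must be allowed):
ssh -N -R 2222:localhost:22 you@your-public-box.example.com

# Then, from your-public-box, reach the hidden server through the tunnel:
ssh -p 2222 serveruser@localhost
```

This is how tools like teamviewer get through without inbound port forwarding: the inside machine makes the outbound connection.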
[11:59] <stanman1> _ruben: both in- and outbound
[12:19] <_ruben> stanman1: the biggest challenge is telling postfix the list of valid email addresses, tho there's quite a few scripts out there on the net that dump the AD info into a file that postfix understands
[12:31] <ewook> not that hard.
[12:51] <_ruben> probably not, indeed
[12:51] <ewook> pull the addys from ad, and insert into file/db.
[12:52] <ewook> and the format for postfix is already defined. so, ya.
[14:00] <qiyong> can I use php cgi, withouth #! ?
[14:19] <qiyong> how do i install a pkg without installing its depends?
[14:19] <PhotoJim> qiyong: if the depends aren't installed, your package won't work.  if they're already installed, they won't be reinstalled.
[14:20] <qiyong> PhotoJim: my package can work
[14:20] <qiyong> libapache2-mod-passenger depends on mpm worker, but i don't like to use worker
[14:20] <qiyong> PhotoJim: ^
[14:21] <orudie> question. how do i view the keys on my host ?
[14:21] <PhotoJim> qiyong: you may need to install from source, then.  or convince the libapache2-mod-passenger developer that the dependent package is not actually required.
[14:21] <qiyong> can i ignore the depends?
[14:22] <PhotoJim> that depends on whether that dependency is actually required or not.
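If qiyong really wants to override the dependency (at his own risk: apt will complain about the broken state until it's resolved), dpkg can install a .deb while ignoring a named dependency. The filename here is illustrative:

```
# Fetch the .deb by hand (e.g. from packages.ubuntu.com), then:
dpkg -i --ignore-depends=apache2-mpm-worker libapache2-mod-passenger_*.deb
```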
[14:24] <soren> qiyong: I told you already.. You don't have to change anything in your php scripts or directory layout or anything to use php via fastcgi.
[14:24] <soren> All you need it to change your apache configuration a tiny bit.
[14:25] <qiyong> soren: sorry, i can't get my apache confed properly for fastcgi
[14:25] <soren> I use libapache2-mod-fcgid myself. See http://fastcgi.coremail.cn/ for docs.
[14:26] <iulian> Can someone please point me to a list of server specific merges that should be done?  I remember seeing a wiki page about this but unfortunately I cannot find it anymore and google is no help :-(
[14:26] <soren> iulian: I don't know if we maintained such a list this time around.
[14:26] <soren> iulian: Ask mathiaz when he shows up.
[14:27] <soren> Probably within the next hour or so.
[14:28] <iulian> soren: OK, I will then check on launchpad for packages that need to be merged.
[14:28] <iulian> I mean, where -server is subscribed.
[14:29] <iulian> Aha! https://bugs.edge.launchpad.net/~ubuntu-server/+packagebugs
[14:29]  * iulian hopes they are not all in main.
[14:30] <soren> Most are, I'm afraid.
[14:30] <soren> Please don't let that stop you.
[14:30] <iulian> It doesn't matter, I will just attach the debdiff to the bug.
[14:30] <soren> Myself, mathiaz, and kirkland can all sponsor stuff for you.
[14:31] <soren> as well as any other core-dev.
[14:31] <iulian> Indeed.
[14:42] <iulian> That's odd.  I'm wondering why bacula has as the Maintainer the MOTU developers and the package is actually in main.
[14:43] <soren> iulian: Probably because noone bothered to fix the maintainer when it was promoted.... three releases ago. :)
[14:44] <iulian> soren: Yeah, well, in 2.2.8-4ubuntu1 they modified the Maintainer.
[14:44] <soren> From what to what?
[14:45] <iulian> No idea, that was back in Hardy.  The changelog only mentions that the maintainer field has been modified.
[14:46] <iulian> Ah
[14:47] <iulian> It was first modified in Gutsy, 2.0.3-4ubuntu1.
[14:47] <iulian> Blah, it doesn't matter when it was modified, we just need to update it, that's all.
[14:48]  * iulian shakes head.
[14:52] <pschulz01> Greetings.. I'm not going to be able to join the meeting, but I have been looking into the VirtualBox OSE repository (svn).. and their Debian packaging.
[14:53] <pschulz01> Is dkms the 'prefered' way to include modules these days?
[15:10] <soren> pschulz01_away: Yes.
[15:22] <LordDicranius> is there a way to make Courier-IMAP deliver to an external mailbox (of the same name locally) using the MX records (rather than just dropping it off locally)?
[15:52] <iulian> zul: Ah, I've just been preparing the nut merge :-)
[15:53] <zul> iulian: sorry :)
[15:53] <iulian> Heh, no worries.
[15:55] <joe-mac1> hello all, i've created a custom repo with reprepro and it works pretty great, except i get a warning from apt on my nodes when they run an update saying expected distro hardy but got ), presumably just an empty string. i looked at the distributions file and it looks set... any ideas?
[15:55] <orudie> why am i having so much trouble with ssh keys ?
[15:56] <fbc-mx> Is there no equivalent in Ubuntu-server that announces/broadcasts nfs shares like samba does for its shares?
[15:56] <Jeeves_> fbc-mx: Does that even exist for nfs?
[15:56] <fbc-mx> My desktops can only see the windows shares but not nfs shares
[15:56] <fbc-mx> Jeeves_, I dunno, that's why I'm asking.
[15:57] <fbc-mx> Jeeves_, I mean there has to be a way of making them show up to my desktops.
[15:58] <Jeeves_> fbc-mx: Yes, by mounting them
[15:59] <fbc-mx> Jeeves_, I'll try to download one of those UBUNTU PDFs from some torrent site. Maybe I can get some insight as to how it's supposed to be done in a network environment.
[15:59] <Jeeves_> fbc-mx: afaik, nfs does not broadcast
[15:59] <Jeeves_> neither does samba, afaik
[15:59] <Jeeves_> showmount -p can do some stuff with nfs
[16:00] <Jeeves_> but that is to be run from the client, asking the server which mounts he has
[16:00] <fbc-mx> Jeeves_, NFS does not broadcast??? Neither does samba?? Then every desktop goes out and port scans every computer to find shares? That's very inefficient.
[16:00] <Jeeves_> fbc-mx: No, a desktop will broadcast to see which computers reply
[16:01] <fbc-mx> Jeeves_, There has to be a broadcast of services by Samba. It would be so inefficient for every Machine to do that.
[16:02] <fbc-mx> Jeeves_, ahh.
[16:02] <Jeeves_> fbc-mx: Ok, whatever you want
[16:02]  * Jeeves_ will shutup now
[16:02] <fbc-mx> Jeeves_, ah, ok  so a desktop puts out a special query packet that the samba server responds to with a list of shares. Is that correct?
[16:03] <Jeeves_> no
[16:03] <Jeeves_> the client asks which other samba clients there are
[16:03] <Jeeves_> those clients show up in the 'windows networking' stuff
[16:03] <Jeeves_> and then you click further and further
[16:04] <jmarsden> http://www.ubiqx.org/cifs/Browsing.html may be a relevant chapter of "Implementing CIFS" ?
[16:04] <fbc-mx> Jeeves_, so back to the problem. I have to go to every computer mounting NFS shares every morning when they boot up? There has to be a better way.
[16:05] <Jeeves_> vi /etc/fstab
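Jeeves_'s /etc/fstab suggestion, sketched for one NFS export so the share mounts automatically at boot instead of by hand every morning (the server name and paths are hypothetical):

```text
# /etc/fstab — mount an NFS export automatically at boot
# <device>                 <mountpoint>  <type>  <options>      <dump> <pass>
fileserver:/export/media   /mnt/media    nfs     rw,hard,intr   0      0
```

After adding the line, `sudo mount -a` mounts it immediately without a reboot.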
[16:07] <Gena01> hi
[16:07] <Jeeves_> hi
[16:07] <jmarsden> fbc-mx: Are you aware of autofs ?  https://help.ubuntu.com/community/Autofs
[16:08] <Gena01> I am running Ubuntu Server 9.04 with apache+php.. and when I set error_log=/var/log/apache2/php_err.log in apache2/php.ini it's not working.. I tried chown root.adm and 666, but it keeps writing errors to error.loh
[16:08] <Gena01> error.log i mean
[16:08] <Gena01> is it a known issue or I am doing something stupid?
[16:08] <jmarsden> Gena01: Did you restart Apache?
[16:08] <Gena01> yup
[16:12] <Gena01> the cli works. it's able to write to the file.. it's 666 now... but apache still doesn't
[16:14] <Gena01> should I file a bug?
[16:16] <jmarsden> I'd see if you can get someone else to duplicate it first, but you could if you want.  I have to head out to work so I can't help further right now, I'm afraid.  I generally use syslog logging rather than direct-to-file logging on "my" servers, so I don't have much experience with using error_log=
[16:18] <Gena01> jmarsden: for us it helps to have 1 error log file for both apache and cli apps
[16:19] <jmarsden> Sure, but can't the cli apps also log via syslog?
[16:19] <Gena01> i want all php errors to go there.. they could.. if they can catch and redirect things.. but that's more complicated
[16:20] <orudie> can someone help me ssh key ?
[16:20] <Gena01> and some errors are not possible to catch from php..
[16:21] <jmarsden> If you just set error_log = syslog   then whatever would have gone to your file goes via syslog... right?
[16:21] <Gena01> jmarsden: mmm.. i guess it could work.. but then I have to change syslog and redirect php errors out to a separate file and fix permissions so that devs can read the file
[16:22] <jmarsden> Probably.  man 5 syslog.conf
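jmarsden's syslog route would look something like this (the facility and file name are assumptions; check which facility your PHP build actually logs under, per `man 5 syslog.conf`):

```text
; /etc/php5/apache2/php.ini
error_log = syslog

# /etc/syslog.conf — route user-facility messages to their own file
# (PHP's error_log=syslog typically logs under the user facility)
user.*          /var/log/php_errors.log
```

Restart apache and the syslog daemon after changing either file.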
[16:22] <Gena01> jmarsden: still weird that it's not working
[16:23] <jmarsden> Yes, it should work your way too.  But I need to get out of here... sorry :)
[16:23] <Gena01> jmarsden: np, thanks for your help
[16:24] <jmarsden> orudie: See http://ubuntuforums.org/showthread.php?t=30709
[16:48] <jmarsden> Gena01: One more thought: check permissions on /var/log itself.  Or try error_log = /tmp/php_err.log as a test.
[16:50] <Gena01> jmarsden: but that would only matter if that file doesn't exist.. right?
[16:51] <tomsdale_> there is a command to force a file not to get overwritten by the system but I forgot. trying to make my resolv.conf unchangeable.
[16:53] <Gena01> jmarsden: mmm... ok..  /tmp/php_err.log works..
[16:54] <tomsdale_>  chattr +i  that's it. Makes a file unchangeable
[16:54] <jmarsden> So you have a permissions issue in /var/log, I would strongly suspect.  Syslog handles that for you :)
[16:54] <jmarsden> tomsdale_: Better to tell your dhcp client to leave DNS info alone than to do strange things like that, surely?
[16:56] <tomsdale_> jmarsden: it's temporarily so I'll change it back.
[16:57] <jmarsden> Your choice.  Editing /etc/dhcp3/dhclient.conf to do a supersede for the domain info seems more logical to me...
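jmarsden's dhclient approach, as a sketch (the nameserver address is hypothetical): `supersede` makes dhclient ignore the DNS servers offered in the lease entirely, while `prepend` keeps them but tries yours first.

```text
# /etc/dhcp3/dhclient.conf
supersede domain-name-servers 192.168.1.5;
# or keep the lease's servers as fallback:
# prepend domain-name-servers 192.168.1.5;
```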
[17:07] <iulian> Would anyone like to sponsor bug#385262?
[17:43] <tomsdale> can anyone explain this. host www.mydom.com => IP1     ping www.mydom.com => IP2.   Why is the name resolution via hostfile disregarded by some programs?
[17:43] <tomsdale> I have an extra entry in my /etc/hosts for www.mydom.com. Ping resolves via /etc/hosts
[17:47] <mathiaz> tomsdale: ping uses the libc library (and thus nsswitch+resolv.conf) while host doesn't use the libc resolver but talks *directly* to dns servers
[17:48] <mathiaz> tomsdale: host is a utility to query dns servers and debug them
[17:50] <tomsdale> thx, that makes sense mathiaz
[18:32] <muszek> hi... I'm trying to do a remote backup using ninjabackup.  It uses rdiff-backup.  I have hardy on the production server and jaunty on the home server.  ninjabackup complains that rdiff-backup has a different version on each computer and doesn't want to proceed.  Any solutions?  There's no rdiff-backup in hardy-backports
[18:33] <muszek> or maybe you can recommend some other backup solution?  needs to handle mysql and regular files.
[18:45] <iulian> mathiaz: Thanks.
[18:50] <shadow98> what is the best way to have an active/active failover for our webserver/mysql server
[18:51] <orudie> how do you specify port with scp ?
[18:54] <PhotoJim> orudie: scp -P xxxx
[19:14] <orudie> can someone have a look at this and maybe hint at what's wrong ? http://pastebin.com/d5a3338fd
[19:23] <alex_muntada> orudie: it seems that the remote server is closing the connection
[19:23] <orudie> alex_muntada-> yeah but why ?
[19:24] <alex_muntada> it will be very helpful to see the logs on the other side
[19:24] <alex_muntada> orudie: you can try increasing verbosity level as in sftp -vvv ...
[19:34] <jared555> what are the advantages/disadvantages of ubuntu server with kvm vs centos 5.3 with xen?  I have mostly used centos with xen
[19:51] <shadow98> what is the best way to have an active/active failover for our webserver/mysql server
[19:55] <ivoks> drbd
[19:56] <ivoks> and mysql in master-master replication
[19:56] <ivoks> drbd for web site
[20:01] <pmatulis_> ivoks: don't you need a ha component?
[20:02] <ivoks> depends on setup
[20:02] <ivoks> if you have two nodes in 'cluster'
[20:02] <ivoks> and both serve the same stuff
[20:02] <ivoks> then drbd in primary/primary should be enough for web site
[20:03] <ivoks> and mysql in master/master replication for mysql
[20:03] <ivoks> hopefully, you have a loadbalancer that can load the traffic on them
[20:03] <ivoks> if you don't have it, then you need to manage IP failover
[20:04] <ivoks> on top of drbd you can have ocfs2 or gfs2 (if you want gfs2, then you need redhat cluster suite)
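ivoks' mysql master/master suggestion hinges on the auto_increment settings, which keep the two masters from handing out colliding primary keys. A minimal my.cnf sketch, assuming two nodes (ids, paths, and offsets are illustrative):

```text
# /etc/mysql/my.cnf on node A
[mysqld]
server-id                = 1
log_bin                  = /var/log/mysql/mysql-bin.log
auto_increment_increment = 2   # step by the number of masters
auto_increment_offset    = 1   # node B uses server-id 2, offset 2
```

Node A then gets ids 1,3,5,... and node B gets 2,4,6,..., so concurrent inserts on both masters never clash.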
[20:05] <orudie> ivoks-> hi, are you familiar with sftp when using ssh key ?
[20:06] <pmatulis_> orudie: you still didn't get it working?
[20:06] <orudie> no but i'm getting a different error this time
[20:06] <orudie> how did you configure your sshd_config ?
[20:07] <pmatulis_> orudie: last time you were trying to chroot with ssh, is this the same now?
[20:07] <orudie> yeah exactly
[20:07] <pmatulis_> orudie: did you try to use just ssh (not sftp) with the simplest config (no groups, etc)?
[20:08] <pmatulis_> ivoks: thank you for your answer
[20:09] <orudie> pmatulis_-> yes ssh worked with ssh key i figured that out
[20:09] <pmatulis_> orudie: in chroot right?
[20:09] <orudie> nope not in chroot
[20:09] <orudie> still having a problem with chroot
[20:10] <pmatulis_> orudie: so sftp problem is not related to chroot right?
[20:11] <Hillaballoo> hey all, I need some emergency help- after a reboot, libvirtd is hanging repeatedly
[20:11] <Hillaballoo> 9.04 64x86
[20:12] <Hillaballoo> hangs pegging one CPU core...but after it manages to kick off the KVM machines that are auto-start
[20:12] <ivoks> 'night
[20:16] <yann2_> hi
[20:16] <yann2_> W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/hardy-updates/main/binary-i386/Packages.bz2  Hash Sum mismatch
[20:16] <yann2_> I've been getting this for weeks - am I the only one?
[20:18] <phoenixz> What are the max. specs on CPU and memory for ubuntu-server?
[20:19] <phoenixz> as in, how many CPUs could it support
[20:19] <phoenixz> and how much memory?
[20:22] <jmedina> phoenixz: from the ubuntu official site: http://www.ubuntu.com/files/server/UbuntuServerBrochure804LTS.pdf
[20:22] <phoenixz> thanks!
[20:22] <jmedina> ups
[20:23] <jmedina> it is not there
[20:23] <jmedina> http://www.ubuntu.com/getubuntu/download-server
[20:23] <phoenixz> jmedina: just a detail... are the 8.04 specs equal to the 9.04 specs? I need the 9.04 limits.
[20:23] <jmedina> there is a link to "Installation requirements"
[20:23] <jmedina> phoenixz: I dont know I dont use 9.04 for servers, only LTS
[20:24] <steffan> jmedina: If I recall correctly those are minimum requirements? phoenixz is asking for maximum
[20:24] <phoenixz> steffan: correct
[20:24] <phoenixz> I need maximum supported..
[20:24] <LMJ> phoenixz : ubuntu is a bundle of open source software, with one famous piece called Linux: the kernel. That's what deals with the hardware, so check the Linux kernel's max specs instead (according to the ubuntu kernel version). You will see the limits are pretty huge
[20:25] <phoenixz> We're looking at a sweet 24-core IBM server which will probably run some 500+GB of memory... I'd like to be sure that ubuntu-server will keep running on it
[20:25] <jmedina> phoenixz: and that will depend on the arch
[20:25] <phoenixz> LMJ: I know the kernel limits yeah, thats pretty high.. but I dunno if ubuntu itself has lower limits than that?
[20:25] <phoenixz> jmedina: i386 architecture
[20:26] <LMJ> phoenixz : pretty sure : no, though maybe you could use some sysctl tweaks or a custom ubuntu kernel to optimize resource utilisation
[20:27] <phoenixz> LMJ: pretty sure: it will work, or pretty sure: it will not work?
[20:27] <phoenixz> Its not very clear :)
[20:27] <LMJ> it will ;)
[20:27] <jmedina> phoenixz: so what do you expect?
[20:27] <jmedina> do you already have a requirement?
[20:27] <LMJ> 500GB : nice, but i'm wondering why you are not running AIX crap on this hardware to have full support from IBM, ubuntu is kinda exotic
[20:28] <jmedina> ups, /me scrolling up
[20:28] <shadow98> ivoks: sorry i stepped a way for a bit and just got your message
[20:28] <shadow98> ivoks: ok so I am going to use drdb and mysql replication master/master
[20:29] <shadow98> what is the purpose of the new filesystem ocfs2?
[20:29] <LMJ> cluster oriented, shadow98; developed by oracle iirc
[20:29] <phoenixz> LMJ: it's sweet yeah, but it's still all planning.. Using ubuntu because.. it simply works :) Going to do virtualization with it.. correction about the memory by the way, it's going to be more like 100 - 200 GB..
[20:30] <jmedina> phoenixz:  what are you planning to use for virtualization?
[20:30] <LMJ> it may work but you should use a 64-bit architecture if the CPU can handle it
[20:31] <phoenixz> looking at kvm based solutions.. We've done quite a bit of testing, looks good so far
[20:31] <phoenixz> LMJ: it should be 64 bit yeah.. if the CPU could not handle that, I doubt the server would be able to exist in the first place :)
[20:31] <LMJ> you have an efficient storage too? That's the typical virtualisation bottleneck
[20:33] <phoenixz> LMJ: Fiberoptic SAN.. probably multiple cards per server to be able to sustain high throughput
[20:33] <phoenixz> another thing we're working on.. it should be possible to "bundle" multipe (say 4) network cards together to access them like if they were only one network card, right?
[20:33] <shadow98> so are the majority in agreement the best bet for an active/active failover is drdb
[20:34] <phoenixz> LMJ: ubuntu server also supports fiberoptic cards like Qlogic and Emulex?
[20:34] <jmedina> phoenixz: for storage it's multipath; IBM has RDAC drivers which are not supported in ubuntu, but you can use the kernel DM-Multipath which works fine
[20:35] <jmedina> and channel bonding for network interfaces
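phoenixz's earlier question about "bundling" multiple NICs is exactly jmedina's channel bonding. A hedged /etc/network/interfaces sketch for the hardy era (requires the ifenslave package; the address, mode, and interface names are hypothetical and the switch must support LACP for 802.3ad):

```text
# /etc/network/interfaces
auto bond0
iface bond0 inet static
    address     192.168.1.10
    netmask     255.255.255.0
    bond-slaves eth0 eth1
    bond-mode   802.3ad   # LACP aggregation; needs switch support
    bond-miimon 100       # link-check interval in ms
```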
[20:35] <orudie> pmatulis_-> pm
[20:35] <phoenixz> so we should not have a problem with the fiberoptic cards under ubuntu?
[20:36] <jmedina> it depends, I have used QLogic HBAs
[20:37] <phoenixz> jmedina: and that worked fine... ?
[20:37] <phoenixz> qlogic..
[20:37] <jmedina> yeap
[20:37] <jmedina> I have IBM BladeCenter H
[20:38] <jmedina> well my customer :)
[20:40] <Sam-I-Am> jmedina: the launchpad ops have yet to fix my PPA issue so i dont have those packages up yet... but they're done.
[20:41] <jmedina> Sam-I-Am: good, can I get them from other site?
[20:41] <Sam-I-Am> i dont have any place to put them unfortunately
[20:41] <jared555> how is ubuntu/kubuntu's virtualization compared to centos? I know ubuntu uses kvm and centos is xen.  I only have experience with xen so I could use some info from real world usage (not just benchmarks)
[20:42] <Sam-I-Am> but i have openldap-2.4.16-cvs w/ gnutls and openssl... dhcp, bind9, samba, and miscellaneous libraries backported from jaunty to hardy
[20:42] <yann2_> jared555 > exciting but new and not very stable
[20:42] <Sam-I-Am> oh, and heimdal
[20:42] <jmedina> bind9 with ldap?
[20:42] <jared555> if I am going to be using virtualization heavily would you suggest centos for the server side then?
[20:42] <phoenixz> jmedina: I just checked in the linux channel.. They say if the motherboard supports it, the linux kernel will support it.. So ubuntu will also support a 24CPU/256G server?
[20:42] <yann2_> would recommend waiting or very, very properly testing if it is for prod
[20:43] <Sam-I-Am> jmedina: well, i'm rolling out a bunch of newer apps for hardy... bind9 is one of them.
[20:43] <yann2_> jared555 > I don't know centos - but I am unsure about kvm in jaunty
[20:43] <jmedina> Sam-I-Am: good, I need bind9+ldap
[20:43] <Sam-I-Am> got em :)
[20:44] <Sam-I-Am> they both need my rebuilt db4.7 libs... also included in the mix
[20:44] <yann2_> jared555 > the most serious issues may have been fixed by now though
[20:45] <jmedina> and what about samba? do I need a new version to support the new libldap?
[20:45] <Sam-I-Am> no, i didnt force bind to need libldap 2.4.16
[20:45] <jared555> well, my entire home network will be relying on the virtualization heh
[20:45] <zoopster> jared555: centos is a bit behind jaunty for kvm - kvm was a focal point in Jaunty because of ubuntu enterprise cloud
[20:45] <Sam-I-Am> i'm trying to keep most of the apps as non-interdependent as possible
[20:45] <yann2_> its probably good enough for home network :)
[20:45] <jared555> well I meant centos's xen
[20:45] <phoenixz> What is the largest (known) server running ubuntu-server? largest as in, highest hardware specs ?
[20:47] <jared555> basically I will be running either xen on centos or kvm on ubuntu server
[20:48] <zoopster> jared555: well xen in centos is well behind kvm in jaunty...if you have vt extensions...kvm in jaunty would be a better option
[20:48] <shadow98> exit
[20:48] <jared555> ok, thank you
[20:52] <phoenixz> What is the largest (known) server running ubuntu-server? largest as in, highest hardware specs ?
[20:52] <Sam-I-Am> i've run it on 8 cores and 64 gigs of ram...
[20:55] <phoenixz> Sam-I-Am: If all goes as planned, I'll probably run it on a wee bit more than 8 cores
[20:57] <Sam-I-Am> just a few?
[20:58] <phoenixz> Sam-I-Am: 24
[20:58] <phoenixz> not more than that, simply because I can not find anything bigger on the i386 platform :)
[21:05] <Sam-I-Am> what needs 24 cores?
[21:07] <steffan> phoenixz: I'll have an account on this server, okay :)
[21:07] <phoenixz> Sam-I-Am: virtualization
[21:07] <Sam-I-Am> i'd recommend against more than 8 cores or so in an x86 box
[21:07] <phoenixz> steffan: You have any idea on what the largest known ubuntu-server installation might be, hardware wise?
[21:07] <Sam-I-Am> x86 is just too bandwidth-limited
[21:07] <Sam-I-Am> you'd be much better off with 3 eight-core boxes
[21:10] <jmedina> phoenixz: why dont you send a message to ubuntu server mailing lists
[21:10] <phoenixz> Sam-I-Am: well, virtualization usually means larger == better
[21:10] <phoenixz> jmedina: I may just do that, yeah
[21:10] <steffan> phoenixz: Follow philosophy (as you are in a Linux channel) and push it to its extreme :)
[21:11] <steffan> phoenixz: You will soon find out that way.
[21:12] <phoenixz> steffan: as in, you think its too extreme?
[21:14] <steffan> phoenixz: No, I think you should try it.
[21:15] <phoenixz> steffan: we'll probably get the server anyway, its just a question of what operating system. Because of very good experiences with ubuntu on servers (and  very bad ones with RHEL, SLES, etc), I want to give it the chance it deserves..
[21:16] <Sam-I-Am> linux is linux though... rather, its just a kernel
[21:16] <Sam-I-Am> the kernel probably scales fine to 24 cores, but x86 itself does not.
[21:45] <mathiaz> kees: hey - I saw you made a bunch of upload around May 11th: No-change rebuild to gain FORTIFY defaults.
[21:45] <mathiaz> kees: what is this for exactly?
[21:46] <ajmitch> is there a new default compiler option for gcc?
[21:47] <ajmitch> funny, launchpad has gone back to the original joining date for ubuntu-server for me, in 2005 :)
[21:47] <mathiaz> ajmitch: launchpad remembers *everything* for *ever*
[21:48] <ajmitch> yeah I know
[21:48]  * ajmitch is just reading over the meeting log now
[21:48] <orudie> i followed this guide http://www.debian-administration.org/articles/590 , but i cant write with chrooted user
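The usual trap with OpenSSH's built-in chroot (4.9+, as in that guide) is exactly this write failure: the ChrootDirectory and every path component above it must be owned by root and not group- or world-writable, so the user can never write at the chroot's top level. A hedged sshd_config sketch (user name and paths are hypothetical):

```text
# /etc/ssh/sshd_config
Subsystem sftp internal-sftp

Match User joe
    ChrootDirectory /home/joe     # must be root:root, mode 755
    ForceCommand internal-sftp
    AllowTcpForwarding no
```

Give the user a writable subdirectory instead (e.g. `mkdir /home/joe/upload && chown joe /home/joe/upload`). Key authentication happens before the chroot takes effect, so the key belongs in the real /home/joe/.ssh/authorized_keys, readable by sshd.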
[22:00] <orudie> how can i check what the user's home directory is set to ?
[22:01] <littleendian> who wants to help a noob with postifx?
[22:01] <littleendian> better make that postfix
[22:02] <littleendian> fatal: no SASL authentication mechanisms
[22:09] <littleendian> postfix and dovecot / I followed the guide at https://help.ubuntu.com/8.10/serverguide/C/postfix.html#postfix-configuration
[22:15] <kees> mathiaz: it was to catch things that had not been rebuilt in main since the hardening options were introduced in intrepid.
[22:15] <ajmitch> that was a little while ago
[22:16] <kees> mathiaz: the goal for 100% of main being covered by the next LTS
[22:16] <kees> ajmitch: yup, but still a lot of ELF packages hadn't been rebuilt.
[22:16]  * ajmitch isn't too surprised about that
[22:16] <mathiaz> kees: oh ok. So not a new feature.
[22:16] <kees> mathiaz: right
[22:17] <mathiaz> kees: just making sure that everything will be covered for the next LTS.
[22:17]  * kees nods
[22:18] <littleendian> fatal: no SASL authentication mechanisms can anyone help me with this?
[22:31] <muszek> hi... how do I disable the stuff printed out to STDOUT when I log in via ssh?  This output prevents rdiff-backup from working properly
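The usual culprit is the remote account's shell startup files echoing text into rdiff-backup's non-interactive ssh session, which corrupts its data stream. A minimal sketch of the standard fix, assuming bash or a POSIX sh on the remote side (file name is an assumption; zsh users would edit ~/.zshrc):

```shell
# Snippet for the remote account's ~/.bashrc: only produce output when
# the shell is interactive ($- contains 'i'), so scp/rdiff-backup
# sessions see a clean stream.
case $- in
  *i*) echo "Welcome back!" ;;   # banners, motd extras, fortune, etc.
esac
```

An empty ~/.hushlogin in the remote home directory additionally silences the system motd and last-login line.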
[23:51] <_cpod_> i want to put a bigger hard drive in my server but don't want to lose any files/configurations. what is the best way to copy everything from the old drive to the new one?  (both are currently mounted)
[23:52]  * _cpod_ is sure that's a noob question
[23:54] <phoenixz> _cpod_: cp -a /path/to/source /path/to/destination
[23:54] <dinger2006> is it raided?
[23:55] <_cpod_> no, ive got an old 30GB IDE drive that i want to replace with a 320GB IDE drive.  no raid or sata
[23:55] <phoenixz> New verbs.. I raid, you raid, we raid, we raided, we were raided...
[23:55] <_cpod_> lol
[23:55] <phoenixz> _cpod_: mv /path/to/source /path/to/destination cleans the source right away as well
[23:55] <dinger2006> ok
[23:56] <jmedina> I prefer rsync
[23:56] <jmedina> rsync -a /path/to/source /path/to/destination
[23:56] <_cpod_> oh, and the old drive will be removed.  if that matters
[23:56] <jmedina> if cp fails you have to start from the beginning
[23:57] <orudie> jmedina, i'm tired of fighting with chroot, can you recommend a secure ftp modality ?
[23:57] <_cpod_> jmedina/phoenixz: ok i'll give those a try. and how would i copy/redo my MBR?
[23:57] <jmedina> orudie: I use pure-ftpd with virtual users
[23:58] <jmedina> _cpod_: you use dd
[23:58] <phoenixz> _cpod_: you want to have like an image? use dd
[23:58] <phoenixz> _cpod_: dd if=/dev/sda1 of=/dev/sdb1 for example
[23:58] <jmedina> I think it's dd if=/dev/hda of=/dev/sda bs=512 count=1
[23:58] <jmedina> I prefer to reinstall grub in the new drive
[23:59] <_cpod_> alright thanks guys i think thats exactly what i need
[23:59] <phoenixz> jmedina: you have to specify block and count for dd? I thought for those operations you could just dd if= .... of=.... and done
[23:59] <jmedina> dd will also copy partition table
[23:59] <orudie> jmedina, can you give a link with instructions on setting that up ?
[23:59] <jmedina> phoenixz: well, that way will only copy the MBR
[23:59] <phoenixz> _cpod_: dd copies on block level.. basically on the lowest level you can get
[23:59] <jmedina> dd is really slow, it copies even empty blocks
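The cp -a / rsync -a advice above can be sketched safely with temporary directories standing in for the mount points; on real hardware $old and $new would be e.g. / and /mnt/newdrive, followed by reinstalling grub on the new disk as jmedina suggests (all paths here are hypothetical):

```shell
# Simulate migrating files from an old drive to a new one using
# throwaway directories instead of real mount points.
old=$(mktemp -d) && new=$(mktemp -d)

mkdir -p "$old/etc"
echo "myserver" > "$old/etc/hostname"
chmod 600 "$old/etc/hostname"

# -a (archive) preserves permissions, ownership, timestamps, symlinks.
# rsync -a does the same and, unlike cp, can resume after a failure:
#   rsync -a "$old/" "$new/"
cp -a "$old/." "$new/"

# Verify the copy kept the file mode intact (prints 600 on GNU coreutils).
stat -c '%a' "$new/etc/hostname"
```

dd remains the tool for the MBR/partition table, since cp and rsync only see files, not raw blocks.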