[00:00] :S [00:00] :D [00:00] Im from the broadband generation :D [00:00] same here [00:01] but i am not at home at the moment so i am using my phone + usb cable + netzero :) [00:01] I never used a modem for internet connection, only for fax machines [00:02] i have never used a modem for faxing [00:02] i use efax [00:05] well, last time I used a fax was 5 years ago, when I configure those machines :D [00:05] fax is evil!!!!! [00:05] lol [00:05] yeah sadly businesses live by them [00:05] netzero is still around? [00:05] a lot of stuff is moving to email but not everything [00:05] is it still free? [00:06] yeah [00:06] right on [00:07] http://www.netzero.net/start/landing.do?page=www/free/index [00:08] hmm [00:14] so yeah [00:15] still cant find a way to to connect with ssh key when using chroot in sshd_config [00:25] orudie: have you contacted openssh people? [00:25] /j #openssh [00:25] :) [00:33] im in there [00:33] they are not saying anything [00:33] this is really weired [00:37] orudie: what kind of errors are you getting when you try this? [00:37] pmatulis, the problem is that its not seeing ssh key [00:40] orudie: are you putting all the key location info in the chroot area of sshd_config? [00:40] pmatulis, that is exactly what I am trying to figure out, is where to put the authorized_keys for this user [00:44] orudie: put it in the chroot i suppose. what i meant was, are you putting in settings like 'AuthorizedKeysFile' below 'Match'? [00:45] orudie: i'm going to try this tomorrow. are you here often? [00:47] pmatulis, yes every day [00:47] orudie: i'll ping you [00:48] pmatulis, ok [00:53] Hey All [00:53] Ive just finished off my hosting cluster [00:54] ive setup rsync to sync the /etc/apache2 folder [00:54] how do i get apache2 to reload every so often so that it picks up the new configs ? [01:19] orudie: still there? [01:20] pmatulis, yup [01:20] orudie: i just got it to work at home. nothing special done. not sure where your problem is [01:20] pmatulis, with ssh key ? [01:21] sshd[23736]: Accepted publickey for chrooted_user from 192.168.3.101 port 31007 ssh2 [01:21] you have password login disabled ? [01:21] orudie: yup [01:21] ok, whats the path to your authorized_keys file ? [01:22] like i said, nothing special done. === asac_ is now known as asac [01:22] can you tell me please ? i have been stuck on this [01:23] that file is in .ssh directory of the chroot directory, which also happens to be the user's home [01:23] If it accepted the public key, then he got in. [01:23] The problem could be that bash isn't installed in the chroot. [01:23] (And bash is his default shell.) [01:23] i surely hope he set up the shell [01:23] pmatulis: /home/foo is his chroot? [01:24] yes [01:24] pmatulis: and he's running rsync with --rsh=ssh? [01:24] /home/chrooted_user (user is chrooted_user) [01:24] so /home/chrooted_user/.ssh/authorized_keys [01:25] maybe your chroot directory is not the home directory? [01:25] What are you trying to do with this chrooted ssh session? [01:25] sftp [01:25] Hmm. [01:26] orudie: anything else before i leave? [01:26] I suggest you talk to #openssh about it, since I don't know if that's supposed to work, or what to do to debug it. [01:31] Hi, I am new to servers. I added a user to my server by typing: useradd -m username. This created the user and the home directory. Then I used: usermod -a -G admin,adm,group1,etc username. This added the new user to existing groups. Next, I typed passwd newuser as root, which allowed me to set a password for the new user. 
The problem I have is that when I login as the newuser everything... [01:31] ...in front of "$" is missing. It should show something like username@hostcomputer:directory$. Thanks in advance, any help would be appreciated. - MatthewMPP [01:32] ping [01:37] Anyone installed Zend Optimizer? In ./install.sh it'a asking for apache httpd. However apache2 doesnt have it [01:37] matthewmpp: you should be using adduser, not useradd. [01:37] matthewmpp: the former is a high-level wrapper that will handle most of the work. [01:38] matthewmpp: the reason "everything in front of the $ is missing" is because that is the default behaviour for /bin/sh, which is the default shell. [01:38] matthewmpp: only if you use adduser(8) will /etc/adduser.conf be used, and this is what sets the default shell to bash, and populates the new home directory with the contents of /etc/skel. [01:38] pmatulis, so the path is /home/chrooted_user/.ssh/authorized_keys , why doesnt it wanna work for me then ? [01:40] twb: cool. [01:40] orudie: did you read the log files? [01:41] twb: what syntax do I use? adduser -m newuser? [01:41] matthewmpp: RTFM [01:41] twb, auth.log does not produce any new logs when i try to connect [01:41] orudie: is sshd running? [01:42] orudie: maybe you have bad file permissions. [01:42] orudie: .ssh in particular should be 0700 [01:42] pmatulis: the log will tell you if that is the case. [01:42] you know what? i'll try to create a new user and start over , i think i messed with this particular user account way too much trying to figure this out [01:42] i'll let you know what happens [01:43] orudie: good idea [01:43] oh_noes: sounds like your install.sh assumes RHEL; I suggest you talk to the Zend people about it. [01:44] orudie: also, make sure you can connect with password before going to key authentication [01:46] pmatulis, i tested on 2 boxes, one with password the other with ssh key [01:46] as there are no entries in auth.log, there is something seriously wrong with your sshd service. I would investigate that before trying to get the client side working. [01:46] pmatulis, the one with password worked like a charm , took me 2 seconds to set it up [01:46] orudie: ok [01:46] twb, you are wrong [01:47] twb, the other user with different ssh key works very well, its my company's box [01:47] orudie: i got a quick recipe for this if you're interested, you might be missing something small [01:47] ok [01:47] orudie: will msg [01:48] orudie: if you are not seeing rejection notices in auth.log for failed login attempts, then either the service is not running, it is not writing to auth.log, or your client is not connecting to the ssh server. [01:48] I suppose that could indicate a failure in a firewall or a misconfigured client. [01:48] twb, are you familiar with ssh keys ? [01:48] orudie: yes. [01:49] Reading the log files is *the* way to find out why your connection was rejected by ssh. It deliberately does not provide any detailed information to the client. [01:49] twb, trust me there is nothing wrong with sshd [01:50] With respect, you're in here asking for advice. That's the advice I'm giving. [01:52] twb, hold on [01:52] ok [01:52] to begin, here is the copy paste from my sshd_config [01:52] http://pastebin.com/m87120f4 [01:53] now i'm looking here http://www.debian-administration.org/articles/590 [01:53] orudie: did you at least try to just ssh (not sftp)? 
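For anyone following the chroot thread: a minimal sketch of the Match-based sftp chroot that the debian-administration article (and orudie's setup) revolve around, for OpenSSH 4.9 or later. The group name sftponly and the /home/%u layout are illustrative rather than taken from the pastebin, and the two comments cover the usual failure points: directory ownership, and where the key is actually read from.

    # /etc/ssh/sshd_config fragment (sketch)
    Subsystem sftp internal-sftp

    Match Group sftponly
        # sshd refuses the login unless the chroot target and every parent
        # directory are owned by root and not group/world writable.
        ChrootDirectory /home/%u
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no

    # authorized_keys is read before the chroot happens, from the home
    # directory recorded in /etc/passwd on the real filesystem, so
    # /home/chrooted_user/.ssh/authorized_keys (mode 0700 on .ssh, 0600 on
    # the file itself) is the right place when home and chroot coincide.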
[01:54] i will create user and add him to group sftponly [01:54] pmatulis, yeah man [01:54] i did [01:54] I agree, I'd also get basic SSH working first. [02:43] pmatulis, around ? [02:43] From #upstart, which is asleep: [02:43] 11:42 I am looking at /etc/event.d/ on an Ubuntu Server 8.04 system. Can someone explain why tty1 and tty2 differ in their start/stop parameters? It looks like tty2 through 6 are only active for runlevels 2 and 3. [02:58] how can I prevent ssh session from terminating because of timeout? [02:59] I know I need to add soething to ssh_config, but what? [03:04] ha1331_: you could set ServerAliveInterval in ssh_config on your client, and/or ClientAliveInterval in sshd_config on the server [03:05] w3wsrmn: are the units for the value seconds? [03:05] ha1331_: yup [03:06] ha1331_: I cheat and use -o BatchMode=yes [03:07] twb: what does that do? [03:07] It enables TCP keepalives. [03:07] As a side effect, I mean [03:07] Typically if I want keepalives, it's because the connection is unattended, e.g. ssh -w' [03:08] twb: that setting is aplicable also for sshfs? [03:08] sshfs should do it automatically IIRC. [03:09] IIRC? [03:09] Perhaps you want -o reconnect [03:10] oh: IIRC = If I Recall/Remember Correctly [03:11] knew lol already [03:11] :) [03:59] What is the best way to do a jailed shell [04:04] FFForever: OpenVZ [04:04] i am already on a vps :P [04:05] Then stop. You are done. [04:06] i want to give users on my system a jailed shell [04:06] Good luck with that. [04:06] i know there is a way [04:07] AFAIK there's no particularly secure way. [04:09] there has to be a bettery way then to just give them a regular shell [04:09] Well, yes, but basically what you end up doing is approximating a VPS system in userland, insecurely. [04:10] but they will only have access to cp, mv, rm, uptime, nano, how can they destroy that? [04:10] FFForever: if that's all they have access to, how will they log in? [04:11] what do they need to login? [04:11] FFForever: well, login(8) and sh(1). [04:11] not bash [04:11] And access to /dev/pts [04:12] (8)? [04:12] login is a chapter eight program. [04:12] Oops, it's not [04:12] what is a chapter program?. [04:12] man man. [04:49] hey [04:50] i need to know if it is possible to run ettercap on my remote box via ssh [04:50] i got some errors and just wondering if there's a workaround [04:52] possible to run ettercap remotely via ssh? [05:10] Hi, In ubuntu-server 9.04 is it okay to edit the fstab file manually? [05:11] It does look like the standard config file I am used to. [05:11] ping [05:12] mistake: it does not look like the standard config file. :-( [06:00] matthewmpp: man 5 fstab # describes its format [06:03] what is a good tutorial for quota's?, also what happens when a user runs out? [06:10] yeah, i found an answer. thanks though. [06:22] root@chr1831:~# edquota -u meklort -f PRGMRDISK1, edquota: Cannot stat() given mountpoint PRGMRDISK1: No such file or directory, any ideas? [06:36] can anybody help me out is there anyway that I can hide port 8080 on url [06:44] TimReichhart: "hide" it how? [06:44] instead of going to mail.domain.com:8080/rc cant I just put it like domain.com/rc [06:45] the webmail and webserver are on 2 different servers [06:45] TimReichhart: that would involve putting a proxy webserver on port 80 [06:46] e.g. mod_proxy or mod_rewrite [06:46] ok [06:46] FFForever: PRGMRDISK1 doesn't sound like a filesystem [06:50] What tape backup software can I use with Ubuntu Server? [06:50] tar? 
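On TimReichhart's question about hiding port 8080: a rough sketch of the mod_proxy variant twb alluded to. The front web server answers on domain.com and forwards /rc to the webmail host; hostnames and the /rc path simply mirror the example given in channel.

    # a2enmod proxy proxy_http ; /etc/init.d/apache2 reload
    <VirtualHost *:80>
        ServerName domain.com
        ProxyRequests Off
        ProxyPass        /rc http://mail.domain.com:8080/rc
        ProxyPassReverse /rc http://mail.domain.com:8080/rc
    </VirtualHost>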
[06:53] ball: tape is super yuk [06:53] Unless you already have your tape drive and hardware, get a HDD or DVD solution instead. [06:54] so twb can u show me what a mod_rewrite looks like [06:54] TimReichhart: no. [06:54] alright [06:56] twb: it's already in place (and for many systems, DVD simply isn't large enough) [06:57] the drive shows up as st0 [06:57] ...but my usual tar incantation doesn't work. [06:57] I lack practice with Linux [06:57] ball: right; you'd use multiple DVDs for each backup. [06:58] But anyway, you have tape infrastructure already. [06:58] I don't know much about the nasty details of tape, but I would start by looking at amanda (the "overkill" end of the spectrum) and tar (the "underkill" end up the spectrum). [06:59] * ball tries tar again === |404NotFound| is now known as error404notfound [06:59] ah, I needed the "-" for Linux [07:00] Theoretically, TAPE=/dev/st0 tar c /etc/ or similar. [07:00] Which "-"? [07:00] "tar -tf /dev/st0" [07:00] I come from a world where there is no - there. [07:00] You shouldn't normally need the - there. [07:01] Unless you have stuff before it, e.g. you can't say "tar cf /dev/st0 C /etc ppp" -- you have to say "tar cf /dev/st0 -C /etc ppp" [07:03] I was trying *t*f, to get a table of contents. [07:07] Yes, that should work. [07:08] I don't know why it didn't. [07:09] I'm just backing up some files now, will compare checksums after a restore. [07:23] If you're making WMRN-type backups, --lzma or -j might be a nice idea to save space, at the cost of extra CPU during the backup [07:24] straight tar is fine [07:26] Looks promising too. [07:26] it was just the "-" that threw me. [07:29] OK, cool. [07:31] Hmm... seems like I have to keep power cycling the drive though. That's not good. [07:33] I'm afraid I can't help with that. [07:35] Anyone awake to help me with a mdadm RAID10 problem? [07:36] damnit. [07:52] <_ruben> oh_noes: not unless you give us some more details on the problem [07:53] I posted my problem here: http://forums.overclockers.com.au/showthread.php?t=787262 [07:53] forum should be open to hte world [07:53] but basically, madm has dropped my md5 RAID10 volume and I have no idea what next steps to try [07:54] time to reach for your backup tapes perhaps. [07:59] Why? All 4 disks are live and sdd1 confirms they are healthy [07:59] but mdadm has dropped the disks [07:59] (maybe its just trying to prove why it doesnt belong in the enterprise space) [07:59] could be. === FFForever is now known as FFForever-Away [08:01] <_ruben> looks like all 4 are marked as spare [08:03] <_ruben> and the 'fault removed' lines sound scary as well [08:05] ouch. [08:13] From what I've seen of OCAU weenies, I wouldn't trust them to do ANYTHING linux-related. [08:14] I don't really half a choice, I bum around on that forum so i might as well ask [08:14] YMMV, but I tend to think of them as mainly being hardware weenies -- particularly Windows gaming hardware. [08:14] Fair enough. [08:14] twb: you dont have a sec to see the state of my madm in that post? [08:14] Incidentally, why are you using RAID10 instead of RAID5? Are the disk pairs of different sizes? [08:15] I make a point of not reading web forums, because they seem to have deliberately poor accessibility. [08:16] twb: RAID1+0 may be lighter in terms of CPU load [08:16] ball: I suppose... [08:17] (slightly ;-) [08:17] I'd have to think about the failure more for RAID1+0, but I'd be more scared of it than RAID5 or 6. 
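Pulling ball's tar-to-tape exchange together, a small worked example, assuming the drive really is /dev/st0 and the mt utility (package mt-st) is available; /dev/st0 rewinds on close, /dev/nst0 does not.

    mt -f /dev/st0 rewind               # start of tape
    tar -cf /dev/st0 -C / etc           # write /etc as one archive
    mt -f /dev/st0 rewind
    tar -tf /dev/st0                    # table of contents, the command ball was after
    mt -f /dev/st0 rewind
    tar -xf /dev/st0 -C /tmp/restore    # trial restore somewhere harmless

The TAPE=/dev/st0 environment variable twb mentioned lets GNU tar be run as plain "tar c" / "tar t" without the -f argument.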
[08:17] Assuming by 0 you mean striping and not mere catenation [08:18] <_ruben> raid10 is atleast as safe as raid5 [08:18] twb: usually it's taken to mean a stripe over mirrored pairs of disks. [08:18] <_ruben> raid10 can sustain multiple diskfailures, as long as they're not part of the same raid1 set [08:19] <_ruben> also raid5 has lousy write performance [08:19] _ruben: OK, so it's kinda 1½ parity drives :-) [08:19] _ruben: unless you pay big $$$ for hardware that does raid5 for you, then it _may_ be fast. [08:19] <_ruben> raid10 doesnt do parity [08:20] <_ruben> raid5 will *never* be as fast as raid10 [08:20] you don't need parity for raid1+0 [08:20] <_ruben> raid5 is fine for a fileserver or so .. but for db's or vm storage, you'd need raid10 to get a bit of decent performance [08:20] ssm: that's what we did, and I rather wish we hadn't. [08:21] _ruben: on my EMC hardware, raid5 on 4+1 disk _is_ faster than raid10 on 4 disks. On MD, it's not. [08:21] I don't like raid5 anyhow. Stripe and mirror everything important [08:22] I'm going to bed. [08:22] unless it's raid5 on ZFS, then you'll get rid of the possibility of rad5 write hole. [08:23] <_ruben> ssm: 4+1 ? thats a hotspare i assume? [08:24] <_ruben> ssm: also, workload is a very important factor here [08:24] 4+1 is most SAN speak means 4 data 1 parity or 5 disk RAID5 [08:24] _ruben: 4+1 is one of the two raid5 combinations on EMC clariion, the other is 8+1. [08:24] RAID10 is typically faster for writes, RAID5 reads may beat it but with slower write performance [08:24] oh_noes: true [08:24] <_ruben> if 4 data + 1 parity .. its an unfair comparison .. 4 versus 5 disks [08:25] oh_noes: unless you've got a good write cache, and a storage processor to layout the data to avoid disk seeks. [08:26] which, in our example (mdadm on sata) you don't have. [08:26] <_ruben> must admit i havent been lucky enough to get my hands on a EMC/EQL/EVA/etc .. just various levels of poorman's sans [08:26] I needed write speed and performance over space,so RAID10 in my use is the obvious answer [08:27] but, why mdadm thought it would die, was not part of my asumptions [08:28] <_ruben> oh_noes: have you tried anything to revive it? if so, what? [08:28] I havent tried anything. I'm not familar with mdadm. [08:28] <_ruben> odd [08:28] Thats my problem, i have no idea what to try next. [08:28] heck, I don't even understand mdadm --detail and I'm not sure what state it's in [08:29] <_ruben> as i interpret it, the seperate disks disagree on the state of the other disks [08:29] http://pastebin.com/m10018694 [08:29] I'd be nervous about a nine-way array with only one parity disk [08:29] thats the (non forum) output [08:32] <_ruben> at this stage i'd be prepared to lose your data (and thus get the backups ready, if any), and try to rebuild the array, the data *might* not be lost [08:32] <_ruben> s/to lose/to have lost/ [08:33] If you really think all disks are 100% fine, you could try using mdadm --re-add to add devices back into the array... but I'm definitely *not* an expert on this, and unless you have good backups, at this point it looks like you need an expert :) [08:34] <_ruben> jason^: re-add wont work i think, as they're currently all listed as being part of it already and marked as spare, atleast that's my interpretation of those (S)'s [08:34] <_ruben> jmarsden: ^ [08:34] <_ruben> damn autocomplete [08:34] the part that I have found weird is, mdadm --detail /dev/md5 returns "mdadm: md device /dev/md5 does not appear to be active." 
[08:35] What does that mean? it doesnt have enough active/online dev to bring it online? [08:35] _ruben: if you've got disk space somewhere else, you could try to dd your disks, and try to use mdadm to assemble the virtual disks [08:35] I'm trying to see a higher level 'what mdadm thinks' against all 4 disks... is it DEGRADED with 3 of the 4 disks down? [08:35] <_ruben> ssm: indeed .. (though im not the one with the problem ;)) [08:36] _ruben: ah, it's oh_noes :P [08:36] <_ruben> oh_noes: it depends on which disk you ask that question .. mdadm's point of view is that is sees 4 spares (i think) [08:36] oh_noes: /proc/mdstat? [08:37] twb: mdstat is at the bottom of that pastebin output [08:37] _ruben: where is it showing them as spares? [08:39] <_ruben> md5 : inactive sdf1[3](S) sde1[2](S) sdd1[1](S) sdc1[0](S) [08:39] oh_noes: the (S), I imagine [08:39] _ruben: I dont want to ask the disk, I want to ask mdadm.. Surely mdadm manages every IOP to ensure each dev gets the command and in the case of RAID10, ensures both dev's (the '0' part) ackowledge and return ok [08:39] <_ruben> mdstat output [08:39] <_ruben> mdadm's point of view is represented in /proc/mdstat [08:45] That's not entirely accurate. [08:45] /proc/mdstat is the kernel's point of view. [08:53] <_ruben> got a point there :) [08:54] "mdadm" is being used loosly to refer to the underlying md.ko or whatever, I think [09:12] hi all [09:13] i was trying to run an script using sudo and it didn't run, I had to switch to root to get it to run [09:13] why is this? [09:14] it was s simple script from open-vpn http://openvpn.net/index.php/open-source/documentation/howto.html#pki === NCommander is now known as ApportRetracerPo === ApportRetracerPo is now known as NCommander [09:28] Is anyone aware of a tool that will provide me with a web based UI into a maildir directory? I'm not really looking for an full IMAP webmail client, or installing sqwebmail with courier - the only functionality I really need is to view the message in a browser so the user can manually process the message in another web based process. [09:29] Even a command-line tool that would render a message would do the trick. [09:31] owh: mutt -f /path/to/maildir [09:32] Or did you actually mean CLI when you said CLI? ;-) People tend to include charcell GUIs in that list ;-) [09:32] Strictly speaking, cat(1) will render a message in a maildir [09:33] Well, if it was a CLI, then I'd hope to run the magic parser command and render it within a web-frame :) [09:33] cat doesn't qualify as a parser :) [09:33] Well, I suppose, technically it does, parsing bits and all :) [09:33] I mean, make a maildir message human readable :) [09:34] And with human, I mean, *not* a programmer like me -- think secretary. [09:35] Chop of everything before the first \n\n sequence. [09:35] Yeah, except that lots of this mail has multi-part crap in it with funky encodings and line wraps. [09:36] owh: haha, then you need a mime demuxer [09:36] Imagine I rewrote my question appropriately :) [09:38] Oooh, mimedecode and mpack are ringing bells. [09:38] what language are you writing in? [09:38] php [09:38] Yes, I could write it all from scratch - I'd rather not :) [09:40] Just for the record, I'm trying very hard not to have to use php-mail-mimedecode and decode each message manually if I can avoid it. [09:43] Sorry, I don't condone the use of PHP. 
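For the record, the usual recovery attempt for an inactive array like oh_noes's md5, where every member shows up as a spare (S) in /proc/mdstat, is to stop it and force a re-assemble from the component partitions. This is only a sketch: the device names come from the pastebinned mdstat line, --force can lose data if a disk really is stale, and taking dd images of the disks first (as ssm suggested) is the safe play.

    mdadm --examine /dev/sd[cdef]1      # compare event counters and per-disk state
    mdadm --stop /dev/md5               # release the inactive array
    mdadm --assemble --force /dev/md5 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
    cat /proc/mdstat                    # did it come back, possibly degraded?
    fsck -n /dev/md5                    # read-only check before mounting anything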
[09:43] That's ok, it's not on your server :) [09:45] twb: It's not on mine either, but that's just semantics :) [09:48] <_ruben> ghostlines: without looking at the url but judging from my memory, it involves sourcing a file with variables, and with sudo you get a temp shell (afaik), so the sourcing wouldnt do what you want [10:42] is there any way to connect to a machine and administer like team viewer or log me in? [10:49] <_ruben> ok .. this is nuts .. i can resolve an internal hostname using 'host', i can ping the corresponding ip, but i cant ping the hostname: it says it cant resolve it [10:53] dns-missmatch. [10:53] <_ruben> hmm .. it doesnt even attempt to contact my dns server [10:54] check what dns-servers you have set it to use. [10:55] <_ruben> $ host vn-t-mx04.mailtest001.local ; ping vn-t-mx04.mailtest001.local [10:55] <_ruben> vn-t-mx04.mailtest001.local has address 10.0.64.134 [10:55] <_ruben> ping: unknown host vn-t-mx04.mailtest001.local [10:55] Failed to query Postfix config command to get the current value of parameter home_mailbox: /usr/sbin/postconf: fatal: open /etc/postfix/main.cf: No such file or directory [10:57] <_ruben> hmm .. its not a local issue, other machines show the same .. lets check my dns server [11:01] <_ruben> hmm .. the .local seems to be the issue here .. i see avahi and multicast traffic going on [11:02] is there any way to connect to a machine like team viewer or log me in, i need to bypass lots of router's and i cant port forward all? [11:03] <_ruben> BrixSat: still dont have a clue what you're asking [11:04] :p [11:04] <_ruben> stupid mdns stuff .. editing /etc/nsswitch.conf did the trick [11:05] i used to have a machin running windows inside a huge network, and i used team viewer to administer it, now i have ubuntu server and i cant connect to it from the interner, cause it has at least 10 routers and im not the network administrator [11:05] got it? [11:06] <_ruben> well, you'd need atleast a single port opened to it in order to be able to connect it .. and routers arent the problem, its most likely firewalls that are interfering [11:09] i have port 22 ssh [11:09] but how can i reach the machine from the outside world? [11:09] hi, i'd like to run postfix as a relayhost for an exchange (sbs 2003) server, anyone done this before? [11:10] or knows a tut [11:23] _ruben? [11:50] <_ruben> stanman1: inbound or outbound? [11:51] <_ruben> BrixSat: ask the network admin(s) to open up port 22 [11:51] [_ruben] lool dont you think i have done that before? he wont open!! [11:52] teamviewer did not need that and log me in was the same! no port opening on router [11:55] <_ruben> teamviewer would need atleast one port to be open as well .. atleast to (for example, as i dont know that tool) a teamviewer server [11:56] <_ruben> if no inbound connections are allowed, then its probably for good reason [11:58] <_ruben> having the box initiate an outbound vpn connection to a known place *might* do the trick, assuming outbound isnt filtered [11:59] _ruben: both in- and outbound [12:19] <_ruben> stanman1: the biggest challange is telling postfix the list of valid email addresses, tho there's quite a few scripts out there on the net that dump the AD info into a file that postfix understands [12:31] not that hard. [12:51] <_ruben> probably not, indeed [12:51] pull the addys from ad, and insert into file/db. [12:52] and the format for postfix is already defined. so, ya. === cemc1 is now known as cemc [14:00] can I use php cgi, withouth #! ? 
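A sketch of the inbound relay _ruben is describing for stanman1: postfix accepts mail for the domain, checks recipients against a file dumped from Active Directory, and hands everything on to the Exchange box. Domain, IP and file names are placeholders.

    # /etc/postfix/main.cf (fragment)
    relay_domains        = example.com
    transport_maps       = hash:/etc/postfix/transport
    relay_recipient_maps = hash:/etc/postfix/relay_recipients
    smtpd_recipient_restrictions = permit_mynetworks,
        reject_unauth_destination

    # /etc/postfix/transport
    example.com    smtp:[192.168.0.10]

    # /etc/postfix/relay_recipients  (one valid address per line, from the AD dump)
    someuser@example.com    OK

    # postmap /etc/postfix/transport /etc/postfix/relay_recipients && postfix reload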
[14:19] how do i install a pkg without installing its depends? [14:19] qiyong: if the depends aren't installed, your package won't work. if they're already installed, they won't be reinstalled. [14:20] PhotoJim: my package can work [14:20] libapache2-mod-passenger depends on mpm worker, but i don't like to use worker [14:20] PhotoJim: ^ [14:21] questin. how do i view the keys on my host ? [14:21] qiyong: you may need to install from source, then. or convince the libapache2-mod-passenger developer that the dependent package is not actually required. [14:21] can i ignore the depends? [14:22] that depends on whether that dependency is actually required or not. [14:24] qiyong: I told you already.. You don't have to change anything in your php scripts or directory layout or anything to use php via fastcgi. [14:24] All you need it to change your apache configuration a tiny bit. [14:25] soren: sorry, i can't get my apache confed properly for fastcgi [14:25] I use libapache2-mod-fcgid myself. See http://fastcgi.coremail.cn/ for docs. [14:26] Can someone please point me to a list of server specific merges that should be done? I remember seeing a wiki page about this but unfortunately I cannot find it anymore and google is no help :-( [14:26] iulian: I don't know if we maintained such a list this time around. [14:26] iulian: Ask mathiaz when he shows up. [14:27] Probably within the next hour or so. [14:28] soren: OK, I will then check on launchpad for packages that need to be merged. [14:28] I mean, where -server is subscribed. [14:29] Aha! https://bugs.edge.launchpad.net/~ubuntu-server/+packagebugs [14:29] * iulian hopes they are not all in main. [14:30] Most are, I'm afraid. [14:30] Please don't let that stop you. [14:30] It doesn't matter, I will just attach the debdiff to the bug. [14:30] Myself, mathiaz, and kirkland can all sponsor stuff for you. [14:31] as well as any other core-dev. [14:31] Indeed. [14:42] That's odd. I'm wondering why bacula has as the Maintainer the MOTU developers and the package is actually in main. [14:43] iulian: Probably because noone bothered to fix the maintainer when it was promoted.... three releases ago. :) [14:44] soren: Yeah, well, in 2.2.8-4ubuntu1 they modified the Maintainer. [14:44] From what to what? [14:45] No idea, that was back in Hardy. The changelog only mentions that the maintainer field has been modified. [14:46] Ah [14:47] It was first modified in Gutsy, 2.0.3-4ubuntu1. [14:47] Blah, it doesn't matter when it was modified, we just need to update it, that's all. [14:48] * iulian shakes head. === pschulz01_away is now known as pschulz01 [14:52] Greetings.. I'm not going to be able to join the meeting, but I have been looking into the VirtualBox OSE repository (svn).. and their Debian packaging. [14:53] Is dkms the 'prefered' way to include modules these days? === pschulz01 is now known as pschulz01_away [15:10] pschulz01_away: Yes. [15:22] is there a way to make Courier-IMAP deliver to an external mailbox (of the same name locally) using the MX records (rather than just dropping it off locally)? [15:35] New bug: #385221 in apache2 (main) "Error 403 after changing default root" [Undecided,New] https://launchpad.net/bugs/385221 [15:52] zul: Ah, I've just been preparing the nut merge :-) [15:53] iulian: sorry :) [15:53] Heh, no worries. 
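Since qiyong was stuck on the Apache side, a minimal sketch of PHP through libapache2-mod-fcgid, the setup soren mentions. It assumes php5-cgi is installed; note that newer mod_fcgid releases spell the wrapper directive FcgidWrapper rather than FCGIWrapper.

    # apt-get install libapache2-mod-fcgid php5-cgi
    # a2enmod fcgid ; a2dismod php5 ; /etc/init.d/apache2 restart
    <Directory /var/www>
        Options +ExecCGI
        AddHandler fcgid-script .php
        FCGIWrapper /usr/bin/php-cgi .php
    </Directory>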
[15:55] helloy all, i've created a custom repo with reprepro and it works pretty great, except i get a warning from apt on my nodes when they run an update saying expected distro hardy but got ), presumably just an empty string. i looked at the distributions file and it looks set... any ideas? [15:55] why am i having so much trouble with ssh keys ? [15:56] IS there no equivalent in Ubuntu-server that announces/broadcasts nfs shares like samba does for it's shares? [15:56] fbc-mx: Does that even exist for nfs? [15:56] My desktops can only see the windows shares but not nfs shares [15:56] Jeeves_, I dunno, that's why I'm asking. [15:57] Jeeves_, I mean there has to be a way of making them show up to my desktops. [15:58] fbc-mx: Yes, by mounting them [15:59] Jeeves_, I'll try to download one of those UBUNTU PDFs from some torrent site. Maybe I can get some insight as to how it's supposed to be done in a network environment. [15:59] fbc-mx: afaik, nfs does not broadcast [15:59] neither does samba, afaik [15:59] showmount -p can do some stuff with nfs [16:00] but that is to be run from the client, asking the server which mounts he has [16:00] Jeeves_, NFS does not broadcast??? Neither does samba?? Then every desktop goes out and port scans every computer to find shares? That's very ineffecient. [16:00] fbc-mx: No, a desktop will broadcast to see which computers reply [16:01] Jeeves_, There has to be a broadcast of services by Samba. It would be so inefficient for every Machine to do that. [16:02] Jeeves_, ahh. [16:02] fbc-mx: Ok, whatever you want [16:02] * Jeeves_ will shutup now === pace_t_zulu_ is now known as pace_t_zulu [16:02] Jeeves_, ah, ok so a desktop puts out a special query packet that the samba server responds to with a list of shares. Is that correct? [16:03] no [16:03] the client asks which other samba clients there are [16:03] those clients show up in the 'windows networking' stuff [16:03] and than you click further and further [16:04] http://www.ubiqx.org/cifs/Browsing.html may be a relevant chapter of "Implementing CIFS" ? [16:04] Jeeves_, so back to the problem. I have to go to every computer mounting NFS shares every morning when they boot up? There has to be a better way. [16:05] vi /etc/fstab [16:07] hi [16:07] hi [16:07] fbc-mx: Are you aware of autofs ? https://help.ubuntu.com/community/Autofs [16:08] I am running Ubuntu Server 9.04 and apache+php.. and when I change apache2/php.ini error_log=/var/log/apache2/php_err.log it's not working.. I tried chown root.adm and 666, but it keeps writing errors to error.loh [16:08] error.log i mean [16:08] is it a known issue or I am doing something stupid? [16:08] Gena01: Did you restart Apache? [16:08] yup [16:12] the cli works. it's able to write to the file.. it's 666 now... but apache still doesn't [16:14] should I file a bug? [16:16] I'd see if you can get someone else to duplicate it first, but you could if you want. I have to head out to work so I can't help further right now, I'm afraid. I generally use syslog logging rather than direct-to-file logging on "my" servers, so I don't have much experience with using error_log= [16:18] jmarsden: for us it helps to have 1 error log file for both apache and cli apps [16:19] Sure, but can't the cli apps also log via syslog? [16:19] i want all php errors to go there.. they could.. if they can catch and redirect things.. but that's more complicated [16:20] can someone help me ssh key ? [16:20] and some errors are not possible to catch from php.. 
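To make fbc-mx's NFS mounts automatic rather than something to repeat every boot, either of these does the job; server name, export and mount point are placeholders, and the autofs route is the one behind the help.ubuntu.com link above.

    # /etc/fstab: mount at boot
    server.lan:/export/share   /mnt/share   nfs   rw,hard,intr   0   0

    # or with autofs, mounted on first access and released when idle
    # /etc/auto.master
    /nfs    /etc/auto.nfs    --timeout=60
    # /etc/auto.nfs
    share   -fstype=nfs,rw   server.lan:/export/share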
[16:21] If you just set error_log = syslog then whatever would have gone to your file goes via syslog... right? [16:21] jmarsden: mmm.. i guess it could work.. but then I have to change syslog and redirect php errors out to a separate file and fix permissions so that devs can read the file [16:22] Probably. man 5 syslog.conf [16:22] jmarsden: still weird that it's not working [16:23] Yes, it should work your way too. But I need to get out of here... sorry :) [16:23] jmarsden: np, thanks for your help [16:24] orudie: See http://ubuntuforums.org/showthread.php?t=30709 [16:46] New bug: #385251 in php5 (main) "apache2/php.ini error_log=/var/log/apache2/php_err.log not working" [Undecided,New] https://launchpad.net/bugs/385251 [16:48] Gena01: One more thought: check permissions on /var/log itself. Or try error_log = /tmp/php_err.log as a test. [16:50] jmarsden: but that would only matter if that file doesn't exist.. right? [16:51] there is a command to force a file not to get overriden by the system but I forgot. trying to make my resolv.conf unchangable. [16:53] jmarsden: mmm... ok.. /tmp/php_err.log works.. [16:54] chattr +i that's it. Makes a file unchangable [16:54] So you have a permissions issue in /var/log, I would strongly suspect. Syslog handles that for you :) [16:54] tomsdale_: Better to tell your dhcp client to leave DNS info alone that do strange things like that, surely? [16:56] jmarsden: it's temporarily so I'll change it back. [16:57] Your choice. Editing /etc/dhcp3/dhclient.conf to do a supercede for the domain info seems more logical to me... === jussi01 is now known as jussio1 === jussio1 is now known as Android === Android is now known as Tuhina === Tuhina is now known as jussi01 [17:07] Would anyone like to sponsor bug#385262? [17:43] can anyone explain this. host www.mydom.com => IP1 ping www.mydom.com => IP2. Why is the name resolution via hostfile disregarded by some programs? [17:43] I have an extra entry in my /etc/hosts for www.mydom.com. Ping resolvs via the /etc/host [17:47] tomsdale: ping uses the libc library (and thus nsswitch+resolv.conf) while host doesn't use the libv resolver but talks *directly* to dns servers [17:48] tomsdale: host is a utility to query dns servers and debug them [17:50] thx, that makes sense mathiaz [18:20] New bug: #385262 in tomcat6 (main) "Merge tomcat6 6.0.20-1 from Debian unstable" [Wishlist,Fix committed] https://launchpad.net/bugs/385262 [18:32] hi... I'm trying to do a remote backup using ninjabackup. It uses rdiff-backup. I have hardy on production server and jaunty on home server. ninjabackup complains that rdiff-backup has a different version on each computer and doesn't want to proceed. Any solutions? There's no rdiff-backup in hardy-backrports [18:33] or maybe you can recommend some other backup solution? needs to handle mysql and regular files. === FFForever-Away is now known as FFForever === FFForever is now known as FFForever-Away === FFForever-Away is now known as FFForever [18:45] mathiaz: Thanks. [18:50] what is the best way to have an active/active failover for our webserver/mysql server [18:51] how do you specify port with scp ? [18:54] orudie: scp -P xxxx [19:14] can someone have a look at this and maybe hint me on whats wrong ? http://pastebin.com/d5a3338fd [19:23] orudie: it seems that the remote server is closing the connection [19:23] alex_muntada-> yeah but why ? [19:24] it will be very helpful to see the logs on the other side [19:24] orudie: you can try increasing verbosity level as in sftp -vvv ... 
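On the resolv.conf question: rather than chattr +i, jmarsden's dhclient route looks like the fragment below. The option names are from dhclient.conf(5); the addresses and domain are placeholders.

    # /etc/dhcp3/dhclient.conf
    supersede domain-name-servers 192.168.1.1;
    supersede domain-name "example.lan";
    # or keep the DHCP-supplied servers as a fallback:
    # prepend domain-name-servers 192.168.1.1;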
[19:34] what are the advantages/disadvantages of ubuntu server with kvm vs centos 5.3 with xen? I have mostly used centos with xen [19:51] what is the best way to have an active/active failover for our webserver/mysql server [19:55] drbd [19:56] and mysql in master-master replication [19:56] drbd for web site [20:01] ivoks: don't you need a ha component? [20:02] depends on setup [20:02] if you have two nodes in 'cluster' [20:02] and both serve the same stuff [20:02] then drbd in primary/primary should be enough for web site [20:03] and mysql in master/master replication for mysql [20:03] hopefully, you have a loadbalancer that can load the traffic on them [20:03] if you don't have it, then you need to manage IP failover [20:04] on top of drbd you can have ocfs2 or gfs2 (if you want gfs2, then you need redhat cluster suite) [20:05] ivoks-> hi, are you familiar with sftp when using ssh key ? [20:06] orudie: you still didn't get it working? [20:06] no but i'm getting a different error this time [20:06] how did you configure your sshd_config ? [20:07] orudie: last time you were trying to chroot with ssh, is this the same now? [20:07] yeah exactly [20:07] orudie: did you try to use just ssh (not sftp) with the simplest config (no groups, etc)? [20:08] ivoks: thank you for your answer [20:09] pmatulis_-> yes ssh worked with ssh key i figured that out [20:09] orudie: in chroot right? [20:09] nope not in chroot [20:09] still having a problem with chroot [20:10] orudie: so sftp problem is not related to chroot right? [20:11] hey all, I need some emergency help- after a reboot, libvirtd is hanging repeatedly [20:11] 9.04 64x86 [20:12] hangs pegging one CPU core...but after it manages to kick off the KVM machines that are auto-start [20:12] 'night [20:16] hi [20:16] W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/hardy-updates/main/binary-i386/Packages.bz2 Hash Sum mismatch [20:16] I've been getting this for weeks - am I the only one? [20:18] What are the max. specs on CPU and memory for ubuntu-server? [20:19] as in, howmany CPU's could it support [20:19] and howmuch memory? [20:19] What are the max. specs on CPU and memory for ubuntu-server? [20:22] phoenixz: from the ubuntu oficial site: http://www.ubuntu.com/files/server/UbuntuServerBrochure804LTS.pdf [20:22] thanks! [20:22] ups [20:23] it is not there [20:23] http://www.ubuntu.com/getubuntu/download-server [20:23] jmedina: just a detail... 9.04 specs are equal to 9.04 specs? I need the 9.04 limits. [20:23] thgere is a link to Installation requirements" [20:23] phoenixz: I dont know I dont use 9.04 for servers, only LTS [20:24] jmedina: If I recall correctly those are minimum requirements? phoenixz is asking for maximum [20:24] steffan: correct [20:24] I need maximum supported.. [20:24] phoenixz : ubuntu is a kind of bundeled open source softwares with one famous called Linux : the kernel. Is the one who deals with hardware, check on the Linux kernel max specs instead (according to the ubuntu kernel version) You will see limits are pretty huge [20:25] We're looking at as sweeet 24 core IBM server which probably will run some 500+GB memory... I'd like to be sure that ubuntu-server will keep running on it [20:25] phoenixz: and that will depend on the arch [20:25] LMJ: I know the kernel limits yeah, thats pretty high.. but I dunno if ubuntu itself has some lower specs of that? 
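To flesh out ivoks's active/active recipe, the MySQL half (master-master replication) boils down to my.cnf settings like these; server IDs, paths and credentials are placeholders, and DRBD in primary/primary with OCFS2 or GFS2 handles the web root separately, as he describes.

    # /etc/mysql/my.cnf on node A (node B uses server-id = 2, auto_increment_offset = 2)
    server-id                = 1
    log_bin                  = /var/log/mysql/mysql-bin.log
    auto_increment_increment = 2    # both masters step keys by two
    auto_increment_offset    = 1    # from different offsets, so inserts never collide

    -- on each node, point replication at the other, then start it:
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'peer-ip' IDENTIFIED BY 'secret';
    CHANGE MASTER TO MASTER_HOST='peer-ip', MASTER_USER='repl', MASTER_PASSWORD='secret',
        MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;  -- values from SHOW MASTER STATUS on the peer
    START SLAVE;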
[20:25] jmedina: i386 architecture [20:26] phoenixz : pretty sure : no, maybe you could have some sysctl tweaks or a custom ubuntu kernel to optimize the ressource utilisation [20:27] LMJ: pretty sure: it will work, or pretty sure: it will not work? [20:27] Its not very clear :) [20:27] it will ;) [20:27] phoenixz: so what do you expect? [20:27] do you already have a requirement? [20:27] 500GB : nice, but i'm wondering why you are not running AIX crap on this hardware to have full support from IBM, ubuntu is kinda exotic [20:28] ups, /me scrolling up [20:28] ivoks: sorry i stepped a way for a bit and just got your message [20:28] ivoks: ok so I am going to use drdb and mysql replication master/master [20:29] what is the purpose of the new filesystem ocfs2? [20:29] cluster oriented shadow98, developped by oracle iirc [20:29] LMJ: its sweet yeah, but its still all planning.. Using ubuntu because.. it simply works :) Going to do virtualization with it.. correction bout the memory by the way, its going to be more like 100 - 200 GB.. [20:30] phoenixz: what are you planning to use for virtualization? [20:30] it may work but you should use 64bits architecture is CPU can handle it [20:31] if* [20:31] looking at kvm based solutions.. We've done quite a bit of testing, looks good so far [20:31] LMJ: it should be 64 bit yeah.. if the CPU could not handle that, I doubt the server would be able to exist in the first place :) [20:31] you have an efficient storage too? That's the typical virtualisation bootleneck [20:33] LMJ: Fiberoptic SAN.. probably multiple cards per server to be able to sustain high throughoutput (how do you write that again?) [20:33] another thing we're working on.. it should be possible to "bundle" multipe (say 4) network cards together to access them like if they were only one network card, right? [20:33] so are the majority in agreement the best bet for an active/active failover is drdb [20:34] LMJ: ubuntu server also supports fiberoptic cards like Qlogic and Emulex? [20:34] phoenixz: for storage is multipath, IBM has a RDAC drivers wich is not supported in ubuntu, you can use kernel DM-Multipath which works fine [20:35] and channel bonding for network interfaces [20:35] pmatulis_-> pm [20:35] so we should not have a problem with the fiberoptic cards under ubuntu? [20:36] it depends, I have used QLogic HBAs [20:37] jmedina: and that worked fine... ? [20:37] qlogic.. [20:37] yeap [20:37] I have IBM bladecender H [20:38] well my customer :) [20:40] jmedina: the launchpad ops have yet to fix my PPA issue so i dont have those packages up yet... but they're done. [20:41] Sam-I-Am: good, can I get them from other site? [20:41] i dont have any place to put them unfortunately [20:41] how is ubuntu/kubuntu's virtualization compared to centos? I know ubuntu uses kvm and centos is xen. I only have experience with xen so I could use some info from real world usage (not just benchmarks) [20:42] but i have openldap-2.4.16-cvs w/ gnutls and openssl... dhcp, bind9, samba, and miscellaneous libraries backported from jaunty to hardy [20:42] jared555 > exciting but new and not very stable [20:42] oh, and heimdal [20:42] bind9 with ldap? [20:42] if I am going to be using virtualization heavily would you suggest centos for the server side then? [20:42] jmedina: I just checked in the linux channel.. They say if the motherboard supports it, the linux kernel will support it.. So ubuntu will also support a 24CPU/256G server? 
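For the NIC bundling phoenixz asked about, Linux calls it bonding (jmedina's channel bonding); a sketch for /etc/network/interfaces with the ifenslave package installed. Addresses are placeholders, 802.3ad needs switch support (balance-alb does not), and the exact option spellings (bond-slaves vs slaves, bond-mode vs bond_mode) vary between ifenslave versions, so check the one shipped with your release.

    auto bond0
    iface bond0 inet static
        address     192.168.1.10
        netmask     255.255.255.0
        gateway     192.168.1.1
        bond-slaves eth0 eth1 eth2 eth3
        bond-mode   802.3ad
        bond-miimon 100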
[20:42] would recommend waiting or very, very properly testing if it is for prod [20:43] jmedina: well, i'm rolling out a bunch of newer apps for hardy... bind9 is one of them. [20:43] jared555 > I don't know centos - but I am unsure about kvm in jaunty [20:43] Sam-I-Am: good, I need bind9+ldap [20:43] got em :) [20:44] they both need my rebuilt db4.7 libs... also included in the mix [20:44] jared555 > the most serious issues may have been fixed by now though [20:45] and what bout samba? do I need new version to support new libldap? [20:45] no, i didnt force bind to need libldap 2.4.16 [20:45] well, my entire home network will be relying on the virtualization heh [20:45] jared555: centos is a bit behind jaunty for kvm - kvm was a focal point in Jaunty because of ubuntu enterprise cloud [20:45] i'm trying to keep most of the apps as non-interdependent as possible [20:45] its probably good enough for home network :) [20:45] well I meant centos's xen [20:45] What is the larges (known) server running ubuntu-server? largest as in, highest hardware specs ? [20:47] basically I will be running either xen on centos or kvm on ubuntu server [20:48] jared555: well xen in centos is well behind kvm in jaunty...if you have vt extensions...kvm in jaunty would be a better option [20:48] exit [20:48] ok, thank you [20:52] What is the larges (known) server running ubuntu-server? largest as in, highest hardware specs ? [20:52] i've run it on 8 cores and 64 gigs of ram... [20:55] Sam-I-Am: If all goes as planned, I'll probably run it on a whee bit more than 8 cores [20:57] just a few? [20:58] Sam-I-Am: 24 [20:58] not more than that, simply because I can not find anything bigger on the i386 platform :) [21:05] what needs 24 cores? [21:07] phoenixz: I'll have an account on this server, okay :) [21:07] Sam-I-Am: virtualization [21:07] i'd recommend against more than 8 cores or so in an x86 box [21:07] steffan: You have any idea on what the largest known ubuntu-server installation might be, hardware wise? [21:07] x86 is just too bandwidth-limited [21:07] you'd be much better off with 3 eight-core boxes [21:10] phoenixz: why dont you send a message to ubuntu server mailing lists [21:10] Sam-I-Am: well, virtualization usually means larger == better [21:10] jmedina: I may just do that, yeah [21:10] phoenixz: Follow philosphy (as you are in a Linux channel) and push it too it's extreme :) [21:11] phoenixz: You will soon find out that way. [21:12] steffan: as in, you think its too extreme? [21:14] phoenixz: No, I think you should try it. [21:15] steffan: we'll probably get the server anyway, its just a question of what operating system. Because of very good experiences with ubuntu on servers (and very bad ones with RHEL, SLES, etc), I want to give it the chance it deserves.. [21:16] linux is linux though... rather, its just a kernel [21:16] the kernel probably scales fine to 24 cores, but x86 itself does not. [21:45] kees: hey - I saw you made a bunch of upload around May 11th: No-change rebuild to gain FORTIFY defaults. [21:45] kees: what is this for exactly? [21:46] is there a new default compiler option for gcc? 
[21:47] funny, launchpad has gone back to the original joining date for ubuntu-server for me, in 2005 :) [21:47] ajmitch: launchapd remembers *everything* for *ever* [21:48] yeah I know [21:48] * ajmitch is just reading over the meeting log now [21:48] i followed this guide http://www.debian-administration.org/articles/590 , but i cant write with chrooted user [22:00] how can i check what the user's home directory is set to ? [22:01] who wants to help a noob with postifx? [22:01] better make that postfix [22:02] fatal: no SASL authentication mechanisms [22:09] postfix and dovecot / I followed the guide at https://help.ubuntu.com/8.10/serverguide/C/postfix.html#postfix-configuration [22:15] mathiaz: it was to catch things that had not been rebuilt in main since the hardening options were introduced in intrepid. [22:15] that was a little while ago [22:16] mathiaz: the goal for 100% of main being covered by the next LTS [22:16] ajmitch: yup, but still a lot of ELF packages hadn't been rebuilt. [22:16] * ajmitch isn't too surprised about that [22:16] kees: oh ok. So not a new feature. [22:16] mathiaz: right [22:17] kees: just making sure that everything will be covered for the next LTS. [22:17] * kees nods [22:18] fatal: no SASL authentication mechanisms can anyone help me with this? [22:31] hi... how do I disable the stuff printed out to STDOUT when I log in via ssh? This output prevents rdiff-backup from working properly [23:17] New bug: #385373 in samba (main) "Segfault in smbd" [Undecided,New] https://launchpad.net/bugs/385373 === hggdh is now known as hggdh_ === hggdh_ is now known as hggdh__ === hggdh__ is now known as hggdh [23:51] <_cpod_> i want to put a bigger hard drive in my server but don't want to lose any files/configurations. what is the best way to copy everything from the old drive to the new one? (both are currently mounted) [23:52] * _cpod_ is sure that's a noob question [23:54] _cpod_: cp -a /path/to/source /path/to/destination [23:54] is it raided? [23:55] <_cpod_> no, ive got an old 30GB IDE drive that i want to replace with a 320GB IDE drive. no raid or sata [23:55] New verbs.. I raid, you raid, we raid, we raided, we were raided... [23:55] <_cpod_> lol [23:55] _cpod_: mv /path/to/source /path/to/destination cleans the source right away as well [23:55] ok [23:56] I prefer rsync [23:56] rsync -a /path/to/source /path/to/destination [23:56] <_cpod_> oh, and the old drive will be removed. if that matters [23:56] if cp fails you have to start from the begining [23:57] jmedina, i'm tired of fighting with chroot, can you recommend a secure ftp modality ? [23:57] <_cpod_> jmedina/phoenixz: ok i'll give those a try. and how would i copy/redo my MBR? [23:57] orudie: I use pure-ftpd with virtual users [23:58] _cpod_: you use dd [23:58] _cpod_: you want to have like an image? use dd [23:58] _cpod_: dd if=/dev/sda1 of=/dev/sdb1 for example [23:58] I think is dd if=/dev/hda of=/dev/sda bs=512 count=1 [23:58] I prefer to reinstall grub in the new drive [23:59] <_cpod_> alright thanks guys i think thats exactly what i need [23:59] jmedina: you have to specify block and count for dd? I thought for those operations you could just dd if= .... of=.... and done [23:59] dd will also copy partition table [23:59] jmedina, can you give a link with instructions on setting that up ? [23:59] phoenixz: well that way wil only copy MBR [23:59] _cpod_: dd copies on block level.. basically on the lowest level you can get [23:59] dd is really slow, it copies even empty blocks
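Rounding out _cpod_'s drive swap: with the new disk partitioned, formatted and mounted (device names below are examples; the old drive could just as well be hda/hdb on IDE), the copy plus boot-loader step looks roughly like this. As jmedina notes, dd of the first 512 bytes only clones the MBR and the old 30GB partition table, so repartitioning the new disk and re-running grub is the cleaner route.

    mount /dev/sdb1 /mnt/new
    rsync -aHx / /mnt/new/                            # -a perms/owners/links, -H hardlinks, -x stay on one filesystem
    grub-install --root-directory=/mnt/new /dev/sdb   # boot loader onto the new drive
    # then check /etc/fstab (and the grub config) on the new disk in case the
    # device name or UUID changes, before pulling the old drive.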