[00:44] hi guys, i wanna use iptables but i get an error when i run iptables -L im suspecting my kernel has CONFIG_NETFILTER disabled, how can i enable it? [00:45] smoser: hi dude! are you there? [00:48] tyska, are the kernel modules loaded? "lsmod | grep iptable" [00:49] Tweeda: no answer, how can i load it? [00:50] tyska, is the package installed? "dpkg -l | grep iptables" [00:50] that's a lowercase L [00:50] ii iptables 1.4.4-2ubuntu2 administration tools for packet filtering an [00:52] tyska, might try 'modprobe iptables' [00:52] FATAL: Could not load /lib/modules/2.6.32-22-server/modules.dep: No such file or directory [00:56] Tweeda: some guess? [00:56] tyska, you might want to read up on depmod. That file should be there. your problem looks to be w/ your kernel config and your iptables issue is a symptom [00:57] what is depmod? [00:57] tyska, it generates the modules.dep file that doesn't exist. [00:59] Tweeda: then should i just run depmod? [01:01] helo [01:01] whats the purpose of having upgrades for linux installations? [01:02] i can understand the purpose in a desktop os, ie to give new features, introduce new builtin apps, etc. but whats the reason in a server os? [01:04] tyska, I'd give it a shot [01:04] steven_t, to correct bugs, particularly bugs with exploitable security issues [01:05] so each os upgrade is primarily to enhance security? [01:05] and other bug fixes? [01:06] steven_t, well, perhaps not if you're speaking of upgrading karmic to lucid. [01:06] i wasnt, but now you got me curious :) [01:07] steven_t, updates within a specific release are primarily for bug fixes. Upgrading to a new release would likely keep up on latest releases of applications such as apache or php etc in order to take advantage of improvements. [01:08] ah. [01:08] thanks :) [01:19] is there an EASY way to find out how someone is using my postfix server to spam? I can't find how they got into the system with TLS setup [01:20] uh...log files?
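Tweeda's depmod suggestion, spelled out as root (a sketch; note the on-disk module is named ip_tables with an underscore, so the literal `modprobe iptables` above will usually not match anything):

```shell
# Rebuild the module dependency map that modprobe complained about
# (/lib/modules/$(uname -r)/modules.dep was missing).
sudo depmod -a

# Load the packet-filtering module; it is called ip_tables, not "iptables".
sudo modprobe ip_tables

# Confirm it loaded; after this, iptables -L should stop erroring.
lsmod | grep ip_tables
```

A missing modules.dep usually points at a botched kernel package install, so reinstalling the running kernel's modules is worth considering if depmod alone does not help.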
[01:24] Jeeves_Moss: telnet from a remote host that isn't supposed to be allowed to relay and try to relay. [01:25] Jeeves_Moss: helo fred / mail from: / rcpt to: / data / foo / . [01:25] Jeeves_Moss: if you get 'OK' after rcpt to, then you're in trouble. [01:28] bc, I took the box off-line because my ISP yanked our connection until it's fixed [01:29] bc, http://pastebin.com/D8H1iMB1 [01:29] Jeeves_Moss: in smtpd_recipient_restrictions, the first three lines should be permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination, [01:31] Jeeves_Moss: test the relaying first (somehow), and if it's not postfix, make sure apache isn't spewing spam from a hole, and check for unknown listening processes. [01:31] bc, can you give me a step by step of what to look for so when I go off-line I can look? [01:31] Jeeves_Moss: testing postfix relaying is a logical first step. where is the box? [01:33] Jeeves_Moss: The answer will be in postfix's logs. [01:33] ScottK, http://pastebin.com/vFpfnRzt [01:34] ScottK, ask, and ye shall receive [01:34] Jeeves_Moss: pastebin your main.cf [01:35] bc, I can't, system is shut down. I've already received a warning of disconnection if it spams again [01:36] Jeeves_Moss: you're going to have to start it up to fix it. [01:36] bc, ok, you want to see main.cf, correct? any other requests while I'm in the basement? [01:37] Jeeves_Moss: You need to go back farther. You need to find lines that start like "postfix/smtpd[7578]: connect from ..." [01:37] I didn't see any in that snippet. [01:37] Jeeves_Moss: output of postconf -n. [01:37] ScottK, that snippet is the start of the 250Mb file! [01:38] Jeeves_Moss: grep "connect from" > somefile [01:38] err... [01:38] Jeeves_Moss: grep "connect from" /var/log/mail.log > somefile [01:39] Jeeves_Moss: just unplug the cat5, start it, stop apache and postfix, plug it back in? [01:40] bc, tried that already, and postfix won't die!!
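The relay probe bc describes, written out as a script (hostname and addresses are placeholders, not from the chat; run it from a remote host that should not be allowed to relay, against a box you own):

```shell
# Feed a canned SMTP conversation to the suspect server. If the reply to
# the RCPT TO line for a non-local recipient is "250 ... Ok" rather than
# "554 Relay access denied", the server is an open relay.
# (A here-doc can outrun the server's greeting banner; interactive telnet
# is the more careful version of the same test.)
nc mail.example.com 25 <<'EOF'
helo fred
mail from:<probe@example.org>
rcpt to:<someone@external.example>
data
test message
.
quit
EOF
```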
it uses ~76% CPU [01:40] brb, going to the basement with the USB stick [01:46] bc, ok, I deracked it. I dragged it upstairs, and now I'm waiting for it to boot [01:47] Jeeves_Moss: if it spins the cpu like that without an internet connection then you have bigger issues [01:50] bc, let me verify [01:52] bc, in TOP, there are "qmgr -l fifo -v", proxymap, and showq @ the top of the list [01:53] Jeeves_Moss: you have a bunch of crud in mailq, I'm assuming? [01:53] bc, so do I, how do I clear it? [01:54] Jeeves_Moss: postsuper -d, but I'm not sure if it can clear out in batch. if not, you'll have to get the IDs and postsuper -d each one [01:55] bc, postsuper -d ALL? [01:55] Jeeves_Moss: Yes. [01:55] Jeeves_Moss: yeah that'll work, just be sure you don't care about any of the messages. At this point I probably wouldn't. [01:56] Jeeves_Moss: You need to investigate the logs to figure out where the stuff was coming from. [01:58] ScottK, I have used your grep command. I'm just getting ideas from everyone while I have the box next to me before I pastebin everything. Currently, there is a LOT of disk activity running on the purging of the queue [01:58] Certainly. don't plug it back in until you've resolved the question of how the stuff was getting in. [02:00] ScottK, yep! and I won't duplicate this mess to the rackmount stuff until I have it reviewed by my peers. [02:03] ScottK, I've got a LOT of "bounce -z -n defer -t un..." (it cuts off my screen @ that point) doing a LOT of disk access [02:03] bc, I've got a LOT of "bounce -z -n defer -t un..." (it cuts off my screen @ that point) doing a LOT of disk access [02:03] You've probably got a very full queue. [02:04] postsuper -d ALL will need to grind through it. [02:04] ScottK, ok is there a way to see how much is sitting there from another term while this is working [02:05] postqueue -p will give you a list, but that's not what you want exactly.
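The queue inspection and purge steps above, collected in one place (run as root on the mail server; `postsuper -d ALL` is irreversible, so only run it once you are sure the backlog is all junk):

```shell
# The last line of mailq is a summary: "-- NNN Kbytes in MMM Requests."
mailq | tail -n 1

# Rough count of queued messages (queue IDs are uppercase hex at line start).
postqueue -p | grep -c '^[0-9A-F]'

# Delete every message in every queue -- this is the disk-grinding step
# discussed above.
postsuper -d ALL
```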
[02:05] ScottK, the queue delete is still running, I was just wondering how much is in there. the HDD light is SOLID! [02:07] Jeeves_Moss: you can use watch(1) or a while loop. [02:08] first up for your viewing pleasure..... postconf http://pastebin.com/EMyS0BCb [02:08] Jeeves_Moss: regardless, stuff has to go though [02:08] next up, master.cf http://pastebin.com/0e4Q63F6 [02:08] Jeeves_Moss: do you have a lot of users with lame passwords? [02:09] master.cf http://pastebin.com/Fq9EKLwF [02:09] bc, not that I know of [02:09] Jeeves_Moss: try ScottK's grep output [02:10] bc, trying to pastebin it. I think it's too large [02:11] http://pastebin.com/XTweKdBV [02:15] Jeeves_Moss: grep 15059 /var/log/mail.log [02:19] bc, ideas? [02:20] Jeeves_Moss: You need to go back farther to see where it starts. For example, if you look at line 890, it's been in queue over a day. [02:20] Jeeves_Moss: grep 050A8836575 /var/log/mail.log* and see how far back it goes. [02:21] If we trace that one back to its start, maybe we can figure out what's going on. [02:22] Jeeves_Moss: I'm no genius, I would show #postfix the output of postconf -n. Your smtpd recipient restrictions look ok, assuming a 'spammer' isn't authenticating and assuming someone else on the network isn't infected with something, and assuming it's not coming from Apache. [02:23] FWIW, I think it's unlikely you'll do better on #postfix. [02:25] thanks for your help guys. I @ least have a little better understanding of WTF is going on. I just want to get it cleaned up and moved on! [02:26] does 64 bit ubuntu use a whole different set of packages from x86?? [02:27] Jeeves_Moss: I wouldn't rule out some vulnerable web application, unless you know exactly what Apache is serving. Apache would be allowed to relay. [02:27] bc, did you see anything "odd" in the postfix config that would lend itself to promoting this mess?
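ScottK's log-tracing step, made runnable. The queue ID 050A8836575 is the one from the chat, but the log lines below are fabricated stand-ins so the pipeline can be demonstrated end to end (on a real box you would grep /var/log/mail.log* to cover rotated logs too):

```shell
# Build a tiny stand-in for /var/log/mail.log.
cat > /tmp/mail.log <<'EOF'
May 20 01:10:01 mx postfix/smtpd[7578]: connect from unknown[198.51.100.7]
May 20 01:10:02 mx postfix/smtpd[7578]: 050A8836575: client=unknown[198.51.100.7]
May 20 01:10:03 mx postfix/qmgr[1234]: 050A8836575: from=<spam@example.org>, size=2048, nrcpt=1
May 20 01:10:04 mx postfix/smtp[1300]: 050A8836575: to=<victim@example.net>, status=deferred
EOF

# Trace one queue ID back to its origin; the earliest hit carries the
# "client=" field, which names the host that handed the message in.
grep 050A8836575 /tmp/mail.log | head -n 1
```

The "client=" value is the key fact: it distinguishes an external open-relay abuser from mail injected locally (e.g. by a compromised web app, which would show the box's own address).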
[02:29] Jeeves_Moss: I believe your current smtpd_recipient_restrictions should prevent unauthorized relaying in a perfect scenario I think. [02:30] bc, thanks [02:30] I'm still VERY lost [02:31] Jeeves_Moss: Let's try and figure it out. [02:32] Jeeves_Moss: do you have tcpdump or tshark? I guess you could shutdown postfix, start Apache, plug it in and see if you see any traffic from the box itself to 25 [02:32] Trying to trace 050A8836575 back from where it came from is a good start. [02:33] anyone know how to setup iptables as a firewall/router. i am trying to run through a rule set and then forward it on to its destination [02:33] setup is [02:33] cable modem -> linksys -> linux_iptables -> client [02:34] and vice versa [02:34] linux_iptables is gateway for clients [02:34] linksys is gateway for linux_iptables [02:34] npope: I'd start here -> https://help.ubuntu.com/8.04/serverguide/C/firewall.html [02:36] Although I'd use the correct version of that for whatever release you're running. [02:36] Jeeves_Moss: outside of Scott's suggestion, if you have netcat, for the 'shutdown postfix, let apache do its thing' thing, netcat -l 25 [02:36] Jeeves_Moss: sorry, make that nc -l 25 [02:38] bc, what will that show me? will it show me what process is the one causing problems and let me narrow it down a bit more? [02:40] Jeeves_Moss: no, that won't work. just grep the logs, re: ScottK [02:40] bc, I've tried that, and I'm guessing I have the syntax messed [02:41] bc, and I'm not sure how he arrived @ that string causing the problem [02:41] bc: you good with iptables? want to take a look at my config? [02:42] Jeeves_Moss: line 890 in your log paste [02:42] bc, but that's just a snippet of a 250Mb file though [02:42] Jeeves_Moss: We know that the mail is being sent through postfix, so working through the logs to understand how it got there is the essential step. [02:43] npope: I only configure iptables anytime something catastrophic happens. :) but I'll look at it.
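For reference, the restriction ordering bc recommended earlier, as a main.cf fragment (a sketch of the shape only; order matters, since permits are evaluated before the final reject):

```
# /etc/postfix/main.cf
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination
```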
[02:44] Jeeves_Moss: you have the logs there though, right? grepping that ID should show you the entire SMTP conversation [02:46] grrr, i cant figure out how to have *filter and *nat in the same file... or different files for that matter [02:48] npope: you probably want COMMIT (just guessing) [02:49] npope: e.g. *nat / some stuff blah blah / -A POSTROUTING blah blah / COMMIT [02:49] bc: i got that part [02:49] let me pastebin it [02:49] npope: p.s. apt-cache show pastebinit [02:51] bc: http://paste.ubuntu.com/440156/ [02:55] npope: this should also have a 1 in it: /proc/sys/net/ipv4/ip_forward [02:56] npope: I would also start really small and work my way up [02:59] bc: agreed, it works without the routing though :) just when i try to route the packets drop dead :( which is annoying as all [02:59] npope: this might be a good starting point, but I don't know how painful it would be. I would probably go this route: http://pastie.org/979243 [03:01] npope: if you want it to work right away, you can set default policies to ACCEPT [03:02] bc: what is mangle? [03:06] npope: altering packets, you can leave it out [03:07] bc, it looks like the postfix server is allowing anon TLS connections [03:08] Jeeves_Moss: eww, good you found the cause. [03:09] bc, I threw it back up on the shelf in the basement, killed apache and postfix, then fired up a tail on the mail.log on one screen, and then popped up postfix. within seconds, I saw the error [03:11] Jeeves_Moss: how'd it log if postfix wasn't running? [03:12] Jeeves_Moss: nevermind, I missed the 'popped up postfix'. I'm getting senile and blind.
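npope's *filter/*nat problem in file form: in the iptables-restore format, every table section must end with its own COMMIT (the missing COMMIT under *nat turns out, further down, to have been the actual bug). A minimal sketch; the ACCEPT policies and the eth0 name are assumptions, not taken from the paste:

```shell
# Write a minimal iptables-restore file with both tables; each table
# block ends in COMMIT or the load fails partway through.
cat > /tmp/rules.v4 <<'EOF'
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT
EOF

# Routing between the clients and the linksys also needs forwarding on:
#   echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
#   sudo iptables-restore < /tmp/rules.v4
```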
[03:12] bc, I fired postfix up, then used my smartphone to connect from an external IP to try sending e-mail [03:12] bc, http://pastebin.com/Cm3TmicD [03:13] Jeeves_Moss: you might have seen that using netcat without postfix running, I'm not sure, maybe put it in your bag for the future [03:13] bc, I did have nc running in another screen, and it was blind as my mother [03:13] Jeeves_Moss: it wouldn't show formatted like the postfix log, but you'd see the client speaking to nc [03:14] bc++ [03:14] LOL i didnt have commit under *nat [03:14] LOL [03:14] Jeeves_Moss: was it listening on 25? if it was, then you wouldn't have been able to start postfix [03:14] sudo nc -l 25 [03:15] Jeeves_Moss: ohhh sorry, TLS [03:16] Jeeves_Moss: 465 I'm guessing [03:17] nada [03:18] bc works now [03:18] bc: thanks for the help [03:18] arrgghhhh, I swear, this shouldn't have to be this complicated [03:19] Jeeves_Moss: prob listening on a different interface [03:19] bc, only one interface i the box [03:19] Jeeves_Moss: at least two, lo and eth0 [03:20] npope: :) [03:20] yep, forgot about that [03:21] Jeeves_Moss: you should have been: nc: Address already in use. Look at postconf -n inet_interfaces [03:21] s/been/seen/ [03:22] npope: I didn't help so much :) glad it works though [03:23] npope: that was 99.99% you heh [03:23] bc: heh it helps having a person to ping ideas off of [03:24] I'm trying to install drbd and i keep getting this http://pastie.textmate.org/private/9b79i0lkyoa3sbfwtfs4sw [03:25] http://pastie.textmate.org/private/m3zf7xu0rn3ilqe3cyhiw [03:25] more error [03:25] Jeeves_Moss: do you enforce clientside TLS certificates? [03:27] dude [03:27] why does ubuntu ship with two kernels [03:27] -21 and -22 [03:28] SORRY, kernel makefile not found. You need to tell me a correct KDIR! === rgreening_ is now known as rgreening [05:28] should i use drbd8-utils (source) or drdb-utils (.7) [05:33] Hey guys. I am getting a "no free leases" error on DHCPD. 
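A quick way to see the collision bc predicted (a port postfix already owns gives a second listener "Address already in use") and where TLS submission actually lives; `ss` is assumed available, `netstat -tlnp` is the older equivalent:

```shell
# List listeners on the SMTP-family ports: 25 (smtp), 465 (smtps),
# 587 (submission). Whatever owns a port blocks "nc -l" on it.
ss -ltn | grep -E ':(25|465|587)\b' || echo "nothing listening on 25/465/587"

# Which addresses postfix binds to is controlled by:
postconf -n inet_interfaces
```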
My dhcpd.conf is at http://pastebin.org/284779 and this only started tonight when I added the second subnet section and the last host (mx1). Could anyone tell me if they can see whats wrong? [05:35] jiboumans: Renaming dovecot-postfix to mail-stack-delivery adds ~70 lines of really annoying shell script and changes ~30 more in the maintainer scripts and so there's about two hours of my life I'll never get back. Please not again. [05:36] ScottK: didnt you propose the rename? [05:36] I think I proposed the specific name, but the idea of renaming, wasn't mine. [05:36] Or if it was, I'm a masochist and too tired to remember. [05:37] I just wanted to add I am getting the no free leases error refering to net 10.1.0.0/26 but it's occuring everytime the host connects with the MAC for the 172.16.0.126/26 fixed IP [05:37] ScottK: sommer sent the mail to ubuntu-server@ saying it was an outcome of the UDS session [05:37] Yes, it was. [05:37] ScottK: i claim innocence, but i do really like the rename so ScottK++ for making packages more discoverable ;) [05:37] ScottK: I know a few cute sadists if you're interested [05:38] jetole: Sorry. Already married. [05:38] ScottK: Oh he won't mind [05:39] Yeah, I'd imagine not. [05:39] lol [05:39] jiboumans: No worries, not really blaming you. Just needed to vent. At least it's 5 hours until I have to be up again. [05:39] Urgh. [05:39] * ScottK will test it tomorrow. [05:40] ScottK: no worries, let's vent over beer some time :) [05:40] jetole: if he's a true sadist, he'll enjoy the fact that ScottK will be in trouble with the Mrs. [05:40] it's win win [05:40] jiboumans: In other news, the Debian dovecot maintainer is interested in the package for Debian. [05:41] ScottK: that's good news [05:41] jiboumans: Right and if I was a true masochist, I would too. [05:50] hrm [05:50] are we satisfied with ext4 yet [05:50] considering it's the default install type [06:01] I'm not, but I'm happy for non-LTS users to test it [06:02] eh? 
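On the dhcpd "no free leases" question (it goes unanswered above): dhcpd logs that when it cannot offer the client an address on the subnet the request arrived from, commonly because that subnet has no dynamic range and the matching host's fixed-address belongs to a different subnet. A sketch of the shape a two-subnet config needs; the MAC below is invented, and the real one must match the interface that is actually sending the request:

```
# /etc/dhcp/dhcpd.conf (sketch)
subnet 10.1.0.0 netmask 255.255.255.192 {
    range 10.1.0.10 10.1.0.50;      # dynamic pool for this net
    option routers 10.1.0.1;
}

subnet 172.16.0.64 netmask 255.255.255.192 {
    option routers 172.16.0.65;     # no range: fixed addresses only
}

host mx1 {
    hardware ethernet 00:16:3e:aa:bb:cc;   # invented; must match the client
    fixed-address 172.16.0.126;
}
```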
[06:02] anyway [06:03] how in the world am i supposed to mount a drbd [06:03] on the slave [06:05] for some reason i think this is the desired thingy [06:09] Team, can any mentor guide/point me to "How to contribute in ubuntu-server team", I have read the doc(GettingInvolved)... but not sure where to start :)/ [06:12] deepak_: URL? [06:13] https://wiki.ubuntu.com/ServerTeam/GettingInvolved ? [06:13] twb: URL I have read it? looking advice that where to start I mean , if I see bug reports [needs-packaging] which has huge list? any pointer where ubuntu-server has specific list. [06:19] I don't know; I don't use launchpad. [06:20] I imagine there's some sort of server tag. [06:20] Otherwise, just look at bugs that deal with packages you use on your server. [06:23] twb: sound good , Thanks [06:25] deepak_: Starting with packages you use and are familiar with is best. It means you'll be able to check and confirm if bugs exist and perhaps make suggestions to bug reporters to improve their bugs or try things to solve their problems. [06:27] ScottK: ok.... [06:27] so ubuntu does not have something like in Debian (RFH or RFA).... [06:30] jiboumans: good point [06:31] let the sadomasochism roll [06:31] when is it necessary to use a dlm like ocfs2 or gfs2 [06:32] webPragmatist: it has to do with file locks for example if you don't want people to edit the same file at the same time but the d is for distributed meaning it works with ocfs of gfs and makes sure if someone is editing a file on server 1 then someone else can't edit it on server 2 [06:32] at least I believe that is how it works [06:32] webPragmatist: I don't use dlm on my ocfs2 systems [06:33] but drbd already doesn't let you mount your distributed resource [06:33] webPragmatist: who told you that? [06:33] well [06:33] if it's asynchronous ? 
[06:33] hold on and I will give you the url [06:33] i'm telling myself that [06:33] because i can't mount it anyway for whatever dumb reason [06:33] without it bitching about read only [06:34] http://www.drbd.org/users-guide-emb/s-enable-dual-primary.html [06:34] but [06:34] ocfs or gfs is pointless without dual primary [06:34] right [06:34] you may also want to look into the split brain documentation [06:34] that's what i've concluded too [06:34] right [06:34] i understand those concepts [06:35] but my server setup is very simple and I am wondering if it is necessary [06:35] it's just two nodes [06:35] for failover [06:35] ocfs (let's assume I mean both from now on) is useful without dual primary but it's main purpose is to allow dual writing on clustered filesystems simultaneously [06:35] webPragmatist: in that case, you really really don't need dlm [06:35] jetole: so in the event you were trying to loadbalance [06:35] dlm would be useful [06:35] no [06:35] o.O [06:36] i use ocfs + drbd + dual primary on both load balanced and mail servers without dlm [06:36] it has to do with file locks but for the most part file locks are usually only required for specific needs or applications that really require them [06:36] right well in your case [06:37] also you will need primary/primary for load balancing [06:37] the mail would only get processed [06:37] once [06:37] (ideally?) [06:37] webPragmatist: I don't mind the web servers reading file synchronously and I doubt to programmers will upload the same file at the same time [06:38] A distributed lock manager (DLM) provides distributed applications with a means to synchronize their accesses to shared resources. http://en.wikipedia.org/wiki/Distributed_lock_manager [06:39] jetole: do you have other servers to process the load balancing? 
[06:39] i've always seen like at least 3 nodes to do load balancing [06:39] here is a good example, fail over vm hypervisors using shared storage but you don't want them both using the same image at the same time and dlm acts sort of as a predecesor to Shoot The Other Node In The Head (STONITH) [06:39] atleast network load balancing [06:40] hrm [06:40] webPragmatist: won't say on mail. As per web servers, they are actually using multi path iSCSI to dual file servers [06:40] also you can do up to 4 hosts via drbd [06:40] http://www.drbd.org/users-guide-emb/s-three-nodes.html [06:40] you can apply the three node setup up to 4 however [06:41] i don't really have 3 nodes to work with… you are saying you have load balancing on your web servers? [06:41] If you plan to exceed 4, look into a distributed file system like lustre, glusterfs or... whats the other one, ceph? [06:41] webPragmatist: yes [06:41] how many nodes? [06:42] sorry to say but none of your business [06:42] don't mean to be rude, it comes down to corporate guidelines [06:42] uhhh [06:42] I can't share everything but I can tell you that you can do as many as you like [06:42] i'm asking [06:42] can you load balance with 2 [06:43] oh sure [06:43] you can tell a load balancer to only use 1 though it defeats the purpose but comes in handy for testing and debugging [06:44] well what I am saying is I always see a third pc to "balance" the junk between the other two [06:44] or rather often 4 [06:44] the third pc is a load balancer [06:44] or reverse proxy [06:44] or some sort of high level switch [06:44] yeah [06:44] i assume [06:44] hrm [06:44] layer 4/5 switch if I am not mistaken [06:45] then again I think it falls under the guise of load balancer at that point [06:45] then again, you can setup a linux system as the load balancer with either haproxy or ipvs [06:45] I would choose ipvs but it's more complex [06:45] right [06:46] so i guess coming from drdb8 world… whats the purpose of using ocfs2 other than the 
locking [06:46] drbd* [06:46] point being, you need to distinguish between a web server and a load balancer, granted they can be on the same system (though it defeats the purpose) but you only need two web servers to load balance [06:47] jetole: at this point i don't have a third node, switch, load balancer, whatever to do that [06:47] so a simple failover is the next best thing [06:47] ocfs does not handle locking afaik unless dlm is used but if you use ext2/3/4, reiserfs or any other normal file system that is written by two different machines simultaneously then it will be corrupt quicker then you can expect [06:48] How do you plan to do the fail over? [06:48] anyways, ocfs/gfs are cluster file systems designed to handle writes from several nodes at the same time [06:48] ext4 for example is not [06:49] your inodes will be corrupt in no time [06:49] ahhh [06:49] okay [06:49] webPragmatist: that only applies to primary/primary afaik [06:49] so back to the dual primary situation [06:49] ext2/3/4 should probably work with drbd fail over [06:49] okay this makes sense [06:50] how do you plan to fail over the servers? [06:50] i've read alot of this just the application was mush [06:50] it's cool. I have been working with it all only a short time myself but by that I mean since about the turn of the year [06:50] maybe a little longer [06:51] Anyways, how do you plan to do the failover with only two nodes? [06:51] Oh YUK. 
[06:51] well currently i have gone as far as to create a drbd with pacemaker, corosync, csync2, and now i'm at a point of whether i should just go ahead and use GFS2 or OCFS2 [06:51] heh [06:51] I just noticed that VMware Server runs its VMs at nicenes -10 [06:52] jetole: but that's all confidential [06:52] shhh [06:52] webPragmatist: but how will you direct traffic to node2 if node1 goes down without a load balancer / failover machine [06:52] (Admittedly, I noticed this because after inode-bombing the host ext3 filesystem, and the VM was still running smoothly.) [06:52] webPragmatist: I won't tell my boss [06:52] jetole: uhm? multiple ips? [06:52] twb: interesting. I use kvm :D [06:53] webPragmatist: and have DNS point to both IP addresses for the A RR ? [06:53] I assume that's how it works lol [06:53] jetole: I *wish* I was allowed to [06:53] shared ip [06:53] twb: although now that you mention it, I kinda like the nice option you just mentioned but kvm is kernel level already [06:54] webPragmatist: you can't really run them both at the same time with the same IP. This will cause TCP hell [06:54] jetole: only the virtualized parts of [06:54] righ [06:54] ...it are [06:54] jetole: right… heartbeat or whatnot [06:54] jetole: the userspace part is still a normal process [06:54] switcharoos the node to the correct ip [06:54] twb: for vmware of kvm? [06:54] well both probably [06:54] jetole: both [06:54] I know with kvm it is [06:54] Nobody should run vmware-server [06:55] jetole: is that not ideal? [06:55] been... I don't know, 18 months+ since I played with ESX [06:55] ESX is a different beast entirely (I'm told) [06:55] webPragmatist: not sure. Will heartbeat turn on one IP address if another one goes down? [06:55] twb: oh you're using server? [06:55] Yes :-( [06:56] oh well then yes. It's a very very different beast. 
Server isn't a hypervisor if I recall correctly [06:56] It's no [06:56] *t [06:56] jetole: sureeeeeeeeeee https://wiki.ubuntu.com/ClusterStack/LucidTesting#Overview [06:56] It's more like qemu+kqemu [06:56] Only shit [06:56] right, actually I think ESX uses LKMs [06:56] webPragmatist: then that sounds like it would work [06:56] ESX is a linux product, certainly [06:57] webPragmatist: why not just invest in a cheap little server to act as the load balancer? [06:57] jetole: because all this is dedicated hosting [06:57] twb: I know it is but playing on the console in ESX is taboo to begin with [06:57] lol [06:57] and cheap little server is = not cheap [06:57] webPragmatist: can I ask who with? [06:57] brb. Gotta piss [06:58] bobsbadassdedicatedservers.com [06:59] you could have just said no [06:59] twb: are you pretty good with dhcpd? [06:59] it's softlayer [06:59] No, I use dnsmasq. [06:59] ah [07:00] I got an issue with mine that I am trying to solve, only started tonight when I added the second subnet [07:00] they sell loadbalancers [07:00] but whats 250 connections mean lol [07:00] 250 simultaneous connections? 
[07:00] webPragmatist: that 250 people can connect to your site at the same time [07:00] or one person with a simple DoS flood [07:01] sounds like a bad idea [07:01] meh [07:01] http://www.softlayer.com/services/network/ [07:01] oh softlayer is the company [07:01] sounds like you are creating more of a bottleneck [07:01] than if you were to just let 1000+ connections spam the server… failover (if it even would) [07:02] webPragmatist: it wouldn't [07:02] that's just my take lol [07:02] the load balancer would just block any more connections [07:02] you web server wouldn't see it [07:02] right [07:02] i'm saying [07:02] buying the load balance = bottleneck [07:02] well if they are limiting connections yes [07:02] but remember a load balancer uses far less resources to load balance then a web server [07:03] well [07:03] a single load balancer creates a single point of failure however [07:03] i could imagine a single server being able to handle 250 connections [07:03] now setting up two load balancers on a shared IP with something like CARP doesn't sound like a bad idea [07:03] for what i do [07:04] 250 is small [07:04] actually shit [07:04] yea [07:04] global load balancing is ridiculously expensive [07:04] and stupid [07:04] now global disaster recovery is a good option [07:05] where you have a site in seattle that goes up if your data center in miami gets hit by a commet [07:05] yea we already do that [07:05] we have one node in seatle and one in dallas [07:05] do you work for a company? [07:05] maybe [07:05] I mean for the load balancing, is this for a company or personal? [07:05] oh well it's both :) [07:06] the company i work for said implement some redundancy [07:06] i said okay [07:06] well for a company, present to them the cost of one server going down vs. 
the cost of a load balancing system and then tell them to get out their fscking check books [07:07] jetole: well… what i have going against me is the only time our primary node has EVER gone down is due to some stupid dns issue that softlayer had [07:07] an ounce of medicine vs. the price of the cure or however that old saying goes [07:07] this is like 3 years so far [07:07] heh [07:07] my utime is like 900 something [07:07] 3 years is an important time then [07:08] well [07:08] a lot of server companies offer warrenties for 3 years for a reason [07:08] heh... [07:09] jetole: with that said though we are actually switching servers so we will be getting new hardware [07:09] so another 3 years :) [07:09] present the cost of high availability vs. the cost of money lost for reasonable expected downtime and let them decide [07:09] webPragmatist: thats beside the point [07:09] HA doesn't exist exclusively for old hardware [07:10] :) [07:10] if it did companies like MS wouldn't need/want/use it [07:10] google can afford new servers daily [07:11] I'm actually dying to know how google has HA setup. Someone suggested the pod concept to me [07:11] basically it is [07:11] i'm more concerned with me fubbing up drbd than some load balancing [07:11] they have groups of racks. each rack has file servers on HA and a group of web servers [07:12] group the racks together and use the google FS on the file servers which is their own software to mimic lustre/glusterfs [07:12] i wonder if they use vm [07:12] I don't know [07:13] I doubt I ever will unless I work for them and if I am do I am sure I will sign a contract that says if I tell anyone then the google gestapo will "eliminate" everyone I have ever known and myself [07:13] jetole: is there a disadvantage to using ocfs2 if i'm not planning on imediately doing load balancing? [07:13] God I love those 5-hour energy shots [07:14] just processing overhead? 
[07:14] webPragmatist: it's more complex to setup but that doesn't mean it is complex. In fact it isn't [07:14] webPragmatist: the overhead is negligable [07:14] jetole: that article has how to set it up [07:14] the drbd one I posted? [07:14] probably [07:14] no [07:14] the lucid [07:15] oh [07:15] https://wiki.ubuntu.com/ClusterStack/LucidTesting#Overview [07:15] http://www.drbd.org/users-guide-emb/ch-ocfs2.html [07:15] they go through and piece together a HA server [07:15] yeah I have skimmed it before [07:15] roaxsoax gave it to me before the lucid release [07:15] cool i will read this [07:16] btw, pick ocfs2 over gfs [07:16] dude [07:16] this just sounds cool [07:17] dude!!! [07:17] DUDE WHERES MY OCFS [07:17] 2 [07:17] asdfasdfadsff [07:17] Dude!!! where is your ocfs? [07:17] I dunno dude!!! where is my ocfs? [07:17] :P [07:18] stuck it up my ext4 [07:18] too vivid [07:18] ext2 rather [07:18] lol [07:18] i'm no alien [07:21] jetole: so going back to pre-dual primary… only one node should be allowed write access to the drdb device? [07:21] bd* [07:21] correct? [07:21] if it's primary/secondary then yes [07:21] cool [07:22] i should be able to mount and see the files from secondary though? [07:22] I don't believe so [07:22] so it seems [07:23] i wasn't sure if i didn't have it configured correctly [07:23] take a look at /proc/drbd [07:23] <_chris_> heja all [07:23] <_chris_> im pretty new to linux and haveing a virtual test server [07:23] jetole: doesn't exist? [07:23] ? [07:24] webPragmatist: then you have something configured wrong [07:24] <_chris_> when i log on to my ubuntu-server it tells me there is 1 zombie process, how can i 'Find' it and kill it ? [07:24] jetole: or i'm nt rute [07:24] root [07:24] jetole: what am i looking at [07:24] webPragmatist: it should exist regardless but sudo to root [07:24] oh hrm [07:25] why's it say on;ly 14% sync [07:25] webPragmatist: and does it show a sync rate? 
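On the zombie hunt for _chris_: the reliable marker is the process state column, not square brackets (bracketed names in ps are kernel threads, a point the chat corrects itself on just below). A small sketch:

```shell
# Zombies show "Z" in the STAT column and "<defunct>" after the name;
# the header row is kept so the columns stay labeled.
ps -eo stat,pid,ppid,comm | awk 'NR == 1 || $1 ~ /^Z/'

# kill -9 does nothing to a zombie -- it is already dead. It disappears
# when its parent (the PPID column) calls wait(), or the parent exits.
```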
[07:25] I forget what it looks like since mine are all in sync [07:25] yea like 470 kbs [07:25] thats low [07:25] are these two servers in different locations? [07:26] no lol these are two vms [07:26] http://pastie.textmate.org/private/fgvf3o3avyjvzh7eboterw [07:26] on the same machine? [07:26] ya [07:26] look at the rate option under syncer [07:26] man drbd.conf [07:26] will do [07:26] change it on both [07:27] then run drbdadm adjust on both [07:27] _chris_: don't think you can kill a zombie [07:27] but... [07:27] to find it, run ps aux [07:27] look for a process with [ and ] around it [07:27] i.e. [sshd] [07:27] I think [07:27] ... [07:27] * jetole looks [07:28] anyways if it can be killed, send it signal 9 [07:28] as in kill -9 pid [07:28] no it's not [ and ] [07:28] yeah you can't kill it anyways [07:28] but it's not running or doing anything [07:28] it will die eventually [07:29] _chris_: you can kill the parent process if you like [07:29] ah [07:29] default is 250 KB/s [07:29] _chris_: a zombie is a dead process that the parent process didn't wait() for [07:29] that would do ti [07:29] it* [07:29] webPragmatist: yes it is and yes it would [07:29] lol [07:30] jetole: is there a more standard way at looking at that status [07:30] than snooping through /proc [07:30] webPragmatist: whats wrong with proc [07:30] i dunno [07:30] never used it [07:30] there is some way through drbdadm I think but /proc is preferred [07:30] yea [07:30] there is lots of good info in proc [07:31] the drbdadm way just reads proc and re displays parts of it [07:31] but afaik it doesn't clean it up or make it more human readable [07:32] Hello... does anyone know if Ubuntu Enterprise Cloud can be added to an existing Ubuntu Server installation... 10.04 ? 
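The syncer rate change suggested above, as a drbd.conf fragment (the resource name r0 and the value are examples; the 250 KB/s figure is the default quoted in channel):

```
# /etc/drbd.conf (fragment) -- keep it identical on both nodes
resource r0 {
    syncer {
        rate 33M;    # example: allow ~33 MB/s of resync traffic
    }
}
```

Then, as discussed, `drbdadm adjust r0` on both nodes applies the change without restarting drbd.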
[07:33] webPragmatist: drbdadm role, drbdadm cstate, drbdadm status, drbdadm dstate [07:33] smoge8899: yes it can [07:34] smoge8899: run tasksel [07:34] jetole: yea there needs to be like a summary wtf [07:34] well [07:34] thanks very much for your help [07:34] drbdadm status is actually xml output [07:34] i'll continue this tomorrow [07:34] have fun [07:34] Reason I ask - my hosting provider will install 10.04 LTS, but the install is automated... so I can't select "Install Cloud".... and need to add it afterwards. [07:35] webPragmatist: also, run the command man man [07:35] smoge8899: I already told you how [07:36] it's funny how linux has the command man man but not man women or women man [07:36] I hear it has women donkey on the mexican version [07:36] o.O [07:37] :P [07:37] they should call it woman [07:37] ttx: we meet again.. [07:37] cause they always read the instructions [07:37] guys are lucky to type --help [07:37] ttx: you there? I was wondering about merging the moin package... [07:38] webPragmatist: maybe, maybe not but I tell you that I am the only one I let cook in my kitchen [07:38] and my girl friend doesn't complain [07:38] heh [07:39] I meant she doesn't complain about the food by the way. [07:40] SpamapS: yes [07:40] SpamapS: stuck in hundreds of spec review emails [07:41] SpamapS: take it, it's yours [07:41] * ttx intervened in moin by accident [07:41] ttx: hah, ok.
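For the sync-rate tuning discussed earlier (the 250 KB/s default, changing the rate on both nodes, then running `drbdadm adjust`), the relevant knob looks roughly like this — a sketch of a drbd.conf fragment, not a complete resource definition; the 40M figure is an arbitrary example for two VMs on one host:

```
# Fragment of /etc/drbd.conf (inside a resource section) -- raise the
# resync throttle from the conservative 250K default.  Change it on
# both nodes, then run "drbdadm adjust <resource>" on both.
syncer {
    rate 40M;
}
```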
:) [07:43] ok - so for the Cloud Controller Server, I would choose "cluster controller" or "top-level cloud controller" [07:43] using tasksel [07:43] not sure but I believe "top-level cloud controller" since cluster does not necessarily mean cloud [07:43] smoge8899: the installer does nice extra installation steps though [07:44] ttx: He mentioned it's managed hosting doing a default install [07:44] smoge8899: for a first-time install, I'd suggest using the UEC installer to save you some post-setup pain [07:44] hm [07:44] yep [07:44] <_chris_> jetole, sorry was not at the pc before, ok i saw the zombie just disappeared somehow [07:44] Ok - I'll setup a VM and give it a try. [07:45] "suggest using the UEC installer" - yes - would love to - but it is not available to me [07:45] as the host does the server install... I need to add afterwards [07:45] smoge8899: with the package-based install, you have to do networking setup and key sync yourself, basically [07:45] _chris_: you never really have to worry about zombies unless you have some situation where they are appearing faster than they die and you have hundreds [07:46] _chris_: and when that happens it's a programming bug in the application creating them [07:46] zombies always die on their own [07:46] <_chris_> ah ok :) [07:47] <_chris_> thanks for the info [07:47] no prob [07:48] ok - I'll give it a shot.. [07:48] one last question.... [07:48] for now - ha ha [07:48] <_chris_> just noticed after having the server up some weeks that sometimes a zombie appears and didnt know if this was something bad ^^ [07:48] is Enterprise Cluster (private cluster) production ready? [07:48] yay vmware fusion for mac supports lucid finally [07:49] _chris_: it can be caused by a lot of things but the occasional zombie is normal [07:49] SpamapS: why couldn't it install it before? [07:49] i mean [07:49] i use parallels and it's always worked?
[07:50] problem with mrtg: i use "ssh -t user@host 'sudo command'" in my script to get hd-temp in a variable. the script itself works from cli, but when i use it in mrtg as "Target[]: `/path/to/script` it doesn't work. [07:51] webPragmatist: something in their easy-install procedure where they took a value from some config file, shoved it into another one, and b0rked the keyboard in gdm [07:52] trapmax: thats really.. an awful way to do monitoring [07:52] trapmax: consider munin or collectd.. much more sane. :) [07:59] Thanks for your help! Gotta run! [08:08] SpamapS: any ideas though? the same script without the "ssh -t user@host" -part works well enough [08:29] trapmax: no with -t I would expect it to work the same as if you had logged in. [09:17] * SpamapS decides he needs at least 4 hours of sleep before tackling moin... [09:29] SpamapS: about the thrift packaging, it will need adaptations to be fully policy-compliant... but it's not in the bad shape i imagined it would be [10:12] How to install xlibs in lucid ? [10:15] Define "xlibs" === schmidtm_ is now known as schmidtm [10:30] hello, having problems getting a Qemu-KVM bridge to work nicely with IPtables; all traffic shows up as "martian", while the firewall used to work quite nicely with Xen... can anybody help? I've got the firewall script in ubuntu pastebin.. [11:01] are there any big advantages to using apt-proxy instead of a general http proxy like squid? [11:02] It's a "smart" cache [11:02] In that it will read the index file and purge cached objects that are no longer part of the release [11:02] That kind of thing [11:02] I know that, but how big are the advantages? [11:02] However, in production I found apt-proxy and apt-cacher to be very very flaky [11:03] Instead I now run debmirror(1), which is working solidly [11:03] squid should be solid though?
[11:03] It depends how much you've tweaked it [11:04] For example, here we were caching Packages.gz but not the Release file, so what apt saw was a bad checksum. [11:04] I see there is something called squid-deb-proxy. Have you tried that one? [11:04] I have not [11:04] How many hosts do you have? [11:04] twb, I've had problems with that here as well. I thought I'd made a mistake, but I couldn't figure out what I did. [11:04] not very many. 20-30. [11:05] Running what? [11:05] ubuntu desktops. [11:05] In particular, do they track main or also universe/multiverse, and do they use ubuntu+1 or a stable release? [11:05] they use the current lts. [11:06] they use universe, but not multiverse. [11:06] To track hardy/main, single arch, no sources, is under 10GB [11:06] For me, that was a negligible hit [11:07] yes, diskspace is not a problem. I want a good tool. And if I could use it as a general web cache as well, it would be wonderful. [11:07] I think at customer sites I'm keeping a mirror of hardy/* and hardy-*/*, single arch, no sources, and that's about 30GB with about 100MB per week of updates [11:08] Maybe as much as 300MB a week if something like openoffice is binNMUed [11:08] So I have the hosts pointing at an NFS export of that debmirror for apt, and browsing goes through squid [11:08] For me, that works quite well [11:09] twb: the coherency thing is an apt-client + server config issue exacerbated by the apt archive design [11:09] I've used apt-mirror in the past. It worked well. Do you have any experience with that? [11:09] twb: just saying :) [11:09] lifeless: granted [11:10] lifeless: I'm sure once apt-bittorrent takes off everything will be much, uh, better [11:10] SpamapS: thanks for the advice. collectd does everything better [11:10] lol [11:10] I also love how the recommended tertiary mirror software isn't in the archive :-/ [11:10] rsync?
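A sketch of the debmirror(1) setup twb describes above (single arch, no sources, tracking a stable release plus its updates); the host, target path, release names and architecture are illustrative placeholders, and the exact option spelling should be checked against the man page:

```
# Illustrative debmirror invocation -- mirror hardy main/universe,
# i386 binaries only, over http (host and paths are placeholders):
debmirror /srv/mirror/ubuntu \
    --host=archive.ubuntu.com --root=ubuntu --method=http \
    --dist=hardy,hardy-updates,hardy-security \
    --section=main,universe --arch=i386 \
    --nosource --progress
```

Clients can then point at the mirror via NFS or a local http export, as described in the conversation.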
[11:11] No, I mean a tertiary mirror, like your ISP and your school run [11:11] As opposed to a quaternary partial mirror like you might run at work [11:11] I can't remember what it was called, though [11:12] While we're on the subject, can apt be made to use rsync:// URLs? [11:13] No. [11:14] Pity [11:14] It's not like foo_1-1 and foo_1-1.1 are gonna differ much [11:15] And my ISP happens to export /pub with rsyncd as well as http/ftp [11:15] That's https://blueprints.launchpad.net/ubuntu/+spec/foundations-m-rsync-based-deb-downloads [11:15] Shiny [11:17] Heh. Is it just me, or is CJWatson running half of Ubuntu? :-) [11:18] "The apt-sync package is now included in Ubuntu and will make upgrades faster." [11:18] ...I can't see it in rmadison [11:18] I'm just the approver on that, which is because that spec has been carried over from a point when I managed the team responsible for that [11:19] twb: that's in the "Release Note" section. The point of that section is to write, in advance, something which would be suitable for integration in the release notes when it's complete. [11:19] Oh, I see. [11:19] so it's written in the present tense because that's how the release notes are, but read it as if it were in the future tense [11:20] It's for the RM/PR teams to copy-and-paste into ANN posts [11:20] hello everyone. I'm running 6.06 on one of my servers, and suddenly my apache went offline. Now it cannot start and give me this error: http://dpaste.com/199862/ I disabled cgi module (I don't use it) but really curious about what the real problem could be. Any help please? [11:32] when you enter an HTTP proxy during the server installation, then that's only used during the install and is forgotten afterwards, right? [11:32] jo-erlend: no [11:33] jo-erlend: it's normally written to /etc/apt/apt.conf [11:33] YMMV depending on exactly how you do the installation. [11:35] thanks.
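As noted above, the proxy entered during installation normally ends up in /etc/apt/apt.conf. The entry looks like this (host and port are placeholders); editing or removing the line changes apt's behaviour after installation:

```
// /etc/apt/apt.conf -- written by the installer when a proxy is given
Acquire::http::Proxy "http://proxy.example.com:3128/";
```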
[11:41] New bug: #586285 in mysql-dfsg-5.1 (main) "package mysql-server-5.1 5.1.41-3ubuntu12 failed to install/upgrade: Paket ist in einem sehr schlechten inkonsistenten Zustand - Sie sollten es erneut installieren, bevor Sie es zu entfernen versuchen." (German: "package is in a very bad inconsistent state - you should reinstall it before trying to remove it") [Undecided,New] https://launchpad.net/bugs/586285 [12:00] Anyone has any ideas what could be causing such high memory use? < http://itstar.co.uk/memleak.png [12:02] only thing I can think of is the md0_resync process that's running - but it's not showing any memory use [12:04] hackeron: echo 3 > /proc/sys/vm/drop_caches will free it [12:04] it looks like os cache ;) [12:05] to prevent you can add vm.swappiness=0 to your sysctl.conf [12:05] binBASH: did you look at the screenshot? -- notice only 6mb is cached [12:06] hackeron: please pastebin the output of "free -m" [12:06] binBASH: I took the screenshot after running echo 3 > /proc/sys/vm/drop_caches -- running it again makes no difference to used ram [12:06] Under normal circumstances, linux should have 100% utilization of RAM [12:06] Since it caches disk blocks in unused parts of RAM [12:07] twb: http://pastie.org/979675 [12:07] I don't know if top(1) is counting those [12:07] Hmm, OK. [12:07] It was, but you don't have much cached anyway [12:07] twb: aha, so where's the memory going? [12:08] Here, I have 732MB used, but 512MB of that is disk cache [12:08] hackeron: I don't know yet. [12:08] right, but I have almost no disk cache, so where is it going? :) [12:08] only process eating cpu is md0_resync (I use software raid) - I suspect it may be the culprip [12:08] culprit* [12:09] did you sort by mem usage in top already? shift-M [12:09] binBASH: yes, I did, you can see in the screenshot [12:10] you don't happen to run a sphinxsearch (searchd) right?
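The cache-related commands traded above, collected into one sketch. Reading the current values is safe anywhere; the write operations need root, and vm.swappiness=0 is a specific tuning choice from the conversation, not a general recommendation:

```shell
# Read-only inspection -- safe on any Linux box:
cat /proc/sys/vm/swappiness        # how eagerly the kernel swaps; default is 60
# As root, drop clean page/dentry/inode caches (they refill on demand):
#   sync; echo 3 > /proc/sys/vm/drop_caches
# To persist a swappiness change across reboots, add to /etc/sysctl.conf:
#   vm.swappiness=0
```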
[12:11] erm, no [12:11] because I run it here and it's ram usage is also not shown in top ;) [12:11] CPU isn't RAM [12:12] Incidentally, a text dump of top's output would've been easier to read [12:14] twb: here: http://pastie.org/979684 [12:16] Hmm. I wonder why I have 52 "udev --daemon" processes still running [12:21] hi all how can I copy files from windows to a ubuntu server instance that I connect to via putty (ssh) on windows? [12:22] tschundeee: sftp [12:23] for example via filezilla [12:23] binBASH: okay that sounds good [12:23] soI download filezilla and connect to my server [12:24] yup [12:24] with your ssh user [12:25] binBASH: thx a lot... usually I am using osx with cyberduck for that [12:25] :) [12:27] tschundee: I use cygwin on my Windows XP/ Server 2003 instances. It allows me not only to do file copy but SSH and schedule cron jobs on my win machines. Only if you want to get this deep. If you only need file transfer, use Filezilla. [12:30] tschundeee: your welcome [12:31] cygwin isn't for the faint of heart [12:31] FWIW, Windows also has its own cron-like scheduling infrastructure [13:28] morning [13:29] morning to you too, though it's 14:30 here. :) [13:31] hello to all. i have just started the server and thinked i can login fast on the server but for some reason the programm fsck is running now the whole time and i have a big raid disk [13:32] question: how can i stop this and second whats the best possibility to disable this fsck as it break my works flow especially on a server with no Screen [13:32] till fsck is not finished i am not able to login over ssh to see whats the problem [13:32] I think I'd examine the reasons why fsck is running all the time. [13:33] needed now to find extra a Monitor to Plug In and to see whats the Problem [13:34] it looks now it would hangs [13:34] it dont happen anything now [13:34] but you should still attack the cause and not the symptom. 
[13:35] jo-erlend: does fsck not periodical check the disk everytime you startup ubuntu after a few time or is this now different [13:36] it does, yes. [13:37] so why that. i have a big raid disk and no monitor, keyboard and such things attached on the server. why does fsck run automatic and hang the full server [13:37] till fsck not finish i am not even able to login over ssh [13:37] because a system is useless if the filesystem is broken. [13:37] mhm. How often do you reboot your server? [13:38] nearly every day as it is a test server [13:38] twice a day at least [13:38] in the last weeks [13:38] oh, ok. You can set the check frequency. I don't remember where the setting is located though, but you shouldn't have any problems finding it. [13:38] and now what should i do it dont happen anything ! [13:39] okay thanks for suggesting that [13:39] need now only to boot in the system [13:39] but it hangs [13:39] what should i do [13:39] cold restart [13:39] wait until fsck is finished? [13:40] it dont finish have rebooted now. this auto fsck sucks on a server [13:40] hehe, yes, it's horrible... [13:41] better to be lucky than good...- [13:42] okay it boot [13:42] ohh noo i got fsck again [13:42] why don't you let it complete? [13:42] ohhh man [13:42] do you understand what fsck does? [13:42] i dont have the time [13:43] that is the most stupid moment to do this fsck [13:43] and i dont see also any progress bar or simmilar that indicate how long it would need [13:43] yes, it's insane to make sure your system is intact on a important computer. [13:44] if you let it finish, then it will probably stop checking your filesystem at every boot. [13:44] and after some boots then the whole fsck again [13:45] at least the possibility to cancel the fsck should exist from my point of view [13:45] yes, if you choose not to change the setting. [13:47] exactly this is what get me angry. 
a presetting decide that the server hangs now for 1 Hour at the most stupid moment [13:47] hehe, some people actually use servers for other things than testing. [13:47] sometimes they prefer it when their systems are stable too. [13:48] yeah but this fsck can also be done at shutdown or not ? why on boot [13:48] because if there is a power outage, for instance, then data may be lost. It makes sense to check it immediately, and not wait two months until the next reboot. [13:48] when somebody boot the system he dont want to wait hours till the system is ready this just dont make sense [13:48] this can also happen if the system freezes so you have to do a cold reboot. [13:49] Hello, I have sudo access to a ubuntu server where two instances of MySQL are running. Using the mysql (or mysqldump) command drives me to mysql4 but the installed websites are running on a mysql5 instance. UnfortunatelyI can not find the bin to start mysql5. I'd appreciate any help [13:49] If you don't want to wait for fsck, run a journalling filesystem [13:49] DBs are there. I can see them within /var/lib/mysql [13:50] as i have installed ubuntu lts on the server it used ext4 as file system [13:50] hi guys, I need to add a user into my remote server.. and the user should have no pass, but use a public/private key to log in. How to do it? [13:50] so from your answer then fsck is not needed or i am wrong now [13:50] xperia_: then take comfort in the knowledge that with ext2 it would have been an order of magnitude slower [13:50] xamanu: do you need to start mysql ? [13:51] I'd certainly run an fsck if people were pulling the power cord from my server several times a day. 
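The boot-time check frequency mentioned above ("You can set the check frequency") is controlled per-filesystem by tune2fs on ext2/3/4. A sketch — the device name is a placeholder and the commands need root:

```
# Show the current counters and intervals (device is a placeholder):
#   sudo tune2fs -l /dev/sda1 | grep -Ei 'mount count|check'
# Check every 60 mounts or every 6 months, whichever comes first:
#   sudo tune2fs -c 60 -i 6m /dev/sda1
# (-c 0 -i 0 disables periodic checks entirely -- generally unwise.)
```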
[13:51] xperia_ no it is running already [13:51] xperia_ I want a db dump [13:51] xperia_: it should be better again when btrfs is ready [13:51] xamanu: a mysqldump you do this way [13:51] for a database [13:51] xperia_ but the mysqldump command is linked to the wrong version/instance of mysql and I don't know where is my mysql5 [13:52] ahhh okay [13:52] did you looked at /usr/bin [13:52] "/usr/local/bin" [13:52] and such places [13:53] xamanu, you can use apt-file to search for files in packages. That will tell you where the file gets installed. [13:54] You can't have both mysql4 and 5 on an Ubuntu system without working around the packaging system [13:54] Which means that the state of that system is anybody's guess [13:55] twb: strange thing is that with fsck dont happen anything. no numbers are changed. no progress bar nothing [13:55] xperia_: what was the last output? [13:55] unfortunately I haven't done any configuration on this system. just trying to get a dump out of the live system [13:55] xamanu: try "which mysql" [13:56] Hmm, that's no good [13:56] Jeeves_Moss: did you get your mail server sorted? [13:56] type -a z [13:56] twb: gives me the link to /usr/bin/mysql which is the bin of mysql4 but I need the mysql5 [13:56] xamanu: what does "type -a mysql" report? [13:57] twb: /dev/cciss/c0d0p1: clean, 63186/2662400 files, 533281/10639872 blocks [13:57] twb: same thing "mysql is /usr/bin/mysql" [13:57] xamanu: OK, so mysql 5 is definitely not in your path. [13:57] xperia_: is that one of those half-assed IBM raid controllers? [13:57] twb: but the websites are using it :-) where could I look for it? [13:58] xamanu: you could try looking at the process table, finding a mysqld instance, and looking at its /proc/<pid>/exe symlink to find out where it lives.
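The /proc lookup twb suggests can be sketched like this. It is demonstrated on the current shell's own PID; on the server in question you would substitute the PID of the running mysqld (e.g. from pgrep mysqld):

```shell
# Resolve a running process's real binary via /proc/<pid>/exe.
# $$ is the current shell; on the affected server, use the mysqld
# PID instead, e.g.  pid=$(pgrep -o mysqld)
pid=$$
exe=$(readlink "/proc/$pid/exe")
echo "$exe"
```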
[13:58] xamanu: since it's not in /usr/local, it's probably in /opt [13:59] Ah, HP, not IBM [14:00] xperia_: it prints that at the end of the fsck run, so either it's fscking the next partition, or it's hung on the NEXT step in the init process, without printing anything [14:02] twb: it is a "hp proliant ml530 g2" server. till yet everything worked fine. just today i wanted to boot the server and now that [14:02] twb: thanks! it says /usr/sbin/mysqld - as I understand this is the deamon and not the bin to gain shell access [14:02] what is wrong with fsck ? [14:02] xamanu: and "mysql --version" reports 4.x, not 5.1? [14:02] how can i kill it [14:03] xperia_: are you running 8.04 or 10.04? [14:03] the LTS Version [14:03] new LTS version [14:03] They're both LTS [14:03] twb: no it says 5.1 [14:03] the new released this year [14:03] xamanu: so what's the problem? [14:03] xperia_: well, prior to upstart, you could hit ^C and kill off just about any init script. [14:03] twb: but mysql -u root -p leads me to the mysql4 [14:04] xperia_: last time I looked, upstart didn't have that, so you are royally screwed [14:04] xperia_: you could try a ctrl+alt+del and bounce into busybox and recover from there [14:04] xamanu: I don't know what you mean by that. [14:04] twb: Server version: 4.1.14-pro [14:05] xamanu: OK, so you have a mysql 5.1 client, a mysqld 4.1 in the usual place, and a mysqld 5.1 running somewhere else. [14:05] ctrl alt del works but it reboot direct [14:05] and it hangs again [14:05] at fsck [14:06] twb, if fsck doesn't complete, then it'll be run at next boot, right? [14:06] twb: i guess :D what can i do to access the mysql5? [14:06] xperia_: now stick a "single" or a "break" in your boot script, so that you can get into a recovery shell. [14:06] xamanu: find out where it lives and point the mysql client at that place [14:06] xamanu: where "lives" is probably an IP and a port, or perhaps a socket. [14:06] jo-erlend: right. 
[14:06] xperia_: did you recently UPGRADE to 10.04.0? [14:07] twb: no fresh intall [14:07] twb, he never lets fsck complete, which is why it runs all the time. [14:07] xperia_: since I don't have any other ideas, I suspect that either 1) upstart isn't running the jobs it should; or 2) your RAID controller/driver is screwy. [14:07] jo-erlend: how do you know fsck isn't completing? [14:07] jo-erlend: oh ,right, I see. [14:08] till yet everything worked [14:08] no problems [14:08] twb, he's been saying that several times. He pulls the plug because it takes so long, and he's angry because it runs at the next boot. [14:08] only this stupid fsck [14:08] breaks everything now [14:08] xperia_, the problem is that you never let it finish. [14:08] jo-erlend: that's dumb; he should be using ctrl+alt+del [14:08] it's not broken. [14:08] Even if it *does* finish, pulling the plug out will make it start again [14:09] xperia_: how big is the ext4 filesystem you're fscking? [14:09] jo-erlend: from my side of view fsck dont work it hangs [14:09] xperia_, let fsck complete, then configure the bootup check frequency to a higher number. [14:09] twb: 64 GB [14:09] OK, then it should take maybe ten minutes -- not one minute, and not one hour [14:09] Unless your controller is retarded, in which case all bets are off [14:10] I had some of those HP controllers and I had to throw them out for being too stupid to waste my time with [14:10] well i would wait even 2 Hours if at least something change on the screen but it dont happen anything [14:11] how long have you waited? [14:11] it is just one line all the time with the same numbers and in such case fsck should be CTR-C [14:11] well now for sure around 5 to 10 Minutes [14:11] xperia_: so just to confirm: you let fsck complete -- it printed "/dev/cciss/c0d0p1: clean" -- and you then type ctrl-alt-del and it did a fsck on the VERY NEXT boot? [14:11] yes [14:11] you forget however the numbers [14:11] xperia_, and you only have one partition? 
[14:12] That shouldn't happen. [14:12] 23:06 xperia_: now stick a "single" or a "break" in your boot script, so that you can get into a recovery shell. [14:12] xperia_, those numbers are printed when fsck completes. [14:12] I'd also bounce into the RAID BIOS and have it do whatever verification it can [14:12] ahh okay then i should try again ctrl alt del [14:13] what should i look in the bios [14:13] ctrl+alt+del is the right way to do a soft reboot [14:13] twb: I don't think it is an IP. should be on localhost. so a socket maybe. how can I find this out? [14:13] twb: I know; I'm asking anoying questions..... sorry. thank you so much for helping [14:14] Dunno, mysql is for people too lazy or dumb to use sqlite for toys and postgres for production. [14:14] Try #mysql [14:15] twb: haha, you are right. thanks again! [14:18] Anyone having issues starting SSH on 10.04 LTS Server? [14:18] !anyone [14:18] A large amount of the first questions asked in this channel start with "Does anyone/anybody..." Why not ask your next question (the real one) and find out? [14:18] twb: rebooted right now in the bios. the raid controller is a "hp smart array 5304-128 Controller" [14:18] ubottu: I did ask a question. [14:18] Error: I am only a bot, please don't think I'm intelligent :) [14:18] twb even. [14:19] SirStan: then: no, since I'm still running 8.04. [14:19] twb: Signal to noise. [14:19] SSH in 10.04 wont start, it bitches about '/dev/null' not existing, eminating from line 17 of the init script. [14:20] SirStan: i am having problems to login over ssh to the server [14:20] SirStan: see, that was the kind of information I expected in your first message. [14:20] twb: signal to noise. [14:20] Plonk. [14:21] xperia_: presumably a different server to the one that won't start at all. ;-) [14:21] SSH Err -> http://i.imgur.com/pYflC.png [14:21] SirStan: my problem is fsck it hangs allways there. 
will use now a live cd and try to hack the boot scripts but it looks like ubuntu server lts fucked up [14:21] time to downgrade to 8.04 eh? [14:21] twb: it is the same server. i just wrote what i have seen on the Screen [14:22] xperia_: I'm not familiar with RAID BIOSes, sorry. [14:22] man this drives me crazy. [14:25] SirStan: can you do this: "ls -ld /dev/null" (minus the quotes) [14:25] xperia_: I don't suppose you have a support contract with HP? [14:25] cybrocop: srwxr-xr-x 1 root root 0 2010-05-26 15:38 /dev/null [14:26] twb: buyed the server from ebay :-) [14:26] and no i dont have such a contract with HP [14:26] Anyway, you should still be bouncing through single/break and debugging the init process. [14:26] SirStan: Something is wrong with your install. /dev/null must be a character device and it should be there for every install. [14:26] cybrocop: Clean install from 10.04 LTS [14:27] SirStan: According to your output, /dev/null is a regular file. [14:27] SirStan: This is what the output should look like: crw-rw-rw- 1 root root 1, 3 2010-05-26 20:08 /dev/null [14:27] SirStan: A lot of people are running 10.04 LTS and I have done a "clean install" about 10 times this past week. Never seen this problem. [14:28] Dunno what to tell you. [14:28] Now you haev. [14:28] SirStan: Is it repeatable? Can you do a re-install as there seems to be a serious OS process. Did you get any errors during installation? [14:29] process -> problem [14:29] same thing :P [14:29] <_tydeas_> which is the lifespan of the ubuntu server? [14:29] Well, I meant 'process' in general terms, not pid 1 [14:29] ScottK: hey. Got a question regarding proceeding on that SRU for tacacs+ now that it's in Maverick [14:30] _tydeas_: you mean support lifetime; as in, when is it EOLed? [14:30] _tydeas_: LTS releases are supported for 5 years, other releases are supported for 18 months. 
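A quick check for the /dev/null symptom SirStan hit above: the verification line runs anywhere, and the repair commands (root only) are the standard recreation of the device node with major 1, minor 3:

```shell
# /dev/null must be a character device; "ls -ld" output should start
# with 'c' (crw-rw-rw-).  Check it:
ls -ld /dev/null
# If it has become a regular file, recreate the node as root:
#   rm -f /dev/null && mknod -m 666 /dev/null c 1 3
```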
[14:32] <_tydeas_> i am between installing centos or ubuntu server ( my collegues support it ) and i am searching to find out what to choose [14:33] *some packages* in LTS are supported for five years. [14:34] _tydeas_: are you a debian or redhat shop :) [14:34] e.g. most of gnome is probably only three years for LTS, even if you install it on a server, because it's considered part of desktop [14:34] _tydeas_: if everyone you know uses centos, it's best for you to use centos. [14:35] _tydeas_: even if distro A beats distro B, you won't enjoy A if you don't have any support for it. === rgreening_ is now known as rgreening [14:36] rgreening_: SRU or backport? [14:36] * ScottK doesn't recall details. [14:37] Is it possible to have samba setup so one user accesses the share, but then uses a different user account to write the files through that share? [14:38] ScottK: hmm... tacacs+ doesn't exist in Lucid, but I would like it to be added. [14:38] due to it being an LTS [14:39] cybrocop: reinstalling [14:39] ScottK: so, I would like to enlist you direction in getting me there correctly :) [14:39] MTecknology: you mean a single individual having two accounts? [14:39] SirStan: Please also verify your installation media and make sure that there are no errors on it. [14:39] cybrocop: iso, crc matches [14:42] ScottK: the software has no interaction with anything else, and only adds a missing service, the ability to provide AAA (authentication, authorization and accounting) services for various NAS devices (like those from Cisco), and it is a service currently missing from our offering. [14:43] ScottK: so, it is low to no risk at all, and is being actively maintained and in Debian and consequently in Maverick now. [14:43] twb: two samba accounts -> one system account [14:43] so we should get bug fixes and security updates fairly regularly if/when they occur. [14:44] MTecknology: I'm not sure. Ask #samba. 
[14:44] MTecknology: I *think* you can samba accounts that don't associate with any unix account at all. [14:45] e.g. point samba at LDAP and don't point pam at LDAP [14:46] ScottK: I have just built and uploaded to my PPA with a Lucid build. It builds cleanly under Lucid. And should via the PPA (buildds). After that, I'll setup and test it via a Lucid VM/Server I have. If all runs fine, I'll need some direction and a seal of approval to get in officially in Lucid. I have no issues maintaining this package BTW goin forward [14:46] :) [14:46] twb: alrighty, thanks [14:47] SirStan: What kind of HW do you have? [14:47] twb: jo-erlend: fixed the problem. last time i have copyed from ubuntu server that runs very well this line from /etc/fstab in the new installed servers fstabs file "/dev/sda3 /media/usbdisk auto user,rw,exec" and exactly this line caused the hanging of the server [14:48] commentd this line now out and it works like it should now [14:48] xperia_: that might be because /dev/sda points to the cciss array's first node [14:49] You should (almost) never address a USB block device by its device name, anyway. [14:49] Use UUID or LABEL [14:49] SirStan: https://launchpad.net/ubuntu/+bug/63031 <-- May be related. Once you reinstall, make sure to go through syslog. I wonder if there are any other errors before OpenSSH that may give you a clue. [14:49] Launchpad bug 63031 in udev "/dev/null: Permission denied" [Undecided,Fix released] [14:53] twb: thinked ubuntu will work the same way like on the other mashine that is why i have jut copyed. but okay thnaks a lot for your helpfull answers here [14:54] good need now to work on the server. lost about two hours. see you all later. bye [14:54] xperia_: try "blkid /dev/sda3" to get info about it [14:54] Hallo i am using a transparent squid. I can do web login sessions on the server, but not so on the clients. Do you have any hint for me? [14:55] c13: on which server? 
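Following twb's UUID/LABEL advice above, the removable disk's fstab entry would be keyed on the UUID reported by blkid rather than on /dev/sda3. The UUID below is a made-up placeholder; the no-fail option is a hedge so a missing disk no longer hangs the boot (on lucid-era mountall the equivalent keyword is nobootwait rather than nofail):

```
# sudo blkid /dev/sda3   -> prints e.g. UUID="..." TYPE="ext3"
# /etc/fstab entry using that UUID (placeholder UUID shown):
UUID=0f3ad9c4-0000-0000-0000-000000000000 /media/usbdisk auto user,rw,exec,nofail 0 0
```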
You mean when you do "w3m http://127.0.0.1" on the host running the httpd? [15:00] I can do web login sessions on the machine that runs squid, but not so on the clients from the network [15:01] That would be because the host running squid isn't subjected to transparent proxying [15:06] how to make it transparent, when I already have "(Insert Line with transparent)" in the conf [15:06] insert line: http_port 192.168.0.10:3128 transparent [15:07] what do you mean by "subjected to transparent squid" === jdstrand_ is now known as jdstrand [15:18] mathiaz, ping [15:18] https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/583542 [15:18] Launchpad bug 583542 in openssh "ssh server doesn't start when irrelevant filesystems are not available" [Undecided,New] [15:18] so you declined that for lucid because it doesn't have a fix. [15:18] Normal transparent proxying would be done with a -A PREROUTING ! -s 1.2.3.4 -p tcp --dport http -j DNAT --to 1.2.3.4 rule on a router. [15:18] but if someone came up with a magical fix, they would no longer be able to "nominate for lucid" is that correct ? [15:19] You obviously can't DNAT requests from 1.2.3.4, because then squid's own requests would be transparently proxied back to itself [15:31] New bug: #586398 in tomcat6 (main) "when updating from 9.10 to 10.04, the dependency between tomcat6 and jsvc is lost and tomcat won't start" [Undecided,New] https://launchpad.net/bugs/586398 [15:44] hi all [15:44] I can't start mysql :( [15:44] xoen: I believe there are some bugfixes coming for that [15:44] I'm really lucky :P! [15:45] anyone know a good server monitoring tool? [15:45] I don't use mysql-server from some weeks and when I need it doesn't work :P [15:46] binBASH: monitoring really has several pieces.. data collection, health checking, alerting .. which are you interested in doing? [15:47] SpamapS: Actually a tool where I can log into web frontend and have a list of servers and see the disk usage, etc.
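Spelled out, the PREROUTING approach described above looks roughly like this. 192.168.0.10 (the squid box) and eth0 are placeholders from the conversation, and the rule belongs on the router/gateway — which is exactly why the squid host itself never gets redirected:

```
# On the router (as root): redirect client HTTP to the transparent
# squid, excluding traffic that originates from the squid host itself:
iptables -t nat -A PREROUTING -i eth0 ! -s 192.168.0.10 \
    -p tcp --dport 80 -j DNAT --to-destination 192.168.0.10:3128
```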
[15:48] I've done this : $ sudo apt-get purge mysql-server-5.1 phpmyadmin so now my system *should* be clean...I try to reinstall it [15:48] binBASH: munin is good for that, ganglia too. [15:50] OK, it asked me the password for root mysql user but it can't be setted because it tell it's already setted... [15:50] $ sudo start mysql [15:50] start: Job failed to start [15:55] is there a place where I get more information? [15:57] Spamaps, to extend on binBash's question, will munin and ganglia allow for writing custom scripts. We manage several web resources, but for performance resources have local copies of those websites running. (when the user selects a special squid server, they're served with local copies.) Then we sync the content between local and remote daily. I need a way to make sure the content is properly synced all the time. [15:57] Spamaps, what do you think of Nagios? [15:58] xoen: check your mysql logs to see if you have any corrupt tables [15:58] cybrocop: Nagios is for health checking and alerting, which isn't what binBash wants. Nagios is *amazing* for health checking and alerting. [15:58] cybrocop: munin in particular is good when you have nagios, as it has built in support for feeding data into nagios [15:59] @zul I'm removing mysql-server and everything related (/etc/init.d/mysql /etc/mysql/ and /var/lib/mysql/) and installing it again.... [15:59] cybrocop: Nagios is really, really awful for data collection and instrumentation though.. and after years running nagios grapher, which tries to shoe-horn it in.. I think its better done by munin [16:00] smoser, jdstrand: UT? [16:00] the university of texas ? [16:00] smoser: it can mean a lot of things, but I thought "You there" was a popular interpretation.
:) [16:01] I'm here, but haven't had a chance to dive into the bug yet [16:01] well, one way or another "The Eyes of Texas" is now stuck in my head [16:01] which isn't going to make kirkland happy [16:01] smoser: I thought I was done with reporting on the bug but there are some new developments. [16:01] cybrocop: also to answer your other question, munin has a really simple plugin architecture that makes it very easy to write very powerful monitoring/data collection scripts [16:01] Spamaps: Thanks, I'll investigate it. [16:02] smoser: OK, so remember how I rebuilt node01... After a clean rebuild it worked. Then I purged apparmor and it stopped working. Then I reinstalled apparmor and it started working again. [16:02] cybrocop: as a side note, we're working on some new things to de-couple collection from instrumentation so that each node collects its own data, and things like munin just build the graphs.. :) stay tuned: https://blueprints.launchpad.net/ubuntu/+spec/server-maverick-monitoring-framework [16:03] spamaps: thanks. :) [16:03] @zul I've received an error : http://pastebin.com/AMfwY34c [16:04] @zul this appens when I've installed mysql-server-5.1 [16:04] SpamapS: I wonder if Zenoss is any good ;) [16:04] check your mysql tables [16:05] xoen: many IRC clients won't recognize that.. (such as irssi.. the one I'm using) .. you might want to try : instead of @ [16:05] binBASH: I've heard good things, but have never used it. [16:05] SpamapS: thank you :) [16:05] jdstrand/smoser: Well, today, this is what I did. Node01 was working fine 30 mins ago. All I did was: [16:05] cp -rp /var/lib/eucalyptus/instances/* /UEC/instances [16:05] rm -rf /var/lib/eucalyptus/instances [16:05] cd /var/lib/eucalyptus/ ; ln -s /UEC/instances [16:05] SpamapS: You know the last time I used monitoring software was big brother :) [16:06] apt has held back linux-generic and linux-image-generic packages, safe to update those if I've not done any kernel level tinkering? 
[16:06] zul: how can I check my mysql tables? [16:06] smoser/jdstrand: And the problem came back! [16:06] [Thu May 27 19:54:56 2010][001711][EUCAERROR ] libvirt: monitor socket did not show up.: Connection refused (code=38) [16:07] yeah. [16:07] its app armour [16:07] you can't do that [16:07] smoser: he removed apparmor [16:07] no. [16:07] xoen: you'll have to check google [16:07] jdstrand: I reinstalled it in order to make libvirtd happy again. [16:07] (purged, it broke, reinstalled it worked, then cp -rp ... ln [16:08] and it broke [16:08] ah [16:08] apparmour is denying you access because of the symlinks [16:08] cybrocop: then yes, apparmor necessarily realpaths symlinks [16:08] cybrocop: you need to update the profile [16:08] zul: But I've deleted everything (I believe) and reinstalled mysql-server... [16:08] smoser: so how can I make the instances live on a RAID partition [16:08] !google [16:08] While Google is useful for helpers, many newer users don't have the google-fu yet. Please don't tell people to "google it" when they ask a question. [16:08] cybrocop, i'd suggest mounting that directory there [16:09] alternatively i think you can configure where the path /var/lib/eucalyptus/instances lives [16:09] @zul ahahha it's cool, a bot defended me :P [16:09] smoser: OK. Have to run now. I'll try this. [16:09] cybrocop: i reinstalled, and now /dev/null is a character device. . wtf.. [16:09] actually, it would be better to see the dmesg [16:10] (ops I used the @user again :() [16:10] since the driver should take care of the realpathing and adjust the profile accordingly [16:11] xeon: you might also want to join #mysql, maybe they have a clue [16:11] it is probably virt-aa-helper that is doing the denying [16:11] (come to think of it) [16:11] wise_crypt: OK, I'll try in #mysql... :) [16:12] jdstrand: Btw. I checked the virt-manager machine cloning again.
[16:13] locally I can clone a machine, seems like I just can't clone remote [16:14] wise_crypt: of course mysql doesn't work when I need it :P [16:15] xeon: eh ? u should register your nick then [16:15] !hi > xeon [16:16] binBASH: that sounds like a non-apparmor issue.... I advise filing a bug. Please check kern.log on the local host and remote for any apparmor messages and add them to the bug [16:16] !hi | xeon [16:16] xeon: Hi! Welcome to #ubuntu-server! Feel free to ask questions and help people out. The channel guidelines are at https://wiki.ubuntu.com/IRC/Guidelines . Enjoy your stay! [16:18] smoser: hi! [16:19] smoser: hm - you're right - the bug cannot be nominated for lucid anymore :/ [16:19] that sucks [16:19] smoser:hm - actually no [16:19] smoser: I can still accept it [16:20] but i can't nominate it [16:20] smoser: however I think it can't be *nominated* anymore [16:20] :) [16:20] smoser: right - using the nominate for release link? [16:20] jdstrand: For me it more looks like the virt-manager doesn't watch for the disk image on remote server, but on the local host it's running at. [16:20] right. lucid will not appear (or karmic) [16:21] smoser: ok - I'll take this into account then [16:21] smoser: should I accept that specific bug for lucid now? [16:21] smoser: https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/583542 [16:21] Launchpad bug 583542 in openssh "ssh server doesn't start when irrelevant filesystems are not available" [Medium,Triaged] [16:21] no [16:21] this was a theory thing [16:21] :) [16:21] smoser: ok - great [16:22] smoser: thanks for letting me know [16:22] smoser: I'll have to take this one into account when updating the SRU process [16:25] btw. someone knows what this iptables message is? http://www.pastie.org/980085 [16:28] how can I get kernel modules to get installed at /lib/modules ???? is there any package? [16:29] why does postfix have its own hosts resolve.conf, nsswitch.conf, etc? 
[16:30] jorgelinux: by default they are in /lib/modules/ [16:30] I mean those that are stored in /var/spool/postfix/etc [16:30] simplexio, I don't have any files in /lib/modules [16:32] jo-erlend: because it chroots into /var/spool/postfix, and those files are kinda needed to do things... [16:32] mysql-server package installs something out of /etc/mysql /var/lib/mysql and /etc/init.d/mysql? [16:32] jorgelinux: hmm. actually i dont know which package installs all modules. there are linux-image, linux-backports-modules and linux-restricted-modules packages [16:32] lamont, ah.. Thanks. :) [16:34] hggdh: i have an easy fix for you on your loop issue [16:34] hggdh: do you have a cloud where you can test this now? [16:39] hey after I modify the rate parameter for drbd how is it supposed to speed up [16:39] i have reloaded the drbd daemon config [16:40] oh wait nm it got faster :) [16:40] look at that hawt vm action http://screencast.com/t/M2M3ODM3ZDIt [16:41] is there a way to monitor a /proc file continuously [16:44] snmpd [16:45] webPragmatist: do you want to do a health check or collect stats? [16:45] webPragmatist: there are drbd monitor scripts for nagios on nagiosexchange.org (worst domain name ever btw) ... [16:45] SpamapS: got it using watch [16:45] oh just for a while, yeah watch is perfect. :) [16:46] xoen: are you getting errors in /var/log/syslog? [16:46] * zul lunches [16:49] smoser: sorry I had to run... But, if it is apparmor shouldn't it leave logs somewhere? [16:49] thats why jdstrand was asking about dmesg [16:49] SpamapS: yes http://pastebin.com/za24F8fQ (this is grep -i mysql /var/log/syslog) [16:49] but your cp and then fail surely indicates that [16:50] cybrocop_, [16:50] INSTANCE_PATH="/var/lib/eucalyptus/instances/" [16:50] http://manpages.ubuntu.com/manpages/lucid/man5/eucalyptus.conf.5.html [16:50] is how you would put that elsewhere.
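Putting the INSTANCE_PATH advice above into concrete form. This is a sketch under assumptions: the config file path (/etc/eucalyptus/eucalyptus.conf) and the target partition (/UEC/instances) follow the UEC setup in this conversation; adjust both. The bind-mount alternative is the symlink-free version of what cybrocop tried, since AppArmor realpaths symlinks and denies the result.

```shell
# Option 1: tell eucalyptus where instances live (see eucalyptus.conf(5)).
sudo sed -i 's|^INSTANCE_PATH=.*|INSTANCE_PATH="/UEC/instances/"|' \
    /etc/eucalyptus/eucalyptus.conf

# Option 2: keep the default path but back it with the big RAID partition.
# A bind mount, unlike a symlink, does not trip AppArmor's realpathing.
# sudo mount --bind /UEC/instances /var/lib/eucalyptus/instances

# Either way, restart the node controller afterwards.
sudo restart eucalyptus-nc 2>/dev/null || true
```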
[16:51] is there an offsite third party backup like crashplan for ubuntu (that's not a desktop gui, i'd actually use crashplan if not?) [16:51] smoser: OK. I can fix the instance path. [16:51] or what would you guys suggest [16:51] I don't feel like running another server to keep backups [16:52] cybrocop_, if that indeed fixes your problem , please summarise and close the bug [16:53] smoser: that doesn't close the root cause of teh bug. Yesterday, when I reported the bug, I wasn't using symlinks. [16:53] thats what i thought. [16:53] smoser: And yesterday, I hadn't reinstalled apparmor.. It was in disabled state so it never should've prevented me from running my instances. [16:54] smoser: I'm now trying to run the instance again to see if it leaves any logs or dmesg. [16:54] kirkland: we can use the test rig (right now on topo2. But there is not much space available there [16:55] kirkland: about 55G in total [16:55] smoser: I'm assuming that in normal (non-buggy) operation, it should leave something in the syslog that it prevented kvm from following symlinks.. correct? [16:56] i dont know. ask jdstrand for why that would or would not happen. i know that it doesnt afaik. [17:03] smoser/jdstrand: My bad. As opposed to yesterday, this time there are logs indicating the operation was denied. Here is the dmesg: http://slexy.org/view/s2HefYKUan [17:11] smoser: setting the INSTANCE_PATH variable worked. Thanks. [17:12] Any of you tried to backup using davfs or the like? With maybe rdiff-backup or something eqiuvalent? Suggestions? [17:36] New bug: #586442 in mysql-dfsg-5.1 (main) "package libmysqlclient16 (not installed) failed to install/upgrade: intentando sobreescribir «/usr/lib/libmysqlclient.so.16.0.0», que está también en el paquete mysql-cluster-client-5.1 0:7.0.9-1ubuntu7" [Undecided,New] https://launchpad.net/bugs/586442 [17:40] rgreening: File a bug against lucid-backports, say that the package builds, installs, and runs, and then give me a ping. 
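One non-GUI answer to the offsite-backup question above is rdiff-backup, which tschundeee's later question also mentions. A hedged sketch; the host and paths are placeholders, and it assumes SSH access to the remote box with rdiff-backup installed on both ends:

```shell
# Incremental backup over SSH: the destination holds a current mirror plus
# reverse diffs, so old versions of files remain restorable.
rdiff-backup /home/user backup.example.com::/backups/home-user

# Bound remote disk usage by expiring increments older than four weeks:
rdiff-backup --remove-older-than 4W backup.example.com::/backups/home-user
```

This is server-to-server, so it fits the "I don't feel like running another server" constraint only if you rent remote storage that speaks SSH.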
[17:42] ScottK: ok. I'll be testing it tomorrow, most likely, so prob no ping today. ty for the assist [17:44] * zul returns [17:44] i've upgrade from karmic to lucid. things appear to have generally gone well, but now when i do dpkg-reconfigure grub-pc, it's somehow decided i've "chose not to install grub to any devices". why? [17:47] RoAkSoAx: you around [17:54] I am trying to pxe image using a local apt-mirror. Kickseed file loads, gets dhcp IP, but then throws this error when i starts looking for the packages to install http://dpaste.de/NU2t/ [17:55] I have tried getting the netboot files from the newest ubuntu-8.04.4-server-amd64.iso as well as the newest ubuntu-8.04.4-alternative-amd64.iso [18:02] When I do sudo csync2 -k /etc/csync2_ssl_cert.key it just hangs? [18:12] webPragmatist: what was the point of doing that in #ubuntu? [18:13] lots [18:14] webPragmatist: I can totally see how spamming the channel and getting banned would benefit you [18:14] that's the fun thing about the internet. [18:28] I'm looking at installing Sobby or Infinoted on my server.. both seem the same.. what's the difference? [18:30] Hallo i want to use the squid over firestarter. how can i configure the firestarter to accept the squid that all the traffic goes over the transparent squid? [18:33] Scunizi: The infinote one is a newer version with a different on the wire protocol. Gobby 0.5 and Kobby are compatible with it. === tschundeee_ is now known as tschundeee [18:35] I want to swap from debian server to ubuntu server give me some supporting reasons :o [18:36] ScottK: ok.. thanks.. so kobby on the kde machines and gobby on the gnome machines and infinote on the server... [18:36] Yes. [18:37] once I ap-get infinote on the server.. will kobby/gobby find it automatically on the LAN ? [19:02] jiboumans: ping [19:02] hggdh: on calls, mail is probably best [19:02] jiboumans: k [19:03] hey [19:03] any of you guys use csync2 [19:04] does csync2 -k just return a small key? 
[19:04] mine ends up just getting stuck and has a file like http://screencast.com/t/YTc0OTA0MTEt [19:05] Scunizi: No. When they are started, they have to say where the session they are joining is located. [19:10] hrm [19:10] maybe my entrophy sucks? [19:10] entropy? [19:10] it's taking forever to make this key [19:11] oop [19:11] http://lists.linbit.com/pipermail/csync2/2005-December/000063.html [19:19] ahahaha this is a new one [19:20] i guess this is what happens when you cat /dev/urandom http://screencast.com/t/NzE4Yjg5N2Et [19:23] hi all, after hours spent fixing a problem with my mysql server now I've installed phpmyadmin but I can't access it [19:23] if I go to http://localhost/phpmyadmin I get error 404 [19:23] I'm using Ubuntu 10.04 [19:24] I'm going crazy :P [19:24] xoen: not sure what the port number is but it's typically written like this.. http://localhost: Scunizi: usually after I installed phpmyadmin just worked without port number (so port 80 I guess) [19:25] xoen: so did you try http://localhost without /phpmyadmin? [19:26] that would be port 80 [19:26] Scunizi: apache works [19:26] Scunizi: I've also configured a vhost with a Zend Application inside of it and it works :) [19:26] yes.. but if phpmyadmin is on port 80 then there is no need for /phpmyadmin at the end of the address unless it's in a subdirectory of /var/www [19:27] Scunizi: I don't know why but I've never seen phpmyadmin in /var/www but always worked === ersoy is now known as \z [19:28] xoen: I don't really use phpmyadmin so I'm not aware of the specifics on how to get to the admin page.. just guessing here.. [19:28] Scunizi: PhpMyAdmin files should be in /usr/share/phpmyadmin [19:28] Scunizi: Don't worry :) [19:31] I've got infinoted (gobby server) installed on a local server and have connected to it with a windows box and my kubuntu box.. user highlighting works on the windows box with gobby but is not working on the kubuntu box with kobby.. Any ideas why?
[19:32] what should cat /proc/sys/kernel/random/entropy_avail read [19:32] and how can i increase this so my stuff can generate keys [19:43] anyone here using Hardy on a Xen DomU? [19:44] why is the only server version I find named amd64bit [19:44] what about intel [19:45] all the repositories are the same, so you can take 32 bit and remove the desktop + install server bits [19:45] amd64 is == EM64T - its intel or amd 64 bit, but not IA-64 (thats different again) [19:46] will amd64bit install and run properly on intel64bit? [19:49] depends what you mean by intel64bit [19:50] its not, on its own, a well defined term. You might be meaning itanium, or you might be meaning EM64T [19:53] elb0w: yes, it will be fine [19:53] elb0w: we run all intel processors in my datacenter, amd64 just means its the standard 64 bit image, not that its meant for amd chips [19:53] elb0w: I also agree its a stupid naming standard [19:55] yeah [19:55] meant space [19:56] sorry [19:56] wasnt sure we have all intel boxes here [19:56] and were wiping debian out [19:56] to put ubuntu [19:56] didnt want it to be a long night [19:58] can someone help me with phpmyadmin? http://localhost/phpmyadmin doesn't work ("The requested URL /phpmyadmin was not found on this server.") [19:59] sounds like an httpd issue [19:59] xoen: do you have access to the httpd log file? [19:59] /var/log/httpd/error_log [20:01] xoen: can you ls -l of this dir /etc/apache2/conf.d [20:02] is there a phpmyadmin sym link in there? [20:02] now I check...
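Answering the entropy_avail question above with a runnable check. The thresholds in the comments are rough rules of thumb, and the daemon suggestions are standard packaged tools, not something prescribed in the log:

```shell
# Inspect the kernel entropy pool the channel is discussing.
# A healthy pool reads in the hundreds to low thousands of bits; a value
# stuck near zero means reads from /dev/random (i.e. key generation)
# will block, which is why csync2 -k appears to hang.
avail=$(cat /proc/sys/kernel/random/entropy_avail)
echo "entropy_avail: $avail bits"
# On a headless server the usual fix is an entropy daemon such as
# rng-tools or haveged (both packaged), not catting /dev/urandom,
# which consumes pool entropy rather than adding any.
```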
[20:02] learn mysql from chell imo [20:02] :o [20:02] shell* [20:02] don't get into this bs argument its been done [20:02] hahaha [20:03] I dont argue [20:03] I give advice [20:03] phpMyAdmin is a nice tool for creating databases and tables [20:03] I guess its a long term thing [20:04] those will hurt you in the long run [20:04] apache works [20:05] what about /etc/apache2/conf.d/ [20:05] is there a phpmyadmin.conf symlink in there [20:05] I've also a vhost with a Zend Framework application and it works [20:05] xoen, ls -l /etc/apache2/conf.d/ [20:05] seriously [20:05] Hypnoz: no there is no phpmyadmin.conf symlink [20:06] how did you install phpmyadmin? [20:06] repos? [20:06] elb0w: yes I've installed with apt-get [20:07] what command did you give? [20:08] elb0w: sudo apt-get install phpmyadmin [20:08] do this [20:08] Hypnoz: http://pastebin.com/nMkYxxZt [20:08] echo 'Include /etc/phpmyadmin/apache.conf' >> /etc/apache2/apache2.conf [20:08] xoen: cd /etc/apache2/conf.d/ && sudo ln -s ../../phpmyadmin/apache.conf phpmyadmin.conf && sudo /etc/init.d/apache2 restart [20:09] ya elb0w's way will work too i guess [20:09] my way is just how it was auto-setup on my machine [20:10] Hypnoz: elb0w: so it's better to do as it was auto-setup? [20:10] it doesnt matter [20:10] theyll both accomplish the same [20:10] xoen: you should also try "grep -i phpmyadmin /etc/apache2/*" [20:10] to see if anything comes up [20:10] yeah [20:10] also check that /etc/phpmyadmin exists [20:11] Hypnoz: no result from grep [20:11] does /etc/phpmyadmin exist? [20:11] then ya, do either method, and restart apache2 [20:11] elb0w: /etc/phpmyadmin exists [20:12] then do sudo echo 'Include /etc/phpmyadmin/conf' >> /etc/apache2/apache2.conf [20:12] or how hypnoz showed you [20:12] then sudo /etc/init.d/apache2 restart [20:12] oh wait [20:12] dont do what I said [20:13] sudo echo 'Include /etc/phpmyadmin/apache.conf' >> /etc/apache2/apache2.conf [20:13] forgot apache.conf [20:13] lol.
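The two fixes suggested above, cleaned up into a working sketch. One caveat the chat version glosses over: `sudo echo ... >> /etc/apache2/apache2.conf` fails, because the `>>` redirection is performed by the unprivileged shell, not by sudo; `tee -a` under sudo is the usual workaround.

```shell
# Way 1: append an Include to apache2.conf (tee, not a sudo'd redirect).
echo 'Include /etc/phpmyadmin/apache.conf' | \
    sudo tee -a /etc/apache2/apache2.conf

# Way 2: the conf.d symlink the package normally sets up.
# sudo ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf.d/phpmyadmin.conf

# Either way, reload the config:
sudo /etc/init.d/apache2 restart
```

Do one or the other, not both, or Apache will read the phpMyAdmin config twice.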
[20:14] elb0w: I LOVE YOU [20:14] Hypnoz: I LOVE YOU [20:14] i guess it worked [20:14] (in alphabetical order :P) [20:14] <3 [20:15] yes it worked. Really I love you, I thinked about killing myself ahahah :P [20:15] (just kidding) [20:15] when you work with open source software every day how can you not consider that at times [20:15] Hypnoz: it's not really a stupid naming standard - the instruction set was defined by AMD, not Intel [20:16] then it's just amd being egotistical [20:16] xoen : :) [20:16] and confusing people by not calling it x86_64 or something [20:16] Hypnoz: I feeled so stupid having problem with this thing that worked everytime for me :P [20:17] past tense of feel is felt [20:17] i know english is confusing as hell :( [20:18] Hypnoz: thank you, sorry for my english, I'm talking just because I need phpmyadmin ahahah [20:18] there is no reason for half the crap that goes on in english [20:18] http://www.beautifulperth.com/dumbenglish.html [20:19] wise_crypt: I love you too, you know :P? === dendrobates is now known as destro [20:19] http://www.beautifulperth.com/comp.html [20:20] i feel so much smarters [20:20] Some Companies will pay you to serf the Internet.  It's not a "get-rich" scheme, but you can earn a little extra spending money. [20:20] If you pay by the hour to use the Internet, forget this money making idea.   If you have unlimited Internet access, here's a few sites to get you started. [20:21] OK, next problem :P [20:22] what's this? "$cfg['Servers'][$i]['tracking'] ... non OK" (In PhpMyAdmin) [20:23] now you ask in #PHP [20:23] yikes [20:23] ##php [20:23] And how can I choose to use InnoDB? [20:23] xoen: sudo apt-get update && apt-get install php5 phpmyadmin [20:23] maybe there is something out of date? [20:24] Hypnoz: OK [20:24] sudo should come before apt-get install too [20:24] Hypnoz: sudo apt-get install --reinstall? [20:24] xoen: I don't see how that could hurt anything [20:25] Hypnoz: So I put --reinstall too :P? 
[20:25] make sure you do sudo apt-get update [20:25] I haven't used that before, but if that works then sure [20:25] huh? [20:25] innodb is in the standard mysql packages [20:26] alter table engine innodb [20:27] then tune mysql to share more memory to innodb than myisam [20:28] RoyK: how can I see which engine is used from PhpMyAdmin? [20:28] hes using php my admin RoyK [20:28] xoen: my.cnf shows default engine [20:28] is there an easier way to generate entropy than typing a bunch of crap [20:28] I think it usually is myisam, which sucks [20:29] webPragmatist: use /dev/urandom instead? [20:30] Hypnoz: just symlink /dev/random to it? [20:30] or whats the trick [20:30] erm [20:30] or maybe if you did a "cat /dev/urandom" that would generate entropy [20:30] don't do that [20:30] tach [20:31] RoyK: I've seen /etc/mysql/my.cnf and there is nothing I think, there is a comment that say InnoDB is the default with 10 MB bla bla bla [20:31] Hypnoz: in the contrary: that would use up what has been accumulated [20:31] RoyK: yea i've done that lol [20:32] jacked up my terminal [20:32] guntbert: wrong, /dev/urandom doesn't use entropy only /dev/random does [20:32] xeon : j #phpmyadmin [20:32] OK OK I will try for my own :) [20:32] webPragmatist: within a computer there is no such thing as "randomness" - thats where *you* come in :) [20:32] xeon : :) [20:33] guntbert: well [20:33] guntbert: i've just been pasting random crap into the terminal [20:33] is there a better way to do this lol [20:33] xoen: see /usr/share/doc/mysql-server-5.0/examples/my-innodb-heavy-4G.cnf.gz for a hint of the innodb tunables [20:34] webPragmatist: in another window (ssh session or ctrl-alt-F2) do "cat /dev/urandom" i'm curious if that will work [20:34] Hypnoz: I did [20:34] RoyK: OK, PhpMyAdmin make me choice the engine when I create a table. 
I guess I need to stop for today :) [20:34] Hypnoz: do i have to paste it though [20:34] back in [20:34] after i cat [20:34] Hypnoz: I think you are wrong here (from wikipedia: but the output may contain less entropy than the corresponding read from /dev/random) [20:34] webPragmatist: nope just let that run on the system [20:34] okay [20:34] hey guys I go, thank you very much for the help [20:35] bye [20:35] guntbert: hmmm ... maybe it uses some entropy... [20:35] xoen: what I meant was how you tune the mysql server - I don't think you can do much of that from phpmyadmin [20:35] webPragmatist: the generated entropy usually doesn't come from the characters but from your action on the keyboard [20:35] RoyK: I don't plan to tune nothing for the moment :) [20:35] how hard is it to setup carp? do I just give my systems one ip and add some config that makes them request another ip? [20:35] what does ctrl+alt+f2 do? [20:35] xoen: iirc mysql is set to use some 16MB RAM for innodb by default, perhaps a little more, and quite lower for innodb - you need to tune it up to make innodb good [20:36] change run level? [20:36] elb0w: TTY2 [20:36] ah [20:36] o cool [20:36] i never use this [20:36] alt+left/right works well [20:37] elb0w: you need the ctrl+alt combination if you are in X [20:37] gt [20:37] Hypnoz: well the entropy goes up and then goes back dow [20:37] Hypnoz: is the keygenerator like using the entropy up [20:37] xeon : http://www.indowebster.com/MySQL_Bible.html [20:37] anyone here using lvm on ubuntu servers.. especially those running databases (MySQL, Postgresql). Does it have any advantage ? 
[20:37] i'm no expert at this [20:38] user_: one advantage would be the ability to grow the volume size as the database grew I suppose [20:39] Hypnoz: yep that works btw [20:39] cating urandom [20:39] webPragmatist: of course it is using it up - within a computer there is no such thing as "randomness" (repeating myself :-) [20:39] Hypnoz, I'm worried about the performance impact of using lvm [20:40] ahhhhhhhhhhhhh [20:40] i catted for too long [20:40] RoyK: how can I choose innodb engine directly on create table? (last question for today, I promise :P) [20:40] RoyK: the MySQL SQL syntax [20:42] xoen: the default engine is set in my.cnf [20:42] xoen: but I don't know if that applies to phpmyadmin [20:43] It should be btw "CREATE TABLE name (...) ENGINE innodb;" [20:43] yes === unreal_ is now known as unreal [20:43] but then, if you set the default to innodb, you won't need to specify engine [20:43] !carp [20:43] unless you want myisam, that is [20:43] MTecknology: fish! [20:44] RoyK: :P - I'm trying to learn about it but the docs are evasive.. [20:45] MTecknology: http://www.fishbase.org/search.php [20:45] RoyK: lol.. [20:46] RoyK: I understand what do you mean but the problem is I need to choose innodb because I need transactions and I have a file in which there is the SQL code to create the schema (for a ZF application). So for this reason I need to be explicit (sorry guys for the explicit language :P) [20:48] xoen: generally you should choose one of the engines and tune mysql for that alone [20:48] RoyK: puppet will replace libvirt in 10.10? [20:48] MTecknology: asking me? [20:48] RoyK: Yes but I'm paranoic so I prefer make things idiot proof :P [20:49] RoyK: ya [20:49] xoen: see the innodb config from the docs, perhaps tune it down if you don't have 4 gigs of memory (or if your db is smaller or using memory for other things). 
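The InnoDB advice in this exchange, made explicit. A sketch under assumptions: `mydb` and the table are placeholders, and the my.cnf key shown is the server-wide default-engine setting rather than anything phpMyAdmin-specific:

```shell
# Per-table: works regardless of the server default, which is what the
# ZF schema file discussed below needs.
mysql -u root -p -e 'CREATE TABLE t (id INT PRIMARY KEY) ENGINE=InnoDB;' mydb

# Converting an existing MyISAM table in place:
mysql -u root -p -e 'ALTER TABLE t ENGINE=InnoDB;' mydb

# Server-wide default, in /etc/mysql/my.cnf under [mysqld]:
#   default-storage-engine = InnoDB
```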
change the tables to innodb [20:49] MTecknology: no idea :) [20:50] RoyK: but you're smart - you should write up some info in the serverguide for setting up carp :) [20:51] Hypnoz: i think it only helps you so much [20:51] the cat /dev/urandom [20:51] Bye guys and thank you again :) [20:52] MTecknology: sorry - no idea about carps unless they swim [20:53] webPragmatist: believe it or not - the biological factor is not replaceable for getting randomness (you could trace the movements of ants too) [20:53] guntbert i don't really care about entropy this is a testing server [20:53] i just want a damn key lol [20:53] RoyK: I want two redundant servers - carp made the most sense [20:53] RoyK: know of anything better? [20:54] webPragmatist: and where is the problem in typing a little pattern ? [20:54] we use drbd and pacemaker [20:54] guntbert: watcha mean [20:55] !info pacemaker [20:55] pacemaker (source: pacemaker): HA cluster resource manager. In component universe, is optional. Version 1.0.8+hg15494-2ubuntu2 (lucid), package size 786 kB, installed size 2884 kB [20:55] guntbert: atm cat /dev/urandom wasn't working [20:56] webPragmatist: sorry if I misunderstood - the usual way to get randomness for the key is to type away at the keyboard in a non-determined way, its the pattern with time that is used, not the characters themselves [20:57] webPragmatist: start a find / -type f -exec md5sum {} \; [20:57] RoyK: how does pacemaker work? can it give a set of computers a certain ip that they share? [20:58] MTecknology: yes, read the docs :) [20:58] they are quite extensive [20:58] RoyK: thanks for that :D [20:58] it's service-oriented, not host-oriented, but it works well [20:58] !pacemaker [20:58] !info pacemaker [20:58] pacemaker (source: pacemaker): HA cluster resource manager. In component universe, is optional.
Version 1.0.8+hg15494-2ubuntu2 (lucid), package size 786 kB, installed size 2884 kB [20:59] I was just looking for a wiki page :P [20:59] google for pacemaker drbd ubuntu [20:59] https://wiki.ubuntu.com/ClusterStack/LucidTesting [21:01] RoyK: thanks - this looks awesome [21:08] hi guys [21:09] the speed of the internet connection into my instances in UEC is very low. What can i do to speed up that? [21:10] RoyK: so if i understand this right... I setup some servers - setup a key for those clusters - then pacemaker handles the start/stop of services on that cluster and the IP's the cluster has - is that about accurate? [21:19] if i have an EBS attached to an instance and then reboot the instance, the EBS is still in use when i run euca-describe-volumes but it cannot be used from the instance [21:19] the volume is not shown with fdisk -l and cannot be mounted [21:20] whats the diff between inetd and xinetd [21:20] besides the volume cannot be detached [21:21] somebody can help me? [21:22] MTecknology: that's quite correct, yes [21:24] okay nm [21:25] interesting [21:25] RoAkSoAx: wake up lol [21:26] RoAkSoAx: i'm interested in knowing why we are using xinetd for csync2 [21:26] instead of the standard inetd [21:26] is there something inetd can't do [21:45] ForceType text/plain in my webdav configuration isn't working. It's always rendering the php which I want to edit in plain text. Anyone using webdav for php development on a local server? [21:49] hey all, i'm trying to setup a server that can send and receive from a couple of different accounts and be accessible by imap to local users, is there any documentation on this? [21:58] can someone tell me why the internet connection speed in my instances is so low? [22:01] any developers here who work on a local server for development. I set up webdav but it's getting tedious. What is your solution for remote php projects?
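The "shared IP" that MTecknology asks pacemaker about is usually modeled as a floating-address resource. A hedged sketch, not a full cluster config: the address 192.168.0.100, interface eth0, and resource name are placeholders, and this assumes the crm shell from the pacemaker package discussed above.

```shell
# A floating IP the cluster moves between nodes: whichever node currently
# runs this resource answers on 192.168.0.100, and pacemaker relocates it
# on failure. Run on one node; the config replicates cluster-wide.
crm configure primitive failover-ip ocf:heartbeat:IPaddr2 \
    params ip=192.168.0.100 cidr_netmask=24 nic=eth0 \
    op monitor interval=10s
```

Services (and the drbd-backed storage mentioned above) are then grouped or ordered relative to this resource so they follow the address.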
=== veebull_ is now known as veebull [22:05] question about csync2 [22:06] when generating the keys… do i only make one key using csync2 -k [22:06] and share it between all the nodes [22:06] and then on each node i need certs [22:06] and also do i need to register the certs (how does it know where they are really?) [22:06] tomsdale: use LAMP server and then use an IDE with ftp connection to edit the files [22:07] tomsdale: we use svn and just svn update [22:07] but thats not easier than webdav to setup === geneticx is now known as wins2ts [22:08] tyska: I guess the classics are still the best. ftp has no file locking though. [22:08] i'd suggest webdav [22:08] with webdav I have the problem that it renders my files. I already included ForceType text/plain in the VHost but it's not working. [22:09] tomsdale: if you want revision control, use SVN [22:09] otherwise I think it would be perfect for my needs. [22:10] yeah - svn. I will have to get up to speed with that. [22:11] tomsdale: well even if you get up to speed with svn you would be behind the vcs curve [22:11] as people are now using hg or git [22:11] Where can I create domains within my system so that I can map them to specific IPs? I.e. for a private network [22:12] Well, my project hasn't yet reached the size of the linux kernel :-) And svn seems to be included in my ide (netbeans) [22:12] webPragmatist: can you view .php files in plain text? [22:12] huh? [22:12] tomsdale: svn you checkout a local copy [22:12] and it syncs [22:13] with the repository [22:13] when you commit the changes [22:13] zul: around? [22:13] then you update your /var/www with the latest version [22:13] it's pretty slick… i've even made it auto update in some instances when someone commits [22:13] is that fast enough if you do small changes in your php code and want to see the results in the browser.
[22:14] tomsdale: uhm we have done that but it's not its intention [22:14] there's also a webdav plugin svn (apache) to auto commit when you make a file change… but it fubs up your repo because you don't get commit messages [22:15] tomsdale: the idea though is you should have a local copy of the website [22:15] I think that's why I wanted to stick with direct editing for the moment. maybe svn from a devserver - staging server at some point [22:15] well [22:15] it's not very professional what you are doing is all i will say :P [22:15] your going to break something [22:15] you're* [22:16] has been working so far - we are just a 2 man show. [22:16] well i'm a 1 man show… … and I wouldn't dare do it [22:16] but maybe my website's a bit more mission critical [22:16] who knows [22:17] ok, you are editing a live environment. That's just the development version. [22:17] tomsdale: with svn you program locally, and when all its fine, you sync with the server [22:17] tomsdale: you can have two remote servers… [22:17] one live one dev [22:17] how can I install gnome on ubuntu-server as light weight as possible [22:17] but use webdav [22:17] and locking [22:18] it's not ideal but if you don't want a vcs its the way to go [22:18] so you use webdav to upload your changes. [22:18] not me [22:18] it goes like this... [22:19] local -> svn repo (hosted on staging server) -> staging server (check on same hardware/setup) -> svn repo (could make changes directly on staging if i wanted to) -> live server [22:19] the svn repo to staging is automatic… and the 4th step is optional [22:20] so it's really three steps (or two if you don't check staging) [22:20] ok I get it. how about databases? do you replicate them? [22:20] yea… i have a local copy of the db [22:21] which just has some of the data [22:21] staging reads the db off the live server [22:21] you could read the live db [22:21] depending on your mission criticalness [22:23] hm - I feel I got to rethink my setup :-).
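The checkout/commit/update cycle described above, end to end, against a throwaway local repository (file:// URL) so nothing here touches a real server. The /tmp paths and file name are placeholders:

```shell
# Create a scratch repository and a working copy of it.
rm -rf /tmp/demo-repo /tmp/demo-wc
svnadmin create /tmp/demo-repo
svn checkout -q file:///tmp/demo-repo /tmp/demo-wc
cd /tmp/demo-wc

# Edit locally, then record the change in the repository.
echo '<?php echo "hello";' > index.php
svn add -q index.php
svn commit -q -m 'first page'

# A second working copy (your designer, or the staging checkout) picks the
# change up with:
svn update -q
```

In the workflow above, the "repo -> staging is automatic" step is typically a post-commit hook that runs `svn update` in the staging checkout.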
with the development local it probably also speeds up things
[22:25] but if I work together with my designer - she probably will work on the staging server directly. How do I get the changes she does onto my local machine? rsync?
[22:26] tomsdale: the webPragmatist solution is great, local > SVN > development server > SVN > production server
[22:26] tomsdale: svn update
[22:27] tomsdale: when you modify any file locally, you run svn update and everything will be synchronized
[22:27] tomsdale: with a dummy what you can do is run davsvn or whatever
[22:27] tomsdale: which will work like webdav and just "commit" changes automatically for her/him
[22:27] you won't have commit messages though which sucks
[22:27] tomsdale: if someone modifies and commits some file to the server, you just run svn update again
[22:27] so it makes a real mess of your repo
[22:28] tomsdale: in fact the commands are svn commit and svn update
[22:28] tomsdale: so your designer could make changes directly on the development server with davsvn
[22:29] depending on how likely it is for her to flub up your repo you might also consider making a "branch" of your current website
[22:29] If I have not invested anything in svn, is it worth investigating git, or is svn better supported and "good enough"?
[22:29] which is a copy of it basically that only she makes changes to
[22:30] tomsdale: well… svn has been around longer… but much of the stuff is moving to git… they are only similar in nature… git is a dvcs… meaning you keep the full copy of the repo locally
[22:31] we use svn because this was 3 years ago that we set it up
[22:32] for me svn is a bit less complicated… but that's because i've been using it forever
[22:32] it's most likely considered a dated vcs now
[22:33] and you will be scoffed at for using it :)
[22:33] well, I'm just redoing my development server from scratch - now is the time to bring in change :-)
[22:34] anyone having success with mysql on 10.04?
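The "branch for the designer" idea above is cheap in Subversion because a branch is just a server-side copy. A sketch of the usual commands, with a made-up repository URL:

```shell
# Hypothetical URLs: create a branch only the designer commits to.
svn copy http://svn.example.com/site/trunk \
         http://svn.example.com/site/branches/design \
         -m "Branch for design work"

# She checks out and works against the branch:
svn checkout http://svn.example.com/site/branches/design site-design

# Later, pull her changes into a trunk working copy and commit:
cd site-trunk
svn merge http://svn.example.com/site/branches/design
svn commit -m "Merge design branch into trunk"
```

If the merge goes wrong, only the trunk working copy is affected until the commit, which is the point of isolating her changes.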
[22:35] it won't start, restart, stop, uninstall, reinstall, install, reconfigure - anything
[22:35] I'm trying this fix right now: http://goo.gl/wiEA (last comment)
[22:36] webPragmatist: Thanks so far - you definitely got me convinced to use a vcs :-)
[22:36] good luck
[22:40] hrm
[22:48] what is ubuntu's equiv of /etc/sysconfig/network-scripts/route-ethX
[22:48] hrm
[22:57] elb0w: good question.. hmm
[22:58] elb0w: you could use an up stanza
[22:58] elb0w: up route add -net xxxx gw
[22:58] elb0w: probably want a corresponding down too
[23:02] I have the latest and greatest 10.04 and I removed a drive from a software RAID 1 setup; if I put a brand new swappable disk in, will it magically join the mirror?
[23:04] hmm
[23:04] won't that go away
[23:04] when I restart?
[23:04] any news on the MySQL stuff?
[23:06] elb0w: /etc/network/interfaces I mean
[23:06] elb0w: you add it as a sub-option to your interface
[23:06] just add it?
[23:07] so iface eth0 inet dhcp
[23:07] up route add blah
[23:07] should have paid more attention in networking haha
[23:07] im a dev, our networking guy left for the day
[23:07] elb0w: 'man interfaces' for an example
[23:07] ok will do
[23:07] have to figure out how to set static ip first
[23:07] lol
[23:07] elb0w: the 'up flush-mail' .. replace 'flush-mail' with your route command
[23:15] SpamapS, you got a moment? have a question for you
[23:16] elb0w: ask away, if I don't have an answer somebody else might
[23:16] http://pastebin.org/286673
[23:17] im trying to do that
[23:17] in ubuntu
[23:19] http://pastebin.org/286677
[23:19] elb0w: maybe that works? I don't know if you can have multiple up commands
[23:19] what is -host vs -net
[23:19] elb0w: if not, you can put your up commands in a shell file and run that
[23:20] just as it would sound.. -net routes to a network, -host routes to a single host
[23:20] can I break something if I use the wrong one?
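Pulling SpamapS's advice together, Ubuntu's equivalent of Red Hat's `route-ethX` files is `up`/`down` lines under the interface in `/etc/network/interfaces`. A sketch with placeholder addresses (not the ones from the pastebins):

```
# Hypothetical /etc/network/interfaces stanza: static address plus
# persistent routes, applied when the interface comes up.
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    # -net routes a network and needs a netmask; -host routes one address
    up route add -net 10.0.0.0 netmask 255.255.255.192 gw 192.168.1.1
    up route add -host 10.1.1.5 gw 192.168.1.1
    # corresponding down lines keep teardown clean
    down route del -net 10.0.0.0 netmask 255.255.255.192 gw 192.168.1.1
    down route del -host 10.1.1.5 gw 192.168.1.1
```

Because the routes live with the interface definition, they survive reboots, which answers the "won't that go away when I restart?" worry above.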
[23:20] well it won't work
[23:20] networks need netmasks
[23:21] hosts don't
[23:21] so in my paste
[23:21] netmask0 is 255.255.255.192
[23:21] you dropped it to 255.255.255.0
[23:21] that's ok?
[23:21] no i missed that
[23:22] ok
[23:22] i can figure that one out
[23:22] i think
[23:22] :P
[23:22] copy the values ;)
[23:22] from yours i mean
[23:24] ok
[23:24] now just put the iface down then up?
[23:24] and i'll know if it worked?
[23:24] as soon as I test the route
[23:24] of course
[23:32] yes
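The RAID 1 question above ([23:02]) went unanswered in the log: a fresh hot-swapped disk will not magically join the mirror. A sketch of the usual mdadm sequence, with hypothetical device names and assuming a conventional MBR partition layout:

```shell
# Assumed devices: /dev/sda is the surviving member, /dev/sdb the new
# disk, /dev/md0 the degraded array. Adjust to taste - these are guesses.

# Copy the surviving disk's partition table onto the new disk.
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Add the new partition to the degraded array; the resync starts
# automatically once the member is accepted.
mdadm --manage /dev/md0 --add /dev/sdb1

# Watch the rebuild progress.
cat /proc/mdstat
```

Until the resync finishes, the array stays degraded, so it is worth confirming in `/proc/mdstat` that recovery actually began.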