[00:00] soren: Yeah, I installed a specific smartlink file from synaptic so I guess I don't know what I'm doing. :) [00:00] soren: Would pppoe taking control of ppp0 have a conflict? [00:02] Centaur5: Depends on what you want to do with the modem. :) [00:02] Centaur5: Not all uses of modems involve ppp. [00:02] soren: I want to use hylafax so any machine can send a fax. Any better method for doing this? [00:03] Centaur5: Not sure. I've used asterisk for it a few times. [00:03] Centaur5: But that's quite different. [00:04] I know this is going to be a newbie question but how do you find out if the modem is using /dev/ttyS0? [00:05] Centaur5: Just use /dev/modem [00:05] Aw, crap, I need to go to bed. [00:05] Time flies when you're having fun. [00:05] soren: alright, thanks. I'll try to figure it out. [00:05] or when you're arguing? [00:06] Centaur5: Arguing is fun. [00:06] sometimes. [00:06] Centaur5: When it turns out to be not completely pointless, it's fine. [00:06] haha, if you're going to argue with the wife do it naked so you can make up promptly [00:07] and this time, I managed to convince someone that I was right (or so i think), so it's all good. [00:07] Centaur5: Good advice. [00:07] :) [00:07] alright, g'night soren [00:08] yes, g'night soren [00:08] g'night, everyone. [00:08] night soren [00:08] Oh, hi, ajmitch! [00:08] And good night. [00:08] hi :) [00:08] :) [00:08] * soren whisks off to bed. [00:20] anyone know how I can enable putting the monitor in power saving mode during inactivity, rather than just blank screen? [00:27] nealmcb: ew. smarthost/password... is that sasl-ish? [00:27] nealmcb: so what I need is a howto-do-sasl config [00:33] lamont: I haven't looked at exactly what fastmail is looking for, and maybe this is a less common use case than I was thinking, but it seems like it would be increasingly popular. [00:39] lamont: the itch that started me down this path was wanting to run caff to sign keys from the uds-boston keysigning party. but I didn't have email configured on my laptop, and I didn't have my pgp keys configured on the server I send mail from. I thought while I was at it, it should be a smarthost setup to fastmail (a password-protected relay) so it would work on the road without reconfiguration. but maybe the number of smarthost installs t [00:41] ah. [00:42] I just taught my postfix install that anyone with a cert signed by my CA is loved. [00:43] and maybe it's silly to even be wanting to run postfix on the laptop since I usually read it over ssh via mutt to another machine. but I'll probably be changing that. [00:44] lamont: I don't run the smarthost - just the postfix on the laptop - so I don't set the policy... [00:44] Anyone familiar with freeradius? [00:44] I'm having some freaky issues trying to use 1.1.7-1 from debian unstable [00:45] freeradius: relocation error: /usr/lib/freeradius/rlm_sqlippool-1.1.3.so: undefined symbol: sql_get_socket [00:50] nealmcb: once I have a good sasl-config writeup that doesn't break the other options, I plan to include it. [00:50] it's more a function of me not needing it :-( [00:52] lamont: i.e. you want someone else to figure out how to fit that into the way ubuntu does sasl configs? I'm certainly no expert there, but if I ever get the itch badly enough I may plunge in.... [00:53] I don't care who does it... I just know it'll go faster if someone else does it... [00:53] in the meantime I'll send some patches to the doc to clarify that this is NOT what the current doc describes how to do....
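The password-protected-relay setup nealmcb is after is usually handled with Postfix's SMTP client SASL support. A minimal sketch, assuming submission on port 587 and the libsasl2-modules package installed; the hostname and credentials below are placeholders, not fastmail's actual settings:

```
# /etc/postfix/main.cf (relevant lines only)
relayhost = [smtp.example.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt

# /etc/postfix/sasl_passwd, then run: postmap /etc/postfix/sasl_passwd
[smtp.example.com]:587    user@example.com:secret
```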
[00:54] re: who does it - that's what I thought - makes sense [01:01] nealmcb: Did you get your smarthost problem solved? [01:02] ScottK: nope [01:02] nealmcb: What do you use for SASL? [01:02] nothing yet [01:02] I'm just using my cable smarthost for the time being [01:02] no password there [01:02] If you have "The Book of Postfix" it gives you a good how-to. [01:02] ScottK: heh. I might at that [01:03] not sure where it's hiding though [01:04] * lamont works on figuring out what got changed in gutsy(?) that broke pam/ldap for him [01:05] Using cyrus-sasl and sasl-db it wasn't that hard. [01:05] Assuming you've set cyrus-sasl up once already. [01:06] Looking at my thunderbird config, I'm not even sure it uses sasl - it specifies "username and password" and "tls" but doesn't say sasl, though thunderbird may just be dealing with it under the covers.... [01:07] That's SASL. [01:10] if it wants a user/pass, it's SASL [01:11] it's been too many years since I looked at that - so even plain text passwords are sasl - that makes sense.... [01:13] so in the real world, how common are plain-text passwords that use tls for secrecy, vs no-tls, and some other crypto for just the passwords? [01:15] anyone here using ldap for user creds? [01:15] * nealmcb resists the urge to !ask lamont (not) [01:16] nealmcb: well, the follow up is "WTF am I doing wrong?" [01:16] I had it working in feisty, and then y'all made it better in gutsy, and broke everything [01:16] lamont: You'll have to be more specific :-) [01:17] perhaps the question is "will the last person to touch ldap step forward" :-) [01:18] ldapsearch -LLL -x -D cn=admin,dc=mmjgroup,dc=com -W -H ldaps://ldap.mmjgroup.com -b dc=mmjgroup,dc=com 'uid=lamont' [01:18] that works. finger lamont doesn't hit ldap [01:18] rather, if I use a diff user, which only exists in ldap, then 'no such user' although ldapsearch happily drops the entire entry above. [01:20] 'lo [01:25] nealmcb: plain methods plus TLS are the most common I believe. I suspect plain with no TLS is nearly if not more common. [01:38] * lamont is reminded that he hates perl [02:04] fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 [02:04] connect(3, {sa_family=AF_INET, sin_port=htons(389), sin_addr=inet_addr("192.168.35.41")}, 16) = -1 EINPROGRESS (Operation now in progress) [02:04] select(1024, NULL, [3], NULL, {30, 0}) = 1 (out [3], left {30, 0}) [02:04] getpeername(3, 0xbfe35138, [128]) = -1 ENOTCONN (Transport endpoint is not connected) [02:04] maybe it's not me... [02:04] I think there should be a "connect" call after the select, no? [02:05] ITS YOUUUU [02:05] no [02:06] heh. [02:06] so... why did getent use ldap, instead of ldaps. [02:08] for the love of pete [02:09] diff ldap.conf{.bad,} [02:09] 2c2 [02:09] < uri ldaps://ldap.mmjgroup.com [02:09] --- [02:09] > uri ldaps://ldap.mmjgroup.com/ [02:12] it would really be nice if the docs actually said that a trailing slash is required. Or, for the &*)(*^)( win, append a / when there isn't one. [02:13] heh. [02:14] have you got start_tls or whatever it is? [02:14] tls start_tls [02:14] ssl start_tls [02:14] that's it [02:14] ssl on/off/start_tls [02:14] I gave up on tls/ssl [02:14] * lamont floods a little. 
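For the getent/finger symptom lamont is chasing, the usual first checks are whether nsswitch actually points at ldap and whether NSS (as opposed to a raw ldapsearch) can see the account. A quick sketch, with "lamont" standing in for any LDAP-only user:

```
grep -E '^(passwd|group|shadow):' /etc/nsswitch.conf
#   expect something like:  passwd: files ldap
getent passwd lamont        # goes through libnss-ldap, unlike ldapsearch
getent passwd | tail -n 5   # LDAP users normally appear after the local ones
```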
[02:14] and just put all of my ldap authentication into a secure airgap network and put it plaintext [02:14] base dc=mmjgroup,dc=com [02:14] uri ldaps://ldap.mmjgroup.com/ [02:14] ldap_version 3 [02:14] rootbinddn cn=admin,dc=mmjgroup,dc=com [02:14] nss_base_passwd ou=People,dc=mmjgroup,dc=com?one [02:14] nss_base_shadow ou=People,dc=mmjgroup,dc=com?one [02:14] nss_base_group ou=Group,dc=mmjgroup,dc=com?one [02:14] TLS_CACERT /etc/ssl/certs/MMJ-2005-cacert.pem [02:14] TLS_REQCERT demand [02:14] use_sasl no [02:14] rootuse_sasl no [02:15] mm, looks fine [02:15] works fine, except for gutsy (1) renaming all the files, and (2) changing to require that trailing slash [02:15] awesome gutsyness [02:15] I haven't even started testing it yet :\ [02:15] oh, I expect that it's true in debian too [02:15] Oh geeze. [02:15] nss-ldapisms [02:15] shield my eyes! [02:15] wasabi: nothing else can do what it does! [02:15] nss is love [02:15] Winbind and Samba can [02:15] Better [02:15] oh pfft [02:16] except for the sodomotron parts. [02:16] I would not mind LDAP on Linux if nss-ldap and pam-ldap didn't suck so blatantly compared to alternatives. [02:26] yeah, that's true [02:26] they are pretty shit [02:26] I wish my senior hadn't told me we had to use it [02:27] so that I could use like, nss-mysql and pam-mysql [02:27] or something, else. [02:48] * lamont looks around for a pam.conf knowledgeable person to confirm that '... required pam_permit.so' is basically a no-op [02:48] whereas '... sufficient pam_permit.so' is a "no more checking, just let'em in" directive [02:50] should be sufficient pam_ldap.so [02:50] required pam_permit.so [02:52] right [02:53] my "sufficient" example above being totally wrong other than for explaining how stupid it is... [03:53] * lamont calls the new home-config package sufficient. [04:30] is there a way to see a dhcp table of what addresses have been given out on Gutsy? [04:39] Centaur5, do you mean /var/lib/dhcpd.leases [04:39] hatter: /var/lib/dhcp3/dhcpd.leases [04:39] er, Centaur5 [04:39] ya thats it [04:39] perfect, thanks [07:40] <_ruben> bah .. i suffered from Bug #141601 last night .. not sure if i should be happy with the fact that it's a "known" issue :p [07:40] Launchpad bug 141601 in tasksel "tasksel packages stays at 100%" [Undecided,New] https://launchpad.net/bugs/141601 [08:03] _ruben: What does it mean that a package is at 100%? [08:37] hi all [08:37] got question [08:38] does ubuntu-server use upstart or sysv ? [08:39] Gutsy [08:40] how can I figure this out? [08:40] you've installed gutsy? [08:40] dpkg -l |grep upstart [08:40] yes [08:40] gutsy uses upstart [08:41] Is there any tool like sysv-rc-conf to manage startup sctipts ? [08:44] there is upstart-compat-sysv package that says "compatibility for System-V-like init" so I gues sysv-rc-conf is ok [08:47] anybody here interested in creating drive images ? I've found good project - fork of partimage, it needs our help https://launchpad.net/partimage-ng [09:05] <_ruben> soren: its indeed described a bit vague, but it means that the progress bar gets stuck at 100% [09:06] _ruben: Aha.. Anything interesting in the process table? [09:06] <_ruben> zombie process [09:07] <_ruben> cant reproduce atm since im at work and the issue was at home and wol aint working on that box :/ [09:07] <_ruben> i think it was some apt-* process that was in zombie state [09:08] <_ruben> it was 1 or 2 lines below the whiptail process [09:08] _ruben: The proces that is a zombie is not the problem. 
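The sufficient/required ordering lamont is poking at above typically ends up looking roughly like this in /etc/pam.d/common-auth once libpam-ldap is in play. This is only an illustrative sketch of one common arrangement, not a drop-in file:

```
# /etc/pam.d/common-auth -- one common pam_ldap arrangement (sketch only)
auth    sufficient   pam_ldap.so
auth    required     pam_unix.so nullok_secure use_first_pass
# 'required pam_permit.so' at the end of a stack is effectively a no-op;
# 'sufficient pam_permit.so' would let anyone in, hence the correction above.
```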
[09:09] <_ruben> (never really understood the real concept of zombie procs) [09:10] <_ruben> apart from getting rid of them can be quite tedious, except for this case, killing the tasksel proc kills all [09:12] It's hardly ever tedious. [09:12] When a process terminates, it has an exit code. [09:13] Until another process has read this exit code (by issuing the wait() system call), the process can't be removed from the process table. [09:14] A process that has terminated, but has not been "reaped" (had its exit code read), is a zombie process. [09:14] <_ruben> ah, didnt know that [09:14] If a process' parent process dies, the process is orphaned and adopted by init (pid 1). [09:14] init will always take care of calling wait() on terminated processes. [09:15] So... Putting these two facts together, we get: [09:15] To get rid of zombie processes, you need to focus on the parent. [09:15] Get the parent to bury its dying child process, or kill the parent so that init can take care of it for them. [09:15] A zombie process is harmless. [09:16] It takes a spot in the process table, but all its memory and such has already been freed. [09:16] It's cosmetic, really. [09:17] <_ruben> true .. tho the fact that tasksel is hanging (be it cause or result), is a bit of an issue ;) [09:18] _ruben: Possibly. I'd need to see the process table when this happens. [09:18] <_ruben> figured as much [09:19] <_ruben> trying to reproduce it on a vm here probably aint gonna work, since if it'd be 100% reproducible, there would probably be more comments, etc [09:19] yeah [09:20] <_ruben> on the system i played with last night (dell c521) it was 100% reproducible (tried few times) [09:20] <_ruben> tho i cant think of anything fancy that could be causing this [09:31] Dunno. === ScottK2 is now known as ScottK === AnRkey_ is now known as AnRkey [14:13] how can i get nmap to use a broadcast ip? [14:13] it's driving me nuts cause google is not turning up much [14:13] why would you do that? [14:19] AnRkey: do you want to scan an entire subnet? [14:19] AnRkey: sudo nmap -sS 10.0.0.0/8 [14:20] for example... if you are trying to scan a subnet anyway [14:23] yeah but our cisco vlans are configured to block broadcasts [14:23] so i need to spec a broadcast ip in the nmap command [14:24] for example when we use wakeonlan we do ... wakeonlan -i 172.16.10.255 -p 9 172.16.12.0/24 [14:25] so the wakeonlan broadcast for 12.0/24 goes through 10.255 [14:25] i can't see any option for broadcast ip's in nmap though... [14:26] AnRkey: mmmm... not sure, you might double check the man page if you haven't [14:28] i have almost memorized the man page :D [14:29] thanks anyhow [14:31] i will not let it win!!! [14:38] people do strange things with their networks :) [14:40] AnRkey: It won't work anyway. [14:41] AnRkey: I can't imagine any system in its right mind will respond to requests sent to the broadcast address. [14:47] lamont: Got a minute for an HPPA question? [14:47] good point soren [14:48] * AnRkey ponders his predicament... [14:51] lamont: Nevermind. Figured it out. Sendmail isn't built yet. Urgh. [14:52] lamont: Are you planning on asking for give backs on Universe stuff that doesn't build for HPPA in Hardy because builds are out of sequence? [14:52] ScottK: yeah. at some point. [14:52] I figured I'd let it catch up, and then have someone do a mass give-back [14:53] sendmail should bump up? [14:53] OK. I won't worry about it then. [14:53] and what needs to be retried because of it?
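soren's explanation of reaping translates into a short diagnostic recipe; the awk filter and the <parent-pid> placeholder below are just illustrative:

```
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'   # zombies show up with state "Z"
# The PPID column is the parent that hasn't called wait() yet.  Either nudge it
# (only helps if it actually handles SIGCHLD) or kill it so init adopts the child:
kill -CHLD <parent-pid>
kill <parent-pid>
```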
[14:53] lamont: dkim-milter [14:53] It was looking for libmilter1, but HPPA doesn't have it yet because Sendmail 8.14 isn't built yet on HPPA. [14:54] sendmail at 900, dkim-milter at 350 [14:54] Which means? [14:54] so it's _way_ down the pipe after sendnail [14:55] sendmail will build before anything else in universe, after all of main [14:55] Well it already FTBFS once. [14:55] universe largely defaults to 355, so it'll come after a large chunk of universe [14:55] dkim-milter, taht is [14:55] sendmail ftbfs? [14:55] No, dkim-milter [14:55] Sendmail is not yet built. [14:55] The new one [14:56] 8.13 built, but not 8.14. [14:57] sendmail is next up, unless something from main hops in ahead of it. [14:58] * lamont needs to go heads-down on a work thang today [15:17] moin [16:02] hello, this is my first visit, I have 2 ubuntu servers hosting 10 domains :-) [16:09] Runithad: welcome! [16:44] thx [16:54] I need to build php5 with a certain configure options. How can I figure out which configure options the ubuntu php5 package has? [16:59] jaredthane: php --info from a terminal will tell you. [16:59] you could also create an info.php file calling the phpinfo() function. === dantalizing is now known as dantalizing|lunc [17:14] if I am using DRBD with heatbeat 2, do I still need outdate-peer in drbd.conf? [17:35] What is the preferred way for granting ftp access to a non home directory, for instance the apache web directory /var/www -R , using vsftpd [17:36] Just give the account access to that directory [17:36] Or even set it's home directory to /var/www if you want it to start there on login [17:37] set the users home directory [17:37] ? [17:37] or chmod the user [17:38] If you want them to start in /var/www when they login, set their home directory to that [17:38] However, that doesn't give them permissions [17:38] You need to change the permissions to do that [17:40] thx [17:40] No problem. [18:00] i want to fully understand file permissions, so if /var/www is owned by user root and group root, to give my local user write permission there, i have to execute chmod with the +o option? [18:01] kshah, no [18:01] oh no [18:01] how does my local user relate to the groups? [18:02] You would have to set the group for /var/www to something different [18:02] And than add that user to that group [18:02] and set the permissions for the group for that directory to what you want [18:05] okay that makes sense, but does that effect, say the daemons who need to read there, apache, or rails? [18:05] It might if you don't do it correctly [18:07] I'm not following, as I understand the apache user is www-data, right? [18:07] but unless they are in the 'root' group, how does their access work? [18:08] What is the output of ls -l | grep www ? [18:09] 755 [18:09] for /var/www and subs [18:10] Who is the owner and group of /var/www [18:10] root / root === dantalizing|lunc is now known as dantalizing [18:13] i just don't get what is the proper way to give my user permissions to the /var/www directory [18:16] kshah "sudo chown -R /var/www" then "sudo chgrp -R www-data /var/www" will allow your user to "own" the files, and the web server to read them [18:18] dantalizing: and that won't interfere with rails or anything like that because Apache hands off the files to rails and then rails back to Apache? [18:20] okay, so now I 'own' the files, and I put myself in the group that apache creates when it installs www-data? 
is that right [18:21] shouldnt interfere with your rails [18:21] regarding your perm setup, really depends on what else you need to do [18:21] does the web server process need to modify files? [18:22] do you have other users who will be modifying files? [18:22] users: probably not [18:22] if you own the files, no need to add yourself to www-data [18:22] web server process: i don't know, this is just a rails app [18:23] are you saying that if rails needs to create a file, it may have a problem since it'll be apache handling it and it doesn't have write permissions? [18:23] for instance, a typical php blog app will write to a config.php during a web based configuration, and therefore www-data would need write access to that file [18:23] okay i see, yeah [18:23] but if you're just reading files, www-data only needs read [18:24] so then what do people typically do to accomodate for all situations? [18:24] do they just do it case by case? [18:24] and grant permissions for specific files? [18:24] best practice [18:24] imho, "all" is too general [18:24] ok [18:25] i dont know "best" practice, but for my wifes static website (no rails, no php), i made the files owned by her, read by www-data [18:25] all html is 640 [18:26] and dirs are 750 [18:26] that wouldnt work if you have a web based template modify page, for instance [18:27] I never leave the files with root owner/group [18:28] okay, and so if my rails app needs to write uploaded files, I can do it in a folder that i specifically grant permissions to that is below the web root [18:28] or preferably outside the webroot, but yes [18:29] so assuming you own the dir, and www-data is the group, that dirs permissions would be 770 [18:32] cool, I think I got it, make exceptions to the security, not security to the exceptions [18:32] well put.. [18:33] exceptions might not be the best word, but I get it :) thank you dantalizing [18:33] advice worth every penny you paid! [18:33] :) [18:37] lol [19:16] bug 155947 [19:16] Launchpad bug 155947 in libnss-ldap "ldap config causes Ubuntu to hang at a reboot" [Undecided,Incomplete] https://launchpad.net/bugs/155947 [19:17] i think we got bitten by that today [19:17] at work [19:21] Hey all... [19:21] I have a Dell PERC 5i controller that works beautifully with the megaraid_sas driver [19:21] but now I'd like to get notified when the array is degraded [19:22] I yanked out a drive, and the LEDs indicated that the array was being rebuilt, but there's nothing in syslog [19:24] Will the module do any status reporting, or do I need Dell's OpenManage cra^H^H^H stuff to talk to the controller? [19:27] ...so apparently the megaraid_sas has no hooks into /proc >:-| [19:27] Anybody had any luck with the Dell OMSA stuff in Ubuntu? [19:31] can anyone possibly tell me a reason why every time my friend visits my website (any file, ubuntu 7, apache 2.2) he has to refresh the page before it shows, the first time he visits it is a blank page? [19:31] kshah: his cache [19:32] mralphabet: but it is the first time he's visited the page, he clear his cache and it still requires him to refresh, or am I misunderstanding you? [19:33] so you make a page, blah.html with stuff in it and it shows blank the first time he visits it? [19:33] yes [19:33] sorry, I misunderstood then, that is odd [19:35] does your error log say anything? [19:38] does this happen for any other visitors? 
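Spelled out, the recipe dantalizing describes needs an owner argument on that chown; a sketch with kshah as the local user and a hypothetical group-writable upload directory:

```
sudo chown -R kshah /var/www                      # local user owns the files
sudo chgrp -R www-data /var/www                   # Apache's group gets read access
sudo find /var/www -type f -exec chmod 640 {} \;  # files: owner rw, group r
sudo find /var/www -type d -exec chmod 750 {} \;  # dirs: owner rwx, group rx
sudo chmod 770 /var/www/myapp/uploads             # only where the app must write
```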
[19:39] it doesn't happen for me [19:39] checking the log [19:39] if it works for you and doesn't for him, I have to say it is something on his side of things [19:39] what client browser? [19:40] FF [19:41] its so odd [19:41] and has he tried IE? [19:42] or safari or any of the others? [19:42] or lynx if he's on a linux box? [19:42] asking him to use IE [19:43] I wonder if it is because there is a conflicting DNS entry [19:43] two servers both claiming to be something.com [19:43] doesn't seem to make sense though [19:44] that could be a roundrobin answer [19:44] attempt 1 goes to ip 1, attempt 2 goes to ip 2 [20:03] damnit [20:04] I am using mod_jk to connect to an ajp13 worker, and it totally ignores my JkWorkersFile setting and just initializes a worker called ajp13 trying to connect to localhost:8009 [20:04] it'll bitch if the file isn't there of course, but it doesn't load workers from it [20:24] not sure if I should ask this here or in #apache, but I've successfully followed the ubuntu-server guide in the past to enable SSL, self signed, but I want to it to work like a real website, only for pages that I designate as needing to be secure, login/logout, accounts, etc [20:24] can someone help me with that? [20:26] kshah: you can place the security settings in a .htaccess file [20:26] I'm not 100% sure if that's what you're looking for though. [20:27] well, like when someone clicks on 'login', that should be in https:// [20:27] 'should be in' didn't make sense, but you know what i mean [20:28] kshah: for a situation like that what I usually do is a rewrite rule. [20:28] the link that you make points to https://somesite.com/somedir/somesslfile.html [20:28] okay, i know what you mean, i think i saw an example of that [20:28] sommer: in the conf file [20:28] kshah: did your friend fix his browsing problem? [20:29] kshah: should be there's also some great examples in the docs on the apache site. [20:29] mralphabet: I don't think so, I don't think its happening to him in IE, he doesn't know whats up [20:31] sommer: thanks, I think I read over an example there, I just wanted to confirm here in case I misunderstood [20:31] np === jetole_ is now known as jtole [20:39] hey guys, I am looking to do load balancing with failover for a web site, the two locations for the site are located states away from each other and we were originally going to do DNS with two A records and low cache so we can manually remove one if a site goes down [20:39] but I thought you guys might know of a better solution [20:39] redhat-cluster-suite + ldirectord [20:39] oh... sorry [20:40] states away [20:40] didn't notice that part :) [20:40] especially if it can be automated so if one site fails, traffic is automatically diverted to site B [20:40] yeah, it's ok [20:41] plus it has to be OS independent since the sites are on windows IIS/SQL however some of them are on xen on ubuntu with a debian IDS at one site [20:41] SQL is fine actually, the web servers always connect to the SQL at the same site [20:41] so basically it would be end user @ anywhere connecting to 80/443 [20:45] well... you can't do much on those systems [20:46] both have public IP address, right? [20:48] jtole: use the linux HA packages [20:49] can do easy failover between n+1+x systems [20:49] I use it here locally on a private LAN for a callcentre Asterisk setup [20:49] across the intertrons it should work fine [20:49] oh what [20:49] one server is windows? nevermind [20:50] ivoks: Saw your reply on the server list. Sounds very good. 
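The rewrite-rule approach sommer mentions for pushing only login/account pages to https might look something like this in the plain-HTTP vhost; the path names are hypothetical and the port test is just one simple way to detect non-SSL requests:

```
# in the port-80 <VirtualHost>
RewriteEngine On
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^/(login|logout|account)(/.*)?$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```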
I think this will be a big step forward for Ubuntu mail server easy of setup. [20:50] easy/ease [20:52] i hope so [20:53] fujin: HA or redhat-cluster-suite is not good options for this situations [20:54] ivoks: yes, they are all on public IP, like I said, right now our main coarse of action is DNS with two A records and a 5 minute cache time but if one server goes down it requires manual intervention to initiate the failover [20:54] fujin: so for linux HA I am fscked? [20:55] jtole: do you really have a fail over? [20:55] jtole: You could write a script to check and modify the DNS if it gets no response. [20:55] i mean... i guess each server has it's own sql database, right? [20:55] so... services don't fail over [20:55] they just die, right? [20:56] Personally, I think it's more trouble than it's worth. Just make the primary as reliable as you can and suck up what little outages you get unless it's so critical you can afford to do it right. [20:58] ivoks: no, not yet, the second co-location will be implemented in about two weeks [20:58] right now we just simply have a primary site [20:58] will they have same SQL data? [20:59] ivoks, there will be SQL servers at each site, currently there are two at our main site but no fail over and it is managed hosting solution (which I don't like) and they will not provide us one [20:59] however both sites will be getting transaction (up to the minute) updates of remote sites [20:59] MS SQL transactional replication [21:00] and one machine is windows, and the other is linux? [21:01] ScottK: hey, just wondering if you'd had a chance to review the Mail Filtering section of the Postfix docs? [21:01] No. Sorry. Still on my list. [21:01] ScottK: cool, no rush [21:08] ScottK: it is not only crucial but was more then a recommendation of upper management, so far I have allocated 24k in new hw expenses as well as 1400 a month on co-location costs and it was all approved in record time [21:09] I don't imagine any big web site has only one location and I have seen many mid sized companies in previous employment that do not [21:09] jtole: In that case, I'd suggest doing a proper failover or HA solution like ivoks was suggesting. Don't mess with the Windows/Linux mix [21:09] well right now windows is a requirement [21:09] yuck@windows/linux mix [21:09] then do windows/windows [21:09] jtole: True, but I think it's more important for scalability than reliability. [21:10] and use the windows cluster tools [21:10] fujin: windows VM is a pain in the ass and we want quick restoration in the event of a problem [21:10] My web host, on a shared server has ~5 minutes of down time a year. [21:10] all that you have right now is round robin dns answers and a low TTL, that is not failover ;( [21:10] i.e. in xen copying c:\ from 5 days ago back over corrupt c:\ etc [21:11] ScottK: well our managed hosting provider has had 3 of our servers go down in the last few months [21:11] so those are windows on top of linux [21:11] jtole: if you want quick recovery from a meltdown on the windows side, there is a symantec product that can restore to bare metal in ~ 1 hour [21:11] ayayay [21:11] that is why co-location is now a priority [21:11] jtole: then use linux/linux [21:11] jtole: Then get a better provider. 
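ScottK's "write a script to check and modify the DNS" idea could be as small as a cron job like the sketch below; every hostname, address and key file here is a made-up placeholder, and it assumes the zone accepts BIND dynamic updates via nsupdate:

```
#!/bin/sh
# Pull the primary's A record if the primary site stops answering on port 80.
PRIMARY=203.0.113.10
if ! curl -fsS --max-time 10 "http://$PRIMARY/" >/dev/null 2>&1; then
    nsupdate -k /etc/bind/failover.key <<EOF
server ns1.example.com
zone example.com
update delete www.example.com. A $PRIMARY
send
EOF
fi
```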
[21:11] ivoks: yes A windows on top of linux [21:11] xen [21:12] * mralphabet sighs [21:12] drop linux and go with windows only [21:12] nothing else works [21:12] fujin: windows is a requirement, this site has been long established for years and is all ASP / SQL 2000 [21:12] ScottK: co-location will be a better provider [21:12] then take linux out of the equation and use the windows HA tools [21:14] does the linux OS actually do anything other then serve xen? [21:14] mralphabet: no but it will be serving multiple machines on xen [21:15] and what do these multiple vm's do? one for asp and one for sql? [21:15] two for IIS, one SQL, one mail, another one running linux nagios on one of the machines [21:16] so many xen machines... [21:16] i hope you have two quad core processors :) [21:16] that is what xen was built for [21:16] you're doing it wrong [21:16] and 16GB of ram :) [21:16] as I said earlier [21:17] aye, you are doing it wrong [21:17] yes, AMD 2.4 Ghz w/ 8GB RAM and RAID 5 with 5 250GB sata 2 [21:17] take Linux out of the equation and use the windows clustering/HA tools [21:17] your stated goals do not match up with the hardware / software mix you have [21:17] on two machines + IDS and bypass switch + switch w/ monitor port [21:17] so you guys are saying to lose windows all together on this one? [21:17] !pastbin [21:17] Sorry, I don't know anything about pastbin - try searching on http://ubotu.ubuntu-nl.org/factoids.cgi [21:18] er, lose linux I mean [21:18] !pastebin [21:18] pastebin is a service to post large texts so you don't flood the channel. The Ubuntu pastebin is at http://paste.ubuntu-nl.org (make sure you give us the URL for your paste - see also the #ubuntu channel topic) [21:18] jtole: either lose linux, or lose windows [21:18] or go to vmware esx [21:18] a 100% linux environment will enable you to use the heartbeat / linux-ha clustering packages for failover [21:18] esx has failover packages for vm's [21:18] and a 100% windows environment will let you do a similar thing with clustering [21:18] mralphabet: esx wont' work, as, his two servers are 'states' away afaik [21:18] khm... redhat-cluster-suite instead of ha :) [21:18] like I said, I can't lose windows, I would like to but I cannot [21:19] wth@ redhat-cluster-suite [21:19] I don't even know what that is, it's so wrong [21:19] s/redhat.*// [21:19] fujin: I thought esx could do remote failover in case a building disappears [21:19] lol [21:19] fujin: ? [21:19] mralphabet: not sure about that [21:19] but ESX at both locations would be expensive [21:19] fujin: it's a tool, fully suported in ubuntu [21:19] (san, n+1 esx hosts) [21:19] wich isn't something you can say for ha [21:19] well, unfortunatly, we won't have linux at all at one site [21:20] ivoks: apt-get install heartbeat? [21:20] apt-get install heartbeat2 [21:20] fujin: in universe [21:20] fujin: he's already 24k deep /shrug [21:20] although this may become two co-locations once the first one is up and proves useful [21:20] fujin: r-c-s is in main [21:20] fujin: and much much better than ha [21:20] mralphabet: san+esx host(s) > 100k [21:20] fujin: what's another 75?! ;) [21:20] I guess. [21:20] I posted here a week or so ago about a Tripp Lite KVM keyboard and touchpad that didn't work in the server install. I got it working only by unplugging and plugging it back in. 
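For the heartbeat / linux-ha route mentioned above, the classic heartbeat v1 setup is just three small files; the node names, interface and floating address below are placeholders:

```
# /etc/ha.d/ha.cf
keepalive 2
deadtime 30
bcast eth0
auto_failback on
node web1
node web2

# /etc/ha.d/haresources  (primary node, floating IP, services to start)
web1 IPaddr::192.168.0.100/24/eth0 apache2

# /etc/ha.d/authkeys  (chmod 600)
auth 1
1 sha1 somesecret
```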
I included output of dmesg in this process here http://paste.ubuntu-nl.org/45942/ [21:21] akincer: log a bug [21:21] Was just thinking that [21:21] fujin: i'm just being sarcastic [21:22] Generally the engineer shouldn't have to worry about pricing. [21:22] true, to a point [21:23] anyway, jtole, as I said before, your stated goals and what have already don't really mix [21:23] I gotta run, cheers [21:23] I feel kinda bad for him [21:24] heh, yeah. [21:24] I wouldn't want to inherit that shitbag of a system. [21:24] he *is* doing it wrong, though. [21:24] yes [21:24] he's asking for help AFTER he already bought the system [21:24] "I did it wrong! help!" [21:24] ;| [21:25] epic fail [21:25] how about the novel approach of doing a little research first ;( [21:25] That's always good. [21:28] fujin: if you use HA, really check out cluster-suite [21:29] Got a bug report of my very own. How nice [21:29] It'd be a pain to change it. [21:29] fujin: it provides some features HA doesn't and provides support for shared (file) systems like drbd and gfs [21:30] fujin: that's what i tought so [21:30] ivoks: I rolled heartbeat v1 (linux-ha) for my systems, for basic ping-node failover. [21:30] fujin: now i just wish i did't it sooner :) [21:30] and have no use for drbd/gfs [21:30] I just check if asterisk is running, check conectivity etc [21:30] it's only very basic. [21:30] ok [21:31] s/drbd/gnbd/ [21:31] is gnbd functionally identical to drbd? [21:31] no [21:31] I had thought of using drbd for voicemail replication etc [21:31] drbd provides shared disk [21:31] but gave up and went with one-way rsync from the secondary from the primary [21:31] gnbd provides access to physical disk [21:31] oh, cool [21:32] ivoks: without copying the data? [21:32] with drbd you can set up network mirror [21:32] with drbd? [21:32] i'm using drbd for web servers [21:33] what does GNDB do? [21:33] imagine you have NAS [21:33] provide access to data over the network (like NFS)? [21:33] I've been looking for a way to share mailstores between my 3 mailhosts [21:33] well, yes and no... :) [21:33] all the data is on a SAN, but implementing file locking between them has been a pain [21:33] filesystem does that [21:34] with gnbd you export device [21:34] and then create GFS on it [21:34] I see. [21:34] so all systems can access that device at the same time [21:34] you just need to make sure that gnbd server doesn't fail [21:34] this is why i use drbd [21:35] drbd keeps data in sync on two machines [21:35] and allows both machines to rw at the same time [21:35] with GFS on top of it, problems with locking are solved [21:36] but theoretically [21:36] I'm reading the usage stuff now [21:36] it looks like it'll do what I want [21:36] i took me one week to figure it out what is what exactly :) [21:36] drbd would work, but replicating 300gb of mail is silly [21:36] between all 3 [21:36] you can't do that [21:37] oh? [21:37] you can have only two primaries at the same time [21:37] it's doesn't replicate all 300GB, only changes [21:37] so, on reboot, only changes are replicated [21:37] I see [21:37] but it'll still mean having 300gb x X [21:37] just to redundantly have 300gb [21:37] yes [21:37] while space isnt' really an issue (we've a 5tb~ SAN) [21:38] I'd prefer something that just shared the exact data, with happy file locking [21:38] (and wasn't NFS!) [21:38] * fujin cringes @ NFS [21:38] gnbd+gfs [21:38] Yes, it seems like it'll do what I want. 
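ivoks's description of drbd (two nodes kept in sync, both writable, GFS on top for locking) corresponds to a resource definition along these lines in drbd 8; host names, devices and addresses are placeholders, and allow-two-primaries is only safe with a cluster filesystem such as GFS on top:

```
# /etc/drbd.conf (sketch)
resource mail {
  protocol C;
  net { allow-two-primaries; }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.0.2:7788;
    meta-disk internal;
  }
}
```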
[21:38] just don't use OCFS [21:39] ocfs died on me couple of times during testing [21:39] gfs works great [21:39] Thanks for the suggestion [21:39] I've made note of it and will investigate further when my current projects are completed [21:40] source: http://sources.redhat.com/cluster/ [21:40] :) [21:40] And you said it's apt-gettable? [21:40] it's in main [21:40] That's handy. [21:40] it's the only clustering system supported in ubuntu [21:41] everything else is in universe [21:41] community supported [21:41] I see. [21:41] * Nafallo hates servers [21:41] I hadn't had any issue with linux-ha, and that was the first tutorial I found [21:41] me too [21:41] generally don't do application-level failover. [21:41] or, hadn't done it before [21:41] i had one problem with linux-ha [21:42] two machines, both running mysql in master-master replication [21:42] each machine has its own IP [21:42] and mysql binds to that IP [21:42] one has VIP, so mysql binds to VIP also [21:42] but when that machine fails, VIP goes to other machine [21:42] nasty [21:43] and then you have a problem [21:43] I hate two-way MySQL replication. [21:43] we do master-slave here, with manual failover [21:43] mysql needs restart, cause it isn't bound to VIP [21:43] with r-c-s, you don't have to do that :) [21:43] ivoks: any resources/tutorials on r-c-s configuration? [21:43] fujin: there's a GUI tool for setting up :D [21:44] My servers don't run GUI's! [21:44] it creates cluster.conf [21:44] no... it's a tool; you can run it on your laptop [21:44] it creates cluster.conf, which you then transfer to servers [21:44] I wouldn't run Ubuntu on a desktop, either. [21:45] does apt-getting redhat-cluster-suite install all of the magic stuff? like gfs-tools etc? [21:45] yes [21:45] ah, it's a metapackage I see. [21:45] so, basically [21:46] the clients (my mailhosts, in this example) will have gfs and gnbd client configured [21:46] and then theoretically behind that I'd have say, mailstores [21:46] with gnbd-server and gfs installed on it [21:46] s/it/them/ [21:46] right [21:47] cool [21:47] sounds great [21:47] now if only I could find some documentation or a tutorial on rcs [21:47] there are PDFs [21:47] search for Global_Network_Block_Device.pdf [21:48] and [21:48] Cluster_Administration.pdf [21:48] and Global_File_System.pdf too [21:48] cool, found it [21:49] will pass them onto my senior and have him browse through [21:49] may roll it on my phone system too, for the fun of it :) [21:50] if you have only two servers [21:50] it would, maybe, be better to stay with HA [21:53] anyway... good night to you all === tiborio__ is now known as tiborio
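One common way around the MySQL-and-VIP restart problem ivoks describes is simply not binding mysqld to a specific address, so a floating IP that arrives after failover is picked up without a restart. A sketch (0.0.0.0 is MySQL's "listen on all addresses" value):

```
# /etc/mysql/my.cnf
[mysqld]
bind-address = 0.0.0.0
# then restrict who may actually connect with GRANTs and/or firewall rules
```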