[00:00] <andol> captainkirk: When you do a "rsync filename user@host:path" that is nowadays using ssh by default. In the early days it used rsh instead.
[00:00] <captainkirk> andol: is this still possible though if the other machine is w2k server?
[00:01] <andol> captainkirk: It's always possible :) That said, I have no idea what's the easiest way to handle rsync in regards to windows server.
[00:02] <captainkirk> andol: i have installed a prog called deltacopy on the w2k server, it is running in server mode and configured to serve certain folders:
[00:03] <captainkirk> andol: the rsync command on my ubuntu server is able to connect and pull the data from the w2k server.... seems to be working ok
[00:04] <andol> Then it should be all good :)
[00:05] <captainkirk> andol: so far so good :)
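The two transports discussed above can be sketched as follows; host names and the daemon module name `docs` are placeholders, and the final lines are a harmless local dry run:

```shell
# Over ssh -- single colon -- the default transport nowadays (early rsync used rsh):
#   rsync -av file.txt user@host:/remote/path/

# Against an rsync daemon such as DeltaCopy in server mode -- double colon,
# rsync protocol on TCP 873, pulling the exported module "docs":
#   rsync -av w2kserver::docs/ /backup/docs/

# Local dry run: -n lists what would be transferred without copying anything.
mkdir -p /tmp/rsync_demo/src
echo hello > /tmp/rsync_demo/src/f.txt
rsync -avn /tmp/rsync_demo/src/ /tmp/rsync_demo/dst/
```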
[00:06] <phaidros> hm, just fiddling with fastcgi init scripts.
[00:07] <phaidros> is there a known solution for multiuser fastcgi init scripts, maybe configurable via /etc/default ?
[00:07] <phaidros> so, that the owner (unix user) is able to restart his own fastcgi script?
[00:21] <leonel> phaidros: you can try the cherokee web server; it starts your fcgis with the owner you want, and in case the daemon ends, cherokee restarts it for you
[00:23] <dsuch> Hey
[00:23] <dsuch> is it normal that I cannot find drbd8-utils package on a fresh Jaunty server?
[00:24] <dsuch> apt-cache search tells me there's nothing related to "drbd"
[00:26] <andol> dsuch: Strange, I can find it using exactly the same method. Does an apt-get update help?
[00:26] <dsuch> *blush*
[00:26] <dsuch> Yea, it does :)
[00:27] <dsuch> Thanks andol
[00:27] <andol> np
[00:44] <rayne> is there an easier way to get the ssh-rsa key to the node without typing it? it is so long i cant seem to type it correctly
[00:50] <phaidros> rayne: just scp a textfile containing the key to the machine ;)
[00:50] <phaidros> leonel: ic, that sounds good. but I'm nailed to nginx :/
[01:17] <rayne> phaidros: would i not still have to type the key... is there a file that contains the key that i may xfer with a usb stick ?
[01:19] <phaidros> rayne: actually I don't know what kind of key you're speaking of
[01:20] <rayne> i am setting up the node for ubuntu 9.04 eucalyptus, when i --addnode <node name> it prints on screen the ssh-rsa key to type into the node
[01:22] <Rafael> anybody can give me help on software raid
[01:24] <phaidros> rayne: I am no expert, but have used mdadm before
[01:25] <rayne> mdadm ?
[01:25] <phaidros> rayne: software raid in linux is usually done with mdadm (see man mdadm)
[01:26] <phaidros> irk, I wanted to say that to Rafael
[01:26] <phaidros> sry rayne ;)
[01:26] <rayne> :)
[01:27] <phaidros> rayne: I have no idea about eucalyptus, but you can echo that key to a file, transfer that file onto the target, and cat the content of that file into your commandline
[01:28] <rayne> ummm.. that just might work
[01:28] <phaidros> like 'somecommand --tell-key >> keyfile', scp keyfile to server, 'somecommand --mykey=`cat keyfile`'
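Spelled out, with the hypothetical `--tell-key`/`--mykey` flags standing in for whatever the real tool provides; command substitution with `$(cat ...)` does the "typing":

```shell
# 1. capture the key into a file instead of copying it off the screen
#    (placeholder key text; a real tool might offer: somecommand --tell-key > /tmp/keyfile)
echo "ssh-rsa AAAAB3NzaC1yc2EAAAAexample user@host" > /tmp/keyfile
# 2. move the file to the target machine (not run here):
#    scp /tmp/keyfile user@node:/tmp/keyfile
# 3. substitute the file contents into the command line:
echo "registering key: $(cat /tmp/keyfile)"
```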
[01:30] <rayne> let me see. rbr
[01:30] <rayne> i mean brb
[01:32] <rayne> nope...
[01:32] <rayne> since the screen does the output the echo command will not work
[01:32] <rayne> but that was a good idea
[01:33] <rayne> i must find the file that has the auth keys
[01:34] <phaidros> ack
[01:44] <Trebacz> Okay I screwed up upgrading Ubuntu Server from 8.10 to 9.04. I accidentally merged (experimental) my /boot/grub/menu.lst and lost my raid configuration.
[01:44] <phaidros> irks
[01:44] <Trebacz> When merging is a backup copy of the file kept anywhere?
[01:45] <phaidros> umh, /boot/grub/menu.lst~ or /boot/grub/menu.lst.dpkg.something ?
[01:46] <Trebacz> There is a menu.lst~ but it's the same as menu.lst
[01:47] <phaidros> ouch, then I dunno
[01:52] <rayne> phaidros: i am installing [apt-get] gnome on both servers and going to copy and paste :)
[01:56] <Trebacz> The sad thing is there are four drives and I can't remember the arrangement I had them in. Two were mirrored, but I can't remember the last two...
[02:01] <phaidros> rayne: why not cut and paste in your terminal o.O
[02:02] <phaidros> rayne: gnome on servers .. *sheesh*
[02:02] <phaidros> Trebacz: see your mdadm config
[02:03] <phaidros> Trebacz: in grub you need imho only the device file .. but you can reconstruct that from mdadm config imho
[02:04] <Trebacz> Cool. Can you give me a location or file name. Is it /etc/init.d/mdadm-raid? Sorry new at this.
[02:06] <phaidros> /etc/mdadm/mdadm.conf
[02:07] <phaidros> uhm, but maybe only the entry for the root is wrong in your menu.lst
[02:07] <phaidros> did you overwrite /boot/grub/device.map? if not, just check the root= entry
[02:07] <phaidros> (in menu.lst)
[02:09] <Trebacz> Unfortunately the mdadm.conf is empty and modified at the time of upgrade.
[02:13] <Trebacz> Root entry is there for the primary hard drive, but for none of the mirrors: root (hd0,1)
[02:40] <phaidros> Trebacz: irks
[02:41] <phaidros> no idea how to detect a former software raid
[02:42] <PhotoJim> you should still be able to reassemble the RAID, no?
[02:42] <phaidros> Trebacz: mdadm --assemble --auto might be of help
[02:42] <phaidros> but read the manpage!
[02:42] <PhotoJim> especially if you can remember what devices were in the RAID.
[02:42] <phaidros> afk, gtg.
[02:43] <Trebacz> Will do -thanks for all your help
[03:04] <Trebacz> Judging from what I'm reading I'll be very careful...
[03:07] <twb> Trebacz: do you still have a correct partition table on each disk?
[03:07] <twb> Trebacz: was it an md RAID1 or RAID5?
[03:09] <Trebacz> I'm sure two of the disks were RAID1. The other two may have been striped RAID0.
[03:10] <Trebacz> Not sure how to check the partition table on the drives, but only one is mounted. I'm assuming the other ones are just as they were before they were dropped from the array.
[03:13] <twb> cfdisk /dev/sda
[03:13] <twb> If it's a RAID1, you *can* mount the nodes directly (i.e. independent of the array), but shouldn't.
[03:32] <Trebacz> Using cfdisk all 4 drives are identical and the file system is listed as Linux raid autodetect.
[03:34] <twb> Good.
[03:34] <twb> So you know where the nodes are and how big they are.
[03:34] <twb> You only need to determine how many arrays there are, what level they are (RAID1, RAID0, etc.) and which nodes belong to which arrays.
[03:39] <Trebacz> So I'm pretty sure the first array was RAID1 (partition SDA2 and SDB2) both are bootable.
[03:40] <Trebacz> The second was RAID1 (partition SDA1 and SDB1) both not bootable.
[03:41] <Trebacz> The third was SDC1 and SDD1 RAID0.
[03:51] <twb> Then attempt to assemble those.
[03:54] <Trebacz> Running mdadm --examine --brief --scan --config=partitions gave me:
[03:55] <Trebacz> ARRAY /dev/md0 level=raid1 num-devices=2 UUID=ef93e149:63b87da5:2ae5e72e:eca9ce0b
[03:55] <Trebacz> ARRAY /dev/md1 level=raid5 num-devices=4 UUID=52124232:55c69294:9e58b9e7:324a27fd
[03:55] <Trebacz> ARRAY /dev/md2 level=raid0 num-devices=2 UUID=5f1fde30:e8b82ce2:aea75784:f4e5b623
[03:57] <Trebacz> Do I interpret this as there was one RAID0, one RAID1, and a RAID5 utilizing four partitions?
[04:10] <Trebacz> If I interpret this correctly I may have had the boot partitions RAID5 on all drives (SDA2,SDB2,SDC2,SDD2), RAID1 (SDA1 and SDB1), RAID0 (SDC1 and SDD1)
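The recovery path sketched in this thread, collected in one place; device names are the ones guessed above, so treat the member lists as assumptions and read `man mdadm` before running any of it:

```shell
# Inspect one member's superblock (read-only) to see which array it belonged to:
sudo mdadm --examine /dev/sda1
# List every array mdadm can reconstruct from superblocks on any partition:
sudo mdadm --examine --brief --scan --config=partitions
# Reassemble an array from its (guessed) member devices -- non-destructive,
# but double-check the members first:
sudo mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
# Once everything assembles, refill the currently empty config file:
sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf
```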
[07:18] <sluimers> Hello, I'm trying to set up an e-mail server. I use dovecot and dovecot-postfix. Now when I try to send mail from my gmail account I don't see what happens.
[07:18] <sluimers> errrmmm... nothing happens I mean
[07:18] <sluimers> the e-mail gets sent
[07:19] <sluimers> but where it arrives is a mystery to me
[07:19] <sluimers> I do not get a mail delivery failure though
[07:20] <sluimers> Since I bought a domain, this is my setup:
[07:21] <sluimers> A RECORD
[07:21] <sluimers> @ my.ip.address.numbers
[07:21] <sluimers> CNAME RECORD
[07:21] <sluimers> mail @
[07:21] <sluimers> smtp @
[07:21] <sluimers> email @
[07:22] <sluimers> pop @
[07:22] <sluimers> www @
[07:22] <sluimers> MX (Mail Exchange)
[07:22] <twb> Plonk (flood).
[07:22] <sluimers> @ my.ip.address.numbers
[07:23] <sluimers> plonk?
[07:35] <sluimers> oh wait, the last one is: @ mail.mydomainname.com, not my ip address
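For reference, the records listed above in BIND zone-file form (the IP and names are placeholders). An MX target must be a hostname backed by an A record, never a bare IP address and preferably not a CNAME, which is a common reason mail silently goes nowhere:

```
@      IN  A      203.0.113.10
mail   IN  A      203.0.113.10   ; MX targets should be A records, not CNAMEs
smtp   IN  CNAME  @
www    IN  CNAME  @
@      IN  MX 10  mail.mydomainname.com.
```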
[08:13] <Taco24501> anyone know anything about avantfax
[08:49] <VK7HSE> if he had hung around I could have helped Taco24501 .... :-/
[12:04] <_ruben> damn .. my home file server's system disk started doing the clicking-before-death ritual
[12:05] <ivoks> that's a 'hope you have a backup' song
[12:06] <_ruben> i dont .. which aint such a big deal anyways .. its "just" the system disk ;)
[12:07] <_ruben> i have this odd effect on servers i work with .. whenever i decide to upgrade or replace a server, it dies just before i get the replacement/upgrade in place :p
[12:08] <XiXaQ> ivoks :)
[12:10] <TeLLuS> Just got a similar disk, turning readonly here.. Turned it off now, letting it rest for a day..
[12:10] <XiXaQ> my 2.5" usb disk is singing the heavy metal version of the "hope you have a backup song", and loudly.
[13:22] <aljosa> is it possible to setup eucalyptus on a laptop to create a development environment?
[13:25] <mattt> aljosa: xen/kvm wouldn't be more suitable?
[13:26] <aljosa> mattt: i'm working with amazon services so i need something local to test things before i upload to production. also it would be perfect when i don't have internet access
[14:21] <AndroidData> i've got a question: is it possible to create a chroot jail which has its own users? it's own uids and gids with different access within the jail?
[15:20] <RoAkSoAx> ivoks, heya-!
[15:21] <ivoks> RoAkSoAx: hi
[15:22] <RoAkSoAx> how's it going
[15:22] <ivoks> not so well
[15:22] <ivoks> my ISP canceled my ADSL and phone line
[15:22] <ivoks> without explanation or any form of information
[15:22] <RoAkSoAx> oh that's awful... they do whatever they want whenever they want
[15:22] <ivoks> and i can't talk with them till monday, since it's a holiday and they don't work till then
[15:23] <RoAkSoAx> yeah
[15:23] <ivoks> f...k as...ls
[15:23] <RoAkSoAx> hahaha
[15:23] <RoAkSoAx> that happens here a lot too and they always tell you "Opss we made a mistake..." or "It was a technical error you should have everything back in a few days"
[15:24] <ivoks> i'm fine with errors
[15:25] <ivoks> mistakes happen, and i would be stupid to get angry cause of that
[15:25] <ivoks> it's their attitude that's wrong and will make me turn my back on them
[15:26] <RoAkSoAx> ivoks, that's true... they should notify someone at least before doing something that affects the customer...
[15:26] <ivoks> and they shouldn't do anything like that 5 minutes before a 4 day holiday season
[15:27] <RoAkSoAx> yeah!!
[15:27] <ivoks> now i'm uploading bacula source to ppa over umts :/
[15:27] <ivoks> lame
[15:28] <RoAkSoAx> here things happen like losing internet connectivity after 6pm.. when all the sysadmins from the ISP go home.. so weird huh?? just after they go home...
[15:28] <RoAkSoAx> hahaha
[15:28] <RoAkSoAx> ivoks, btw... would you like to start discussing Linux-HA vs RHCS?
[15:29] <ivoks> well, what we should discuss are features of both
[15:29] <ivoks> i know what rhcs has to offer
[15:29] <ivoks> but i haven't worked with linuxha for years
[15:29] <ivoks> can we set up a clustered filesystem with linuxha?
[15:30] <ivoks> i guess ocfs2 shouldn't be a problem; how about gfs2?
[15:31] <RoAkSoAx> ivoks, yes.. of course
[15:31] <ivoks> gfs2 works with linuxha?
[15:31] <ivoks> are you 100% sure?
[15:32] <ivoks> hm...
[15:32] <ivoks> gfs2 should work with openais
[15:32] <ivoks> and since linuxha now supports openais
[15:32] <ivoks> !! that would be awesome
[15:33] <RoAkSoAx> i don't think there'll be a problem ... i have not used GFS2.. let me browse some documentation...
[15:33] <RoAkSoAx> ivoks, and actually.. Pacemaker support OpenAIS and Linux-HA
[15:34] <ivoks> gfs2, unlike ocfs, depends on cluster infrastructure
[15:34] <ivoks> right... pacemaker
[15:36] <RoAkSoAx> ivoks, should we start a discussion in the ML?
[15:37] <ivoks> http://www.mail-archive.com/pacemaker@clusterlabs.org/msg00074.html
[15:37] <ivoks> GFS2 will eventually work with Pacemaker as well.
[15:37] <ivoks> sure
[15:37] <RoAkSoAx> btw... i do think we should also think about what's gonna happen with heartbeat and pacemaker in the repos, since right now, installing heartbeat + pacemaker at the same time will generate some conflicts
[15:38] <ivoks> packaging will be much easier once we decide on all components
[15:39] <ivoks> just ignore packaging problems now, since we could make them even bigger if we don't get idea right in the start
[15:40] <RoAkSoAx> indeed, but currently, installing both heartbeat and pacemaker there are conflicts.. right now... you're only *allowed* to use pacemaker and openais if you install them from the repos... since the heartbeat version in the repos is 2.1.4 and that version still has the CRM in it
[15:40] <RoAkSoAx> ivoks, indeed. how would you like to proceed then?
[15:42] <ivoks> RoAkSoAx: karmic development just started
[15:42] <ivoks> RoAkSoAx: so, there's plenty of time to fix that...
[15:43] <ivoks> what i see linux-ha is missing is quorum disk
[15:43] <ivoks> that might be a deal breaker
[15:43] <ivoks> how do we restore from split brain situations then?
[15:43] <RoAkSoAx> ivoks, I do think that PaceMaker handles that now
[15:43] <ivoks> oh... that would be great
[15:44] <RoAkSoAx> ivoks, http://pastebin.ubuntu.com/162281/
[15:45] <RoAkSoAx> ivoks, http://hg.clusterlabs.org/pacemaker/dev/file/tip/xml/crm-1.0.dtd
[15:46] <ivoks> hm... that doesn't sound like quorum disk
[15:47] <RoAkSoAx> ivoks, the linux-ha timeline says that they added a membership/quorum subsystem (CCM)
[15:47] <ivoks> quorum disk is a feature where you can take some block device which is accessible from both machines in the cluster
[15:48] <ivoks> when the network link is broken between machines, they still know which one is most recent
[15:48] <ivoks> and they don't kill each other
[15:49] <ivoks> quorum is not remotely the same thing as quorum disk
[15:50] <ivoks> without quorum disk, in two node cluster without network link, they'll just try to kill each other
[15:50] <ivoks> that's quite important in virtualized environments
[15:52] <RoAkSoAx> ivoks, but wouldn't that be resolved with STONITH?
[15:52] <RoAkSoAx> ivoks, or that will be only in case there's not a fencing device?
[15:52] <ivoks> no
[15:52] <ivoks> that's when you have network independent STONITH
[15:53] <ivoks> what happens when both nodes in two-node cluster lose network connection?
[15:53] <ivoks> they think the other node died
[15:53] <ivoks> the problem is that both think the other one died
[15:53] <ivoks> so they both try to stonith the other node
[15:53] <RoAkSoAx> ivoks, yes
[15:53] <ivoks> and you end up with both machines powered down
[15:53] <ivoks> just cause of network failure
[15:54] <RoAkSoAx> ivoks, something similar happened to me while doing my thesis.. but what heartbeat did was to restart heartbeat on both nodes
[15:55] <RoAkSoAx> ivoks, and btw.. those are things that I actually want to experiment on: "what will happen in the event of a network failure"... but because I lack hardware resources i'll not do it just yet
[15:56] <ivoks> i have one client which has network problems in his clustered environment
[15:56] <ivoks> i don't know why and i can't fix it
[15:57] <ivoks> but...
[15:57] <ivoks> that's a two node cluster, and without quorum disk, rhcs just refuses to connect to the cluster
[15:58] <ivoks> with quorum disk, those situations are irrelevant, cause it does everything for me
[15:58] <ivoks> i guess we'll have to test how this works with heartbeat
[15:58] <ivoks> i'll create a testing environment in my office
[15:59] <RoAkSoAx> ivoks, ok awesome.. btw. i've also found this: http://www.linux-ha.org/QuorumServerGuide
[15:59] <RoAkSoAx> unfortunately it will not be shipped in heartbeat 3
[16:01] <ivoks> and it's broken by design
[16:01] <ivoks> if i would need a 3rd machine for quorumd, why not have all three in the cluster and avoid split-brain situations for ever :)
[16:02] <ivoks> split brain can happen only in two-node cluster
[16:02] <ivoks> well, it can happen in n-node cluster, but only when network switch dies, so then you'll have more to worry about than which one is master
[16:03] <ivoks> well, rhcs is open source
[16:03] <ivoks> i don't see why we shouldn't look at it and maybe create something for pacemaker
[16:05] <RoAkSoAx> ivoks, indeed
[16:05] <RoAkSoAx> ivoks, anyways.. instead of using quorum disk, why don't just use multiple communication paths?
[16:06] <RoAkSoAx> it's unlikely that all the communication paths fail...
[16:06] <ivoks> RoAkSoAx: now we are entering into the 'what will sysadmin do...'
[16:06] <ivoks> we should provide something that doesn't require multiple network paths :)
[16:06] <ivoks> quorum disk is ideal for that :)
[16:07] <ivoks> i'm sure some will create fault tolerant environments
[16:07] <ivoks> and that's great
[16:07] <ivoks> but most will not :)
[16:09] <RoAkSoAx> ivoks, i see... btw... i've found this answer: "That heartbeat does not need a quorum disk is actually a _feature_, you know. " I don't know how much of this is true
[16:09] <ivoks> hehe
[16:10] <ivoks> there are cluster solutions, like microsoft which *require* quorum disk
[16:10] <ivoks> there are cluster solutions, like heartbeat which don't have quorum disk
[16:10] <ivoks> and there are cluster solutions, like rhcs that *offer*, but do not require quorum disk
[16:10] <ivoks> i'd argue rhcs has the best approach on this one :)
[16:11] <RoAkSoAx> ivoks, yes in case we're handling those scenarios you've mentioned above... and yes it seems that heartbeat/pacemaker don't have quorum disk support...
[16:12] <ivoks> well, we will workout something ;)
[16:12] <RoAkSoAx> ivoks, btw.. you have plenty of experience in HA Clustering :)
[16:12] <ivoks> hehe :)
[16:13] <ivoks> well, i have couple of clusters in production, so i had to investigate all options
[16:14] <ivoks> RoAkSoAx: i don't think we should give up on pacemaker cause of quorum disk
[16:14] <ivoks> i'm all for moving to pacemaker
[16:14] <RoAkSoAx> ivoks, I don't either!! and yes.. my preference is still on heartbeat/pacemaker
[16:14] <ivoks> we just have to know all disadvantages
[16:14] <RoAkSoAx> indeed
[16:18] <RoAkSoAx> ivoks, btw.. what's what you're going to talk about this subject in the UDS?
[16:18] <ivoks> mail server, maybe clustering
[16:19] <ivoks> and maybe even more :)
[16:20] <ivoks> the biggest problem we have with clustering in ubuntu is that we practically have only one supported architecture
[16:20] <ivoks> and that's rhcs
[16:20] <ivoks> now, there aren't many ubuntu rhcs users and that's visible in bugs
[16:21] <ivoks> basically, only two people appear there and none of us knows rhcs well enough to solve all situations
[16:21] <ivoks> so, giving up on some features wouldn't be that bad, if we would have you and maybe more people to work on pacemaker in ubuntu
[16:22] <ivoks> not that long ago, we had one ubuntu-server member that was very good with rhcs
[16:24] <RoAkSoAx> ivoks, yeah... i see the problem.. and yes i've seen many more people working with linux-ha / pacemaker based clusters rather than rhcs
[16:25] <ivoks> i'm sure there is
[16:25] <ivoks> but you know, the guy that took care of rhcs in ubuntu is now one of the leading rhcs developers... so, at that time that was a great deal :)
[16:26] <RoAkSoAx> ivoks, indeed.. but having rhcs without someone to maintain it is useless
[16:26] <ivoks> i agree
[16:26] <ivoks> that's why i wanted to propose move to linux-ha/pacemaker
[16:27] <RoAkSoAx> ivoks, you should do that.. since i'm not attending the UDS
[16:27] <ivoks> i will
[16:27] <ivoks> you could also attend discussion
[16:27] <ivoks> over irc or gobby
[16:27] <RoAkSoAx> i will be there to support you :)
[16:28] <ivoks> :)
[16:28] <RoAkSoAx> ivoks, btw... you are going to be my motu mentor right ?
[16:28] <ivoks> i just have to figure out all the details of pacemaker before proposing that :D
[16:28] <ivoks> RoAkSoAx: yes
[16:28] <RoAkSoAx> ivoks, awesome!! i really look forward to starting work on it
[16:29] <ivoks> me too
[16:29] <ivoks> i have a couple of production clusters and it scares me that they're on rhcs with a blurry future in ubuntu :)
[16:30] <RoAkSoAx> ivoks, unless someone with a lot of experience in rhcs would like to maintain it
[16:30] <RoAkSoAx> which i do think is very unlikely
[16:31] <ivoks> we will see :)
[16:31] <ivoks> does pacemaker support lots of stonith devices?
[16:32] <RoAkSoAx> i do think that fully supporting one cluster stack is the best thing to do... and having the other one in the repos
[16:32] <RoAkSoAx> s/and/while
[16:32] <ivoks> i agree
[16:33] <ivoks> good question on the ml; hopefully we will get some feedback
[16:34] <RoAkSoAx> ivoks, yeah... we'll need anyone who uses linux-ha/pacemaker and rhcs to give us feedback.. since we could learn a lot from other people before having to start testing ourselves
[16:39] <ivoks> hm
[16:39] <ivoks> good news
[16:39] <ivoks> http://lists.fedoraproject.org/pipermail/fedora-server-list/2009-January/000071.html
[16:39] <ivoks> "all our stacks (Novell - pacemaker, Oracle - ocfs2-tools and Red Hat cluster) will converge into one, killing the whole decision problem at the root."
[16:39] <ivoks> that's why pacemaker supports openais
[16:39] <ivoks> and rhcs had some big changes in 3.x series
[16:40] <RoAkSoAx> let's see
[16:41] <ivoks> Fabio was the guy i was talking about
[16:41] <ivoks> s/was/is/
[16:43] <RoAkSoAx> ivoks, that's awesome!
[16:44] <RoAkSoAx> so now we'll have heartbeat/pacemaker and pacemaker/openais that will actually become rhcs right?
[16:45] <ivoks> well, they should all merge into one, i guess
[16:46] <jbernard> RoAkSoAx, ivoks: i'd be happy to help out if you need it, i work with linux in enterprise environments and HA solutions fairly regularly, is there a roadmap or wiki page i can use to come up to speed?
[16:46] <ivoks> we should include ubuntu into this process
[16:46] <ivoks> jbernard: we are just starting :)
[16:46] <ivoks> jbernard: there's ubuntu-ha team
[16:47] <RoAkSoAx> jbernard, https://launchpad.net/~ubuntu-ha
[16:47] <jbernard> ill check that out, what are the immediate goals?
[16:48] <ivoks> create ubuntu cluster stack
[16:48] <jbernard> we'll use rhcs as the base?
[16:48] <ivoks> for start, we have to decide which architecture to use
[16:49] <ivoks> jbernard: rhcs or pacemaker
[16:50] <ivoks> jbernard: what do you use?
[16:50] <jbernard> ivoks: architecture?
[16:50] <ivoks> red hat cluster suite or linux-ha?
[16:51] <jbernard> ivoks: i have more experience with linux-ha
[16:51] <RoAkSoAx> jbernard, we are leaning towards it too
[16:51] <RoAkSoAx> btw i've just created the wikipage: https://wiki.ubuntu.com/UbuntuHighAvailabilityTeam
[16:51] <RoAkSoAx> we can start putting some ideas into it
[16:52] <RoAkSoAx> and creating a roadmap
[16:52] <jbernard> i think that's a great idea
[16:52] <ivoks> we should also be aware of the fact that there's a process of merging them all into one
[16:52] <ivoks> and we should, as ubuntu-ha, get involved in that
[16:52] <RoAkSoAx> indeed
[16:52] <jbernard> ivoks: you mean an existing effort to merge them?
[16:52] <ivoks> jbernard: yes
[16:53] <RoAkSoAx> jbernard, http://lists.fedoraproject.org/pipermail/fedora-server-list/2009-January/000071.html take a look at the end of the post
[16:54] <ivoks> the fact that pacemaker supports openais is one step
[16:54] <ivoks> you can already have parts of linux-ha (pacemaker), rhcs (openais) and ocfs2 (ocfs2-tools) working as one stack
[16:55] <jbernard> ahh, so we could become involved in that collaborative effort
[16:55] <ivoks> right :)
[16:58] <RoAkSoAx> what we'll need to find out is if they are considering linux-ha (heartbeat specifically) into this merging process
[16:59] <jbernard> i would think we would have to have heartbeat, or some replacement to provide a complete stack
[16:59] <ivoks> anyway... going offline
[16:59] <ivoks> take care
[17:00] <RoAkSoAx> see ya ivoks
[17:00] <RoAkSoAx> jbernard, yeah... i do think that this merge process is only considering pacemaker  with openais
[17:00] <RoAkSoAx> and not heartbeat
[17:05] <jbernard> ill go through the list threads and see what i can gather
[17:05] <jbernard> and put it up on the wiki page
[17:06] <RoAkSoAx> awesome
[17:07] <RoAkSoAx> well i'm off too, see ya jbernard
[17:11] <oruwork> i followed this to install oracle xe, not sure how to uninstall it http://www.cyberciti.biz/faq/howto-install-linux-oracle-database-xe-server/
[17:12] <a|wen> oruwork: it looks to be a package you installed ... sudo aptitude remove oracle-xe should do the trick
[17:13] <oruwork> a|wen-> yup that worked, how can i undo the swap space ?
[17:15] <a|wen> oruwork: instead of swapon you use swapoff on the file
[17:15] <a|wen> if that goes well, you should be able to delete the file
[17:15] <oruwork> a|wen-> nope:( says invalid file
[17:15] <oruwork> swapoff didnt work
[17:15] <oruwork> swapoff: /swpfs1: Invalid argument
[17:16] <oruwork> oh wait, system was restarted so might be able to just delete it ?
[17:17] <a|wen> oruwork: does "swapon -s" list the file?
[17:17] <a|wen> oruwork: if not you can just delete it
[17:17] <oruwork> invalid option -- 's'
[17:18] <a|wen> oruwork: you need a dash before s
[17:18] <oruwork> yup
[17:18] <oruwork> only lists the real swap
[17:18] <a|wen> oruwork: then let go of the file
[17:19] <oruwork> nice ....
[17:19] <oruwork> thanx man
[17:19] <a|wen> :)
[17:36] <ClaytonG> Hi, I'm new to using ubuntu (previously using rhel4) and just set up a new server.  I'm attempting to integrate it into my backup software which requires inetd (or xinetd) to be running on the server to be backed up.  Any recommendations for ubuntu 9.04?
[17:38] <a|wen> ClaytonG: you probably want to start by installing "xinetd" in that case
[17:39] <ClaytonG> Thanks for the tip :)
[18:17] <Hancok> i just purchased a domain on godaddy.com and set the name servers to n1.atspace.com, a hosting site that i am using with a free hosting account to test. now i go back to godaddy.com and it has disabled the option of 'total dns control' and says 'site hosted elsewhere'. i want to make an 'A' record for irc.mydomain.com as i am planning to run an ircd. any help?
[18:27] <PhotoJim> Hancok: unfortunately that's really an issue with godaddy, so it's out of the scope of this channel.  I host my own DNS so I can't offer any advice.
[18:29] <a|wen> Hancok: as PhotoJim says it is a bit out of scope ... but as you now use atspace.com for your DNS hosting, that is where you need to get it changed
[19:16] <IvanCostaJr> Hi, guys.
[19:17] <a|wen> hello there
[19:18] <IvanCostaJr> I'm having a permission problem with /var/www/webpages. Does anyone know about apache2 configuration?
[19:18] <a|wen> IvanCostaJr: just try to ask your real question ... then people can determine if they know the answer
[19:21] <IvanCostaJr> Thanks, a|wen! How can I give access to webpages (located in "/var/www/name_of_webpage") with apache2?
[19:24] <a|wen> IvanCostaJr: with the default installation it should just be http://localhost/name_of_webpage
[19:29] <IvanCostaJr> Yes, I know that... But when I try to access it, Apache responds access denied. And in /var/log/apache2/error.log it says "(13) Permission denied: access to /name_of_web/index.php denied"
[19:31] <IvanCostaJr> a|wen: I need to have permission to read and write to those websites
[19:32] <a|wen> IvanCostaJr: what is owner/permissions of that file?
[19:35] <Hancok> http://img206.imageshack.us/img206/19/29972958.jpg  where do i put irc.mydomain.com (that points to the dyndns address given to me, e.g. eg.dyndns.com) that makes irc.eg.dyndns.com
[19:35] <IvanCostaJr> The owner is root.root and the permission is 755.
[19:38] <IvanCostaJr> a|wen: I looked now in /etc/group and there is a "www-data". Ubuntu has a user called www-data.
[19:39] <IvanCostaJr> What's the Apache's user?
[19:39] <a|wen> IvanCostaJr: it should be okay to access the file with those permissions ... try checking your 000-default permissions
[19:41] <a|wen> IvanCostaJr: you probably have an allow/deny rule somewhere that denies the access
[19:41] <IvanCostaJr> I'm going to check now.
[19:44] <IvanCostaJr> a|wen: do you know if the apache server uses a real "user" to access and manipulate the pages?
[19:44] <a|wen> IvanCostaJr: the apache process runs as www-data
[19:46] <IvanCostaJr> Ok!! "It works!"
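The fix that usually lies behind this symptom, demonstrated on a throwaway directory; the real path would be /var/www/name_of_webpage, and the chown line (shown commented) needs root:

```shell
# Stand-in docroot:
mkdir -p /tmp/www_demo/site
echo '<?php phpinfo(); ?>' > /tmp/www_demo/site/index.php
# u=rwX,go=rX: owner read/write, everyone else read; the capital X sets the
# execute (traverse) bit on directories only, so apache can walk the tree:
chmod -R u=rwX,go=rX /tmp/www_demo/site
# On the real server, hand ownership to apache's user as well:
#   sudo chown -R www-data:www-data /var/www/name_of_webpage
```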
[19:47] <a|wen> :P :)
[19:59] <Vog-work> Ok I got a funny sysadmin problem.... I have a large (5 gb) compressed file and when I attempt to uncompress it (to 26 gb) I get a file size limit exceeded error. I did a ulimit -a and I basically have a file size limit that is unlimited.  Am I running into some other limitation with gunzip?
[20:01] <a|wen> Vog-work: which file-system do you try to uncompress it on?
[20:01] <Vog-work> ext3
[20:02] <Vog-work> sorry linux lvm
[20:02] <a|wen> Vog-work: and what is your block-size?
[20:03] <Vog-work> I have 144g  open
[20:04] <a|wen> Vog-work: sudo tune2fs -l /dev/<whatever>
[20:04] <a|wen> and check "block size"
[20:05] <Vog-work> 1024
[20:05] <a|wen> Vog-work: then your file-size limit is 16G
[20:06] <Vog-work> Method for increasing that?
[20:06] <oruwork> how to unzip a .zip file ?
[20:07] <a|wen> Vog-work: i'm not 110% sure, but i think it is pretty static after the filesystem is created
[20:08] <Vog-work> shit....
[20:08] <Vog-work> ok... looks like I'm going to ftp to another box, uncompress, and send it back uncompressed...
[20:09] <Vog-work> thx for the help alwen
[20:09] <a|wen> Vog-work: well if one of the files is more than 16G (it looks like it is) then you're in trouble anyway
[20:09] <a|wen> oruwork: install "unzip"
[20:10] <Vog-work> Yeah, I was sent this file from a remote site. I'm going to get them to split up the directories..
[20:11] <a|wen> it is quite a large file in any case
[20:12] <Vog-work> Yeah, I think it is a database.
[20:12] <a|wen> Vog-work: if you get the chance to make a new ext3 system with 4kb blocks, the limit will increase to 2TB
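Where the 16G figure comes from: ext3 addresses a file through 12 direct pointers plus single, double and triple indirect blocks, each indirect block holding block_size/4 four-byte pointers. A quick check of the arithmetic (with 4 KiB blocks the same formula gives ~4 TiB, but other kernel limits cap ext3 at 2 TiB there):

```shell
bs=1024                                            # block size from tune2fs -l
ptrs=$((bs / 4))                                   # pointers per indirect block
blocks=$((12 + ptrs + ptrs*ptrs + ptrs*ptrs*ptrs))
echo "$((blocks * bs / 1024 / 1024 / 1024)) GiB"   # largest addressable file
```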
[20:15] <Vog-work> a|wen: I think it was set up that way due to the RAID system it was on. I'm not the primary admin on this system. Just filling in for someone on vacation.
[20:17] <a|wen> Vog-work: the "good" position ... make things work; but please don't change anything
[20:18] <Vog-work> Yep
[20:22] <oruwork> what is the default chmod for all files ?
[20:26] <oruwork> how can i make a directory and all files in it writable ?
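Neither question got an answer in-channel, so a hedged sketch: there is no single default mode; new files start at 666 (777 for directories) minus the process umask, typically 022, which yields 644 and 755. Making a whole tree writable is a recursive chmod:

```shell
umask                         # usually 0022 -> new files 644, new dirs 755
mkdir -p /tmp/share_demo
touch /tmp/share_demo/a.txt
# g+rwX adds group read/write everywhere, and execute (search) permission on
# directories only -- the capital X never makes a plain file executable:
chmod -R g+rwX /tmp/share_demo
stat -c '%A %n' /tmp/share_demo/a.txt
```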
[20:38] <henriquelm> hello there
[20:57] <oruwork> is there a way to find out what group apache2 server belongs to ?
[21:13] <genii> www-data
[21:34] <blime> http://pastebin.com/d2f58693e  <-- 30 line configuration from name-based virtual host configuration file in sites-available/
[21:34] <blime> Was wondering if there are any security issues in the configuration.
[21:47] <andol> What do we think about bug #370445? Is it a bug or not?
[21:52] <genii> If I had some other mta I'd be slightly (if not more) upset if postfix got installed as some dep
[21:54] <andol> genii: No, no, the recommendation from bsd-mailx is postfix, or other mta. In other words it won't install postfix if you already have another mta installed.
[21:54] <andol> (At least it shouldn't. If it does, then that definetly is a bug)
[21:55] <a|wen> andol: it doesn't (i have the exim mta-thingy and that makes it happy enough)
[21:59] <netdur> perhaps i'm doing something stupid, I have installed php5 and apache2, I browse to a .php but instead of the normal "executing php script", firefox tries to download the page (which is the php file with source)
[22:00] <a|wen> netdur: you probably want libapache2-mod-php5
[22:01] <netdur> a|wen: it is installed
[22:02] <andol> netdur: You might have to restart apache2 for those changes to take effect. Also, firefox might cache some responses, so restarting your web browser never hurts.
[22:02] <a|wen> restart (or at least reload) is needed
[22:06] <netdur> thanks guys
[22:07] <andol> netdur: worked?
[22:08] <netdur> andol: yes, restarted apache and worked just fine
[22:11] <andol> netdur: Great. Sometimes apt, depending on the packages, will restart apache automatically if considered necessary, but not always.
[22:12] <andol> netdur: In other words, if possible, a restart or a reload of apache never hurts when you've done some changes to the system.
[22:26] <netdur> andol: thank you
[22:45] <simplexio> May  2 00:46:44 4tune smbd[17803]: Mount of private directory return code [256]
[22:45] <simplexio> and samba mount fails
[23:35] <TimReichhart> hey guys I need to know how to repoint my domain name for my email server: do I point it to my ip address or to this: mail.kustomjs.com
[23:39] <alice583> is ubuntu server appropriate for high load web servers or is it not secure enough for that?
[23:40] <TimReichhart> join #postfix
[23:45] <andol> alice583: Well, do you consider wikipedia a high load web server?
[23:45] <simplexio> like any server, it's as good as its admin is
[23:46] <alice583> andol, yes
[23:46] <alice583> andol, wikipedia runs on ubuntu server?
[23:46] <andol> alice583: http://arstechnica.com/open-source/news/2008/10/wikipedia-adopts-ubuntu-for-its-server-infrastructure.ars
[23:48] <andol> Besides that, I'd tend to somewhat agree with simplexio.
[23:49] <alice583> yeah. makes sense. except that I heard that ubuntu tends to have many more exploitable bugs than other servers, but I guess this may have just been mindless bashing by someone with no clue.
[23:52] <ScottK> alice583: It's mindless bashing.
[23:53] <andol> alice583: Well, in that area I wouldn't say there is too much difference between different Linux distributions. After all, it is basically the same software. Following a bunch of announce lists I would say security fixes get committed about the same, on average.
[23:53] <andol> alice583: Of course, if you run the latest Ubuntu or Fedora you won't get as well tested software as if you run an older LTS version of Ubuntu, or CentOS/RHEL for that matter.
[23:54] <ScottK> Ubuntu has gone to some trouble to compile with hardening options that make it more difficult to exploit some classes of issue.
[23:54] <ScottK> issue/issues
[23:54] <alice583> so, you would recommend using an older LTS version like 8.04 instead of 9.04?
[23:55] <andol> alice583: Unless there is some specific feature/version you need in 8.10 or 9.04, yes.
[23:55] <simplexio> and actually what means something is not how many bugs/exploits a system has, it's how prepared you are for them and what you do when shit hits the fan, as it does sooner or later
[23:57] <andol> simplexio: You have to stop saying things I agree completely with :-)
[23:59] <alice583> "as it does sooner or later"... you'd think that as long as wikipedia or twitter or whatever stays online it has not been exploited (of course, a naive assumption)... do such services get hacked from time to time (in the sense of, someone malicious gets root access), and the only reason they're still alive is because they make frequent backups and hash passwords (instead of storing them in plain text), and similar things?