[00:03] <Elite> austin@ubuntu-server:/etc/cups$ /etc/init.d/cupsys restart
[00:03] <Elite>  * Restarting Common Unix Printing System: cupsd                                                         start-stop-daemon: warning: failed to kill 8851: Operation not permitted
[00:03] <Elite> cupsd: Child exited with status 1!
[00:04] <genii> Elite: Try using sudo
[00:05] <genii> eg: sudo /etc/init.d/cupsys restart
[00:06] <Elite> There we go!
[00:06] <Elite> jmedina, now that command shows: austin@ubuntu-server:/etc/cups$ sudo netstat -plutn | grep cups
[00:06] <Elite> tcp        0      0 192.168.0.100:631       0.0.0.0:*               LISTEN      8884/cupsd
[00:06] <Elite> udp        0      0 0.0.0.0:631             0.0.0.0:*                           8884/cupsd
[00:06] <Elite> When I try to go to the ip it says 403 forbidden
[00:07] <jmedina> mmm
[00:07] <jmedina> you never told me about that error when I asked for the error
[00:08] <jmedina> sometimes when you don't give enough info it is harder to solve simple problems
[00:08] <Elite> What error?
[00:08] <jmedina> 403 forbidden
[00:08] <Elite> the 403?
[00:08] <jmedina> or where did you get it?
[00:09] <Elite> That is the first time I got that error, and I got it when I got to http://192.168.0.100:631
[00:09] <jmedina> you need to change access restrictions in cupsd.conf
[00:10] <jmedina> sorry I have to go
[00:27] <Elite> genii you still here?
[00:31] <genii> Elite: A bit, yes. I'm not overly familiar with cups errors however, and so not of much help on that subject
[00:32] <Elite> Do you know what permissions stuff he was talking about?
[00:35] <genii> Elite: 403 forbidden is a generic webserver message which means you, or the user it thinks you are, are not allowed to see the files. This normally happens when people put files in the webserver dir which don't belong to the same user the webserver runs as, for instance. In cupsys' case, the user may have to be specified in the cupsd.conf file which jmedina mentioned
[00:36] <genii> Bleh, typos
[00:37] <Elite> I know what the error is but I can't see any place to set a user name
[00:38] <Elite> How do I get out of a man
[00:38] <genii> man cupsd.conf  shows quite a lot of name settings,auth settings, etc
[00:38] <genii> Elite: q
[00:39] <Elite> I was there and can't seem to find anything there, it's too confusing
[00:42] <genii> Elite: Mine has what seems to be relevant entries of:  SystemGroup lpadmin    (my user is a member of this group)    and:  DefaultAuthType Basic
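[Editor's note: remote access to the CUPS web interface is controlled by Location blocks in /etc/cups/cupsd.conf alongside the entries genii mentions. A minimal sketch follows; the 192.168.0.* subnet is an assumption matching the addresses in this conversation, not taken from genii's paste.]

```
# /etc/cups/cupsd.conf -- sketch of the access-related directives;
# the 192.168.0.* subnet is an example
Listen 192.168.0.100:631

SystemGroup lpadmin
DefaultAuthType Basic

# A 403 from the web interface usually means these Location blocks
# deny the requesting host:
<Location />
  Order allow,deny
  Allow localhost
  Allow from 192.168.0.*
</Location>

<Location /admin>
  Order allow,deny
  Allow localhost
  Allow from 192.168.0.*
</Location>
```

After editing, restart with `sudo /etc/init.d/cupsys restart` as above.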
[00:43] <Elite> Whats after that line?
[00:44] <genii> Elite: Wait, I'll just pastebin the whole thing so you can see
[00:44] <Elite> ok
[00:45] <genii> Elite: http://pastebin.com/f3090af06
[00:46] <Elite> thx
[00:48] <Elite> Do you share your printer?
[00:48] <genii> Elite: Nope, it's a USB printer connected directly to my laptop
[00:51] <Elite> God damnit! mine looks literally just like that and it doesn't work
[00:52] <genii> Elite: Hm. You are putting what url in?   192.168.0.100:631    or so?
[00:53] <Elite> yea
[00:53] <Elite> I just keep getting a 403 error
[00:54] <genii> try:   127.0.0.1:631
[00:54] <Elite> I can't
[00:55] <genii> Why not?
[00:55] <Elite> I'm not on that machine
[00:56] <Elite> Anything I do on that machine is done by ssh
[00:56] <genii> Elite: Ah, ok. If you have links/elinks installed on there, you can do it over ssh
[00:57] <Elite> Whats that?
[00:58] <genii> text mode web browser
[00:58] <genii> Useful to have on CLI machines
[01:00] <Elite> how do I get out of vi
[01:00] <genii> Elite:   :q   or :q!   to not write changes
[01:01] <Elite> not working
[01:02] <genii> Elite: eg:     links http://127.0.0.1:631                  doesn't work?
[01:02] <genii> (after of course sudo apt-get install elinks if it was not installed)
[01:02] <Elite> No the vi exit I mean
[01:03] <genii> Elite: Hit ESC a few times then try again the:    :q!
[01:05] <jmedina> where can I preview Ubuntu Server guide for jaunty
[01:05] <jmedina> ?
[01:05] <jmedina> there is no link in help.ubuntu.com
[01:05] <Elite> I am on dial-up; is that application on the dvd?
[01:06] <genii> Elite: elinks should be on the cd actually
[01:06] <Elite> how do I make it come from there?
[01:06] <BrunoXLambert> genii, w3m is installed by default for a text web browser
[01:06] <genii> BrunoXLambert: Ah, thanks, did not know
[01:07] <genii> Elite: Apparently you have already a browser installed
[01:07] <Elite> Yea opening now
[01:07] <BrunoXLambert> links doesn't even seem to be in main
[01:07] <BrunoXLambert> elinks is
[01:08] <genii> BrunoXLambert: links is a symlink to elinks
[01:08] <BrunoXLambert> but not installed by default
[01:08] <genii> (when it gets installed)
[01:08] <BrunoXLambert> yeah
[01:08] <BrunoXLambert> the real links is in universe
[01:09] <Elite> w3m says it can't open http://127.0.0.1:631 or http://localhost:631
[01:10] <BrunoXLambert> netstat -taunp | grep 631
[01:10] <BrunoXLambert> ps faux | grep cups
[01:11] <Elite> tcp        0      0 192.168.0.100:631       0.0.0.0:*               LISTEN      8983/cupsd
[01:11] <Elite> udp        0      0 0.0.0.0:631             0.0.0.0:*                           8983/cupsd
[01:11] <Elite> austin@ubuntu-server:/etc/cups$ ps faux | grep cups
[01:11] <Elite> austin    9013  0.0  0.0   3004   752 pts/1    S+   20:12   0:00              \_ grep cups
[01:11] <Elite> root      8983  0.0  0.2   5988  2336 ?        Ss   19:50   0:00 /usr/sbin/cupsd
[01:11] <genii> Elite: Please use pastebin when a lot of lines
[01:12] <Elite> It was 2 parts, or supposed to be
[01:12] <genii> Elite: Try the Listen address you likely specified, which would be the 192.168.0.100:631   or so
[01:14] <Elite> I get a 403 still
[01:14] <BrunoXLambert> 403
[01:14] <BrunoXLambert> hmmm
[01:14] <BrunoXLambert> why would the permission be bad
[01:15] <genii> Elite: Unfortunately as I already said, I'm not a Cups expert
[01:15] <Elite> I know
[01:15] <jmedina> read the logs!!!!
[01:16] <jmedina> dont guess
[01:16] <Elite> Where are they?
[01:16] <BrunoXLambert> /var/log
[01:16] <genii> You likely want the apache one
[01:17] <genii> or /var/log/cups/error_log
[01:17] <Elite> I don't think I have apache installed
[01:17] <Elite> Nope
[01:18] <genii> Can you even get a "403" if no webserver backend?
[01:18] <Elite> Yes
[01:18] <Elite> Like I said I can use SWAT
[01:28] <Alysum> hello does anyone use apple's Terminal here and know how to alt backspace to delete the previous word like on PC keyboards?
[01:30] <Elite> I use it and alt on a mac keyboard is the button right next to the space on the left side
[01:33] <Deeps> ctrl+w?
[01:34] <owh> Salutations. In fetchmailrc I need to specify many accounts. How do I specify default options like ssl sslchk and sslcertpath for the accounts. At the moment, it appears that I need to specify this for each user, rather than for the server, which makes no sense to me.
[01:35] <owh> Until now, I've only ever needed one account in fetchmailrc - multiple accounts appears to be a whole different kettle of fish :(
[01:39] <owh> In case anyone's wondering, it turns out that you create a "defaults" "server" with the options. Very intuitive :|
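[Editor's note: for anyone searching later, the "defaults" entry owh found is a poll entry with the special name `defaults`; options listed there apply to every later poll entry. A sketch follows; the hostnames, users, and passwords are made-up examples, and note the actual option name is sslcertck.]

```
# ~/.fetchmailrc -- shared options go in the special "defaults" entry;
# server names, users, and passwords below are hypothetical
defaults
    proto imap
    ssl
    sslcertck
    sslcertpath /etc/ssl/certs

poll mail.example.com
    user "alice" password "secret1"

poll mail.example.org
    user "bob" password "secret2"
```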
[01:40] <Alysum> Elite: it doesn't work, it's supposed to delete the WHOLE word until it meets a space backwards
[04:06] <twitzel> Hi all
[04:06] <twitzel> where can I download a "jaunty" iso image ?
[04:07] <jmarsden> http://cdimage.ubuntu.com/releases/jaunty/alpha-6/
[04:08] <twitzel> How alpha is it ? Is it minor issues, or does it have serious problems ?
[04:09] <jmarsden> It is an alpha release... so I suppose it is 100% alpha?  If you can't handle that, wait for the real release :)  How serious its problems are depends on what you do with it...
[04:10] <twitzel> I only want to run an NFS server with it. My current problem is that multipath-tools is all messed up in intrepid, but all the problems I have are apparently fixed in "jaunty". Nobody seems to want to backport it to intrepid.
[04:11] <twitzel> So it's either "jaunty" now, or RH/centos instead. I'd like to keep everything homogeneous, i.e. ubuntu, so I'd like to give it a shot. But intrepid is basically broken
[04:12] <jmarsden> Intrepid works fine here, if you think it is broken, did you file a bug?  Please supply bug # and I'll look at the bug report...
[04:13] <twitzel>  Bug #338363
[04:14] <twitzel> I think most of it is probably udev script related
[04:15] <jmarsden> Doesn't the workaround stated in the bug report :  ENV{DM_TABLE_LIVE}!="1", GOTO="kpartx_end"   work?
[04:15] <twitzel> No
[04:16] <twitzel> Basically what I have done now, is removed all dm related udev scripts, which makes it at least generate /dev/dm-* by default rule and then call kpartx in a boot script
[04:16] <jmarsden> Then you should add a comment to the bug saying that the workaround fails for you, and what happens when you try it. Also, you could consider just grabbing the sources for the newer version of multipath-tools from Jaunty and rebuilding them on Intrepid.
[04:17] <twitzel> From the udev debug output, the kpartx rule is NEVER called
[04:17] <twitzel> (that was before I messed with it)
[04:17] <jmarsden> Or you can take the risk and run an alpha release... but if your NFS server will go into production... I wouldn't do that!
[04:17] <twitzel> How bad can it be ?
[04:18] <jmarsden> I'd grab the Jaunty sources for multipath-tools and build the packages for Intrepid...
[04:18] <twitzel> Sounds like a good plan
[04:19] <jmarsden> http://www.ubuntu.com/testing/jaunty/alpha6  says "This is still an alpha release. Do not install it on production machines."  I'd do as it says...
[04:19] <twitzel> We don't do HA stuff or webserving
[04:19] <twitzel> Unless it crashes every day or loses data, it's okay
[04:20] <twitzel> I can try jaunty on one machine and try to backport the stuff to intrepid on the others
[04:20] <jmarsden> OK.  There are no guarantees of either of those things being true for Jaunty Alpha6 :)  I'd be surprised if it did that to you, but... it might.
[04:20] <twitzel> we have several of these HW configs
[04:22] <twitzel> One last question about that. If I install the alpha, can it be upgraded to release without complete reinstall ?
[04:22] <ScottK> twitzel: Yes.
[04:22] <ScottK> twitzel: Actually if you install and upgrade now you'll have essentially the beta.
[04:23] <twitzel> awesome
[04:23] <twitzel> I'll email Taiwan and have them burn and insert the CD
[04:25] <twitzel> I wish I knew more about udev
[04:25] <twitzel> then I could contribute something. But right now it's a bit above my head what's going on
[05:04] <twitzel> Uuuh
[05:04] <twitzel> I just botched my kpartx rule such that it calls kpartx on all dm-* devices that come in
[05:05] <twitzel> Now everything works as desired; of course, all other device-mapper functions one could have are shot now
[06:41] <whalesalad> Hey guys I turned my eth0 interface off a little while ago, and just turned it back on... but it's not working at all :/
[06:41] <whalesalad> using ifconfig eth0 up/down
[06:47] <p_quarles> whalesalad: sudo /etc/init.d/networking restart
[06:47] <p_quarles> or sudo dhclient eth0
[06:50] <simplexio> or ifup/ifdown  eth0
[07:16] <n006> Yo!
[07:16] <n006> Anyone alive? :)
[07:16] <rst-uanic> yep
[07:16] <n006> Right.
[07:16] <rst-uanic> this channel has usually been English-speaking, actually :)
[07:16] <n006> Really need help. Though I'm probably asking for the impossible.
[07:16] <n006> Oops, sorry, clicked the wrong channel. xD
[07:16] <n006> sorry
[07:17] <rst-uanic> strange :)
[07:39] <soren> whalesalad: "ifconfig eth0 down" deconfigures the interface, thus bringing down the routes through that interface. "ifconfig eth0 up" only brings the interface back up, not the corresponding routes. So: use "ifdown eth0" and "ifup eth0" when you want to deconfigure/configure eth0.
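[Editor's note: the distinction comes down to the fact that ifup re-runs the interface's stanza in /etc/network/interfaces, while ifconfig only toggles the link state. A sketch; the addresses are examples matching this conversation, not whalesalad's actual config.]

```
# /etc/network/interfaces -- example static stanza
auto eth0
iface eth0 inet static
    address 192.168.0.100
    netmask 255.255.255.0
    gateway 192.168.0.1

# "ifdown eth0 && ifup eth0" re-runs this stanza, restoring the default
# route via the gateway line; "ifconfig eth0 up" only raises the link
# and restores nothing.
```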
[07:39] <owh> I'm on the hunt for opinion, so please don't be shy. I've built an electronic ticket system. It emails out tickets to events. Invariably people provide incorrect emails, make typos, have quota issues and the like. I need to deal with the "backscatter". I was thinking of using dbacl to pre-filter this and then parse the individual messages. Are there other/better ways of doing this?
[07:39] <whalesalad> Thanks for all the help guys
[07:40] <owh> I'm asking here, not from a programming perspective, but because there is lots of server/enterprise experience in the room and I'm sure that u-s ships with all manner of tools I know nothing about :)
[07:42] <soren> Why the filtering?
[07:43] <soren> Do you intend to use the return-path address for other purposes?
[07:44] <owh> The filtering is to make sure that an allocated ticket actually arrives. If it never gets to the recipient, it's never used.
[07:45] <owh> It also means that the email address is faulty, so we cannot send a reminder later.
[07:46] <_ruben> owh: you can only prevent backscatter on your own servers, not those of others .. and checking whether an email address exists or not is nearly impossible
[07:47] <owh> I'm just wondering, perhaps I don't need to do any of this. If a message comes back for *any* reason, it's borked.
[07:47] <_ruben> well .. a bounce analyzer is another, sometimes useful, system
[07:47] <_ruben> programmatically analyzing a bounce is quite an endeavour due to the non-standard formats being used
[07:48] <owh> _ruben: Sure, but I'm beginning to wonder if I need to do this to actually figure out if the message got there. I suppose I need to ignore the "Delayed" errors, but the rest...
[07:48] <_ruben> over-quota: tempfail .. non-existent domain: could be perm or tempfail .. etc .. rather difficult to handle properly
[07:49] <owh> That in itself is classifying them. Which is why I started down the Bayesian path.
[07:49] <_ruben> owh: well .. i'd at least recommend "marking" email addresses that bounced at least once or twice as "special (action required)" or something similar
[07:50] <_ruben> depending on the mail volume one could process those marked addresses manually
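[Editor's note: a minimal sketch of _ruben's marking idea, counting bounces per address and flagging repeat offenders for manual review. The threshold and function names are illustrative, not from any particular library.]

```python
# Count bounces per address; flag an address for manual action once it
# has bounced BOUNCE_THRESHOLD times. Names and threshold are made up.
BOUNCE_THRESHOLD = 2

def record_bounce(stats, address):
    """Record one bounce; return True once the address needs manual review."""
    stats[address] = stats.get(address, 0) + 1
    return stats[address] >= BOUNCE_THRESHOLD

bounces = {}
first = record_bounce(bounces, "user@hotamil.com")   # first bounce: keep trying
second = record_bounce(bounces, "user@hotamil.com")  # second bounce: flag it
```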
[07:50] <owh> Yeah. At the moment we do a "time-out" - if you don't collect your ticket within a period it goes back into the pool.
[07:51] <owh> I just downloaded the email from the mailout of 9000 tickets. There were three messages, one to invite, one to collect and one to thank. That generated 3300 "extra" return emails alone.
[07:52] <owh> People are not good at writing their own email address :(
[07:53] <owh> I've not yet analysed all that email, but most of it is mistyped email addresses.
[07:53] <owh> s/is/seems to be/
[07:54] <owh> There isn't any ready-made stuff for this in u-s is there?
[07:58] <soren> owh: Even if they have to type it twice? Wow. I'm surprised.
[07:58] <owh> Nope, they just cannot seem to achieve it :(
[08:00]  * soren loses another little bit of faith in mankind
[08:01] <owh> I just found one user who mistyped their address *nine* times. The same two letters transposed.
[08:01] <Bambi_BOFH> dyslexic ;O
[08:02] <soren> *facepalm*
[08:05] <_ruben> our bulkmailers have their queues filled with @hotamil.com @homail.com @hormail.com etc addresses
[08:05] <owh> Yup
[08:05] <owh> Or @hotmail
[08:06] <owh> No phone numbers though - at least <grin>
[08:06] <_ruben> which basically is a flaw in our software which i keep nagging our devs about .. no address should be added to a mailing list until it's verified
[08:06] <_ruben> hehe
[08:06] <owh> That's the path I'm going down too. Otherwise you're just storing junk.
[08:07] <owh> So, is dabcl overkill for what I want to do, or a smart way to go about solving this?
[08:08] <_ruben> cant say i know what 'dabcl' is :p
[08:08] <soren> dbacl.
[08:09] <owh> Doh
[08:09] <owh> digramic Bayesian text classifier
[08:09] <_ruben> classifier .. hmm
[08:09] <_ruben> sounds a bit overkill indeed
[08:09] <Bambi_BOFH> is a classic 'click here to confirm' too uncool?
[08:10] <owh> Bambi_BOFH: Well, they'll click regardless.
[08:10] <_ruben> putting effort into a proper signup process is best imo
[08:10] <_ruben> not being able to do anything until a confirm link is clicked, for instance
[08:10] <Bambi_BOFH> owh, if the link is clicked, someone got the email.
[08:10]  * Bambi_BOFH heads to dinner. will be interested to see how this discussion evolves
[08:10] <owh> _ruben: I like the notion of sending an email to whatever they tell me, ignoring whatever comes back and only adding the address and sending a ticket once they click the link.
[08:11] <owh> Bambi_BOFH: Ah, I read "on the site", but you mean, "in the email"
[08:11] <Bambi_BOFH> yup.
[08:11] <_ruben> owh: it's about the only way "that works" :p
[08:11] <owh> Yup
[08:11]  * owh adds a few lines of code to make that happen and ditches the dbacl idea. Much appreciated.
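[Editor's note: a minimal sketch of the confirm-link flow owh settled on: email a tokenized link, and only trust the address (and release the ticket) once the token comes back. All names here are hypothetical, not owh's actual code.]

```python
# Double opt-in sketch: an address is only confirmed when its token
# returns, so mistyped addresses simply never confirm.
import secrets

pending = {}        # token -> address awaiting confirmation
confirmed = set()   # addresses proven to receive mail

def invite(email):
    """Create the token embedded in the confirmation link we email out."""
    token = secrets.token_urlsafe(16)
    pending[token] = email
    return token

def confirm(token):
    """Called when the link is clicked; only now is the address trusted."""
    email = pending.pop(token, None)
    if email is not None:
        confirmed.add(email)
    return email
```

Bounces and typos need no classification at all under this scheme: an address that never confirms is simply dropped when the ticket times out back into the pool.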
[08:12] <_ruben> :)
[08:12] <owh> The more I think about it the less I understand why I didn't think of this before :(
[08:13] <owh> It's not like its a new idea :)
[08:13] <jwstolk> I found my problem: wakeonlan (sending the magic packet) fails if the computer the packet is sent from has more than one nic. (I have 5, it's a firewall)
[08:14] <jwstolk> The only solution I found was disabling all but one nic, which isn't a very good option in my case.
[08:15] <owh> jwstolk: Just out of curiosity, how did you confirm this behaviour because while I've not done what you're doing, it does not appear to make sense to me.
[08:16]  * owh is happy to be disabused of this :)
[08:17] <jwstolk> It works on all my ubuntu-servers except the one with multiple nics, and it's the only reason I could find using google as well.
[08:17] <simplexio> jwstolk: if i recall right wakeonlan works only from a lan address, are all those nics in the same lan. maybe the packet originates from the wrong nic or something
[08:18] <jwstolk> the "send" operation in the python wakeonlan scripts gives an error, the packet never gets sent, not even to the wrong subnet
[08:19] <simplexio> jwstolk: ahh.. that script doesn't work .. is it in some package or can you paste it to pastebin
[08:19] <jwstolk> simplexio: the "wakeonlan" ubuntu package
[08:20] <jwstolk> "send : Operation not permitted at /usr/bin/wakeonlan line 126."
[08:20] <jwstolk> I think setting up the connection for sending fails, but I don't really know python.
[08:24] <simplexio> jwstolk: are you sure you have enabled wol on those nics which should work
[08:25] <jwstolk> I should not need to enable it on the nics I send it from. the computer that will receive it has it enabled, and it works, it just doesn't work from the server with multiple nics
[08:26] <jwstolk> I want to send the magic packets from the firewall, because that's the one that is on 24/7.
[08:39] <jwstolk> simplexio: If I change the destination port from "discard" to "ntp" in the script, it does send the packet. (port doesn't matter for WOL)
[08:39] <jwstolk> (I also tested with the firewall stopped, but that didn't help.)
[08:46] <jwstolk> hmm, the script no longer gives me an error, but nothing wakes up.
[08:58] <jwstolk> simplexio: Got it to work: changed the port in the script from "discard" to "ntp" _and_ specified the subnet using "wakeonlan -i 10.0.1.255 <HW Address>".
[09:03] <jwstolk> ok, works with the "discard" port as well, if I open that port in the firewall software. I think I got confused by the fact that stopping the firewall does not seem to clear the iptables rules.
[09:07] <kraut> moin
[09:15] <george__> hey guys, anyone here who worked with apparmor? trying to figure out how jailbash is set to be the shell for specific users only
[09:18] <VSpike> Hi .. I've set up a command-line PPTP VPN connection on my server and it works when I do "pppd call myvpn"... but how can I configure it so that a static route is added when the vpn is connected?
[09:29] <simplexio> jwstolk: have to remember that
[09:43] <heno> Hi
[09:43] <jwstolk> simplexio: It isn't very clear that wakeonlan is sending to the "discard" port, and that you have to let that through the firewall (if any), but the nic doesn't care where in the packet the "magic" part is, or to what port it is sent.
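[Editor's note: a sketch of what the wakeonlan tool puts on the wire. The magic packet is six 0xFF bytes followed by the target MAC repeated 16 times, broadcast as a UDP datagram; the NIC inspects only the payload, so the destination port (9, the "discard" service, by convention) is irrelevant to the hardware, but a firewall in between still has to pass it, as jwstolk found. The helper names here are made up.]

```python
# Build and broadcast a Wake-on-LAN magic packet: 6 x 0xFF + MAC x 16.
import socket

def magic_packet(mac):
    """Build the 102-byte WOL payload for a MAC like 'aa:bb:cc:dd:ee:ff'."""
    raw = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(raw) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + raw * 16

def send_wol(mac, broadcast="10.0.1.255", port=9):
    """Broadcast the packet; "-i 10.0.1.255" above maps to `broadcast` here."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```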
[09:43] <heno> Anyone here set up to help with a RAID install test on 64 bit?
[09:43] <heno> http://iso.qa.ubuntu.com/qatracker/test/2490
[09:44] <heno> http://testcases.qa.ubuntu.com/Install/ServerRAID1
[09:45] <jwstolk> heno, works here. (Raid 10,f2 on two disks, on ubuntu-server-64) but I needed the newest kernel before rebuilding after a (simulated) drive replacement worked.
[09:46] <jwstolk> but I can't really test things right now.
[09:46] <heno> jwstolk: thanks - I was specifically thinking of an ISO install test with the pre-beta images
[09:47] <jwstolk> ok. (I cheated anyway, I installed ubuntu on a single SSD, and only use the raid for the served files.)
[09:48] <heno> soren, ttx, dendrobates: do we have anyone with a suitable setup?
[09:50] <ttx> heno: not that I know of. Maybe kirkland.
[09:50] <soren> heno: Are virtualised installs ok?
[09:50] <ttx> soren: probably, looks like a software raid test
[09:50] <soren> Indeed.
[09:51] <soren> If so, I can do it.
[09:51] <soren> I need to take a break now, though.
[09:52] <heno> soren: virtual would be fine - it's mainly to test the ISO itself. Thanks!
[10:12] <domas> how much RAM should be left to the OS on a DB server?
[10:12] <domas> cause whenever I leave less than 2GB, kswapd starts going nuts :)
[10:12] <domas> (even with swappiness decreased a lot :)
[10:14] <soren> domas: Can you see what those 2 GB are used for?
[10:15] <domas> soren: "cache"
[10:15] <domas> well, it is 32GB machine
[10:15] <domas> so 2G is quite a small percentage :)
[10:16] <soren> Cache pages should be evicted instead of swapping.
[10:16] <domas> it isn't swapping
[10:16] <domas> it is just kswapd doing lots of CPU cycles
[10:16] <soren> And what do you think that means?
[10:16] <domas> that it is nuts :)
[10:16] <domas> if I increase swappiness, it starts swapping
[10:16] <domas> and calms down
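[Editor's note: the knob domas is adjusting is the vm.swappiness sysctl (0-100); lower values make the kernel prefer dropping page cache over swapping out process pages. A sketch of the persistent setting; the value is an example, not a recommendation for this machine.]

```
# /etc/sysctl.conf -- persistent setting; apply immediately with
# "sudo sysctl vm.swappiness=10"
vm.swappiness = 10
```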
[10:17] <soren> How do you determine whether it's swapping or not?
[10:17] <domas> vmstat
[10:17] <domas> (and "swap used" stays at 0 :)
[10:18] <domas> sometimes kswapd just starts going nuts and panics machines eventually, if no intervention is made
[10:19] <domas> it doesn't seem to like edge case of "one very very very big process"
[10:20] <soren> You should talk to the kernel guys.
[10:21] <domas> yeah, I guess
[10:26] <tom__> does somebody know why i get "ignoring bad proto spec: '17437'" when i try to restart ssh?
[10:27] <tom__> i installed openssh-server
[10:27] <tom__> changed /etc/ssh/sshd_config
[10:28] <tom__> where i changed port 22 to 17437
[10:28] <tom__> and set PermitRootLogin to no
[10:29] <Deeps> did you change port or protocol?
[10:29] <Deeps> double check the change you made
[10:30] <Deeps> default is :Protocol 2
[10:30] <Deeps> (answer found, 2nd hit on google for: 'openssh ignoring bad proto spec:')
[10:32] <tom__> thanks, you're right
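[Editor's note: for anyone hitting the same message, the fix is that the custom port belongs on the Port line while Protocol keeps its default value of 2. The relevant sshd_config fragment, assuming tom__'s port number:]

```
# /etc/ssh/sshd_config -- the lines discussed above:
Port 17437          # the custom port goes here, not on the Protocol line
Protocol 2
PermitRootLogin no
```

Then restart the daemon with `sudo /etc/init.d/ssh restart`.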
[10:49] <Jeeves_> Ola
[10:49] <Jeeves_> Anyone here using kvm + virtio nic?
[10:49] <soren> Yes.
[10:50] <Jeeves_> Ever had a kernel panic while booting it? :)
[10:51] <Jeeves_> http://pastebin.ubuntu.com/138146/
[10:53] <Jeeves_> Or better
[10:53] <Jeeves_> http://pastebin.ubuntu.com/138148/
[10:54] <domas> here, example of linux being idiotic: http://p.defau.lt/?WB6QRUQKJK19nVoZNQlNCA
[10:56] <Jeeves_> domas: How is that idiotic?
[10:56] <domas> Jeeves_: it uses 2G for cache, mostly caching _nothing_, and pushed out 2G of a process that had active cache use
[10:57] <Jeeves_> domas: I would expect cache is filled with the mysql-data files
[10:57] <domas> Jeeves_: O_DIRECT
[10:58] <domas> Jeeves_: actually most cache is log file, which is never read
[11:01] <Jeeves_> domas: So fix how syslog opens the logfiles
[11:01] <Jeeves_> so it doesn't get cached
[11:07] <beniwtv> Hi all... I have a strange problem on one of my Ubuntu servers. It has 5 HDDs in RAID (mdadm). However, one drive periodically is put into 'Fault' state by mdadm. Removing and re-adding the drive seems to get it back up. Also strange is that I created a partition on that drive of type fd (Raid autodetect), but when I start my RAID, fdisk -l complains that it hasn't a valid partition table, which I think could be related to
[11:07] <beniwtv> the error I'm seeing. Note: I created the RAID manually (not with the installer), so I can't rule out that I have done something wrong. Any ideas?
[11:10] <Deeps> not the most scientific solution, but you could try trashing that disk, recreating the partition, filesystem, etc., re-adding it to the mdadm array fresh, and having it rebuild?
[11:14] <beniwtv> Deeps: Yeah, that's what I thought too. I was previously playing with a fake RAID, which included that disk. Maybe it has some leftovers there.
[11:16] <beniwtv> Deeps: But just to verify, fdisk should not give that error (Disk xx doesn't contain a valid partition table), right?
[11:16] <Deeps> should not, no
[11:16] <beniwtv> Even in RAID 5?
[11:16] <Deeps> might wanna use dd to /dev/zero those blocks? (i'm not sure if that has any other potential repercussions, mind)
[11:25] <soren> beniwtv: That depends entirely on how you've set up your raid.
[11:27] <beniwtv> soren: Used mdadm --create, with default options (has 5 devices, RAID5). I have never done it manually, I always used the installer, which didn't give me any problems afterwards. But this server has had a RAID array added, so the system was already installed on it.
[11:28] <beniwtv> soren: But I'm beginning to think the drive is faulty, or the 3rd cable of the RAID is bad. I see timeouts in dmesg for that drive (which is the only one on that cable). And all others seem to work fine...
[12:14] <kinley> hey: is there a safe way to differentiate between ethernet devices connected via a pass-through module or by a switch module on a dell poweredge blade server ?
[12:15] <kinley> lspci : http://paste.ubuntu.com/138185/
[12:29] <soren> kinley: I don't understand the question.
[12:29] <soren> kinley: What are you trying to achieve?
[12:33] <acicula> think he's trying to figure out which ethernet device belongs to which physical connection perhaps?
[12:36] <soren> Ah.
[12:37] <soren> kinley: Do you know how the pci "addresses" (I don't know if that's the correct term) map to physical ports?
[12:39] <kinley> sorry, solved it, dell blade chassis map the ethernet ports directly to different factorys, so ethernet ports 0 and 1 go to factory A and ports 2 and 3 to factory c
[12:40] <soren> Err... Does that answer your question?
[12:40] <soren> If so, that's cool. It just means that I didn't understand the question after all :)
[12:41] <soren> I don't even know what a "factory" is (other than a place where stuff is produced).
[12:42] <kinley> ;) or a blade chassis modul slot
[12:44] <kinley> you can choose between switches, pass-through moduls....
[12:44] <soren> Googling "blade chassis factory" didn't help, either. It only gave results where "factory" was used in the "production facility" sense.
[12:50] <kinley> http://support.dell.com/support/edocs/systems/pem/multilang/cfggd/west/U003C0D.pdf
[12:50] <kinley> page 39
[12:55] <Deeps> possibly OT, is it possible to get the battery life remaining from a laptop without acpi enabled?
[12:56] <soren> Deeps: If it's old, perhaps apm will do.
[12:57] <soren> kinley: I don't see it.
[12:57] <kinley> you got the pdf ?
[12:57] <soren> kinley: Yes.
[12:58] <Deeps> soren: p3 750mhz, old it certainly is!
[13:00] <kinley> soren: on page 29 is a picture which shows the backside of the chassis... the vertical slots are the factorys
[13:00] <soren> Searching for "factory" gives me two hits. "factory default settings"  and "the factory-assigned World Wide..."
[13:00] <soren> Oh, *twenty*-nine.
[13:00] <kinley> 39
[13:00] <kinley> sorry  page 39
[13:00] <soren> Oh. "fabric" :)
[13:00] <soren> You're German or something, aren't you? :)
[13:00] <Deeps> soren: pretty sure the information it's outputting isn't accurate, but thanks anyway (100% battery life after 30mins?)
[13:00] <kinley> fabric
[13:00] <kinley> sorry
[13:00] <soren> Deeps: Mind you, old laptop batteries positively suck at reporting their current charge level.
[13:00] <Deeps> soren: good point
[13:01] <soren> Deeps: I had one that knew three different levels. 100%, 6% and 0%. Ironically, the one where it stayed the longest was 0%.
[13:01] <soren> 100% for the first 5 minutes, 6% for maybe 45 minutes, and 0% for the last hour or hour and a half or so.
[13:02] <Deeps> soren: sounds like my old dell
[13:02] <soren> Deeps: Fujitsu Lifebook.
[13:02] <Deeps> mind you, same dell reaches 0% in about 5mins now, and then cuts out 30seconds later hehe
[13:03] <Deeps> machine in question now is an hp omnibook xe3, p3 750mhz, providing internet gateway, firewall, mrtg
[13:05] <dendrobates> heno: kirkland should be able to test that.
[13:05] <Deeps> ..and i'm currently in a powercut, so only that laptop and mine are still alive, sitting in a rather uncomfortable position too as wireless is also unavailable
[13:46] <kirkland> dendrobates: heno: what specifically do you want me to test?
[13:49] <oruwork> how can i host multiple websites ?
[13:50] <oruwork> on one host
[13:53] <Deeps> apache vhosts
[13:53] <rst-uanic> oruwork: you should add different virtualhosts
[13:53] <oruwork> rst-uanic, ok, any more info on this ?
[13:53] <friartuck> oruwork ip aliases and apache virtual hosts are one of a few answers...http://httpd.apache.org/docs/1.3/vhosts
[13:54] <rst-uanic> http://httpd.apache.org/docs/2.0/vhosts/
[13:55] <rst-uanic> this one is for apache2 :) I'm not sure if there's any difference
[13:55] <friartuck> oruwork and....https://help.ubuntu.com/8.10/serverguide/C/httpd.html
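[Editor's note: a minimal sketch of the name-based virtual hosting those guides describe, using the Ubuntu 8.10-era Apache 2 layout; the hostnames and paths are hypothetical, not oruwork's.]

```
# /etc/apache2/sites-available/site1 -- hostnames and paths are examples.
# Enable with "sudo a2ensite site1" and reload apache2 afterwards.
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName site1.example.com
    DocumentRoot /var/www/site1
</VirtualHost>

<VirtualHost *:80>
    ServerName site2.example.com
    DocumentRoot /var/www/site2
</VirtualHost>
```

Both sites share one IP because Apache matches the request's Host: header against each ServerName; no ip aliases are needed for a purely name-based setup.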
[13:59] <oruwork> friartuck, so it's just a matter of creating configuration files for each site in /etc/apache2/sites-available ?
[14:00] <Faust-C> a2ensite
[14:02] <friartuck> oruwork you need to look into ip aliases, this is separate from apache. then...you need to dig into apache documentation. that's too long of a story for IRC.
[14:02] <oruwork> friartuck, ip aliases.... hmm not sure where to start
[14:03] <Faust-C> create virtual IP
[14:03] <Faust-C> eth0:1
[14:03] <Faust-C> gotta love linux's built in functionality
[14:03] <Faust-C> or name based vhosts
[14:03] <Faust-C> ubuntugeek.com
[14:04] <oruwork> errr stuck
[14:05] <twitzel> jmarsden, with jaunty multipath works like a charm
[14:05] <oruwork> and confused
[14:06] <Faust-C> oruwork: google, books, etc
[14:11] <friartuck> strange...ubuntu server guide covers eth bridging but not aliases. hm.
[14:12] <rst-uanic> aliases are quite rarely used i think
[14:14] <oruwork> so i moved file /etc/apache2/sites-available/default to /etc/apache2/sites-available/site1 and changed the site root and directory in this file, nothing happened :(
[14:14] <rst-uanic> oruwork: you should enable site1
[14:15] <rst-uanic> sudo a2ensite site1
[14:15] <rst-uanic> also, you should specify the site name in the virtualname tag
[14:17] <oruwork> http://pastebin.com/m63a12793
[14:19] <oruwork> virtualname tag ?
[14:20] <oruwork> where would i specify this ?
[14:24] <oruwork> in which file is the ServerAlias configured?
[15:04] <oruwork> rst-uanic, really stuck not sure what to do
[15:04] <george__> :q
[15:04] <george__> bye guys
[15:06] <oruwork> ScottK, around? i need some help to get 2 sites working under apache 2
[15:06] <rst-uanic> oruwork: stuck with what?
[15:06]  * ScottK is here, but knows very little about apache.  I'd say just ask the channel.
[15:07] <oruwork> rst-uanic, well.. the same thing, not sure how to get 2 separate sites
[15:08] <oruwork> i have a feeling of hitting the wall
[15:08] <oruwork> :)
[15:08] <rst-uanic> oruwork: what have you done already?
[15:09] <rst-uanic> oruwork: and... you need two different sites, that have different FQDNs but are located on the same ip and server, right?
[15:10] <oruwork> i'm looking at instructions here https://help.ubuntu.com/8.10/serverguide/C/httpd.html , I copied file default to site1 and specified document root and directory in site1 file
[15:10] <rst-uanic> ok
[15:11] <rst-uanic> when you specify virtualhost
[15:11] <rst-uanic> the first line is <VirtualHost *>
[15:12] <rst-uanic> change it to something like this <VirtualHost yoursite.com:*>
[15:12] <rst-uanic> restart apache and try again
[15:12] <oruwork> the first line in default file yes <VirtualHost *:80>
[15:13] <rst-uanic> now
[15:13] <rst-uanic> in a new file specify your site name instead of *
[15:19] <oruwork> rst-uanic, how can i remove site from a2ensite ? rst-uanic  ?
[15:20] <rst-uanic> oruwork: a2dissite
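Under the hood, a2ensite and a2dissite just create or remove a symlink from sites-enabled/ to sites-available/. A self-contained sketch of the equivalent operations in a throwaway directory (directory names mirror the real /etc/apache2 layout):

```shell
# Mimic what a2ensite/a2dissite do, in a temp dir instead of /etc/apache2.
apache_dir=$(mktemp -d)
mkdir "$apache_dir/sites-available" "$apache_dir/sites-enabled"
touch "$apache_dir/sites-available/site1"
# a2ensite site1  -> create the symlink:
ln -s "$apache_dir/sites-available/site1" "$apache_dir/sites-enabled/site1"
ls "$apache_dir/sites-enabled"
# a2dissite site1 -> remove it again:
rm "$apache_dir/sites-enabled/site1"
```

Either way, apache only notices after a reload (sudo /etc/init.d/apache2 reload on 8.10).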
[15:36] <boflic> I followed the perfect server howto for ubuntu 8.10 with isp. I have a problem though. i can connect to apache and isp from local ip (192.168.0.x) but when i try to connect from server1.x.x it fails, and firefox gives me an error that the site is there but it cant connect to it! Can anyone help me out PLEASE!
[15:38] <Zerqent> boflic: are both you and the server behind the same NAT?
[15:39] <boflic> yes, and i forwarded ports to my server, in an attempt to make it work!!
[15:40] <boflic> Can i give any logs???
[15:44] <nomoa> hi, sometimes our bind nameserver refuse to respond (timeout), I can see strange errors in /var/log/messages but I'm not sure it is linked to the problem : http://pastebin.com/da52bd36
[15:44] <boflic> sorry!!! i got it! cybercity (my isp) turned off nat loopback! reenabled it and it seems to work! does anybody know if it is possible to disable updates from isp??
[15:55] <soren> boflic: Cybercity has always done that.
[15:56] <boflic> I know! BUT WHY! they should just accept that when i made some changes, its because i need it!!! Isn't it possible to make it always on???
[15:57] <Zerqent> boflic: you have to check that from outside your NAT
[15:58] <boflic> Zergent: I've solved it with nat loopback on! my mistake!
[17:13] <jmedina> morning
[17:14] <kraut> i'm using open-iscsi to use a lun on a netapp filer. my system spams the filer with this message: Thu Mar 26 18:13:24 CET [is@iscsi.notice:notice]: ISCSI: New session from initiator iqn.1993-08.org.debian:01:c3f22ca89d75 at IP addr XXX
[17:14] <kraut> does anybody know, how to fix that?
[17:16] <jmedina> kraut: where are those messages displayed?
[17:16] <kraut> on the filer
[17:17] <kraut> shall i pastebin the default-file of the node, i'm using?
[17:18] <jmedina> kraut: do it, probably someone else can help
[17:19] <kraut> jmarsden_: http://pastebin.com/m6284d233
[17:19] <kraut> XXX is the target IP
[17:20] <kraut> it seems to happen every 30 seconds
[17:26] <jmedina> probably because of timeouts; ping timeout, I think, is something like a keep-alive packet
[17:27] <kraut> how do i deactivate that? because it's working.
[17:28] <kraut> the strange thing is also, when i stop open-iscsi, the disk is still working
[17:30] <kraut> i set ping timeout to 0 now
[17:39] <kraut> seems to help
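For the record, the knobs kraut most likely changed live in /etc/iscsi/iscsid.conf (the exact lines he edited aren't shown in the log; 0 disables the NOP-Out keepalives entirely). A sketch, written to a temp file here instead of the real config:

```shell
# Sketch of the open-iscsi keepalive settings; written to a temp copy
# so the example is self-contained (really they go in /etc/iscsi/iscsid.conf).
conf=$(mktemp)
cat > "$conf" <<'EOF'
# 0 disables the periodic NOP-Out "ping" open-iscsi sends to the target
node.conn[0].timeo.noop_out_interval = 0
node.conn[0].timeo.noop_out_timeout = 0
EOF
cat "$conf"
```

Existing node records keep their old values; assuming a period-appropriate open-iscsi, they can be updated per target with something like `iscsiadm -m node -T <target> -o update -n node.conn[0].timeo.noop_out_interval -v 0` (the `<target>` is a placeholder).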
[18:02] <mathiaz> kirkland: have you heard of mandos? http://packages.ubuntu.com/jaunty/mandos
[18:03] <kirkland> mathiaz: nope, looks interesting, perhaps
[18:03] <kirkland> mathiaz: i'd like to review the full design
[18:03] <kirkland> mathiaz: but looks interesting
[18:11] <paul_> sd
[18:11] <oruwork> hi, so how can i get Apache2 to work with 2 sites ?
[18:12] <acicula> vhosts
[18:17] <oruwork> acicula, this is file /etc/apache2/sites-available/selsovet.com which is my second site that i'm trying to run http://pastebin.com/m7c89c098
[18:18] <oruwork> both domains still open the same document root
[18:19] <oruwork> rst-uanic, still here ? :)
[18:19] <acicula> oruwork: dunno about syntax, guess the vhosts dont match
[18:29] <oruwork>  could someone please help me with setting up 2 different sites ?
[18:30] <jmedina> oruwork: isnt it documented in the ubuntu server guide?
[18:31] <oruwork> jmedina, yes, server guide is what i'm looking at , but i'm struggling with it
[18:31] <jmedina> what is the problem?
[18:31] <jmedina> oruwork: both servers are listening on the same IP and same port?
[18:33] <oruwork> the only thing i did was copy the sites-available/default file and modified it like this http://pastebin.com/m7c89c098 , reloaded apache. now when typing both domains in the browser they still point to the same document directory, and i want them to point at 2 different directories. So yeah, i really need help on getting this to work jmedina
[18:35] <jmedina> oruwork: what is the output from
[18:35] <jmedina> apache2ctl -S
[18:35] <jmedina> ?
[18:35] <oruwork> http://pastebin.com/m237bf9b
[18:37] <big_ham> re
[18:39] <jmedina> oruwork: I use this config for virtual hosts
[18:39] <jmedina> http://verde.e-compugraf.com/jm-confs/apache/vhost.apache2.template
[18:39] <jmedina> I only place that file in /etc/apache2/sites-available
[18:39] <jmedina> and then enable with
[18:39] <jmedina> a2ensite vhostname
[18:39] <jmedina> and reload apache
[18:40] <big_ham> using Dovecot/Postfix, can I have one user's email attachments stored in a specific directory?
[18:40] <jmedina> oruwork: if  using name based virtual host and both sites uses same IP and same port then remove the domain name from the VirtualHost directive
[18:41] <oruwork> jmedina, ok, this is what i did, both domains still show the same site though :( http://pastebin.com/m4a7861dd
[18:42] <jmedina> oruwork: change your virtualhost directive
[18:42] <Ayukawa> Okay, at risk of sounding like an idiot, i just set up spam filtering based on the guide at https://help.ubuntu.com/8.10/serverguide/C/mail-filtering.html but I'm wondering how to get a list of mails that are blocked by the filters.
[18:42] <jmedina> just put a *
[18:42] <oruwork> jmedina, yeah its <VirtualHost *:80>
[18:42] <oruwork>  now
[18:43] <jmedina> Ayukawa: if you are using amavisd-new then you can set notifications for spam, virus, banned, and bad headers
[18:43] <jmedina> I think its not enabled by default
[18:43] <jmedina> oruwork: again
[18:43] <jmedina> apache2ctl -S
[18:43] <jmedina> please
[18:44] <oruwork> jmedina, sure bro http://paste.ubuntu.com/138450/
[18:44] <oruwork> jmedina, i think i didnt specify ServerName , just not sure how to do this
[18:45] <jmedina> ServerName is most important
[18:45] <jmedina> if not specified all traffic goes to default site
[18:45] <jmedina> just put ServerName selsovet.com
[18:45] <oruwork> where do i specify it ?
[18:45] <jmedina> and again -S
[18:45] <jmedina> in your second site file
[18:46] <oruwork> anywhere?
[18:46] <oruwork> at the bottom ?
[18:46] <jmedina> the one you posted, the one you changed *
[18:46] <jmedina> yeap
[18:46] <jmedina> I usually add it near to ServerAdmin and before DocumentRoot
[18:46] <jmedina> you can do it anywhere
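Putting jmedina's advice together, the second site's file in sites-available ends up looking something like this (the domain and DocumentRoot follow the discussion but are placeholders; the file is written to a temp dir here so the sketch is self-contained):

```shell
# Minimal name-based vhost sketch; in real use this file lives in
# /etc/apache2/sites-available/ and is enabled with a2ensite.
conf_dir=$(mktemp -d)
cat > "$conf_dir/selsovet.com" <<'EOF'
<VirtualHost *:80>
    ServerAdmin webmaster@selsovet.com
    # ServerName is what name-based vhost matching keys on;
    # without it, requests fall through to the default site.
    ServerName selsovet.com
    ServerAlias www.selsovet.com
    DocumentRoot /var/www/site1
</VirtualHost>
EOF
grep -n 'ServerName' "$conf_dir/selsovet.com"
```

Then a2ensite selsovet.com, reload apache, and verify the mapping with apache2ctl -S.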
[18:47] <oruwork> heh
[18:47] <oruwork> i think its working now  :)
[18:48] <oruwork> can i do this without the default ?
[18:48] <oruwork> i would like to organize this by /var/www/site1 , /var/www/site2, etc....
[18:49] <oruwork> cause i'll be hosting 3 sites on this vps
[18:49] <oruwork> jmedina, ^^
[18:51] <big_ham> jmedina ... did you see my Q above ^^^ ?
[18:51] <jmedina> oruwork: yeap, I usually always put default in /var/www/default, and all the sites go to /var/www/siteN
[18:52] <jmedina> I use default as a catch-all, all traffic not directed to a defined virtual host goes to the default site
[18:52] <jmedina> for example when someone tries to use the IP instead of the name
[18:53] <oruwork> oh thats right, what would happen if someone would use the ip ?
[18:53] <jmedina> big_ham: I dont know, what do you mean by an attachment store dir?
[18:56] <big_ham> here's the scenario briefly
[18:56] <oruwork> so i should point the default file to go to /var/www/default ?
[18:56]  * Faust-C wants more work on making sure Kolab works in ubuntu
[18:56] <big_ham> there's an email address people in the field use to email pictures to
[18:56] <big_ham> right now a human checks the emails, strips attachments and uploads to FTP dir
[18:56] <big_ham> mail and ftp are on the same box, so if I can strip attachments on the server side and drop them in a directory, it saves a step and some bandwidth
[18:56] <jmedina> oruwork: I always do that
[18:56] <big_ham> does that scenario make sense?
[18:56] <oruwork> so when someone types the ip of your VPS, what do they see ?
[18:56] <Faust-C> big_ham: makes sense
[18:56] <Faust-C> picasa has an item like that
[18:57] <Faust-C> you can text an image to a certain email address and it will be in the album
[18:57] <Faust-C> big_ham: you can make a filter cant you?
[18:57] <Faust-C> like setup a images@domain.com and have the filter strip the attachments and save them to a folder
[18:57] <big_ham> on the server side?
[18:57] <Faust-C> yeah iirc
[18:58] <Faust-C> server side filters, w/ imap
[18:58] <big_ham> I'm not sure, I'm fairly new to Ubuntu, let me google that one
[18:58] <big_ham> didn't know the "lingo" I should be using, ya know?
[18:58] <jmedina> I think I already answered this a few days ago, I dont know a solution for that
[18:59] <big_ham> you did jmedina ... but I felt I didn't phrase properly
[18:59] <jmedina> but I think its not that hard to create a script that strips mail and places attachments in a directory, then mangles the mail body to add a footer with info on the attachments' locations on an FTP server
[18:59] <big_ham> and I've been googling with no pertinent results which is weird
[18:59] <jmedina> postfix has good support for piping to a program
[18:59] <Faust-C> jmedina: yep
[18:59] <jmedina> renattach did something like that
[19:00] <big_ham> would this be a dovecot filter, or a postfix filter (i assume dovecot)
[19:00] <jmedina> big_ham: depends on what Local Delivery Agent you use
[19:00] <jmedina> you can use local postfix, or dovecot 'delivery'
[19:00] <big_ham> jmedina: how can i check to be sure before I waste time in the wrong realm?
[19:01] <mathiaz> kirkland: is there anything cool to mention about the qemu update to 0.10.0 in jaunty?
[19:01] <kirkland> mathiaz: that it happened!
[19:01] <kirkland> mathiaz: it's the first qemu release in almost a year
[19:01] <mathiaz> kirkland: ok - new features? main bug fixes?
[19:02] <kirkland> mathiaz: http://www.nongnu.org/qemu/changelog.html
[19:03] <mathiaz> kirkland: ok - the main thing seems to be kvm support and all the virtio stuff
[19:03] <mathiaz> kirkland: wasn't this already included in Ubuntu?
[19:05] <stickystyle> big_ham: I do something similar to what you're asking about.  Previously I did it with a big nasty procmail script, but you run into scalability problems processing each message as it arrives.  I don't know what your scripting ability is, but I would recommend letting the mail deliver to a set mailbox (as it sounds like you already do) then have a script that runs out of cron, like every 5min, to read the mailbox and take action on it
[19:06] <big_ham> unfortunately my scripting abilities are limited, but my learning abilities are very high ...
[19:07] <big_ham> i found some info related to "body_checks" and making filters
[19:07] <big_ham> but it's specifically to "REJECT" bad attachments
[19:07] <big_ham> http://linuxpoison.blogspot.com/2007/12/filter-attachments-bat-exe-etc-in.html
[19:08] <jmedina> stickystyle: would you mind to share your script?
[19:09] <kirkland> mathiaz: yeah
[19:10] <kirkland> mathiaz: the key is that qemu has lacked an active maintainer for most of a year
[19:10] <stickystyle> jmedina: Let me take a look at what I can do to share the idea of how it works, it's kind of tricky since technically it would be my company's property.
[19:10] <kirkland> mathiaz: aliguori just took that over, and will be doing regular releases
[19:12] <big_ham> stickystyle: I have to say that cron seems easy enough, but I must admit I'm not even aware where attachments are kept in the file system
[19:13] <stickystyle> big_ham: They are kept mixed in with the actual email file.
[19:13] <stickystyle> base64 encoded.
[19:13] <big_ham> oh boy ... not all that straightforward then
[19:14] <stickystyle> Well, that's where a modern scripting lang comes in to help.  It will abstract most of those little nuances away from you.
[19:14] <big_ham> i see
[19:15] <big_ham> any general google guidance you can provide would prove very helpful as I'm having a hard time figuring a starting point
[19:16] <stickystyle> big_ham: Here are the python examples of mailbox handling http://docs.python.org/library/mailbox.html#examples
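Those docs cover exactly the piece big_ham was missing. A minimal sketch of a cron-able extractor using only the stdlib mailbox/email modules (the function name and paths are made up for illustration; real mail would need collision and error handling):

```python
# Sketch: pull every attachment out of an mbox and drop it in a directory.
# Paths and the function name are illustrative, not from the discussion.
import mailbox
import os

def extract_attachments(mbox_path, out_dir):
    """Save each attachment found in mbox_path into out_dir; return filenames."""
    os.makedirs(out_dir, exist_ok=True)
    saved = []
    for msg in mailbox.mbox(mbox_path):
        for part in msg.walk():
            name = part.get_filename()
            if not name:
                continue  # body text / multipart containers have no filename
            data = part.get_payload(decode=True)  # undoes the base64 encoding
            out_name = os.path.basename(name)     # avoid path tricks in filenames
            with open(os.path.join(out_dir, out_name), "wb") as f:
                f.write(data or b"")
            saved.append(out_name)
    return saved
```

Run it from cron (e.g. */5 * * * *) against the mailbox the pictures land in; a real version would also remove or mark processed messages so attachments aren't re-extracted.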
[19:17] <oruwork> this is a beauty jmedina :)
[19:18] <jmedina> good, another happy customer
[19:18] <jmedina> :D
[19:18] <jmedina> next :D
[19:19] <big_ham> stickystyle: thank you
[19:19] <big_ham> have to head out on site, but I will pick back up with this when I return
[19:20] <big_ham> BTW ... for whom do you work?
[19:24] <stickystyle> I'm an IT Manager for a Freight Forwarding company, nothing glamorous :)
[19:25] <big_ham> I see ... always interested
[19:27] <oruwork> jmedina, doing some further testing here, it turns out that one of my domains cant look up the directory i specified in /etc/apache2/sites-available/selsovet.com
[19:27] <jmedina> oruwork: which tests?
[19:28] <oruwork> jmedina, i'm reorganizing everything the way you told me so that my setup will be similar to /var/www/default, /var/www/site1, /var/www/site2, etc...
[19:28] <oruwork> jmedina, i'm putting up index.html files in directories of the sites
[19:28] <oruwork> and trying to access them in the browser
[19:29] <jmedina> oruwork: did you restart apache after changing DocumentRoot in sites config?
[19:29] <oruwork> jmedina, yup
[19:30] <jmedina> oruwork: and what about apache2ctl -S
[19:30] <jmedina> ?
[19:30] <jmedina> also use apache2ctl -t for syntax checking
[19:31] <mathiaz> kirkland: how many logos are now available in screen-profiles?
[19:31] <kirkland> mathiaz: released in jaunty, or committed to bzr ?
[19:31] <stickystyle> big_ham: I doubt this code will work right away as I did it from memory, but it should give you the general idea what I was talking about http://pastebin.com/d48672594
[19:31] <mathiaz> kirkland: in jaunty
[19:32] <mathiaz> kirkland: I saw a mention about suse in the changelog
[19:32] <kirkland> mathiaz: so the screen-profiles package just comes with ubuntu-light, ubuntu-dark, and ubuntu-black
[19:32] <oruwork> jmedina, http://pastebin.com/m6196adf3
[19:32] <kirkland> mathiaz: there's a new screen-profiles-extras package, which has a bunch of other light/dark colors, plus profiles for (fedora, debian, redhat)
[19:33] <kirkland> mathiaz: committed to bzr are profiles for (centos  debian  fedora  gentoo  mandriva  novell  redhat  slackware  suse  ubuntu)
[19:33] <kirkland> mathiaz: and i just completed a new script, screen-profile-dump
[19:33] <kirkland> mathiaz: which will allow you to dump your profile to one, monolithic, file, which you can install as ~/.screenrc on any unix/linux system that has screen
[19:33] <oruwork> jmedina, http://pastebin.com/m2508310f
[19:34] <kirkland> mathiaz: thus, for distros that don't have screen-profiles packaged for them yet
[19:34] <kirkland> mathiaz: or, for a system where you don't have root access and can't install screen-profiles
[19:34] <kirkland> mathiaz: so people.ubuntu.com, for instance
[19:34] <mathiaz> kirkland: cool
[19:34] <kirkland> mathiaz: i also learned a neat new trick for kvm today
[19:34] <kirkland> mathiaz: which works *really* well in screen
[19:34] <kirkland> mathiaz: kvm -curses
[19:34] <kirkland> mathiaz: runs the kvm in the current console/shell session
[19:35] <kirkland> mathiaz: i now have each of my kvm's running in their own window in screen
[19:35] <NEWzilla> Hi, I have 8.10 installed LAMP configuration plus aptitude safe-upgrade executed with subversion installed.. (just to provide a background on my server)  My problem is it appears Apache's ldap is not searching nested groups.
[19:35] <jmedina> can I paravirtualize using KVM on my opteron cpus (they dont support full virt)?
[19:35] <mathiaz> kirkland: hm - you mean the kvm command?
[19:35] <mathiaz> kirkland: or the console of the guest?
[19:36] <kirkland> mathiaz: i received a contribution from a novell/suse developer yesterday
[19:36] <NEWzilla> I have found this was fixed in i think 2.2.3 of apache but it appears to not be working  for me.  i have to add the user directly to the group but it does not search nested groups
[19:36] <kirkland> mathiaz: with support for suse's update manager, in the updates-available script
[19:36] <jmedina> I always used xen for paravirt
[19:36] <kirkland> mathiaz: the kvm command
[19:36] <jmedina> NEWzilla: what do you mean by ldap nested groups?
[19:37] <mathiaz> kirkland: ok - does that mean you have to create a new screen window before starting kvm -curses?
[19:37] <jmedina> NEWzilla: what are you trying to do?
[19:37] <oruwork> jmedina, nvm , i made a silly mistake, this beauty is working
[19:37] <kirkland> mathiaz: well, that command will take over your current shell
[19:37] <kirkland> mathiaz: running the kvm itself inside of an ncurses session
[19:37] <jmedina> oruwork: good, what was the silly mistake?
[19:37] <NEWzilla> I have apache setup to authenticate basic auth off of MS active directory.
[19:37] <kirkland> mathiaz: so, yeah, i hit <f2> to open a new window, name it whatever that vm's name will be
[19:37] <NEWzilla> i have require ldap-group setup for my <location>
[19:38] <kirkland> mathiaz: and then run kvm -curses -hda foo.img
[19:38] <oruwork> jmedina, i copied the index.html files to the wrong path lol
[19:38] <NEWzilla> it works but only if i have the users in the specified group. if i put a group in the group "nested group" in the ad group.. apache does not appear to search the nested group to determine if the user is part of a nested group.
[19:38] <mathiaz> kirkland: what kind of ncurse session is created?
[19:39] <kirkland> mathiaz: it just uses curses to render the console output of the vm
[19:39] <kirkland> mathiaz: rather than sdl
[19:39] <mathiaz> kirkland: could it be possible to detect if you're running in a screen session and automatically create a new window and name it correctly?
[19:39] <mathiaz> kirkland: I'm not familiar with kvm on the command line as I'm running everything from libvirt
[19:40] <kirkland> mathiaz: there is some support in screen for auto-naming windows
[19:40] <NEWzilla> i have tried using AuthLDAPMaxSubGroupDepth but it fails and apache does not restart.. says it is not supported or the module is not installed... yet i have authnz_ldap enabled and it works ... just not when the user is in a nested group in the group set for the require ldap-group
[19:40] <kirkland> mathiaz: it can take some regex of whatever your last command executed or something like that
[19:40] <kirkland> mathiaz: i played with that for a little while
[19:40] <kirkland> mathiaz: it was very distracting to use in the general case, i found
[19:41] <kirkland> mathiaz: my window names were jumping all over the place :-)
[19:41] <NEWzilla> so, i am kind of wondering.... might ubuntu server's apache install be missing this patch?
[19:41] <mathiaz> kirkland: right - I usually don't name my screen windows
[19:41] <mathiaz> kirkland: OTOH I rarely have more than two sessions opened.
[19:41] <mathiaz> kirkland: otherwise it takes too much time to cycle through them.
[19:42] <kirkland> mathiaz: i name all of mine, and i have 10-15 open
[19:42] <jmedina> mm I have not used nested groups in AD, I dont know how its structured in the LDAP tree
[19:42] <mathiaz> kirkland: however one thing I made sure when I designed my vm mgmt scripts is to have a consistent naming in the vm.
[19:42] <jmedina> NEWzilla: have you tried using a simple ldapsearch query?
[19:42] <mathiaz> kirkland: ex: if I create a vm named t-slapd, I wanted to make sure that the guest hostname was t-slapd
[19:43] <mathiaz> kirkland: and that I could ssh into the guest using t-slapd
[19:44] <kirkland> mathiaz: i like that consistency
[19:45] <mathiaz> kirkland: I had to modify the root filesystem to be able to specify the hostname of the guest from the host
[19:45] <NEWzilla> jmedina: no, but this is because i currently do not know how to build such a search query.
[19:45] <mathiaz> kirkland: this is why I'm using lvm snapshots rather than qcow2 files as I want to be able to update the root filesystem.
[19:46] <mathiaz> kirkland: do you know if it's possible to get the vm name from the guest?
[19:46] <NEWzilla> jmedina: i have found the bug entry for apache and it says closed and was a bug on the require ldap-group but i have no clue how to determine if my apache really does include its fix.. i have checked and apache 2.2.9 is running..
[19:46] <mathiaz> kirkland: I meant inside the guest
[19:47] <jmedina> NEWzilla: check the changelog of your apache package
[19:47] <jmedina> I really dont know how nested groups work in ldap
[19:47] <kirkland> mathiaz: as libvirt calls it?
[19:47] <kirkland> mathiaz: i don't think so
[19:47] <kirkland> mathiaz: i don't think the guest knows its a guest
[19:50] <jmedina> I use this config for group ldap auth
[19:50] <jmedina> http://paste.ubuntu.com/138481/
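That paste is no longer available; below is a hedged reconstruction of what a typical mod_authnz_ldap group-auth block looks like, not jmedina's actual config (every host, DN, and password is a placeholder, and the block is written to a temp file to keep the sketch self-contained):

```shell
# Sketch of an apache 2.2 mod_authnz_ldap group-restriction block;
# all values are placeholders, written to a temp file for illustration.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<Location /secure>
    AuthType Basic
    AuthName "AD login"
    AuthBasicProvider ldap
    AuthLDAPURL "ldap://dc.example.com/DC=example,DC=com?sAMAccountName?sub?(objectClass=user)"
    AuthLDAPBindDN "CN=binduser,CN=Users,DC=example,DC=com"
    AuthLDAPBindPassword secret
    # AD lists group members as full DNs, hence:
    AuthLDAPGroupAttributeIsDN on
    Require ldap-group CN=Domain Admins,CN=Users,DC=example,DC=com
</Location>
EOF
grep -n 'Require ldap-group' "$conf"
```

This checks direct membership only, which matches NEWzilla's symptom: nothing here recurses into nested groups.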
[19:55] <NEWzilla> the first difference i have found is you have AuthLDAPGroupAttributeIsDN in your configuration..
[19:55] <NEWzilla> i don't have this in mine.. but going to read up on what it is..
[19:56] <NEWzilla> do you have any sub groups in your domain admins that contain users not in the domain admin group directly and are they still granted access to the site?
[19:57] <NEWzilla> a nested group is just a group that contains reference to another group. instead of just the users.
[19:57] <jmedina> NEWzilla: nope I dont use subgroups
[19:57] <NEWzilla> for example when assigning a user to a group you can also assign a group to a group.
[19:58] <NEWzilla> jmedina: i guess you wouldn't want to take a stab at testing it with me to see if i am really finding an issue with ubuntu's apache + mod authnz_ldap or if i am just doing things wrong?
[19:59] <NEWzilla> i am looking at the apache bug 42891 and it says resolved... but it is still not working for me.
[19:59] <mathiaz> kirkland: right. Something similar to the ec2 init script
[19:59] <mathiaz> kirkland: where you can grab information about the guest from an outside source
[20:00] <NEWzilla> http://issues.apache.org/bugzilla/show_bug.cgi?id=42891
[20:00] <kirkland> mathiaz: interesting, can you pastebin that init script?
[20:00] <mathiaz> kirkland: IIRC with vmware-server you can poke at things between the host and the guest
[20:01] <zul> kirkland: for the ec2 set hostname?
[20:01] <zul> the script that changes is called ec2-set-hostname.py in the ec2-init package
[20:01] <mathiaz> kirkland: IIRC in ec2 you can get some information about the AMI by wget a specific address from the guest
[20:02] <zul> mathiaz: the latest updated version for the next ec2 beta has a script called ec2-get-info which allows you to get a lot of the information already without using curl
[20:06] <NEWzilla> how do i determine what version of a mod i have installed?
[20:13] <giovani3> NEWzilla: an apache module you installed via ubuntu?
[20:14] <NEWzilla> oh crap........ i think i just learned that it looks to be part of the apache 2.3 trunk......
[20:15] <NEWzilla> anyone know about getting apache 2.2 upgraded to 2.3 on ubuntu server 8.10?
[20:15] <giovani3> 2.3 is the development trunk, they don't release those
[20:15] <NEWzilla> or even 2.4 ?
[20:15] <giovani3> it turns into 2.4 when it's done
[20:16] <giovani3> until it's out ... it can't be included
[20:17] <NEWzilla> if the module documentation is under documentation > 2.3 > modules... this means it is part of the 2.3 apache trunk.. right?
[20:17] <NEWzilla> i will see if the 2.4 has this part of it in the authnz_ldap module...
[20:17] <NEWzilla> actually i don't even know if there is an apache 2.4 yet... lol
[20:17] <giovani3> there isn't ... like I said
[20:17] <giovani3> 2.3 is a development trunk -- it turns into 2.4 when it's finished
[20:18] <NEWzilla> ok, i got you
[20:18] <giovani3> then 2.5 will be the development trunk, and it will turn into 2.6 when it's finished
[20:18] <giovani3> considering 2.2 is relatively modern, I don't know when 2.4 is expected
[20:18] <giovani3> you could, however, ask about this in #apache, I'm sure they know much more
[20:19] <NEWzilla> i understand .. blarg..... how crazy is it to use the 2.3 right now? or is there a way to only use the authnz_ldap module in my 2.2 apache server?
[20:19] <NEWzilla> er . the new authnz_ldap module that has the subgroup search feature added
[20:19] <giovani3> 2.3 is a development trunk, it's probably not stable
[20:19] <giovani3> once again ... #apache knows far more about this than we do
[20:20] <NEWzilla> ok, i will hop over there thanks guys
[20:22] <jmedina> NEWzilla: havent you tried kerberos auth? I think its more appropriate for AD auth
[20:22] <jmedina> http://port25.technet.com/archive/2008/01/25/technical-analysis-apache-with-mod-auth-kerb-and-windows-server.aspx
[20:23] <jmedina> from microsoft :D
[20:25] <NEWzilla> I will look at it. but i am working on getting together an identity management solution and to centrally help manage other resources.  this apache server is just one of many resources i would like to manage with Active Directory
[20:27] <jmedina> I think kerb is better for central id mgmt, you can enable Single Sign-On with it using key based auth
[20:27] <jmedina> IE and mozilla support SSO
[20:40] <geekboxjockey> I was wondering if anyone here uses Bacula, I have a filset issue with backups for 3 systems. Each backup is almost identical in coverage plus the inclusion of an additional folder or two on each system. Is there a way to extend or inherit from a base fileset and add custom additions for each host on top of that?
[20:41] <geekboxjockey> So instead of having to specify a fileset for each host that contains (/usr, /var, /etc, /home...) just specify one, and then add to it for additional locations on each individual host.
[20:43] <acicula> geekboxjockey: been awhile since i set that up, maybe, if it's possible it's described in the manual
[20:44] <geekboxjockey> yeah, I've been scouring it for a bit now, it's a BIG manual, I've also done the obligatory googling before coming here :-P
[20:45] <geekboxjockey> Bacula configuration is an art-form :P
[20:50]  * Faust-C suggests backupPC
[20:50] <Faust-C> considering its a part of amanda now
[21:14] <beniwtv> hi all... I have created partitions on my disks with fdisk on ubuntu server, changed the partition type to fd (linux raid autodetect), saved them with "w", but the partitions do not show up in /dev, even after a reboot. Any ideas?
[21:16] <beniwtv> (there also were no errors from fdisk, and fdisk -l after the reboot shows them fine)
[21:26] <Yasumoto> soren: I saw that it looks like you were working on getting Cobbler working on/with ubuntu, did that work out all the way or is it still in progress? I can't seem to find any 'recent-ish' updates
[21:26] <soren> Yasumoto: It keeps getting deferred, I'm afraid.
[21:27] <beniwtv> oh, and one more hint: cat /proc/partitions does not show them... now I'm really worried.... :-/
[21:28] <Yasumoto> soren: ah, totally understandable
[21:30] <Yasumoto> soren: is it close to being done, or are there still some parts that need work?
[21:45] <soren> Yasumoto: there's still quite a bit of work to be done.
[21:46] <Yasumoto> soren: gotcha, I'll poke around a bit, thank you :)
[22:16] <theunixgeek> Any recommendations for a minimal desktop environment for Ubuntu Server?
[22:16] <Jeeves_> openssh-server :)
[22:17] <Jeeves_> +screen
[22:17] <Yasumoto> you could try xfce (apt-get install xubuntu-desktop)
[22:17] <theunixgeek> Yasumoto: I'm installing it right now :)
[22:18] <Yasumoto> theunixgeek: cool
[22:18] <theunixgeek> I was wondering if there's something even more minimal
[22:18] <Deeps> ubuntu server + X = #ubuntu
[22:18] <theunixgeek> Since my download speed just dropped from 121 to 66 kbps :|
[23:35] <twitzel> jmarsden_: I did install jaunty and it worked. Interestingly, I noticed other differences in how multipath-tools behaved in jaunty and for fun copied just the executable /sbin/multipath from the jaunty box to the intrepid box. Now that intrepid box works absolutely perfectly. So it wasn't a udev issue after all
[23:36] <twb> twitzel: that hurts my brain
[23:37] <twitzel> Although the versions of multipath-tools in jaunty and intrepid appear to be the same, the one that comes with jaunty works, the one that is in intrepid doesn't
[23:38] <twitzel> Anyhow, all my problems are solved now. Thanks to everyone who helped.
[23:38] <twb> twitzel: no, they differ in -1 to -2.
[23:38] <twb> http://changelogs.ubuntu.com/changelogs/pool/main/m/multipath-tools/multipath-tools_0.4.8-14ubuntu2/changelog
[23:38] <twitzel> Uh, didn't notice the 2
[23:38] <twb> The first changelog entry says "Let dmsetup run kpartx"
[23:40] <twitzel> will this be backported to intrepid ?
[23:42] <twitzel> Another strange thing is, with intrepid multipath-tools you get something like this: multipath -ll
[23:42] <twitzel>  mpath2 (360022190009773680000214a495047ce) dm-2 ,
[23:42] <twitzel>  [size=2.0T][features=0][hwhandler=0]
[23:42] <twitzel>  \_ round-robin 0 [prio=3][active]
[23:42] <twitzel>  \_ #:#:#:# sdd 8:48  [active][ready]
[23:42] <twitzel>  \_ round-robin 0 [prio=0][enabled]
[23:42] <twitzel>  \_ #:#:#:# sdj 8:144 [active][ghost]
[23:43] <twitzel> With the newer multipath-tools from jaunty it looks correctly like this: mpath2 (360022190009773680000214a495047ce) dm-1 DELL    ,MD3000
[23:43] <twitzel> [size=2.0T][features=0][hwhandler=0]
[23:43] <twitzel> \_ round-robin 0 [prio=3][active]
[23:43] <twitzel>  \_ 1:0:0:2  sdd 8:48  [active][ready]
[23:43] <twitzel> \_ round-robin 0 [prio=0][enabled]
[23:43] <twitzel>  \_ 1:0:1:2  sdj 8:144 [active][ghost]
[23:43] <twb> Please stop
[23:44] <twitzel> okay
[23:44] <twitzel> sorry
[23:44] <twb> I don't know if it will be backported to intrepid; I don't know ubuntu's backporting policy.
[23:45] <twb> At worst you can manually install that .deb on each host you have, I guess.
[23:47] <twitzel> There is however, still a minor issue