[00:00] <hggdh> isn't it like ntpd and ntpdate?
[00:01] <jeeves_Moss> how can I configure TLS with my postfix config if I have virtual users whose data is held in MySQL?
[00:02] <SpamapS> jeeves_Moss: TLS shouldn't really matter at that point, unless you're trying to store their client-certificate in mysql somehow.
[00:02] <jeeves_Moss> SpamapS, naaa, I've just been trying to set up TLS so I can send e-mail when I'm external to the local network
[00:04] <JanC> so, a "submission" service on port 587 ?
[00:04] <jeeves_Moss> ??
[00:04] <jeeves_Moss> JanC, was that directed @ me?
[00:05] <JanC> jeeves_Moss: yes  ☺
[00:05] <jeeves_Moss> oh, sorry.
[00:05] <jeeves_Moss> JanC, basically, when I'm internal to the network (where the postfix server is), I can send all day long, but as soon as I'm external, I get "mail relay" issues
[00:06] <JanC> the server can be reached from the outside network I assume?
[00:07] <jeeves_Moss> yes
[00:07] <SpamapS> jeeves_Moss: Ah, so you have no auth setup w/ Postfix.. you just want to create A) TLS, and B) SMTP Auth?
[00:07] <jeeves_Moss> I can do it all (httpd, ftpd, IMAP, etc)
[00:08] <jeeves_Moss> SpamapS, yep, that's all I want.  I want to be able to send e-mail from the outside (no real point of having a smart phone if I can't reply to e-mails on my own friggin' domain!)
[00:11] <SpamapS> jeeves_Moss: yeah I did the same thing
[00:12] <SpamapS> jeeves_Moss: I forget which howto I followed.. its actually quite simple
[00:12] <jeeves_Moss> SpamapS, oh?  what phone do you have?
[00:12] <jeeves_Moss> (and if you get a chance, can you see if you can find the "howto"?
[00:12] <JanC> jeeves_Moss: do you have SASL authentication setup?
[00:13] <jeeves_Moss> I'm thinking I do, but I should remove it and start fresh if you know of a good howto
[00:14] <JanC> you probably need something like the following in master.cf: http://paste.ubuntu.com/436972/
[00:14] <jeeves_Moss> JanC, Thanks.
[00:14] <SpamapS> jeeves_Moss: Android 1.6 w/ K-9 email client
[00:14] <jeeves_Moss> I'll have a look in a sec
[00:15] <JanC> submission == port 587 (which is the default for such a service, as many ISPs block outgoing port 25 connections)
[00:15] <jeeves_Moss> SpamapS, nice.  I personally love my WM5 phone
[00:15] <SpamapS> to each his own
[00:15]  * f1yback bites THEREisONLYzulNUCK
[00:15] <f1yback> CANUCK
[00:15] <jeeves_Moss> JanC, wait a sec....  You say it's using TLS on port 587?
[00:16] <SpamapS> JanC: why would you put that in master.cf? I don't think I had to do anything in there.. just added stuff to main.cf
[00:16] <SpamapS> I take that back I did add stuff
[00:18] <JanC> SpamapS: because I want port 25 (incoming mail from other servers etc.) & port 587 (submission of mail by me) handled differently
[00:18] <SpamapS> http://paste.ubuntu.com/436985/
[00:18] <SpamapS> JanC: yeah me too, which is why mine is similar. ;)
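(For readers without access to the pastes: a submission stanza of the kind being discussed typically looks like the following in master.cf. The option names are standard Postfix parameters, but treat the exact set as a sketch, not the pasted config.)

```
submission inet n       -       -       -       -       smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
```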
[00:19] <celeborn999> anyone have any tips for configuring permissions for wordpress on ubuntu? seems like something doesn't have what it needs out-of-the-box. i don't want to just chmod 777 all of /usr/share/wordpress
[00:19]  * SpamapS is embarrassed that his production cert file is named 'test.pem' ...
[00:20] <SpamapS> celeborn999: by default it shouldn't need any write perms unless you want to use the admin interface to do things.
[00:20] <celeborn999> SpamapS: yeah that's what i'm trying to do, install a theme through the wordpress admin
[00:22] <JanC> celeborn999: doesn't the wordpress documentation have something about that?
[00:23] <celeborn999> JanC: i was just looking at that, what kinda sucks is the official docs are written for people who (i guess) are having their filesystems managed by their webhost. for example the doc says: "However, if you utilize mod_rewrite Permalinks  or other .htaccess  features you should make sure that WordPress can also write to your /.htaccess  file." i don't know who "WordPress" is. www-data, maybe?
[00:24] <SpamapS> celeborn999: in that case, chgrp -R www-data wp-content/themes && chmod -R g+w wp-content/themes
[00:24] <celeborn999> JanC: or this blurb: "All WordPress files should remain owned by your user account" -- the wordpress files are all owned by root:root or root:www-data
[00:24] <SpamapS> Yeah the wordpress docs take the stance that if you are hosting your own wordpress, you know enough to figure this out.
[00:24] <celeborn999> SpamapS: or which IRC channel to spam, at least
[00:25] <JanC> SpamapS: even then, they should explain which directories need write access and which not IMNSHO  ☺
[00:26] <SpamapS> JanC: its not so much need.. its a choice. ;)
[00:26] <celeborn999> SpamapS: it looks like www-data is already the group for the directories you mentioned and has group write permissions
[00:27] <SpamapS> celeborn999: then should work fine
[00:27] <celeborn999> SpamapS: that's a bummer
[00:27] <SpamapS> celeborn999: its not just the dirs though.. the files must be writable
[00:27] <JanC> I guess it's the usual PHP app stupidness?  :-(
[00:29] <celeborn999> SpamapS: i checked every directory and file, they are all owned by root:www-data and have group write permissions. i wonder if there is some kind of "landing spot" where incoming downloads are stored before they are installed, and the landing spot needs permissions
[00:30] <celeborn999> does wordpress write stuff as www-data?
[00:30] <celeborn999> or is there some third account it likes to use sometimes?
[00:30] <JanC> celeborn999: that depends on your www-server config  ;)
[00:30] <SpamapS> celeborn999: should if apache is running as www-data
[00:31] <celeborn999> i use apache and it uses www-data
[00:31] <SpamapS> celeborn999: I have to agree with JanC .. the WP docs and forums should help with this
[00:31] <celeborn999> i agree with both of you, they really should
[00:32] <JanC> personally I think the docs should be enough, this sounds like basic stuff every sysadmin installing WP should know
[00:32] <celeborn999> like with everything else i've installed recently, i'm sure once i figure it out, it will all make perfect sense
[00:34] <celeborn999> i found the Debian-specific notes for Wordpress (/usr/share/doc stuff) to be unusually unhelpful at previous steps in this process, relative to other software i've installed
[00:45] <JanC> celeborn999: in general the Debian-specific notes only document changes from upstream
[00:46] <JanC> if upstream is weird and undocumented, good luck...  :-/
[00:46] <celeborn999> JanC: Debian pretty heavily customizes Wordpress, they have a special mysql install script and a totally different way of handling wp-config.php
[00:47] <celeborn999> JanC: to give two examples
[00:47] <JanC> in that case, that should be documented in Debian-specific docs of course
[01:03] <erichammond> smoser: Did you just publish a new copy of ec2-api-tools or was I dreaming? https://launchpad.net/~ubuntu-on-ec2/+archive/ec2-tools?field.series_filter=karmic
[01:07] <smoser> ami tools
[01:07] <smoser> erichammond,
[01:07] <smoser> but only in lucid
[01:07] <smoser> is there a new api tools ?
[01:08] <smoser> er... i only put it in maverick at the moment.
[01:08] <smoser> https://bugs.launchpad.net/ubuntu/+source/ec2-ami-tools/+bug/582387
[01:18] <erichammond> smoser: I see, thanks.  I was only partly dreaming.
[01:19] <erichammond> and mostly just confused.
[01:19] <smoser> well, there is a ec2-api-tools
[01:19] <smoser> now that i go looking
[01:37] <celeborn999> so for the record, for my wordpress problem, here is the answer: http://www.chrisabernethy.com/why-wordpress-asks-connection-info/ ....... wordpress has a silly method of checking for filesystem permissions, it writes a test file and checks to see if the owner of the test file matches the owner of the script being run. of course with ubuntu the file owner is root but the test file is written as www-data (for apache)
[01:38] <celeborn999> so i can workaround the problem by chowning some files to www-data but this will get blown away during an apt-get upgrade (for example). sucks.
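(The ownership comparison from the linked article can be boiled down to a tiny, self-contained shell demo; the file names below are illustrative stand-ins, not WordPress's actual code.)

```shell
#!/bin/sh
# WordPress writes a probe file and compares its owner with the owner of
# one of its own script files; only when they match does it write to the
# filesystem directly instead of prompting for FTP credentials.
workdir=$(mktemp -d)
touch "$workdir/script.php"   # stands in for a packaged WordPress file
touch "$workdir/probe.tmp"    # stands in for the temp file WP writes
if [ "$(stat -c %u "$workdir/script.php")" = "$(stat -c %u "$workdir/probe.tmp")" ]; then
    echo "owners match: direct filesystem writes allowed"
else
    echo "owners differ: fall back to the FTP prompt"
fi
rm -rf "$workdir"
```

Here both files are created by the same user, so the owners match; on a stock Ubuntu package install the script files are owned by root while the probe is written as www-data, so the owners differ and the FTP prompt appears.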
[01:57] <celeborn999> how can i tell ubuntu/apt-get not to try to upgrade a particular package in the future? i want to disable updates for wordpress only and just do the upgrading through the wp admin console
[01:58] <cloakable> Uninstall wordpress and download it from the website?
[01:58] <celeborn999> there's got to be a generic way to do it
[01:58] <cloakable> Yes.
[01:58] <cloakable> Install wordpress from wordpress.org :P
[01:59] <celeborn999> i mean generic for all packages
[01:59] <celeborn999> i.e. i like the current version of FOO, please never try to upgrade it
[02:01] <celeborn999> answer: use aptitude, find package, press "=" to "hold package"
[02:01] <celeborn999> at least that's what i think will work, we'll see down the road i suppose
[02:10] <JanC> celeborn999: basically you want "apt pinning"
[02:11] <JanC> but that also means you'll soon be using a wordpress instance full of security bugs I assume...
[02:11] <SpamapS> honestly if an app has to be chown/chmodded to work..
[02:11] <SpamapS> it sux
[02:12] <JanC> (well, full of known security bugs, they'd have been there before already)
[02:12] <SpamapS> I use wordpress..
[02:12] <SpamapS> and I hate that part
[02:12] <celeborn999> based on the manpages it looks like pinning means you want apt to get the software from a different source or pin a different (past) version. instead, the intention is to ask apt not to do the update, and to use the wordpress console's upgrade utility instead; this should avoid the permissions problems
[02:12] <celeborn999> i think the hold package idea from aptitude is what i'm looking for
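(For the record, a hold can also be set non-interactively; assuming a Debian/Ubuntu system, either of these is equivalent to pressing `=` in aptitude.)

```shell
# mark wordpress as held so apt-get upgrade skips it (run as root):
echo "wordpress hold" | dpkg --set-selections
# inspect the current selection state:
dpkg --get-selections wordpress
# aptitude's command-line equivalent:
aptitude hold wordpress
```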
[02:43] <pwnguin> if celeborn comes back tell him that idea is stupid -- debian packaging of wordpress is worse than no packaging, and pinning an old version of wordpress is bound to be an attack vector
[03:40] <SpamapS> pwnguin: I agree, this is why we have nightly build ppa's.. :)
[03:42] <SpamapS> I kind of wish we had a 'volatile software' ppa where someone could just ask for the latest upstream to always be built and installed. Something tells me this already exists, I just can't find it in the amazon-rain-forest-sized documentation on debian packages. :-P
[03:47] <ScottK> No, the general point of having a release is to have stuff stop changing.
[03:48] <SpamapS> sadly, there are things like wordpress that are more stable when they change.
[03:48] <SpamapS> It stems from the ability to produce software faster than users can break it. Yes I'm saying PHP makes it too easy to program. :)
[03:49] <SpamapS> visual basic was the same. ;)
[03:49] <ScottK> Certainly, there are exceptions.
[03:50] <ScottK> PHP is one of those things that I understand is almost everywhere, but I prefer not to get any on me.
[03:50] <ScottK> Also it does take some effort to make sure the new stuff is packaged properly and works.
[03:50] <SpamapS> I think there's a certain class of things that shouldn't be packaged for release.. wordpress is probably the best example of it.
[03:50]  * ScottK is doing that right now for clamav.
[03:50] <SpamapS> yeah clamav changes *fast* and *must*
[03:53] <SpamapS> I guess the real lesson to learn is that there may not be a single unifying theory of packaging..
[03:54] <ScottK> clamav we treat as a special case and try to keep it current for all releases.
[03:54] <ScottK> It's a lot of work though.
[03:56] <SpamapS> Yeah, IIRC it will start squawking loudly in the logs if the engine falls behind the definitions
[03:57] <SpamapS> seems like at that point the engine becomes data as much as software
[03:57] <ScottK> Unfortunately it's production critical, security sensitive software....
[04:47] <andre_francys> does anyone know a tutorial for offline ldap file configuration? please
[06:13] <Brando753> is there a way I can connect to a wifi router with ubuntu server?
[06:17] <SpamapS> Brando753: of course. Go find a cable long enough and plug into the LAN port of the router. ;-)
[06:19] <twb> Brando753: yes.
[06:20] <Brando753> as good an option as that is, can I do it without the long cable?
[06:20] <twb> Brando753: yes.
[06:20] <twb> You will, obviously, need a wifi NIC.
[06:20] <Brando753> Network Interface Card?
[06:21] <twb> Brando753: yes.
[06:21] <Brando753> got it.
[08:21] <Weasel[DK]> are modeline in vim somehow disabled in *buntu ?
[08:43] <screen-x> Weasel[DK]: set modeline >> vimrc
[08:46] <Weasel[DK]> screen-x: Perfekt... Tak!
[08:47] <Weasel[DK]> screen-x: oops wrong language.... perfect.. Thanks !  ;)
[09:56] <c13> I am connected via ppp0. I want to share the internet to the network via eth0. How can i set up the network-manager to share the internet?
[09:56] <twb> Either bridge or masquerade
[09:56] <twb> Oh, you're using network-manager.  I don't support that, sorry.
[10:03] <sglinux> one of my 8.10 servers hosting a website has been compromised by Storm7Shell
[10:03] <sglinux> hosting oscommerce v2.2 rc2a
[10:25] <jetole> Hey guys. I am looking for some sudo help if anyone minds. I am in the admin group and prompted for a password properly; that is unchanged from the server install. However, I created a group alias and an application alias in the sudoers file and said this group alias can execute this app alias without a password as root, which has worked fine for non-admin users, but I am still being prompted for a password.
[10:26] <jetole> Does anyone know how I can execute the application alias without a password even though I want everything else I sudo to, to be password protected?
[10:26] <jetole> my line looks like: DNS_ADMINS ALL=(root) NOPASSWD: DNS_COMMANDS
[10:27] <jetole> I am in the DNS_ADMINS alias and other people not in the admin group are not prompted for a password who are in the DNS_ADMINS alias
[10:33] <c13> I am connected via ppp0, using a script. I want to share the internet to the network. /etc/network/interfaces shows "iface ppp0 inet ppp", but ppp0 does not appear in the network connections. How can i set up the network-manager to share the internet?
[10:33] <jetole> the ifconfig command should show ppp0
[10:33] <jetole> doesn't it?
[10:36] <jetole> c13: that question was to you
[10:36] <incorrect> i've just fdisk'd my drives however without mknod i don't see a /dev/sdxy appearing, i used to reload udev to see them appear, but on 10.04 that doesn't happen
[10:37] <c13> yes it does
[10:38] <jetole> c13: then ppp0 is active. I don't know where you are looking when you say it doesn't appear, but it is there regardless, so now all you need to do is enable ip forwarding in the kernel (man sysctl) and set up nat via netfilter/iptables
[10:39] <jetole> incorrect: not to sure about the udev in 10.04. I'm using it, haven't looked into it but do you see the disk in /dev i,e, not the partitions but do you see /dev/sda but not /dev/sda1 ?
[10:39] <jetole> *not too
[10:40] <incorrect> i am just missing the new ones i created
[10:40] <incorrect> fdisk -l shows them
[10:40] <jetole> incorrect: you didn't answer my question
[10:40] <incorrect> i did, i am just missing the new partitions i created
[10:40] <incorrect> the block device is there
[10:40] <_ruben> jetole: perhaps the order in the sudoers file is relevant?
[10:40] <jetole> do you see the whole disk in /dev
[10:41] <jetole> _ruben: yeah I have noticed that in the man page. Still not sure. I am still reading
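(On the ordering point: sudoers applies the *last* matching entry, so a broad rule for the admin group that appears after the NOPASSWD rule will win. A sketch of an ordering that behaves as intended, with illustrative command paths:)

```
Cmnd_Alias DNS_COMMANDS = /usr/sbin/rndc, /usr/sbin/named-checkconf
User_Alias DNS_ADMINS = jetole
# broad rule first...
%admin ALL=(ALL) ALL
# ...NOPASSWD rule last, so it is the final match for DNS_ADMINS:
DNS_ADMINS ALL=(root) NOPASSWD: DNS_COMMANDS
```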
[10:41] <incorrect> disk = block device
[10:41] <jetole> incorrect: do you see, for example /dev/sda if sda is the disk
[10:42] <jetole> I know what a block device is
[10:42] <c13> thx
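(The two steps jetole mentions, kernel IP forwarding plus NAT, can be sketched like this; run as root, interface names as in the conversation, and default iptables policies assumed.)

```shell
# enable IPv4 forwarding in the running kernel:
sysctl -w net.ipv4.ip_forward=1
# masquerade LAN traffic leaving via the ppp0 uplink:
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
# let forwarded traffic flow between the two interfaces:
iptables -A FORWARD -i eth0 -o ppp0 -j ACCEPT
iptables -A FORWARD -i ppp0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

To make the forwarding flag persist across reboots, set net.ipv4.ip_forward=1 in /etc/sysctl.conf.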
[11:11] <incorrect> aha! its partprobe i need
[11:16] <jetole> Oh, I could have helped you a while ago if you hadn't ignored my questions (and then me), but I guess it always feels good to find the answer yourself
[11:21] <incorrect> well i answered but you didn't seem to understand
[11:50] <selinuxium> Hi all, trying to install sun-java-jre on ec2... Which repo do I need to point at, or do I install it directly?
[11:55] <lifeless> the partner repo
[12:04] <incorrect> will the partner repo track releases from java.sun ?
[12:51] <jussi> o/ riktking
[12:52] <riktking> jussi, i think im just gunna remove the lamp stack and start again
[12:52] <jussi> fair enough
[12:52] <riktking> its not a mission critical website lol
[12:57] <riktking> fixed it!
[13:00] <DelphiWorld> hi all
[13:00] <DelphiWorld> i am using latest ubuntu 9.10 server
[13:00] <DelphiWorld> my ssh server is very slow
[13:00] <DelphiWorld> how do i fix this problem?
[13:01] <Japje> slow with login
[13:01] <Japje> slow when typing
[13:02] <DelphiWorld> Japje: login and typing both
[13:02] <selinuxium> lifeless, Thanks, sorry for the delay.. :)
[13:02] <Japje> login could be that resolving is slow
[13:03] <Japje> if both are slow.. perhaps high load, or much traffic on your side, or the server side
[13:04] <DelphiWorld> Japje: any other ssh server package to try?
[13:05] <Japje> DelphiWorld: that's probably not the right way to look at the problem
[13:05] <Japje> it's not the ssh server itself
[13:05] <Japje> it's something that's making it slow
[13:11] <Pupeno> How do you clear the arp cache?
[13:15] <Amarendra_> can somebody tell me how to install usb modem in ubuntu9.10??
[13:15] <Amarendra_> its is not been detected
[13:20] <SpamapS> Amarendra_: does it have drivers available? Some don't.
[13:20] <Amarendra_> ya it was first detected as cd drive
[13:21] <Amarendra_> i installed the driver
[13:21] <Amarendra_> now it is not detecting
[13:21] <SpamapS> if you run 'dmesg' do you see anything about it when you plug it in/remove it ?
[13:24] <Amarendra_> i am not able to connect in ubuntu so i switched to windows and downloaded x-chat to connect to this chat room.. Now i am in windows, so i cannot run any commands
[13:25] <SpamapS> ah
[13:25] <SpamapS> EtienneG: eaten all your chocolate yet?
[13:25] <Amarendra_> any more ideas???
[13:26] <DelphiWorld> Japje: fixed just by restarting it
[13:26] <DelphiWorld> Japje: and this is my major problem in Deb based systems;)
[13:26] <SpamapS> Amarendra_: you might want to try making sure acm is loaded.. (modprobe acm)
[13:27] <Amarendra_> acm??
[13:27] <SpamapS> Though hotplug, or whatever it is we have that has replaced that, should do it.
[13:27] <SpamapS> http://www.linux-usb.org/USB-guide/x332.html
[13:27] <SpamapS> Amarendra_: that explains what acm is
[13:27] <Amarendra_> ok
[13:28] <EtienneG> SpamapS, no, my kids are on it
[13:28] <EtienneG> there was a *lot*  :)
[13:28] <Amarendra_> Spamaps: Is there any way to uninstall the driver and start again??
[13:30] <SpamapS> Amarendra_: what driver did you install?
[13:30] <Amarendra_> cm200 driver
[13:31] <Amarendra_> shall i mail to u??
[13:34] <SpamapS> Amarendra_: isn't that a webcam driver?
[13:35] <Amarendra_> no
[13:35] <Amarendra_> my usb modem is CM200
[13:35] <Amarendra_> Provider= tata photon whiz
[13:36] <Amarendra_> should i send u the drivers??
[13:36] <SpamapS> Amarendra_: no thats ok
[13:37] <Amarendra_> ok
[13:37] <ttx> smoser: apparently lucid is still not available in the imagestore... I thought Gustavo had it covered ?
[13:37] <SpamapS> Amarendra_: I've had very bad luck with those things on anything but Windows.. :-P
[13:37] <Amarendra_> its .deb packages
[13:37] <Amarendra_> for ubuntu
[13:40] <SpamapS> Amarendra_: well then thats weird that it doesn't work. ;)
[13:41] <ttx> SpamapS: re: "how a blueprint gets released", you mean, the lifecycle of a blueprint ?
[13:42] <Amarendra_> i got some information from internet now i shall restart .. Thanx for yr cooperation
[13:42] <ttx> SpamapS: gets accepted, scheduled against a development subcycle ("maverick-alpha-2"), then work items are burnt... spec goes to beta available, then Implemented.
[13:43] <reisi> hi everyone! after doing a fresh install of ubuntu server 10.04 (over 9.10) one of our php apps went dead and while access.log shows 500 response nothing is logged into error.log; any ideas how to revive this functionality?
[13:46] <SpamapS> ttx: ok that all makes sense.
[13:47] <SpamapS> ttx: and who does the accepting?
[13:48] <ttx> SpamapS: the approver. Usually that will be Jos.
[13:53] <alvin> Hi, I have an urgent problem. No idea how difficult to solve. I upgraded a server from Karmic to Lucid. The server runs 3 virtual machines. Now, 1 of them doesn't want to start anymore. $ virsh start <nameofvirtualmachine> | error: Failed to start domain <nameofvirtualmachine> | error: monitor socket did not show up.: connection refused. (The server is on support, so I contacted Canonical, but I'm not sure if they will call back today.
[13:53] <alvin>  It's probably nighttime at the location of the helpdesk. Hence, I'm looking for tips here)
[14:07] <alvin> ok, got it. (Little bit of panic aside) I created a new virtual machine with the same properties, and compared the xml. There were a lot of differences. I guess the new xml is better. Now, it works fine. (There were differences in cdrom (type='raw' instead of ''), in <serial>, <graphics> and video)
[14:08] <SpamapS> ttx: so the portion of cloud databases where Cassandra wants hadoop overlaps with the hadoop-pig spec, which you are listed as drafter on..
[14:08] <ttx> yes.
[14:08] <SpamapS> alvin: good to hear it works out
[14:09] <riktking> having issues with apache2, can't seem to get the website to appear under http://hostname/username/
[14:10] <SpamapS> ttx: ok, so should I make this one dependent on that one?
[14:11] <ttx> SpamapS: not really. hadoop/hbase should already be in a shape that you can use for building cassandra
[14:12] <ttx> SpamapS: they are already packaged and should be in ubuntu anytime now
[14:12] <ttx> so just add a note that you are dependent, but do not mark the spec as fully dependent
[14:14] <SpamapS> ttx: should I requestsync for hadoop? has someone already done that?
[14:14] <ttx> SpamapS: it should get autoimported
[14:14] <smoser> ttx, this is correct, it is not.
[14:15] <smoser> i have acl to do it now, but its painful.
[14:15] <smoser> i had not put much effort into it because i was wanting to get a refreshed image
[14:15] <smoser> with the incrased dleep
[14:15] <smoser> increased sleep even
[14:16] <ttx> smoser: ok, you might want to reply to the thread on c-cloud to avoid getting hurt by backlash
[14:23] <Daviey> riktking: sudo a2enmod userdir , and use http://domain/~USERNAME
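(Expanded a little, Daviey's suggestion amounts to the following, assuming the userdir module's default configuration, which serves ~/public_html.)

```shell
sudo a2enmod userdir
sudo /etc/init.d/apache2 restart   # pick up the newly enabled module
mkdir -p ~/public_html             # site content goes here
# then browse to http://hostname/~username/
```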
[14:49] <AlexC_> morning
[14:50] <AlexC_> I have 'AllowGroups adm' in my /etc/ssh/sshd_config file, however I also want to allow the user 'foobar' (who is not in the 'adm' group) access to SSH. I added the line 'AllowUsers foobar', however this user still cannot log in
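(One likely cause, per sshd_config(5): when both AllowUsers and AllowGroups are set, a connecting user must satisfy *both* lists, so AllowUsers alone does not bypass the group check. A sketch of one fix; the group name 'sshusers' is hypothetical:)

```
# /etc/ssh/sshd_config
AllowGroups adm sshusers
# then on the shell: addgroup sshusers && adduser foobar sshusers
# (alternatively, drop AllowGroups and enumerate users in AllowUsers)
```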
[15:08] <kirkland> jbernard: neat trick :-)
[15:11] <tlb> just tried enabling apparmor on apache in Lucid and setting complain mode, but i get nothing in kern.log, am I looking in the right place?
[15:12] <jbernard> kirkland: thanks man! turned out to be much easier than I thought
[15:12] <kirkland> jbernard: yeah, really, really clean
[15:14] <jdstrand> tlb: if you have auditd installed, then it will log to /var/log/audit/audit.log instead of kern.log
[15:16] <tlb> jdstrand, i don't have that installed, but is that the recommended way?
[15:18] <jdstrand> tlb: it will log to kern.log without it. while developing profiles without auditd you will probably want to use 'sudo sysctl -w kernel.printk_ratelimit=0' to cut down on kernel rate limiting
[15:19] <jdstrand> tlb: note that sysctl will not survice a reboot
[15:19] <jdstrand> survive
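(To make such a setting survive a reboot, the standard place is /etc/sysctl.conf, which is applied at boot; a config fragment, not a command:)

```
# /etc/sysctl.conf -- disable kernel printk rate limiting:
kernel.printk_ratelimit = 0
```

`sudo sysctl -p` re-reads the file immediately.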
[15:20] <tlb> jdstrand, installed auditd and still nothing. when i set aa-enforce /usr/lib/apache2/mpm-prefork/apache2, apache fails to start because i enabled mod_fcgi, and i get nothing in the log when i set aa-complain
[15:21] <jdstrand> tlb: the failure to start would seem to be unrelated to appamor if it isn't logging anything
[15:21] <tlb> jdstrand, apache works fine when i set the profile to complain, but i get nothing i the log
[15:22] <tlb> jdstrand, in the apache log i can see it's a problem with creating shared memory when apparmor is set to enforce, but i would really like apparmor to give me some debug information in complain mode
[15:23] <jdstrand> tlb: it could be that the environment is being scrubbed because you are using Ux or Px...
[15:25] <jdstrand> apparmor won't (can't?) log in that situation cause the confined application isn't handling the lack of environment due to the scrubbing
[15:26] <tlb> jdstrand, it's the default apache profile that comes with Lucid and a clean install, so far I haven't touched a config file
[15:26] <jdstrand> you could try to use 'px' or 'ux' in enforce mode and see if that works
[15:26] <jdstrand> well, that apache profile in lucid is only for phpsysinfo
[15:27] <jdstrand> if phpsysinfo is not working with that profile, that is a bug
[15:27] <jdstrand> (in which case please file it, with exact steps on how to reproduce)
[15:27] <tlb> jdstrand, I'm sorry but i'm kind of new to apparmor, and I don't think I understand what scrubbing or ux and px mode does?
[15:28] <tlb> jdstrand, it's only because i'm trying to run it in fastcgi mode
[15:28] <jdstrand> tlb: are you trying to use the phpsysinfo profile?
[15:28] <Italian_Plumber> hmm... my ubuntu server virtual machine just started up with 64 MB of ram
[15:29] <jdstrand> tlb: or you just happened to enable the profile, and things broke cause you are using fastcgi?
[15:29] <tlb> jdstrand, yes but so far apache is not even starting if you enable mod_fcgi
[15:30] <jdstrand> tlb: sounds like a bug. can you file one against apparmor along with how you enabled fastcgi?
[15:30] <tlb> jdstrand, I'm trying to make a profile for mod_fcgi + suExec, but to start simple I just wanted to get the phpsysinfo profile working with fastcgi
[15:31] <Italian_Plumber> let's see if it works with 32. :)
[15:31] <tlb> jdstrand, if you give me some more hint, I'm sure I can come up with a patch and a bug report :)
[15:31] <jdstrand> tlb: sure. I did not develop that profile. it sounds like more needs to be done with it, and filing a bug is one way to make that happen :)
[15:32] <jdstrand> tlb: well, the hint was Ux/Px vs ux/px
[15:32] <jdstrand> I don't know that is the case
[15:32] <jdstrand> when you give a binary Ux, you are saying to transition to unconfined mode, but scrub the environment for things like LD_LIBRARY_PATH
[15:33] <jdstrand> the same for Px, except rather than going unconfined, you transition to another profile
[15:33] <jdstrand> ux/px means do the transition, but don't scrub the environment
[15:33] <jdstrand> in general, that is a bad idea, but it would be worthwhile to know if that was the cause
[15:34] <jdstrand> that may not have been as clear as it could be...
[15:34] <jdstrand> the rule:
[15:34] <jdstrand>   /usr/bin/foo Ux,
[15:34] <jdstrand> means that if the application tries to exec /usr/bin/foo, go unconfined and scrub the env
[15:35] <jdstrand> and by 'go unconfined' I mean, /usr/bin/foo executes unconfined, not the application that is executing it
[15:36]  * jdstrand sorta wishes he could have worded all that more clearly from the start
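(Condensed into profile-rule form, the exec-transition modes just described; paths illustrative:)

```
/usr/bin/foo Ux,   # exec unconfined, scrub the environment (LD_LIBRARY_PATH etc.)
/usr/bin/foo ux,   # exec unconfined, keep the environment (generally a bad idea)
/usr/bin/foo Px,   # transition to foo's own profile, scrub the environment
/usr/bin/foo px,   # transition to foo's profile, keep the environment
/usr/bin/foo ix,   # inherit the caller's profile; no scrubbing requested
```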
[15:37] <tlb> jdstrand, ok, so when mod_fcgi fails to make shared memory it might be because the fcgi daemon is running in an unconfined but scrubbed environment where it's missing some options
[15:37] <mw88> hi
[15:38] <mw88> Does anyone know if one can use the slapd.conf in Ubuntu 10.04? I read that it's not possible...
[15:38] <jdstrand> tlb: that is the hypothesis, yes. cause apparmor won't log anything if the app craps out due to env scrubbing, so it sorta seems to fit
[15:39] <tlb> jdstrand, is there a way to dump the complete profile with all includes included?
[15:39] <jdstrand> tlb: not currently
[15:39] <jdstrand> it used to be there, but went away
[15:39] <jdstrand> it will be back in maverick
[15:40] <tlb> jdstrand, so my job is to look at all the includes and see if something somehow triggers a Ux or Px
[15:40] <ttx> SpamapS: you should follow strictly https://wiki.ubuntu.com/WorkItemsHowto for your work items
[15:40] <jdstrand> tlb: in the profile or the includes, yes
[15:41] <tlb> jdstrand, is there some good documentation that describes what the permissions mean in an apparmor profile?
[15:41] <ttx> SpamapS: also move the discussion notes from the whiteboard to the "BoF discussion" section on the wikispec
[15:42] <jdstrand> tlb: yes. in apparmor-docs there is the techdoc.pdf (though it is a little outdated). also
[15:42] <jdstrand> https://apparmor.wiki.kernel.org/index.php/Main_Page
[15:42] <jdstrand> that is more up to date, but less organized (we are in the process of fixing that)
[15:44] <tlb> jdstrand, ok thanks, i will try to do a little debugging
[15:44] <jdstrand> tlb: thanks :)
[16:01] <SpamapS> ttx: oops I forgot the :'s didn't I? ;)
[16:02] <tlb> jdstrand, there is no Ux or Px anywhere in the profile, and i guess this '/** mrwlkix' should give access to do pretty much anything?
[16:03] <ttx> SpamapS: yep
[16:12] <jdstrand> tlb: well, I'm not sure what the problem would be
[16:13] <jdstrand> tlb: /** mrwlkix will do a transition, but with 'i', which 'i'nherits the current profile. aiui, it will inherit the current env as well.
[16:13] <jdstrand> jjohansen: is that accurate? ^
[16:14] <jjohansen> jdstrand: yeah, at least from an AA perspective
[16:14] <jjohansen> when ix is done apparmor does not request the environment be scrubbed
[16:15] <jdstrand> cool, yeah
[16:15] <jjohansen> however, other things like the loader may decide to scrub the environment anyways
[16:16] <jdstrand> tlb: so I'm not sure why apparmor is preventing apache from working with fastcgi and not logging it
[16:17] <jdstrand> jjohansen: he enabled the fastcgi module, and enabled the phpsysinfo profile for apache, but apache won't start (something with not being able to allocate shared memory)
[16:17] <jdstrand> jjohansen: if he disables the profile, it works. there is no logging in enforce (or complain) mode
[16:17] <jjohansen> tlb: is there any apparmor message in the log?
[16:18] <jjohansen> tlb: can you open a bug and attach the profile so we can look at it?
[16:18] <jdstrand> so, the only thing I could think of off-hand was scrubbing
[16:19] <tlb> jjohansen, this, but only the first time: type=APPARMOR_DENIED msg=audit(1274455154.369:125):  operation="capable" pid=15801 parent=1 profile="/usr/lib/apache2/mpm-prefork/apache2" name="dac_override"
[16:19] <jjohansen> tlb: as root can you do echo 1 > /sys/module/apparmor/parameters/debug
[16:20]  * jdstrand always forgets about that one...
[16:21] <tlb> jjohansen, did not give more information in the log
[16:21] <jjohansen> tlb: what happens if you add capability dac_override, to the profile
[16:21] <jjohansen> tlb: did you restart apache after doing that?
[16:21] <tlb> jjohansen, after adding debug yes
[16:22] <jjohansen> tlb: okay, that rules out AA scrubbing the environment
[16:22] <tlb> what's the easiest way to load my new profile? right now i'm doing apparmor_parser -R, apparmor_parser -r and then aa_enforce
[16:23] <jjohansen> tlb: apparmor_parser -r will replace without needing to do the remove
[16:23] <jdstrand> tlb: just apparmor_parser -r is enough
[16:24] <tlb> add capability dac_override,
[16:24] <tlb> makes it work?
[16:27] <tlb> jjohansen, isn't dac_override a tad much to give as a capability? DAC_OVERRIDE allows the reading or writing of any file on the system regardless of the ownership or permissions
[16:29] <jjohansen> tlb: well, yes it normally does but AA file rules clamp it down to what is listed in the profile
[16:29] <tlb> jjohansen, aah so even if dac_override is given, aa still has final say?
[16:30] <jjohansen> tlb: yes
[16:30] <jjohansen> in this case DAC is being applied first, and asking for capability dac_override,
[16:31] <jjohansen> AA has the option of denying that or allowing it, if you allow it it gets to apply further mediation after
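(As a profile fragment, the change under discussion would look roughly like this; the surrounding rules are elided, and as noted, AA's file rules still limit which files the capability can actually touch:)

```
/usr/lib/apache2/mpm-prefork/apache2 {
  capability dac_override,
  # ... existing file rules unchanged ...
}
```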
[16:35] <tlb> jjohansen, dac_override seems to do the trick, do you want that in a bug report?
[16:37] <jjohansen> tlb: you were getting the log message for that right?  If so it's really more of an apache behavior than an AA bug
[16:37] <tlb> jjohansen, but I was only getting it after the first time?
[16:38] <jjohansen> tlb: okay file the bug and we will try to replicate
[16:38] <tlb> jjohansen, if I want the error i need to reload the profile
[16:38] <jjohansen> strange
[16:38] <jjohansen> that is a bug then, make sure you attach the profile you are using
[16:39] <jjohansen> file the bug against apparmor so me and jdstrand will see it
[16:44] <Kbca> does anyone here use samba4 as a PDC?
[16:48] <zul> morning
[16:49] <guntbert> !br | Kbca
[16:55] <tlb> jjohansen, https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/583896
[16:55] <jjohansen> tlb: thanks
[16:55] <tlb> :9
[16:55] <tlb> :)
[16:55] <tlb> jjohansen, thanks for the help both of you
[16:56] <jjohansen> np, any time
[17:18] <bkingx> Greetings!  I can't seem to get ssh working using key authentication without asking for a password.  Can someone help?
[17:19] <guntbert> bkingx: what did you do already?
[17:20] <bkingx> guntbert: Created the keys without using a passphrase, copied the keys to the remote server and cat'd it into .ssh/authorized_keys.
[17:21] <bkingx> guntbert: set permissions on authorized_keys to 600
[17:21] <guntbert> bkingx: when you try to connect it still asks for a password? what does /var/log/auth..... tell?
[17:23] <bkingx> guntbert: Correct. And I don't get anything in /var/log/auth.log until I actually enter the password.
[17:23] <guntbert> bkingx: let me look some things up
[17:24] <deslector> bkingx, it may be easier to use ssh-copy-id (it will copy the key to the remote server and take care of everything)
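For reference, the manual steps bkingx describes roughly correspond to the following (user, host, and key type are placeholders):

```shell
# Generate a key pair with no passphrase
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Preferred: ssh-copy-id installs the key and fixes permissions in one go
ssh-copy-id user@remote.example.com

# Manual equivalent of the cat'ing described above
cat ~/.ssh/id_rsa.pub | ssh user@remote.example.com \
  'mkdir -p ~/.ssh && chmod 700 ~/.ssh && \
   cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'
```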
[17:24] <bkingx> guntbert: When I do log in, I get one line: May 21 12:21:03 sftp sshd[2850]: pam_unix(sshd:session): session opened for user 20383 by (uid=0)
[17:24] <bkingx> Hmm...this user is set up as scp-only, so it makes it difficult to run that command.
[17:24] <deslector> bkingx, just make sure to manually check that it only put the one key you wanted on the other server (just in case)
[17:24] <deslector> bkingx, oh, ok...
[17:26] <deslector> bkingx, have you tried the -v flag to get more info from ssh/scp ?
[17:26] <bkingx> deslector: Yes, I even went as far as -vvv  I'll post that to pastebin.
[17:26] <deslector> bkingx, ok
[17:27] <bkingx> deslector: http://pastebin.com/za4qyYKN
[17:27] <guntbert> bkingx: I didn't find anything special in /etc/ssh/sshd_config -- sorry
[17:27] <bkingx> guntbert: yeah, this has me baffled.
[17:28] <bkingx> I've tried both rsa and dsa keys
[17:29] <guntbert> bkingx: is id_dsa the key you want to use?
[17:30] <guntbert> and told the server to expect?
[17:30] <bkingx> either one is fine, id_dsa or id_rsa
[17:32] <guntbert> bkingx: after "we sent a public key, waiting..." you should get "debug1: Server accepts key: pkalg ssh-rsa blen 277" or so
[17:33] <bkingx> guntbert: can you think of a reason why I can't get that?
[17:34] <guntbert> bkingx: yes but it will be of no help : the server is still not ready to accept pubKey auth  : did you restart the sshd ?
[17:34] <zekoZeko> hello everyone. I'm setting up Postfix and having some trouble with local recipient verification. The verification probes (to Cyrus' LMTP port) are successful, but Postfix still rejects the client with "Recipient address rejected: User unknown in local recipient table"
[17:35] <bkingx> guntbert: yes, after every change.  What else do I need to do to make the server accept?
[17:36] <guntbert> bkingx: I really don't know - therefore I said it will not help you :-)
[17:36] <bkingx> guntbert: lol!  Here is my sshd_config file:  http://pastebin.com/sFhfZsYu
[17:36] <deslector> bkingx, which key did you copy to the remote server?
[17:37] <bkingx> deslector: id_rsa.pub and id_dsa.pub
[17:38] <deslector> bkingx, and you cat'ed both of them into authorized_keys ?
[17:38] <bkingx> deslector: Correct.
[17:38] <guntbert> bkingx: I think I found it: set UsePAM no
[17:38] <bkingx> guntbert: Making that change now...standby
[17:40] <SpamapS> zekoZeko: where, in your postfix config, are you telling postfix to check w/ Cyrus?
[17:40] <bkingx> guntbert: Permission denied (publickey,keyboard-interactive).
[17:41] <guntbert> bkingx: then I don't know - sorry
[17:43] <bkingx> guntbert: No problem...I can't figure it out either.
[17:43] <bkingx> It works fine WITH a password, just not with publicKey authentication.
[17:44] <deslector> bkingx, did you try using just one key type (dsa or rsa) first?
[17:44] <bkingx> deslector: Yes.  Do you recommend one over the other?
[17:45] <deslector> bkingx, not really... most tutorials I've read use dsa, but not sure why... so I couldn't recommend it over rsa
[17:46] <bkingx> deslector: Ok, thanks!
[17:46] <guntbert> bkingx: you could increase the logging level on your server...
[17:47] <bkingx> guntbert: can you tell me how to do that?
[17:47] <guntbert> bkingx: line 22 loglevel DEBUG (try it, I'm not sure)
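The change guntbert is suggesting is a one-line edit to /etc/ssh/sshd_config followed by a restart; a sketch:

```
# /etc/ssh/sshd_config -- verbose logging for diagnosing auth failures
LogLevel DEBUG

# afterwards: sudo /etc/init.d/ssh restart
```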
[17:48] <SpamapS> Anyone know why puppetmaster in Ubuntu installs with a cert signed with the FQDN instead of the default which is 'puppet' ? Means that if you're using DNS to route clients to the puppet master, puppet doesn't work.
[17:50] <bkingx> guntbert: YOU ARE A GENIUS!!
[17:51] <zekoZeko> SpamapS:  my config and logs: http://paste.ubuntu.com/437422/
[17:51] <guntbert> bkingx: by no means :-)
[17:51] <bkingx> guntbert: By chroot'ing the user and dropping them into an "incoming" folder, the authentication is looking to "May 21 12:47:17 sftp sshd[2986]: debug1: trying public key file /home/20383//incoming/.ssh/authorized_keys"
[17:52] <bkingx> guntbert: so now it is a matter of figuring out how to fix that.
[17:52] <bkingx> guntbert: should I just move the .ssh folder into that "incoming" folder?
[17:52] <guntbert> bkingx: aah - you *could* have said that you are chrooting them -- that is known to be a tough problem
[17:53] <bkingx> guntbert: SORRY SORRY SORRY!
[17:53] <bkingx> Didn't even occur to me.
[17:53] <guntbert> bkingx: np :-)  but I have no resolution -- please try without chroot for now so that you know you are not chasing the wrong rabbit
[17:54] <bkingx> guntbert: K
[17:54] <guntbert> bkingx: and then googling for ssh chroot might reveal some answers
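One workaround for the chroot problem is to keep the key file outside the chrooted home; a hedged sketch for sshd_config (the path, pattern, and Match block are assumptions, and older sshd releases may not accept all of these directives):

```
# Look up keys outside the users' chrooted homes
AuthorizedKeysFile /etc/ssh/authorized_keys/%u

Match User 20383
    ChrootDirectory /home/%u
    ForceCommand internal-sftp
```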
[17:54] <bkingx> guntbert: lol....doing that now
[17:55] <guntbert> bkingx: Good luck :-)
[17:55] <bkingx> guntbert: YES!  Thanks again!
[17:57] <SpamapS> zekoZeko: sorry my knowledge on the subject isn't all that great.. I don't see anything glaringly wrong.
[17:57] <zekoZeko> me neither :)
[17:57] <zekoZeko> and i've set up quite a few of these before, just never with multiple instances :)
[17:57] <SpamapS> zekoZeko: smtpd_recipient_restrictions = permit_mynetworks,  reject_unauth_destination,  reject_unverified_recipient
[17:57] <SpamapS> zekoZeko: I presume one of those is linked to the lmtp check?
[17:58] <zekoZeko> yeah, reject_unverified_recipient
[17:58] <zekoZeko> there's also an implicit permit at the end (because of reject_unauth_destination)
[17:59] <dae_> Hi all! I'm switching my home server from debian etch to ubuntu 10.04, so far a smooth process mostly due to excellent documentation efforts (thanks!). I'm trying to make a decision on what software to use to sort mail identified as spam into folders automatically to the server. Currently on the old server I'm using procmail with cyrus and postfix but I'm wondering whether I should go with procmail on ubuntu 10.04 or to use dovecot
[17:59] <dae_> LDA (deliver) together with sieve instead?
[18:00] <zekoZeko> erm
[18:00] <zekoZeko> how do you use procmail and cyrus together?
[18:00] <zekoZeko> cyrus does LMTP
[18:00] <zekoZeko> err
[18:00] <zekoZeko> cyrus does Sieve
[18:01] <zekoZeko> and you can deliver to folders using + addressing. I use user+Spam to sort mail into their Spam folder.
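The + addressing zekoZeko describes is controlled by a single Postfix setting; a minimal main.cf sketch:

```
# /etc/postfix/main.cf -- make user+Spam@example.com deliver to "user",
# leaving the "Spam" extension available for folder routing in Cyrus
recipient_delimiter = +
```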
[18:01] <dae_> Yeah, I'm not using sieve on the old server with cyrus just procmail.
[18:02] <zekoZeko> how do you do that? Call deliver through procmail or what?
[18:02] <dae_> Ehh, it's been about 5 years since I've set that up... Frankly I can't remember, hold on I'll check...
[18:03] <zekoZeko> i mean that's the only way i could fathom of using those two together, and i don't really think it's optimal :)
[18:03] <zekoZeko> just use Sieve
[18:03] <zekoZeko> and as I've said, you don't even need sieve if you can use the address extensions.
[18:04] <dae_> Ok, seems like I have setup procmail to call cyrdeliver.
[18:05] <zekoZeko> that's what i thought, yeah.
[18:05] <zekoZeko> anyway, you can continue using Cyrus, except you use LMTP to deliver mail, which is way more efficient
[18:05] <zekoZeko> and use Sieve for filtering
[18:05] <zekoZeko> or you can go the Dovecot way and again use Sieve for filtering
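Either way, the Sieve side of the filtering could look something like this, assuming SpamAssassin is adding its usual X-Spam-Flag header:

```
require ["fileinto"];

# File anything SpamAssassin marked as spam into the Spam folder
if header :is "X-Spam-Flag" "YES" {
    fileinto "Spam";
}
```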
[18:06] <SpamapS> http://git.gluster.com/?p=glusterweb.git;a=tree
[18:06] <SpamapS> wow
[18:06] <SpamapS> autoconf..
[18:06] <SpamapS> to build rpms
[18:06] <SpamapS> full of .php files
[18:06] <dae_> Fine, so I'm happy to skip procmail and just use dovecot deliver.
[18:07] <dae_> Have you set up spamassassin to add the "+Spam" then instead of the extra headers?
[18:08] <zekoZeko> asking me?
[18:09] <dae_> Yeah... Sorry
[18:12] <SpamapS> Honestly I gave up on running spamassassin myself about 3 years ago.. so many cheap services do it better than I can. :-P
[18:15] <dae_> SpamapS, I see your point... Which solution have you chosen?
[18:16] <SpamapS> dae_: lately the people who host my VPS offer filtering through a Barracuda for free as long as the volume is low.
[18:17] <SpamapS> I remember when Barracuda came out..
[18:17] <SpamapS> I was a consultant selling my own sort of anti-spam auto-firewall appliance solution and they just cut my legs right out from under me. :-P
[18:19] <dae_> SpamapS, hopefully you had more services to offer :-)
[18:19] <SpamapS> dae_: Not really... gave up, closed up shop, and got a real job for a while. ;)
[18:20] <dae_> SpamapS, I have been thinking about looking into what my web hosting service can offer. But I need to convert my old server to ubuntu first and spamassassin does a decent job for me right now.
[18:22] <SpamapS> dae_: yeah it does a decent job no doubt. I just think the time for running everything myself has passed _for me_. Its an amazing learning experience to try and keep ahead of the rat bastard spammers.
[18:23] <dae_> So I'll go with converting my debian etch setup using postfix -> spamassassin -> procmail -> cyrus to postfix -> spamassassin -> dovecot deliver -> mailbox under ubuntu then.
[18:24] <Daviey> dae_: Personally, i do spamassassin @ arrival time with postfix
[18:24] <Daviey> then pump it into procmail
[18:24] <dae_> SpamapS, sometimes I'm thinking about dropping my own domain altogether and just use my gmail accounts instead...
[18:25] <SpamapS> dae_: the nice thing is you can keep using your domain, but just pump the email through gmail
[18:25] <dae_> Daviey, that was the other solution I was thinking about... What are the pros of going that way?
[18:26] <SpamapS> dae_: but.. I still do appreciate the control I have with my own server for storage.
[18:26] <SpamapS> Sometimes I do wish I had as good server-side text searching as gmail though...
[18:26] <dae_> SpamapS, how do I pump email through gmail ?
[18:27] <Daviey> dae_: meh, wfm :)
[18:27] <dae_> Daviey, fair enough :-)
[18:29] <zekoZeko> dae_: sorry, was away for a while.
[18:29] <dae_> SpamapS, forward all mail to gmail and setup a gmail forward back to my own server?
[18:29] <zekoZeko> dae_: I'm using amavisd-new to add the address extension
[18:29] <SpamapS> dae_: you just have to create an apps account.. standard edition is free. :)
[18:29] <zekoZeko> dae_: actually not yet, this is a new server; on the old one it just adds headers and users can filter on that.
[18:29] <dae_> zekoZeko, ok, I'll look into that. Thanks!
[18:31] <dae_> SpamapS, interesting... Will look into that.
[18:33] <SpamapS> dae_: quite a few of my friends have done just that.
[18:33] <SpamapS> dae_: but, I still find it interesting to run my own IMAP+SMTP :)
[18:34] <SpamapS> just not my own spam filter
[18:44] <zul> SpamapS: done...uploaded
[18:49] <SpamapS> zul: woot
[18:50] <SpamapS> zul: perhaps fixing the root bug in debian would be a good thing for one of us to do, since Debian has been kind enough to add their own default-mta. :)
[18:51] <zul> SpamapS: maybe...well have to see
[18:52] <SpamapS> zul: should we report it as a bug against exim4? Like.. "you're taking users unfairly!"
[18:52] <zul> SpamapS: lemme think about it
[18:54] <micahg> zul: ping re last php upload / size of php5-common
[18:54] <zul> micahg: hmmm?
[18:55] <micahg> zul: so, the test results are a meg larger in the latest upload
[18:55] <micahg> usr/share/doc/php5-common/test-results.txt.gz
[18:56] <zul> micahg: ah ok...please open a bug in launchpad
[18:56] <micahg> zul: k
[19:05] <olvs> hi
[19:05] <olvs> whats your take on the i7 on a linux box?
[19:05] <oru_work> how do i find out which version of ubuntu server it is?
[19:05] <oru_work> upgrade ??
[19:05] <micahg> oru_work: lsb_release -a
[19:06] <olvs> i was thinking of upgrading my server from a phenom to an i7
[19:06] <olvs> but just wanted to get some reviews on if there is any large performance gain here running an i7
[19:17] <hersoy> hello
[19:17] <hersoy> channel 8: open failed: administratively prohibited: open failed <- what does that mean?
[19:17]  * ccheney at lunch, bbl
[19:23] <hggdh> hersoy: this sounds like an ICMP response (communication administratively prohibited)
[19:25] <hersoy> ssh -D 12345 huseyin@12.34.56.78, and System > Preferences > Network Proxy set to SOCKS host localhost, port 12345
[19:26] <hersoy> and I get the error; how can I fix it?
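That "administratively prohibited" message often means the server side refuses to open forwarded channels; one thing worth checking (a guess at the cause, not a certainty) is this sshd_config directive on the remote server:

```
# /etc/ssh/sshd_config on 12.34.56.78 -- dynamic (-D) forwarding
# only works when TCP forwarding is allowed
AllowTcpForwarding yes
```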
[19:26] <olvs> is there any large performance gain running an i7
[19:26] <olvs> compared to a phenom
[19:27] <vraa> olvs, yeah duh, i7 is much newer and faster
[19:27] <vraa> but lot more $$$ too
[19:28] <vraa> you can start here http://techreport.com/articles.x/18799/5 for some synthetic benchmark results
[19:57]  * ccheney forgot to actually leave, heh
[19:57]  * ccheney will just eat his desk
[20:01] <zul> ccheney: yeah that usually helps
[20:01] <ccheney> zul, heh, apparently i typo'd and meat eat at my desk, but eating it might help as you say :)
[20:01] <ccheney> er meant
[20:01]  * ccheney must be hungry considering the types of typos he is committing
[20:12] <Datz> Hi, there are no audio drivers installed by default?
[20:13] <Datz> If not, which are best to install?
[20:14] <Datz> tried to play some music with mpg321, I think it looked for ALSA
[20:14] <Datz> guess I'll try to install that
[20:15] <Datz> humm, more than 40 packages contain ALSA in the description, but none matched the exact string ALSA
[20:17] <Datz> let's try alsa instead of caps
[20:20] <Datz> ok.. installed "alsa" still not playing
[20:30] <Dev_> Sir, I am facing problems implementing a grid portal. On my Ubuntu Server 9.10, apt-get update fails with a connection error after 20 percent, so I am unable to install the JRE or configure my certificates. The Globus components also want a JAVA_HOME path to work, but my JRE can't be configured.
[20:59] <maruen> Hi all, I'm getting some weird error when launching jboss: Protocol handler start failed: java.net.BindException: Permission denied /0.0.0.0:443
[20:59] <maruen> Can anyone help me solve that?
[21:17] <Zelest> I'm running 8 instances of qemu-kvm and I've noticed that ksmd is using loads of CPU.. I've read that you can change the interval which ksmd sleeps in /etc/default/qemu-kvm .. but (how)? can I restart ksmd without restarting the qemu-kvm instances?
[21:46] <smoser> Zelest,
[21:47] <smoser> sudo sh -c 'echo 200 > /sys/kernel/mm/ksm/sleep_millisecs'
[21:48] <smoser> maruen, you have to be root to bind to that port.
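Ports below 1024 are privileged, hence the BindException on 443. Rather than running JBoss as root, two common workarounds, sketched with example paths:

```shell
# Option 1: grant the JVM binary the bind capability (affects every
# process run from that binary -- use with care; the path is an example)
sudo setcap 'cap_net_bind_service=+ep' /usr/lib/jvm/java-6-sun/jre/bin/java

# Option 2: keep JBoss on an unprivileged port and redirect 443 to it
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443
```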
[21:49] <maruen> smoser, So, I need to run this script as root?
[21:49] <maruen> right?
[21:49] <smoser> what script ?
[21:50] <maruen> I was launching jboss
[21:50] <maruen> smoser, but I think you solved my problem
[21:51] <Zelest> smoser, oh, thanks!
[21:51] <maruen> smoser, no, I'm still having the problem
[21:51] <maruen> 17:50:49,581 ERROR [Http11Protocol] Error starting endpoint
[21:51] <maruen> java.net.BindException: Permission denied /0.0.0.0:443
[21:51] <maruen> smoser, I ran the script using root as user
[21:51] <maruen> smoser, but still the same
[21:52] <smoser> maruen, sorry. can't be of more help then.
[21:52] <maruen> smoser, thanks anyway
[21:53] <maruen> smoser,
[21:53] <maruen> I run with root as user
[21:53] <maruen> but when I hit ps -axu, the user that created the job was not root
[21:53] <maruen> strange
[21:53] <maruen> so, I still need to run it as root
[21:57] <maruen> smoser, you are the one
[21:59] <Zelest> smoser, Is it possible to set this value even higher? I mean, what are the drawbacks of increasing the time between each scan?
[22:00] <maruen> smoser, it worked now
[22:00] <maruen> thanks
[22:00] <maruen> smoser, you are the one!!!!
[22:00] <smoser> Zelest, i'm not terribly sure. but it can be disabled entirely (which was the default prior to lucid), so it's not like the end of the world.
[22:00] <smoser> i would think that there is some medium where you're not wasting effort scanning for duplication , but you're saving some memory
[22:01] <smoser> experiment i think
[22:04] <Zelest> smoser, ah, fair enough.. as for ram, I'm not that fussed really.. but if this option is available to save ram, I'll gladly use it.. but not at the price of 30-35% cpu ;)
[22:05] <Zelest> smoser, /etc/defaults/qemu-kvm's commented delay is 2000.. so I guess that's safe to use.
[22:05] <Zelest> once every second that is.
[22:05] <smoser> Zelest, yeah. i shouldn't have said 200
[22:05] <smoser> thats too low
[22:05] <smoser> i think the default per the kernel is 20
[22:06] <soren> This isn't between each full scan, IIRC, though.
[22:06] <soren> The delay is between each iteration. How many pages it scans in each iteration is another configurable.
[22:06] <Zelest> Oh
[22:07] <Zelest> /sys/kernel/mm/ksm/pages_to_scan I presume?
[22:07] <soren> /sys/kernel/mm/ksm has the stuff
[22:07] <soren> Zelest: Right.
[22:07]  * Zelest goes breaks his virtualization host :D
[22:07] <soren> static unsigned int ksm_thread_pages_to_scan = 100;
[22:07] <soren> Whoops.
[22:07] <soren> /* Number of pages ksmd should scan in one batch */
[22:08] <soren> static unsigned int ksm_thread_pages_to_scan = 100;
[22:08] <soren> From the kernel.
[22:08] <Zelest> Ah
[22:08] <soren> /* Milliseconds ksmd should sleep between batches */
[22:08] <soren> static unsigned int ksm_thread_sleep_millisecs = 20;
[22:08] <Zelest> is there any way to see how many pages are being used atm?
[22:08] <soren> Those two variables map to /sys/kernel/mm/ksm/pages_to_scan and /sys/kernel/mm/ksm/sleep_millisecs, respectively.
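Those sysfs files can be read and written directly; a small sketch for inspecting and retuning KSM (requires a KSM-enabled kernel, and root for the write):

```shell
# Show the current KSM tunables and sharing counters
for f in sleep_millisecs pages_to_scan pages_shared pages_sharing; do
    printf '%-16s %s\n' "$f" "$(cat /sys/kernel/mm/ksm/$f)"
done

# Slow the scanner down to one batch every 2 seconds, as discussed above
echo 2000 | sudo tee /sys/kernel/mm/ksm/sleep_millisecs
```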
[22:08] <Zelest> /sys/kernel/mm/ksm/pages_shared ?
[22:10] <soren> Hang on, let me find the docs for you.
[22:11] <soren> http://tinyurl.com/3amepm8
[22:11] <soren> Zelest: ^
[22:12] <Zelest> thanks a ton! :D
[22:12] <soren> Sure.
[22:18] <corpse> Hi, i just got done installing ubuntu server. right after the restart i get "Missing operating system" (the drive i installed to is set to boot first)
[22:23] <kirkland> hggdh: around?
[22:23] <hggdh> kirkland: yeah
[22:42] <StrangeCharm> when I run tasksel, I get a bunch of perl locale error messages. what do they mean, and how do I fix it? http://pastebin.com/7NmwitL6
[22:47] <maruen> I need a good job....is anyone in this channel offering one?
[22:49] <JanC> maruen: Ubuntu-related jobs are at http://webapps.ubuntu.com/employment/
[22:50] <JanC> if you are looking for a job as an Ubuntu server admin, I'm not sure there exists a site for that...
[22:51] <maruen> JanC, Thank you
[22:51] <maruen> I applied for some position there
[22:51] <maruen> JanC, Now I just have to hope they will contact me
[22:57] <corpse> is there a better way to edit a file than vi?
[22:57] <corpse> it's making me want to throw my pc out the window
[22:58] <cloakable> vim :P
[22:58] <cloakable> nano
[22:58] <cloakable> emacs
[22:58] <cloakable> ed
[22:58] <corpse> lol thanks <nub
[22:59] <corpse> i only use gedit ><
[22:59] <cloakable> install gedit onto the server and use X forwarding :)
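The X forwarding approach is a single ssh flag; a sketch (user and host are placeholders):

```shell
# -X tunnels X11, so gedit runs on the server but displays locally
ssh -X user@server.example.com gedit /etc/hostname
```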
[23:00] <corpse> i wasn't sure if it would work; i was thinking gedit was a gui utility
[23:01] <jbrouhard> cloakable, I use mcedit *a lot*
[23:01] <jbrouhard> works great and has syntax highlighting in a terminal
[23:02] <cloakable> heh
[23:02] <cloakable> I use vim myself :)
[23:03] <corpse> nano is working great. thanks a lot man
[23:04] <cloakable> syntax highlighting, spellcheck...
[23:04]  * f1yback is impressed with 10.04LTS so far, it's not *CANUCKED* like 6.06 was
[23:06] <jbrouhard> lol f1yback
[23:06] <jbrouhard> I'd try out Ubuntu cloud
[23:06] <jbrouhard> but I think my business will stay with XenServer for our virtualization
[23:06] <corpse> im just setting up a fileserver for a home network
[23:08] <f1yback> i'd use openfiler for a fileserver
[23:08] <jbrouhard> speaking of servers.. *goes to check on the Ebox development...*
[23:08] <jbrouhard> I'm using Openfiler for my NAS box
[23:08] <f1yback> i'm using ubuntu server in my mini-itx for cross compiling, and jtag programming stuff
[23:08] <f1yback> so far so good
[23:09] <cloakable> heh
[23:10] <jbrouhard> i've heard iffies about openfiler...
[23:10] <jbrouhard> but that's mostly in terms of someone totally borking the install
[23:10] <jbrouhard> or not doing it right
[23:13] <corpse> yeah so far im pretty good at not doing it right
[23:13] <corpse> new error: don't seem to have all the variables for eth0, networking says "failed to bring up eth0"
[23:20] <jbrouhard> corpse, sounds like you didn't give it all the IP info
[23:20] <corpse> jbrouhard:  i have modified the interfaces file to make the server static. from what i can see i have it all set up correctly
[23:21] <corpse> if i ifconfig eth0 up it comes on
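A typical static stanza for /etc/network/interfaces looks like this (addresses are examples); note that without the `auto eth0` line the interface is not brought up at boot, which matches the symptom described:

```
# /etc/network/interfaces -- static configuration for eth0
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
```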
[23:23] <deslector> hi, does it make sense to have software RAID1 on the same disk (different partitions)?
[23:24] <corpse> any reason why sudo won't give me permission to /etc/hostname?
[23:28] <f1yback> deslector not really
[23:29] <deslector> f1yback, because the hard disk would probably die as a whole?
[23:30] <f1yback> yeah
[23:42] <deslector> f1yback, ok, thanks!