[00:12] <amstan> hey guys, why would i get permission denied for this one cronjob?
[00:13] <amstan> it's user cron, and when i do the exact command manually it works
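The usual culprit for this pattern (works by hand, fails from cron) is cron's minimal environment: user crontabs run with a stripped-down PATH and no login shell. A sketch of the usual fixes; the script path and schedule are illustrative:

```
# crontab -e fragment: set PATH explicitly and use absolute paths
PATH=/usr/local/bin:/usr/bin:/bin
# capture stderr so the actual "permission denied" target shows up in the log
*/5 * * * * /home/amstan/bin/job.sh >> /tmp/job.log 2>&1
```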
[00:18] <ayi> Hi, I am googling around for ways to create a failover setup with two ISPs, where one is expensive but reliable and the other is cheap but unreliable
[00:19] <ayi> It seems the "bonding" module may achieve this, but it looks like it declares a dead gateway/route based on the router responding, not on, say, an internet host
[00:19] <ayi> I'm guessing I would need to script this?
[01:10] <ruben23> how do I check the mysql version on an ubuntu server?
[01:18] <qman__> ruben23, mysql --version
[01:18] <jpds> ruben23: dpkg -l | grep mysql
[01:19] <jpds> ayi: What kind of routers do you have?
[01:43] <p1l0t> Why is it that when I change /etc/hostname and /etc/hosts (to make 127.0.1.1 match the hostname) I get other issues, like NetworkManager not working, on my lucid netbook?
[02:11] <ayi> jpds: very variable
[02:29] <Skaag> i need to find a cool dedicated server provider in the US that supports Ubuntu Server 10.04, any suggestions?
[03:58] <Fudge> anyone have an idea how to get HP DL380 G4 fans to spin down?
[04:18] <Psi-Jack> Hmm
[04:18] <Psi-Jack> Is there a "proper" or repo method to install Sun's JDK on Ubuntu 10.04 LTS?
[04:19] <Psi-Jack> sun-java6-jdk does not seem to exist anymore as an option.
[04:22] <Psi-Jack> Aha, found it. It was in the partner repo.
[06:18] <RudyValencia> How do I make a disc of files from the command-line of my server?
[06:46] <qman__> RudyValencia, see mkisofs and cdrecord
[06:49] <RudyValencia> Whoa, lots of options
[06:49] <RudyValencia> I don't know what half of them are for
[06:50] <RudyValencia> All I want to do is store the contents of a directory to a disc.
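For that simple case, two commands are enough. A sketch, assuming the burner is /dev/sr0 and the cdrkit tools are installed (on Ubuntu, mkisofs and cdrecord are provided by genisoimage and wodim):

```shell
# build an ISO9660 image from the directory (-r: Rock Ridge, -J: Joliet names)
genisoimage -r -J -o backup.iso /path/to/directory
# burn it; the device name is an assumption (wodim --devices lists burners)
wodim dev=/dev/sr0 -v backup.iso
```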
[07:59] <talcite> hey guys, I'm seeing weird behaviour from portmap. I think it's an NFS misconfiguration. Could someone help me out?
[07:59] <talcite> I've got an internal network, 10.1.1.x, and an external network 134.117.55.x . My NFS traffic goes on 10.1.1.x (i'm pretty sure)
[08:00] <talcite> however, in the logs of all the NFS clients, I keep getting portmap errors saying there's unauthorized requests from 134.117.55.52 (my NFS server specifically)
[08:01] <talcite> it's really weird because I can't think of any config files that tell the NFS server to use the 134.117.55.52 interface
[08:02] <talcite> so I don't understand why I'm getting ypserv requests over that network.
[08:04] <talcite> the exact error is: Jul 11 02:54:54 s1 portmap[5048]: connect from 134.117.55.52 to callit(ypserv): request from unauthorized host
[08:04] <ruben23> hi guys, any suggestions for a good open-source firewall app? i mean one that's widely used and well maintained.
[08:07] <Jordan_U> !firewall | ruben23
[08:11] <ruben23> Jordan_U: you don't recommend a firewall app..?
[08:12] <Jordan_U> ruben23: Ubuntu comes with ufw (which, like all Linux firewalls, uses iptables), and it's good and well integrated.
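Day-to-day ufw use is a handful of commands; a sketch (the ssh rule is just an example, but on a remote server allow it before enabling):

```shell
sudo ufw allow ssh      # equivalent to: sudo ufw allow 22/tcp
sudo ufw enable         # turns the firewall on, default deny incoming
sudo ufw status verbose
```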
[09:12] <BeeBuu> i can't ping the system that's running in UEC, anyone help me please?
[09:55] <BeeBuu> and i can see the status is "running"~~~~
[09:57] <Caer> BeeBuu: never used UEC but servers don't necessarily respond to ping
[10:00] <Caer> Is there a way to nice a process that forks? (ppid=1) and what about threads?
[10:05] <Caer> threads seem ok although transmission-daemon behaves strangely : it lost its nice priority after a few seconds
[10:05] <BeeBuu> Caer: but i can ssh in it
[10:05] <BeeBuu> i can't
[10:07] <Caer> I can't help you, sorry.
[10:55] <brummel444> hi, bind9 doesn't log: logging channel 'debug' file '/var/log/named/named.log':  permission denied. Permissions: -rw-rw-r-- 1 bind bind 0 2010-07-11 11:33 /var/log/named/named.log. Why do i get permission denied ?
[12:21] <joschi> brummel444: probably because of the apparmor profile for bind
[12:22] <joschi> brummel444: are the permissions for /var/log/named/ correct?
[12:24] <brummel444> joschi: i solved it by setting write permission on the directory. even though the named.log was set to 777 it didn't write to it.
[12:24] <joschi> brummel444: yes, the directory permissions have to be correct, too
[12:25] <joschi> brummel444: although owner bind:bind and 0750 should be enough for /var/log/named/
[12:25] <brummel444> hm.. i don't understand that. i created a named.log that was writable for all, so why does the directory have to be writable too? a bind9-specific "feature"?
[12:26] <joschi> brummel444: no. a posix specific feature...
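Concretely: under POSIX, creating (or re-creating, as bind does when it rotates its log) a directory entry needs write+execute permission on the directory itself, regardless of the file's own mode; writing to an already-existing file needs only the file's mode. A minimal demonstration in a throwaway directory:

```shell
# a world-writable file inside a directory locked down to 555
d=$(mktemp -d)
touch "$d/named.log"
chmod 666 "$d/named.log"   # the file itself is writable by everyone
chmod 555 "$d"             # but the directory forbids new entries
mode=$(stat -c %a "$d")    # directory mode is now 555
echo "entry" >> "$d/named.log" && result=write-ok   # existing file: fine
# (as a non-root user, `touch "$d/other"` here would fail with EACCES)
chmod 755 "$d" && rm -rf "$d"
```

Hence joschi's bind:bind 0750 suggestion for /var/log/named/: bind can create files there, and nobody else can even list the directory.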
[12:28] <brummel444> joschi: do you know how to update dns to listen on (a new) ppp (vpn) connection ? i always have to restart dns after i connected..
[12:29] <joschi> brummel444: that's the way bind works. you have to restart (or maybe just reload/SIGHUP?) bind for it to bind on new interfaces
[12:32] <brummel444> joschi: ok. i thought there should be some kind of update function for dns, to inform it about a new ppp interface.
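There is in fact a named.conf knob for this: interface-interval makes named rescan for new interfaces periodically (the default is 60 minutes), which covers ppp links that come and go without a restart. A fragment for /etc/bind/named.conf.options; the value is an example:

```
options {
    // rescan network interfaces every minute instead of every hour
    interface-interval 1;
};
```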
[12:45] <jpds> ayi: routers> Well, you might want something like HSRP.
[12:48] <RoyK> is it possible to have bind listen on 0.0.0.0 instead of specific interfaces?
[12:51] <sander__> Anyone know if UEC uses qemu?
[12:52] <joschi> RoyK: sure, but named will only listen on interfaces known at the start time and bind explicitly to them
[12:53] <joschi> RoyK: ehm, forget it. no, named can't listen on 0.0.0.0:53.
[13:05] <io> Is there something similar to Landscape but free?
[13:07] <joschi> io: red hat spacewalk. but it's veeeery red hat centric ;)
[13:18] <io> joschi: Red Hat provides Spacewalk for free but Canonical charge for Landscape? :-)
[13:19] <joschi> io: the commercial version of spacewalk is red hat satellite.
[13:19] <io> joschi: Right.
[14:09] <nhck> Hi, I am looking for a package that lets me play back music from my local machine on my ubuntu server. It would be nice if the server acted like a playback device, so it would be autodiscovered via upnp.
[14:10] <jpds> mpd.
[14:10] <jpds> !info mpd
[14:11] <nhck> hmm, i have mpd running currently, got to check how to expose it I guess?
[14:19] <Kream> Hi all.
[14:19] <Kream> Using stock Apache on 10.04. DocumentRoot is set to /var/www/default. The default webpage is accessible using my.site.com . I want to point my.site.com/doc to /usr/share/doc . I also want to use the ubuntu system of enabled/disabled sites . /etc/apache2/sites-available/doc is available at http://pastebin.com/9X08QbDC . /etc/apache2/sites-available/default is available at http://pastebin.com/TUmYJTtq
[14:21] <io> Kream: The default setup forwards /doc to /usr/share/doc. Did you see cat /etc/apache2/sites-enabled/000-default already?
[14:21] <io> Kream: You will need to manipulate the allowed/denied hostnames though, as only 127.0.0.0/255.0.0.0 ::1/128 can access it by default.
[14:22] <io> Kream: And why are you making an extra site just for doc? Your site is site.com, not doc? :-)
[14:23] <Kream> io: i know, i'm just using doc as an example
[14:23] <Kream> thing is
[14:23] <Kream> i installed munin and it's working beautifully, but its config is sitting in /etc/apache2/conf.d/munin
[14:23] <nhck> jpds: I am probably missing something: How do I expose mpd as an upnp media renderer?
[14:23] <Kream> and its www root is /var/www/munin
[14:24] <Kream> i'm going mad trying to make a munin site work in /etc/apache2/sites-available
[14:24] <Kream> the reason i need to do all this is i'm trying to get redmine working, which is sitting in /var/www/redmine
[14:29] <io> Kream: I would have /etc/apache2/sites-available/www.example.com and set the DocumentRoot to /var/www/www.example.com and then off that have alises for www.example.com{munin,redmine} to /var/www/www.example.com/{munin,redmine} and then enable www.domain.com.
[14:31] <clusty> hey
[14:31] <clusty> i am trying to mount cifs with automount
[14:31] <clusty> by staring at the files, i cannot figure out where i tell autofs which host to actually mount
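For the record: the host goes in the map file that auto.master points at, not in auto.master itself. A sketch with assumed names (fileserver, share, credentials file):

```
# /etc/auto.master: mount root on the left, map file on the right
/mnt/cifs  /etc/auto.cifs  --ghost

# /etc/auto.cifs: key, mount options, then ://host/share
media  -fstype=cifs,credentials=/etc/cifs.creds  ://fileserver/media
```

`ls /mnt/cifs/media` then triggers the mount on first access.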
[14:32] <io> Kream: Or as you current setup with /var/www/{munin,redmine} place something like this: http://paste.ubuntu.com/462057/ in to your /etc/apache2/sites-available/{domain} file.
[14:33] <io> Kream: Without the 'Deny from all' line on the Redmine block. ;-)
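A hedged sketch of the Alias approach io describes, using the /var/www/{munin,redmine} paths from the discussion, in Apache 2.2 syntax as shipped with 10.04:

```apacheconf
# inside /etc/apache2/sites-available/default (sketch)
<VirtualHost *:80>
    DocumentRoot /var/www/default

    Alias /munin /var/www/munin
    <Directory /var/www/munin>
        Order allow,deny
        Allow from all
    </Directory>

    Alias /redmine /var/www/redmine
    <Directory /var/www/redmine>
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```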
[14:36] <nhck> Any ideas on how to expose my ubuntu box as an upnp media renderer? Thanks :-)
[14:36] <Kream> io: thanks, putting hip waders on
[14:41] <io> Kream: No problem. :-)
[14:43] <Kream> ok by mistake, I went and asked #httpd for help and they seem to think that Ubuntu's httpd config is borked. they even have a wiki page up at http://wiki.apache.org/httpd/DebianDeb0rkification ... is what's in there useful?
[14:51] <Kream> http://pastebin.com/qxjDK7ut
[14:51] <Kream> ^^^ that is my new /etc/apache2/sites-enabled/000-default and in it xxx.xxx.com/doc works fine
[14:52] <Kream> am I missing something fundamental when I ask if I can "split" away the docs section into another snippet?
[14:54] <nhck> Kream: the doc just points you to the apache docs. if you don't need it just delete it
[15:00] <Kream> let me clarify. I have a website at www.example.com which works fine. Under Apache2 in Ubuntu 10.4, can I have a site (that means something enabled from /etc/apache2/sites-available) that points to somewhere arbitrary? Or should all such instances be aggregated into Aliases in /etc/apache2/sites-available/000-default ?
[15:00] <Kream> I'm not mucking around with multiple hostnames etc etc
[15:14] <Kream> ahhh gods
[15:14] <Kream> i'd basically misunderstood the fundamental reason for entries in /etc/apache2/sites-available.
[15:43] <ruben23> hi guys, how do I install an rt kernel on ubuntu-server?
[15:49] <ruben23> guys any idea on rt kernel deployment on ubuntu server..?
[16:01] <Kream> in dpkg --list, some packages are prefixed with rc, what does this mean?
[16:08] <RoyK> Kream: google for it
[17:59] <io> Kream: It means the package was removed but its config files remain ("r" = desired state: remove, "c" = current state: config-files). Also, did you need something?
[18:38] <jasonme> Hi. we're in the process of migrating our office to ubuntu
[18:38] <jasonme> we have 1 ubuntu server, 25 ubuntu desktops
[18:38] <jasonme> how can we get the 25 ubuntu desktops to actually log on to the server? instead of to their own computer?
[18:38] <jasonme> so that <user> can login at any computer and their docs/wallpaper etc will be the same
[18:51] <Kream> jasonme: are there going to be windows machines logging in as well?
[18:53] <jasonme> no just ubuntu
[18:55] <Kream> jasonme: then you'll need something like this: home directories exported from an NFS server
[18:55] <Kream> and an NIS/YP server to authenticate over the network
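The NFS half of that is small; a sketch with an assumed office subnet and server name (the NIS half is a separate setup):

```
# /etc/exports on the server: share /home with the office LAN
/home  192.168.1.0/24(rw,sync,no_subtree_check)

# /etc/fstab on each desktop: mount it over the local /home
server:/home  /home  nfs  defaults  0  0
```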
[18:56] <jasonme> schools are an example.. they dont save users documents to the hd, also wallpapers and user settings are available on any computer the user logs in from
[18:57] <jasonme> is there a simpler option?
[18:57] <Kream> sure
[18:57] <Kream> get a big server+thin client setup
[18:57] <Kream> you save big as you add more desktops
[18:57] <Kream> that setup is LTSP
[18:58] <Kream> it's very easy to setup
[18:58] <Kream> well, compared to NIS at least
[18:58] <Kream> and everything works, nowadays... cdroms, usb drives, the works
[18:58] <Kream> sound too
[18:58] <Kream> https://help.ubuntu.com/community/SettingUpNFSHowTo
[18:59] <Kream> https://help.ubuntu.com/community/UbuntuLTSP
[19:05] <jasonme> Kream: thanks so much!
[19:08] <Kream> np share and enjoy
[20:17] <nhck> it doesn't seem to be easy to get ubuntu to act as an upnp media renderer
[20:21] <ruben23> hi guys any help on installing  zoiper communicator on ubuntu..
[21:47] <quentusrex> Anyone know of problems with openldap and ubuntu lucid?
[21:48] <quentusrex> I am following https://help.ubuntu.com/10.04/serverguide/C/openldap-server.html and can not figure out the cause of the " main: TLS init def ctx failed: -1 "
[21:49] <vmlintu_> usually that means that something's wrong with your certificates
[21:50] <quentusrex> I tried this: gnutls-serv --x509cafile /etc/ssl/certs/cacert.pem --x509certfile /etc/ssl/certs/ldap01-test_slapd_cert.pem --x509keyfile /etc/ssl/private/ldap01_slapd_key.pem
[21:50] <quentusrex> and it seems fine
[21:50] <quentusrex> and I checked that the user openldap has read access to all 3 of the cert/key files and the user does have access
[21:51] <quentusrex> so it doesn't seem to be a permission issue, nor does it seem to be a valid certs issue
[21:51] <quentusrex> I'm feeling all out of ideas
[21:52] <vmlintu_> have you tried running slapd with "-d -1" ?
[21:53] <quentusrex> same error message: main: TLS init def ctx failed: -1
[21:53] <quentusrex> after it loads all the ldif files
[21:53] <vmlintu_> what's the command you are using to run slapd?
[21:54] <quentusrex> slapd -h 'ldaps:/// ldapi:///' -g openldap -u openldap -F /etc/ldap/slapd.d/ -d 1
[21:54] <quentusrex> and I tried: slapd -d 1
[21:55] <vmlintu_> -d -1, not -d 1
[21:55] <quentusrex> still the same so far
[21:55] <vmlintu_> does it say anything else about TLS?
[21:56] <vmlintu_> it could be a few thousand lines before
[21:58] <quentusrex> strange it won't pipe to a file.
[21:59] <quentusrex> not even with: slapd -d -1 2>&1 > debug.log
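The missing output comes from the redirection order: in `slapd -d -1 2>&1 > debug.log` the shell first duplicates stderr onto the current stdout (the terminal), then moves stdout to the file, so stderr (where the debug output goes) never reaches debug.log. The working form is `slapd -d -1 > debug.log 2>&1`. A demonstration with echo standing in for slapd:

```shell
f=$(mktemp)
# correct order: stdout to the file first, then stderr duplicated onto it
{ echo to-stdout; echo to-stderr >&2; } > "$f" 2>&1
captured=$(cat "$f")   # now contains both streams
rm -f "$f"
```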
[22:00] <vmlintu_> weird..
[22:03] <quentusrex> I don't see any output with tls in the line except for the lines that define which files to load
[22:04] <vmlintu_> Just to make sure, run it with strace to see if it can actually open the files
[22:05] <quentusrex> I need to fix the documentation on that page,
[22:06] <quentusrex> there is a typo when creating the cert
[22:06] <vmlintu_> ?
[22:06] <quentusrex> Here is what fixed the issue: mv /etc/ssl/certs/ldap01-test_slapd_cert.pem /etc/ssl/certs/ldap01_test_slapd_cert.pem
[22:07] <quentusrex> there is a hyphen where there should have been an underscore
[22:07] <quentusrex> thanks for the help vmlintu_
[22:07] <vmlintu_> I gave up copy-pasting commands a while ago because of these little typos..
[22:08] <quentusrex> Do you know a little about desktop ldap auth?
[22:08] <vmlintu_> I prefer kerberos, but I have used also pam_ldap
[22:08] <quentusrex> I'm trying to plan out the network authentication here, but there are laptops, and desktops. And the laptops are outside the network about half the time
[22:09] <quentusrex> vmlintu_, I'm looking into kerberos as well, but first have to get ldap up and running.
[22:09] <quentusrex> Is there a way to allow for both ldap auth and local auth?
[22:09] <quentusrex> in a way that will allow changes made on local to still be around when authed with ldap?
[22:10] <quentusrex> if that makes any sense.
[22:10] <vmlintu_> is local auth meant to be used when there's no connection and ldap when there's connection?
[22:11] <quentusrex> yes, basically.
[22:11] <vmlintu_> I'd recommend using sssd for that
[22:12] <vmlintu_> when users login with sssd, it stores enough information locally so that later they can login without connection too
[22:12] <vmlintu_> http://www.opinsys.fi/en/user-management-with-sssd-on-shared-laptops
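The relevant sssd option is cache_credentials. A minimal sssd.conf sketch; the domain name and LDAP URI are assumptions:

```ini
# /etc/sssd/sssd.conf (sketch)
[sssd]
config_file_version = 2
services = nss, pam
domains = office

[domain/office]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldap01.example.com
# keep (hashed) credentials after a successful online login,
# so the same user can log in later without a connection
cache_credentials = true
```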
[22:15] <quentusrex> So it would store the info after a successful login?
[22:15] <quentusrex> if you login successfully while connected, you can log in when disconnected?
[22:15] <vmlintu_> yes
[22:17] <quentusrex> any advice for mounting file systems after login?
[22:17] <quentusrex> such as home directories?
[22:18] <quentusrex> One small hope is that I can have the ldap/kerb auth system work from within the network and from remote
[22:18] <vmlintu_> I'm using autofs for that when users are in the local network
[22:18] <vmlintu_> with autofs you can store the share information in ldap and it mounts the correct share when it is needed for the first time
[22:19] <vmlintu_> http://www.opinsys.fi/en/setting-up-nfsv4kerberosautofs5-ldap-on-ubuntu-10-04-alpha-2-lucid-part-7
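For reference, an autofs map entry stored in LDAP looks roughly like this under the common automount schema; the DIT layout and server name are assumptions:

```ldif
# the map container
dn: ou=auto.home,dc=example,dc=com
objectClass: organizationalUnit
ou: auto.home

# one key: /home/alice is mounted from the server on first access
dn: automountKey=alice,ou=auto.home,dc=example,dc=com
objectClass: automount
automountKey: alice
automountInformation: -fstype=nfs4,sec=krb5 server:/home/alice
```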
[22:19] <quentusrex> would that work for when the device is remote?
[22:19] <quentusrex> if the dns entries resolve properly for inside and out?
[22:20] <vmlintu_> Depends on the firewalls and connection speeds
[22:20] <vmlintu_> I wouldn't use nfs with slow connections
[22:21] <vmlintu_> but autofs works with other filesystems too
[22:21] <quentusrex> do you know if nfs would allow for file changes if there is no connection to the nfs server?
[22:22] <vmlintu_> no, it needs a working connection
[22:22] <quentusrex> vmlintu_, and thanks a ton for helping.
[22:22] <vmlintu_> for laptops I'd recommend synchronising the home directories with something like unison
[22:23] <vmlintu_> with unison you can sync file both ways when they are modified
[22:23] <vmlintu_> It's not automatic, though, so users need to activate it
[22:24] <quentusrex> I think I can be happy with a system that only automounts certain directories if there is connectivity
[22:24] <quentusrex> if not, then it is obvious you don't have access.
[22:25] <vmlintu_> with nfs you'll probably have problems if something is mounted when the connection breaks
[22:25] <vmlintu_> unmounting the nfs share with a lost connection can be a pain
[22:25] <quentusrex> yeah, I have seen that happen.
[22:25] <vmlintu_> you might have better success with samba/cifs
[22:26] <quentusrex> I am also looking into glusterfs
[22:26] <vmlintu_> If users connect to the cifs shares through nautilus, they usually behave better than nfs when connection breaks
[22:27] <vmlintu_> I really don't know much about glusterfs as I've tried it only once
[22:28] <quentusrex> what were your thoughts when you did test it?
[22:30] <quentusrex> I'll look into cifs this looks like what I will need
[22:44] <vmlintu_> glusterfs looked nice, but I really need kerberos or some similar way of authenticating users to the file system
[22:47] <quentusrex> what is the advantage of kerberos over just plain ldap for you?
[22:50] <vmlintu_> Running nfs4 with kerberos makes it possible to give access to users instead of just hosts. So once users authenticate with kerberos, they get access to their home directories.
[22:50] <quentusrex> That is well worth it...
[22:51] <vmlintu_> Especially when running hundreds of nfs clients on a network, you don't want to share the whole /home with anyone who asks for it
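Concretely, the export stops trusting hosts and names a security flavor instead; a sketch (the path and flavor are examples, and a full NFSv4 setup also needs the fsid=0 pseudo-root and Kerberos principals):

```
# /etc/exports: sec=krb5p = Kerberos authentication plus encryption;
# access is granted per authenticated user, not per client host
/export/home  *(rw,sec=krb5p,sync,no_subtree_check)
```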
[22:53] <quentusrex> right
[22:55] <vmlintu_> I'm running quite a few school networks and I must assume that every user is potentially hostile as kids try to break in
[22:55] <quentusrex> I think I might wind up using glusterfs to aggregate the bricks, then share with nfs and cifs
[22:55] <quentusrex> maybe add something to determine if within the network and if not then use cifs only
[22:55] <quentusrex> vmlintu_, I have a school as a client, I know what you mean.
[23:40] <Psi-Jack> I'm curious. Ubuntu's had a lot of excellent focus on virtualization with kvm and all. But have they put any focus on HA/HS support as well?