[08:31] <pds_> hello guys, I'm trying to kickstart an Ubuntu Server 14.04 LTS install from a 12.04 LTS desktop. I'm using the following tutorial: http://digitalsanctum.com/2013/03/22/how-to-setup-a-pxe-server-on-ubuntu/. I wonder how I can use a kickstart file that I host remotely, and whether I only need to provide the boot.iso file in nginx
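A remote kickstart file is typically passed on the installer's kernel command line rather than served alongside the ISO. A hypothetical pxelinux menu entry (the server address, paths, and ks.cfg location are placeholders; the netboot kernel and initrd still come from the TFTP root, not nginx):

```
# pxelinux.cfg/default -- hypothetical entry; 192.168.1.10 stands in for
# whatever host nginx serves the kickstart file from
LABEL ubuntu1404
  KERNEL ubuntu-installer/amd64/linux
  APPEND initrd=ubuntu-installer/amd64/initrd.gz ks=http://192.168.1.10/ks.cfg
```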
[09:18] <stemid> in 12.04 LTS, latest patches, I am still having a problem with isc-dhcp-server where /var/lib/dhcp/leases* changes ownership whenever the service or the OS is restarted. when it is owned by root, dhcpd can't rotate the leases file and it grows uncontrollably.
[09:18] <stemid> this has gone on for many months, I thought it would be patched.
[09:19] <peetaur2> you could edit the init.d file to add some chown ... or remove what is there
[09:19] <stemid> I did that under start|stop but the last time the OS was restarted not even that helped
[09:18] <stemid> I've been forced to set up a nagios alert on the ownership of the leases file, and so far this alert has saved me the last two times the service or the OS restarted.
[09:20] <peetaur2> what is the wrong and right owner?
[09:21] <peetaur2> hmm it seems my dhcp server is 10.04 rather than 12.04, and has the user as dhcpd, and the path is /var/lib/dhcp3/dhcpd.leases
[09:21] <peetaur2> so good thing I didn't use 12.04 then? ;)
[09:21] <stemid> 12.04 path is /var/lib/dhcp and dhcpd:dhcpd is correct
[09:22] <stemid> I can re-create this bug anytime, just sudo service isc-dhcp-server restart
[09:22] <stemid> but now I have removed the chown from the script
[09:22] <stemid> seems to me that the script should be patched upstream by ubuntu
[09:23] <peetaur2> does it literally say "chown root" in there?
[09:23] <peetaur2> or some variable?
[09:25] <stemid> the script does no chown on its own
[09:25] <stemid> I have no idea how this happens
[09:25] <stemid> dhcpd -user dhcpd -group dhcpd -f -q -4 -pf /run/dhcp-server/dhcpd.pid -cf /etc/dhcp/dhcpd.conf
[09:25] <stemid> obviously root starts it
[09:25] <stemid> and then it drops privs
[09:26] <stemid> but why create leases before dropping privs?
[09:26] <stemid> and no setgid set on the parent dir
[09:26] <stemid> I could setgid on parent dir, chown it to dhcpd
[09:26] <stemid> then all new files will at least be created with dhcpd as the group
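The setgid-directory idea above can be sketched as follows. The real fix needs root on the paths from the discussion; the runnable part below demonstrates the bit on a scratch directory instead:

```shell
# Real workaround (root required), using the paths discussed above:
#   chown dhcpd:dhcpd /var/lib/dhcp && chmod g+s /var/lib/dhcp
# Demonstration of the setgid bit on a scratch directory (no root needed):
d=$(mktemp -d)
chmod 2775 "$d"            # leading 2 = setgid: new files inherit the dir's group
touch "$d/dhcpd.leases"
stat -c '%A' "$d"          # prints drwxrwsr-x; the 's' is the setgid bit
```

Note that setgid only controls the *group* of new files; the owner is still whoever creates them, so a root-owned leases file would keep its root owner.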
[09:28] <stemid> http://paste.debian.net/108850/
[09:28] <peetaur2> maybe you could also use ACLs to make sure dhcpd always has access
[09:28] <stemid> yes
[09:28] <stemid> workarounds are possible
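The ACL workaround peetaur2 mentions could look like the commented commands below (root required, `acl` package assumed); the runnable part demonstrates a default ACL on a scratch directory with the current user:

```shell
# Real workaround (root): grant dhcpd access regardless of who owns the file:
#   setfacl -m u:dhcpd:rw /var/lib/dhcp/dhcpd.leases
#   setfacl -d -m u:dhcpd:rw /var/lib/dhcp    # default ACL covers future files
# Demonstration with the current user (skips quietly if ACLs are unsupported):
d=$(mktemp -d)
if command -v setfacl >/dev/null 2>&1; then
  setfacl -d -m "u:$(id -un):rw" "$d" && touch "$d/leases" && getfacl "$d/leases" \
    || echo "ACLs not supported here"
fi
```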
[09:29] <stemid> https://bugs.launchpad.net/ubuntu/+source/isc-dhcp/+bug/1186662
[09:29] <stemid> I should post there though
[09:29] <stemid> aha it's apparmor
[09:29] <peetaur2> since when does apparmor chown things?
[09:29] <stemid> it can prevent chown
[09:30] <stemid> by the dhcpd process
[09:34] <peetaur2> can't you just modify the apparmor profile so it is allowed to chown and have full control of the file?
[09:34] <stemid> peetaur2: will check
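A profile tweak along the lines peetaur2 suggests might look like this hypothetical excerpt (the stock profile path and exact permission letters are assumptions; reload with `apparmor_parser -r` after editing):

```
# /etc/apparmor.d/usr.sbin.dhcpd -- hypothetical excerpt
  /var/lib/dhcp/ rw,
  /var/lib/dhcp/dhcpd.leases* rwlk,
```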
[12:09] <DarkStar1> Hi all I have a SSL question regarding a domain "catch-all" certificate
[12:10] <ronator> you must have too much money ;-)
[12:11] <DarkStar1> if I need to reuse the certificate on another server bearing the domain name, how can I? As I understand it, the key that I used to generate the CSR upon which the cert was created is for that server only
[12:11] <DarkStar1> ronator: not me. someone who's got the cert and wants me to install it for them on another server
[12:11] <DarkStar1> :)
[12:12] <DarkStar1> but I am poor and damn near destitute
[12:14] <ronator> a catch-all certificate would mean you can have *.yourdomain.com - "normally" that is done on one machine with several virtual hosts. not sure if I understood your Q
[12:14] <ronator> deploy a catch-all cert to different machines ... good question :D
[12:15] <DarkStar1> different machines yes
[12:17] <ronator> you _could_ do as you said, but it is not recommended
[12:18] <ronator> https://support.discountasp.net/KB/a132/can-you-export-my-ssl-certificate-use-on-different-server.aspx
[12:20] <peetaur2> of course you can put a cert on many independent machines... all SSL does is validate that a CA cert (eg. in the browser) is the one that signed the server cert, and the browser has no idea which other servers have the same key, and doesn't care.
[12:20] <peetaur2> but copies of the private key all around mean more risk if one system is compromised.
[12:24] <patdk-wk> this is what certificate copies are for
[12:24] <patdk-wk> make the same cert with many different private keys
[12:24] <patdk-wk> if one server gets compromised, only that one needs to be revoked
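patdk-wk's approach, the same certificate subject with a per-machine private key, starts from a fresh key and CSR on each box. A minimal sketch (the wildcard name and file names are placeholders):

```shell
cd "$(mktemp -d)"   # scratch dir for the demo; use a real path in practice
# Generate a new 2048-bit RSA key and a CSR for the wildcard name in one step:
openssl req -new -newkey rsa:2048 -nodes \
  -keyout star.example.com.key \
  -out star.example.com.csr \
  -subj "/CN=*.example.com"
# Send the CSR to the CA; the issued cert then pairs only with this machine's key.
```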
[13:37] <Lachezar> Hello all. I had a problem installing Ubuntu Server yesterday, and it was suggested that I use 10.04 installation disk. It worked. Today I did a release upgrade, and now I have a 12.04 with '3.2.0-65-generic-pae #99-Ubuntu SMP Fri Jul 4 21:17:05 UTC 2014 i686 i686 i386 GNU/Linux' kernel.
[13:37] <Lachezar> Does that mean that my hardware actually has PAE support? Can I do a release upgrade to 14.04 now?
[13:39] <jrwren> no, it just means your kernel has pae support.
[13:39] <Lachezar> Actually... I have other 12.04 Ubuntu Server machines (two), that show no available release upgrade... Is that correct?
[13:39] <jrwren> grep --color pae /proc/cpuinfo   # to see if your CPU supports PAE
[13:41] <Lachezar> jrwren: Ahha! cpuinfo flags has 'pae'. So that might not be the reason why the 14.04 Server CD hangs on language selection (upon boot, not when installing!).
[13:42] <jrwren> why not run 64bit?
[13:45] <Lachezar> jrwren: Old hardware: Intel(R) Celeron(R) CPU 2.53GHz, low memory: 1G
[13:50] <jrwren> oh. you don't need a pae kernel at all.
[13:50] <jrwren> laptop?
[14:00] <Lachezar> jrwren: I don't need PAE, but it seems I have no choice (apart from recompiling my own).
[14:50] <leotr> hello. I have one (only one) server with 6 HDDs and 64 Gb ram. I want to setup MAAS on it and then use juju for administering it. Is it possible?
[14:50] <leotr> i mean is it true that one server is enough for that
[14:53] <jrwren> yes, it's true.
[14:55] <leotr> should i download ubuntu for cloud cd in this case?
[14:55] <leotr> *ubuntu server for cloud
[15:12] <Xbert> are aa-logprof and aa-genprof broken in 14.04?
[15:20] <peetaur2> Xbert: I haven't tried them but heard yes they are
[15:21] <peetaur2> Xbert: #apparmor is on the irc.oftc.net network, maybe they know a fix
[15:21] <tyhicks> Xbert: hi - they're not in great shape in 14.04
[15:22] <Xbert> peetaur2, it seems that way from my experience too, pfft, nice for an LTS
[15:22] <peetaur2> Xbert: someone once said that the LTS releases are not officially LTS until some point .. maybe that's the key to stability
[15:22] <tyhicks> Xbert: we've got fixes for a majority of the bugs in the upstream code repo, but no one has yet had a chance to SRU them to 14.04
[15:23] <peetaur2> and it makes obvious sense since any release in general is the same
[15:23] <Xbert> they bragged about having apparmor only a few years ago, now they let it die
[15:23] <tyhicks> it's not dying
[15:24] <tyhicks> the tools were rewritten in python (from perl)
[15:24] <tyhicks> and the rewrite introduced a number of bugs
[15:24] <Xbert> 14.04 was tested for months, apparmor is in the base install, and it's been months since the 14.04 release; I would expect it to work
[15:24] <tyhicks> it was unfortunate that the upstream rewrite happened prior to 14.04
[15:24] <Xbert> for me it's completely broken
[15:25] <peetaur2> ah cool, python is probably an improvement
[15:25] <peetaur2> but now we're beta testing ;)
[15:25] <tyhicks> Xbert: filing bugs is a big help
[15:25] <peetaur2> can you simply install the old apparmor tools from the old repo?
[15:25] <Xbert> I thought the problem was with me doing an in-place upgrade; I just did a fresh install and it's the same
[15:25] <tyhicks> peetaur2: yes, that should be fine
[15:25] <peetaur2> on non-beta testing servers of course ;)
[15:25] <Xbert> the bug has been reported 3 times already
[15:26] <Xbert> last back in may
[15:26] <tyhicks> Xbert: what's the bug number?
[15:27] <Xbert> https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1319830
[15:28] <Xbert> and https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1319829
[15:29] <tyhicks> Xbert: we've got upstream fixes for those issues, now we need to go through the SRU process to update the package in 14.04
[15:29] <Xbert> how do i do that?
[15:30] <tyhicks> Xbert: How do you do an SRU?
[15:30] <lordievader> Good evening.
[15:30] <Xbert> tyhicks, yes, i don't know what you mean
[15:31] <tyhicks> Xbert: https://wiki.ubuntu.com/StableReleaseUpdates#Procedure
[15:32] <tyhicks> Xbert: it is quite involved, you're probably better off temporarily downgrading to the 13.10 package and waiting for us (Ubuntu Security) to do the SRU
[15:34] <Xbert> tyhicks, ok i give that a go, thanks
[15:34] <tyhicks> Xbert: sorry for the trouble :/
[16:50] <ndf> is there a way to temporarily turn off kernel messages about I/O errors on a (usb)disk? [/dev/sdb]
[17:33] <raray> is there a protocol/program for accelerated file transfer over the internet?
[17:33] <raray> I want to transfer files from my server
[17:33] <raray> File transfer via ssh is very slow for some reason.
[17:33] <raray> FTP over tls is ok, but only as fast as 1 tcp connection
[17:33] <raray> is there a protocol/program using multiple connections or udp?
[17:37] <raray> ...
[17:38] <K4k> Can anyone think of any weirdness I might run in to if I rename GID 27 from "sudo" back to "wheel"?
[17:39] <rbasak> raray: TCP should scale to the available bandwidth. Even one connection. If it doesn't, you have a connection or TCP stack problem.
[17:40] <raray> rbasak: the problem is my connection is over a shared medium
[17:40] <raray> if i use 2 tcp connections it will be almost 2x as fast.
[17:42] <rbasak> raray: http://lartc.org/lartc.html has instructions to help you manage bandwidth and prioritise traffic
[17:44] <raray> Prioritise traffic? It's over the internet...
[17:45] <raray> And the utilization on both devices is quite low
[17:45] <raray> the problem is the ISP seems to be overselling the cables.
[17:46] <raray> that means I cannot prioritise anything
[17:47] <rbasak> If the ISP aren't stupid, then they'll be doing bandwidth management so each customer gets an equal amount of bandwidth under contention
[17:51] <raray> rbasak: fact 1: I have a quite slow connection at home. fact 2: if I use 2 tcp connections it is almost 2x as fast. fact 3: I was recently on a fast wifi with my laptop and got 50 mbit/s over 1 tcp connection to that same server
[17:51] <raray> instead of 1mbit
[17:52] <raray> so the devices on both ends can't really be the problem
[17:53] <sarnold> raray: wow. sounds like a stupid ISP
[17:54] <sarnold> raray: you could probably use split to split a file into chunks then use multiple scp or multiple rsync connections to transfer the pieces, then re-assemble them on the far side
[17:54] <sarnold> raray: the more you abuse it the more likely it is your isp will figure out how to rate limit per customer rather than per connection, which would doubtless be an improvement for nearly everyone :)
[17:55] <raray> sarnold: thats what I'm thinking
[17:56] <raray> sarnold: i already did the splitting and reassembling manually. The transfer was faster, but too much manual effort
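sarnold's split/transfer/reassemble idea can be scripted. The host, user, and chunk size below are placeholders; the runnable part just shows that `split` + `cat` round-trips losslessly (suffixes sort in order):

```shell
# One scp (one TCP connection) per chunk, run concurrently:
#   split -b 100M bigfile bigfile.part.
#   for p in bigfile.part.*; do scp "$p" user@server:/tmp/ & done; wait
#   # on the far side: cat /tmp/bigfile.part.* > bigfile
# Local demonstration that splitting and reassembling is lossless:
d=$(mktemp -d); cd "$d"
head -c 1000000 /dev/urandom > bigfile
split -b 300000 bigfile bigfile.part.
cat bigfile.part.* > rebuilt
cmp bigfile rebuilt && echo "chunks reassemble byte-identically"
```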
[18:20] <leotr> hello. I have a server with a raid controller, 2 processors, and 64 GB of ram. I want to use it for creating virtual machines for experiments. I would like to be able to use juju for fast vm creation, software deployment and so on... as I understand it, MAAS may not be what I want
[18:21] <leotr> could you suggest something for my task? what is the best option?
[18:25] <sarnold> leotr: you could use the juju-local provider to spin up LXC containers; it isn't VMs, so it won't be perfectly like using a cloud provider.. you could also manually create a pile of VMs and then use the manual provider...
[18:26] <sarnold> leotr: see https://juju.ubuntu.com/docs/config-manual.html and https://juju.ubuntu.com/docs/config-local.html
[18:32] <leotr> thanks
[20:41] <zartoosh> Hi, how could I have boot.log timestamped?
[21:09] <zmbmartin> I am using ghostscript to compress pdfs. On my OSX machine the compressed pdf looks identical to the full pdf. In ubuntu the compressed pdf is missing some patterns and fills from the full pdf.
[21:10] <zmbmartin> So if I run the full pdf through gs on OSX it outputs the same just compressed. But when I run the full pdf through gs on my ubuntu-server the file is compressed but patterns and fills are missing.
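For reference, a typical gs recompression invocation looks like the commented sketch below (the file names and preset are common choices, not necessarily what zmbmartin used). Since OSX and Ubuntu almost certainly ship different Ghostscript releases, comparing versions is a cheap first check:

```shell
# Typical recompression invocation (names/settings are placeholder choices):
#   gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite \
#      -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
#      -sOutputFile=compressed.pdf full.pdf
# Compare this on both machines; rendering differences often track the release:
command -v gs >/dev/null 2>&1 && gs --version || echo "gs not installed"
```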
[22:50] <dustinspringman> anyone around familiar with routing via VPN?
[22:51] <billy_ran_away> Why does the ldap package break my current ldap install so often?!?!
[22:51] <billy_ran_away> Also why can't I remember the password to my local account?!?!
[23:03] <dustinspringman> so... I've got the tunnels up. The ubuntu-server can ping all the LANs, but I cannot route between LANs for some reason.. I think it's related to iptables, but the instructions I see online seem to assume that I'm an iptables expert.. not the case.. thoughts on a good resource/walk-thru?
[23:05] <sarnold> dustinspringman: did you set e.g. /proc/sys/net/ipv4/conf/all/forwarding
[23:05] <dustinspringman> sarnold: I believe so, but I will double check, doing it now..
[23:07] <billy_ran_away> anyone know how to pick up a currently running process?
[23:07] <billy_ran_away> I ssh'ed in to my network server, ran screen. Then from there I ssh'ed in to another machine and kicked off a long running process.
[23:08] <sarnold> billy_ran_away: I normally use screen -RAD when reattaching screen sessions
[23:09] <billy_ran_away> sarnold: I upgraded ldap on ubuntu and have since locked myself out of that machine...
[23:09] <billy_ran_away> sarnold: But I still see the processes running on my local desktop...
[23:09] <billy_ran_away> ➜  ~  ps -ef | grep -i heroku
[23:09] <billy_ran_away>   501 53623 53416   0  2:50PM ttys004    0:05.92 ruby /usr/local/Cellar/heroku-toolbelt/2.34.0/libexec/bin/heroku run console
[23:12] <billy_ran_away> i'd like to just pick up the output of that heroku run console
[23:12] <billy_ran_away> process...
[23:14] <sarnold> billy_ran_away: you can't really pick it up without re-attaching to that screen session
[23:14] <billy_ran_away> sarnold: yea that was what I was afraid of
[23:15] <billy_ran_away> sarnold: I can ssh in to my local user on my server that has the still running screen session with my ssh keys
[23:15] <billy_ran_away> sarnold: but I can't remember that password so I can't sudo anything...
[23:17] <dustinspringman> sarnold: when I cat /proc/sys/net/ipv4/conf/all/forwarding I get 0 as a response... Do I change that to a 1?
[23:17] <billy_ran_away> I hate ldap upgrades! I mean sure it's happened before to me when upgrading major versions of ubuntu, but the package maintainers wouldn't be so mean as to break compatibility between minor versions of that one package, or so I thought...
[23:18] <sarnold> dustinspringman: yeah, if you want to be a router :)
[23:18] <dustinspringman> doh!
[23:18] <sarnold> billy_ran_away: argh. that's annoying :)
[23:18] <dustinspringman> sarnold: done, gonna test to some remote sites
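The forwarding toggle that fixed dustinspringman's routing, as commands (the runtime change needs root; the check below is read-only):

```shell
# Check current state (0 = off, 1 = forwarding enabled):
if [ -r /proc/sys/net/ipv4/conf/all/forwarding ]; then
  cat /proc/sys/net/ipv4/conf/all/forwarding
fi
# Enable at runtime (root required):
#   sysctl -w net.ipv4.conf.all.forwarding=1
# Persist across reboots: set net.ipv4.ip_forward=1 in /etc/sysctl.conf, then sysctl -p
```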
[23:19] <billy_ran_away> thanks for listening sarnold!
[23:19] <billy_ran_away> i'm just going to wait for these processes to finish and then reboot the server in to single user mode
[23:19] <billy_ran_away> it's just that those long-running processes were for work, and I was asked about their progress; now I either look like an idiot who locked himself out, or I lie
[23:29] <sarnold> billy_ran_away: heh, I used to keep a 'toor' account around for those kinds of issues.. haven't done that in a while though
[23:30] <dustinspringman> sarnold: I owe you a beer! that got me working! Thanks so much!
[23:31] <sarnold> dustinspringman: nice :D have fun!
[23:32] <billy_ran_away> sarnold: Yea I have an lbill account, which has my ssh keys in it
[23:32] <billy_ran_away> but alas i forgot that password