[02:01] <k2gremlin> Hello all
[02:36] <k2gremlin> I travel a lot... I have an Ubuntu Server at home and I was wondering the best way to implement a VPN solution that will put my laptop on my home network..
[02:37] <k2gremlin> I basically want it to pull an IP address from my home network so that I can effectively connect to my entire network
[02:39] <sarnold> k2gremlin: investigate openvpn and strongswan; openvpn is probably the easier of the two to configure and probably more likely to traverse horrible hotel wifi
[02:59] <k2gremlin> sarnold, thanks, I'll look into it
[03:16] <k2gremlin> sarnold, you there?
[03:17] <sarnold> k2gremlin: not for much longer..
[03:17] <k2gremlin> sarnold, Trying to generate keys, and the guides say they create the keys/ directory, but it's not being created. It's not prompting me with the questions either
[03:22] <k2gremlin> sarnold, http://pastebin.com/BN35sHJb
[03:23] <sarnold> k2gremlin: hunh, I wonder if there was some other assumption made about the system state before getting started..
[03:23] <k2gremlin> https://help.ubuntu.com/lts/serverguide/openvpn.html using this guide..
[03:24] <sarnold> k2gremlin another possibility, you might be blocked on low-entropy /dev/random, if it's a remote server without keyboard or mouse..
[03:24] <k2gremlin> trying to put this on my Squid proxy server. I saw another guide that changes the interface.. but their setup was a direct proxy. Mine is transparent with a dynamic WAN connection lol
[03:24] <k2gremlin> I'm PuTTY'd into it
[03:24] <sarnold> k2gremlin: the 'source vars' step may not have been completed correctly? it looked like it was complaining about not having the variables set...
[03:25] <k2gremlin> I tried both 'source vars' and 'source ./vars'
[03:25] <sarnold> check env output, see if they look right
[03:26] <k2gremlin> env?
[03:27] <sarnold> it prints the contents of your environment variables
[03:27] <k2gremlin> http://pastebin.com/hTpSfhzK   here is the whole setup starting at sourcing the vars
[03:27] <sarnold> aha!
[03:27] <sarnold> thanks for pasting the whole thing :)
[03:27] <k2gremlin> whoa..
[03:27] <sarnold> start with sudo -s
[03:27] <k2gremlin> thats got a lot of info
[03:27] <sarnold> then run the commands as shown without the sudo in front of each one
[03:27] <k2gremlin> ok ill run it in s mode lol
[03:27] <k2gremlin> ok
[03:28] <k2gremlin> thats working... wtf? lol
[03:28] <k2gremlin> Nowhere in the guide does it say to sudo -s at the start
[03:29] <sarnold> hehe
[03:29] <sarnold> the trick is that you need to add those variables to the shell's environment, thus the source
[03:30] <sarnold> .. and sudo starts a new shell for each command you're running
[03:30] <k2gremlin> gotcha!
[03:30] <sarnold> so it affects nothing
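[editor's note] sarnold's explanation can be demonstrated without sudo: `env -i` starts a child process with a scrubbed environment, much like sudo's default env_reset does for each command. The KEY_DIR value below is just an illustrative easy-rsa-style variable, not taken from the pastebin:

```shell
# What 'source vars' effectively does: export variables into the current shell.
export KEY_DIR=/etc/openvpn/easy-rsa/keys

# A plain child shell inherits the exported variable...
sh -c 'echo "inherited: [$KEY_DIR]"'

# ...but a scrubbed environment (like each separate 'sudo cmd') does not,
# which is why the build-ca script complained about unset variables.
env -i sh -c 'echo "scrubbed:  [$KEY_DIR]"'
```

Running everything inside one `sudo -s` root shell means `source vars` and the build scripts share a single environment, which is exactly why it suddenly worked.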
[03:30] <k2gremlin> its generating.....
[03:30] <k2gremlin> "This is going to take a long time
[03:30] <k2gremlin> "
[03:30] <sarnold> once you know, you know and never think of it again
[03:30] <k2gremlin> lmfao
[03:30] <k2gremlin> right
[03:30] <k2gremlin> So if the rest of this goes smoothly... will this work without changing my interface configurations?
[03:30] <sarnold> I think it should
[03:31] <k2gremlin> The reason I am hesitant to do so is because I am remoted in.... changing them could perma disconnect me lmfao
[03:31] <sarnold> I only use openvpn as a simple client and my admins did all the work for me :)
[03:31] <sarnold> heh
[03:31] <sarnold> yeah
[03:31] <k2gremlin> and if that happens, wife and kids will be ape shit cause the internet will go out lol
[03:31] <sarnold> be careful :)
[03:31] <sarnold> VERY CAREFUL
[03:31] <k2gremlin> LOL!
[03:31] <k2gremlin> Im carefully considering each command before I do it
[03:31] <sarnold> time for me to run, have fun :)
[03:31] <k2gremlin> Thanks again
[03:31] <sarnold> good plan any time you have a root shell..
[03:32] <k2gremlin> Although!!
[03:32] <k2gremlin> I am remoted in on the WAN side :P
[03:32] <k2gremlin> so as long as that connection stays up im good
[03:32] <sarnold> :)
[03:32] <k2gremlin> later
[03:32] <k2gremlin> thanks
[03:33] <k2gremlin> *plays a game of League while this key generates!*
[09:42] <bojan> Can anybody guide me on NFS with NIS authentication
[09:43] <bojan> I have configured NFS on Ubuntu 12.04 and checked mounting on the same computer; it's working fine. But when I try to mount from another computer on my network it gives the error: "mount.nfs: server access denied while mounting"
[09:43] <bojan> But i can see the mount from my network computer
[11:10] <strikov> rbasak: taking into account that tomcat bug has been fixed -- i want more bugs from you to work on; the link you sent last time doesn't work because it shows me nothing in the search results
[11:11] <rbasak> strikov: let me find you one :)
[11:11] <rbasak> Oooh, I know
[11:11] <rbasak> strikov: I was about to do https://bugs.launchpad.net/bugs/1435823 but you can take it if you like.
[11:12] <rbasak> Just need to reproduce and fix it in the right place. Not sure if that's the systemd unit definition, or in mysqld_safe, or somewhere else like one of the commenters has done.
[11:12] <rbasak> strikov: the fix needs to go to Debian too - I have commit access to Debian mysql VCS, so I can push it there and then upload the fix to Ubuntu.
[11:13] <rbasak> strikov: carry on at this rate and you'll get upload rights pretty soon :)
[11:14] <strikov> rbasak: okay, thanks; i'll look into this bug just after we upload juju-1.22 (hope to do this just after our licensing call)
[11:14] <rbasak> OK
[11:15] <strikov> rbasak: what happened with this apparmor/lxc bug btw? I saw that it grew and now covers some other projects (including squid3)
[11:15] <rbasak> strikov: I need to sync with hallyn. The issue is that the apparmor-profile-load script was previously provided by upstart, which is no longer installed.
[11:16] <rbasak> Given that lxc and other packages still need the script despite upstart not being installed, clearly upstart is the wrong package for the wrapper to be in.
[11:16] <rbasak> So we need to move it somewhere (probably init-system-helpers, with the actual implementation in apparmor).
[11:16] <rbasak> But we need to avoid breaking any package that used the wrapper without depending on where we move it.
[11:17] <rbasak> But we need to (avoid breaking any package) that (used the wrapper without depending on where we move it).
[11:34] <strikov> rbasak: understood, thanks; let me know if i can help with this somehow
[12:35] <strikov> rbasak: regarding mysql; I propose a slightly different (and probably simpler) fix than the one in the bug (/etc/tmpfiles.d/); how about this: http://pastebin.ubuntu.com/10688810/
[12:35] <strikov> rbasak: just two lines to service config and we're done
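[editor's note] The pastebin isn't reproduced here, but a two-line systemd service fix of the kind strikov describes would look roughly like this (the directory name `mysqld` and mode are assumptions based on the bug's missing /var/run/mysqld directory):

```
[Service]
RuntimeDirectory=mysqld
RuntimeDirectoryMode=0755
```

With `RuntimeDirectory=`, systemd creates /run/mysqld at service start, owned by the unit's User/Group, and removes it on stop — which is why ownership comes up next in the discussion.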
[12:36] <rbasak> strikov: perfect!
[12:36] <rbasak> strikov: could you verify that this fixes the issue please, and then I'll upload?
[12:36] <strikov> rbasak: the only thing I'm not sure about is permissions; the bug proposes mysql/mysql ownership while the upstart conf uses mysql/root; do you have any clue why it matters?
[12:36] <strikov> rbasak: systemd with my config creates mysql/mysql by default
[12:37] <rbasak> strikov: I can ask upstream. I'll ask in OFTC in #debian-mysql if you want to follow.
[12:37] <strikov> rbasak: thanks
[12:47] <strikov> rbasak: verified that the fix works on a clean instance
[12:47] <strikov> rbasak: if mysql/root ownership is needed then we need to fall back to tmpfiles.d, though, because RuntimeDirectory can't set ownership other than the service's default (mysql/mysql)
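[editor's note] The tmpfiles.d fallback strikov mentions would be a one-line drop-in; a hypothetical sketch (file path, mode, and the /run/mysqld directory are assumptions matching the bug's description):

```
# /etc/tmpfiles.d/mysql.conf: create /run/mysqld at boot with the
# mysql/root ownership the old upstart job used.
d /run/mysqld 0755 mysql root -
```

Unlike `RuntimeDirectory=`, tmpfiles.d entries can specify any user/group pair, which is the whole reason it would be needed here.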
[12:50] <rbasak> OK. thanks. We'll see what they say.
[12:50] <rbasak> I suspect that the previous mysql/root was a mistake that never mattered, and that mysql/mysql is fine, but might as well check.
[12:51] <strikov> rbasak: ack, want me to come up with a debdiff, or is this pseudo-diff enough?
[12:52] <rbasak> strikov: pseudo-diff is enough - thanks!
[12:53] <mnaser> I'm trying to setup linux-crashdump .. I have followed all the instructions here: https://help.ubuntu.com/lts/serverguide/kernel-crash-dump.html
[12:54] <mnaser> I can verify that it's running from dmesg | grep -i crash ... [    0.000000] Reserving 128MB of memory at 720MB for crashkernel (System RAM: 262045MB)
[12:55] <mnaser> However when trying to force a crash .. echo c > /proc/sysrq-trigger .. nothing happens, I just see it sit on the crash screen
[12:55] <rbasak> strikov: just one thought. Was something else creating the directory in a now non-persistent way, causing it to work until next reboot, that is now superfluous and should now be removed?
[12:57] <pmatulis> mnaser: output to 'cat /proc/sys/kernel/sysrq' ?
[12:57] <mnaser> 176
[12:57] <mnaser> (the server did crash as I saw from the console screen)
[12:57] <mnaser> https://wiki.ubuntu.com/Kernel/CrashdumpRecipe#Allocated_memory_for_the_crash_kernel .. I'm checking if that may be it..
[12:58] <pmatulis> mnaser: you are directly on the console?
[12:58] <mnaser> pmatulis: this is a dell server so I was running this on SSH while looking at screen remotely via idrac
[12:58] <mnaser> idrac = dell's ipmi equiv
[12:58] <pmatulis> mnaser: there you go
[12:59] <pmatulis> "If you are using a network connection, you will lose contact with the system."
[12:59] <mnaser> yes, im aware of that :) because it crashes the server
[12:59] <pmatulis> looks like it's working then
[12:59] <mnaser> however, it is supposed to dump the crash to /var/crash and reboot
[12:59] <mnaser> none of that happened
[12:59] <pmatulis> oh
[12:59] <mnaser> (the whole point of having linux-crashdump heh)
[13:00] <pmatulis> mnaser: yes, then check your settings (value of 'crashkernel')
[13:00] <pmatulis> mnaser: how much RAM do you have?
[13:01] <mnaser> 256GB on this server, and using crashkernel=384M-:128M
[13:05] <pmatulis> mnaser: dell model?
[13:05] <mnaser> R630
[13:09] <pmatulis> mnaser: possible to have someone try directly on the console?
[13:09] <mnaser> looks like some people had to raise the crashkernel value to 256MB for it to work
[13:09] <mnaser> let me give that a shot
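[editor's note] Raising the reservation as discussed means editing the crashkernel boot parameter. On Ubuntu the kexec-tools/linux-crashdump packages usually set it via a grub drop-in; a sketch (verify the exact file and existing value on the machine before changing it):

```
# /etc/default/grub.d/kexec-tools.cfg — bump the reservation from 128M to 256M
GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINux_DEFAULT crashkernel=384M-:256M"
# then apply and reboot:
#   sudo update-grub && sudo reboot
# afterwards confirm with: dmesg | grep -i crashkernel
```

The syntax `384M-:256M` means "on any machine with at least 384M of RAM, reserve 256M for the crash kernel".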
[13:21] <mnaser> and it looks like kexec reboots aren't working either, which I suspect might be related to this
[13:34] <Arrick> Hey all, I have a 12.04.3  server running a moodle/totara site... I am wondering if I install the latest server version on a new machine, will there be any snags I run into if I export the DB's and copy over files?
[13:59] <pmatulis> mnaser: try from the actual console
[14:00] <pmatulis> mnaser: so it worked when you raised the value to 256MB?
[14:54] <strikov> rbasak: bah, hit this bug while reproducing: https://bugs.launchpad.net/ubuntu/+source/mysql-5.6/+bug/1436178
[14:54] <strikov> rbasak: fixing it as well
[14:55] <rbasak> strikov: thanks!
[15:30] <lordievader> Good afternoon.
[16:02] <qhartman> I have a cluster of servers running 12.04 and 14.04 on AWS that every once in a while will produce log entries with completely bogus dates in them
[16:03] <qhartman> So far I've only seen it in entries that end up in auth.log from sshd and cron
[16:03] <qhartman> the dates are always some period of time in the future, ranging from a few weeks to several months
[16:04] <qhartman> oh oops, not true, I just double-checked, and there are ones that show up in the past as well
[16:05] <qhartman> here's a snippet: https://gist.github.com/qhartman/62d2a49cd3c5a4fbd589
[16:05]  * maxb saw a kernel bug that involved time-jumps of around 2**32 / 10**9 seconds (~ 4 seconds) into the future once, but never anything that wild
[16:06] <qhartman> I've done a bunch of research on this sort of thing, and I've only found one other report and it was dismissed as an sshd bug
[16:06] <qhartman> yeah, I've never seen anything like it before
[16:07] <qhartman> and my log aggregator shows that it's affecting most (if not all) the servers in this cluster
[16:07] <qhartman> I discovered the problem yesterday and I had to clean up about 100k entries before things started looking right
[16:08] <rbasak> qhartman: those dates look like past dates to me
[16:08] <rbasak> Why do you say they're from the future?
[16:08] <qhartman> that snippet was taken on the 23rd
[16:08] <rbasak> All the Mar 23 ones are consecutive.
[16:08] <qhartman> so  there's Dec 8
[16:08] <rbasak> Feb 16 and Mar 3 etc are all in the past relative to that.
[16:08] <rbasak> Dec 8 is also before Mar 23.
[16:08] <qhartman> and in the ones I cleaned out most of them were dec, nov, spe, etc
[16:09] <qhartman> well, I suppose, it's not clear from that entry which year it is
[16:09] <qhartman> but you could be right
[16:09] <rbasak> Is it possible that your log aggregator is somehow lagging in logging those particular messages?
[16:09] <qhartman> nope, that snippet is from the auth.log file on the host
[16:09] <qhartman> we just discovered the problem via the aggregator
[16:10] <rbasak> Maybe put something on each host that logs the time sequentially, and alerts you when it jumps too far forwards or backwards?
[16:10] <rbasak> You could use CLOCK_MONOTONIC to avoid glitching on system time changes.
[16:10] <rbasak> Could it be some NTP related problem?
[16:10] <qhartman> I'm not convinced it's an actual time problem
[16:11] <qhartman> So far, the spurious messages have only shown up in auth.log
[16:11] <qhartman> all the other files are normal
[16:11] <qhartman> and all the messages from our java applications we're running are normal as well
[16:11] <rbasak> I believe the timestamp is written by the logging daemon, not the thing doing the logging. But I could be wrong - I don't know the syslog protocol.
[16:11] <qhartman> though I suppose the jvm may do some kind of internal time keeping
[16:12] <rbasak> It would make sense for the timestamp to be determined by the client end actually. Closer to the real time the log message was generated.
[16:12] <qhartman> yeah
[16:12] <qhartman> my understanding is that the timestamp is sent to syslog as part of the message
[16:13] <qhartman> and my understanding is that it's actually sent in epoch time, then converted appropriately by syslog, but I'm less certain about that.
[16:13] <rbasak> Also each of your cron syslog entries has a correct timestamp
[16:13] <rbasak> And most of your sshd entries do not
[16:13] <qhartman> the cron errors are less common
[16:13] <rbasak> So I agree with you now - it looks likely to be the sshd.
[16:13] <qhartman> I must have missed them in that snippet
[16:13] <rbasak> Oh, you do have cron errors too?
[16:13] <qhartman> yeah
[16:13] <qhartman> but they are far less common
[16:14] <rbasak> I still think something is flapping your system time.
[16:14] <qhartman> our monitoring system creates a lot of log noise from sshd
[16:14] <rbasak> Maybe something being driven from that cron?
[16:15] <qhartman> That definitely makes the most sense at a global level, but there are no anomalies that I've found in other logs
[16:15] <qhartman> and the time is ntp managed
[16:15] <qhartman> if it got messed up that badly ntp would give up
[16:19] <qhartman> Looking through more of these entries, it would be more consistent to consider the bad entries to all be in the past, and just lacking year specificity, though that would indicate that the stamps are likely not sent in epoch time.
[16:21] <qhartman> looks like the syslog format expects timestamps per this RFC: https://tools.ietf.org/html/rfc3339
[16:22] <qhartman> so not epoch, but it does include the year, so unless rsyslog is swallowing that information then these are future dates... :\
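[editor's note] The year ambiguity being debated comes down to which timestamp format is in play: the traditional BSD-syslog timestamp written into files like auth.log carries no year at all, while RFC 3339/5424 timestamps do. A quick illustration (GNU `date` assumed):

```shell
# Traditional BSD/RFC 3164-style syslog timestamp, as seen in auth.log:
# no year field, so "Dec 8" could be last December or a future one.
date '+%b %e %H:%M:%S'

# RFC 3339 timestamp (the style RFC 5424 syslog uses): year included.
date --rfc-3339=seconds
```

If rsyslog is writing the traditional format, the year is discarded at write time regardless of what the sender supplied, which supports rbasak's "past dates" reading.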
[16:23] <rbasak> Based on your sample I still think they're past dates.
[16:23] <rbasak> They fit better. Clustered closer to the current date.
[16:24]  * qhartman nods
[16:24] <rbasak> Maybe edit rsyslog configuration to make sure?
[16:24] <qhartman> yeah, that's what I'm digging into now
[16:24] <rbasak> And put some logging of system time in.
[16:24] <qhartman> trying to see how the default rsyslog message templates munge the date
[16:24] <rbasak> Sleep on a 60 second loop (using CLOCK_MONOTONIC, without referring to system date) and log the date.
[16:25] <rbasak> Maybe also use logger to log the date through syslog as well as directly to a file.
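[editor's note] rbasak's suggestion could look something like this sketch. The interval, thresholds, and log path are arbitrary choices, and a plain relative `sleep` stands in as the monotonic pacer (relative sleeps are not affected by wall-clock changes):

```shell
# check_clock_jump: sleep for INTERVAL seconds, then report how many
# wall-clock seconds appear to have elapsed. A sane clock reports roughly
# INTERVAL; a large deviation means the system time jumped during the sleep.
check_clock_jump() {
    interval="${1:-60}"
    before=$(date +%s)
    sleep "$interval"
    after=$(date +%s)
    echo $((after - before))
}

# Example watchdog loop, logging both via syslog and directly to a file
# so the two records can later be compared (path is an arbitrary choice):
# while :; do
#     delta=$(check_clock_jump 60)
#     logger -t timewatch "delta=${delta}s"
#     echo "$(date --rfc-3339=seconds) delta=${delta}s" >> /var/log/timewatch.log
# done
```

A delta far from 60 pins the problem on the system clock itself; a clean delta alongside bogus syslog stamps would point back at the logging path instead.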
[16:25] <qhartman> yeah, makes sense
[19:45] <jeeves_moss> I'm having an issue with mod_proxy.  I can curl the URL from the CLI on the system, but I can't access it via the outside world.   here is my config.  http://pastebin.com/wbZYwDzR
[19:57] <patdk-wk> it's missing the url you used
[20:00] <patdk-wk> that is one very strange proxy config
[21:00] <Spyidonas> Hello guys, I have a small issue with apache and php. I have 2 folders under /var/www: one is a normal website that works as expected (joomla website); the other is a folder I created, and while it has user and group www-data (same as the joomla site), php isn't able to write to this folder (file_put_contents error). Is there any php.ini setting that enables this function, or a special apache config? I have tried 755, 777 and 0777 as permissions without success...
[21:03] <Spyidonas> I am trying to install Rainloop; those are the permissions I need http://www.rainloop.net/docs/permissions/ , still it doesn't work.
[21:45] <elnur> Is this the place to get help on creating a network bridge in Ubuntu Server?
[21:46] <sarnold> Spyidonas: mode 777 is almost always the wrong thing to do..
[21:47] <sarnold> elnur: sure
[21:47] <lordievader> elnur: Sure, what do you want to know?
[21:48] <elnur> I have a server with an IPv6 /64 subnet. I want to create several LXC containers and assign an IPv6 address to each so that I can access them from outside.
[21:50] <Spyidonas> sarnold: Yes, this issue is about permissions; normal permissions don't work so I checked whether full permissions work, but they don't...
[21:51] <sarnold> Spyidonas: check aa-status output to see if your apache is confined by apparmor..
[21:51] <Spyidonas> sarnold: There's no AppArmor installed
[21:52] <sarnold> Spyidonas: check the logs for apache/php, there may be more detailed errors logged somewhere else..
[21:53] <Spyidonas> sarnold: the error log is empty, the access log says I accessed the website from my browser
[21:54] <Spyidonas> sarnold: i have ErrorLog ${APACHE_LOG_DIR}/web_error.log
[21:54] <Spyidonas> sarnold: is there any other log i can see?
[21:54] <sarnold> Spyidonas: maybe a vhost-specific log if you're using vhosts?
[21:56] <Spyidonas> sarnold: there are no other logs
[21:56] <Spyidonas> sarnold: the error is [202] Data folder permissions error [file_put_contents]
[21:56] <Spyidonas> sarnold: i have tried permissions all the way to 0777
[21:57] <lordievader> elnur: Yes..?
[21:58] <sarnold> elnur: there's probably already a bridge created by lxc for the containers.. there's some information on more complex configurations here https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html
[21:58] <Spyidonas> sarnold: this ( https://github.com/RainLoop/rainloop-webmail/issues/262 ) suggests i create a test file but the output is 5) file_put_contents=0
[22:00] <sarnold> Spyidonas: how about the is_dir, is_readable, is_writable, and mkdir tests?
[22:00] <Spyidonas> sarnold: all of them output 1 (except unlink)
[22:00] <Spyidonas> sarnold: 1) is_dir=1
[22:00] <Spyidonas> 2) is_readable=1
[22:01] <Spyidonas> 3) is_writable=1
[22:01] <Spyidonas> 4) mkdir=1
[22:01] <Spyidonas> 5) file_put_contents=0
[22:01] <Spyidonas> 6) unlink=0
[22:01] <Spyidonas> 7) rmdir=1
[22:01] <sarnold> Spyidonas: weird. I wonder if it is suhosin or whatever that slightly-safer-php thing is called..
[22:02] <Spyidonas> sarnold: well the weird part is that my other folder that hosts a normal website works without issues with joomla
[22:02] <Spyidonas> sarnold: maybe joomla doesn't use file_put_contents anywhere...
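[editor's note] A pattern like the one Spyidonas reports (is_writable=1 but file_put_contents=0) can be narrowed down by mimicking the failing write from outside PHP: if a plain shell write succeeds where PHP's fails, the restriction lives inside PHP (an `open_basedir` line in php.ini is one possibility) rather than in filesystem permissions. A sketch:

```shell
# write_probe: try a plain write-then-remove in DIR, mimicking PHP's
# file_put_contents + unlink at the filesystem level.
write_probe() {
    dir="$1"
    if echo probe > "$dir/.write_test" 2>/dev/null; then
        echo "write: OK"
    else
        echo "write: FAILED"
    fi
    rm -f "$dir/.write_test" 2>/dev/null
}
```

Run it as the web-server user against the Rainloop data directory, e.g. `sudo -u www-data sh -c '. ./probe.sh; write_probe /var/www/rainloop/data'` (the path is a guess at the install location, not confirmed in the discussion).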
[22:11] <elnur> sarnold, there is a bridge, yes, but it doesn't get a public IPv6 address.
[22:12] <lordievader> elnur: Did you give it a slave that can get a public IP?
[22:12] <elnur> lordievader, what does that even mean? :)
[22:12] <elnur> I'm not a network guy, so I don't understand a lot of stuff.
[22:12] <elnur> But I wish to learn. :)
[22:14] <lordievader> elnur: Well a bridge device can be seen as a virtual switch. It will not magically get an ip address but it can take the ip of a physical ethernet device if that gets added to the bridge.
[22:14] <lordievader> elnur: https://help.ubuntu.com/community/NetworkConnectionBridge
[22:14] <elnur> lordievader, okay. The question is how to do that.
[22:15] <lordievader> elnur: brctl addif <bridge> <eth-nic>
[22:15] <elnur> lordievader, the problem with that link is that I don't see any IPv6 examples.
[22:16] <elnur> I have a single IPv4 address and a /64 IPv6 subnet. So I'd want to use IPv6 for this.
[22:19] <lordievader> I have yet to mess with IPv6, I don't really have any IPv6 knowledge. But I suppose you can route it through the bridge interface.
[22:25] <elnur> Okay. :(
[22:29] <sarnold> elnur: how about the lxc config manpage I pasted? I haven't tried it myself but it does look like you ought to be able to configure what you want entirely from container configs
[22:33] <elnur> sarnold, too much low-level stuff there I don't understand.
[22:34] <sarnold> elnur: scroll down to the examples, I think that'll be easier
[22:35] <elnur> sarnold, those examples are LXC specific. I have yet to figure out how to create a bridge.
[22:36] <sarnold> elnur: that bit is almost too easy: brctl addbr <name>
[22:36] <elnur> sarnold, will that persist after a reboot?
[22:36] <sarnold> elnur: I was a bit underwhelmed when I found that :) hehe
[22:36] <sarnold> elnur: no; that's why it's nice to tie it into something else, like lxc if you want to use containers...
[22:38] <elnur> sarnold, I mean, the bridge is supposed to be persistent. So that when I reboot the server, it all still works.
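[editor's note] For the persistence elnur is after, a bridge created with `brctl addbr` can instead be declared in /etc/network/interfaces so ifupdown recreates it at every boot. A minimal sketch, assuming eth0 is the physical NIC and using an address from the 2001:db8::/32 documentation prefix as a placeholder:

```
# /etc/network/interfaces (requires the bridge-utils package)
auto br0
iface br0 inet6 static
    bridge_ports eth0
    address 2001:db8::1
    netmask 64
```

Containers could then attach with `lxc.network.link = br0` in their configs and take addresses from the same /64, which is what the lxc.container.conf examples show.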
[23:40] <k2gremlin> sarnold, You around?
[23:41] <sarnold> hey k2gremlin
[23:50] <k2gremlin> sarnold, Sorry got to go out for a bit. I have some questions for you later if you're still around.
[23:51] <sarnold> k2gremlin: there's usually someone around even if I'm not, hehe