[01:32] <grendal_prime> no i dont get to grub
[05:31] <Datz> Hi, my samba password keeps getting reset every day. How can I stop this, and why is it happening?
[09:02] <lordievader> Good morning.
[09:04] <soahccc> no iowait, CPU is at like 1% and memory is chilling out still a load of 35... wtf
[09:05] <lordievader> soahccc: Processes waiting for IO? (D state)
[09:06] <soahccc> lordievader: no that I could say. just one is occasionally running otherwise all are sleeping
[09:07] <soahccc> lordievader: ohh I see one is in D state... rsync  :<
[09:10] <lordievader> That is then likely the culprit. And the fun thing about the D state is that you cannot kill them...
[09:14] <soahccc> lordievader: I just noticed and I think the machine did not survive the reboot :/
[09:14] <lordievader> ?
[09:15] <soahccc> hmm or it's checking disks, has been a few months since last reboot :)
[09:16] <soahccc> we have 3 identical machines and 1 syncs to 2 and 3... on the third one I had 50 rsync tasks in D state whilst the second server is all fine... So I guess something went south there
[09:17] <lordievader> NFS share unreachable?
[09:18] <soahccc> there is no nfs share just rsync over ssh
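The D-state check lordievader describes can be done from the shell; a quick sketch for listing (and counting) processes stuck in uninterruptible sleep:

```shell
# List processes in uninterruptible sleep (D state). These are usually
# blocked on I/O and cannot be killed until the I/O completes or fails.
ps -eo state,pid,user,wchan:30,cmd --no-headers | awk '$1 ~ /^D/'

# Quick count -- handy for spotting a pile-up of stuck rsync workers:
ps -eo state --no-headers | grep -c '^D' || true
```

The `|| true` keeps the count from returning a nonzero exit status when no D-state processes exist.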
[09:51] <jamespage> sarnold, do you have an eta for when you might get to look at the MIR in bug 1407695 ?
[09:51] <jamespage> https://bugs.launchpad.net/ubuntu/+source/python-pysaml2/+bug/1407695
[09:56] <ikonia> can I ask why mir is being raised as a server bug ? is mir still only optional on the server install ?
[09:58] <rbasak> ikonia: https://wiki.ubuntu.com/MainInclusionProcess
[09:58] <rbasak> ikonia: MIR != Mir
[10:00] <lordievader> To keep things simple...
[10:10] <ikonia> thank you
[10:16] <abhishek_> can I configure a centralised patch management server? I have around 40 Ubuntu servers
[10:18] <Walex> abhishek_: yes. Look at APT repo caching or mirroring.
[10:19] <abhishek_> ok . thank you Walex
[10:19] <Walex> soahccc: 'D' means waiting for IO usually
[10:20] <Walex> abhishek_: look for example at 'apt-cacher', 'approx', 'apt-mirror',
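A minimal sketch of the caching approach Walex suggests, using apt-cacher-ng (a close relative of 'apt-cacher'; the hostname cache.example.com is a placeholder, and 3142 is the daemon's default port):

```shell
# On the cache host (one machine serves all ~40 servers):
#   sudo apt-get install apt-cacher-ng

# On each client, stage the proxy stanza so you can eyeball it first,
# then move it into place with sudo:
conf='Acquire::http::Proxy "http://cache.example.com:3142";'
printf '%s\n' "$conf" > /tmp/01proxy
# sudo install -m 644 /tmp/01proxy /etc/apt/apt.conf.d/01proxy
```

After that, every `apt-get update`/`upgrade` on the clients pulls through the shared cache, so packages are downloaded from the archive only once.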
[13:07] <hazzardous> Hi, what is the best IPsec server package ?
[13:09] <jpds> hazzardous: strongSwan.
[13:09] <jpds> !best | hazzardous
[13:11] <hazzardous> jpds, so if you have to connect 2000 machines with a network through VPN, do you choose that solution?
[13:11] <jpds> hazzardous: It's in main and thus gets security updates.
[13:11] <jpds> hazzardous: Just: sudo apt-get install -y strongswan # Done.
[13:11] <jpds> hazzardous: I have lots of experience with it and it just works.
[13:11] <jpds> hazzardous: And for you, it's made in .ch.
[13:12] <hazzardous> jpds, ipsec-tools and openswan are also in standard distrib...
[13:12] <jpds> hazzardous: Not in main.
[13:12] <hazzardous> Swiss is a ++ :-)
[13:12] <jpds> hazzardous: And both projects have been abandoned as far as I know.
[13:13] <hazzardous> ok... so i'll take a look to strongswan !
[13:13] <hazzardous> thank you
[13:14] <jpds> hazzardous: There is no "best", you need to poke around and see what fits your needs.
[13:14] <jpds> I can't think of why strongSwan wouldn't be able to handle 2k clients.
[13:14] <jpds> And it's all open-source software.
[13:18] <hazzardous> jpds, thank you for your advice
[13:18] <jpds> hazzardous: https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes#strongSwan
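For flavour, a minimal sketch of what a strongSwan responder for many road-warrior clients might look like in the classic `/etc/ipsec.conf` format (the certificate name and subnets are placeholders, not anything from the discussion):

```
# /etc/ipsec.conf -- minimal responder sketch for road-warrior clients
config setup

conn rw
    keyexchange=ikev2
    left=%any
    leftsubnet=10.0.0.0/16
    leftcert=serverCert.pem
    right=%any
    rightsourceip=10.1.0.0/16
    auto=add
```

With `auto=add` the connection is loaded at startup and waits for clients to initiate; `rightsourceip` hands each client a virtual address out of the given pool.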
[14:16] <patdk-wk> heh?
[14:16] <patdk-wk> openswan hasn't been abandoned
[14:16] <patdk-wk> the maintainers don't update it often though, and the orig developer forked it to libreswan
[14:16] <jpds> patdk-wk: It has.
[14:17] <patdk-wk> what do you mean, it has
[14:18] <jpds> patdk-wk: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=736557
[14:18] <patdk-wk> ok the debian package of it has been, but not openswan itself
[14:19] <patdk-wk> though everyone should have moved onto libreswan by now
[14:19] <jpds> strongSwan's also going strong.
[14:20] <patdk-wk> strongswan is strong for a few reasons
[14:20] <patdk-wk> cause it is very diverse
[14:20] <patdk-wk> but that is also its problem, making it more confusing and heavy
[14:22] <jpds> libreswan documentation seems a bit sparse.
[14:22] <patdk-wk> I thought strongswan was more sparse
[14:23] <jpds> I found it was fairly simple once one got their head around it.
[14:23]  * jpds was comparing https://libreswan.org/wiki/Configuration_examples vs. https://wiki.strongswan.org/projects/strongswan/wiki/UserDocumentation
[14:24] <patdk-wk> well, strongswan does a LOT more than libreswan too
[14:25] <patdk-wk> also requiring more documentation
[14:25] <patdk-wk> gone over those strongswan things over and over many times, till I got it kindof working
[14:25]  * jpds wrote a puppet module for the necessary bits: https://github.com/jpds/puppet-strongswan
[14:25] <patdk-wk> not saying it's not nice, but it's way overkill for most
[14:32] <Walex> put it briefly, both strongSwan and libreSwan are pretty good. Other IPsec implementations exist but are not as actively maintained.
[14:33] <Walex> Libreswan is currently in some ways a bit behind strongSwan but it is also being quite actively developed.
[14:33] <Walex> for most people they are equivalent.
[16:35] <sudormrf> ppetraki, you around? :)
[17:00] <ppetraki> sudormrf, just in time for lunch :) sup?
[17:03] <sudormrf> in a 4 disk mdadm setup using RAID 1, you would have to create two raid arrays to have two usable disks, correct?
[17:11] <ppetraki> sudormrf, the min number of disks required to form a RAID1 is 2. Then there's the issue of hot spares, which is whatever your policy is
[17:12] <ppetraki> sudormrf, http://www.thegeekstuff.com/2010/08/raid-levels-tutorial/
[17:12]  * ppetraki very high level overview
[17:21] <patdk-wk> sudormrf, depends on what your goal is
[17:21] <patdk-wk> you can do two raid1 (of two disks)
[17:21] <patdk-wk> or a single raid1 (of 4 disks)
[17:21] <Azaril> can you use trusty packages in precise?
[17:21] <patdk-wk> or a raid10 (of 4 disks)
[17:22] <patdk-wk> Azaril, yes and no, it highly depends on the package itself
[17:22] <patdk-wk> but normally the answer is *no*
[17:23] <sudormrf> patdk-wk, the goal would be to have a single array of 4 disks, 2 usable, 2 mirrored in one raid array.  so would that be raid 10?  if so, does mdadm support that?
[17:24] <patdk-wk> yes, raid10
[17:25] <sudormrf> patdk-wk, ok cool.  I am testing all this out in a VM before I go and use it, so just trying to figure things out :)
[17:25] <sudormrf> first time using mdadm
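The 4-disk RAID10 layout agreed on above can be created roughly like this (a command sketch; `/dev/sd[b-e]` are placeholder device names -- only run this against disks you can afford to wipe):

```shell
# Create a 4-disk RAID10: 4 disks in, 2 disks of usable capacity.
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial sync:
cat /proc/mdstat

# Persist the array so it assembles on boot (Ubuntu paths):
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```

The array is usable immediately; the background sync just has to finish before you get full redundancy.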
[17:55] <sudormrf> ok that is all setup.  now working on setting up whatever is necessary to make it a time machine client as well :D.
[18:07] <ppetraki> sudormrf, how fast is it? fio --fallocate=none --direct=1 --ioengine=libaio --prioclass=1 --prio=0 --time_based --mem=malloc --randrepeat=0 --norandommap --runtime=10 --bs=4k --rate=0,0 --iodepth=1 --rw=randread --size=0 --offset=0 --name=/dev/loop200 --cpus_allowed=0  --grou
[18:08] <ppetraki> sudormrf, change /dev/loop200 to MD device. *do not do a write test to a block device unless you don't care about the data*
[18:13] <sudormrf> ppetraki, not really caring about a test right now
[18:13] <sudormrf> it is entirely virtual and entirely for testing.  just trying to get my feet wet with it before I actually build something out :)
[18:14] <ppetraki> sudormrf, sure, just good to know. also that should be --group on the end, copy paste error
[18:15] <sudormrf> I can test
[18:16] <sudormrf> if you want.  will probably be pretty slow because VM over USB
[18:16] <sudormrf> but sure
[18:20] <sudormrf> ppetraki, any experience with setting up time machine in ubuntu server?  seeing different tuts all over the net and all of them are slightly different.
[18:21] <ppetraki> sudormrf, it's more for your reference; it's a good idea just to see what it's like. Also, on a live array you can do read block tests over time to see if it's degrading, which is a sign of a backing store beginning to fail, as reads take longer to complete.
[18:21] <ppetraki> sudormrf, no experience with time machine
[18:23] <sudormrf> ppetraki, thanks :).  I will try it :D.  do you recommend read block tests be done with a cron job on a periodic basis (once a day or so?).  also, what is a backing store?
[18:23] <ppetraki> sudormrf, backing stores are the things that make up the MD
[18:23] <ppetraki> sudormrf, once a month is fine.
[18:24] <sudormrf> once a month.  good to know.  so if a backing store is beginning to fail, does that mean a drive is going to fail?
[18:24] <ppetraki> sudormrf, backing store *is* the drive, the MD device is considered a logical volume
[18:25] <sudormrf> oh.  so when doing the test does it tell you which backing store is having the problem, or does it only show the whole array?
[18:26] <sudormrf> would smart checks accomplish basically the same thing?
[18:26] <ppetraki> sudormrf, So testing against MD0 tells you generally if there's a problem, and if there is then you would start looking at the backing stores e.g. SD devices.
[18:27] <ppetraki> sudormrf, you can run it periodically or make a script
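The periodic read check ppetraki describes can be sketched without fio, simply by timing a bulk read. This demo runs safely against a scratch file (so the timing mostly reflects page cache); on a real box you would point `FILE` at `/dev/md0` and compare the elapsed time month to month:

```shell
# Rough read-latency probe: time a sequential read and report it.
FILE=/tmp/readtest.img
dd if=/dev/zero of="$FILE" bs=1M count=64 2>/dev/null   # scratch data for the demo

start=$(date +%s%N)
dd if="$FILE" of=/dev/null bs=1M 2>/dev/null
end=$(date +%s%N)
echo "read took $(( (end - start) / 1000000 )) ms"
```

A steadily climbing number for the same device and size is the "taking longer to complete" signal mentioned above.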
[18:27] <sudormrf> what I was thinking was to use NRPE to do SMART checks
[18:27] <sudormrf> is one method better than the other?
[18:28] <ppetraki> sudormrf, what real SANs do is keep performance counters for all the backing stores and look for deviations; these generally precede SMART triggers
[18:28] <sudormrf> ah.  gotcha
[18:29] <ppetraki> not really familiar with nagios, it probably works
[18:30] <sudormrf> there may be an NRPE plugin that does the test that you are describing
[18:30] <sudormrf> I will have to look in to it
[18:31] <ppetraki> probably not, that's work :)
[18:32] <sudormrf> LOL I use nagios right now and like it.  for something as basic and yet as critical as you are describing I would be surprised if someone hasn't created a plugin to do this.
[18:37] <ppetraki> sudormrf, it requires tuning and lots of testing. sure, I could write one to generalize it... and then be inundated with bug reports for false positives
[18:37] <sudormrf> no no no, not asking you to do it.  saying that someone may have already done it :D
[18:43] <ppetraki> sudormrf, maybe
[18:43] <sudormrf> yeah.  I will check in to it :)
[19:45] <sudormrf> well I have made some headway in regards to timemachine.  got it setup on the server and it is showing in the OSX vm.  just can't get it authenticated (doesn't work in finder either), so checking in to that.
[19:46] <ppetraki> cool
[19:47] <nickander> you are running time machine on an ubuntu server?
[20:00] <sudormrf> nickander, trying to setup the server to receive time machine backups
[20:00] <sudormrf> for some reason the OSX vm is having issues connecting to it at all (not just time machine)
[20:00] <sudormrf> trying to track down what is happening
[20:01] <sudormrf> think I found the problem
[20:02] <nickander> sudormrf: are you using smb?
[20:03] <sudormrf> nickander, have you done this setup?
[20:03] <nickander> no, but i work a lot with enterprise mac / linux stuff
[20:03] <sudormrf> oh, nice :D.  well the problem appears to be with the avahi-daemon
[20:03] <nickander> are you trying to use .local addresses?
[20:04] <nickander> because i would not recommend that, i think apple is trying to phase those out
[20:04] <sudormrf> http://paste.ubuntu.com/9788170
[20:04] <sudormrf> shouldn't be
[20:05] <sudormrf> the server is not acting as a DHCP/DNS server, so if .local is appended automatically that is something I would have to look at
[20:05] <nickander> avahi allows a server to interact with the bonjour service
[20:06] <sudormrf> you see the netatalk panic
[20:06] <sudormrf> looks like it may have to do with the order the services start
[20:06] <nickander> haven't played much with netatalk
[20:06] <sudormrf> going to try something
[20:06] <nickander> bonus points for using afp
[20:10] <sudormrf> heh.  I am just looking at the tuts I could find.  if you have a better suggestion (that doesn't have this silly issue with netatalk) and works I am willing to try it :D.  doing this all in a VM first so when I actually build out the system the setup will be quick
[20:32] <sudormrf> made some progress.  can now connect to it through finder, but now time machine doesn't see it.  trying some more things.
[20:38] <sudormrf> got it!
[20:38] <sudormrf> yay
[20:43] <ppetraki> \o/
[21:18] <sudormrf> :D
[21:19] <sudormrf> in reality this isn't going to get used all that much as anything important is on the main server
[21:19] <Guest33455> Hi, I have a (virtual) server that was migrated to another hardware node and no services are started, do you have any recommendation to find what causes the problem? I've manually connected to my server over VNC to enable networking and ssh but otherwise no services are running excepting the default ones
[22:07] <byprdct> Hi everyone. What's the best way to replicate a base server I always use?
[22:07] <nickander> rsync
[22:08] <nickander> oh wait, what do you mean by base server
[22:08] <nickander> as in the base install before services are configured?
[22:12] <Guest33455> follow up on my previous question (which you can ignore): I've found that "initctl list" shows all services in "stop/waiting" mode, any tips to find the cause?
[22:13] <byprdct> hi nickander, I was thinking of using a base after I install and modify configuration files like nginx etc
[22:14] <byprdct> so for instance if I set up server A with all the stuff I like to use to host static websites, and I want to be able to use that on different hosting providers like Digital Ocean, AWS, Joyent etc, what would be the best way to use server A on the different providers?
[22:15] <byprdct> without trying to go the docker route
[22:15] <byprdct> overkill I think
[22:15] <klerik> Hi! Just installed a KVM server with virt-manager. Trying to run a VM from virt-manager and it says "Cannot access backing file /mnt/VM/xpsp3_lv_kvm.qcow: Permission denied"
[22:15] <klerik> Which permissions does it need?
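Typical causes of that error are image ownership or a missing traverse (x) bit on a parent directory. A hedged checklist (the path is from the error above; `libvirt-qemu:kvm` is Ubuntu's default qemu user and group, adjust if yours differs):

```shell
# See who can get at the image and its parent directories:
ls -ld /mnt /mnt/VM
ls -l /mnt/VM/xpsp3_lv_kvm.qcow

# Give the qemu process ownership of the image...
sudo chown libvirt-qemu:kvm /mnt/VM/xpsp3_lv_kvm.qcow
# ...and make sure it can traverse into the directory:
sudo chmod o+x /mnt /mnt/VM
```

If permissions look fine, AppArmor confinement of libvirt is the next suspect; `dmesg` will show denials if that's the case.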
[22:17] <sudormrf> ok, yep.  everything is now working as expected there.  sweet. that should cover all the stuff I am trying to do with this thing that was new to me (mdadm and timemachine).. weeeeeee
[22:21] <sudormrf> what do you guys use to backup your servers?  I am thinking of just doing a tar backup of everything, but was wondering if there is a better solution.
[22:23] <sudormrf> was thinking of using this method: http://www.aboutdebian.com/tar-backup.htm
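The tar method from that link boils down to archiving the tree while excluding volatile paths. A toy, self-contained version (the paths here are demo stand-ins, not a real system backup):

```shell
# Build a tiny demo tree with one file to keep and one to exclude:
src=$(mktemp -d)
mkdir -p "$src/etc" "$src/tmp"
echo 'keep me'  > "$src/etc/app.conf"
echo 'scratch'  > "$src/tmp/cache.dat"

# Archive it into a dated tarball, skipping the excluded directory:
backup="/tmp/backup-$(date +%F).tar.gz"
tar -czf "$backup" --exclude='./tmp' -C "$src" .

# Verify what actually made it in:
tar -tzf "$backup"
```

For a real box you would run this as root from `/`, excluding `/proc`, `/sys`, `/dev`, `/tmp`, and the backup destination itself.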
[22:25] <ppetraki> sudormrf, rsync.net
[22:26] <ppetraki> sudormrf, [shameless plug to own blog] http://peterpetrakis.blogspot.com/2013/06/automating-and-encrypting-duplicity.html
[22:27] <rberg_> duplicity is pretty convenient if you want encryption / compression
[22:32] <sudormrf> ppetraki, checking out your blog
[22:32] <sudormrf> in this case, encryption is not necessary.  just compressed archives.
[22:32] <ppetraki> sudormrf, yeah you can just skip that part then
[22:33] <sudormrf> backing it up to a different network share so in case things explode I can quickly recover
[22:34] <ppetraki> sudormrf, EOD here, hope that helps.
[22:36] <sudormrf> ppe? lol
[22:36] <sudormrf> oh
[22:36] <sudormrf> laters
[22:36] <sudormrf> rberg_, what makes duplicity better than the tar'ing method?  just curious, never used duplicity :)
[22:38] <sudormrf> looking at the info here: http://www.cyberciti.biz/faq/duplicity-installation-configuration-on-debian-ubuntu-linux/ and specifically the exclude section, it looks almost identical
[22:38] <sudormrf> more robust?
[22:40] <sudormrf> built in rotation is nice.
[22:45] <rberg_> everything duplicity does you can do with the standard tools and a big pipeline; it's just a bit more convenient I think..
[22:46] <MACscr> how can i disable any of this automatic ipv6 stuff on my servers? i only want it setup with what i have in my network/interfaces file, nothing else.
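One common answer to MACscr's question is to tell the kernel to ignore router advertisements and skip address autoconfiguration, so only the static config in `/etc/network/interfaces` applies. A sketch using sysctl (filename is a placeholder; these keys are per-interface, so `all`/`default` cover existing and future interfaces):

```
# /etc/sysctl.d/10-ipv6-noauto.conf -- don't accept RAs or autoconfigure
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.default.accept_ra = 0
net.ipv6.conf.all.autoconf = 0
net.ipv6.conf.default.autoconf = 0
```

Apply with `sudo sysctl -p /etc/sysctl.d/10-ipv6-noauto.conf` (or reboot); addresses already autoconfigured will age out or can be removed with `ip -6 addr del`.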
[22:54] <sudormrf> rberg_, yeah that is what it is looking like.  will probably use duplicity due to the ease of rotation
[22:56] <sudormrf> hmm.  maybe not
[22:57] <sudormrf> hmm there we go
[23:01] <sudormrf> testing this out in a VM right now to see how it goes.  if all goes well I will create a script and pop it on to my two servers :D
[23:03] <sudormrf> rberg_, does duplicity use compression by default?
[23:04] <rberg_> thats a question for the man page :)
[23:06] <sudormrf> heh
[23:06] <sudormrf> truf.  brb