[00:43] <Amgine> huh. After 30-some hours of data uploading between old and new server, I went to check progress and got...
[00:43] <Amgine> @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
[00:44] <Amgine> Is my best solution to rip it down and try to rebuild the server more securely?
[00:45] <sarnold> owww
[00:45] <sarnold> whatever you do don't type passwords into that thing until you've figured out what's going on
[00:45] <sarnold> it might be as easy as using sudo ssh by accident, and getting an _ancient_ /root/.ssh/known_hosts file entry or something similar to that
[00:45] <sarnold> or it might be that it's now someone else's computer and they're not very quiet about it
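A minimal way to check sarnold's stale-entry theory, demonstrated against a throwaway known_hosts file so it runs anywhere (the hostname "newserver" and all paths are assumptions):

```shell
# Build a throwaway known_hosts entry for demonstration.
rm -f /tmp/demo_key /tmp/demo_key.pub /tmp/known_hosts /tmp/known_hosts.old
ssh-keygen -t ed25519 -N '' -f /tmp/demo_key -q
printf 'newserver %s\n' "$(cut -d' ' -f1-2 /tmp/demo_key.pub)" > /tmp/known_hosts
# Look up which key is recorded for the host (to test the accidental-sudo theory,
# run with sudo and -f /root/.ssh/known_hosts instead):
ssh-keygen -F newserver -f /tmp/known_hosts
# Only after confirming the key change is legitimate, drop the stale entry:
ssh-keygen -R newserver -f /tmp/known_hosts
```

Against the real files, `ssh-keygen -F <host>` and `ssh-keygen -R <host>` default to `~/.ssh/known_hosts`, so the `-f` flag can be dropped.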
[05:01] <cncr04s> Is there any setting to control how often Linux flushes the drive write cache to disk? I have a UPS and 64 GB of RAM, but it will read 4-16 GB of data (from the network or another disk) before it begins to write to the disk; not sure why it waits that long.
[05:03] <sarnold> cncr04s: I think the sysctls labeled "dirty_" are probably most useful to you https://www.kernel.org/doc/Documentation/sysctl/vm.txt
[05:29] <RoyK> cncr04s: probably not a good idea if you want consistent data in case of a panic or similar
[05:37] <sarnold> RoyK: I think he wants to make it write more frequently :)
[05:38] <RoyK> sarnold: oh - the other way around :)
[05:38] <sarnold> yeah :)
[05:38] <RoyK> cncr04s: possibly ext4 writeback doing it
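The "dirty_" knobs sarnold points at can be inspected like this; the byte values in the comments are illustrative assumptions, not recommendations:

```shell
# Current thresholds (the files under /proc/sys/vm mirror the sysctl names).
cat /proc/sys/vm/dirty_background_ratio   # % of RAM dirty before background writeback starts
cat /proc/sys/vm/dirty_ratio              # % of RAM dirty before writers are throttled
cat /proc/sys/vm/dirty_expire_centisecs   # age at which dirty data must be written out
# With 64 GB of RAM, absolute byte limits are easier to reason about than ratios
# (needs root; setting *_bytes overrides the matching *_ratio):
#   sysctl -w vm.dirty_background_bytes=268435456   # start writeback at ~256 MiB dirty
#   sysctl -w vm.dirty_bytes=1073741824             # throttle writers at ~1 GiB dirty
```

Lowering `dirty_background_bytes` is what makes writeback start sooner instead of accumulating gigabytes in cache; persist the values in /etc/sysctl.conf if they help.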
[08:00] <djc_> Hi, setting up Ubuntu 14.04 server for the first time... should SSH keys be generated?
[08:15] <cpaelzer> djc_: what keys do you refer to?
[08:16] <cpaelzer> djc_: to create a key for yourself and how to place it https://help.ubuntu.com/community/SSH/OpenSSH/Keys
[08:16] <cpaelzer> djc_: did you mean this or something else?
[08:47] <cpaelzer> djc_: and in case you might have meant https://people.canonical.com/~ubuntu-security/cve/2015/CVE-2015-0285.html - no not an issue
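The key-creation flow from the guide cpaelzer links boils down to the following sketch (the /tmp path keeps it self-contained; "user@server" is a placeholder):

```shell
# Generate a key pair; -N '' means no passphrase (demo only -- use one in practice).
rm -f /tmp/demo_id_ed25519 /tmp/demo_id_ed25519.pub
ssh-keygen -t ed25519 -N '' -f /tmp/demo_id_ed25519 -q
# Install the public half into the server's ~/.ssh/authorized_keys:
#   ssh-copy-id -i /tmp/demo_id_ed25519.pub user@server
# then log in with:
#   ssh -i /tmp/demo_id_ed25519 user@server
```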
[08:50] <ejat> stgraber: are you here?
[08:52] <jamespage> ddellav, coreycb: I think the keystone unit test failures are due to an undeclared requirement for a newer oslo.db version
[08:53] <jamespage> there is a fix in 4.10 - 3277ef3 Capture DatabaseError for deadlock check
[08:53] <jamespage> that looks pertinent
[10:20] <cpaelzer> jamespage: did you have uncommitted changes that brought you to 1609846 ?
[10:21] <jamespage> ?
[10:21] <cpaelzer> I cloned and built against my 16.07 PPA but it breaks at
[10:21] <cpaelzer> make[3]: *** No rule to make target 'debian/python-openvswitch.install', needed by 'distdir'.  Stop.
[10:22] <cpaelzer> just wanted to test-build and see if I could help - but it seems I'm blocked on this before reaching the unit test failures you reported
[10:30] <cpaelzer> the file it's looking for got deleted by your last commit
[10:30] <cpaelzer> "d/rules,control: Add python3-openvswitch package."
[10:31] <cpaelzer> the question is - was it an accidental delete, or did you miss removing it from debian/automake.mk?
[10:31] <cpaelzer> because the latter still refers to it
[10:33] <cpaelzer> maybe it should have been created by your new call to python setup.py
[10:33] <cpaelzer> checking if there was an earlier error in the build log
[10:38] <cpaelzer> no, that seems to be the real install; removing the line in the automake file gets me going
[10:38] <cpaelzer> I'll continue that way for now
[10:38] <cpaelzer> later on we can discuss if it was right and if/how you want the commit back
[10:48] <cpaelzer> running unittests now, eager to see if I hit the same that you did jamespage
[10:48] <jamespage> cpaelzer, oh I might still have a local delta for that
[10:48] <jamespage> one sec
[10:49] <jamespage> cpaelzer, in fact two commits pushed
[10:53] <cpaelzer> :-/
[10:53] <cpaelzer> gotcha
[10:53] <cpaelzer> hehe
[11:05] <jamespage> ddellav, coreycb: I've updated cloud-archive-utils to use i386 schroots for precise and trusty targets, mimicking the behaviour of the LP builders.
[11:05] <jamespage> xenial will use amd64 still
[11:06] <jamespage> for arch all builds anyway...
[11:08] <jamespage> changed my mind - bad idea
[11:55] <jonah> Hi, has anyone used keepalived?
[12:12] <ikonia> I have used keepalived
[12:12] <ikonia> although there are more options now than when keepalived was king
[12:14] <jonah> ikonia: well I'm just trying to plan how to add failover to a current server - I found this guide: http://gcharriere.com/blog/?p=339
[12:14] <ikonia> ok ?
[12:15] <jonah> ikonia: which looks pretty awesome!! but as my system is already running I wasn't sure of the best way to set it up, since that guide starts with two blank machines. Can I just install keepalived on the current running Ubuntu server, do the virtual IP bit, update DNS/router to the virtual IP, and then worry about the second server later?
[12:15] <ikonia> keepalived is just an application daemon, thats it, nothing more
[12:15] <ikonia> you can install that onto a running host, no problem at all
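A minimal VRRP stanza for the existing box might look like the sketch below; the interface name, router id, priority, password, and virtual IP are all assumptions to adapt (written to /tmp here only to keep the sketch self-contained; the real file is /etc/keepalived/keepalived.conf):

```shell
cat > /tmp/keepalived.conf <<'EOF'
# Sketch of /etc/keepalived/keepalived.conf for the currently running server.
vrrp_instance VI_1 {
    state MASTER          # the second box, added later, would use state BACKUP
    interface eth0        # adjust to your NIC
    virtual_router_id 51  # must match on both nodes
    priority 150          # the BACKUP node gets a lower value, e.g. 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass s3cret  # placeholder
    }
    virtual_ipaddress {
        192.0.2.10/24     # the VIP you point DNS/router at
    }
}
EOF
```

With only one node running, the MASTER simply holds the VIP by itself; the failover behaviour arrives once the BACKUP peer exists.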
[12:17] <jonah> ikonia: that's cool. My other problem is how I can clone the whole server to the second box... is rsync OK for this, set to sync every 10 minutes or something?
[12:18] <ikonia> the whole server ?
[12:18] <ikonia> could you define the whole server please
[12:19] <jonah> ikonia: well this is my problem. I didn't want to start fresh with drbd or some block-level network RAID as my current server is already running. So I just need to sync it up... I can't clone the whole thing I guess, or I'd wipe my keepalived setup and slave settings. But I need to sync it so all the email, websites, mysql etc. go over to the failover and load balancer. Or would this just not work, as it isn't realtime? Maybe I can't use the load
[12:19] <jonah> balancing...?
[12:20] <ikonia> I think you need to look at that in a different way
[12:20] <ikonia> this is not the "whole server"
[12:20] <ikonia> this is just some content
[12:20] <ikonia> so for example, you can use mysql/maria replication for the database
[12:20] <ikonia> the webroot - sure, rsync
[12:20] <ikonia> email ? how are you storing it
[12:21] <ikonia> there are loads of things to look at this is not a two minute "just rsync everything"
[12:32] <RoyK> ikonia: what do you use to keep the disks in sync?
[12:32] <ikonia> RoyK: depends, I normally don't do it at a disk level, but tools like drbd can be useful for that
[12:33] <RoyK> I've used drbd for that - works well
[12:38] <jonah> ikonia: ok so say I've got rsync and mysql syncing set up and few other bits and then I get a hardware failure on the main server so it powers off. The second server picks up from where it left off with the most up-to-date files it has from 10mins ago or whatever. Then someone logs into the hosted website and uploads a file. Then I fix the first server and power it back on. Will that file that was upload then be lost (or email or whatever
[12:38] <jonah> changed on server 2)? Or does that sync back to the first server somehow? I know this all depends on how things are set up, but hypothetically, is that easy to set up or will it just not work very well?
[12:40] <jonah> ikonia: same for load balancing - can it even load balance at all if using rsync etc. with dynamic sites or email/webmail, or is load balancing just out of the question?
[12:40] <jonah> ikonia: I am looking to hire someone to set this up and help me with this stuff, but I'm just trying to understand or think through the best setup beforehand really.
[13:00] <ikonia> jonah: you're trying to do enterprise availability without an enterprise approach (e.g. using rsync in a one-way sync)
[13:01] <ikonia> I think you need to look at what you've got and what your actual, realistic goal is
[13:02] <jonah> ikonia: Well I'm not looking for anything too complex and don't need the best HA or anything. If a hardware failure happened I could live with some files being a little out of date, as hopefully it would be rare. Not trying to set any unachievable goals or anything...
[13:03] <jonah> ikonia: just with that guide saying load balancing was achievable: http://gcharriere.com/blog/?p=339 I wondered if it would work, despite running dynamic websites/an email server etc.
[13:04] <cpaelzer> jamespage: I have reproduced the 2214 unit test failure several times now
[13:05] <cpaelzer> jamespage: the test itself is one of the new OVN things
[13:05] <cpaelzer> jamespage: so new might mean it has issues - not sure
[13:05] <cpaelzer> the test does so many things that I still feel lost; I need to check what it actually does first
[13:05] <cpaelzer> jamespage: do you want me to report at least this test (the others seem transient) and set you on CC?
[13:09] <ikonia> jonah: the reality is you want a two way sync
[13:09] <ikonia> which means you have to build logic into your scripts to work out which one is the active node
[13:09] <ikonia> thats all
[13:09] <jonah> ikonia: ok so unison or something?
[13:15] <ikonia> jonah: however you think is best to handle it
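The "work out which one is the active node" logic ikonia mentions can key off whichever box currently holds the keepalived VIP; the VIP, path, and peer name below are assumptions:

```shell
# Only the node holding the virtual IP pushes; the standby stays quiet.
VIP=192.0.2.10      # the keepalived virtual_ipaddress (assumption)
if ip -o addr show | grep -q "inet $VIP"; then
    echo "active node: pushing"
    # rsync -az --delete /var/www/ peer:/var/www/
else
    echo "standby node: not pushing"
fi
```

Run from cron on both nodes, this gives a crude two-way arrangement without unison: whichever node is active pushes, and after a failover the roles (and sync direction) swap automatically.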
[13:25] <cpaelzer> jamespage: didn't hear from you - but I think (hope) it can't hurt to report that
[13:25] <cpaelzer> I got the same using the non-Debian way to build
[13:25] <jamespage> cpaelzer, +1 yeah that's the one I see failing reliably on i386
[13:26] <jamespage> on amd64 I saw the bfp failure but it's transient
[13:26] <cpaelzer> jamespage: I ran it through some loops and envs over lunchtime; 15 and 1149 are both transient - 15 on both, 1149 only on i686
[13:26] <cpaelzer> I just sent the mail out about 2214
[13:27] <cpaelzer> jamespage: I'm on vacation after next week, so I hope we get something uploadable working before then to make FF
[13:28] <cpaelzer> jamespage: otherwise I'll have to file a FFE on next Friday
[13:28] <jamespage> cpaelzer, agreed - I am as well
[13:28] <jamespage> cpaelzer, there is still no 2.6 branch :-(
[13:29] <cpaelzer> they have two major features being discussed in the scope of "please add before branching 2.6" that might stall it
[13:29] <cpaelzer> on the DPDK side of things the "oh this is broken" fixes are already starting to come up
[13:29] <cpaelzer> similarly on OVS I've seen a few new leak fixes
[13:30] <cpaelzer> this OVN test really is a gigantic oven - if you haven't written that test you feel lost and close to "well done"
[13:32] <derwood2> Heya again folks :)
[13:33] <cpaelzer> I just hope there is some upstream feedback leading us to the right place
[13:34] <derwood2> Can I ask a silly question here... MAAS 2.0: I have an install on an 80 GB HDD I'm doing right now inside a node, just as a standalone install. Can I create an .iso or something (e.g. using the dd command) and have MAAS 2.0 feed that out as the image after a PXE boot, so I can run the Blender network rendering image I am setting up right now?
[13:35] <cpaelzer> derwood2: https://maas.ubuntu.com/docs/os-support.html
[13:35] <derwood2> I have set this install up to autologin, then start Blender with the network rendering settings set, on a DHCP LAN... so I would like to know if I can just feed this image of the drive out to each node as and when I please using MAAS 2.0. Not sure if I'm asking the question in the right manner or syntax :D
[13:35] <cpaelzer> derwood2: your case is just less custom than it would be for a different OS
[13:36] <derwood2> Cheers buddy :D
[13:36] <cpaelzer> roaksoax: ^^ you might have a more sophisticated answer
[13:36] <cpaelzer> derwood2: but reading what you want to do, rather than how you want to do it -
[13:36] <derwood2> Cheers fellas, much respect :D
[13:36] <cpaelzer> derwood2: wouldn't you just feed some cloud-init config to a stock Ubuntu image via MAAS?
[13:37] <cpaelzer> derwood2: that could do the post-install setup you need
[13:37] <cpaelzer> derwood2: without you needing to build a custom image
[13:37] <derwood2> I just dont know, I'm still very new to this.. and this is the way I thought about going about it, anything new as in ideas would be awesome :D
[13:38] <cpaelzer> derwood2: https://maas.ubuntu.com/docs/development/preseeds.html
[13:38] <derwood2> Cheers for the link and answers fella :D Awesome as ever :d
[13:38] <cpaelzer> via that you can control how things are set up for you
[13:38] <cpaelzer> good luck
[13:39] <derwood2> :D
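The cloud-init approach cpaelzer suggests would take a user-data file along these lines; the package names come from the discussion, while the runcmd line is a hypothetical placeholder for however you wire up autologin and start the render client (written to /tmp here just to keep the sketch self-contained):

```shell
cat > /tmp/user-data <<'EOF'
#cloud-config
# Sketch of cloud-init user-data MAAS could feed to each node.
packages:
  - blender
  - x11vnc
runcmd:
  # placeholder: whatever enables autologin and starts blender/x11vnc
  - echo "configure autologin + start render client here"
EOF
```

MAAS applies this per deployment, so every node gets stock Xenial plus the same post-install setup, with no custom ISO to build or maintain.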
[13:47] <roaksoax> derwood2: not sure I follow what you want to do exactly, but by the looks of it, you want MAAS to install a machine (i.e. Ubuntu), and after the installation is finished you want to put something in the filesystem?
[13:48] <derwood2> yes, I would like to PXE boot each node, and the OS they will run is 16.4LTE with blender and x11VNC already running and auto logged in :D if that makes sense.
[13:49] <derwood2> 16.4.1LTE server edition, sorry
[13:50] <Pici> *LTS
[13:50] <roaksoax> derwood2: so, when you say OS, you mean you want to install Ubuntu Xenial with Blender and x11VNC
[13:50] <roaksoax> derwood2: so you are deploying stock ubuntu from MAAS
[13:50] <roaksoax> derwood2: you are not creating a custom ISO
[13:52] <derwood2> I would like to deploy ubuntu Xenial with Blender and x11VNC, yes :D but I assumed I had to make an .iso file to be fed to the nodes after PXE booting..
[19:54] <nopea> Hi folks, I just did a clean install of Ubuntu Server 12.04.5 and when I try to install Apache 2 I am getting 'missing dependencies' errors. How can I get LAMP on this VM?
[19:55] <sarnold> nopea: sudo apt-get update && sudo apt-get install lamp-server^
[19:56] <sarnold> (the ^ syntax asks apt to install a 'task selection', see e.g. https://help.ubuntu.com/community/Tasksel for information)
[19:56] <nopea> sarnold: yeah I tried that, but the same.
[19:56] <sarnold> nopea: can you pastebin your errors?
[19:56] <OerHeks> ok, let's continue here - hi sarnold
[19:56] <nopea> When I run apt-get update I get a bunch of errors about 'failed to fetch...'
[19:57] <nopea> Here is when I run apt-get update... https://drive.google.com/open?id=0B5QmcW_8DZ4MaTZHU3FhRGVrVGc
[19:58] <sarnold> nopea: check dmesg output for storage errors
[19:59] <nopea> sarnold: sorry can you tell me how to do that
[20:00] <sarnold> nopea: run "dmesg" and look for error messages..
[20:00] <sarnold> the storage errors tend to have a lot of {  } and "SENSE" in them :) heh
[20:00] <nopea> sarnold: thanks... looking - but the VM cuts half the screen off argh
[20:01] <sarnold> nopea: you can ssh in and use whatever decent terminal emulator you want that way
[20:01] <sarnold> I almost never interact with VM consoles, they're usually more annoying than ssh
[20:01] <sarnold> they do stupid things like steal mouse and keyboard, and they can't use the same select buffer in X11...
[20:01] <nopea> sarnold: that is the other issue... I can't even install OpenSSH - I get missing dependencies errors with that as well
[20:02] <sarnold> nopea: ugh
[20:04] <nopea> sarnold: https://drive.google.com/open?id=0B5QmcW_8DZ4MRU9la2l6VWpMSzg
[20:04] <sarnold> nopea: heh, how about that dmesg output?
[20:04] <nopea> it appears that the install did not install some libraries - or perhaps they are out of date?
[20:05] <sarnold> it's all the hash sum mismatches; apt won't install packages it can't authenticate
[20:05] <sarnold> and your package lists aren't authenticating
[20:05] <nopea> sarnold: dmsg... https://drive.google.com/open?id=0B5QmcW_8DZ4MSXVlUjFWakZqbUE
[20:06] <sarnold> that can happen if there are IO errors, and dmesg output would show that if there are any...
[20:06] <sarnold> alright looks boring enough
[20:07] <sarnold> nopea: try sudo rm /var/lib/apt/lists/partial/* ; sudo apt-get update
[20:08] <nopea> same mismatch errors
[20:08] <sarnold> nopea: are you using a proxy such as squid-deb-proxy or apt-cacher-ng? is someone _else_ running e.g. a transparent proxy that you might be using?
[20:09] <nopea> no, I don't think so. I just straight up installed this on an Oracle VM box
[20:13] <nopea> sarnold: I just tried the update again and I got no mismatch errors... I will try lamp server again
[20:14] <OerHeks> should nopea enable backports ?
[20:14] <sarnold> OerHeks: no.. one problem at a time :)
[20:14] <sarnold> nopea: awesome. That saves a huge amount of hassle.
[20:15] <nopea> looks like it is installing
[20:15] <sarnold> OerHeks: I don't recommend the backports repository, it feels vastly unloved these last few years
[20:15] <OerHeks> oh, missed that the update error is gone
[20:15] <nopea> looks like it is up - let me check
[20:15] <nopea> Apache is running
[20:15] <sarnold> that's more like it :)
[20:16] <nopea> mysql is running
[20:16] <nopea> woo hoo.... now the question is... what the @#%$^$%^ was going on.
[20:16] <sarnold> OerHeks: I think that if you need newer software than is in an LTS release, it'd probably be better to just grab a newer LTS release
[20:16] <nopea> As it is a VM I may be installing this again... don't want to have to jump through these hoops again
[20:16] <OerHeks> That would be logical indeed, sarnold
[20:17] <sarnold> nopea: APT enforces a path of trust -- e.g. the file http://us.archive.ubuntu.com/ubuntu/dists/precise/Release must have a valid signature in http://us.archive.ubuntu.com/ubuntu/dists/precise/Release.gpg
[20:17] <OerHeks> ok, have fun nopea
[20:17] <sarnold> nopea: the Release file includes a huge pile of hashes for all the other files
[20:17] <sarnold> nopea: e.g. the file http://us.archive.ubuntu.com/ubuntu/dists/precise/universe/binary-i386/Packages.bz2  (which reported a hash sum mismatch in your screenshot) has a hash listed in the Release file
[20:18] <sarnold> nopea: and when apt checked the downloaded file against the hash in the Release file, they didn't match, and apt refused to use it
[20:18] <nopea> so by removing the lists (and the hashes) I was able to get a match on the next update?
[20:18] <sarnold> yeah
[20:18] <sarnold> and if you've got a caching proxy somewhere in the middle, it might have cached bad versions
[20:18] <sarnold> or it might be serving stale versions
[20:19] <nopea> ok - I will have to remember that.  Not sure how the mismatches happened in the first place
[20:19] <sarnold> apt-cacher-ng had some hilarious bugs where it would store files with the wrong names....
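sarnold's Release-file mechanism in miniature: record a hash for an index file, then verify the "downloaded" copy against it (toy files under /tmp, not the real archive layout):

```shell
# Publisher side: the signed Release file lists a hash for each index file.
printf 'Package: demo\nVersion: 1.0\n' > /tmp/Packages
sha256sum /tmp/Packages | awk '{print $1}' > /tmp/Release.hash
# Client side: apt recomputes the hash of what it downloaded and compares.
actual=$(sha256sum /tmp/Packages | awk '{print $1}')
if [ "$actual" = "$(cat /tmp/Release.hash)" ]; then
    echo "hash OK"
else
    echo "Hash Sum mismatch"
fi
```

A proxy serving a stale or corrupted Packages file changes `actual` but not the signed expected hash, which is exactly the "Hash Sum mismatch" apt reported.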
[20:21] <sarnold> nopea: do note that 12.04 LTS will reach end of life in eight months; 14.04 LTS or 16.04 LTS have more time left in their support periods.
[20:21] <nopea> sarnold: OerHeks big thanks guys!
[20:21] <sarnold> nopea: have fun :)
[20:21] <nopea> I am going to see if I can get the other packages to install
[20:22] <sarnold> it should all be pretty smooth sailing now that your package lists are happy :)
[20:22] <nopea> sarnold: thanks for the info.  I am using 12.04 as that is what my rackspace cloud server is running, and I am trying to match my dev machine as close to it as possible
[20:22] <sarnold> nopea: good plan.
[20:22] <nopea> sarnold: I guess I could clone the cloud server... but I dont want to pay ;)
[20:29] <nopea> sarnold: do you think it could have been because my VM network was set to NAT... perhaps Bridged would have been better
[20:31] <sarnold> nopea: maybe, IF the NAT mode meant the VM thingy put a caching proxy in the middle..
[20:31] <nopea> sarnold: I will try another install and set it to bridge first - but of course I will not remove this install that is working now, even SSH ;)
[20:32] <sarnold> haha
[20:33] <sarnold> nopea: it might not be immediately reproducible with either networking type...
[20:33] <sarnold> it's possible to go years without seeing those errors
[20:33] <nopea> sarnold: true.
[20:44] <nopea> sarnold: yeah - the network setting had no effect.  On another install the problem was the same as before
[21:09] <riz0n> Hello friends. I just restored a system backup to a new computer, and when I boot, I have no eth0 interface. How can I reinstall networking in Ubuntu?
[21:23] <ikonia> riz0n: you don't re-install network
[21:23] <ikonia> you need to understand why it can't see your device or if it's been renamed
[21:23] <ikonia> copying system backups to a new machine is not a straight forward process for some parts
[21:26] <riz0n> ikonia: after doing ifconfig -a, I saw that the device was there, but under a new name (ens33), so I modified /etc/network/interfaces, then init 6; now all the bases are loaded and I'm running in for the home run! :)
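riz0n's fix, spelled out as a sketch; the name ens33 comes from the discussion above, while the dhcp stanza and the ifdown/ifup shortcut are assumptions to adapt:

```shell
# See what the kernel actually calls the NIC now:
ip -o link show            # or: ifconfig -a
# Then make /etc/network/interfaces match, e.g.:
#   auto ens33
#   iface ens33 inet dhcp
# and restart just the interface instead of rebooting (init 6):
#   sudo ifdown --force ens33 && sudo ifup ens33
```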
[21:27] <ikonia> excellent
[21:27] <riz0n> I feel like I'm starting to learn a thing or two about Linux ;)
[21:52] <YamakasY> oh no why is isc-dhcp such a pain in failover