[00:43] huh. After 30-some hours of data uploading between the old and new server, I went to check progress and got...
[00:43] @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
[00:44] Is my best solution to rip it down and try to rebuild the server more securely?
[00:45] owww
[00:45] whatever you do, don't type passwords into that thing until you've figured out what's going on
[00:45] it might be as easy as using sudo ssh by accident and getting an _ancient_ /root/.ssh/known_hosts file entry, or something similar to that
[00:45] or it might be that it's now someone else's computer and they're not very quiet about it
[05:01] is there any setting to determine how often Linux flushes the drive write cache to disk? While I have a UPS and 64G of RAM, it will read like 4-16GB of data (from network or other disk) before it begins to write to the disk; not sure why it waits that long.
[05:03] cncr04s: I think the sysctls labeled "dirty_" are probably most useful to you https://www.kernel.org/doc/Documentation/sysctl/vm.txt
[05:29] cncr04s: probably not a good idea if you want consistent data in case of a panic or similar
[05:37] RoyK: I think he wants to make it write more frequently :)
[05:38] sarnold: oh - the other way around :)
[05:38] yeah :)
[05:38] cncr04s: possibly ext4 writeback doing it
[08:00] Hi, setting up Ubuntu 14.04 server for the first time... should SSH keys be generated?
[08:15] djc_: what keys do you refer to?
[08:16] djc_: to create a key for yourself and place it, see https://help.ubuntu.com/community/SSH/OpenSSH/Keys
[08:16] djc_: did you mean this or something else?
[08:47] djc_: and in case you might have meant https://people.canonical.com/~ubuntu-security/cve/2015/CVE-2015-0285.html - no, not an issue
[08:50] stgraber: r u here?
[08:52] ddellav, coreycb: I think the keystone unit test failures are due to an undeclared requirement on a newer oslo.db version
[08:53] there is a fix in 4.10 - 3277ef3 Capture DatabaseError for deadlock check
[08:53] that looks pertinent
[10:20] jamespage: did you have uncommitted changes that brought you to 1609846?
[10:21] ?
[10:21] I cloned and built against my 16.07 PPA, but the build breaks at
[10:21] make[3]: *** No rule to make target 'debian/python-openvswitch.install', needed by 'distdir'. Stop.
[10:22] just wanted to test the build and see if I could help - but it seems I'm blocked at this before the unit test failures you reported
[10:30] the file it searches for got deleted by your last commit
[10:30] "d/rules,control: Add python3-openvswitch package."
[10:31] the question is - an accidental delete, or did you just miss removing it from debian/automake.mk?
[10:31] because the latter still refers to it
[10:33] maybe it should have been created by your new call to python setup.py
[10:33] checking if there was an earlier error in the build log
[10:38] no, that seems to be the real install; removing the line in the automake gets me going
[10:38] I'll continue that way for now
[10:38] later on we can discuss if it was right and if/how you want the commit back
[10:48] running unit tests now, eager to see if I hit the same that you did, jamespage
[10:48] cpaelzer, oh I might still have a local delta for that
[10:48] one sec
[10:49] cpaelzer, in fact two commits pushed
[10:53] :-/
[10:53] gotcha
[10:53] hehe
[11:05] ddellav, coreycb: I've updated cloud-archive-utils to use i386 schroots for precise and trusty targets, mimicking the behaviour for LP builders.
[11:05] xenial will use amd64 still
[11:06] for arch all builds anyway...
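A minimal sketch of safely handling the host-key warning from [00:43]-[00:45]; the hostname is a placeholder, and the real fingerprint must come from a trusted channel such as the server's own console:

    # See which key your client has recorded for the host, and check
    # root's file too to catch the accidental-sudo-ssh case
    ssh-keygen -F newserver.example.com
    sudo ssh-keygen -F newserver.example.com -f /root/.ssh/known_hosts
    # On the server's own console, print the real fingerprint to compare
    ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub
    # Only after the new key is confirmed legitimate, drop the stale entry
    ssh-keygen -R newserver.example.com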
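On a 64G box the default vm.dirty_background_ratio of 10% lets several GB of dirty pages pile up before background writeback starts, which fits the 4-16GB observed at [05:01]. A sketch of the byte-based knobs from the vm.txt doc linked above; the numbers are illustrative, not recommendations:

    # Inspect the current percentage-based thresholds
    sysctl vm.dirty_background_ratio vm.dirty_ratio
    # Switch to absolute limits: start background writeback at ~256MB dirty,
    # block writers at ~1GB (setting the _bytes knobs zeroes the _ratio ones)
    sudo sysctl -w vm.dirty_background_bytes=268435456
    sudo sysctl -w vm.dirty_bytes=1073741824
    # Persist across reboots
    echo 'vm.dirty_background_bytes = 268435456' | sudo tee /etc/sysctl.d/60-writeback.conf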
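For the key-generation question at [08:00], the page linked at [08:16] boils down to roughly this; user and host names are placeholders:

    # On your workstation: generate a keypair (use -t rsa -b 4096 if your
    # client's OpenSSH is older than 6.5)
    ssh-keygen -t ed25519 -C "you@workstation"
    # Install the public key into ~/.ssh/authorized_keys on the new server
    ssh-copy-id user@newserver.example.com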
[11:08] changed my mind - bad idea
[11:55] Hi, has anyone used keepalived?
[12:12] I have used keepalived
[12:12] although there are more options now than when keepalived was king
[12:14] ikonia: well I'm just trying to plan how to add failover to a current server - I found this guide: http://gcharriere.com/blog/?p=339
[12:14] ok ?
[12:15] ikonia: which looks pretty awesome!! but as my system is already running I wasn't sure of the best way to set it up, as that guide starts with two blank machines. Can I just install keepalived on the current running Ubuntu server, do the virtual IP bit, update DNS/router to the virtual IP, and then worry about the second server later?
[12:15] keepalived is just an application daemon, that's it, nothing more
[12:15] you can install that onto a running host, no problem at all
[12:17] ikonia: that's cool. My other problem is how I can clone the whole server to the second box... Is rsync OK for this, like set to sync every 10 minutes or something?
[12:18] the whole server?
[12:18] could you define the whole server please
[12:19] ikonia: well this is my problem. I didn't want to start fresh with drbd or some block-level network raid as my current server is already running. So I just need to sync it up... I can't clone the whole thing I guess or I'd wipe my keepalived setup and slave settings. But I need to sync it so all the email, websites, mysql etc goes over to the failover and load balancer. Or would this just not work as it isn't realtime? Maybe I can't use the load balancing...?
[12:20] I think you need to look at that in a different way
[12:20] this is not the "whole server"
[12:20] this is just some content
[12:20] so for example, you can use mysql/maria replication for the database
[12:20] the webroot - sure, rsync
[12:20] email? how are you storing it
[12:21] there are loads of things to look at; this is not a two-minute "just rsync everything"
[12:32] ikonia: what do you use to keep the disks in sync?
[12:32] RoyK: depends, I normally don't do it at a disk level, but tools like drbd can be useful for that
[12:33] I've used drbd for that - works well
[12:38] ikonia: ok, so say I've got rsync and mysql syncing set up and a few other bits, and then I get a hardware failure on the main server so it powers off. The second server picks up from where it left off with the most up-to-date files it has from 10 mins ago or whatever. Then someone logs into the hosted website and uploads a file. Then I fix the first server and power it back on. Will that file that was uploaded then be lost (or email or whatever changed on server 2)? Or does that sync back to the first server somehow? I know this all depends on how things are set up, but hypothetically is that easy to set up or will it just not work very well?
[12:40] ikonia: same for load balancing - can it even load balance at all if using rsync etc with dynamic sites or email/webmail, or is load balancing just out of the question?
[12:40] ikonia: I am looking to hire someone to set this up and help me with this stuff, but just trying to understand or think of the best setup beforehand, really.
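A minimal keepalived sketch for the "install on the running host, worry about the second box later" plan from [12:15]; interface, password, and addresses are placeholders:

    sudo apt-get install keepalived
    # /etc/keepalived/keepalived.conf on the existing (master) server
    sudo tee /etc/keepalived/keepalived.conf <<'EOF'
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 150        # give the future backup box a lower value, e.g. 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass s3cret
        }
        virtual_ipaddress {
            192.168.1.100/24    # the virtual IP you point DNS/router at
        }
    }
    EOF
    sudo service keepalived restart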
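And a sketch of the mysql/maria replication ikonia mentions at [12:20] for the database part, assuming a stock Ubuntu MySQL layout; hosts, users, and log coordinates are placeholders:

    # On the current server (the master): enable binary logging
    sudo tee /etc/mysql/conf.d/replication.cnf <<'EOF'
    [mysqld]
    server-id = 1
    log_bin   = /var/log/mysql/mysql-bin.log
    EOF
    sudo service mysql restart
    # On the second box (server-id = 2), point the slave at the master
    # from the mysql prompt:
    #   CHANGE MASTER TO MASTER_HOST='192.168.1.10', MASTER_USER='repl',
    #       MASTER_PASSWORD='...', MASTER_LOG_FILE='mysql-bin.000001',
    #       MASTER_LOG_POS=4;
    #   START SLAVE;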
[13:00] jonah: you're trying to do enterprise availability without an enterprise approach (eg: using rsync in a one-way sync)
[13:01] I think you need to look at what you've got and what your actual, realistic goal is
[13:02] ikonia: Well I'm not looking for anything too complex and don't need the best HA or anything. If a hardware failure happened I could live with some files being a little out of date, as hopefully it would be rare. Not trying to set any unachievable goals or anything...
[13:03] ikonia: just with that guide saying load balancing was achievable: http://gcharriere.com/blog/?p=339 I wondered if it would work, despite running dynamic websites/email server etc
[13:04] jamespage: I have reproduced the 2214 unit test failure several times now
[13:05] jamespage: the test itself is one of the new OVN things
[13:05] jamespage: so new might mean it has issues - not sure
[13:05] the test does so many things that I'm still feeling lost; I need to check what it actually does first
[13:05] jamespage: do you want me to report at least this test (the others seem transient) and set you on CC?
[13:09] jonah: the reality is you want a two-way sync
[13:09] which means you have to build logic into your scripts to work out which one is the active node
[13:09] that's all
[13:09] ikonia: ok, so unison or something?
[13:15] jonah: however you think best to handle it
[13:25] jamespage: didn't hear from you - but I think (hope) it can't hurt to report that
[13:25] I got the same using the non-Debian way to build
[13:25] cpaelzer, +1 yeah that's the one I see failing reliably on i386
[13:26] on amd64 I saw the bfp failure but it's transient
[13:26] jamespage: I ran it through some loops and envs over lunchtime; 15/1149 are both transient - 15 in both, 1149 only in i686
[13:26] I just sent the mail out about 2214
[13:27] jamespage: I'm on vacation after next week, so I hope we get something uploadable working before then to match FF
[13:28] jamespage: otherwise I'll have to file an FFE on next Friday
[13:28] cpaelzer, agreed - I am as well
[13:28] cpaelzer, there is still no 2.6 branch :-(
[13:29] they have two major features being discussed in the scope of "please add before branching 2.6" that might stall it
[13:29] on the DPDK side of things the "oh this is broken" fixes already start to come up
[13:29] similarly on OVS I've seen a few new leak fixes
[13:30] this OVN test really is a gigantic oven - if you haven't written that test you feel lost and close to "well done"
[13:32] Heya again folks :)
[13:33] I just hope there is some upstream feedback leading us to the right place
[13:34] Can I ask a silly question here.. MAAS 2.0: I have an install on an 80GB HDD I'm doing right now inside a node, just as a standalone install.. Can I create an .iso? or something like using the dd command, and have MAAS 2.0 feed that out as the image after a PXE boot, so I can run my blender network rendering image I am setting up right now?
[13:35] derwood2: https://maas.ubuntu.com/docs/os-support.html
[13:35] I have/am setting this install up to autologin, then start blender with the network rendering settings set on a DHCP LAN.. so I would like to know if I can just feed this image of the drive out to each node as and when I please using MAAS 2.0.. Not sure if I'm asking the question in the right manner or syntax :D
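Stepping back to the two-way sync ikonia suggests at [13:09]: a sketch using unison rather than one-way rsync; the path and host are placeholders, and the active-node check is left to your own scripting:

    sudo apt-get install unison
    # Reconcile changes in both directions over ssh; -batch answers prompts
    # automatically, -auto accepts non-conflicting changes
    unison /var/www ssh://backup.example.com//var/www -batch -auto
    # Run this from cron every 10 minutes, combined with your own logic to
    # work out which node currently holds the virtual IP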
[13:35] derwood2: you are just less custom than you would be for a different OS
[13:36] Cheers buddy :D
[13:36] roaksoax: ^^ you might have a more sophisticated answer
[13:36] derwood2: but reading what you want to do instead of how you want to do it
[13:36] Cheers fellas, much respect :D
[13:36] derwood2: wouldn't you just feed some cloud-init config to a usual ubuntu image via maas
[13:37] derwood2: that could make the post-install setup you need
[13:37] derwood2: without you needing to build a custom image
[13:37] I just don't know, I'm still very new to this.. and this is the way I thought about going about it; anything new as in ideas would be awesome :D
[13:38] derwood2: https://maas.ubuntu.com/docs/development/preseeds.html
[13:38] Cheers for the link and answers fella :D Awesome as ever :D
[13:38] via that you can control how things are set up for you
[13:38] good luck
[13:39] :D
[13:47] derwood2: not sure I follow what you want to do exactly, but from the looks of it, you want MAAS to install a machine (i.e., ubuntu), and after the installation is finished you want to put something in the filesystem?
[13:48] yes, I would like to PXE boot each node and the OS they will run is 16.4LTE with blender and x11VNC already running and auto logged in :D if that makes sense.
[13:49] 16.4.1LTE server edition, sorry
[13:50] *LTS
[13:50] derwood2: so, when you say OS, you mean you want to install ubuntu Xenial with Blender and x11VNC
[13:50] derwood2: so you are deploying stock ubuntu from MAAS
[13:50] derwood2: you are not creating a custom ISO
[13:52] I would like to deploy ubuntu Xenial with Blender and x11VNC, yes :D but I assumed I had to make an .iso file to be fed to the nodes after PXE booting..
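A hedged sketch of the cloud-init route suggested at [13:36], instead of capturing a dd image: user-data that MAAS can hand to each node at deploy time via the preseed mechanism linked at [13:38]. The package names are from the Xenial archive; the autostart step is site-specific and only a placeholder here:

    # Hypothetical user-data; see the preseeds doc for how to wire it in
    cat > user-data.yaml <<'EOF'
    #cloud-config
    packages:
      - blender
      - x11vnc
    runcmd:
      # placeholder: enable autologin and launch blender's network render here
      - echo "site-specific render setup goes here"
    EOF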
[19:54] Hi folks, I just did a clean install of Ubuntu Server 12.04.5 and when I try to install Apache 2 I am getting 'missing dependencies' errors. How can I get LAMP on this VM?
[19:55] nopea: sudo apt-get update && sudo apt-get install lamp-server^
[19:56] (the ^ syntax asks apt to install a 'task selection', see e.g. https://help.ubuntu.com/community/Tasksel for information)
[19:56] sarnold: yeah I tried that, but the same.
[19:56] nopea: can you pastebin your errors?
[19:56] oke, let's continue here, hi sarnold
[19:56] When I run apt-get update I get a bunch of errors about 'failed to fetch...'
[19:57] Here is what I get when I run apt-get update... https://drive.google.com/open?id=0B5QmcW_8DZ4MaTZHU3FhRGVrVGc
[19:58] nopea: check dmesg output for storage errors
[19:59] sarnold: sorry, can you tell me how to do that?
[20:00] nopea: run "dmesg" and look for error messages..
[20:00] the storage errors tend to have a lot of { } and "SENSE" in them :) heh
[20:00] sarnold: thanks... looking - but the VM cuts half the screen off, argh
[20:01] nopea: you can ssh in and use whatever decent terminal emulator you want that way
[20:01] I almost never interact with VM consoles, they're usually more annoying than ssh
[20:01] they do stupid things like steal mouse and keyboard, and they can't use the same select buffer in X11...
[20:01] sarnold: that is the other issue.... I can't even install OpenSSH - I get missing dependencies errors with that as well
[20:02] nopea: ugh
[20:04] sarnold: https://drive.google.com/open?id=0B5QmcW_8DZ4MRU9la2l6VWpMSzg
[20:04] nopea: heh, how about that dmesg output?
[20:04] it appears that the install did not install some libraries - or perhaps they are out of date?
[20:05] it's all the hash sum mismatches; apt won't install packages it can't authenticate
[20:05] and your package lists aren't authenticating
[20:05] sarnold: dmesg... https://drive.google.com/open?id=0B5QmcW_8DZ4MSXVlUjFWakZqbUE
[20:06] that can happen if there are IO errors, and dmesg output would show that if there are any...
[20:06] alright, looks boring enough
[20:07] nopea: try sudo rm /var/lib/apt/lists/partial/* ; sudo apt-get update
[20:08] same mismatch errors
[20:08] nopea: are you using a proxy such as squid-deb-proxy or apt-cacher-ng? is someone _else_ running e.g. a transparent proxy that you might be using?
[20:09] I don't think so. I just straight up installed this on an Oracle VM box
[20:13] sarnold: I just tried the update again and I got no mismatch errors... I will try lamp-server again
[20:14] should nopea enable backports?
[20:14] OerHeks: no.. one problem at a time :)
[20:14] nopea: awesome. That saves a huge amount of hassle.
[20:15] looks like it is installing
[20:15] OerHeks: I don't recommend the backports repository, it feels vastly unloved these last few years
[20:15] oh, missed that the update error is gone
[20:15] looks like it is up - let me check
[20:15] Apache is running
[20:15] that's more like it :)
[20:16] mysql is running
[20:16] woo hoo.... now the question is... what the @#%$^$%^ was going on.
[20:16] OerHeks: I think that if you need newer software than is in an LTS release, it'd probably be better to just grab a newer LTS release
[20:16] As it is a VM I may be installing this again... don't want to have to jump through these hoops again
[20:16] That would be logical indeed, sarnold
[20:17] nopea: APT enforces a path of trust -- e.g. the file http://us.archive.ubuntu.com/ubuntu/dists/precise/Release must have a valid signature in http://us.archive.ubuntu.com/ubuntu/dists/precise/Release.gpg
[20:17] oke, have fun nopea
[20:17] nopea: the Release file includes a huge pile of hashes for all the other files
[20:17] nopea: e.g. the file http://us.archive.ubuntu.com/ubuntu/dists/precise/universe/binary-i386/Packages.bz2 (which reported a hash sum mismatch in your screenshot) has a hash listed in the Release file
[20:18] nopea: and when apt checked the downloaded file against the hash in the Release file, they didn't match, and apt refused to use it
[20:18] so by removing the lists (and the hashes) I made them match on the next update?
[20:18] yeah
[20:18] and if you've got a caching proxy somewhere in the middle, it might have cached bad versions
[20:18] or it might be serving stale versions
[20:19] ok - I will have to remember that. Not sure how the mismatches happened in the first place
[20:19] apt-cacher-ng had some hilarious bugs when it would store files with the wrong names....
[20:21] nopea: do note that 12.04 LTS will reach end of life in eight months; 14.04 LTS or 16.04 LTS have more time left in their support periods.
[20:21] sarnold: OerHeks big thanks guys!
[20:21] nopea: have fun :)
[20:21] I am going to see if I can get the other packages to install
[20:22] it should all be pretty smooth sailing now that your package lists are happy :)
[20:22] sarnold: thanks for the info. I am using 12.04 as that is what my rackspace cloud server is running, and I am trying to match my dev machine as close to it as possible
[20:22] nopea: good plan.
[20:22] sarnold: I guess I could clone the cloud server... but I don't want to pay ;)
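The trust chain sarnold describes at [20:17]-[20:18] can be checked by hand; a sketch using the precise URLs from the discussion, assuming a stock Ubuntu archive keyring:

    cd "$(mktemp -d)"
    wget http://us.archive.ubuntu.com/ubuntu/dists/precise/Release
    wget http://us.archive.ubuntu.com/ubuntu/dists/precise/Release.gpg
    # The detached signature must verify against Ubuntu's archive keyring
    gpgv --keyring /usr/share/keyrings/ubuntu-archive-keyring.gpg Release.gpg Release
    # And each index must hash to the value listed in Release
    wget http://us.archive.ubuntu.com/ubuntu/dists/precise/universe/binary-i386/Packages.bz2
    sha256sum Packages.bz2
    grep 'universe/binary-i386/Packages.bz2$' Release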
[20:29] sarnold: do you think it could have been because my VM network was set to NAT... perhaps Bridged would have been better
[20:31] nopea: maybe, IF the NAT mode meant the VM thingy put a caching proxy in the middle..
[20:31] sarnold: I will try another install and set it to bridged first - but of course I will not remove this install that is working now, even SSH ;)
[20:32] haha
[20:33] nopea: it might not be immediately reproducible with either networking type...
[20:33] it's possible to go years without seeing those errors
[20:33] sarnold: true.
[20:44] sarnold: yeah - the network setting had no effect. On another install the problem was the same as before
[21:09] Hello friends. I just restored a system backup to a new computer, and when I boot, I have no eth0 interface. How can I reinstall networking in Ubuntu?
[21:23] riz0n: you don't re-install networking
[21:23] you need to understand why it can't see your device or if it's been renamed
[21:23] copying system backups to a new machine is not a straightforward process for some parts
[21:26] ikonia: after doing ifconfig -a, I saw that the device was there, but under a new name (ens33), so I modified /etc/network/interfaces, then init 6; now all the bases are loaded and I'm running in for the home run! :)
[21:27] excellent
[21:27] I feel like I'm starting to learn a thing or two about Linux ;)
[21:52] oh no, why is isc-dhcp such a pain in failover
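For the renamed-interface fix riz0n describes at [21:26], a minimal sketch assuming DHCP and the ens33 name seen in ifconfig -a:

    # See what the kernel actually called the NIC on the new hardware
    ip -o link show
    # Reference the new name in /etc/network/interfaces
    sudo tee -a /etc/network/interfaces <<'EOF'
    auto ens33
    iface ens33 inet dhcp
    EOF
    sudo ifup ens33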