=== Mobutils_ is now known as Mobutils
[02:43] SO
[02:43] I might be driving out to kentucky
[02:43] to pick up some candy
[02:43] rip
[02:43] wrong room
[03:09] sarnold: I think I've figured some more out about the local connection troubles I brought up earlier. My laptop is on a 5GHz radio, and I think it's treating it as a separate network.
=== wolflarson_ is now known as wolflarson
=== Mobutils_ is now known as Mobutils
=== mfisch is now known as Guest16197
=== JanC_ is now known as JanC
=== Guest69044 is now known as lordievader
[07:45] What way would I go to back up a LAMP server with various other services? tar and mysqldump + transfer via sftp?
=== Guest64506 is now known as ahasenack
=== ahasenack is now known as Guest79511
[07:52] Thumpxr, my go-to is usually rsync all live directories nightly to another drive, along with configs + a dump, then gzip + upload it offsite
[07:53] might not be the 'cleanest' but it's pretty effective, then you've got a live backup right there as well as another offsite/online
[08:07] cpaelzer: http://paste.ubuntu.com/23587332/
[08:12] Good morning.
[08:14] morning lordievader
[08:19] Hey monsune, how are you?
[08:23] lordievader hungry :)
=== disposable3 is now known as disposable2
=== Piper-Off is now known as Monthrect
[10:42] is there a vagrant libvirt box for ubuntu 16.04?
[12:21] Hello
[12:23] I am wondering what is the best way to migrate a MySQL production server to a new one without downtime?
[12:31] Genk1: depends on the database structure and use
[12:31] Genk1: Ubuntu (and Debian)'s packaging doesn't support that, but it might be possible to arrange it. Maybe a better place to ask would be an upstream venue?
[12:32] Stuff like Galera may be relevant, but I don't know much about that.
[12:32] Genk1: First of all, consider whether a short downtime is acceptable or not. Doing a migration with almost no downtime is a lot easier than absolutely no downtime.
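A rough sketch of the nightly rsync + dump routine suggested at 07:52 — every path, hostname and credential below is a placeholder, not something taken from the channel:

```sh
#!/bin/sh
# Hypothetical nightly LAMP backup: mirror live dirs, dump MySQL,
# then compress and push the archive offsite over sftp.
set -eu

STAMP=$(date +%F)
DEST=/backup/nightly        # second drive holding the live copy

# 1. keep a live mirror of the web root and service configs
rsync -a --delete /var/www/ "$DEST/www/"
rsync -a /etc/apache2/ "$DEST/etc/apache2/"

# 2. dump the databases (credentials assumed to be in ~/.my.cnf)
mysqldump --all-databases > "$DEST/db-$STAMP.sql"

# 3. compress and ship offsite
tar czf "/tmp/lamp-$STAMP.tar.gz" -C "$DEST" .
echo "put /tmp/lamp-$STAMP.tar.gz" | sftp backup@offsite.example.com
```

As noted at 07:53, this leaves a live on-disk copy as well as the offsite archive.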
[12:33] Genk1: Also, is it acceptable to have a window where the database is available, but only in read mode?
[12:36] thank you guys
[12:36] ikonia, I am using MyISAM as a storage engine
[12:37] andol, No, the server is in production, therefore updates and inserts are mandatory, no read-only mode
=== freyes__ is now known as freyes
[12:37] Genk1: so you're doing "writes"
[12:37] not read only
[12:37] and is it transactional
[12:38] ikonia, true
[12:38] Genk1: so your only real option is multi-master
[12:38] or have an outage
[12:38] Genk1: Without being able to provide you with the actual details I can tell you that this will be non-trivial. Especially if you want to do it in a safer manner.
[12:38] ikonia, that's what I was told to do
[12:39] ikonia, thank you for your help
[13:14] anyone of you guys/girls running your own enterprise metal? I'm facing some dilemmas. Like, how do you handle your infra services (dhcp/dns/ldap)? Do you run them in VMs, 1 service per VM?
[13:16] you probably could do dhcp and dns off or
[13:16] one server*
[13:17] errm
[13:17] i mean vm
[13:20] as i virtualize pretty much all servers, ip's are deployed by virtualizor; using a dns provider with ddos-protected and white-label name servers costs pretty much the same as running your own dns servers
[13:20] i pay 17.50 usd for 4 dedicated IP's and 50 dns zones, including rdns if needed
=== deadnull is now known as _deadnull
[13:26] spidernik84: why don't you just ask the actual question
[13:26] and in relation to ubuntu context
[13:28] maybe he wants all that on ubuntu, hence asking
[13:28] heh
[13:28] ikonia, that was the question.
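For the record, the usual near-zero-downtime pattern behind the advice above is dump-and-replicate, then a short cutover. A sketch follows — all hostnames, credentials and binlog coordinates are placeholders, and note that with MyISAM the initial dump needs a global lock, since there are no transactional snapshots:

```sh
# --- on the old (live) server: dump with binlog position recorded ---
mysqldump --all-databases --master-data=2 --lock-all-tables > full.sql

# --- on the new server: load, then replicate from the old one ---
mysql < full.sql
mysql -e "CHANGE MASTER TO
            MASTER_HOST='old-db.example.com',
            MASTER_USER='repl', MASTER_PASSWORD='...',
            MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456;
          START SLAVE;"

# watch replication lag until it reaches 0
mysql -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master

# cutover: stop writes on the old server, let the replica drain,
# promote it, and repoint the application at the new host.
```

The only unavoidable write outage is the cutover window at the end; multi-master (as suggested at 12:38) is what it takes to remove even that.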
[13:28] it's not a question
[13:28] It's not like the general rule "ask the question" applies every time
[13:29] I'm starting a conversation, to get opinions
[13:29] it does in this case
[13:29] to be honest, i wouldn't use ubuntu on such vm's with such services
[13:29] at least centos
[13:29] what other people do doesn't matter, as it's a use case for your setup, needs and infrastructure
[13:29] ikonia, it matters to me.
[13:29] it shouldn't
[13:29] certainly not without the context of your setup
[13:29] spidernik84: I wouldn't put any services *needed* by my virtualisation servers to start up inside VMs on them.
[13:30] rbasak: I'm through all your notes (not only the ones we talked about) and just pushed to the strongswan MP
[13:30] thats another thing
[13:30] resolver, dhcp, ldap might certainly count as those
[13:30] rbasak: ready for step 4 now (but busy in next room)
[13:30] rbasak: I might quickly come by to sync if you are there again
[13:30] should be able to get a cheap small server for those
[13:31] cpaelzer: he's still in the other meeting (afaik)
[13:31] We have an infra, and we put all our basic services in VMs, even dhcp. All servers use static addresses. VMs are defined in the puppet nodefile of each kvm host we have. This, naturally, is not sustainable.
[13:31] I am looking for a different architecture to support our infra. I'm looking into containers, LXD specifically
[13:31] thanks nacc
[13:32] LXC i think you mean
[13:32] cpaelzer: np
[13:32] nope!
LXD :)
[13:32] mhm
[13:32] it's the "successor", or something like that
[13:32] never really used it
[13:32] openvz, then kvm
[13:32] based on LXC, but providing a daemon with a restful API and better image packaging
[13:32] it's kinda sweet
[13:34] yeah, depending on a project run and managed by canonical is a risk
[13:34] a project that canonical contributes to, sure; canonical runs and owns, hmmm
[13:34] yeah
[13:34] yeah, well, I can't disagree
[13:35] Thankfully we are not talking about another unity, upstart or mir
[13:35] how do you know?
[13:35] well, i run all my important services on ubuntu
[13:35] looking into bsd
[13:35] damn, not ubuntu
[13:35] centos
[13:36] ubuntu is for not-important stuff really
[13:36] I disagree with that
[13:36] Ahah, you are in the right channel for such statements :D
[13:36] tbh i've had more problems with ubuntu in the last 3 months than with debian and centos in the last 3 years
[13:36] it certainly can be, if you go into it with the risks under control
[13:36] * ogra_ guesses wikipedia and netflix would disagree too ... or uber ...
[13:37] well, i'm not calling ubuntu a bad OS, spidernik84, just saying there are more stable OS's for things you need to work, not break
[13:37] ubuntu is a stable OS
[13:37] canonical is not a stable planner
[13:37] thats the risk
[13:37] I agree
[13:37] it's certainly a risk that can be managed
[13:37] yeah, could be that
[13:37] They introduced some major changes between 12.04/14.04/16.04
[13:37] every distro will introduce change
[13:38] or it's not developing
[13:38] changed directories for critical services, modified how networking worked, etc.
[13:38] but it's weird doing a simple apt-get update and upgrade and afterwards seeing things breaking
[13:38] heh
[13:38] oh yeah...
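For anyone following along, the LXD workflow being described (a daemon with a REST API, driven by the `lxc` client) looks like this on 16.04 — the image alias and container name are arbitrary examples:

```sh
lxd init                      # one-time setup: storage pool, network bridge
lxc launch ubuntu:16.04 web1  # fetch the image, create and start a container
lxc list                      # running containers, their state and IPs
lxc exec web1 -- bash         # get a shell inside the container
lxc stop web1 && lxc delete web1
```

The image handling (the `ubuntu:16.04` remote alias) and the daemon-backed CLI are the main things LXD layers on top of plain LXC.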
[13:38] never happened on centos, won't say 100% about debian
[13:38] binia: it won't break if you manage your system
[13:38] i do manage my system :)
[13:38] then it wouldn't break
[13:38] my network even :)
[13:39] gotta admit, the servers that run ubuntu also have 3rd-party software that updates every week
[13:39] sometimes they might push some bug by mistake
[13:39] not sure if that's what happened last time but shit went crazy :D
[13:40] binia: so again - ubuntu will not break if you manage your system
[13:40] binia: please don't swear, there isn't a need for it
[13:40] also sounds like FUD, since you're not running 'ubuntu' then
[13:40] dont spank me for not trusting you :D
[13:40] sorrry ikonia wont swear
[13:40] no problem
[13:40] sorry*
[13:41] but like im saying, i sue ubuntu for some things but prefer to use centos if i can really
[13:41] use*
[13:41] that damn keyboard
[13:41] omg!
[13:41] check batteries, full load
[13:41] yet seems like it's lacking powah
[13:44] It's a tradeoff. I personally did not enjoy their introduction of the dns resolver
[13:45] but I took time to "understand", and started using it as they expected
[13:45] some of my colleagues were not as understanding and started ripping packages apart. Now that is something you don't do with ubuntu
[13:47] you don't do it with any distro
[13:47] thats nothing to do with ubuntu
[13:47] thats to do with your colleagues
[13:48] I've not had an issue with Ubuntu LTS, aside from someone trying to dist-upgrade to 14.04 from 12.04 when the 12.04 system hadn't seen updates since 12.04 was released.
[13:49] ikonia, don't tell me...
[13:51] then stop talking about ubuntu as if it behaves differently than other distros
[13:52] ikonia, man, did you fall out of bed this morning? Are you always that aggressive? :) I mean't to say "tell me about it..."
[13:52] s/mean't/meant
[13:52] "now that is something you don't do with ubuntu"
[13:52] yes yes ok
[13:52] as if it's something you do with other distros, but you don't with ubuntu because it has problems
[13:52] no need to argue about everything, ok?
[13:52] no need to make false statements
[13:53] ok?
[13:53] oh ffs get a life
[13:53] you want a discussion, but you make incorrect statements
[13:53] I'm with ikonia on this one. Distro wars are annoying. Aside from the package manager, they're all the same.
[13:53] then get upset when someone calls it out
[13:54] I am pro ubuntu, that was not a way to start a distro war
[13:54] you missed the point
[13:54] because you didn't make a point
[13:54] I use it everywhere
[13:54] ....and?
[13:54] and defended it on many occasions, so you're off target
[13:55] I'm not targeting anything
=== Guest16197 is now known as mfisch
=== mfisch is now known as Guest73037
=== Guest73037 is now known as mfisch
=== mfisch is now known as Guest13281
=== Guest13281 is now known as mfisch
=== mfisch is now known as Guest40664
=== Guest40664 is now known as mfisch
=== the_ktosiek is now known as ktosiek
[15:00] hello guys
[15:00] i am here with a weird thing at my smart host
[15:00] http://serverfault.com/questions/819032/unable-to-redirect-mail-from-outside-domain-to-outside-domain
[15:01] anyone?
[15:06] I have a smart host running on Mac OS X 10.9.5 with Server App 3, and I have noticed that I am only able to redirect mail inside my smart host (Outlook, Mail app and roundcubemail installed on this server). For example, I am only able to send mail from user1@domainX.pt to user2@domainX.pt, where I have a redirection to user1@domainY.pt; if I try to send mail from user1@domainY.pt to user1@domainX.pt, which is redirecting mail,
[15:06] i will get this message in mail.log:
[15:07] Dec 6 14:37:12 remote.domainX.pt postfix/smtp[28504]: 0B8BD259F57: to=, orig_to=, relay=mail.domainX.pt[]:25, delay=0.1, delays=0/0.01/0.07/0.02, dsn=5.0.0, status=bounced (host mail.domainX.pt[] said: 550-Verification failed for 550-No Such User Here 550 Sender verify failed (in reply to RCPT TO command))
[15:07] From what it seems, instead of using orig_to= I am getting orig_to=, and my remote mail server gives that response. I am using domainX.pt instead of remote.domainX.pt, which I am not using at all. My mail server remote.domainX.pt is connected to my remote mail server mail.domainX.pt. Does anyone know how I can solve this?
[15:08] mail_version = 2.9.4
[16:06] coreycb: i think alembic was a bit too old for neutron... ci should be fine now (i hope)
[16:07] zul, ok
[16:07] coreycb zul have you tried installing horizon in zesty yet? I've tried a few ways and even when using what's in main i get an invalid syntax error during collect & compress: http://paste.ubuntu.com/23589188/
[16:08] ddellav: corey has
[16:13] ddellav, hrm.. that's not good. i've only tested on xenial-ocata so far
[16:14] ddellav, this looks odd though: /home/david/.local/lib/python2.7/site-packages/eventlet/__init__.py
[16:17] coreycb hmm, yea, that's my venv. I'll disable it and try again
[16:17] hello everyone, i successfully installed openstack nova-lxd using openstack; what is the best way to shut down now?
[16:18] zul, ^
[16:18] coreycb same error, this time it's just /usr/local/lib/python2.7 heh
[16:18] ddellav, i'd recommend using a fresh install
[16:18] hasenov: what do you mean shutdown?
[16:18] and then when i start up my pc what is the best way to start it all back up?
[16:19] i mean i have all these containers running
[16:19] coreycb alright, i'll spin up a vm on serverstack and try it
[16:20] but idk if there is a juju command or whatnot to shut down and start up
[16:20] hasenov: when you shutdown the machine the containers should come back up
[16:20] rockstar: ^^^
[16:22] so i can just shut down the host machine with no problem, and then when i start my host back up just issue "lxc start" on all n containers, right?
[16:22] like there is no requirement on something needing to start up first
[16:22] hasenov: correct
[16:26] coreycb: you are doing debhelper stuff again?
[16:28] zul, yeah, did something break?
=== PaulW2U_ is now known as PaulW2U
[16:29] coreycb: http://10.245.168.2:8080/view/Ocata/job/xenial_ocata_nova-lxd/32/
=== iberezovskiy is now known as iberezovskiy|off
[16:52] hello, another question: how do i figure out which node is the nova compute node?
[16:52] for the web ui it looks like i can only spawn vm instances and not containers, correct?
[16:53] looks like if i want to spawn a lxd container i need to go into the compute node and issue "nova boot --image=trusty --flavor=m1.tiny my-first-openstack-lxd-container"
[17:03] hasenov: nova doesn't know you're actually firing up a container. It sees them all as "instances". Commonly, those instances are vms, but in the nova-lxd case, they're containers.
[17:05] zul, alright, i need to try some debhelper backport testing in a ppa
[17:05] So you can use horizon to fire up "instances" of nova-lxd that are containers.
[17:05] But the image has to be a supported format.
[17:15] anyone have any advice on why an intel I340 would randomly not work anymore?
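A sketch of the nova-lxd launch flow described above — the image and flavor names are the examples from the channel, and the `glance`/`nova` CLI calls assume admin credentials are already sourced into the environment:

```sh
# nova sees everything as an "instance"; with nova-lxd the instance
# happens to be a container. The image must be in a format LXD
# understands (a root tarball/squashfs image, not qcow2).
glance image-list     # confirm the 'trusty' container image is registered
nova boot --image=trusty --flavor=m1.tiny my-first-openstack-lxd-container
nova list             # the container shows up as a normal instance
nova hypervisor-list  # answers "which node is the compute node?"
```

The same applies through horizon: launching an "instance" there spawns a container, as long as the chosen image is in a supported format.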
=== topi` is now known as topi
[17:16] i'm running 16.04 and just pushed the 4.4.0-53 update
=== topi is now known as topi`
[17:46] coreycb: fyi https://bugs.launchpad.net/ubuntu/+source/ceilometer/+bug/1647805
[17:46] Launchpad bug 1647805 in ceilometer (Ubuntu) "Ceilometer agent fails to start" [Undecided,New]
=== Monthrect is now known as Piper-Off
[18:48] how do I extend this /boot partition? http://picpaste.com/pics/partitions-j2Tixte5.1481049709.png .. short of backing up and rebuilding (posted in #ubuntu as well)
=== severion is now known as v1k0d3n_
=== Mobutils_ is now known as Mobutils
[19:52] axisys, I'm no expert, but why extend it? do an apt-get autoremove, which will probably remove a bunch of kernel packages or whatever and free up a bunch of space in that partition
[19:59] axisys, had that exact same issue this am while patching
[19:59] easy fix
=== Mobutils_ is now known as Mobutils
[20:06] i am not sure why my /boot partition is small.. I bump into this a lot and autoremove is not always enough
[20:07] I need to at least up it to 400M
[20:08] These lines are part of a vagrant packer script to build ubuntu boxes: I suspect that they randomly leave something locked so that the next "apt install" blocks forever: https://github.com/boxcutter/ubuntu/blob/master/script/update.sh#L20-L23 What might that lock be?
[20:11] nedbat: do you scrape the output of the apt-get -y dist-upgrade command?
[20:11] wait
[20:11] what does reboot ; sleep 60 do? :)
[20:12] sarnold: i didn't write this script. i was confused by that also.
[20:12] sarnold: packer continues on from there, without a 60-second pause.
[20:12] iirc the 'bash -e' means a failure in apt-get dist-upgrade will cause the script to abort
[20:12] sarnold: is there a lock that would make "apt install" block forever? Googling around, I see messages about "could not get lock"
[21:19] if you had an "apt install" command that blocked forever, what would you look for as the cause
[21:20] ?
[21:21] nedbat: can you pastebin strace output of the 'apt install' that's blocking?
[21:25] tarpman: that's a good idea, i will try that next time it sticks.
[21:28] tarpman: (these are in vagrant packer scripts, and it's only about 50% of the time that it gets stuck)
[21:47] zul, debhelper is fixed up for xenial-ocata
[21:57] coreycb: ok, you going to kick off all those rebuilds right
[21:57] zul, well, everything that was failing is successful now
[21:58] awesome
[22:24] guys, how can i check whether the root account is locked?
[22:46] does anyone here look after the vagrant images?
[22:47] i do to a certain extent
[22:47] what's up?
[22:51] stomplee: hey, wondering if you've seen lp #1569237, where the default username of the box is ubuntu, rather than vagrant (which is what vagrant expects)
[22:51] Launchpad bug 1569237 in cloud-images "vagrant xenial box is not provided with vagrant/vagrant username and password" [Undecided,New] https://launchpad.net/bugs/1569237
[22:52] nope, haven't run across that one. if necessary i'd just pull down a working one and repackage it as my own box to work around the issue instead of having to wait
[22:52] i currently use yakkety and it works just fine
[22:53] you guys stuck using vagrant ssh then?
[22:53] cuz the username shouldn't really be a big issue i would think
[22:53] ok, but... considering 16.04 is the LTS, my guess is that "works on yakkety" isn't a good resolution
[22:53] i used xenial before this without issue
[22:54] why is the different username tripping you up?
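On the earlier apt-lock question: a few commands for finding who holds the package-management locks when `apt install` hangs (tool availability varies; `fuser` comes from the psmisc package, and the "daily job" guess is a common suspect on freshly booted images, not something confirmed in the channel):

```sh
# who currently holds the dpkg/apt lock files? (run as root)
fuser -v /var/lib/dpkg/lock /var/lib/apt/lists/lock \
         /var/cache/apt/archives/lock

# alternatively, list open handles on the dpkg lock
lsof /var/lib/dpkg/lock

# on a freshly-booted image, apt's scheduled jobs or
# unattended-upgrades are frequent lock holders
ps aux | grep -E '[a]pt|[d]pkg|[u]nattended'
```

Combined with strace of the stuck `apt install` (as suggested at 21:21), this usually identifies the competing process.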
[22:54] out of the box, vagrant expects the user to be "vagrant" - https://www.vagrantup.com/docs/boxes/base.html#quot-vagrant-quot-user
[22:55] yes, you can change the username to ubuntu in your Vagrantfile, but there is a lot of code out there that expects the username to be "vagrant"
[22:56] you could also spin up the image, make the necessary changes to make the built-in user be vagrant instead of ubuntu, and repackage it as well
[22:56] though it is a pain in the butt
[22:57] i mean, i agree i can do those things. it's just that the box known as 'ubuntu/xenial64' has some officialness about it, and it would be nice to make it work out of the box with vagrant
[22:58] the box known as 'achiang/xenial64' doesn't quite have the same ring
[22:58] lol
[22:59] why would you want code to reference the vagrant user in the first place though
[22:59] better to just run stuff as root and kick off some script to provision a service account so this issue doesn't mess up the pipeline
[23:00] to me anyways, i'm just a noob in this area though so I could be missing something glaring
[23:00] i agree it's not great, but that is the default, and people have built out lots of provisioning scripts based on this assumption
[23:01] stomplee: i am happy to submit a patch, but i don't know who actually maintains `ubuntu/xenial64`, hence my asking on irc
[23:01] it's canonical themselves
[23:02] but vagrant recommends some other box, i forget by whom though
[23:07] * achiang randomly pings jcastro ;)
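Until LP #1569237 is resolved, the Vagrantfile-side workaround mentioned at 22:55 looks roughly like this (`config.ssh.username` is the relevant setting):

```ruby
# Vagrantfile workaround for LP #1569237: the ubuntu/xenial64 box
# ships with an "ubuntu" user, so tell vagrant to ssh in as that
# user instead of the conventional "vagrant" one.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.ssh.username = "ubuntu"
end
```

This fixes `vagrant ssh` and built-in provisioners, but, as noted above, it does not help third-party scripts that hard-code the "vagrant" user; those still need the box repackaged with the expected user.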