[00:10] <Erick3k> yep
[00:10] <Erick3k> sounds like the problem is there
[00:11] <Erick3k> am testing with the kvm image
[00:29] <drab> Erick3k: hi, how are you setting up kvm?
[00:29] <drab> I'm needing KVM for some stuff, used to containers, and having a hell of a time getting something simple going...
[02:19] <renatosilva> please help, there are some errors during upgrade, how do I fix them? http://vpaste.net/y030l
[02:27] <fishcooker> does your /boot partition have enough space, renatosilva?
[02:27] <fishcooker> please send the output of your $ uname -a
[02:28] <renatosilva> Linux <nodename> 2.6.32-042stab120.20 #1 SMP Fri Mar 10 16:52:50 MSK 2017 x86_64 x86_64 x86_64 GNU/Linux
[02:29] <renatosilva> I was trying to upgrade when libc complained about old kernel, I'm trying to upgrade the kernel and getting the above errors
[02:32] <stgraber> that paste won't load for me for some reason, but the kernel you're running isn't supported by any Ubuntu release and indeed isn't supported by any modern version of the C library. Your only option is to run a more recent kernel.
[02:33] <stgraber> this kernel looks like it could be a RedHat kernel as used for an OpenVZ based VPS or something similar to that, certainly not an official Ubuntu kernel
[02:34] <renatosilva> stgraber: https://pastebin.com/raw/zuRnfgN9
[02:34] <fishcooker> how about listing your linux-generic package: please paste the output of $ dpkg-query -l linux-generic, renatosilva?
[02:34] <stgraber> I'd expect Ubuntu 12.04 userspace to run fine on such a kernel given that we had to support upgrading from Ubuntu 10.04 to 12.04 (and 10.04 was using a 2.6.32) but upgrading to anything after 12.04 would almost certainly fail
[02:35] <stgraber> renatosilva: is that a VPS?
[02:35] <fishcooker> the last time i upgraded the kernel... i just did $ apt-get upgrade; apt-get install linux-generic then rebooted
[02:36] <stgraber> renatosilva: if so, can you post the output of "ls -lh /proc/user_beancounters"
[02:36] <stgraber> renatosilva: if it's an OpenVZ VPS, you can't upgrade the kernel as containers run on the host's kernel, making the kernel you're running up to your provider, not to you
[02:37] <renatosilva> fishcooker: dpkg-query -l linux-generic => dpkg-query: no packages found matching linux-generic
[02:37] <renatosilva> stgraber: yes that's a vps
[02:38] <fishcooker> then the stgraber hypothesis is right
[02:38] <renatosilva> fwiw this is what I tried to update the kernel: apt-get install --install-recommends linux-generic-hwe-16.04
[02:39] <stgraber> renatosilva: your VPS is a container, containers can't run their own kernel. Even if you succeeded in installing any of the Ubuntu kernels and a bootloader, they'd never run
[02:39] <renatosilva> stgraber:  ls -lh /proc/user_beancounters => -r-------- 1 root root 0 Apr  2 22:39 /proc/user_beancounters
[02:40] <stgraber> renatosilva: ok, that output confirms that your container is OpenVZ based (which makes sense given the host kernel)
[02:40] <stgraber> renatosilva: in such an environment you should stick to Ubuntu 12.04, anything more recent than that is unlikely to be compatible with the kernel your hosting provider uses
[02:41] <stgraber> renatosilva: which is to say, if you need to move to a version of Ubuntu that won't be unsupported in a few weeks, you may need to move to another hosting provider that's running a less outdated version of the kernel
[02:41] <renatosilva> not sure if my company uses such OpenVZ thing
[02:41] <stgraber> the commercial version would be called Virtuozo
[02:42] <stgraber> virtuozzo*
[02:42] <stgraber> renatosilva: and given that /proc/user_beancounters only exists inside OpenVZ (or Virtuozzo) containers, you are definitely using it
[02:43] <stgraber> that file isn't part of the normal Linux kernel, only kernels built with the OpenVZ/Virtuozzo support patch will have that file
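A quick way to run stgraber's check yourself (a sketch; portable to any Linux system):

```shell
# /proc/user_beancounters only exists under OpenVZ/Virtuozzo-patched
# kernels, so its presence identifies that kind of environment.
if [ -e /proc/user_beancounters ]; then
    echo "OpenVZ/Virtuozzo environment detected"
else
    echo "not an OpenVZ/Virtuozzo environment"
fi
```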
[02:43] <drab> stgraber: are there any plans for lxd to integrate with say virt-manager or something?
[02:43] <renatosilva> 12.04 is pretty old and its support is gone this month :(
[02:44] <drab> I need something to give ppl to manage their containers and virt-manager would be ideal since they are all vmware/vbox users
[02:45] <stgraber> drab: we definitely don't have any plans to integrate with libvirt. There are a couple of attempts at a web frontend for LXD around but there again we're quite happy with those being external projects.
[02:45] <renatosilva> stgraber, fishcooker: ok so how do you think I got into this situation? do you think I have been dist-upgrading since 12.04?
[02:45] <drab> stgraber: fair enough, thanks
[02:45] <stgraber> drab: the slightly overkill solution would be to run openstack with nova-lxd, which would then give you access to all the openstack tools and the web frontend, but that's not exactly a lightweight solution :)
[02:46]  * renatosilva afraid of needing to reinstall the whole thing
[02:46] <stgraber> renatosilva: your paste suggests your container is at least partly on Ubuntu 14.04 as it references a number of packages which don't exist on Ubuntu 12.04
[02:48] <stgraber> renatosilva: it looks like you have a few options: 1) reinstall on Ubuntu 12.04 and continue running it after its end of support, 2) move that server somewhere else, physical machine or VM or container on a host that's not outdated, 3) figure out how to get the host upgraded to a more recent kernel
[02:48] <renatosilva> for now how can I revert this command? apt-get install --install-recommends linux-generic-hwe-16.04
[02:48] <renatosilva> I don't know the current state of the system
[02:49] <stgraber> renatosilva: you can remove all those linux-generic and linux-image packages, since you're in a container, they're not used anyway
[02:49] <stgraber> renatosilva: same goes for grub
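A sketch of that cleanup (package name patterns are typical examples, not taken from renatosilva's paste):

```shell
# Inside a container the kernel and bootloader are never used, so the
# corresponding packages can go. First list what is installed:
command -v dpkg >/dev/null \
    && dpkg -l 2>/dev/null | awk '/^ii +(linux-(image|generic)|grub)/{print $2}' \
    || echo "dpkg not available on this system"
# then remove them interactively, e.g.:
#   sudo apt-get purge --auto-remove 'linux-image-*' 'linux-generic*' 'grub*'
```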
[02:49] <renatosilva> so 12.04 runs kernel 2.6? cause it looks pretty old
[02:50] <stgraber> renatosilva: no it doesn't, Ubuntu 12.04 runs a 3.2 kernel
[02:50] <stgraber> renatosilva: again, you are in a container. containers don't run their own kernel, they use the host's kernel.
[02:50] <stgraber> renatosilva: your host is most likely a Red Hat 6 system running OpenVZ/Virtuozzo which came with a 2.6.32 kernel by default
[02:53] <renatosilva> so they created an odd installation of 12.04 with a kernel older than 3.2? I see
[02:53] <renatosilva> I think I'll need to reinstall the whole thing from scratch
[02:54] <renatosilva> it makes no sense to allow a dist-upgrade without any kind of notice, they should add something to avoid people struggling with it like me
[02:55] <renatosilva> stgraber: I *think* I have removed those packages, I will reboot and hope it's just ok
[02:55] <renatosilva> (although the libc will be angry)
[02:57] <drab> anybody aware of any good docs on setting up kvm on 16.04 ? all I'm finding is old and/or broken
[02:57] <drab> ubuntu-vm-builder fails so that's not an option
[02:57] <lynorian> use libvirt
[02:59] <drab> I tried, that seemed to open a whole new can of worms, a large number of xml files and a whole new set of terminology, but maybe I've been too hasty
[02:59] <drab> I'm 99% containers and happy with it, but I need a kvm instance or two for some zfs stuff
[02:59] <renatosilva> ok rebooted successfully, but the apt-get update output is so small now, is this normal? it used to be much larger https://pastebin.com/raw/UsjZW8gq
[02:59] <drab> and I don't really want to invest in libvirt since there's no compatibility/future with lxc (although still waiting for a response from their devs on that)
[03:00] <drab> but docs and stuff on lxc and libvirt are at best just not there, it seems to cater 99% to kvm
[03:04] <renatosilva> is there any way to find out if the current release has been installed from scratch or if it originated from a dist-upgrade?
[03:28] <drab> stgraber: any chance you're still around? I'm reconsidering that comment you made on kvm inside a container
[03:28] <drab> at least it would allow for some clean experimentation
[03:28] <renatosilva> it seems my vps company is really using kernel 2.6 on ubuntu 16.04, that's a shame!
[03:29] <renatosilva> I think I need to open a ticket for them to address the problem but... how can such a large base of users not have complained about it, no broken servers? weird
[03:30] <renatosilva> thanks anyway stgraber, fishcooker!
[05:42] <cpaelzer> drab: uvtools is what you need
[05:42] <cpaelzer> drab: for the easy way to a kvm
[05:43] <cpaelzer> drab: getting a simple guest comes down to
[05:43] <cpaelzer> 1. uvt-simplestreams-libvirt sync --source http://cloud-images.ubuntu.com/daily arch=amd64 label=daily release=xenial
[05:43] <cpaelzer> 2. uvt-kvm create --password=ubuntu xenial-testguest release=xenial arch=amd64 label=daily
[05:45] <cpaelzer> drab: and for kvm in lxd stgraber has a post somewhere which surely is better
[05:45] <cpaelzer> drab: I use http://paste.ubuntu.com/24304999/
[05:46] <cpaelzer> drab: which I combine with the default template when launching containers by tailing with --profile default --profile kvm
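cpaelzer's pasted profile isn't reproduced here; as a rough sketch, a KVM-enabling LXD profile along those lines (device name and keys are assumptions) could look like this, applied with `lxc profile create kvm` and `lxc profile edit kvm < kvm-profile.yaml`:

```shell
# Write a hypothetical LXD profile that exposes /dev/kvm to a container;
# it is only written to a local file here, not applied.
cat > kvm-profile.yaml <<'EOF'
config: {}
description: expose /dev/kvm inside the container
devices:
  kvm:
    path: /dev/kvm
    type: unix-char
EOF
```

Launching would then combine it with the default profile as cpaelzer describes, e.g. `lxc launch ubuntu:16.04 vmhost --profile default --profile kvm`.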
[06:21] <ishaved4this> hey guys, Is anyone willing to help someone with some basic stuff out real quick?
[06:25] <cpaelzer> If you had just asked you'd have gotten an answer or not, but this way you wait twice, ishaved4this
[06:25] <ishaved4this> alright then. Just didn't want to bother anybody
[06:26] <cpaelzer> general IRC rule - just ask, you'll usually be helped or redirected - if neither happens everyone is asleep or you are so off that everybody just goes ??? but that is rare
[06:26] <ishaved4this> haha my bad. I don't really use IRC too much anymore
[06:27] <ishaved4this> anyway, my problem is with permission issues involving plex and transmission reading and writing to my job and 2 other external HDDs
[06:33] <ishaved4this> any takers?
[06:40] <cpaelzer> ishaved4this: people won't commit to solving it in advance; just ask the question itself - what is failing?
[06:41] <lordievader> Good morning
[06:43] <ishaved4this> good morning. What is failing is that I'm still fairly new to using ubuntu server, and transmission doesn't have read/write access to my external drives. now I went against my better judgement and followed a youtube tutorial typing in sudo gpasswd
[06:43] <ishaved4this> -a plugdev & -a plex sudo & -a plex (USER), but I know that's super sloppy.
[06:43] <lordievader> Why not fix the permission issues?
[06:44] <lordievader> I.e. give debian-transmission rw access to the download dir.
[06:44] <ishaved4this> well that's what I'm here for, I'm not sure how
[06:44] <lordievader> acl's.
[06:44] <ishaved4this> I also can't remember how to mount my drives. I'm basically a noob when it comes to linux
[06:46] <ishaved4this> acl?
[06:47] <lordievader> !acl
[06:47] <lordievader> ishaved4this: https://help.ubuntu.com/community/FilePermissionsACLs
[06:47] <ishaved4this> event not found
[06:48] <ishaved4this> thanks, let me check that out
[06:49] <ishaved4this> okay so this look like a simple way to add drives/applications to a group to allow for rw access. correct?
[06:50] <lordievader> It is a good way of adding other groups to the rw pool. Normal unix permissions only allow one owner:group combination.
[06:51] <ishaved4this> ahh. well, in 16.04, it said acl is already installed. Yet I can't get to it
[06:51] <ishaved4this> man, there is so much to learn with this os
[06:52] <ishaved4this> sudo tune2fs -l /dev/sdaX |grep acl
[06:52] <ishaved4this> Default mount options:    user_xattr acl
[06:52] <ishaved4this> i see this on the page, but I have no idea what drives are labeled what, and don't know how to find out
[06:54] <cpaelzer> ishaved4this: you are good with the initial setup, you can go on to the subsection https://help.ubuntu.com/community/FilePermissionsACLs#Adding_a_Group_to_an_ACL
[06:54] <cpaelzer> ishaved4this: to add the permissions you need to the paths where you need them
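A sketch of the setfacl step from that page (the path and group are examples; the daemon's account on Debian/Ubuntu is typically debian-transmission, and a throwaway directory plus the root user stand in here):

```shell
# Grant a principal recursive read/write on a download directory via an
# ACL; rwX gives execute only on directories.
dir=$(mktemp -d)
touch "$dir/example-download"
if command -v setfacl >/dev/null; then
    # real-world form: sudo setfacl -R -m g:debian-transmission:rwX /path/to/downloads
    setfacl -R -m u:root:rwX "$dir"
    getfacl --omit-header "$dir" | grep '^user:root'
else
    echo "setfacl not installed (sudo apt install acl)"
fi
rm -rf "$dir"
```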
[06:56] <ishaved4this> okay, I believe I am about 4 steps ahead of where I should be.
[06:57] <ishaved4this> I get that I need to add these drives to this list, but I don't know the command to see what drives are connected. I also haven't mounted them yet
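Listing attached drives before mounting can be sketched like this (the device and mount point in the comment are examples):

```shell
# lsblk shows every attached block device with size, filesystem, label
# and current mount point, which answers "what drives are labeled what".
command -v lsblk >/dev/null \
    && lsblk -o NAME,SIZE,FSTYPE,LABEL,MOUNTPOINT \
    || echo "lsblk not available"
# mounting one by hand would then look like:
#   sudo mkdir -p /mnt/usb1 && sudo mount /dev/sdb1 /mnt/usb1
```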
[06:58] <cpaelzer> ishaved4this: there is a reverse way to approach this and make it less complex
[06:59] <cpaelzer> ishaved4this: USB drives added later are usually mounted by your user - so called user mounts
[06:59] <cpaelzer> ishaved4this: they get mounted to /media/UID and the UID is what you don't know right now
[06:59] <cpaelzer> but as soon as you have another disk you will have a different one again
[06:59] <cpaelzer> ishaved4this: depending on your setup you might consider this far easier for you http://askubuntu.com/questions/395291/plex-media-server-wont-find-media-external-hard-drive
[07:00] <cpaelzer> ishaved4this: what you have to consider, is that it gives plex the access that your user has
[07:00] <ishaved4this> okay. So I have 5 externals right now. Let me look at that link, and thank you very much
[07:08] <ishaved4this> I've read in the past that changing plex from its own user is a bad idea
[07:08] <cpaelzer> right, it is as I mentioned above, but it is easy - which is why I want you to at least have the choice
[07:08] <ishaved4this> now, I've made plex a member of plugdev, sudo, and my user's group already. Should I do the same with transmission to accomplish the same thing?
[07:11] <cpaelzer> The user might be transmission-daemon, but if adding to plugdev is what you want you can try
[07:11] <cpaelzer> Although I don't see why it should be in sudo
[07:13] <ishaved4this> me either, I just followed a guide. Sudo would be overkill wouldn't it?
[07:17] <lordievader> Transmission-daemon runs under the user debian-transmission.
[07:19] <ishaved4this> yeah, i've added it to groups plugdev, and my name, and I added my name to its group using gpasswd
[08:55] <dn`> Is it possible to redirect the installer output/debug log/anything ;-) to a remote syslog with a kernel(?) param while installing?
[10:06] <caribou> cpaelzer: do you have any idea why LP: #1317491 is still stuck in trusty-proposed ?
[10:17] <cpaelzer> caribou: no I don't
[10:17] <cpaelzer> caribou: if anything then because it links bugs that were taken out in the SRU page http://people.canonical.com/~ubuntu-archive/pending-sru
[10:18] <cpaelzer> caribou: but I explained that last week (don't remember who asked)
[10:18] <caribou> cpaelzer: maybe tinoco, he was inquiring about this bug too
[11:13] <sonu_nk> hi i want to create a subdomain on my ubuntu server.. what is the best way, any link ?
[11:14] <sonu_nk> for apache
[11:56] <fnordahl> zul: good morning! would you have a moment to assess when stable/mitaka horizon 9.1.2 will be available as a ubuntu uca package? it will solve among other things bug 1666827
[12:06] <TafThorne> sonu_nk: Sub-domain is a DNS thing that you configure in a DNS server for the domain.
[12:07] <TafThorne> @sonu_nk If you do not have access to an authoritative domain name server for your parent domain you may not be able to configure a sub-domain for your Apache server to be part of.
[12:33] <zul> fnordahl: yep
[12:40] <Tazmain> Hi all, my /var/spool/mqueue-client is using 170GB currently, can I somehow get that space back ?
[12:46] <EmilienM> jamespage: hey, since this morning we have some issues with E: Unable to locate package ubuntu-cloud-keyring
[12:47] <EmilienM> jamespage: our repo config: http://logs.openstack.org/28/450628/2/check/gate-puppet-nova-puppet-beaker-rspec-ubuntu-xenial/acdee4a/logs/apt-cache-policy.txt.gz
[12:48] <EmilienM> coreycb: ^
[12:58] <coreycb> EmilienM, it seems to be working fine from the main ubuntu archive. i wonder if the mirror you're using is having issues?
[12:58] <EmilienM> yeah, that's what i'm looking now
[12:58] <EmilienM> but I see it: http://mirror.regionone.osic-cloud1.openstack.org/ubuntu/pool/universe/u/ubuntu-cloud-keyring/
[12:59] <ronator> Is there a reliable way to find out if an Ubuntu server has had a release-upgrade? Meaning, can I prove that a certain Ubuntu 16 setup was upgraded from Ubuntu 14?
[13:00] <ronator> aside from network stuff like "still uses old NIC naming scheme" ..
[13:01] <cpaelzer> yeah ronator, let me check the exact filename for you
[13:03] <ronator> cpaelzer: cool thx
[13:04] <cpaelzer> ronator: /var/log/dist-upgrade/main.log
[13:05] <ronator> cpaelzer: oh that's cool even with timestamps, awesome& quick , thx
[13:05] <cpaelzer> ronator: even better - this only holds your last upgrade, but if you want to know where you started check /var/log/installer/media-info
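Both checks together, as a small sketch:

```shell
# media-info records the medium the system was originally installed from;
# dist-upgrade/main.log only exists if a release upgrade has been run.
for f in /var/log/installer/media-info /var/log/dist-upgrade/main.log; do
    if [ -e "$f" ]; then
        printf '== %s ==\n' "$f"
        head -n 2 "$f"
    else
        echo "$f: not present"
    fi
done
```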
[13:07] <ronator> cpaelzer:  ' Ubuntu-Server 14.04.2 LTS "Trusty Tahr"  - Release amd64 (20150218.1)' Yeah, that's all the proof I needed, thanks a lot again :)
[13:07] <ronator> I was only looking into /var/log/apt - installer is so obvious :)
[13:10] <coreycb> EmilienM, i just installed ubuntu-cloud-keyring from http://mirror.regionone.osic-cloud1.openstack.org/ubuntu succesfully so the mirror seems ok from my end
[13:11] <EmilienM> coreycb: ok, I'm doing 'recheck' to see if it's consistent :/
[13:11] <coreycb> EmilienM, ok
[13:48] <EmilienM> coreycb: it works now. Go figure :-)
[13:49] <coreycb> EmilienM, oh good, just a temporary glitch in the matrix
[13:49] <EmilienM> yes
[13:51] <ronator> yeah, fu**** déjà-vus ;-)
[14:50] <keithzg> What's the preferred webmail component out there these days? I remember Roundcube being reworked into Roundcube-next a while back but I can't say I kept track of how that development work went.
[14:51] <erick3k> Can someone help me, i can't solve this just gave up. This happens after you resize the disk on a vm
[14:51] <erick3k> https://i.imgur.com/d2zTZLB.png
[14:51] <erick3k> gets stuck there during boot
[14:51] <keithzg> (Just migrated an email server from RHEL5 to Ubuntu Server 16.04, and with modernizing the backend I've been thinking of convenience additions for the frontend)
[14:54] <keithzg> erick3k: Can you boot into a recovery session?
[14:54] <blackflow> keithzg: I wouldn't recommend using packages for roundcube. it's a very exposed front-end application, and the universe package isn't patched all that well.
[14:54] <erick3k> keithzg how can i do that?
[14:55] <erick3k> this happens as soon as you change the disk size, yet growpart is installed so not sure what's causing it
[14:55] <keithzg> erick3k: You have to choose that from the GRUB menu when rebooting.
[14:55] <blackflow> keithzg: otherwise, yeh, I'd recommend Roundcube. quite nice webmail.
[14:56] <erick3k> ok i'll try but it boots so fast
[14:58] <keithzg> erick3k: You can change that if you boot from a live session or such and chroot into your existing install, but yeah, the default configuration doesn't give much room, heh. I don't think the option is even visible, but sometimes spamming ESC will hit the small window of time.
[14:58] <erick3k> ok
[14:59] <keithzg> blackflow: Has roundcube-next become the main roundcube? If not from universe (which I did suspect would be quite outdated), where best to grab it from? (I'd certainly like to handle the dependencies from the main repos as much as possible)
[15:03] <blackflow> keithzg: roundcube-next is a development project. current latest stable is 1.2.4, grab the tarball directly from roundcube. you could install the dependencies according to the roundcube package from universe, but it's just a few php modules.
[15:03] <blackflow> maybe there's a PPA with latest stable, but I don't trust those anyway, so wouldn't recommend.
[15:06] <erick3k> keithzg i added a timeout to the grub config but it doesn't seem to work
[15:06] <erick3k> is something not correct ? https://i.imgur.com/sH2VCNC.png
[15:09] <keithzg> blackflow: Yeah fair enough. As long as it's pretty much just PHP I'm fine with untarring it into a folder and running it on a server VM.
[15:10] <blackflow> keithzg: and updates are simple. you untar it into another dir, and run the update script, pointing to the running application dir.
[15:10] <keithzg> erick3k: Well, at very least from the config you're showing it'd still be hidden, although it should show up. You ran update-grub, right?
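For reference, the /etc/default/grub values that make the menu show (a sketch; the timeout value is an example, and update-grub must be run afterwards):

```shell
# Fragment for /etc/default/grub: show the menu for 5 seconds on boot.
GRUB_TIMEOUT=5
GRUB_TIMEOUT_STYLE=menu
# GRUB_HIDDEN_TIMEOUT must not be set, or the menu stays hidden.
```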
[15:10] <blackflow> keithzg: subscribe to the roundcube mailing list to get mail on new updates and secvuln fixes
[15:11] <keithzg> erick3k: (from within the install itself via a chroot, hopefully with bind mounts!)
[15:11] <keithzg> blackflow: Good idea
[15:11] <erick3k> yes i did
[15:12] <keithzg> erick3k: Well, if you've gotten into the chroot, it should also be possible to set the default boot or just the next boot to be into recovery mode (although I forget how precisely to do that, it's been quite a while)
[15:14] <Score_Under> (moving from #ubuntu) So I want to start a small apt repo for the company I'm in. We're migrating from CentOS (mostly), and so on the machines that still have people's gpg keys for example, the correct scripts to set up an apt repository aren't present. CentOS provides dpkg-scanpackages, but I can't find any other software which is necessary to create the rest of the repo (mostly the Release
[15:14] <erick3k> ok but should this config https://i.imgur.com/1TRRUvC.png show the menu?
[15:14] <Score_Under> files). I tried hacking together a shell script to do it, and I got apt to be able to "update" from it without complaint, but it doesn't create any corresponding /var/lib/apt/lists files and it can't find any packages in those repos. Regarding documentation, I can't find a nice medium between the kind which directs me to a few tools (with some invariably not existing on CentOS) and the kind
[15:14] <Score_Under> which attempts to define everything from the ground up (which provides far more information than necessary and would take an enormous amount of time to read through and implement)
[15:15] <Score_Under> And my question is either: 1. where should I start debugging this, or 2. where can I get a copy of the scripts required to create these repositories
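For the "where can I get the scripts" half, a minimal flat repo can be built with dpkg-scanpackages plus apt-ftparchive (a sketch; both come from the Debian/Ubuntu packages dpkg-dev and apt-utils, so one Ubuntu box can generate the tree that then gets served to the others):

```shell
# Build a flat repo index in ./repo; .deb files would be dropped in first.
mkdir -p repo && cd repo
if command -v dpkg-scanpackages >/dev/null && command -v apt-ftparchive >/dev/null; then
    dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz
    apt-ftparchive release . > Release
    ls -l Packages.gz Release
else
    echo "needs dpkg-dev and apt-utils"
fi
```

Clients would then use a sources.list line like `deb [trusted=yes] http://yourserver/repo ./` (unsigned; signing the Release file removes the need for trusted=yes).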
[15:16] <erick3k> nothing shows it boots instantly
[15:17] <erick3k> hum i think the recovery kernel is not installed
[15:18] <erick3k> perhaps why it doesn't show
[15:18] <erick3k> https://i.imgur.com/BUikNZZ.png
[15:18] <keithzg> erick3k: Hmm, quite possibly.
[15:18] <blackflow> Score_Under: I don't have experience setting my own apt repo, so I can't quite help with that, but I'm failing to understand the use case, or why would you need it.
[15:18] <erick3k> any way to install it
[15:18] <erick3k> ?
[15:19] <keithzg> erick3k: I can't honestly remember if there's a separate recovery kernel needed, I didn't actually think so.
[15:19] <erick3k> umm
[15:19] <Score_Under> blackflow: The first use case is for version pinning (for the foreseeable future we're stuck on a version of docker which ubuntu has long since stopped providing), and the second is for adding our own software.
[15:19] <keithzg> erick3k: I'd strongly suggest using `pastebinit` from the terminal rather than screenshots, though, they're kindof constrained. Try for instance `cat /etc/default/grub | pastebinit` to get your entire config up and readable
[15:20] <keithzg> Maybe there's still something in there that's tripping it up.
[15:20] <erick3k> oh ok hold on
[15:20] <blackflow> Score_Under: I understand. Seen this? https://help.ubuntu.com/community/Repositories/Personal
[15:21] <keithzg> erick3k: And you can look at /boot/grub/grub.cfg to see what menu items you *should* have
[15:21] <erick3k> here https://0bin.net/paste/+Vd7Jk3Ox7dd5bXO#5vmbd6uoIXFYSgiuze-M+1O5uryBgzD/Dy3v7r6MacH
[15:21] <erick3k> ok
[15:22] <keithzg> erick3k: Hmm yeah that all looks fine. Is this a VM? Perhaps the host is directly booting the kernel or such and thus grub isn't even in the picture.
[15:22] <erick3k> yes it is kvm
[15:22] <Score_Under> blackflow: I took a look at that, but it only deals with software kept on one machine. For comparison our RHEL package repo is currently 3.4GB large, so that may end up requiring regular rsyncs of 3GB-ish directories over 200+ machines
[15:22] <Score_Under> to me it's not an attractive prospect
[15:23] <erick3k> i don't select any kernel in the vm options so it shouldn't
[15:23] <erick3k> i think is disabled on the grub.cfg hold on
[15:23] <keithzg> erick3k: Hmm. I must admit I'm stumped then, sorry
[15:24] <erick3k> yea, never had this happened before
[15:24] <erick3k> ubuntu 14 works like a charm
[15:24] <erick3k> 0 problems
[15:24] <erick3k> https://0bin.net/paste/-AHCy1JjWVAnhva4#pnDBdZmqKlK9PvHXBKugIoOXgKc3Rw7y1A3l3zYlIgi
[15:25] <blackflow> Score_Under: check the accepted answer: http://askubuntu.com/questions/170348/how-to-create-a-local-apt-repository    Sounds like serving the dir over http with apache or nginx is all it takes to make it available in a network.
[15:26] <Score_Under> Yeah. I'm just struggling to get it in a format that apt likes
[15:27] <erick3k> i didn't even change the disk size now, only turned it off
[15:27] <erick3k> and look is stuck https://i.imgur.com/7NoZoZp.png
[15:28] <erick3k> i just don't get it is beyond me
[15:28] <nacc> erick3k: are you having issues with iscsi? or what is the problem (sorry only recently joined)
[15:29] <erick3k> hi nacc, it's a kvm virtual machine using the ubuntu 16 cloud image
[15:29] <erick3k> once you shutdown this happens
[15:29] <erick3k> reboots fine tho
[15:30] <erick3k> and no i have centos 6 7 and ubuntu 14 running on the same exact machine type and 0 problems
[15:30] <erick3k> can't get into recovery, booted with system rescue, no errors; i mean i don't know, about to give up
[15:31] <nacc> erick3k: once you shut down -- what happens?
[15:31] <erick3k> that
[15:31] <erick3k> gets stuck booting at https://i.imgur.com/7NoZoZp.png
[15:31] <nacc> erick3k: ok, please use words ... still waking up :)
[15:31] <nacc> erick3k: got it
[15:31] <erick3k> xD
[15:32] <nacc> erick3k: so `sudo shutdown` then `virsh start` or whtever you use, fails. But from within the instance, `sudo reboot` works?
[15:32] <erick3k> yes, as long as you reboot it boots back again, until you shutdown
[15:32] <erick3k> and start
[15:32] <erick3k> very very very weird
[15:34] <nacc> so if it fails to 'start' again, do you have to create a fresh VM each time?
[15:34] <erick3k> yep
[15:35] <erick3k> and that fresh vm will boot until again shutdown
[15:35] <nacc> erick3k: any particular hw details? using iscsi, etc.?
[15:35] <erick3k> as for disk is using Virtio
[15:35] <erick3k> and console qxl
[15:35] <erick3k> not much else
[15:36] <nacc> hrm
[15:37] <erick3k> gonna try and change those and see if something happens
[15:40] <nacc> virtio should be fine, i'd see if the console makes a difference (or see if you can ping it) -- i did see a console=tty1, which seemed a bit surprising
[15:41] <erick3k> yes that might be something
[15:41] <erick3k> what should i change?
[15:42] <erick3k> should be tty0?
[15:43] <nacc> erick3k: that's what i would have expected, but it depends on your config -- did you manually change that?
[15:44] <erick3k> i do think it might be something with the console cuz it usually boots right after what's shown in the pic
[15:44] <erick3k> nop, image is as it comes from ubuntu cloud images
[15:44] <erick3k> so defaults
[15:44] <erick3k> hold on i'll show you
[15:47] <dn`> is there any easy way to redirect the netinstaller output or at least the log output to a remote syslog while installing via a param?
[15:48] <erick3k> nacc there are two
[15:48] <erick3k> GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0"
[15:52] <nacc> erick3k: ack, i'm going to try and reproduce locally
[15:53] <erick3k> kool
[15:57] <nacc> erick3k: interesting, which image format did you use? i used current/ disk1.img with virt-manager and it hangs immediately for me
[15:58] <erick3k> nice (that you reproduced) hehe
[15:58] <erick3k> qcow2 i think
[15:58] <erick3k> hold on let me link
[15:58] <erick3k> https://cloud-images.ubuntu.com/releases/xenial/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img
[15:59] <erick3k> says Cloud image for 64-bit computers (QCOW2 disk image file for use with QEMU and KVM)
[16:00] <nacc> erick3k: yea, that's the one i used too
[16:03] <erick3k> so what you think could be the problem?
[16:03] <erick3k> bad image?
[16:05] <erick3k> ubuntu 14 cloud image works like a charm btw
[16:08] <nacc> erick3k: ah! i waited ... a while
[16:08] <nacc> erick3k: and it booted
[16:10] <erick3k> hum
[16:10] <nacc> what's the default user/password?
[16:11] <erick3k> umm there is none, you have to create one with cloud-init
[16:13] <nacc> heh, duh -- well, it did boot
[16:13] <nacc> let me restart it with some cloud data
[16:16] <dn`> Anyone got an idea how to redirect the output of the installer or the log to a remote syslog?
[16:21] <nacc> smoser: --^ erick3k's issue,  is it because it's trying to contact a ds?
[16:51] <smoser> dn`, https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=605614 . it can be done. i'm almost certain. i dont remember the syntax. its probably kernel command line.
[16:54] <dn`> smoser: thanks, that would be so wonderfully helpful *searching from that post*
[16:54] <smoser> erick3k, well, if you pass console=ttyS0 (as it appears you are, and is default from the cloud image) then init's messages are going to go to a serial console rather than tty
[16:55] <erick3k> i might just install ubuntu from scratch
[16:55] <erick3k> already spent too many hours on this
[16:55] <erick3k> can't find the root of the problem
[16:55] <smoser> erick3k, but also be aware that the cloud images are not going to be useful unless you give them some sort of datasource. they're meant to be booted in a cloud.
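For experimenting outside a real cloud, the usual trick is a NoCloud seed; a sketch (cloud-localds is in the cloud-image-utils package, and the credentials and names below are examples):

```shell
# Minimal NoCloud datasource: two files turned into a seed disk that is
# attached to the VM alongside the cloud image.
cat > user-data <<'EOF'
#cloud-config
password: ubuntu
chpasswd: { expire: false }
ssh_pwauth: true
EOF
cat > meta-data <<'EOF'
instance-id: iid-local01
local-hostname: xenial-test
EOF
# cloud-localds seed.img user-data meta-data   (not run here)
# then boot the VM with seed.img as a second disk
```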
[16:55] <erick3k> thanks for your help tho
[16:55] <erick3k> yes i know
[16:55] <erick3k> i sell vpses and all other images work great
[16:55] <smoser> where are you booting it ?
[16:56] <erick3k> i'm using ovirt / rhev
[16:56] <erick3k> has cloud init integrated
[16:56] <smoser> and you're sure its not working ?
[16:56] <smoser> ie, its not that you just aren't seeing something
[16:56] <smoser> but that its actually not doing something.
[16:57] <smoser> erick3k, i'd appreciate your help here though... this could be unintended fallout of
[16:57] <smoser>  https://lists.ubuntu.com/archives/ubuntu-devel/2017-February/039697.html
[16:58] <smoser> i'd like to make sure that is not the case.
[16:58] <smoser> could you grant me access to a vps ? or i can sign up if you're a cloud provider.
[16:59] <erick3k> i only like them because it saves time instead of installing from scratch but i know they are not perfect :)
[17:03] <smoser> erick3k, well, ubuntu would rather you use those when you can also
[17:04] <smoser> it gives ubuntu users a standard installation across multiple providers
[17:04] <smoser> so i'd really like to help
[17:04] <smoser> (especially if the datasource identity stuff regressed this)
[17:49] <dn`> is there any way to fix the installer ‘setting up the clock’ - ‘getting the time from a network time server…’ bug? it kinda always hangs there for ages.. - like some magic preseed value to make it go away?
[17:51] <drab> d-i clock-setup/utc boolean true
[17:51] <drab> d-i time/zone string US/Pacific
[17:51] <drab> d-i clock-setup/ntp boolean true
[17:51] <drab> I have that in my preseed
[17:51] <drab> never had an issue hanging around time
[17:52] <dn`> https://bugs.launchpad.net/ubuntu/+source/debian-installer/+bug/1558166
[17:52] <dn`> I think that’s the bug that I’m facing
[17:53] <drab> strange, I do about half a dozen installs of xenial a day (pxe+preseed) and never had that issue
[17:53] <drab> but I'm not ipv6 only
[17:54] <drab> in fact I disable ipv6 right after via ansible because I don't use it and have just had problems with it
[17:54] <dn`> it’s random - I have it on some but not on others..
[17:54] <drab> same network/same dhcp settings?
[17:55] <drab> dn`: try to pass ipv6.disable=1 to boot cmdline?
[18:19] <drab> mmmh
[18:19] <drab> dd if=/dev/zero bs=4M count=100 | nc -v lxc-srv1 2222
[18:19] <drab> running from another lxc container on the same host
[18:19] <drab> I get 117MB/s, ie Gigabit
[18:20] <drab> but if I use iperf between the same two containers I get 21Gbit
[18:20] <drab> even between container and host I get about same speeds with iperf
[18:21] <sarnold> /dev/zero may not be the fastest way to generate zeros
[18:21] <sarnold> try dd if=/dev/zero of=/dev/null or something
[18:21] <sarnold> see what speeds you get with that
[18:21] <drab> but as soon as I introduce /dev/zero things go down to 1Gbit, how can /dev/zero be so slow?
[18:21] <drab> uhm ok
[18:21] <drab> it seemed unreal that /dev/zero would be slow...
[18:21] <drab> testing that, thanks, good test
[18:21] <sarnold> page faults aren't the fastest things in the world
[18:21] <sarnold> zeroing pages isn't the fastest thing in the world
[18:22] <drab> ok, dd to dev null is 6.8Gbps
[18:22] <sarnold> if your application zeros a page and just calls write() on that page repeatedly, it'll probably go way quicker
[18:22] <drab> which is still not 21, but much faster than dd + netcat
[18:23] <sarnold> dd + nc is going from kernel -> user, user -> pipe, pipe -> nc, nc -> kernel, kernel -> whatever's listening in the lxc-serv1 container...
[18:24] <drab> true, it's more complex than that for sure, but 6Gbit seems a heck of a loss for nc and some pipes, even if doubled, but then I don't know what I'm talking about really :P
[18:24] <sarnold> dd from zero to null is just two copies, kernel -> dd, dd -> kernel
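The copy-count difference sarnold describes can be seen locally without any network at all: the pure kernel↔user path versus the same bytes pushed through an extra pipe stage. A rough sketch (byte counts are arbitrary; output format is GNU dd's):

```shell
#!/bin/sh
# Pure copy path: kernel -> dd -> kernel, two copies per block.
dd if=/dev/zero of=/dev/null bs=4M count=256 2>&1 | tail -n1

# Same data through a pipe adds user->pipe and pipe->user copies,
# roughly what a dd | nc pipeline pays before the network even starts.
dd if=/dev/zero bs=4M count=256 2>/dev/null | dd of=/dev/null bs=4M 2>&1 | tail -n1
```

On most hardware the piped variant reports a noticeably lower throughput, which is the same effect drab measured between containers.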
[18:24] <drab> I don't know much at that level of depth kernel wise so it may very well be reasonable
[18:27] <drab> I wanna try kvm inside lxc and trying to get some baseline
[18:27] <drab> the idea of a bridge on top of a bridge is kinda scary, but again maybe it's just ignorance
[18:27] <sarnold> iirc the linux kernel just smoooshes all the connected bridges together into one
[19:01] <dn`> while installing ubuntu on iscsi (root device) - I run into a kernel error(?) https://gist.github.com/anonymous/20af414db286d8893257a588f226557d - anyone got a tip? - I’m using http://archive.ubuntu.com/ubuntu/dists/xenial/main/installer-amd64/current/images/netboot/ubuntu-installer/amd64 as installer
[19:03] <nacc> dn`: hrm, is it reproducible?
[19:03] <nacc> dn`: seems like a networking issue for the iscsi connection
[19:04] <dn`> nacc: that’s the fun part - kinda. the same machine worked on another iSCSI device - both are the same brand (synology…); I would bet that it works without authentication - and that it’s a combination of synology doing something stupid
[19:04] <dn`> and something wrong
[19:05] <dn`> let me try without auth (will take a moment) — but before I manually tried it with iscsiadm - I could login on the target without auth, but not with auth
[19:06] <nacc> dn`: did you get a similar backtrace with iscsiadm?
[19:12] <dn`> nacc: I didn’t check syslog at that time - but I think I could retry it and check (currently trying without auth/chap)
[19:13] <dn`> oki, also getting the same error without chap/password - but giving it one more try
[19:14] <nacc> dn`: sure, just wondering
[19:15] <dn`> nacc: it’s just kinda odd; I used another Synology for the exact same installation (pxe, preseed) and both Synologies have the same version running;
[19:17] <dn`> oki, same error twice (without auth)
[19:17] <dn`> that’s annoying
[19:25] <dn`> the fun part: after the installer complains it can’t login - if I want to retry, I get “You entered an empty password, which is not allowed, Please choose a non-empty password”. but it’s an endless loop with the same error ;-) without any chance of changing it
[19:27] <nacc> dn`: so, just curious (i'm working on curtin iscsi support right now ) -- what's an example target name in your setup?
[19:28] <dn`> iqn.2017-01.com.xxxx:srv-n11-0001
[19:28] <nacc> dn`: thanks :)
[19:28] <dn`> the fun thing is - it worked before
[19:28] <nacc> dn`: that conforms to the spec, was just wondering (we're reading bunches of specs and fixing our code to match right now)
[19:30] <dn`> The most confusing part for me is that the exact same configuration and names - beside 0003 instead of 0001 ;-) works with another pair of machine <> nas
[19:30] <dn`> the other version works with chap/auth or without - all works fine
[19:30] <dn`> ‘version’ == machine
[20:30] <ThiagoCMC> Hey guys, I'm trying to deploy OpenStack Ocata on Ubuntu 16.04, with Cloud Archive, also, with OVN.
[20:31] <ThiagoCMC> I'm trying to follow this: https://docs.openstack.org/developer/networking-ovn/install.html
[20:31] <ThiagoCMC> however, looks like the Ubuntu ovn-common package is missing a binary!
[20:31] <ThiagoCMC> ovn-ctl: command not found
[20:46] <tomreyn> ThiagoCMC: http://packages.ubuntu.com/search?searchon=contents&keywords=ovn-ctl&mode=&suite=xenial&arch=any
[20:47] <tomreyn> not everything is always in PATH
[20:53] <ThiagoCMC> tomreyn, thank you!
[21:08] <Garogat> can anyone help me with high availability web clustering? i have at least two (more are coming) nginxs with php and mysql and i need to sync my files now. what should i use, because the servers are not in the same local network
[21:26] <sarnold> Garogat: you could rsync all your servers from a 'golden' server somewhere; or you could use git
[21:28] <Garogat> are there any common issues with rsync i have to be aware of? (i read about problems with deleted files coming back etc.) git sounds weird to me, because i would have to save my customers' files to git.
[21:29] <sarnold> rsync's changes aren't made in any sort of atomic way
[21:30] <sarnold> for ubuntu archive mirrors, this is worked around by copying all the data files before copying the metadata files
[21:30] <sarnold> but this might be complicated to reproduce for arbitrary website files
[21:31] <sarnold> depending upon the sizes of changes, how long it takes to transfer, how many requests your servers get, etc. it might be best to rsync to a target directory and do a directory rename once the transfer is finished
[21:31] <sarnold> git will have similar problems but much shorter window for trouble, if you use branches to manage the changes
[21:34] <Garogat> this is semi professional caused by the different locations with 100mbit up/down anyway, so do you think i could have a system where node 1 rsyncs to 2 and if node 1 fails i will run node 2 until 1 is back and got all data back?
[21:37] <sarnold> Garogat: it depends upon the application you're serving on the thing
[21:38] <Garogat> im having: ddns-service, multiple cms (wordpress with assets) and so on
[21:44] <Garogat> sarnold: is GlusterFS worth having a look at?
[21:45] <tomreyn> Garogat: not packaged in ubuntu AFAIK, but totally worth a try: https://cernvm.cern.ch/portal/filesystem
[21:46] <sarnold> Garogat: maybe. I had the impression from glusterfs sources that the object storage capabilities were alright but I didn't care for the filesystem part of glusterfs. Maybe that's been improved in the meantime..
[21:46] <sarnold> Garogat: but you can't just add that to an application that wasn't planning on being clustered from the start
[21:50] <Garogat> that's so much more complicated than i thought. i also need to find a way to sync php sessions?!
[21:50] <erick3k> does anyone else here work with cloud-init?
[21:50] <nacc> erick3k: there is a channel for it as well
[21:50] <nacc> erick3k: #cloud-init
[21:50] <erick3k> ty nacc
[21:51] <nacc> erick3k: np
[21:52] <sarnold> Garogat: depending upon the application's goals, yes; it's sometimes common to have haproxy or whatever send all connections from a given host back to the same backend webserver to try to avoid needing to share session state in a cookie or in the database
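The source-based stickiness sarnold mentions can be expressed in a few lines of haproxy config. A sketch (backend and server names/addresses are assumptions): `balance source` hashes the client address so each client keeps hitting the same backend, which lets file-based PHP sessions work without shared storage.

```
# haproxy.cfg fragment -- hypothetical two-node web backend
backend web
    balance source
    hash-type consistent
    server node1 10.0.0.11:80 check
    server node2 10.0.0.12:80 check
```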
[22:30] <nacc> powersj: re: LP: #1679357, it's a metapackage
[22:30] <powersj> nacc: rats, thought I checked for that
[22:31] <powersj> nacc: thanks
[22:31] <nacc> powersj: you can check to be sure, but i think it's contentless
[22:31]  * nacc is double checking too
[22:32] <nacc> ok, there are a few binaries, but pretty few -- i guess we could write tests that they do work when the packages are installed, so i'll leave it
[22:32] <powersj> yeah... ok! thx for checking
[23:04] <cliluw> I have some code that runs as a service. I want to let this service SSH into some of my other machines and run commands. What's the most secure way to go about this?
[23:05] <sarnold> cliluw: ssh-keygen a key for the service to use; distribute the public portion in ~/.ssh/authorized_keys as needed
[23:06] <sarnold> cliluw: decide if you want the key to be used by whoever has the key alone, so the service doesn't require any unlocking or any agent
[23:06] <sarnold> cliluw: .. or if the key shouldn't be useful on its own, and must be unlocked by an agent for use. then you'd want to make sure an ssh-agent is running, set up the environment variables correctly, and then figure out how you'll unlock it every reboot or key expiry or whatever.
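sarnold's first option, a passphrase-less key locked down on the server side, might look like this. The key comment, paths, and the forced command are illustrative, not from the discussion; the `restrict` and `command=` options in authorized_keys are standard OpenSSH and limit what the key can do even if it leaks:

```shell
#!/bin/sh
set -e
# Generate a dedicated passphrase-less key for the service to use.
KEYDIR=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -f "$KEYDIR/service_key" -C "deploy-service"

# On each target machine, lock the key down in ~/.ssh/authorized_keys:
# "restrict" disables forwarding and pty allocation, and command=
# forces one program regardless of what the client asks to run.
printf 'restrict,command="/usr/local/bin/deploy.sh" %s\n' \
    "$(cat "$KEYDIR/service_key.pub")" > "$KEYDIR/authorized_keys.example"
```

The generated `authorized_keys.example` line is what you'd append to the remote account's `~/.ssh/authorized_keys`.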
[23:08] <cliluw> sarnold: Ok, thank you.
[23:18] <blackflow> cliluw: this code as a service, is a public service?
[23:18] <cliluw> blackflow: No, not a public service. Just a standard systemd service.
[23:19] <blackflow> cliluw: and what do you want it to do over ssh to other machines?
[23:21] <cliluw> blackflow: Various things. I want it to modify files on the remote machine and run commands to set them up.
[23:22] <blackflow> cliluw: well, giving unbound ssh access from one machine to another is not quite the most secure thing to do. if that machine is compromised, all the other machines are wide open to compromise as well.
[23:22] <cliluw> blackflow: That's true. What do you suggest?
[23:23] <blackflow> cliluw: depending on what exactly you want to do, it may be wise to run something like SaltStack's reactor.
[23:23] <blackflow> or in other words, have those other machines poll for a "command file" (that's not just a bash script or something like that) from which they can parse what to do.
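blackflow's pull model can be sketched in a few lines of shell. The file format and verbs here are invented for illustration; the point is that each target only acts on verbs it recognizes and never executes the file's contents, so a compromised controller can publish commands but not arbitrary code:

```shell
#!/bin/sh
# Hypothetical puller: build a sample command file, then dispatch only
# whitelisted verbs -- the file is parsed, never eval'd or executed.
CMD_FILE=$(mktemp)
printf 'sync-config\nrm -rf /tmp/never-run\n' > "$CMD_FILE"

while IFS= read -r verb; do
    case "$verb" in
        sync-config)  echo "would rsync config here" ;;
        restart-web)  echo "would restart nginx here" ;;
        *)            echo "ignoring unknown verb: $verb" >&2 ;;
    esac
done < "$CMD_FILE"
```

In real use the command file would be fetched (or served by something like SaltStack's reactor) rather than written locally, but the whitelist dispatch is the security-relevant part.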