[01:51] <jayjo_> if I have my Certificate Authority, and I sign my database certificate, and I need to sign the client certificates with the same CA, do I sign them myself and distribute them to the client, or does the client do it somehow by exposing the CA?
[01:55] <tarpman> jayjo_: if you are running a CA (even a local/internal one), it's probably a good idea to research how TLS works, to the point that you can answer a question like that yourself
[01:59] <jayjo_> Is there a recommended resource for reading up on that? I've been using TLDP.org and the ubuntuserver guide, but they are difficult to onboard
[02:01] <tarpman> jayjo_: nothing off the top of my head, but the first couple of google results for "intro to TLS" look reasonable - the apache.org and gnutls.org ones
[02:01] <tarpman> maybe avoid the sans.org one - says it's from 2003, might contain some outdated info
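To jayjo_'s original question: the CA operator keeps the CA private key and signs each client's certificate signing request (CSR); clients never see the CA key, they only submit CSRs and receive certificates back. A minimal openssl sketch (the filenames and subject names here are made up for illustration):

```shell
# Create a toy CA (in practice the CA key already exists and stays private):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout ca.key -out ca.crt -subj "/CN=Example CA"
# The client generates its own key and a CSR; only the CSR is sent to the CA:
openssl req -newkey rsa:2048 -nodes \
    -keyout client.key -out client.csr -subj "/CN=dbclient"
# The CA operator signs the CSR and returns client.crt to the client:
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out client.crt
# Anyone holding ca.crt can now verify the client certificate:
openssl verify -CAfile ca.crt client.crt   # -> client.crt: OK
```

The database server only needs ca.crt (public) to verify clients; distributing ca.crt is safe, distributing ca.key is not.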
[04:37] <NoHoFoo> $php            command gives same response
[04:37] <NoHoFoo> $php            command gives NO response
[04:38] <NoHoFoo> but php --version
[04:38] <NoHoFoo> works
[04:38] <NoHoFoo> need to run    $php drush status
[04:39] <NoHoFoo> gives a frozen cursor
[04:39] <NoHoFoo> $php            command gives NO response
[04:39] <NoHoFoo> so I'm confused
[04:40] <NoHoFoo> wordpress runs on this server
[04:40] <NoHoFoo> drupal does ALSO
[04:40] <NoHoFoo> trying to install drush
[05:21] <mowthegrass> Hello All - Can someone help me point out the key areas that need to be monitored for a local mirror repo?
[05:21] <mowthegrass> Some of them I have noted are: 1) Disk usage 2) Repo URL 3) Server health
[09:17] <adun153> Hi, using apache, how can I make it so that whenever somebody puts in http://myserver.com or https://myserver.com, it always redirects to https://myserver.com/subdir/ ? Thanks.
[09:55] <van777> adun153: google "html redirect"
[10:19] <iberezovskiy|off> jamespage, hi. could you please tell me the status of the keystone package with the keystone tempest plugin?
[10:20] <jamespage> iberezovskiy, its in newton-proposed - you can always look at http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/newton_versions.html
[10:21] <jamespage> iberezovskiy, notifications of changes are also sent to https://lists.ubuntu.com/mailman/listinfo/Cloud-archive-changes
[10:22] <jamespage> for example - https://lists.ubuntu.com/archives/cloud-archive-changes/2016-June/004186.html
[10:22] <jamespage> is the one you are interested in
[10:27] <iberezovskiy> thank you!
[11:10] <adun153> van777 that did it, thanks!
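adun153's redirect can also be done server-side rather than with an HTML meta refresh; a sketch using mod_alias directives, assuming the usual :80/:443 vhost pair (myserver.com and /subdir/ are taken from the question):

```apache
<VirtualHost *:80>
    ServerName myserver.com
    # Bare HTTP root -> HTTPS subdir:
    RedirectMatch permanent ^/$ https://myserver.com/subdir/
</VirtualHost>

<VirtualHost *:443>
    ServerName myserver.com
    # Bare HTTPS root -> subdir (SSL directives omitted):
    RedirectMatch permanent ^/$ https://myserver.com/subdir/
</VirtualHost>
```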
[11:19] <iberezovskiy> jamespage, what's the policy on adding dependencies for openstack packages if these dependencies aren't mentioned in requirements.txt?
[11:20] <iberezovskiy> I've hit an issue where python-ddt is required for neutron, but it's not in neutron's requirements.txt
[11:20] <iberezovskiy> https://github.com/openstack/neutron/blob/master/neutron/tests/tempest/api/test_extension_driver_port_security.py#L16
[11:20] <iberezovskiy> omg, it's in test-requirements.txt
[11:21] <jamespage> iberezovskiy, well it is a test library :-)
[11:21] <iberezovskiy> yeah, probably it should be installed along with tempest
[11:23] <iberezovskiy> so, the problem isn't in the packages, sorry for the false alarm
[11:39] <jamespage> np
[12:10] <rbasak> caribou: kexec-tools reviewed, thanks. Looks good to upload. Do you want to upload, or do you want me to sponsor?
[12:10] <rbasak> caribou: actually, let me sponsor if you don't mind.
[12:10] <caribou> rbasak: thanks! no worry I'll upload it
[12:11] <rbasak> I'd like to try the tag/upload/push flow.
[12:11] <caribou> rbasak: fine by me as well, got another upload on my way
[12:12] <caribou> rbasak: don't be surprised, it'll sit in proposed until I sync makedumpfile from Debian (which I will do right after I answer your email)
[12:12] <rbasak> caribou: understood, thanks
[13:28] <caribou> rharper: have you started to work on the multipath-tools merge yet?
[13:29] <caribou> rharper: I'm working on the lvm / multipath-tools bug I told you about yesterday
[13:39] <rharper> caribou: no, i've not started a merge on multipath-tools yet; did you need something now?
[13:39] <caribou> rharper: I'm preparing an SRU so I need to get yakkety in sync
[13:40] <caribou> rharper: here is the debdiff : http://paste.ubuntu.com/17361015/
[13:40] <caribou> rharper: moved up the clean-tree & added systemd calls in d/rules
[13:41] <rharper> caribou: since we're syncing, won't we get the systemd link fix automatically?
[13:42] <rharper> for yakkety, won't we see a new-release from debian, and then a replay of our changes on top ?
[13:42] <caribou> rharper: in order to SRU to Xenial, I need it in Yakkety now but you're right, this will be dropped in the merge
[13:42] <runelind_q> is there a list of containers available for download?
[13:43] <caribou> rharper: and if I SRU to Xenial as is, it'll get rejected since it's not in Yakkety
[13:43] <caribou> runelind_q: on LXC ? lxc image list images, lxc image list ubuntu-daily
[13:43] <rharper> so, we're doing a bug fix for the SRU, to be dropped later in yakkety once we sync
[13:44] <caribou> rharper: sounds silly but yes
[13:44] <caribou> rharper: lemme check
[13:45] <rharper> caribou: my concern is that the systemd service (versus the upstart script wrapper) isn't fully baked anyhow ... so I'm not sure what bug we're fixing here...
[13:45] <runelind_q> caribou: sudo lxc image list images just returns a blank list :-/
[13:45] <runelind_q> same with ubuntu-daily
[13:45] <caribou> runelind_q: images:
[13:46] <rharper> lxc image list images:
[13:46] <runelind_q> aha
[13:46] <rharper> the images: hits the remote images.linuxcontainers.org
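Gathering the thread for runelind_q: the trailing colon is what marks a remote (without it, `images` is treated as a local filter and the list comes back empty). These need a working LXD host, so treat them as a sketch:

```shell
lxc image list images:         # community images from images.linuxcontainers.org
lxc image list ubuntu-daily:   # daily Ubuntu cloud images
lxc image list images: centos  # filter the remote listing by name
```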
[13:46] <caribou> rharper: I will recheck but according to our tests, that change was also required in order to fix the issue; the clean-tree change alone was not sufficient
[13:46] <runelind_q> caribou: cheers
[13:47] <caribou> rharper: let me double-check before I decide
[13:47] <rharper> caribou: I mean the bigger picture
[13:47] <rharper> I agree that the changes are needed to enable the systemd multipath service to "start" properly via SD_NOTIFY
[13:47] <rharper> but what *bug* is that solving ?
[13:47] <rharper> xnox had looked at the systemd service for multipath and it was lacking vs. the upstart wrapper script (/etc/init.d/multipath)
[13:48] <rharper> so, even if we fix the systemctl start multipath command ... it doesn't always work, especially if it's not fully configured.
[13:48] <caribou> rharper: cyphermox is sitting beside me and he also agrees that it is kind of silly but still required in the meantime
[13:48] <rharper> so I'm backing up and saying, what are we fixing ?  That is, if we fix the systemd service, we also need to address the other issues that xnox raised
[13:48] <caribou> rharper: ok, he's also here so I'll check with him
[13:49] <rharper> ok, lemme find the bug with xnox's investigation
[13:49] <cyphermox> it's a good stop-gap since you might not get to the merge until next week or later?
[13:50] <rharper> sure
[13:50] <rharper> but what bug are we fixing that needs to go into X right now?
[13:50] <rharper> https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1583563
[13:50] <rharper> that's the one with the rest of the issues for systemd multipath service
[14:07] <coreycb> ddstreet, hey looking at designate and heat newton ci failures
[14:08] <coreycb> ddstreet, sorry, not meant for you
[14:20] <teward> when trying to rsync data between systems, that are both VMs, i'm getting CPU soft lockups on the target system on the rsync - can I assume this is the hypervisor's fault?
[14:28] <EmilienM> jamespage: hey!
[14:28] <EmilienM> jamespage: iberezovskiy was able to deploy openstack newton & run tempest, with this workaround: https://review.openstack.org/#/c/327678/33/run_tests.sh
[14:29] <EmilienM> do you think we can solve it in packaging?
[14:30] <jamespage> EmilienM: hmm probably not - I'm reticent to include a test requirement for runtime usage...
[14:30] <EmilienM> k
[14:30] <EmilienM> we'll handle it in puppet
[14:37] <runelind_q> hrm, not necessarily an ubuntu issue, but I'm running a CentOS 6 container on 16.04, and an http_proxy variable keeps getting set on login, and I can't figure out where it is being set :-/
[14:39] <sarnold> runelind_q: iirc lxd by default sets up a link-local ipv6 bridge without network connectivity, and then uses an http proxy to get to the world (e.g. for apt..)
[14:39] <runelind_q> sarnold: I have modified my lxc profiles to just bridge with a regular bridge.
[14:39] <runelind_q> and my ubuntu containers do not appear to be setting this http_proxy variable
[14:40] <sarnold> runelind_q: ah interesting
[14:40] <sarnold> runelind_q: grep -r http_proxy /etc and see what you can find?
[14:40] <runelind_q> no hits
[14:43] <runelind_q> nor in yum.conf or /etc/environment
[14:45] <sarnold> alright, check your ~/.bash* files
[14:45] <runelind_q> yup, did that as well.
[14:45] <sarnold> maybe check /lib and /usr as well, systemd unit files can set environment variables
[14:45] <runelind_q> it only appears to be set for root, not my regular user that I just created, so I guess I'm not super worried about it.
[14:46] <runelind_q> just figured someone else may have created a CentOS container on Ubuntu and came across the same issue.
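Pulling sarnold's suggestions together into one read-only sweep of the usual login-environment sources (root-only paths since the variable only appears for root; `|| true` because an empty result is the expected outcome):

```shell
# Shell startup files and everything under /etc, including /etc/profile.d:
grep -rIs http_proxy /etc /root/.bash_profile /root/.bashrc 2>/dev/null || true
# On systemd hosts, unit files can export environment variables too:
grep -rIs http_proxy /lib/systemd /usr/lib/systemd 2>/dev/null || true
```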
[14:47] <NoHoFoo> trying to install drush for drupal 8 on ubuntu server... nothing works... apt-get... composer... all fail... what works
[14:48] <teward> NoHoFoo: what do you mean by 'nothing works' and 'all fail'
[14:48] <teward> and what version of Ubuntu Server are you using
[14:48] <NoHoFoo> 16.04
[14:49] <NoHoFoo> error messages and no drush
[14:50] <sarnold> pastebin your errors
[14:50] <teward> is there a way to force rsync to not eat up all the processor power on the target machine
[14:51] <teward> it's literally preventing data migration
[14:51] <NoHoFoo>    Failed to download drush/drush from dist: The zip extension and unzip command are both missing, skipping.
[14:51] <NoHoFoo> The php.ini used by your command-line PHP is: /etc/php/7.0/cli/php.ini
[14:51] <NoHoFoo>     Now trying to download from source
[14:51] <NoHoFoo>   - Installing drush/drush (8.1.2)
[14:51] <NoHoFoo>     Cloning 85b58140d576cfdb9546a23c3ff44b72d0dae5bc
[14:51] <NoHoFoo> noah@ubuntuServer:~ $ drush status
[14:51] <NoHoFoo> The program 'drush' is currently not installed.
[14:52] <runelind_q> teward: use zfs send/recv instead! </smartass answer>
[14:52] <teward> runelind_q: haha, smartass answers don't help me right now
[14:52] <teward> E:OldServer9.10 -> 14.04 server
[14:52] <NoHoFoo> all from      sudo composer require drush/drush:~8
[14:52] <runelind_q> can you nice the process?
[14:52] <teward> runelind_q: it's not the source machine causing the problem - it's the rsync 'server' spawned on the target machine
[14:53] <nacc> NoHoFoo: install 'php-zip'?
[14:54] <NoHoFoo> i thought composer was a package manager.. why doesn't it do it for me?
[14:54] <nacc> NoHoFoo: and use a pastebin in the future, please
[14:54] <nacc> NoHoFoo: are you using Ubuntu's composer package?
[14:54] <sarnold> teward: you could login to the machine with another shell and use renice on the process
[14:54] <teward> sarnold: would, if the thing weren't already locked up
[14:54] <NoHoFoo> I don't know whose composer package I'm using
[14:55] <sarnold> teward: reboot it and start over?
[14:55] <teward> sarnold: fifth force-reboot
[14:55] <sarnold> teward: ugh.
[14:55] <teward> seizes up the moment it starts so I can't renice it
[14:55] <teward> so
[14:55] <NoHoFoo> sudo apt install composer
[14:55] <teward> same problem
[14:55] <runelind_q> are you backing up to a raspberry pi?
[14:55] <runelind_q> or a graphing calculator
[14:56] <teward> [2016-06-15 10:20:32] <teward> when trying to rsync data between systems, that are both VMs, i'm getting CPU soft lockups on the target system on the rsync - can I assume this is the hypervisor's fault?
[14:56] <runelind_q> what if you rsync just a few files as a test
[14:56] <teward> y'all reading would help.
[14:56]  * teward is a little frustrated over this issue right now
[14:56] <teward> because i've given the VM a lot of vCPUs
[14:56] <sarnold> teward: maybe skip the vm-to-vm step? mount both images in one qemu process?
[14:56] <teward> so if it's still locking up
[14:56] <teward> sarnold: ESXi hypervisors
[14:56] <teward> option not allowed
[14:56] <runelind_q> it's before 9AM, I have the memory retention of a three year old.
[14:56] <teward> and VMs on different hypervisors
[14:57] <runelind_q> all I know is that I can rsync all day long without having issues.
[14:57] <runelind_q> through ESX
[14:57] <teward> that's accurate
[14:57] <teward> and this is a *new* issue
[14:57] <runelind_q> tens of thousands of files.
[14:59] <teward> indeed
[14:59] <teward> runelind_q: just had the sysadmin move the thing to a different hypervisor in the cluster
[14:59] <teward> maybe that's the problem
[14:59] <runelind_q> could be.
[15:01] <teward> okay, so lets see if it chokes again
[15:01] <teward> and if it does kill -9 on the target side is ready
[15:02] <runelind_q> does the receiving side think it runs out of CPU or does it just lock up?
[15:02] <teward> runelind_q: literal lockup and watchdog complains
[15:02] <teward> all 4 CPUs observably peaked before it just dies off
[15:02] <runelind_q> just wondering if it is a hypervisor cpu issue or a VM cpu issue
[15:02] <sarnold> that sounds broken :/
[15:02] <runelind_q> how big of a job?
[15:02] <teward> it looks like it's running better on the other hypervisor
[15:02] <teward> runelind_q: huge
[15:02] <teward> at least huge for this infra
[15:02] <teward> about 450GB in a go
[15:03] <runelind_q> millions of files?
[15:03] <teward> runelind_q: no, but multiple large files
[15:03] <teward> my *guess* is the hypervisor was misbehaving
[15:03] <teward> because it's working fine now
[15:03] <teward> ooop maybe I spoke too soon
[15:03] <runelind_q> rsync don't care about file sizes (I don't think), but if it is having to calculate on millions of files.
[15:03] <teward> aaaand there it goes peaking
[15:03] <teward> damn it!
[15:04] <teward> i don't have an alternative to rsync because I need to have the ownership preserved and everything
[15:04] <teward> source system chokes on tarballing it
[15:04] <teward> or i'd do that
[15:05] <runelind_q> maybe your hypervisor shouldn't be running on a potato :-P
[15:05] <sarnold> teward: maybe --whole-file  ? feels like a wild guess...
[15:05] <teward> runelind_q: my two cents: that doesn't help
[15:05] <teward> sarnold: running with --whole-file it seized too
[15:05] <teward> if this one fails i'll try that
[15:05] <teward> failing that
[15:05] <teward> scp everything
[15:05] <teward> and then chown by hand
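Since it's the receiving rsync that pegs the CPU, it can be throttled on both ends; a hedged sketch (host and paths are hypothetical) using --bwlimit plus --rsync-path to wrap the remote rsync in nice/ionice:

```shell
nice -n 19 ionice -c3 \
  rsync -aH --bwlimit=20000 \
        --rsync-path='nice -n 19 ionice -c3 rsync' \
        /var/mail/ root@target:/var/mail/
# -a keeps ownership/permissions (run as root on both ends);
# --bwlimit caps throughput in KiB/s, indirectly capping checksum/IO load;
# ionice -c3 puts the process in the idle IO scheduling class.
```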
[15:05] <runelind_q> does rsyncing a small test file go ok?
[15:06] <sarnold> teward: how bout tar cf - . | ssh root@remote tar xf -    ---- but with the proper flags added for permissions and owner preservation?
[15:06] <teward> wouldn't compression be needed here?
[15:07] <teward> also doesn't help the target is /var/mail/ on the target server
[15:07] <teward> gotta move all these mailboxes >.>
[15:07] <teward> well
[15:07] <teward> we'll see if it fails, it looks like it's not choking as quickly
[15:07] <sarnold> depends on the CPUs and network involved.. ssh does some compression, so if you use some other compression after tar, turn it off in ssh
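sarnold's tar pipe with the preservation flags spelled out; the remote form can't run without a target host, so this local sketch pipes one tar into another the same way (the src/dst directories and mail1 file are made up):

```shell
# Local stand-in for: tar cpf - . | ssh root@remote 'tar xpf - -C /var/mail'
mkdir -p src dst
echo hello > src/mail1
# c=create, p=preserve permissions; add --same-owner when extracting as root:
( cd src && tar cpf - . ) | ( cd dst && tar xpf - )
cat dst/mail1   # -> hello
```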
[15:08] <NoHoFoo> here's the mess: http://pastebin.com/b42uDgN3
[15:09] <sarnold> NoHoFoo: looks like you're trying to modify system files but not running as root
[15:09] <teward> sarnold: i think the hypervisor was part one to blame, and rsync the second
[15:09] <teward> ehhh there's more lockups >.<
[15:10] <NoHoFoo> i thought 'sudo' fixed that
[15:10] <teward> i think i'll do the evil method
[15:10] <teward> by-hand it
[15:10] <NoHoFoo> I'm trying to install drush for drupal 8 on ubuntu server... any way to do it?
[15:10] <sarnold> NoHoFoo: it could but (a) you didn't use sudo on e.g. lines 3, 22, etc..
[15:11] <NoHoFoo> how to become root for a while?
[15:11] <sarnold> NoHoFoo: (b) it may not be a good idea to overwrite those files anyway...
[15:11] <NoHoFoo> ok then
[15:11] <NoHoFoo> I'm trying to install drush for drupal 8 on ubuntu server... any way to do it?
[15:11] <sarnold> NoHoFoo: you can get a root shell with sudo -s
[15:12] <NoHoFoo> I'm trying to install drush for drupal 8 on ubuntu server... any way to do it?
[15:12] <NoHoFoo> what r the commands to do it?
[15:13] <nacc> NoHoFoo: just ask once
[15:13] <nacc> sorry was afk walking dogs, am back now
[15:14] <NoHoFoo> just answer once
[15:14] <nacc> NoHoFoo: no need for a bad attitude.
[15:14] <nacc> NoHoFoo: drupal8 is not supported on ubuntu server, in any case, so anyone helping you is being nice.
[15:14] <NoHoFoo> but there is
[15:14] <nacc> at least, not supported here
[15:15] <NoHoFoo> what does 'supported' mean?
[15:15] <nacc> it's not in the ubuntu archives
[15:15] <nacc> (drupal8)
[15:16] <NoHoFoo> what is an ubuntu archive?
[15:16] <nacc> *the* ubuntu archives ... as in http://archive.ubuntu.com/ or mirrors thereof
[15:17] <NoHoFoo> my drupal 8 is running just fine on ubuntu server
[15:17] <NoHoFoo> nice and fast too
[15:19] <nacc> NoHoFoo: that doesn't make it supported or available as part of the ubuntu archives...
[15:19] <nacc> NoHoFoo: and the version of drush supported is the one in the archives, as in the one for drupal7
[15:20] <teward> sarnold: not even scp works...
[15:20] <teward> i'm out of options
[15:21] <sarnold> teward: damn :/ good luck, dinner time
[15:22] <NoHoFoo> sudo apt-get install drush gives the wrong drush, I know this
[15:23] <nacc> NoHoFoo: it gives the drush supported on ubuntu and the one that works with the drupal shipped by ubuntu
[15:24] <nacc> NoHoFoo: in any case, did you try my suggestion from a long time ago? `apt-get install php-zip`?
[15:24] <nacc> possible you also need to `apt-get install unzip`
[15:25] <nacc> i'm assuming that composer can't alter your system-wide settings/plugins for PHP or applications
[15:25] <nacc> it's just a PHP manager
[15:25] <runelind_q> oh great, one of my containers stopped talking on the network, and when I try to restart it, it craps out
[15:26] <nacc> NoHoFoo: note, that's *exactly* what composer told you to do... And you shouldn't run composer as root.
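nacc's fix as one sequence, for anyone hitting the same composer error (needs apt, so treat it as a sketch; run composer as the site user, not root):

```shell
sudo apt install php-zip unzip   # the zip extension/command composer reported missing
composer require drush/drush:~8  # retry from the drupal site root, unprivileged
```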
[15:34] <hallyn> arges: rharper: hey, so there's a new qemu in debian worth merging.  I'll get to when I can, but it won't be today or probably tomorrow, so if you have a chance pls feel free.
[15:35] <hallyn> (if i find time i should really spend it on this (&$%)$(* systemd+lxcfs bug)
[15:37] <rharper> hallyn: hi, cool; thanks for the heads up
[15:47] <jgrimm> hallyn, thanks!!
[15:47] <hallyn> np.
[15:53] <teward> sarnold: i think it's because of incomplete updates
[15:54] <teward> looks like libraries but not kernel got updated, for Linux, going to finish updates and hope that's the issue
[15:54] <teward> because this is a new problem :/
[16:32] <coreycb> ddellav, jamespage: for ci today I fixed up ironic (newton - patched and submitted upstream to fix test failure), keystone (newton - patches refreshed), designate (newton - rebuilt),  cinder (mitaka - patches refreshed).
[16:33] <jamespage> coreycb, \o/
[16:33] <ddellav> coreycb nice, good job
[16:33] <coreycb> jamespage, ddellav, lots of failures today though, so there's more to fix
[16:36] <jamespage> coreycb, ddellav: did a sneaky fix to nova-lxd - won't be around tomorrow am todo my normal shift :-)
[16:36] <coreycb> jamespage, ah thanks. I've been ignoring that one. :)
[16:37] <jamespage> coreycb, just wait until I get ceph and ovs branch builds going as well ;-)
[16:37] <coreycb> jamespage, oh sigh..
[16:37] <ddellav> heh
[16:50] <hallyn> rharper: oh, for qemu merge, the easiest way by far is to use the debian git tree; check out branch ubuntu-dev; git merge debian-unstable; merge the changelog, go through patches, and you should be all set.
[16:50] <hallyn> but i'll try to get to it friday if you don't have a chance before
[17:04] <nwilson5> anyone know why after router reboot, I cannot ping/ssh into a computer on the network anymore. If I reboot, I can once again.
[17:05] <nacc> nwilson5: are you using dhcp? did your lease expire due to the router reboot?
[17:05] <nwilson5> yes dhcp, but using dhcp reservation for my mac address.
[17:07] <nacc> nwilson5: if you simply restart network on your machine, does it work?
[17:07] <nwilson5> just tried using a different IP and it worked
[17:10] <nwilson5> not certain what that implies
[17:10] <nwilson5> I can ping some computers on the network, just not a few unless I change my IP
[17:43] <Apocope> I'm trying to get either icinga or icinga2-classicui working in Xenial. Under both, the menu on the left displays for a moment and then slides up and is invisible. Anyone seen this?
[18:03] <digs> I set up this server over a year and a half ago. At some point I somehow restricted IPs on ssh logins. I can't figure out how I did it, though. I need to add a new IP. I looked at sshd_config and hosts_allow/deny. I don't see anything that would restrict users. I have a new development team that needs access. I can login to the new user I created using my public key but they cannot. They can
[18:03] <digs> login to the new dev server without issue. Both the dev server and the production server are hosted on AWS and in the same security group.
[18:03] <digs> Any ideas?
[18:04] <teward> digs: iptables on the machine?
[18:04] <teward> you can run the same port security group on AWS and on the server itself by implementing iptables.
[18:04] <teward> definitely an issue i've run into
[18:05] <digs> I am running the same security group on both servers.
[18:05] <teward> that's the AWS side of things
[18:05] <teward> I didn't say the AWS level firewall
[18:05] <teward> I said on the server itself
[18:05] <teward> NOT in the AWS control panel
[18:05] <digs> iptables isn'
[18:05] <digs> iptables isn't installed on either server.
[18:07] <digs> IP blocking is the only thing I can think of that would keep them from logging in. I have tried adding their public key, the same one that is working on the dev server, to a known working user on the prod server to no avail.
[18:08] <digs> They get Permission denied (publickey) -- when I try with my key, I get in no problem.
[18:14] <hggdh> digs: so they *are* getting to the server. Are their public keys correctly set up?
[18:15] <digs> Yes. their public keys are setup correctly. I copied them from the working dev server and checked the setup 5 times. I added my key to the list and was able to get in no problem.
[18:19] <teward> are we sure that they are configuring their SSH clients to SSH in using the privkey
[18:19] <teward> (observed this already in the past with some teams)
[18:21] <digs> Yeah, I thought of that... but they can connect to the dev server no problem and they didn't have issues when I set that up, so I would have to guess yes. I will just double check though because I am at a loss.
[18:22] <hggdh> digs: ask one of them to run 'ssh -vvv <server>', and give you the output
[18:22] <digs> okay.
[18:23] <hggdh> perhaps they have different keys for each server
[18:23] <sdeziel> digs: can you pastebin your sshd_config?
[18:24] <digs> I thought of that too... I had them double verify their keys. sdeziel - I could but I don't think it would help... it is identical to the working dev server.
[18:25] <digs> http://codepad.org/hNzH0M4o
[18:27] <sdeziel> digs: looks good to me. Can you provide the auth.log?
[18:29] <digs> here is the -vvv output. http://codepad.org/EVurcEBx
[18:30] <sdeziel> digs: sorry, I meant the server side logs (auth.log)
[18:30] <digs> Yeah, I know... I am working on that.
[18:30] <digs> here is the tail. http://codepad.org/qlOLock9
[18:31] <digs> hggdh - there is the -vvv output http://codepad.org/EVurcEBx
[18:32] <sdeziel> digs: looks like the sshd couldn't decode one key. What does the ~user/.ssh/authorized_keys look like?
[18:33] <digs> a file with keys that are one line per key.
[18:33] <sdeziel> digs: it seems to point to a partial public key
[18:34] <digs> the guy just tried a different key which was a different type... he reported it worked.
[18:34] <digs> I have 6 or 7 keys loaded and two different devs tried... they both failed... one of them has two keys loaded for some reason, and his other key works.
[18:35] <digs> baffled.
[18:35] <sdeziel> fix the formatting of the authorized_keys...
[18:36] <digs> prod - #66-Ubuntu SMP Thu Apr 25 03:47:17 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux ---- dev #118-Ubuntu SMP Thu Dec 17 22:52:10 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[18:36] <digs> I copied the file from dev. the format should be perfect. regardless, I will go over and over it again.
[18:37] <teward> also check ownership of authorized_keys
[18:37] <teward> and make sure it matches the user they're logging into
[18:37] <teward> not just file *content* but *permissions*
[18:37] <digs> They can login with one user and not another now.
[18:38] <sdeziel> sshd[28349]: error: key_read: uudecode AAAAB3NzaC1yc2E... => pretty clearly points to a badly formatted file
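The key_read/uudecode error means sshd hit a line in authorized_keys it couldn't parse, which typically means a public key got wrapped onto two lines during copy-paste (each key must be a single line). ssh-keygen can validate the file line-by-line; a self-contained sketch (the demo directory and key name are made up):

```shell
# Build a demo authorized_keys with the permissions sshd's StrictModes expects:
mkdir -p demo/.ssh && chmod 700 demo/.ssh
ssh-keygen -t ed25519 -N '' -q -f demo/id_demo
cp demo/id_demo.pub demo/.ssh/authorized_keys
chmod 600 demo/.ssh/authorized_keys
# One fingerprint per valid line; a wrapped or truncated key errors instead:
ssh-keygen -lf demo/.ssh/authorized_keys
```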
[18:38] <digs> The keys that don't end with == <name> work. Keys that end with == <name> don't work.
[18:38] <digs> they work fine on dev though. The dev server is a couple of years ahead of the prod server. My guess is that it is a version issue.
[18:39]  * sdeziel gives up
[18:39] <digs> I will erase the keys that don't work and re-add them sdeziel thank you for your help.
[18:44] <coreycb> ddellav, jamespage: python-mock 2.0 synced (assert_called_once is actually valid now), and python-keystoneauth1 2.7.0 merged (that *might* fix the heat newton failure)
[18:46] <ddellav> coreycb ok great
[18:53] <digs> thanks for the help all.
[19:37] <genii> !manual
[19:37] <genii> Hm
[19:37] <genii> !guide
[19:37] <genii> Thats the one
[21:52] <hallyn> rharper: arges: meh, it was a trivial merge, pushing a test pkg to ppa:serge-hallyn/virt
[21:52] <hallyn> (qemu)
[21:53] <rharper> hallyn: cool!
[21:53] <rharper> btw, on the gpu thingy ... I think you can copy the previous build in a ppa and select a new arch target ... I think that triggers a new build of the ppa to add the ppc64 build
[21:54] <rharper> yeah, edit your ppa, and then select the arch targets
[21:55] <rharper> then if you copy the package to the same ppa; I think it triggers the rebuild
[21:55] <rharper> did that on the docker one a few times when we were filling out the extra arches
[21:56] <hallyn> rharper: yeah, i just forgot about it.  but anyway, isn't that inthe ubuntu-virt ppa?  so you can do it :)
[21:56] <rharper> hallyn: I didn't know where it was, but I'll look now
[21:57] <rharper> hallyn: I'm not a member of ~ubuntu-virt
[21:58] <rharper> hallyn: but it looks like you're admin and can add me
[21:58] <rharper> and I'll poke it
[21:59] <hallyn> one sec
[22:00] <hallyn> raharper?
[22:00] <rharper> yeah, ~raharper
[22:00] <rharper> not that shady rharper fellow
[22:00] <hallyn> done :)
[22:01] <rharper> thx!
[22:07] <hallyn> thank you :)
[23:17] <runelind_q> I'm using ZFS backing