[00:39] <jayjo> I'm trying to setup ssl on a mongodb instance I have on an ubuntu server... I have a pem file and crt file from Letsencrypt using this guide: https://gist.github.com/leommoore/1e773a7d230ca4bbe1c2 so now I can run the mongod with ssl enabled. Do I now use this certificate to produce client certs and distribute them over gpg?
[00:51] <patdk-lap> jayjo, no
[00:51] <patdk-lap> you should not use letsencrypt for your mongo
[00:51] <patdk-lap> unless you plan to have 3rd parties access your mongo directly
[00:58] <jayjo> no I don't, but I thought that self signed certs were not secure
[00:59] <patdk-lap> heh?
[00:59] <patdk-lap> who said anything about selfsigned?
[00:59] <patdk-lap> and why would selfsigned be insecure?
[00:59] <patdk-lap> every root CA certificate you trust is selfsigned
[01:00] <jayjo> so I generate my own certificates for my mongo instance, and then the .crt that I generate is what I use to produce pem files for clients?
[01:01] <patdk-lap> no
[01:01] <patdk-lap> but you should always generate your own certificates, no matter who signs them
[01:01] <patdk-lap> you need to setup your own CA
[01:01] <patdk-lap> sign your mongo server cert with your ca
[01:01] <patdk-lap> then make your client certs, and sign them with your ca
[01:02] <patdk-lap> the ca must be selfsigned, just like every other ca cert you have
[01:02] <patdk-lap> or cross-signed, but good luck finding someone to do that
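The CA step patdk-lap describes can be sketched with plain openssl; the file names and the CN here are illustrative assumptions, not anything from the discussion:

```shell
# Create a private CA: a keypair plus a self-signed CA certificate.
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -sha256 -days 3650 \
    -subj "/CN=Example Internal CA" -out ca.crt
```

ca.key is the crown jewels: anyone holding it can mint certs your servers and clients will trust, so keep it off the machines it signs for.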
[01:03] <runelind_q> can you run something like Gentoo inside an LXD container on 16.04?
[01:04] <jayjo> so if I follow the guide on this page: https://help.ubuntu.com/lts/serverguide/certificates-and-security.html and create my own CA I then use that to sign all of the certificates?
[01:04] <patdk-lap> sure
[01:04] <patdk-lap> don't know if that page goes into enough detail
[01:04] <patdk-lap> think what most people do is use tinyca
[01:05] <runelind_q> I use xca which has a crappy gui interface, but it does the job.
[01:06] <runelind_q> looks like there is a Gentoo template on linuxcontainers.org
[01:08] <runelind_q> but I don't know how to import them :)
[01:08] <patdk-lap> import?
[01:09] <patdk-lap> lxc is just a folder/ partition/ ...
[01:09] <patdk-lap> there is nothing to do
[01:09] <sdeziel> runelind_q: maybe this will help you https://www.stgraber.org/2016/03/30/lxd-2-0-image-management-512/
[01:09] <JanC> there is also pyca & gnomint
[01:10] <JanC> (for CAs)
[01:10] <JanC> and probably more
[01:13] <jayjo> patdk-lap: can I use tinyca on my desktop and use the certs on the server?
[01:13] <patdk-lap> you could
[01:14] <patdk-lap> only one server?
[01:14] <jayjo> yes its just an EC2 instance
[01:14] <patdk-lap> isn't that pointless then? unless you're running mongo on one, and the clients on others
[01:14] <patdk-lap> no point to bother with ssl
[01:14] <patdk-lap> as if they can see the traffic, they are root, and can just read it from disk
[01:15] <jayjo> mongo is running on the instance and it needs to be connected to from other instances - those instances dont have mongo
[01:18] <jayjo> when ssl is enabled I need to give my client a certificate, and I know I have a very high-level misunderstanding of what's going on, but that's where I'm stuck. Do I use the crt to generate a pem file for the client, and then distribute it to the client?
[01:18] <patdk-lap> heh?
[01:18] <patdk-lap> what are crt and pem files?
[01:18] <patdk-lap> pem is a type of encoding for certificates
[01:19] <runelind_q> sdeziel: I'm looking at images on https://jenkins.linuxcontainers.org/view/LXC/view/LXC%20Templates/
[01:19] <patdk-lap> you need to generate a server cert for mongo
[01:19] <patdk-lap> and client certs for the mongo clients
[01:19] <sdeziel> runelind_q: I just tested launching a gentoo container with: lxc launch images:gentoo/current/amd64 gentoo
[01:19] <sdeziel> runelind_q: worked well
[01:20] <runelind_q> oh, jolly good.
[01:20] <jayjo> And those processes are completely separate? How does the server know to trust the client? Do I place their public certificates somewhere?
[01:20] <patdk-lap> jayjo, nope
[01:20] <patdk-lap> that is the whole purpose of signing
[01:24] <runelind_q> sdeziel: trying that now - seems to be stuck at retrieving image 100%
[01:25] <sdeziel> runelind_q: once the image is retrieved, the container is started so maybe it's just taking some time?
[01:25] <runelind_q> oh, there it goes
[01:26] <sdeziel> runelind_q: also, depending on the storage backend, cloning the retrieved image into your new container can take some time. It's almost instant on ZFS
[01:26] <sdeziel> but can take much longer on other backend types
[01:26] <runelind_q> yeah, ZFS backend.
[02:58] <jayjo> So I have now created the CA on my ubuntu server. Do I create a pem file for the mongo server, and then, using the same CA to create the certificates, create an additional one for clients?
[02:59] <jayjo> And I distribute the cacert.crt file along with the generated pem files to clients?
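Roughly, yes. The whole flow patdk-lap outlined can be sketched end to end; every file name and CN below is an assumption, and the CA creation is repeated here only so the sketch is self-contained (in practice you make the CA once and guard ca.key):

```shell
# 1. The CA (done once; keep ca.key safe and offline)
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -sha256 -days 3650 \
    -subj "/CN=Example Internal CA" -out ca.crt

# 2. Server key + signing request, with the server's hostname as the CN
openssl genrsa -out server.key 2048
openssl req -new -key server.key \
    -subj "/CN=mongo.example.internal" -out server.csr

# 3. Sign the request with the CA
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 825 -sha256 -out server.crt

# 4. mongod expects key and cert concatenated into one PEM file
cat server.key server.crt > server.pem
```

Client certs follow the same req-then-sign pattern with their own keys; clients additionally get ca.crt (public cert only, never ca.key) so they can verify the server.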
[06:20] <LJHSLDJHSDLJH> guys, I know ubuntu-server is more prefered to install for creating web or mail server box than a normal desktop distro. my question is, what are the reasons?
[06:20] <ivoks> no UI
[06:21] <LJHSLDJHSDLJH> so?
[06:21] <LJHSLDJHSDLJH> how does that make it better?
[06:21] <LJHSLDJHSDLJH> I was thinking about security as a reason indeed
[06:28] <cpaelzer> LJHSLDJHSDLJH: noUI -> less stuff auto installed -> much less exposure surface regarding security
[06:37] <sarnold> also storage space; no need to pay to store programs you'll never use
[06:42] <LJHSLDJHSDLJH> cpaelzer: are there any scripts for auto ubuntu server installation? where may I find some if so?
[06:46] <sarnold> depends what you need; FAI, preseeding, MAAS, juju, cloud-init, debootstrap, uvt-kvm ..
[06:53] <cpaelzer> LJHSLDJHSDLJH: just wanted to answer, but sarndold already listed most of what came to my mind
[06:53] <cpaelzer> LJHSLDJHSDLJH: the important point is to know where/how you want to automate installs
[06:53] <cpaelzer> sarnold: sorry for that extra d in your nick
[06:54] <sarnold> hah, 'sarndold' :) hehe
[07:01] <LJHSLDJHSDLJH> sarnold: I don't know what those names are! so what am I missing here? in other words, what are those?
[07:02] <LJHSLDJHSDLJH> does anyone know why my working website is throwing connection error even though I've changed mysql password into connection.php file?
[07:02] <LJHSLDJHSDLJH> I saved into /var/www/html/index.php
[07:03] <sarnold> LJHSLDJHSDLJH: they're all tools that can do automated / customized installs of some sort. debootstrap populates a directory with a distribution. preseeding is the native way to automate the installer. FAI is a network-driven way to automate installs. MAAS gives you the ability to treat a cluster of machines as if they were cloud machines.
[07:03] <sarnold> LJHSLDJHSDLJH: juju has you focus on the tasks you want the "thing" to do, whether it's allocating virtual machines from a cloud provider, a local openstack install, or lxd containers..
[07:04] <sarnold> LJHSLDJHSDLJH: cloud-init automates installing / configuring tasks on 'cloud' providers, local installs.. uvt-kvm is a frontend to virsh/libvirt.
[07:09] <LJHSLDJHSDLJH> cool stuff, I've to find a time slot to go through all those cool things one by one
[07:09] <LJHSLDJHSDLJH> are there any useful url(s)?
[07:12] <sarnold> LJHSLDJHSDLJH: https://help.ubuntu.com/lts/serverguide/cloud-images-and-uvtool.html  http://www.ubuntu.com/cloud/juju  http://www.ubuntu.com/cloud/maas  https://cloudinit.readthedocs.io/en/latest/ http://fai-project.org/
[07:14] <LJHSLDJHSDLJH> really appreciate it sarnold, I saved all those into my todo folder
[07:14] <LJHSLDJHSDLJH> now back to my apach2 server problem
[07:15] <LJHSLDJHSDLJH> I threw all files into /var/www/html/* but I'm facing a connection problem; it could be a file access problem
[07:15] <LJHSLDJHSDLJH> what chmod level do you usually give to files in the /html/ folder?
[07:17] <sarnold> it depends upon who will be managing the files, and how; either 444 or 644 or 664
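One common scheme behind those numbers, sketched on a stand-in directory (the real target would be /var/www/html, and ownership depends on who edits the files): directories get 755 so the web server can traverse them, files get 644 so it can read them.

```shell
webroot=/tmp/demo-webroot          # stand-in for /var/www/html
mkdir -p "$webroot/css"
touch "$webroot/index.php" "$webroot/css/site.css"

# Directories need the execute bit to be traversable; files only need read.
find "$webroot" -type d -exec chmod 755 {} +
find "$webroot" -type f -exec chmod 644 {} +

stat -c '%a %n' "$webroot/index.php"   # → 644 /tmp/demo-webroot/index.php
```

The find/-type split is the usual trick for applying different modes to directories and files in one pass; sarnold's 664 variant applies when a group of editors needs write access.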
[09:33] <nymony> Why is ubuntu changing interface naming with almost every version? From eth0 > em1 > p2p1 > p4p1 > and currently some jibberjabber
[09:34] <bekks> nymony: http://askubuntu.com/questions/628217/use-of-predictable-network-interface-names-with-alternate-kernels
[09:43] <toshywoshy> I can boot up ubuntu 14.04 lts trusty with the root partition in an lvm
[09:44] <toshywoshy> I keep getting "Gave up waiting for root device.  Common problems: ALERT!  /dev/mapper/rootvg-rootlv does not exist."
[09:44] <toshywoshy> however when it drops down to the initramfs cmd I can see and mount /dev/mapper/rootvg-rootlv
[09:45] <toshywoshy> s/can/cannot/g
[10:11] <jamespage> coreycb, ddellav: ok finishing up my ci shift now; rebased trove/newton patches, updated dogpile.cache to 0.6.1 including uploads to experimental and yakkety; dealt with transition of dogpile.core->dogpile.cache
[10:12] <jamespage> some other transient schroot problems - re-ran failed builds OK
[10:12] <jamespage> note failures of designate for liberty; nova/trusty in mitaka - not looked at those
[11:33] <LJHSLDJHSDLJH> how to set these daemons to run automatically after reboot without having to login or run any of them .. apache2, mysql, ufw
[11:33] <LJHSLDJHSDLJH> webmin
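On Ubuntu 16.04 these are systemd services, and the packages normally enable them at install time; if one isn't coming back after a reboot, this is the explicit way to enable it (unit names assumed to be the stock ones, so this is a sketch rather than something to paste blindly):

```
sudo systemctl enable apache2 mysql ufw
systemctl is-enabled apache2 mysql ufw    # each should report "enabled"
```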
[11:34] <sarnold> did any not start correctly?
[11:34] <LJHSLDJHSDLJH> after remote restart from webmin none of them is back online
[11:34] <LJHSLDJHSDLJH> oh I remember now
[11:35] <LJHSLDJHSDLJH> vmware problem
[11:35] <LJHSLDJHSDLJH> it doesn't obtain an ip unless I run sudo dhclient
[11:35] <LJHSLDJHSDLJH> how to automate obtaining ip addresses?
[11:35] <sarnold> careful with webmin; I think most of those control-panel things are terrible rubbish that allow anyone on the internet to run anything on your computer. Be sure to firewall it to only -your- IP address.
[11:35] <sarnold> configure /etc/network/interfaces correctly
[11:36] <LJHSLDJHSDLJH> no worries its for training project at the time being
[11:37] <LJHSLDJHSDLJH> is there any reference on how /etc/network/interfaces should be configured?
[11:37] <sarnold> man 5 interfaces   :)
[11:38] <LJHSLDJHSDLJH> oh real men never read man pages :p
[11:43] <bekks> !webmin | LJHSLDJHSDLJH
[11:43] <LJHSLDJHSDLJH> I've already tried auto ens33 into interfaces yesterday but it didn't work
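A likely reason that didn't work: `auto ens33` only marks the interface to be brought up at boot; it also needs an `iface` stanza saying how to configure it. A sketch for DHCP (the interface name is taken from the discussion, verify yours with `ip link`):

```
# /etc/network/interfaces
auto lo
iface lo inet loopback

auto ens33
iface ens33 inet dhcp
```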
[11:44] <LJHSLDJHSDLJH> ubottu: what is currently supported that I can use?
[11:44] <LJHSLDJHSDLJH> lol
[11:45] <LJHSLDJHSDLJH> so my question bounces back at you bekks :))
[11:45] <sarnold> LJHSLDJHSDLJH: pastebin the whole /etc/network/interfaces and perhaps someone will spot something
[11:46] <bekks> LJHSLDJHSDLJH: zentyal
[11:50] <LJHSLDJHSDLJH> sarnold: I gave it another try and it worked
[11:50] <LJHSLDJHSDLJH> thanks guys
[11:50] <sarnold> aha ;)
[11:53] <LJHSLDJHSDLJH> I've followed some tutorial to install openssl, created a certificate and then redirect all port 80 traffic to 443
[11:53] <LJHSLDJHSDLJH> now https works but no automatic redirection
[11:54] <frickler> jamespage: coreycb: IIUC one of the CVEs in http://lists.openstack.org/pipermail/openstack/2016-June/016489.html is still present in Neutron 8.1.0, would be great to have 8.1.2 released for Xenial
[11:56] <LJHSLDJHSDLJH> please send me pm if you got anything about the ssl redirection, I gotta run
[12:08] <coreycb> frickler, that is ready to release actually
[12:12] <coreycb> beisner, jamespage: ceilometer 1:5.0.3-0ubuntu1~cloud0 and keystone 2:8.1.2-0ubuntu1~cloud0 are ready to promote to liberty-updates when you get a moment
[12:13] <coreycb> frickler, sorry neutron 8.1.2 is not quite ready to release, but it's in the queue.  I'll press on the sru team for a review.
[12:20] <coreycb> jamespage, re: designate for liberty -- it looks like that issue with dh_python not ignoring != is resurfacing.  ddellav was hitting that on the mitaka stable update too.
[12:53] <jamespage> coreycb, that's quite likely - I suspect the dh-python in wily and the backport for trusty both have the same bug that the xenial one did
[12:53] <jamespage> surprised you hit the same problem on a mitaka update tho ddellav
[12:54] <EmilienM> hey jamespage
[12:54] <jamespage> EmilienM: hey!
[12:54] <EmilienM> we're trying to run tempest with the newton repo, and look what we got:
[12:54] <EmilienM> http://logs.openstack.org/78/327678/25/check/gate-puppet-openstack-integration-3-scenario001-tempest-ubuntu-xenial/d6c6085/console.html#_2016-06-14_12_19_44_790
[12:54] <EmilienM> ImportError: No module named keystone_tempest_plugin.plugin
[12:55] <jamespage> hmmm
[12:55] <jamespage> EmilienM: you don't install tempest from packages do you?
[12:55] <EmilienM> jamespage: no, from source
[12:56] <EmilienM> jamespage: should we?
[12:56] <jamespage> EmilienM: we don't
[12:56] <EmilienM> jamespage: how do you deploy / run tempest?
[12:56] <jamespage> EmilienM: git clone, tox -e smoke / full
[12:56] <EmilienM> same
[12:57] <jamespage> that looks like some sort of tempest dep problem for the all-plugin target
[12:59] <EmilienM> yeah
[12:59] <jamespage> EmilienM: do you run tempest directly on the machine that has the cloud deployed on it?
[13:00] <EmilienM> jamespage: on the machine
[13:01] <jamespage> EmilienM: this might be the cause - the python-keystone package does not ship with the keystone_tempest_plugin python module but the module still declares tempest.test_plugins in its setup.cfg
[13:02] <jamespage> EmilienM: I think that if we restore the keystone_tempest_plugin module it will resolve the problem, but I'd also look at the isolation of tempest from the installed system in the way you are testing...
[13:03] <EmilienM> jamespage: yeah, we don't have this problem on rdo platform
[13:04] <EmilienM> jamespage: you run tempest in venv?
[13:04] <jamespage> EmilienM: we can add the keystone tempest plugin to the packaging, but I really don't like this approach to plugin loading from system packages
[13:04] <EmilienM> jamespage: can you show me the source please?
[13:05] <jamespage> EmilienM:  its just straight up use of the tox targets...
[13:05] <EmilienM> but do you have code handy on github or?
[13:05] <jamespage> the test machine is not part of the cloud, so will never have openstack packages installed on it - apart from a few clients
[13:05] <jamespage> EmilienM: erm yeah - one sec
[13:09] <jamespage> EmilienM: http://bazaar.launchpad.net/~uosci/ubuntu-openstack-ci/trunk/view/head:/job-parts/osci_openstack_common.sh#L369
[13:10] <jamespage> we actually appear to build out the tempest venv manually first; and then use the run_tempest.sh script
[13:11] <EmilienM> jamespage: ok I see
[13:11] <EmilienM> jamespage: could we have the keystone plugin loaded in packaging until we sort things out?
[13:11] <EmilienM> iberezovskiy: see the script ^
[13:12] <jamespage> EmilienM: I should think so
[13:12] <jamespage> EmilienM: let me take a look
[13:12] <EmilienM> ok
[13:12] <jamespage> EmilienM: this is a little bit of a problem with tox virtualenvs - by default I think they will use system provided modules
[13:13] <jamespage> so its quite easy to get pollution of the virtualenv from the host os
[13:14] <caribou> rbasak: did you finally have time to look at the kexec-tools merge ?
[13:18] <EmilienM> jamespage: mhh the problem for us it rdo provides packaging with loaded plugins too
[13:19] <jamespage> EmilienM: we'll add them to the packages so as to be feature comparable from your perspective
[13:19] <EmilienM> thanks a lot
[13:19] <jamespage> EmilienM: just checking the packaging change and I'll get it uploaded :-)
[13:19] <iberezovskiy> thanks
[13:19] <EmilienM> iberezovskiy: did you noticed other issues in other jobs ? or tempest was only blocker?
[13:20] <iberezovskiy> only tempest for now
[13:20] <EmilienM> cool
[13:20] <EmilienM> jamespage: so we're close!
[13:21] <gyan> Hi
[13:24] <coreycb> jamespage, beisner: python-os-brick 0.5.0-0ubuntu4~cloud0 is ready to promote to liberty-proposed when you have a moment
[13:32] <jamespage> EmilienM: are you using the UCA or the branch package build PPA atm?
[13:33] <EmilienM> jamespage: the UCA
[13:33] <EmilienM> jamespage: I saw the mail on openstack-dev
[13:33] <EmilienM> but we can come back on the ppa
[13:38] <jamespage> EmilienM: that's fine - I'll do this into the UCA as well; just uploaded to yakkety to kick that process off
[13:38] <EmilienM> ok
[13:56] <jamespage> coreycb, 997
[13:57] <coreycb> jamespage, uploads?
[13:57] <jamespage> yah
[13:57] <jamespage> hehe
[13:57] <jamespage> nearly 6 years worth....
[13:57] <ddellav> jamespage nice
[13:57]  * jamespage ponders what to pick as 1000
[13:57] <ddellav> coreycb is keystone sru the one with the dh_python != issue?
[13:58] <coreycb> jamespage, awesome :)
[13:58] <cpaelzer> jamespage: 1000 = random revert
[13:58] <jamespage> well if you counted my SRU's in pending approval...
[13:58] <coreycb> ddellav, I think so, you are working on it :)
[13:59] <ddellav> coreycb ok, i thought there was a fix for that and we were waiting for it to be accepted upstream
[14:00] <jamespage> ddellav, coreycb: just for future reference watch out for aodh point releases
[14:00] <jamespage> they are not on a cadence with the rest of openstack, so we should have done 2.0.1-0ubuntu1 -> yakkety
[14:00] <jamespage> and done a 2.0.1-0ubuntu0.16.04.1 to xenial
[14:02] <coreycb> jamespage, ah.. they didn't release a b1 for newton
[14:03] <ddellav> jamespage ok i'll make a note of that
[14:03] <jamespage> coreycb, no they won't - they are on independent releases...
[14:03] <jamespage> like ironic for example
[14:07] <coreycb> jamespage, ok.  so we'll need to upload 2.0.1-0ubuntu1 to yakkety and 2.0.1-0ubuntu0.16.04.1 to xenial.
[14:07] <jamespage> too late
[14:07] <jamespage> 2.0.1-0ubuntu1 is already in Xenial proposed
[14:07] <coreycb> jamespage, oh it was accepted
[14:07] <jamespage> so you'll have to do a 2.0.1-0ubuntu2 for yakkety
[14:07] <coreycb> jamespage, ok
[14:17] <jamespage> coreycb, os-brick promoted to liberty-proposed
[14:18] <jamespage> coreycb, doing ceilometer and keystone now
[14:21] <jamespage> coreycb, ok done
[14:21] <coreycb> jamespage, thanks
[14:21] <jamespage> did libvirt-python as well
[14:21] <jamespage> as that's had long enough to bake
[14:22] <jamespage> and qemu - stack of sec updates...
[14:23] <coreycb> ddellav, this is the original dh-python bug 1581065
[14:24] <coreycb> ddellav: so I think we need to investigate why dh-python is not ignoring != in xenial for the case you're hitting
[14:27] <coreycb> ddellav, also we need to look at SRUing the original fix to wily  since designate is now hitting it, assuming it fixes it
[14:36] <caribou> rharper: I'm quite puzzled about the multipath-tools bug I told you about a few hours ago
[14:37] <caribou> rharper: the patch you submitted to debian has the 'clean-tree' statement on build-stamp:
[14:37] <caribou> rharper: +build-stamp: clean-tree
[14:38] <caribou> rharper: if I look at the source package in Xenial I have : clean: clean-tree !!!
[14:39] <caribou> rharper: so my debdiff of the upstream debian against our xenial version has :
[14:39] <caribou> rharper: -build-stamp: clean-tree
[14:39] <caribou> +build-stamp:
[14:45] <coreycb> frickler: neutron 8.1.2 has been accepted into xenial-proposed for testing
[14:45] <rharper> caribou: hrm, so it does seem like we're missing that in X
[14:46] <caribou> rharper: no problem, I'm about to SRU the issue so I'll fix that up
[14:46] <rharper> I may have not included it since we don't use the systemd unit file, now that it's fixed in debian we can sync the change
[14:46] <caribou> rharper: I'll ping you to review the SRU before I upload
[14:47] <rharper> sure
[14:48] <frickler> coreycb: great, thx
[15:46] <jayjo_> I've been asking this question yesterday & today... but i just wanted to clarify again at a high level. I want to secure my mongodb with SSL. I created a CA to sign certificates. I then create a pem file signed by the CA to run the mongod daemon with SSL. That's all fine and good, but then clients need these certs, as well. So I generate them and send them to the client software. Because
[15:46] <jayjo_> they all use the CA to sign it, they all know the communication can be trusted. The server has a pem and a CA file, and so does the client. the CA is the same for all clients/servers. Is that broadly correct?
[15:49] <LJHSLDJHSDLJH> will ubuntuServer.iso be bootable if I just dd it on a pen drive?
[15:49] <LJHSLDJHSDLJH> feeling lazy to figure it out myself :D
[15:52] <rbasak> jayjo_: it would be easier/safer to not give the clients certs at all, only the CA to verify the server cert. Then you don't have to worry about a client pretending to be a server (which you can prevent with extensions or a secondary CA layer, but it's more work).
[15:53] <rbasak> jayjo_: specifically: one CA, keep its private key safe. One cert for each server, give to servers with respective private keys only. Only give the CA public cert (no private key at all) to clients.
[15:54] <rbasak> Unless clients check the hostname against the cert DN, that is. But like I said, more work :)
[16:03] <jayjo_> OK - I think that is reasonable. I can implement that. Is there a way to check the details of the server certificate? Like the subject and host it was generated for?
[16:19] <rbasak> Clients can do that. It's most common in HTTPS. It's up to the client to do it though. I'm not sure about the MongoDB client.
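Checking a certificate's details, as jayjo_ asked, is something openssl can do directly; a sketch against a throwaway self-signed cert (the CN and file names are illustrative):

```shell
# Generate a throwaway cert just to have something to inspect
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key \
    -subj "/CN=mongo.example.internal" -days 30 -out demo.crt

# Dump subject, issuer and validity window of a certificate file
openssl x509 -in demo.crt -noout -subject -issuer -dates

# Against a live TLS server you could instead pipe s_client output:
#   openssl s_client -connect host:27017 </dev/null 2>/dev/null \
#       | openssl x509 -noout -subject
```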
[16:24] <rbasak> magicalChicken: may I have an update on your progress on bug 869017, bug 1394403 and bug 1511222 please?
[16:26] <magicalChicken> rbasak: yeah, so I have a patch for 1511222, and I did a quick check and I think it does work, but I need to reproduce the old bug and make sure today
[16:26] <jgrimm> jamespage, fwiw.. this report now fixed up to have an 'ubuntu-openstack' section.
[16:27] <jgrimm> jamespage, this -> http://reqorts.qa.ubuntu.com/reports/rls-mgr/rls-y-tracking-bug-tasks.html#ubuntu-openstack
[16:27] <nacc> sdeziel: are you ok if I assign LP: #1570472 to you while you're working on the yakkety fix?
[16:27] <magicalChicken> I still have not been able to reproduce 1394403, I'm not sure what I'm doing differently from the reporter, but I keep seeing the change in b
[16:27] <sdeziel> nacc: sure, I'll try to make time to get to this
[16:28] <nacc> sdeziel: thanks! i'll keep an eye on it too -- if you don't have the cycles, just let me know, i can get it fixed today probably
[16:28] <magicalChicken> rbasak: and I'm not sure how or if 869017 should be fixed
[16:29] <magicalChicken> rbasak: I can definitely handle another bug this week though, I just don't know what to do about the two old ones
[16:29] <sdeziel> nacc: if you can get to it today please do as I won't have time today, maybe tomorrow
[16:30] <nacc> sdeziel: will do, and will note it in the bug
[16:30] <sdeziel> thx
[16:31] <nacc> sdeziel: thank you!
[16:32] <sdeziel> nacc: I'll be able to test stuff for you today if that can help, just ping me
[16:32] <nacc> sdeziel: ah great, yeah, that'd be perfect
[16:39] <nacc> sdeziel: fyi, there's a much newer version of puppet stuck in yakkety-proposed. I'll try and unstick that first, as it'll be an easier yakkety fix
[16:40] <sdeziel> OK
[16:42] <EmilienM> jamespage: can you ping me when the keystone pkg is updated in UCA ? so I can re-run tests
[16:44] <rbasak> magicalChicken: thanks! Please can you take bug 1519120?
[16:46] <magicalChicken> rbasak: sure, I'll test out the patch there and make a debdiff in the next few days
[16:46] <rbasak> Thanks!
[17:13] <jayjo_> I'm not clear on this... I'm sorry to be persistent but I think it's a high-level misunderstanding so it's hard to dig into documentation. I am just reading as much as I can and I found this blog post about SSL in ubuntu with mongo: http://demarcsek92.blogspot.com/2014/05/mongodb-ssl-setup.html. I was able to connect using this 'client' pem file and this 'server' pem file. They're both referenced in
[17:14] <jayjo_>  the mongo.cnf. The connection works with these two. Am I supposed to then pass out this client pem file to a client I want to be able to connect?
[17:15] <jayjo_> so any client that wants to connect needs the pem from the server AND the pem for the client? It works in this example, but this seems to not be secure
[17:16] <rbasak> I'm not sure about the details of MongoDB in general. But it may help to understand that any SSL connection is automatically secure, but each party cannot verify the identity of the other party without a certificate. So to prevent man in the middle attacks, you need at least for the client to be able to verify the identity of the server by having the server use a certificate.
[17:18] <rbasak> In the other direction (for example server authenticating the client), a password can suffice from a basic perspective, because the client checks that it really is talking to the server securely before revealing the password to it.
[17:18] <rbasak> OTOH, it's also fine for the client to use a certificate, and that's better in some ways because then the server doesn't need to be trusted with the shared secret (the password) either, though it is little more difficult to set up.
[17:20] <rbasak> To verify a certificate, an endpoint can: 1) do nothing, in which case it's useless, but this is a common misconfiguration; 2) verify that the certificate is signed by an authority on the list of allowed authorities (including your own if you like), but then a client could pretend to be a server to another client; 3) verify that the server is using a certificate marked by the authority as only for
[17:20] <rbasak> servers, but then a server could pretend to be a different server; or 4) verify that the server hostname to which it connected matches the hostname in the certificate, which is what web browsers do with HTTPS.
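Options 2 and 4 from rbasak's list can be exercised with openssl directly; a sketch using a self-signed stand-in cert, trusted as its own CA purely for illustration (the CN is a placeholder):

```shell
openssl req -x509 -newkey rsa:2048 -nodes -keyout s.key \
    -subj "/CN=mongo.example.internal" -days 1 -out s.crt

# Option 2: is the cert signed by a CA we trust?  Prints "s.crt: OK"
openssl verify -CAfile s.crt s.crt

# Option 4: does the name we connected to match the cert?
openssl x509 -in s.crt -noout -checkhost mongo.example.internal
```

`-checkhost` needs OpenSSL 1.0.2 or later (16.04 ships it); real clients do the equivalent check internally when configured to verify hostnames.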
[17:21] <sdeziel> jayjo_: this blog post gives completely insecure instructions. Distributing the server's private key to all clients is really not required nor desired
[17:28] <jamespage> EmilienM: promoted to newton proposed; should build and publish in the next hour
[17:28]  * jamespage eods'
[17:30] <jayjo_> I thought it was insecure because the pem files have both the secret key and certificate... what am I supposed to distribute to clients then? Just the certificate component... don't concat the key?
[17:33] <sdeziel> jayjo_: if you pass the mongodb-cert.crt to the client that would be an improvement
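Putting sdeziel's point in config terms: the server keeps the concatenated .pem (key plus cert), and clients get only the CA certificate. A hedged sketch using the MongoDB 3.x YAML option names; all paths are assumptions:

```
# /etc/mongod.conf (fragment)
net:
  ssl:
    mode: requireSSL
    PEMKeyFile: /etc/ssl/mongodb/server.pem   # server key + cert; stays on the server
    CAFile: /etc/ssl/mongodb/ca.crt           # CA public cert, for verifying client certs
```

A client then connects with something like `mongo --ssl --sslCAFile /etc/ssl/mongodb/ca.crt --host mongo.example.internal`; at no point does the server's private key leave the server.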
[17:43] <coreycb> jamespage, ddellav: aodh uploaded for newton
[17:43] <ddellav> coreycb ack
[17:43] <coreycb> well, xenial on yakkety that is
[17:43] <coreycb> sigh...
[17:43] <coreycb> mitaka on yakkety
[17:44] <coreycb> ddellav, ^
[17:44] <ddellav> coreycb so you did it for mitaka/yakkety not newton?
[17:45] <coreycb> ddellav, right.  it's 2.0.1 so it is the mitaka point release, uploaded to yakkety.
[17:45] <ddellav> coreycb ok
[17:45] <coreycb> ddellav, the problem is that aodh doesn't have any newton releases right now, so we need to make sure the version in yakkety is > xenial
[17:46] <ddellav> coreycb right, thats what jamespage said this morning
[17:46] <coreycb> ddellav, yeah
[17:50] <jamespage> coreycb, ddellav: as its release-independent its not really mitaka either
[17:50] <jamespage> at least I think so
[17:57] <coreycb> jamespage, as if I needed the confusion :)
[18:06] <EmilienM> jamespage: ack, thanks
[20:21] <EthicalJesusi> watup y'all
[20:22] <EthicalJesusi> anyone recommend a home grade http cache?
[20:22] <bekks> squid
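For reference, a minimal squid.conf sketch for a home LAN cache; the port is squid's default, and the subnet and cache size are assumptions to adjust:

```
http_port 3128
cache_dir ufs /var/spool/squid 1024 16 256
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all
```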
[20:22] <EthicalJesusi> and is it worth it?
[20:22] <bekks> Worth what?
[20:23] <EthicalJesusi> like im on 100mbit fibre with a business grade modem/router
[20:23] <EthicalJesusi> at home
[20:23] <bekks> Define "worth" in that context.
[20:23] <EthicalJesusi> I load google already in like 3ms
[20:23] <EthicalJesusi> 3-6
[20:23] <bekks> I doubt that.
[20:23] <bekks> You have a ping latency to its IP.
[20:23] <EthicalJesusi> It might take longer to check a cache
[20:24] <EthicalJesusi> Ping latency is like 2ms
[20:24] <bekks> And the ping latency says nothing about how fast the page content is actually loaded.
[20:24] <EthicalJesusi> they have a server across the river from me I believe
[20:24] <bekks> Which doesnt mean you are using it.
[20:24] <EthicalJesusi> traceroute confirms <3
[20:24] <bekks> Really? Do you know the switch names/ip in your area?
[20:25] <EthicalJesusi> I have the local google servers ip, sure
[20:25] <EthicalJesusi> :|
[20:25] <bekks> Which doesnt mean anything.
[20:25] <EthicalJesusi> It does when I traceroute it.....
[20:25] <EthicalJesusi> ?!
[20:25] <bekks> Nope.
[20:26] <EthicalJesusi> this isnt complicated
[20:26] <EthicalJesusi> im not sure why you seem to think it is
[20:26] <bekks> It is far more complicated than you think.
[20:26] <EthicalJesusi> perhaps you could explain
[20:26] <EthicalJesusi> :)
[20:26] <bekks> Based on the output of traceroute you can determine the number of hops only, you cannot tell for sure where a hop is located.
[20:26] <EthicalJesusi> I mean my certification has lapsed but I was fully cisco accredited at one time lol
[20:27] <bekks> Technically, you can get around half the earth in just one hop.
[20:27] <EthicalJesusi> sure you can, they name their servers and they are geolocatable by ip with like 80% accuracy
[20:27] <bekks> EthicalJesusi: Then you should know that...
[20:27] <EthicalJesusi> when its called perth.*sadas*sad8aF*.asf*saf
[20:27] <EthicalJesusi> then its in perth
[20:27] <EthicalJesusi> lol
[20:27] <EthicalJesusi> simples
[20:27] <bekks> You THINK it is.
[20:27] <bekks> Names are futile.
[20:28] <EthicalJesusi> well im not getting 6ms loads from south australia
[20:28] <EthicalJesusi> :P
[20:28] <bekks> And you have no guarantees that you get your answers from across the river.
[20:28] <EthicalJesusi> thats like 1000km for you
[20:29] <EthicalJesusi> I dont need guarantees lol, its just my home, but generally even if its 10x that TO go over east, 3500+km, its still only 60-80md
[20:29] <EthicalJesusi> ms
[20:30] <bekks> Which is not in the scope of this discussion. This discussion is about the fact that you cannot tell whether you are using the google server across the river based on traceroute.
[20:31] <EthicalJesusi> lol no, this discussion is about whether or not I should run a local http cache
[20:32] <EthicalJesusi> it is still like 320-380ms to the USA ;<
[20:33] <bekks> That decision is up to you. Your initial question was which http cache you should use.
[20:33] <EthicalJesusi> but most cool kids have cdn's these days
[20:33] <EthicalJesusi> and should get an australian node ffs
[20:34] <EthicalJesusi> yeah then I was talking about the inherent cache latency vs a real life example of internet latency
[20:34] <EthicalJesusi> facebook and stuff take care of themselves really, they only need to update when theres an update except for the small initial load
[20:34] <EthicalJesusi> and they use a cdn
[20:35] <EthicalJesusi> or are a cdn lol
[20:35] <EthicalJesusi> I guess
[20:38] <bekks> Using your real-life example, and remembering your former Cisco knowledge, you do know that a 3ms ping means a maximum distance of roughly 150km between source and target.
[20:38] <bekks> Thats a wide river.
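The arithmetic behind that kind of bound, roughly: light in fibre covers about 200,000 km/s (c divided by the refractive index of glass, around 1.5), and a ping measures a round trip, so halve the RTT before converting. That gives a hard physical ceiling; practical rules of thumb cut it further for switching and serialization delays, which is presumably where the ~150km figure comes from:

```shell
# Upper bound on one-way distance for a given round-trip time
awk 'BEGIN {
    rtt_ms     = 3
    speed_km_s = 200000                     # ~c/1.5 in glass fibre
    printf "%d km max one-way\n", (rtt_ms / 1000) / 2 * speed_km_s
}'
# → 300 km max one-way
```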
[20:42] <EthicalJesusi> 2ms, and everything in Perth is far between ;P
[20:43] <EthicalJesusi> I think it is one of the biggest river systems in Australia though
[20:43] <bekks> I can see two rivers in Perth, on maps.google.com :)
[20:44] <EthicalJesusi> its the same river system
[20:44] <EthicalJesusi> the swan river system
[20:44] <EthicalJesusi> anyway, its a huge governmental thing - and its the only reason I have fibre, they all run straight across the river to the central Perth exchange
[20:45] <EthicalJesusi> for this reason, most people tend to host their servers in subiaco or the likes
[20:45] <bekks> 300km in length, only.
[20:45] <bekks> not that big :)
[20:45] <EthicalJesusi> haha... YOU SAID IT WAS A GOOD SIZE!
[20:46] <EthicalJesusi> ....
[20:46] <bekks> When did I say that?
[20:46] <EthicalJesusi> shhh
[20:47] <bekks> I guess I'll let you listen to the voices in your head, for a while.
[20:47] <EthicalJesusi> ty
[20:50] <EthicalJesusi> if I had a dns server I bet I could do better optimizations
[20:50] <bekks> Setup one.
[21:05] <EthicalJesusi> 10/10 does not sound like fun
[22:22] <van777> hey all. i've just installed ubuntu-server in VMware, and VMware Tools. How do I change the display resolution? "Display" is not active in Virtual Machine settings
[22:26] <patdk-lap> heh? it's just text
[22:26] <compdoc> 640x480?
[22:28] <compdoc> you should be able tweak that. columns, text size, etc
[22:29] <van777> compdoc: ok, i've ssh-es with putty, good res now ))
[22:30] <van777> ssh-ed*
[22:50] <van777> You can setup delay: /set irc.look.smart_filter_delay 15
[22:50] <van777> sorry )