[02:32] <EmilienM> coreycb: all puppet openstack CI is broken since the uca/mitaka update today
[02:39] <EmilienM> http://logs.openstack.org/14/283714/2/check/gate-puppet-openstack-integration-scenario003-tempest-dsvm-trusty/56e2a10/logs/nova/nova-compute.txt.gz#_2016-02-25_01_08_43_468
[03:36] <patdk-lap> sarnold, hmm, it's not turning up libteam that is my issue, it's turning it off
[05:24] <jayjo> if I want to capture just the standard error in a cron job, is it sh my_script.sh 2>? I know that 2>&1 is stderr and stdout, but I don't know what are the significant components of that command
[05:27] <jayjo> If I put that into my cronjob, I get an error Syntax error: end of file unexpected
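A minimal sketch of the redirection being asked about: `2>` routes file descriptor 2 (stderr) to a file and must be followed by a filename; a bare `2>` at the end of the line is exactly what produces dash's "Syntax error: end of file unexpected". Paths here are hypothetical.

```shell
# Create a throwaway script that writes to both streams.
printf 'echo out\necho err >&2\n' > /tmp/my_script.sh

# Capture ONLY stderr: fd 2 goes to the log, stdout (fd 1) is discarded.
sh /tmp/my_script.sh 2>/tmp/my_script.err 1>/dev/null

cat /tmp/my_script.err   # prints: err
```

The same line works inside a crontab entry, with the caveat that `%` is special to cron and must be escaped as `\%` if the command contains one.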
[07:38] <LostSoul> Hi
[07:38] <LostSoul> I've got this strange issue
[08:02] <spm_draget> Does ubuntu xenial support java 8 with some official package?
[08:38] <jamespage> ddellav, coreycb: more oslo releases - if you guys need a hand I can spare some cycles after git migration next week
[09:27] <shredding> Hello.
[09:27] <shredding> I've done lots of server things lately and docker and stuff and want to dive deeper into devops / sysadmin to get all the basics. Can someone recommend good resources? Preferably online courses.
[09:59] <Poindexter_> Can someone tell me a little bit about the .htaccess file and is the "." considered a "file extension" e.g. such as in windows? I noticed when I do an ls -all command prompt command I see a "." and then ".." or "..." I have been putting index.html in all of my directories to stop listing on a web site. Any comments? I was reading that an .htaccess file should be hidden in all directories on a server.
[10:02] <Poindexter_> This pertains to Apache2 Ubuntu server.
[10:05] <Poindexter_> I think it is very tedious to put an index.html file in every directory to stop people from listing and seeing the contents of the directory, however, it works but I have read that it is not the best solution.
[10:07] <hateball> Poindexter_: putting a dot in front of files make them hidden per default
[10:08] <hateball> Poindexter_: well you can disable listing in apache
[10:08] <Poindexter_> Would that mean that if I put a .html file in each directory, being hidden would stop directory listing?
[10:10] <Poindexter_> I suppose I could do a 404 redirect to the main page would help too.
[10:13] <Poindexter_> Have you seen the Hostage Encryption thing on the news? I get sick to my stomach when I read about that stuff. People have nothing better to do than to make life miserable for others. I suppose there are vulnerabilities in everything.
[10:16] <hateball> Poindexter_: https://wiki.apache.org/httpd/DirectoryListings
[10:17] <hateball> ctrl+f -> prevent
[10:17] <Poindexter_> Hateball I did read that about the .htaccess editing the file. I wanted to hear from someone who had personal experience with it.
[10:18] <hateball> Well it depends if you want to disable it globally or just for certain directories
[10:19] <hateball> if you want it globally, just remove Indexes from httpd.conf
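A sketch of what "remove Indexes" amounts to; on Ubuntu's Apache 2 packaging the directive lives in /etc/apache2/apache2.conf or a vhost file rather than httpd.conf:

```apache
# In the relevant <Directory> block (e.g. the /var/www/ block in
# /etc/apache2/apache2.conf on Ubuntu), or in a per-directory
# .htaccess file (which requires AllowOverride Options):
Options -Indexes
```

With listings disabled this way, requests for a directory with no index file return 403 Forbidden instead of a file listing, so no per-directory index.html is needed.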
[10:19] <Poindexter_> I years ago used the .htaccess file for password protection. This is the first time I recognized the listing issue because I do listing to double-check my work.
[10:20] <Poindexter_> It is such a basic 101 issue but necessary one that people overlook.
[10:23] <Poindexter_> I use puTTy and winSCP for SSH to any of my work. Nice programs especially the tunneling.
[10:25] <Poindexter_> I wouldn't trust any .html web based client to program a server. Not a good idea.
[10:26] <hateball> I've no idea what you're rambling about here
[10:27] <Poindexter_> I was making conversation. I suppose I chose the wrong channel. Thanks for your help though. I appreciate it.
[10:28] <hateball> Poindexter_: There's #ubuntu-offtopic if one feels chatty :)
[10:28] <hateball> Tho this channel is usually idle enough it harms no one
[10:29] <Poindexter_> :)   I am always chatty. I teach A+ Certification at a Network Academy. I give lectures all day long.
[10:30] <Poindexter_> I am always in search for good Technicians to put on my website.
[10:30] <hateball> On the offtopic topic, I find it strange you have a hard time trusting web based clients, yet you use Microsoft Windows ;)
[10:31] <Poindexter_> I use Microsoft as a GUI but on a serious basis, Ubuntu is more trustworthy as a server. I don't and will not use Microsoft server. I have 2000 server and 2003 server. I don't like them.
[10:34] <Poindexter_> They work and are good for what they are but Linux or Ubuntu has been a passion for me for years. I used to program in BASIC with Windows and C++ but, that is not what I do anymore. I love the challenge with command prompt in Ubuntu. I have notebooks full of stuff I have learned.
[10:35] <Poindexter_> One of the best tools I use is IRC. I have met many a good Technician and programmers here.
[10:35] <Poindexter_> I have been using IRC for almost 20 years now.
[10:38] <hateball> Heh, for me using Linux is not about any challenge at all. It's about letting me do what I want, and get work done.
[10:39] <Poindexter_> I like that answer. So too with Windows. I have been using Windows for years and I make lots of money with it. It pays to be good at both.
[10:40] <Poindexter_> I do forensic Data Recovery with Windows. $1,000.00 per customer is nice. So Windows has its benefits. I like GNU open source and the folks who are motivated by it.
[10:42] <Poindexter_> The funny thing is that I use Debian based software to recover Windows data. Such as Bart's Boot Disks and so on.
[10:43] <Poindexter_> If you can find it check out  Ultimate Boot Disk   It works on virtually any machine.
[10:52] <Poindexter_> Hateball it was nice to make your acquaintance. I bid you a good morning here or day wherever you are. Take care of yourself.
[11:06] <hateball> What a friendly fellow :)
[12:35] <coreycb> jamespage, thanks.  we're making progress on the clients and oslos.  it looks like we're going to need a new package for python-positional.
[12:36] <jamespage> coreycb, what's using that?
[12:37] <coreycb> jamespage, keystoneclient, and it's mainline code
[12:38] <coreycb> https://github.com/morganfainberg/positional/tree/master
[12:38] <coreycb> jamespage, ^
[12:39] <jamespage> coreycb, pretty small package
[12:39] <coreycb> yes
[12:40] <coreycb> jamespage, maybe I could put it together and you could help me get it in the new queue
[12:40] <jamespage> coreycb, yes - prob quicker through debian
[12:40] <jamespage> coreycb, maybe checkin with zigo make sure he's not already doing it
[12:40] <coreycb> jamespage, ok
[12:41] <jamespage> he's pretty hot on picking up new pkgs
[12:41] <coreycb> yes
[12:41] <jamespage> coreycb, no ITP raised so you might be clear for that
[12:42] <coreycb> jamespage, ok
[12:42] <jamespage> coreycb, is this critical path for b3?
[12:43] <jamespage> i.e. do we need the new keystoneclient for b3 ?
[12:43]  * coreycb checks
[12:44] <coreycb> jamespage, it doesn't look like it as of now based on global requirements.  however, sometimes you never know.
[14:09] <coreycb> ddellav, oslo.log and oslo.middleware uploaded
[14:13] <ddellav> coreycb ack
[14:14] <EmilienM> coreycb: we had to disable voting on our ubuntu jobs in Puppet CI, the latest update in proposed broke us
[14:15] <EmilienM> we're sorting things out this week
[14:15] <EmilienM> but imho it would be a nice thing to release a bit more often
[14:15] <coreycb> EmilienM, I saw your message.  smoke tests are passing for us.
[14:16] <EmilienM> cool. Not sure we deploy the same way / components
[14:17] <EmilienM> it's cool it works for you - but for other people it's a bit hard to catch up with releases like this. But I might be wrong.
[14:17] <coreycb> EmilienM, we definitely don't, you're probably going to want to debug your failures and let us know if there's a specific bug to look at
[14:17] <coreycb> EmilienM, well we are in beta you know :)
[14:17] <EmilienM> that's what we do since the beginning
[14:17] <EmilienM> we debug and report bugs, don't we?
[14:18] <EmilienM> right, we're in beta. But we run OpenStack trunk without issue (on centos7 jobs with RDO)
[14:18] <coreycb> EmilienM, I just saw a log from you that has failures that could or could not be real issues
[14:21] <jamespage> hallyn, hey - could you peek at https://launchpadlibrarian.net/242779327/buildlog_ubuntu-trusty-amd64.libvirt_1.3.1-1ubuntu2~cloud0~ubuntu14.04.1~ppa201602251230_BUILDING.txt.gz
[14:21] <jamespage> its a libvirt backport failure for the mitaka UCA - xml tests are failing in some way
[14:22] <jamespage> EmilienM, we can certainly push updates through to proposed more regularly, but that will create more instability rather than less imho
[14:23] <EmilienM> I disagree here
[14:23] <jamespage> EmilienM, it's not a release per se - we're still in development so we expect breaks
[14:23] <EmilienM> iterative changes make things fail faster, but also get fixed faster
[14:23] <jamespage> EmilienM, well we can try it for a while and see how it goes if you like
[14:24] <EmilienM> I know it's the Ubuntu channel, but it's worth sharing feedback: RDO has a special repo that runs close to master but is gated by CI.
[14:24] <jamespage> I'm not fussed either way - but backporting from xenial to 14.04 does require some manual intervention from time to time so can lag
[14:24] <EmilienM> it's "mitaka-passed-ci" repo. We use it
[14:25] <EmilienM> I prefer fixing bugs from time to time, rather all in one shot
[14:25] <jamespage> EmilienM, what's the scope of ci that's undertaken on those packages?
[14:25] <EmilienM> jamespage: they gate with a tool called "Weirdo", that is a mirror of what is gating: Puppet OpenStack CI, Kolla CI, Packstack and TripleO.
[14:26] <EmilienM> https://github.com/redhat-openstack/weirdo
[14:31] <EmilienM> jamespage: dmsimard is the guy who initiated all this CI for RDO
[14:32] <EmilienM> jamespage: FWIW, you could run our tests out of the box without anything to do.
[14:32] <EmilienM> we have a script to run: https://github.com/openstack/puppet-openstack-integration
[14:32] <EmilienM> and you just need to export the scenario that you want to run
[14:33] <jamespage> EmilienM, sure - we used to maintain trunk package builds for this purpose (regular CI, earlier break/fix)
[14:33] <jamespage> EmilienM, that said we still used to only upload on milestones; trunk package PPA's were consumable outside of that
[14:33] <jamespage> upload to ubuntu that is
[14:33] <EmilienM> right
[14:34] <EmilienM> but having our tests running in your CI would also help you to get feedback, for free.
[14:34] <EmilienM> like for free. you don't have to make them vote in your release process.
[14:34] <EmilienM> just have it and look at it. and tell me if something will break
[14:34] <EmilienM> (if your CI is public)
[14:34] <dmsimard> I actually gave a talk about 2 weeks ago on how we do CI in RDO in case you're interested in how we do things, here's the most relevant part for you: https://www.youtube.com/watch?v=XAWLm3jP7Mg&feature=youtu.be&t=1207
[14:37] <jamespage> dmsimard, just out of interest, what sort of effort is required to keep your packages up-to-date with trunk? I appreciate that this gives you more iterative visibility on breaking changes, but from our past experience it's been fairly resource intensive
[14:38] <jamespage> which is why we switched to milestone focus a while back
[14:38] <jamespage> rather than daily focus
[14:38] <dmsimard> jamespage: earlier in that talk I linked, I talk about a tool we called delorean (that's actually in the process of being renamed due to trademark issues T_T)
[14:39] <dmsimard> delorean basically watches upstream git repos for new commits and when there is one, it builds it immediately with the rpm spec files that we have for that project
[14:39] <jamespage> dmsimard, we have something very similar
[14:39] <dmsimard> this allows us to 1) have the latest packages available all the time and 2) detect build failures immediately
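A toy sketch of that watch-and-build loop. The real delorean does far more (per-project spec files, repo publishing); `build_package` below is a hypothetical stand-in for the actual build step.

```shell
# One polling step of a delorean-style watcher: compare the branch tip
# against the last commit we built, and trigger a (stubbed) build when
# a new commit appears. Echoes the current tip so the caller can save it.
check_and_build() {
    repo=$1
    last=$2                       # last commit we built ("" on first run)
    new=$(git -C "$repo" rev-parse HEAD)
    if [ "$new" != "$last" ]; then
        echo "new commit $new: building"
        # build_package "$repo"   # hypothetical: the real rpm/deb build
    fi
    echo "$new"
}
```

In practice this would run in a loop with a `git fetch` between iterations, and a failed build would raise an alert (the nagios monitoring mentioned below).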
[14:39] <EmilienM> we tend to fail very fast
[14:39] <EmilienM> almost every week
[14:39] <jamespage> sure - understand the process - just wanted to assess how much effort 2) is these days
[14:40] <EmilienM> but we also fix fast because we involve different communities, tripleo, kolla, puppet, etc
[14:40] <dmsimard> jamespage: so we have this that is updated on every build: http://trunk.rdoproject.org/centos7/report.html
[14:40] <EmilienM> iiuc, it's a lot of effort to put the process in place
[14:40] <dmsimard> and monitored (i.e, nagios) and build failures and then reported and acted on
[14:41] <dmsimard> some build failures are harder than others to fix, thankfully most of them are new dependencies that we already have packages for
[14:41] <jamespage> EmilienM, I'm less worried about the process; more about the cost of acting on build failures...
[14:41] <jamespage> (as in we already have an equivalent process I could ressurect)
[14:42] <coreycb> jamespage, chasing false positives in particular I would think
[14:42] <dmsimard> some build failures are for libraries that we don't have a package for yet, so we need to package these first and then add them to the spec file
[14:42] <jamespage> (right back to essex believe it or not - https://launchpad.net/~openstack-ubuntu-testing/+archive/ubuntu/essex-stable-testing)
[14:42] <EmilienM> jamespage: having the process in place reduces the cost - because you have the right people involved in that
[14:42] <coreycb> dmsimard, do you find yourself wasting much time chasing false positives?
[14:43] <EmilienM> jamespage: imagine if your CI was public and if I could look at my Puppet jobs failing on future ubuntu packages
[14:43] <dmsimard> I don't think we get false positives
[14:43] <EmilienM> jamespage: I'll jump and commit to fix it
[14:43] <jamespage> EmilienM, sure
[14:43] <EmilienM> jamespage: because it will break my CI
[14:43] <dmsimard> I guess sometimes we get CI failures (read: not build failures, but CI failures) that are because of things introduced upstream
[14:43] <EmilienM> jamespage: but right now, I'm passive. I wait, it breaks, I fix.
[14:43] <EmilienM> jamespage: so the cost is expensive for both of us.
[14:44] <dmsimard> For example a while back nova started requiring an API database and no installers consuming RDO packages supported creating that database
[14:44] <jamespage> EmilienM, I understand - seriously - you're the first early consumer of UCA packages outside of canonical
[14:44]  * jamespage ponders this a bit
[14:44] <dmsimard> We reported it to the different projects and helped them resolve the issue -- but before we were ahead of them, their CI in the OpenStack gate didn't break.
[14:45] <dmsimard> s/before/because/
[14:45] <EmilienM> the famous "it works on devstack"
[14:45] <jamespage> dmsimard, ack - we've detected similar issues in stable releases when we used to do this, which were not picked up in stable gates in openstack
[14:45] <jamespage> EmilienM, TM
[14:46] <EmilienM> what I propose is that you run Puppet OpenStack CI jobs beside Juju charms CI, gating new packages that you build. (milestone or trunk, whatever).
[14:46] <EmilienM> and give us access to the CI so we can see jobs result
[14:46] <EmilienM> that would be a first step forward
[14:47] <jamespage> EmilienM, our challenge is that Juju charms CI is still too charm-centric; we want something that is packaging centric...
[14:48] <EmilienM> I am interested by testing your packages and I provide you https://github.com/openstack/puppet-openstack-integration that would work out of the box for you
[14:48] <dmsimard> WeIRDO could be made fairly generic (and be called something else for the purpose of an effort in this direction). I made some design decisions to keep it non-generic, as simple and straightforward as possible, since we have limited resources.
[14:48] <EmilienM> you just need to configure a staging repo before and run one script.
[14:49] <jamespage> EmilienM, right - so we are currently gating the UCA from staging -> proposed based on the testing we do today with charms and tempest....
[14:50] <jamespage> staging is delivered much more iteratively (not trunk)
[14:50] <EmilienM> jamespage: look our scenarios: https://github.com/openstack/puppet-openstack-integration#description
[14:50] <EmilienM> I'm not sure you have such coverage.
[14:52] <jamespage> EmilienM, aside from sahara, trove and ironic, we have the same coverage with charms....
[14:52] <dmsimard> anyway, feel free to poke me if you have any questions regarding how we do things
[14:53] <jamespage> dmsimard, EmilienM: sure - will do - don't have capacity to look at this in the short term but we will review...
[14:53] <EmilienM> jamespage: you have aodh, gnocchi?
[14:54] <jamespage> not yet...
[14:54] <EmilienM> ok :-)
[14:54] <jamespage> missed those...
[14:54] <EmilienM> we're adding zaqar also (WIP)
[14:54] <EmilienM> anyway, like dmsimard said, we're here to help
[14:54] <jamespage> sure
[14:54] <coreycb> ddellav, oslo.cache needs oslo.log dialed down to the right min version in d/control
[14:54] <jamespage> thanks
[14:55] <EmilienM> we work for redhat, we don't have our "red hat" - we just try to help making OpenStack better, so do you.
[14:56] <EmilienM> jamespage: I'll let you know when we have ubuntu jobs green again. Should not be hard to figure out, if it works for you
[14:57] <ddellav> coreycb ok, i'll take care of it
[14:59] <ddellav> coreycb i wonder how that happened, i didn't update it.
[15:00] <ddellav> well i can clearly see who did it, i just wonder why they did that
[15:00] <coreycb> ddellav, yeah I didn't see it changed in your logs, maybe someone else messed it up
[15:00] <coreycb> ddellav, typo I think
[15:00] <ddellav> coreycb yea i guess, a typo 4 times in the file lol
[15:05] <ddellav> coreycb oslo.cache updated and good to go
[15:05] <coreycb> EmilienM, thanks for the discussion
[15:05] <Deliant> i keep getting log errors that drupal8 cannot remove some old files that are not in use anymore (i changed the directory they are stored in), and i deleted the folder manually. is there any way to remove these unused fields manually so i dont get the error messages?
[15:06] <Deliant> ups wc
[15:06] <EmilienM> coreycb: anytime
[15:14] <coreycb> ddellav, oslo.cache uploaded, thanks
[15:33] <jamespage> cpaelzer, hey - doing the dpdk stuff now
[15:33] <jamespage> don't make any changes - I got smb's feedback already covered...
[15:41] <cpaelzer> jamespage: I already pushed the two whitespacies
[15:41] <cpaelzer> jamespage: and arges was about to review and upload (at least that was the plan)
[15:41] <jamespage> cpaelzer, I can upload it for you
[15:41] <cpaelzer> jamespage: if you want/will do the upload we just have to get the ack from arges so you two do not collide
[15:42] <jamespage> arges, hey - I've got the dpdk review/upload!
[15:48] <spm_draget> What is the name for the php package… apache2-mod-php5 or php5.0…?
[15:50] <nacc_> spm_draget: in < xenial, it's php5 (and it should pull in the right deps) and in xenial it's php (which will pull in PHP7.0)
[15:51] <spm_draget> Thanks
[15:52] <ddellav> spm_draget this might not work anymore but if using apache, you can install libapache2-mod-php5 and it will grab the right version of apache mpm and install all the right php mods as well
[15:54] <nacc_> ddellav: spm_draget: I believe php5 (and correspondingly php/php7.0 in xenial) depend on libapache2-mod-php5 (and php7.0 in xenial)
[15:55] <arges> jamespage: ack
[15:55] <ddellav> nacc_ that would be annoying if installing php for use with nginx or some other httpd. I used to use libapache2-mod-php5 as a shortcut with apt: instead of typing apt-get install php5, apache2, etc etc, just installing libapache2-mod-php5 would grab all that automatically
[15:56] <nacc_> ddellav: spm_draget: the other way around is true as well, in that the libapache2 module depends on php5-cli/php-cli
[15:56] <nacc_> ddellav: it's been that way for a while, afaik, someone did file a bug on it
[15:56] <ddellav> nacc_ i think thats right actually now that i think on it. I've had to delete apache after installing php5 because i primarily use nginx now
[15:56] <spm_draget> ddellav: Well, I do explicitly want apache and php. Not only one and the other as a dependency.
[15:57] <spm_draget> But that works for me. Right now I am wondering why on xenial phpmyadmin still tries to pull php5 while php7 is installed (and apt does not seem to mind)
[15:57] <patdk-wk> installing php5 won't install apache
[15:58] <rbasak> php5 depends on libapache2-mod-php5 OR php5-cgi OR php5-fpm etc.
[15:58] <nacc_> spm_draget: it's a wip
[15:58] <ddellav> spm_draget xenial is still a WIP so the phpmyadmin package might not be ready to support php7 yet *shrugs*
[15:58] <nacc_> spm_draget: that will be fixed in the final
[15:58] <rbasak> If you tell apt what you want, no need to pull in Apache.
[15:58] <ddellav> rbasak ah so you need to install php5-fpm and it will forgo installing apache
[15:59] <nacc_> e.g., apt-get install php5 php5-fpm, iiuc
[15:59] <patdk-wk> or php5-cgi
[15:59] <ddellav> fwiw the last time i did this was in trusty
[15:59] <rbasak> Right. "apt-get install php5 php5-fpm" will not pull in Apache.
[15:59] <patdk-wk> whatever one you plan to use with nginx
[15:59] <ddellav> gotcha
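The apt incantations discussed above, as a sketch (trusty package names; on xenial substitute php and php7.0-fpm):

```shell
# Naming php5-fpm alongside php5 satisfies php5's
# "libapache2-mod-php5 | php5-cgi | php5-fpm" alternation, so apt
# does not fall back to the first alternative and pull in Apache.
sudo apt-get install php5 php5-fpm

# Conversely, this single package drags in Apache, a compatible MPM,
# and the core PHP packages via its own dependencies:
sudo apt-get install libapache2-mod-php5
```

The general apt rule at work: when a dependency is an OR-list of alternatives, apt auto-selects the first one only if none of the alternatives is already installed or explicitly requested.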
[15:59] <spm_draget> nacc_: Ah okay. Well, testing xenial right now. I guess I will not install phpmyadmin yet.
[15:59] <spm_draget> Thought it was in feature-freeze since 10 days or something.
[15:59] <ddellav> spm_draget take this as an opportunity to learn the mysql cli :P
[16:00] <spm_draget> Yeah, I will manage :)
[16:00]  * patdk-wk has no idea how to use phpmyadmin
[16:01] <ddellav> one look at my access logs to see how many bots out there scan for phpmyadmin installations is enough to get me to never install it ever again. At least not publicly accessible
[16:01] <patdk-wk> I run publicly accessible phpmyadmin
[16:01] <patdk-wk> haven't had any issues
[16:01] <nacc_> spm_draget: we are in FF, but the php7 transition is a large one
[16:02] <spm_draget> nacc_: I can imagine. Huge change. But thanks for all the work you people do! I am currently evaluating xenial for our production server… trying to migrate some services over and might switch to production in april
[16:02] <spm_draget> s/migrate/copy and test
[16:04] <ddellav> yea, all my trusty boxes will get upgraded when it's released.
[16:04] <ddellav> no longer needing to install ppa's to get newer packages
[16:04]  * patdk-wk has already started upgrading a few
[16:05] <nacc_> spm_draget: good to hear, and i appreciate the feedback, i can try and remember to ping you when phpmyadmin has been updated (not to say i recommend it or anything)
[16:05] <patdk-wk> ddellav, you will always have to do that
[16:05] <ddellav> patdk-wk eventually yea but right off the bat i wont need a custom ppa to get php 5.6, newer nginx, etc
[16:05] <patdk-wk> one should never expose a management interface to the public :)
[16:05] <patdk-wk> I do it, but that is cause it's customer management, not my management
[16:06] <patdk-wk> I'm the other way
[16:06] <patdk-wk> too many customers that want to keep running php 5.4
[16:06] <ddellav> patdk-wk when i ran a hosting company i had multiple PMA's running, can't really avoid it but now im not doing that so i try to reduce my attack surface as much as possible heh
[16:06] <patdk-wk> due to everything required to make php 5.5+ work
[16:07] <patdk-wk> this php upgrade, even is so highly annoying
[16:07] <ddellav> my largest trusty box runs a single php-based website so php version is important
[16:07] <ddellav> 5.6 gives us array constants which is nice
[16:07] <ddellav> (among other things)
[16:10] <nacc_> patdk-wk: isn't 5.4 EOL? :)
[16:10] <patdk-wk> by who?
[16:10] <patdk-wk> for php sure
[16:10] <patdk-wk> for ubuntu, no
[16:10] <nacc_> patdk-wk: fair enough
[16:11] <nacc_> just seems like those customers *may* want to think about moving soon-ish?
[16:11] <patdk-wk> ya, but that is rather hard
[16:11] <nacc_> yep
[16:11] <patdk-wk> especially when a lot of them are using *compiled* php code
[16:11] <patdk-wk> that no longer exists
[16:11] <nacc_> ah
[16:11] <nacc_> yeah, that's no good
[16:11] <patdk-wk> and you cannot use compiled code < php5.5 on php 5.5+
[16:11] <patdk-wk> has to be recompiled
[16:12] <patdk-wk> ya, I am running a mix right now
[16:12] <nacc_> patdk-wk: what version of currently supported ubuntu has php5.4? precise?
[16:12] <patdk-wk> everything that doesn't run customer code, is already upgraded
[16:12] <patdk-wk> yes
[16:12] <patdk-wk> well, that is 5.3 though
[16:12] <nacc_> yeah, i see 5.3.10-1ubuntu3.21
[16:13] <patdk-wk> but the compiled code works upto 5.4
[16:13] <nacc_> oh ok
[16:17] <urthmover> is this the right channel for 16.04 talk?
[16:17] <patdk-wk> depends
[16:17] <teward> ^
[16:18] <patdk-wk> and not talking about the underwear
[16:18] <teward> lol
[16:18] <urthmover> well I installed the 16.04 daily and I find it strange that zfs doesn't appear to be installed by default.  I thought that I read somewhere that it would be included.  Any thoughts about this?
[16:18] <urthmover> 16.04 daily server to be exact
[16:19] <patdk-wk> no thoughts, no nothing about it
[16:19] <patdk-wk> but I would find it HIGHLY odd, if it was
[16:19] <Schalla> Anyone here can recommend the Official Ubuntu Server book?
[16:19] <urthmover> patdk-wk: http://blog.dustinkirkland.com/2016/02/zfs-is-fs-for-containers-in-ubuntu-1604.html
[16:19] <ddellav> coreycb oslo-sphinx and python-oslotest are ready for review and upload
[16:20] <urthmover> patdk-wk: I guess it's really only that zfs will be native for lxc containers
[16:20] <nacc_> urthmover: it specifically says it's not installed by default? that is, you have to set it up
[16:20] <patdk-wk> what does, zfs for conainers have to do with, installed by default
[16:21] <patdk-wk> ya, atleast that blog post only claims the kernel module will be installed
[16:21] <patdk-wk> not even administrator utils to manage it will be installed by default
[16:21] <urthmover> I made an incorrect assumption....I thought that the inclusion of zfs for containers would also mean that zfs utils etc. would be installed by default...possibly a choice of filesystem during install
[16:22] <urthmover> not the end of the world...I can do it myself....just a bad assumption on my part
[16:22] <patdk-wk> :)
[16:22] <nacc_> urthmover: i believe it's explicitly not on the install media, as it can't be used for / -- but i might be wrong
[16:22] <JanC> why not for / ?
[16:22] <patdk-wk> does ubuntu grub have the needed zfs parts?
[16:23] <urthmover> if you want it on /...these docs look sound  https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Native-ZFS-Root-Filesystem
[16:23] <JanC> grub doesn't need to access (the later) /
[16:23] <nacc_> JanC: as i said, i might be wrong...
[16:23] <patdk-wk> well, my grub does :)
[16:24] <patdk-wk> atleast considering /boot is on zfs
[16:24] <nacc_> urthmover: that link also does mention you have to do some steps outside the installer
[16:24] <nacc_> urthmover: that's all i meant, really
[16:25] <urthmover> nacc_: ah...I see...yeah there are steps outside the installer
[16:25] <patdk-wk> also very nice to use beadm :) to boot snapshots
[16:32] <ddellav> coreycb python-heatclient fixed and ready for review/upload.
[16:33] <coreycb> ddellav, can you add python-os-client-config to the binary package Depends for oslotest?
[16:33] <coreycb> ddellav, awesome, looking
[16:33] <ddellav> coreycb ok, i saw that and was wondering if i should add it.
[16:43] <ddellav> coreycb oslotest updated
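What adding a dependency to the binary package amounts to in debian/control, sketched as a hypothetical, abridged excerpt (field values are illustrative, not the actual packaging):

```
Package: python-oslotest
Architecture: all
Depends: python-os-client-config,
         ${misc:Depends},
         ${python:Depends},
```

The versioned-minimum case mentioned earlier is the same field with a relation, e.g. `python-oslo.log (>= 3.1.0)`, again with a hypothetical version number.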
[16:44] <coreycb> ddellav, for heatclient can you update python3-oslo.serialization and tempest-lib in d/control?
[16:45] <ddellav> coreycb ok
[16:47] <ddellav> coreycb tempest-lib has no version in d/control. I was under the impression we do not add one if no version currently exists.
[16:48] <coreycb> ddellav, ah yeah it's not really needed since it didn't exist in trusty (ie. no need to differentiate from what's in trusty when using the cloud archive)
[16:48] <ddellav> coreycb ok so i added that missed serialization update, pushing now
[16:48] <coreycb> ddellav, thx
[16:50] <coreycb> ddellav, oslo-sphinx uploaded
[16:53] <ddellav> coreycb ok great. heatclient updated and pushed
[16:59] <coreycb> ddellav, hmm I can't generate a good orig tar file for oslotest
[17:01] <coreycb> ddellav, not with zigo's workflow, at least
[17:01] <jamespage> cpaelzer, uploaded - then realized the Vcs-Origin fields are foobar - pushed a change to the repo - not worth its own upload
[17:01] <ddellav> coreycb yea, it says it cant verify the tag
[17:01] <ddellav> but i was still able to build the package with gbp
[17:02] <cpaelzer> jamespage: those VCS fields are a reoccurring discussion if/how they should be added
[17:02] <bieb> SSL question.. We have a wildcard ssl cert, it has been installed on a couple subdomains. Our webserver was Windows/IIS and had the SSL cert installed. I have just built an Ubuntu server with Apache to be our new webserver, everything on that end is fine. I was not sure if I will have to rekey our SSL for ubuntu, or can I install the same key used on IIS? I think I have the original keys saved in a zip file from godaddy
[17:03] <cpaelzer> jamespage: last time rbasak did kind of collect the last status and I have a few chat/mail threads to refer to - but all ok for now
[17:05] <ddellav> bieb: https://www.sslshopper.com/move-or-copy-an-ssl-certificate-from-a-windows-server-to-an-apache-server.html
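The conversion that page describes boils down to splitting the PKCS#12 (.pfx) bundle exported from IIS into the separate key and cert files Apache expects. A sketch with a throwaway self-signed cert standing in for the real GoDaddy wildcard export (all filenames hypothetical):

```shell
cd "$(mktemp -d)"

# Stand-in for the .pfx you would export from IIS (empty export password):
openssl req -x509 -newkey rsa:2048 -nodes -keyout site.key -out site.crt \
    -days 1 -subj "/CN=*.example.com"
openssl pkcs12 -export -inkey site.key -in site.crt -out site.pfx -passout pass:

# The actual conversion: split the bundle into Apache-style PEM files.
openssl pkcs12 -in site.pfx -passin pass: -nocerts -nodes -out apache.key
openssl pkcs12 -in site.pfx -passin pass: -clcerts -nokeys -out apache.crt

# Sanity check: the key survived the round trip.
grep "PRIVATE KEY" apache.key
```

The resulting files go into `SSLCertificateFile` / `SSLCertificateKeyFile` in the vhost config; no rekey is required since the same key pair keeps working.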
[17:15] <coreycb> ddellav, heatclient uploaded
[17:20] <bieb> ddellav: Thanks!!
[17:20] <ddellav> bieb np
[17:20] <ddellav> fwiw it was the first result in google ;)
[17:21]  * RoyK uses let's encrypt
[17:21] <ddellav> RoyK thats the free ssl cert site?
[17:22] <ddellav> i've been using startssl certs for ages, they work really well
[17:24] <bieb> ddellav: I asked here.. because Godaddy told me I would have to rekey the cert, then update any subdomain that is using the current cert.. I figured someonw here would have a better idea.. :)
[17:24] <ddellav> bieb yea, they might, idk, but i do know that moving from IIS to apache is pretty common :P
[17:25] <bieb> ddellav: gotta love support from godaddy..
[17:25] <ddellav> bieb they are pretty much complete garbage. I stopped doing business with them many years ago. I use hover for domains (even though im considering switching since they dont support dnssec).
[17:25] <ddellav> and i've always done my own hosting
[17:26] <RoyK> ddellav: yeah
[17:27] <RoyK> ddellav: it's rather neat with let's encrypt if you have a bunch of subdomains/hosts
[17:28] <RoyK> I only have a single domain (mostly), so I stuff a lot of hosts/subdomains in there, and things like startssl means I have to pay rather a lot for that
[17:28] <bieb> ddellav: we dont host there, we host our own. We have had SSL with them for a few years and domains
[17:30] <ddellav> RoyK oh i use the free startssl certs. If i buy them i use thawte or comodo usually.
[17:33] <RoyK> ddellav: last I checked, you couldn't get multiple host certs on startssl
[17:34] <RoyK> for free, that is
[17:34] <ddellav> RoyK you can't get free wildcards, no but you can get unlimited free subdomain certs
[17:34] <ddellav> so it's a bit more work but it's doable
[17:34] <RoyK> well, I ditched it for let's encrypt, which works well
[17:34] <qman__> I have a paid startssl cert, unlimited names
[17:35] <qman__> The free ones are limited, forget the exact limits
[17:35] <ddellav> wildcards are great, and technically what you should be using if you have multiple domains/subdomains on a single IP
[17:35] <ddellav> you're supposed to have 1 cert per ip address but it's not strictly necessary
[17:36] <qman__> IP addresses are irrelevant, certs only specify names
[17:37] <ddellav> qman__ thats true, the CN only specifies a domain, however for added security, some browsers will complain if they detect multiple certs from the same ip address, so for that reason it should be 1 per ip
[17:37] <ddellav> or at least thats how it used to be
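qman__'s point that certs specify names rather than IPs is easy to verify: a cert's coverage is its CN plus its Subject Alternative Name entries. A minimal sketch, using throwaway hypothetical names (requires OpenSSL 1.1.1+ for `-addext`/`-ext`):

```shell
set -e
dir=$(mktemp -d)
# generate a throwaway self-signed cert covering two names via SAN entries
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/k.pem" -out "$dir/c.pem" \
  -subj "/CN=example.test" \
  -addext "subjectAltName=DNS:example.test,DNS:www.example.test" 2>/dev/null
# print the names the cert actually covers -- no IP address appears anywhere
subject=$(openssl x509 -in "$dir/c.pem" -noout -subject)
san=$(openssl x509 -in "$dir/c.pem" -noout -ext subjectAltName)
echo "$subject"
echo "$san"
```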
[17:37] <qman__> Read about SNI
[17:38] <qman__> You appear to have an incorrect understanding of the issue
[17:38] <ddellav> qman__ last time I got in depth with ssl certs was with apache and I don't think SNI was widely supported. But I see now what you mean
[17:54] <jamespage> coreycb, xenial is being awkward - won't start instances
[17:54] <jamespage> afaict messages for port creation don't get to the n-ovs-agent on the compute host
[17:58] <patdk-wk> these days, there is no need to do one cert per ip
[17:58] <patdk-wk> cause everything does SNI, that is not a security issue
[17:59] <patdk-wk> the problem is, people *still* use windows xp, and old java, and custom coded apps
[17:59] <patdk-wk> though, for browser based website access, I have started deploying sni without issues
[17:59] <patdk-wk> (though today we did find out that android does not support sni, if set to tls 1.2 only)
[18:00] <trippeh> 2.x android?
[18:00] <patdk-wk> 5.x android
[18:01] <trippeh> wut
[18:01] <patdk-wk> you have to have ONLY tls 1.2 enabled
[18:01] <patdk-wk> if anything else is, like tls 1.1, it works
[18:01] <patdk-wk> even using tls 1.2 :)
[18:01] <ddellav> i've had a lot of issues with windows applications that use .net to make api requests over ssl. I had to really tweak my ssl settings to allow these apps to get through
[18:02] <patdk-wk> tls 1.2 was added in android 4.1
[18:02] <patdk-wk> in android 4.x it works
[18:04] <qman__> Yeah, you can reasonably expect SNI to work today, only really old stuff and the occasional bug like that are problems
[18:05] <qman__> But even without SNI, the issue is one cert for a given IP, not one IP for a cert
[18:06] <qman__> Because the server must blindly send the cert when no SNI is specified
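The behavior qman__ describes can be demonstrated locally with `openssl s_server`, which serves a default cert plus a second cert selected by SNI. A sketch with hypothetical names and a throwaway port; `s_server`'s `-cert2`/`-servername` SNI options assume OpenSSL 1.1.0+:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
# self-signed certs for two hypothetical names sharing one IP and port
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout a.key -out a.pem -subj "/CN=siteA.test" 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout b.key -out b.pem -subj "/CN=siteB.test" 2>/dev/null
# serve a.pem by default, and b.pem when the client's SNI matches -servername
openssl s_server -quiet -accept 8443 -naccept 2 \
  -cert a.pem -key a.key \
  -cert2 b.pem -key2 b.key -servername siteB.test &
srv=$!
sleep 1
# no SNI (IP literal, so none is sent): the server blindly sends its default cert
no_sni=$(echo | openssl s_client -connect 127.0.0.1:8443 2>/dev/null \
  | openssl x509 -noout -subject)
# with SNI: the server selects the matching cert instead
with_sni=$(echo | openssl s_client -connect 127.0.0.1:8443 \
  -servername siteB.test 2>/dev/null | openssl x509 -noout -subject)
kill "$srv" 2>/dev/null || true
echo "$no_sni"
echo "$with_sni"
```

The first connection comes back with the siteA cert, the second with siteB, which is qman__'s point: without SNI there is exactly one cert the server can present per IP:port.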
[18:11] <jamespage> coreycb, gotcha - https://bugs.launchpad.net/ubuntu/+source/neutron/+bug/1549919
[18:11] <jamespage> my revert of the agent crossover was incomplete.
[18:13] <jamespage> coreycb, fix uploaded
[18:16] <jamespage> coreycb, that took me way too long to find
[18:16] <jamespage> I was poking at deps and hacking debug into code to try to figure out wtf was going on
[18:16] <jamespage> maybe its time to eod
[18:21] <eein> hello. I was looking to install the Concerto digital signage system on a 14.x Ubuntu server and the guide reads to me as though the packages are in the repo, but I don't find that to be the case. I can add a repo, but were these packages in the official repos and removed recently, or am I just reading this wrong? https://github.com/concerto/concerto/wiki/Installing-Concerto-2
[18:23] <coreycb> jamespage, ugh, thanks for the fix
[18:24] <eein> hmm I guess it was never in the repo the guide is just organized poorly and a little misleading.
[18:24] <sarnold> eein: step 2 involves running a shell script to add their repository to your apt sources -- it isn't in the ubuntu archives
[18:25] <eein> yeah, thanks sarnold. the headings make it seem like they are separate options but I see now the main headings have a <hr>
[18:25] <sarnold> eein: .. and it appears that their script is quite old, it adds _saucy_ sources. ubuntu EOLed saucy in july 2014
[18:29] <coreycb> ddellav, I think oslotest needs some fixing because it's missing git tags for the new release, or maybe you just didn't push them?
[18:31] <ddellav> coreycb weird, it shows up on mine: https://www.dropbox.com/s/blxmeuvsemg7v3b/Screenshot%202016-02-25%2013.31.15.png?dl=0
[18:31] <coreycb> ddellav, did you git push --tags?
[18:31] <ddellav> coreycb indeed
[18:32] <coreycb> ddellav, anyway that's why generating the tarball failed. ok, let me look again.
[18:32] <ddellav> coreycb i have the tags and it fails for me too
[18:38] <coreycb> ddellav, ok I think I'm just not picking up the tags on the merge
[18:47] <coreycb> ddellav, ok figured it out, I needed "git remote add --tags". not sure why it usually works for me without that though. anyway..
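coreycb's fix, `git remote add --tags`, records `remote.<name>.tagOpt = --tags` so that every fetch from that remote also retrieves all of its tags (by default a plain fetch only auto-follows tags reachable from the fetched branches). A minimal local sketch with a hypothetical upstream repo:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
# an upstream repo with one commit and a release tag
git init -q upstream
git -C upstream -c user.email=ci@test -c user.name=ci \
  commit -q --allow-empty -m "initial"
git -C upstream tag 1.0.0
# a downstream repo that tracks upstream; --tags records the tagOpt
# so fetches from this remote always pull its tags too
git init -q local
cd local
git remote add --tags upstream "$dir/upstream"
git fetch -q upstream
git tag -l   # 1.0.0
```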
[18:47] <ddellav> coreycb are you able to gen the orig? im still unable to
[18:48] <coreycb> ddellav, yes, I can now
[18:48] <ddellav> hrm...
[18:56] <Razva> is Liberty ready for production, or should I go with Kilo?
[19:01] <coreycb> Razva, Liberty was released last October and most if not all of the core projects have had at least one stable point release since then
[19:02] <coreycb> so they've had at least a round of bug fixes at this point; neutron just had its third point release
[19:03] <coreycb> you'll also get an extra 6 months of support out of Liberty: https://wiki.ubuntu.com/ServerTeam/CloudArchive
[19:34] <coreycb> ddellav, I just uploaded a new python-monotonic if you want to retry oslo.utils once it builds
[21:00] <coreycb> ddellav, oslotest uploaded
[21:01] <ddellav> coreycb ack