[00:06] <Geom> is unrar-free = unrar?
[00:09] <Ben64> yes
[01:10] <Guest4915> anyone able to help out with this question of mine? http://askubuntu.com/questions/838981/local-apt-mirror-says-release-cant-be-found-but-its-there-what-am-i-missing
[01:17] <tarpman> Guest46640: trying to reach https://apt.devita.co/puppet to have a look at the files, but it's timing out
[01:18] <tarpman> gorelative1: sorry, tab-complete fail ^
[01:18] <gorelative1> probably dns
[01:18] <tarpman> apt.devita.co is an alias for devita.co.
[01:18] <tarpman> devita.co has address 68.2.71.66
[01:19] <gorelative1> yeah add an alias to that ip
[01:19] <gorelative1> its up i just got to it externally
[01:19] <gorelative1> make sure it's https
[01:20] <tarpman> can see it now, looking
[01:20] <gorelative1> thanks
[01:20] <gorelative1> tag me when you respond, im windowed out
[01:26] <tarpman> gorelative1: looks like your webserver isn't sending the intermediate certificate
[01:27] <gorelative1> its self signed; internally it resolves right here in the lan
[01:28] <gorelative1> and thats actually a comodo wildcard cert
[01:29] <tarpman> gorelative1: the end certificate is fine, the root is fine and is in ca-certificates, it's the intermediate in between that's missing
[01:30] <gorelative1> hmm
[01:30] <tarpman> gorelative1: but that's different from the error you pasted, so I'll bypass that and carry on
[01:30] <gorelative1> yah
[01:32] <gorelative1> i dont think thats it because i dont get ssl warnings on the server
[01:34] <tarpman> gorelative1: after bypassing the ssl problem and adding the gpg key, I'm getting:
[01:34] <tarpman> E: Failed to fetch https://apt.devita.co/puppet/dists/xenial/PC1/binary-all/Packages  404  Not Found
[01:34] <gorelative1> what key did you add, what are those commands?
[01:34] <tarpman> sources.list entry is
[01:34] <tarpman> deb https://apt.devita.co/puppet xenial PC1 main
[01:35] <tarpman> key was https://apt.puppetlabs.com/DEB-GPG-KEY-puppet
[01:37] <tarpman> gorelative1: copied the sources.list from your askubuntu verbatim, and that's the only error I'm seeing - not seeing the error you originally posted at all :\
[01:37] <gorelative1> hmm
[01:37] <gorelative1> ran wget -qO - https://apt.puppetlabs.com/DEB-GPG-KEY-puppet | sudo apt-key add -
[01:38] <gorelative1> tried apt-get update again
[01:38] <gorelative1> https://gist.github.com/mikedevita/32288c4a6b87a24438cc83c64a593a86
[01:39] <gorelative1> if you look, Packages is indeed missing
[01:39] <gorelative1> binary-amd64 is the only one there
[01:41] <tarpman> gorelative1: fine, but I don't get why you and I are getting different results. is your webserver/repo config at all different internally vs externally?
[01:41] <gorelative1> nope
[01:41] <gorelative1> its just nginx
[01:41] <gorelative1> could my apt be caching something?
[01:42] <tarpman> gorelative1: try apt-get -o Debug::acquire::https=1 update
[01:43] <tarpman> gorelative1: yeah, there's probably some sort of caching - clear out /var/lib/apt/lists if you want to be sure
[01:43] <gorelative1> https://gist.github.com/mikedevita/0ecff235d736e5e9f4d6c26b0a0dfe03
[01:44] <tarpman> gorelative1: hmm. do you have the ca-certificates package installed?
[01:44] <gorelative1> yeah i need to add the intermediate looks like
[01:44] <gorelative1> let me switch to http:// and see how it does
[01:45] <tarpman> gorelative1: the intermediate is one issue, sure, but the gist you just put up looks like your /etc/ssl/certs/ca-certificates.crt is screwed up
[01:45] <tarpman> gorelative1: if you don't have ca-certificates installed, install it; if you do, maybe run update-ca-certificates to regenerate that
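[editor's note] A sketch of tarpman's suggested repair, assuming a Debian/Ubuntu system. These commands need root, and the internal-CA filename below is hypothetical:

```
# Reinstalling ca-certificates rebuilds /etc/ssl/certs/ca-certificates.crt
sudo apt-get install --reinstall ca-certificates

# Extra/internal CAs belong under /usr/local/share/ca-certificates/ as .crt
# files; update-ca-certificates folds them into the generated bundle:
sudo cp my-internal-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
```

Dropping a CA file directly into /etc/ssl/certs by hand (rather than via update-ca-certificates) is one way the bundle ends up "screwed up" as described above.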
[01:46] <gorelative1> https://gist.github.com/mikedevita/0ecff235d736e5e9f4d6c26b0a0dfe03
[01:46] <gorelative1> it throws warnings
[01:46] <gorelative1> i removed the ca's i added
[01:47] <gorelative1> errors went away for update-ca-certificates
[01:47] <gorelative1> looks like my ca's are messed up
[01:48] <tarpman> gorelative1: so apt-get update gets as far as mine did now?
[01:48] <gorelative1> no lol
[01:48] <tarpman> :<
[01:48] <gorelative1> i even changed sources.list to use http and its still trying to use https
[01:48] <gorelative1> i think hold on
[01:49] <gorelative1> im forcing ssl
[01:49] <gorelative1> k no errors with http
[01:49] <gorelative1> so its the CAs i added
[01:50] <gorelative1> gd comodo
[01:51] <tarpman> godaddy and comodo? those are both normally part of the default root list anyway...
[01:51] <gorelative1> namecheap
[01:51] <gorelative1> not part of apparently lol
[01:51] <gorelative1> no ca included and it fails
[01:51] <gorelative1> with https
[01:52] <gorelative1> ill combine the ca chain with my cert and see what that does
[01:57] <tarpman> gorelative1: popping out for a bit, back later if you get stuck again
[02:02] <gorelative1> tarpman, looks like the ca chain is borked and i cant get it to work :\ i just moved to http://
[02:03] <gorelative1> using the latest chain from namecheap with cat'ing it together domain.crt ca-bundle.crt > domain-full.crt
[02:03] <gorelative1> https://www.ssllabs.com/ssltest/analyze.html?d=apt.devita.co
[02:06] <gorelative1> had to add [arch=all] to apt-mirror list to get the binary-all
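[editor's note] For reference, the mirror entry gorelative1 describes would look roughly like the line below (suite and component taken from the conversation; upstream URL and exact `[arch=...]` option support depend on your apt-mirror version, so treat this as a guess):

```
# /etc/apt/mirror.list (sketch)
deb [arch=all] http://apt.puppetlabs.com xenial PC1
```

Without the extra arch entry, apt-mirror fetches only the configured architectures (e.g. binary-amd64) and the binary-all index is missing from the mirror, producing the 404 seen earlier.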
[02:38] <gorelative1> thanks again tarpman i set your answer as the right one
[06:55] <pavlos> uvt-kvm: error: libvirt: Domain not found: no domain with matching name 'secondttest'
[06:55] <pavlos> when trying to create a vm using uvtools
[07:02] <pavlos> using this page ... https://help.ubuntu.com/lts/serverguide/cloud-images-and-uvtool.html
[07:03] <pavlos> virsh list --all shows both firsttest and secondtest running
[07:44] <Javezim> Anyone have an issue where ISCSITARGET maxes out 100% of one CPU Core?
[07:45] <Javezim> As soon as a windows client connects to it, bam, 100% CPU Core
[07:45] <Javezim> and it locks up
[13:16] <zul> jamespage/coreycb: ping. zesty isn't open yet, where should we stuff stuff for when it's ready?
[13:17] <coreycb> zul, I think you can upload and it'll sit in the queue for now
[13:17] <coreycb> zul, we can also upload to the daily build ppas to get CI working
[13:18] <zul> ack
[13:18] <coreycb> zul, https://launchpad.net/~openstack-ubuntu-testing/+archive/ubuntu/ocata
[13:18] <coreycb> zul, I'm going to start working through ci failures
[13:18] <zul> coreycb: just packaging a new dependency
[13:19] <coreycb> zul, ah, which one?  I just noticed monasca-statsd is needed by designate
[13:19] <zul> yes that one
[13:20] <coreycb> zul, cool
[13:21] <coreycb> zul, I'll look at heat
[13:21] <zul> coreycb: keystone needs a newer oslo.policy
[13:22] <coreycb> zul, ok
[13:43] <zul> coreycb: hah no python3 for monasca-statsd
[13:43] <coreycb> zul, really?  it shouldn't have made it through global-requirements review if that's the case.
[13:44] <zul> coreycb: yeah that file is empty
[13:44] <coreycb> zul, I'd open a bug
[13:46] <zul> coreycb: https://bugs.launchpad.net/monasca/+bug/1634901
[13:57] <zul> coreycb: monasca-statsd uploaded to the ppa
[13:59] <coreycb> zul, great. are you pushing that repo to ubuntu-server-dev?
[13:59] <zul> coreycb: yeah sure
[14:00] <coreycb> zul, ok
[14:00] <zul> coreycb: how?
[14:01] <coreycb> zul, I'd model it after the existing packages and use this workflow: https://wiki.ubuntu.com/OpenStack/CorePackages
[14:26] <zul> lp:~ubuntu-server-dev/ubuntu/+source/monasca-statsd
[16:05] <Braven> is there a way to control whether an interface registers in DNS
[17:08] <sweb> what's the best solution for high availability on ubuntu servers?
[17:33] <andol> sweb: That's entirely service specific.
[17:34] <sweb> andol: i read something about bgp anycast ... can i run this solution in software? or do i need hardware and ISP configuration?
[17:34] <sweb> i used DNS round robin (multiple A records) but it seems it's not for HA
[17:35] <sweb> i need a solution that can be done entirely in software (operating system and applications)
[17:38] <andol> sweb: Had you been doing your own BGP you would very likely have known that. So yeah, an anycast solution would require the assistance of your ISP.
[17:40] <andol> sweb: Having a DNS failover is valuable as a much-better-than-nothing alternative when an entire site falls down. Yet, there is a lot to gain by making each individual site more resilient.
[17:42] <sweb> andol: best solution is DNS round robin ... cause the end user can best find out which server is accessible ... but in solutions like a dns health checker, the checks go from server to server and that's not good enough ... but i can't find out why this good solution is not implemented well on clients like wget ... modern browsers use it but with a long timeout check
[17:43] <Logos01> Howdy, folks. Anyone have a notion as to why an Ubuntu 16.04 box created via Vagrant would fail to generate its ssh host keys upon first startup?
[17:45] <andol> Logos01: As in it has no ssh host keys, or as it doesn't get a new unique one?
[17:45] <Logos01> andol: As in it somehow winds up with none and doesn't generate any.
[17:46] <andol> Logos01: Sounds like a problem with a particular box? Not seeing that issue with the official Ubuntu boxes.
[17:48] <Logos01> andol: I'm using the bento repo's boxes and building them myself via packer; the initial run works fine, but once I make my local customizations and do a vagrant package, somehow the ssh keys are getting purged and they don't get created when using that box later.
[17:48] <Logos01> For now I've put a hack in place by having a oneshot service invoke a script to regenerate the keys if they're absent before SSH starts but that is peculiar.
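[editor's note] The workaround Logos01 describes might look something like the unit below. The unit name and condition path are guesses, not what was actually used; `ssh-keygen -A` generates any host key types that are missing, so it is safe to run unconditionally too:

```
# /etc/systemd/system/regen-ssh-host-keys.service (hypothetical name)
[Unit]
Description=Regenerate missing SSH host keys
Before=ssh.service
ConditionPathExists=!/etc/ssh/ssh_host_ed25519_key

[Service]
Type=oneshot
ExecStart=/usr/bin/ssh-keygen -A

[Install]
WantedBy=multi-user.target
```

Enabled with `systemctl enable regen-ssh-host-keys`, this runs once before sshd on boxes where the packaging step stripped the keys.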
[17:49] <Logos01> (I can't actually use the official Ubuntu boxes for a few reasons one of which being that they only support Virtualbox.)
[17:50] <andol> Afraid I don't know Packer well enough to help you there.
[17:51] <Logos01> andol: It's possible that the "Vagrant Package" command strips the keys out but that's irritating.
[17:51] <andol> Not a big fan of VirtualBox either, but a while back I decided that my Vagrant usage would become so much easier if I just accepted having VirtualBox in the background.
[17:51] <Logos01> andol: I've been doing alright with libvirt mostly.
[17:51] <Logos01> I suspect I'd have this problem no matter what though because if it's anywhere that it's breaking down, it's the packaging process.
[19:04] <coreycb> zul, jamespage, ddellav: ok I think we're all populated: https://code.launchpad.net/~ubuntu-server-dev/+git
[19:05] <coreycb> the new repos will need new upstream releases before they're useful. pristine-tar and upstream branches are empty right now.
[19:05] <zul> coreycb: can you put the script somewhere so if you did miss anything then we can rerun
[19:05] <coreycb> zul, sure
[19:05] <zul> coreycb:  sweet....lets get busy
[19:05] <zul> rhetorically
[19:06] <coreycb> awkward silence
[19:06] <zul> heh
[19:06] <coreycb> :)
[19:13] <coreycb> zul, https://github.com/coreycb/pkg-scripts/blob/master/pkg-lp-to-ubuntu-server-dev
[19:14] <nacc> coreycb: zul: fwiw, have you looked at https://wiki.ubuntu.com/UbuntuDevelopment/Merging/GitWorkflow ?
[19:14] <nacc> it's what the server team is using for managing source packages in git
[19:14] <coreycb> nacc, no but I've been meaning to
[19:14] <nacc> coreycb: :)
[19:15] <coreycb> nacc, thanks for the reminder :)
[19:15] <nacc> i'm going to be sending a follow-up e-mail today hopefully to the MLs, with the latest developments, etc
[19:17] <nacc> it's not in and of itself dgit/gbp compatible necessarily (no pristine-tar branch, etc.), but i'm open to feedback and comments :)
[19:27] <rbasak> The difference here is that coreycb is the source of the packaging, rather than a consumer as in most of the packages our team looks after.
[19:28] <rbasak> He might still find git-dsc-commit useful if another Ubuntu developer uploads without using the official git tree.
[19:28] <rbasak> But otherwise, I'm not sure our workflow makes sense for him. He doesn't do merges, for example, only new upstream versions.
[19:32] <nacc> rbasak: ah ok
[19:41] <coreycb> rbasak, nacc: this might be useful to us, thanks for sharing.  we do a little bit of merging.  one of the issues we have is that new releases of openstack are developed in experimental, so we don't get any merge-o-matic benefits.
[19:43] <Braven> My servers are multihomed. They have two network interfaces. I only want to register Interface One in Windows DNS and not register Interface Two. I have created a static entry on the Windows DNS server using Interface One's IP. But since the servers are part of Active Directory, they can update their DNS record, and the servers are randomly updating DNS with Interface Two's IP. I would like to know if I can prevent
[19:43] <Braven>  the servers from updating DNS with Interface Two's IP.
[19:44] <tarpman> Braven: I'm not aware of anything on the ubuntu side that would be automatically updating DNS. normally that's done by the DHCP server as part of handling the DHCP request.
[19:44] <Braven> I have network trace show it
[19:45] <Braven> the IP are static
[19:45] <Braven> so there is no setting in ubuntu that say do not update DNS with this IP
[19:46] <Braven> or do not register this IP in dns
[19:46] <tarpman> I don't know. the fact that it would be doing it at all is news to me
[19:46] <tarpman> shutting up now, sorry I don't know enough about that to help
[19:47] <Braven> in windows u just uncheck a box
[19:52] <nacc> coreycb: we import anything that launchpad sees as published, so experimental, if used, does get picked up
[19:52] <nacc> coreycb: if you want to send me a source package, i can do a test import for you to see what the tree looks like
[19:52] <nacc> *source package name
[19:56] <rbasak> Braven: how are you configuring the network? /etc/network/interfaces? If using DHCP, then the configuration of dhclient might be relevant here. But I didn't think it did DNS updates by default.
[19:56] <rbasak> My guess would be that the Windows side is doing it in your case.
[19:57] <rbasak> I'd look into the configuration of your Windows DHCP server.
[19:57] <coreycb> nacc, thanks. let me get back to you.  i want to use a package that needs a merge so I can go through the workflow.
[19:57] <rbasak> But if you're using DHCP on the Ubuntu side, you can definitely tweak pretty much the entire DHCP request process in dhclient's configuration.
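[editor's note] Sketching rbasak's dhclient angle: Ubuntu's stock /etc/dhcp/dhclient.conf sends the machine's hostname to the DHCP server, and a server doing dynamic DNS (MAAS, or a Windows DHCP/AD setup) can use that name for registration. Whether this helps depends on which side is actually performing the update, so treat it as one knob to try:

```
# /etc/dhcp/dhclient.conf (sketch)
# Ubuntu's default config includes the line below; commenting it out means
# the DHCP server receives no hostname to register in dynamic DNS:
#send host-name = gethostname();
```

See dhclient.conf(5) for per-interface blocks and the full set of send/request statements if only one interface should be affected.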
[19:58] <nacc> coreycb: ack, sounds good
[20:07] <Braven> rbasak: interface ONE is using MAAS for DHCP
[20:08] <Braven> sorry I mean interface TWO is using MAAS for DHCP
[20:12] <Braven> rbasak: are you familiar with MAAS?
[20:54] <Braven> am I the only person on earth who does not want ubuntu to register itself in a windows DNS server
[20:58] <Smurphy> I am using linux servers for everything. why?
[20:59] <Smurphy> Only, I don't use Windows to screw my network. I configure the linux servers as being authoritative, and Windows has nothing to say. Period.
[20:59] <Smurphy> It works.
[21:54] <patdk-lap> braven, yes
[21:55] <patdk-lap> it's normal for a dhcp client to register itself with the authoritative dns server for what it was assigned
[21:55] <patdk-lap> doesn't matter if it is windows or any other server
[22:37] <nacc> rbasak: one thing i meant to say earlier; even if not directly useful to coreycb, I think based upon smoser's experience, it is pretty handy to have a `git blame` for files in a source package :) and i was basing it purely off looking at the script linked to and it resembles in some ways what the importer does (for the latest version, at least)
[22:53] <echosystm> hi guys
[22:53] <echosystm> i need to run a DNS server for a delegated subdomain
[22:54] <echosystm> all i need is some A records
[22:54] <echosystm> what is the easiest way to do this? i'd like to avoid bind if possible
[22:54] <echosystm> are any alternatives worth investigating? nsd? ??
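[editor's note] On the nsd route: serving a delegated subdomain with a few A records needs only a zone stanza in nsd.conf plus a zone file. All names and addresses below are placeholders:

```
# /etc/nsd/nsd.conf fragment
zone:
    name: "sub.example.com"
    zonefile: "sub.example.com.zone"

; /etc/nsd/sub.example.com.zone
$ORIGIN sub.example.com.
$TTL 3600
@    IN SOA ns1 hostmaster (2016101901 7200 3600 1209600 3600)
     IN NS  ns1
ns1  IN A   192.0.2.1
www  IN A   192.0.2.10
```

The parent zone must also delegate the subdomain with a matching NS record (and a glue A record for ns1) for resolvers to find this server at all.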
[23:21] <mwhudson> can an ~ubuntu-server admin subscribe the team to bugs on https://launchpad.net/ubuntu/+source/golang-1.7
[23:23] <rbasak> mwhudson: should that still be an ~ubuntu-server thing or should it be foundations now?
[23:23] <rbasak> jgrimm: ^
[23:23] <mwhudson> rbasak: good point
[23:24] <rbasak> I don't want to block you though. Shall I do it anyway, and you can think/ask about it?
[23:24] <rbasak> Given that the previous version is already ~ubuntu-server.
[23:28] <mwhudson> rbasak: would be nice
[23:29] <mwhudson> i mean, in practice i'm going to handle the bugs and i'm subscribed already
[23:29] <mwhudson> but this is about what if i am MIA
[23:29] <mwhudson> such as e.g. paternity leave...
[23:34] <nacc> rbasak: jgrimm: http://paste.ubuntu.com/23351245/
[23:34] <nacc> from http://paste.ubuntu.com/23351249/
[23:34] <nacc> adding vcs to the update-maintainer parameters
[23:49] <rbasak> mwhudson: done, though it looks like foundations are already subscribed?
[23:50] <mwhudson> rbasak: yeah, turns out i could do that myself
[23:50] <mwhudson> rbasak: just wanted to make it match golang-1.6, if we decide something different is more appropriate we should change both i guess
[23:50] <rbasak> OK
[23:51] <rbasak> That makes sense.
[23:57] <notuvo> is self-hosting an email server difficult?