[05:48] <pradiprwt> Hi everyone, I am facing an issue with my provider network: it is not accessible from outside, and the router gateway interface always shows as down
[05:49] <pradiprwt> can anyone please help me with how I can troubleshoot this?
[05:49] <pradiprwt> I have deployed OpenStack using Autopilot
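A few typical first checks for a provider-network router whose gateway port shows DOWN, sketched with the liberty-era neutron CLI; the router name is a placeholder, not something from this channel:

    # Is the L3 agent alive? A dead agent leaves gateway ports DOWN
    neutron agent-list | grep -i l3
    # Inspect the router and the status of its ports (look for the gateway port)
    neutron router-list
    neutron router-port-list <router-name>
    # Confirm the external network the gateway attaches to actually exists
    neutron net-external-list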
[07:53] <tjbiddle> Hi all. Could really use your help. I’m going on day 3 of banging my head against the wall without success here. Trying to set up a NAT server so that another server can have internet access. The NAT server has full internet access, and the client server has DNS working - but ping, curl, etc. will not work. All ports are wide open for internet and intranet in my hosting provider’s security group ACLs. Here’s some debug information on both machines -
[07:53] <tjbiddle> https://gist.github.com/thomasbiddle/ef9ad16d33df722f5061106042c2d2ae
[08:13] <jamespage> ddellav, coreycb: nova b3 underway
[08:16] <jamespage> ditto cinder
[09:00] <pradiprwt> Hi everyone, can anyone help me to understand openstack-neutron in an Autopilot deployment?
[09:27] <jamespage> ddellav, coreycb: nova, cinder and networking-ovn all uploaded
[09:30] <jamespage> moving on to keystone
[09:43] <jamespage> as well as manila
[09:57] <jamespage> coreycb, ddellav: doing glance
[09:57] <jamespage> keystone uploaded
[10:06] <jamespage> coreycb, ddellav: manila uploaded
[10:06] <jamespage> glance needs cursive to work its way in - in the Ubuntu binary NEW queue atm
[10:44] <jamespage> ddellav, coreycb: picking ceilometer up
[10:44] <jamespage> branch builds make this high throughput - thank you for helping to make that effective!
[10:46] <jamespage> and aodh
[10:52] <zioproto> hello all
[10:52] <zioproto> jamespage: are you here today ? :)
[10:52] <jamespage> I am
[10:53] <zioproto> I tried to rebuild the stable/liberty horizon package
[10:53] <zioproto> and did not build for trusty
[10:53] <zioproto> https://www.dropbox.com/s/faadi3sfkmfjgeb/horizon_8.0.1-0ubuntu1_amd64-20160902-0955.build?dl=0
[10:54] <zioproto> where is the proper place to file a bug for these compile issues?
[10:54] <jamespage> zioproto, ok so horizon is a little awkward to build, due to a packaging nuance - it uses two orig.tar.gz's
[10:54] <jamespage> one for horizon, and then we bundle all of the xstatic dependencies in a second one
[10:55] <jamespage> zioproto, have a look at debian/README.source
[10:55] <jamespage> for details on how to generate that second tarball
[10:55] <zioproto> OK, I will try to build it after reading the README and I will give you feedback
[10:56] <zioproto> I noticed because I am updating the documentation I have on GitHub
[11:05] <jamespage> coreycb, ddellav: aodh uploaded
[11:07] <ddellav> jamespage very productive this morning
[11:09] <ddellav> jamespage is there a list somewhere of what's left? Or is it just today's releases minus what you've just done?
[11:13] <zioproto> jamespage: so running the command ./debian/rules refresh-xstatic will create the tarball
[11:13] <zioproto> it creates it in the wrong directory
[11:13] <zioproto> I moved it to build-area
[11:13] <zioproto> sorry I mean the tarball horizon_8.0.1.orig-xstatic.tar.gz
[11:14] <ddellav> zioproto i found it's better to use debuild with horizon.
[11:14] <ddellav> i've not gotten gbp to work
[11:16] <zioproto> ddellav: what exactly do I have to do that will differ from here: https://wiki.ubuntu.com/OpenStack/CorePackages ?
[11:17] <ddellav> zioproto it's basically the same, except instead of using gbp buildpackage -S you use debuild -S
[11:17] <ddellav> obviously the xstatic-orig tarball needs to be in the same directory as the orig.
[11:18] <zioproto> so I should first manually call ./debian/rules refresh-xstatic and place the tarball in the right place
[11:18] <zioproto> I will try
[11:18] <ddellav> you call refresh-xstatic, that will put the extra orig tarball in ../
[11:18] <ddellav> then from inside the same directory you run debuild
[11:18] <ddellav> it's a drop-in replacement for gbp
[11:22] <zioproto> ddellav: when I run gbp buildpackage -S it downloads the tarballs into the folder ../build-area. But when I use debuild -S -us -uc it complains there are no tarballs
[11:24] <ddellav> zioproto are you trying to update horizon or just build what's there in the repo already?
[11:24] <zioproto> just build what is in the repo already
[11:25] <zioproto> ddellav: started with debcheckout --git-track='*' horizon
[11:25] <ddellav> ah ok, well gbp will pull the existing orig tar out of the pristine-tar branch, hence its name: git-buildpackage. I thought you were updating, so you would've had to pull an updated orig tarball for that to work.
[11:25] <zioproto> then I checkout the stable/liberty branch
[11:25] <zioproto> I pasted the build log earlier
[11:26] <ddellav> ok, since you're not updating, you don't want to do refresh-xstatic. That's only if you're updating a non-stable release. What you want to do is pull the existing orig-xstatic tarball from the archive and use that
[11:29] <ddellav> zioproto i assume you're using the 8.0.1 release? Here is the existing xstatic tarball: https://launchpad.net/ubuntu/+archive/primary/+files/horizon_8.0.1.orig-xstatic.tar.gz
[11:30] <ddellav> you put that, plus this: https://launchpad.net/ubuntu/+archive/primary/+files/horizon_8.0.1.orig.tar.gz in the directory directly above the horizon repo directory, then from INSIDE horizon, run debuild -S -us -uc
[11:30] <ddellav> and that will get you the .dsc so you can run sbuild or do whatever it is you want to do
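Pulling that exchange together, a sketch of the rebuild workflow described above, using the 8.0.1 tarballs linked in this session:

    # Check out the packaging repo and the stable branch
    debcheckout --git-track='*' horizon
    cd horizon
    git checkout stable/liberty
    # Fetch BOTH orig tarballs into the directory above the repo
    wget -P .. https://launchpad.net/ubuntu/+archive/primary/+files/horizon_8.0.1.orig.tar.gz
    wget -P .. https://launchpad.net/ubuntu/+archive/primary/+files/horizon_8.0.1.orig-xstatic.tar.gz
    # Build the source package with debuild (gbp cannot handle two orig tarballs)
    debuild -S -us -uc
    # Then build binaries from the resulting .dsc, e.g. with sbuild.
    # Only when updating to a new upstream release would you instead run
    # ./debian/rules refresh-xstatic to regenerate the -xstatic tarball.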
[11:43] <coreycb> ddellav, zioproto: updated https://wiki.ubuntu.com/OpenStack/CorePackages for horizon
[11:44] <coreycb> ddellav, jamespage: we can follow here to see where we're at with b3: https://private-fileshare.canonical.com/~coreycb/ca_upstream_versions_newton.html
[11:44] <ddellav> coreycb ah right, good call
[11:44] <coreycb> ddellav, jamespage: I fixed it up to favor upper-constraints versions over what's in github.com/openstack/releases/deliverables/newton
[11:53] <zioproto> ddellav: I was able to build horizon using the debuild command
[11:53] <zioproto> But I really want to understand
[11:53] <zioproto> I did a diff of the dsc files
[11:53] <zioproto> I mean the one created with debuild and the one created with gbp buildpackage
[11:53] <zioproto> http://pastebin.com/j8cXRWFQ
[11:54] <zioproto> so different horizon_8.0.1-0ubuntu1.debian.tar.xz files are generated?
[11:54] <ddellav> so I'm confused, you ran gbp and it completed and generated a dsc?
[11:55] <zioproto> yes
[11:55] <ddellav> ok then what was the issue?
[11:55] <zioproto> the failure comes later, when I run sbuild-liberty against the dsc
[11:55] <ddellav> i mean if that worked you didn't need to use debuild
[11:55] <zioproto> exactly, but the dsc generated by debuild is correct, so later I can build successfully with sbuild-liberty
[11:55] <ddellav> what is the issue with sbuild? is it saying it can't import xstatic?
[11:56] <zioproto> https://www.dropbox.com/s/faadi3sfkmfjgeb/horizon_8.0.1-0ubuntu1_amd64-20160902-0955.build?dl=0
[11:56] <zioproto> here the build log
[11:57] <ddellav> yea, no module named xstatic.main
[11:57] <ddellav> that's what always happened when I used gbp as well, which is why I use debuild. For whatever reason gbp does not properly import the xstatic tarball
[11:58] <ddellav> though it's showing a weird patch problem as well which shouldn't be happening
[12:01] <zioproto> yes, I am a bit confused about what the real problem is
[12:01] <zioproto> btw I found the workaround of using debuild
[12:13] <coreycb> jamespage, ddellav, I'm working on horizon
[12:13] <zioproto> coreycb: fyi I was on the stable/liberty branch
[12:14] <coreycb> zioproto, ok.  sorry I was talking about newton b3 there.
[12:14] <zioproto> no problem
[12:15] <ddellav> coreycb jamespage im working on neutron
[12:15] <ddellav> i mean neutron-*
[12:23] <zioproto> something not openstack specific. The Vagrant image ubuntu/xenial64 has had this bug open for a long time: https://bugs.launchpad.net/cloud-images/+bug/1565985 Do you guys know who to ping about this?
[12:24] <Odd_Bloke> zioproto: I believe the fix to that is sitting in -proposed at the moment. :)
[12:28] <zioproto> I guess a lot of Vagrant users will not move to xenial until the image is as stable as the trusty one
[12:28] <zioproto> I updated my tool to build the UCA packages https://github.com/zioproto/ubuntu-cloud-archive-vagrant-vm
[12:28] <zioproto> now it is aligned with the official docs
[12:30] <ddellav> coreycb please review: lp:~ddellav/ubuntu/+source/neutron-fwaas
[12:31] <ddellav> jamespage coreycb working on murano
[12:32] <coreycb> ddellav, we sync murano from debian
[12:32] <coreycb> ddellav, looking at neutron-fwaas
[12:32] <ddellav> coreycb ok
[12:33] <ddellav> coreycb i'll take trove then
[12:33] <coreycb> ddellav, +1
[12:34] <ddellav> coreycb nvm, trove b3 is already pushed just not released
[12:35] <coreycb> ddellav, you  sure?
[12:35] <ddellav> coreycb oh i didn't see the dev release there, nvm
[12:36] <ddellav> coreycb ready for review: lp:~ddellav/ubuntu/+source/neutron-vpnaas
[12:37] <coreycb> ddellav, can you move psycopg2 and pymysql to BD-Indep?
[12:37] <ddellav> coreycb sure
[12:37] <coreycb> ddellav, nm I'll do it, quicker that way
[12:38] <ddellav> ok
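For context, that request is a small debian/control change: the two database drivers move from Build-Depends to Build-Depends-Indep, since they are only needed to build the architecture-independent packages. A sketch, with assumed binary package names:

    Build-Depends: debhelper (>= 9),
                   dh-python,
                   python-all,
    Build-Depends-Indep: python-psycopg2,
                         python-pymysql,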
[12:46] <jamespage> zioproto, gbp does not support multi-orig tar balls
[12:46] <jamespage> zioproto, so yes you have to use debuild to generate the source package
[12:46] <jamespage> its less than ideal - apologies
[12:47] <zioproto> ok great ! at least we have a specific reason for that !
[12:56] <coreycb> ddellav, neutron-fwaas pushed/uploaded
[13:04] <jamespage> coreycb, ddellav: ok glance uploaded
[13:04] <jamespage> cursive published (was blocking)
[13:04] <jamespage> I'll raise the MIR for that now
[13:06] <coreycb> jamespage, awesome, moving right along!
[13:06] <coreycb> ddellav, neutron-vpnaas pushed/uploaded
[13:09] <coreycb> jamespage, ddellav: horizon's going to take a bit, I want to test it a bit since the xstatic imports/patch have changed quite a bit
[13:09] <coreycb> how many times can I say bit
[13:09] <jamespage> coreycb, ta - thanks for taking that one
[13:09]  * jamespage least favourite
[13:11] <ddellav> i think it's everyones least favorite
[13:12] <drfritznunkie> Anyone here from Ubuntu who maintains the community AWS AMIs? It looks like all instance store AMIs in us-east-1 are gone/invalid
[13:12]  * ddellav thinks horizon is a great candidate for snapping
[13:13] <jamespage> coreycb, could we add ubuntu-proposed to the version tracker?
[13:13] <jamespage> that might be useful for spotting blockages
[13:13] <coreycb> jamespage, I'll look into it, should be fairly simple
[13:14] <jamespage> coreycb, ddellav: the oslo packages look up-to-date now - do we have some work to do on clients as well still?
[13:14] <jamespage> I think we do looking at the report
[13:15] <drfritznunkie> we're seeing errors when launching new instances like: Client.InvalidManifest: HTTP403 (Forbidden) for URL ubuntu-us-east-1/images/hvm-instance/ubuntu-trusty-14.04-amd64-server-20160620.manifest.xml
[13:15] <coreycb> jamespage, yeah, they could use some work.  they should be fairly small bumps.  I took a pass earlier this week but they've released new versions since.
[13:15] <ddellav> coreycb please review lp:~ddellav/ubuntu/+source/neutron-lbaas
[13:19] <jamespage> coreycb, I'll take a run through
[13:20] <coreycb> jamespage, ok thanks
[13:24] <coreycb> ddellav, neutron-lbaas pushed/uploaded, thanks
[13:24] <freenerd> I'm seeing the same problem as drfritznunkie in us-east-1 for instance store AMIs. All (even some over a year old) are gone.
[13:25] <freenerd> 14.04 LTS that is
[13:33] <jamespage> coreycb, ddellav: cinder and congress clients uploaded
[13:36] <coreycb> jamespage, ddellav: working on backporting some things to the uca in the background here
[13:49] <jamespage> ddellav, coreycb: python-ironic* uploaded
[13:49] <jamespage> fwiw I'm prepping them as uploads for experimental and pushing to git, then versioning for ubuntu and direct uploading
[13:49] <jamespage> I'm not pushing the ubuntu versions to git
[13:50] <coreycb> jamespage, ok that's what I've done as well
[13:51] <jamespage> coreycb, ok
[14:01] <coreycb> jamespage, ddellav: I'm working on os-brick
[14:03] <jamespage> coreycb, doing clients still
[14:03] <jamespage> coreycb, ddellav: python-keystone* done
[14:03] <jamespage> auth and middleware
[14:03] <jamespage> doing magnum now
[14:11] <zioproto> Odd_Bloke: I found as a workaround to install vagrant-vbguest plugin
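That plugin rebuilds the VirtualBox guest additions inside the box when it boots; installing it is a one-liner:

    vagrant plugin install vagrant-vbguest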
[14:12] <coreycb> jamespage, ddellav: os-brick uploaded.  looking at oslo.db/messaging.
[14:12] <jamespage> coreycb, ddellav: magnumclient done
[14:14] <jamespage> manilaclient next
[14:15] <coreycb> jamespage, oh yay no more merging of oslo.messaging, thanks
[14:17] <jamespage> coreycb, \o/
[14:17] <jamespage> yeah that should help a bit
[14:23] <jamespage> coreycb, you might want to hold on oslo.db - .1 broke gnocchi
[14:23] <coreycb> jamespage, ah ok will do
[14:24] <coreycb> jamespage, I'll just push it
[14:28] <jamespage> coreycb, ddellav: manila and murano client libs done
[14:28]  * jamespage looks for next
[14:29] <jamespage> neutron and nova clients on my list next
[14:29] <dlloyd> anyone else seeing 403s with ubuntu amis from cloud-images.ubuntu.com in us-east-1?
[14:30] <Odd_Bloke> dlloyd: Yep, it's a known problem; we're working to fix it.
[14:30] <dlloyd> ok cool, thanks
[14:33] <Odd_Bloke> dlloyd: It should only affect instance-store AMIs, so you could switch to using EBS whilst we sort it out. :)
[14:33] <dlloyd> hah, fair
[14:33] <dlloyd> it's also on the mile-long todo list to vendor our own ;)
[14:34] <dlloyd> out of curiosity is this the best forum to see status about similar issues in the future?
[14:37] <ddellav> coreycb can you review liberty branch of lp:~ddellav/ubuntu/+source/neutron for sru?
[14:37] <drfritznunkie> the AMI outage in us-east-1 has been ongoing since 5AM EDT
[14:37] <drfritznunkie> where would be the best place for problems like this, as dlloyd mentions? here?
[14:39] <drfritznunkie> Odd_Bloke: any idea how soon the AMIs will be back up in us-east-1?
[14:41] <jamespage> ddellav, coreycb: nova and neutron clients uploaded
[14:41] <jamespage> that's the lot I think
[14:42] <coreycb> jamespage, \o/
[14:43] <jamespage> coreycb, ok
[14:44] <coreycb> jamespage, ddellav: oslo.messaging uploaded
[14:44] <ddellav> jamespage :D
[14:46] <jamespage> aodhclient got missed - doing that now
[15:21] <coreycb> ddellav, neutron 7.1.2 uploaded to liberty-staging
[15:21] <ddellav> coreycb thanks
[15:26] <Odd_Bloke> drfritznunkie: We're restoring now, it's just a matter of data transfer. :)
[15:33] <drfritznunkie> Odd_Bloke: the AMI ids are not going to change, correct?
[15:33] <Odd_Bloke> drfritznunkie: Correct.
[15:48] <Odd_Bloke> drfritznunkie: dlloyd: You should be seeing AMIs becoming usable again now; we're doing August, then 2016, then everything.
[16:01] <dlloyd> Odd_Bloke: awesome, thanks for the update
[16:06] <drfritznunkie> Looks like August and June are back up Odd_Bloke, thanks for the hard work!
[16:07] <drfritznunkie> Any idea what happened?
[16:08] <Odd_Bloke> drfritznunkie: Yep, a job accidentally "cleaned up" all the files in our us-east-1 S3 bucket. :)
[16:09] <drfritznunkie> ha BTDT, I feel your pain
[16:13] <drfritznunkie> Odd_Bloke: this is pushing up our backlog for vendoring our own images, do you all have your build scripts/process posted somewhere? I can't find anything that is recent
[16:16] <Odd_Bloke> drfritznunkie: Unfortunately not, it's a lot of Jenkins and shell scripts.
[16:17] <Odd_Bloke> drfritznunkie: And our case is pretty specialised (i.e. publish the same image in _every_ region for every storage type and every virt type), so you probably wouldn't need a lot of the complexity that we have. :)
[16:21] <drfritznunkie> I figured as much, but our AMI rolling scripts are 3+ years old at this point, and I didn't know if we'd missed the state-of-the-art in AMI rolling ;)
[16:58] <coreycb> ddellav, jamespage: I uploaded a couple remaining client packages for designate, mistral, senlin, and openstacksdk, and will be uploading zaqarclient and openstackclient shortly.
[17:02] <jamespage> coreycb, glance-store as well?
[17:02] <jamespage> tbh I'm done for today - I'll leave the rest in yours and ddellav's capable hands
[17:05] <coreycb> jamespage, I'll take a look, see ya
[17:25] <renatosilva> anyone had problems upgrading from 14.04 to 16.04?
[17:28] <RoyK> quite possibly ;)
[17:29] <RoyK> renatosilva: sometimes some libs get messed up, especially if you're using 3rd party repos
[17:29] <RoyK> but mostly it just works
[17:42] <ducasse> is it maybe time to remove the last section of the topic?
[17:47] <RoyK> ducasse: perhaps ;)
[19:26] <blizzow> Urgh, running my root partition on top of an mdadm raid1 device is destroying my life. Does anyone else here experience terribly slow performance with this?
[19:27] <blizzow> Running apt-get update just sends dpkg into a D state for minutes at a time.
[19:33] <blizzow> All the drives are returning good info via smartctl
[19:33] <blizzow> Nothing in the logs is saying anything bad.
[19:34] <blizzow> But a dist-upgrade that hits the linux kernel/headers packages takes 15-30 minutes or more. :(
[19:37] <sarnold> ouch
[19:40] <genii> Maybe your /boot is getting full
[19:42] <sarnold> misaligned sectors comes to mind as a possibility but I have trouble seeing how even that could force twenty minute dist-upgrades
[19:55] <compdoc> raid1 shouldn't be slow
[19:55] <compdoc> you could test it by breaking the mirror and running on one disk
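A sketch of that test with mdadm, assuming the array is /dev/md0 and the member to drop is /dev/sdb1 (both device names are placeholders):

    # Check array state and any resync in progress first - a background
    # resync alone can explain terrible write latency
    cat /proc/mdstat
    # Fail and remove one member, leaving the mirror degraded on one disk
    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
    # ...rerun the slow workload, then re-add the member and let it resync
    mdadm /dev/md0 --re-add /dev/sdb1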
[20:16] <sbeattie> blizzow: check your syslogs for disk errors as well, smartctl sees most things, but not always.
[20:35] <JanC> if it's SATA controller / SATA communication errors, SMART might not see it, I guess
[20:40] <patdk-lap> smart log would show it
[20:40] <patdk-lap> at least if the error made it to the cable
[20:41] <patdk-lap> blizzow, what drive are you using, and what is it's age?
[20:41] <patdk-lap> a full filesystem will make it slow
[20:41] <patdk-lap> bad sectors will make it slow, until they actually fail, and they can take 45min to fail
[20:42] <patdk-lap> using one of these new shingled (SMR) drives with a non-CoW filesystem will be painful
[20:42] <sarnold> oww
[20:43] <patdk-lap> CoW isn't so important, as much as writing data as a log is
[20:44] <blizzow> sbeattie: I've looked in syslog.  The only weird thing I see is this:  dev-disk-by\x2dpartlabel-EFI\x5cx20System\x5cx20Partition.device: Dev dev-disk-by\x2dpartlabel-EFI\x5cx20System\x5cx20Partition.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:04.0/0000:02:00.0/host8/port-8:7/end_device-8:7/target8:0:7/8:0:7:0/block/sdi/sdi1 and /sys/devices/pci0000:00/0000:00:04.0/0000:02:00.0/host8/
[20:45] <blizzow> patdk-lap: brand new SAS 7200 RPM seagate 4TB drives.
[20:45] <sarnold> standard systemdism, those happen everywhere :(
[20:45] <patdk-lap> that looks fine
[20:45] <patdk-lap> ah, so scsi smart, that is completely different from sata smart
[20:46] <patdk-lap> better and worse at the same time, the drive has smarts in it, so it doesn't show you raw info
[20:47] <blizzow> Filesystems are super clean. 2.8GB of 15GB used for the root partition.  3.6MB of 511MB used for /boot/efi
[20:52] <blizzow> sarnold: I did an alignment check on all the drives, they're optimal.
[20:53] <sarnold> topped up on blinker fluid?
[20:53] <patdk-lap> :)
[20:53] <blizzow> The pokey bits are plugged into the receptacles and the electrical faeries are flowing as far as I can tell.
[20:55] <sarnold> hehe
[20:56]  * patdk-lap goes to change his headlight fluid
[21:10] <tarpman> make sure the switch is set to MORE MAGIC, etc