[05:48] Hi everyone, I am facing an issue with the provider network: it is not accessible from outside, and the router gateway interface always shows as down
[05:49] can anyone please help me with how I can troubleshoot?
[05:49] I have deployed OpenStack using Autopilot
=== ogra_ is now known as ogra
[07:53] Hi all. Could really use your help. I’m going on day 3 of banging my head on the wall without success here. Trying to set up a NAT server so that another server can have internet access. The NAT server has full Internet access, the client server has DNS working - but ping, curl, etc. will not work. All ports are wide open for internet and intranet from my hosting provider's security group ACLs. Here’s some debug information on both machines -
[07:53] https://gist.github.com/thomasbiddle/ef9ad16d33df722f5061106042c2d2ae
[08:13] ddellav, coreycb: nova b3 underway
[08:16] ditto cinder
[09:00] Hi everyone, can anyone help me to understand openstack-neutron in an autopilot deployment?
[09:27] ddellav, coreycb: nova, cinder and networking-ovn all uploaded
[09:30] moving on to keystone
[09:43] as well as manila
[09:57] coreycb, ddellav: doing glance
[09:57] keystone uploaded
[10:06] coreycb, ddellav: manila uploaded
[10:06] glance needs cursive to work its way in - in the Ubuntu binary NEW queue atm
[10:44] ddellav, coreycb: picking ceilometer up
[10:44] branch builds make this high throughput - thank you for helping to make that effective!
[10:46] and aodh
[10:52] hello all
[10:52] jamespage: are you here today ? :)
[10:52] I am
[10:53] I tried to rebuild the stable/liberty horizon package
[10:53] and it did not build for trusty
[10:53] https://www.dropbox.com/s/faadi3sfkmfjgeb/horizon_8.0.1-0ubuntu1_amd64-20160902-0955.build?dl=0
[10:54] where is the proper place to file a bug for these compile issues?
[10:54] zioproto, ok so horizon is a little awkward to build, due to a packaging nuance - it uses two orig.tar.gz's
[10:54] one for horizon, and then we bundle all of the xstatic dependencies in a second one
[10:55] zioproto, have a look at debian/README.source
[10:55] for details on how to generate that second tarball
[10:55] OK, will try to build it after reading the README and I will give you feedback
[10:56] I noticed because I am upgrading the documentation I have on github
[11:05] coreycb, ddellav: aodh uploaded
=== bilde2910_ is now known as bilde2910
[11:07] jamespage very productive this morning
[11:09] jamespage is there a list somewhere of what's left? Or is it just today's releases - what you've just done?
[11:09] s/-/minus
[11:13] jamespage: so running the command ./debian/rules refresh-xstatic will create the tarball
[11:13] it creates it in the wrong directory
[11:13] I moved it to build-area
[11:13] sorry, I mean the tarball horizon_8.0.1.orig-xstatic.tar.gz
[11:14] zioproto i found it's better to use debuild with horizon.
[11:14] i've not gotten gbp to work
[11:16] ddellav: what do I have to do exactly that will differ from here: https://wiki.ubuntu.com/OpenStack/CorePackages ?
[11:17] zioproto it's basically the same, except instead of using gbp buildpackage -S you use debuild -S
[11:17] obviously the xstatic-orig tarball needs to be in the same directory as the orig.
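For the NAT question at the top of the log (client resolves DNS but ping/curl fail), a minimal sketch of the kind of Linux NAT gateway being described, not taken from the gist: interface names and the 192.0.2.1 address are illustrative, and on many hosting providers the gateway instance also needs its source/destination check disabled before forwarded traffic will pass.

    # On the NAT server: allow forwarding and masquerade traffic leaving the
    # internet-facing interface (assumed here to be eth0; eth1 faces the client).
    sysctl -w net.ipv4.ip_forward=1
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
    iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

    # On the client: send everything via the NAT server's internal address.
    ip route replace default via 192.0.2.1

DNS working while ping/curl time out is consistent with forwarded traffic being dropped at the gateway, so ip_forward and the POSTROUTING rule are usually the first things to check.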
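A rough sketch of the refresh-xstatic flow discussed just above, for the case where horizon is being updated to a new upstream version (a plain rebuild of an existing release is sketched further down); the version in the filename is simply the one from this log.

    # from inside the horizon packaging checkout
    ./debian/rules refresh-xstatic   # regenerates the bundled xstatic tarball,
                                     # e.g. horizon_8.0.1.orig-xstatic.tar.gz
    # both orig tarballs have to end up next to each other in ../ --
    # move the xstatic one there if it lands somewhere else (build-area, as above)
    debuild -S -us -uc               # instead of gbp buildpackage -S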
[11:18] so I should first call ./debian/rules refresh-xstatic manually and place the tarball in the right place
[11:18] I will try
[11:18] you call refresh-xstatic, that will put the extra orig tarball in ../
[11:18] then from inside the same directory you run debuild
[11:18] it's a drop-in replacement for gbp
[11:22] ddellav: when I run gbp buildpackage -S it will download the tarballs into the folder ../build-area. But when I use debuild -S -us -uc it complains there are no tarballs
[11:24] zioproto are you trying to update horizon or just build what's there in the repo already?
[11:24] just build what is in the repo already
[11:25] ddellav: started with debcheckout --git-track='*' horizon
[11:25] ah ok, well gbp will pull out the existing orig tar from the pristine-tar branch, hence its name: git-buildpackage. I thought you were updating, so you would've had to pull an updated orig tarball for that to work.
[11:25] then I checked out the stable/liberty branch
[11:25] I pasted the build log earlier
[11:26] ok, since you're not updating, you don't want to do refresh-xstatic. That's only if you're updating a non-stable release. What you want to do is pull the existing orig-xstatic tarball from the archive and use that
[11:29] zioproto i assume you're using the 8.0.1 release? Here is the existing xstatic tarball: https://launchpad.net/ubuntu/+archive/primary/+files/horizon_8.0.1.orig-xstatic.tar.gz
[11:30] you put that, plus this: https://launchpad.net/ubuntu/+archive/primary/+files/horizon_8.0.1.orig.tar.gz in the directory directly above the horizon repo directory, then from INSIDE horizon, run debuild -S -us -uc
[11:30] and that will get you the .dsc so you can run sbuild or do whatever it is you want to do
[11:43] ddellav, zioproto: updated https://wiki.ubuntu.com/OpenStack/CorePackages for horizon
[11:44] ddellav, jamespage: we can follow here to see where we're at with b3: https://private-fileshare.canonical.com/~coreycb/ca_upstream_versions_newton.html
[11:44] coreycb ah right, good call
[11:44] ddellav, jamespage: I fixed it up to favor upper-constraints versions over what's in github.com/openstack/releases/deliverables/newton
[11:53] ddellav: I was able to build horizon using the debuild command
[11:53] But I really want to understand
[11:53] I did a diff of the dsc files
[11:53] file
[11:53] I mean the one created with debuild and the one created with gbp buildpackage
[11:53] http://pastebin.com/j8cXRWFQ
[11:54] so different horizon_8.0.1-0ubuntu1.debian.tar.xz files are generated?
[11:54] so im confused, you ran gbp and it completed and generated a dsc?
[11:55] yes
[11:55] ok then what was the issue?
[11:55] the failure is later, when I run sbuild-liberty against the dsc
[11:55] i mean if that worked you didn't need to use debuild
[11:55] exactly, but the dsc generated by debuild is proper so that later I can build successfully with sbuild-liberty
[11:55] what is the issue with sbuild? is it saying it can't import xstatic?
[11:56] https://www.dropbox.com/s/faadi3sfkmfjgeb/horizon_8.0.1-0ubuntu1_amd64-20160902-0955.build?dl=0
[11:56] here is the build log
[11:57] yea, no module named xstatic.main
[11:57] that's what always happened when i used gbp as well, which is why i use debuild. For whatever reason gbp does not properly import the xstatic tarball
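Pulling the steps from the exchange above together for the rebuild-what's-already-in-the-repo case, roughly (the final sbuild line is a stand-in for the sbuild-liberty wrapper used in the log, not its exact invocation):

    # fetch the two published orig tarballs into the directory above the checkout
    cd ..
    wget https://launchpad.net/ubuntu/+archive/primary/+files/horizon_8.0.1.orig.tar.gz
    wget https://launchpad.net/ubuntu/+archive/primary/+files/horizon_8.0.1.orig-xstatic.tar.gz

    # build the source package from inside the checkout, then binary-build the .dsc
    cd horizon
    debuild -S -us -uc
    sbuild ../horizon_8.0.1-0ubuntu1.dsc   # stand-in for sbuild-liberty against a trusty/liberty chroot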
[11:58] though it's showing a weird patch problem as well, which shouldn't be happening
[12:01] yes, I am a bit confused about what the real problem is
[12:01] btw I found the workaround of using debuild
[12:13] jamespage, ddellav, I'm working on horizon
[12:13] coreycb: fyi I was on the stable/liberty branch
[12:14] zioproto, ok. sorry, I was talking about newton b3 there.
[12:14] no problem
[12:15] coreycb jamespage im working on neutron
[12:15] i mean neutron-*
[12:23] something not openstack specific. The Vagrant image ubuntu/xenial64 has had this bug open for a long time: https://bugs.launchpad.net/cloud-images/+bug/1565985 Do you guys know who to ping about this?
[12:23] Launchpad bug 1565985 in cloud-images "vagrant vb ubuntu/xenial64 cannot mount synced folders" [Undecided,In progress]
[12:24] zioproto: I believe the fix to that is sitting in -proposed at the moment. :)
[12:28] I guess a lot of Vagrant users will not move to xenial until the image is as stable as the trusty one
[12:28] I updated my tool to build the UCA packages https://github.com/zioproto/ubuntu-cloud-archive-vagrant-vm
[12:28] now it is aligned to the official docs
[12:30] coreycb please review: lp:~ddellav/ubuntu/+source/neutron-fwaas
[12:31] jamespage coreycb working on murano
[12:32] ddellav, we sync murano from debian
[12:32] ddellav, looking at neutron-fwaas
[12:32] coreycb ok
[12:33] coreycb i'll take trove then
[12:33] ddellav, +1
[12:34] coreycb nvm, trove b3 is already pushed just not released
[12:35] ddellav, you sure?
[12:35] coreycb oh i didn't see the dev release there, nvm
[12:36] coreycb ready for review: lp:~ddellav/ubuntu/+source/neutron-vpnaas
[12:37] ddellav, can you move psycopg2 and pymysql to BD-Indep?
[12:37] coreycb sure
[12:37] ddellav, nm I'll do it, quicker that way
[12:38] ok
[12:46] zioproto, gbp does not support multi-orig tarballs
[12:46] zioproto, so yes you have to use debuild to generate the source package
[12:46] it's less than ideal - apologies
[12:47] ok great! at least we have a specific reason for that!
[12:56] ddellav, neutron-fwaas pushed/uploaded
[13:04] coreycb, ddellav: ok glance uploaded
[13:04] cursive published (was blocking)
[13:04] I'll raise the MIR for that now
[13:06] jamespage, awesome, moving right along!
[13:06] ddellav, neutron-vpnaas pushed/uploaded
[13:09] jamespage, ddellav: horizon's going to take a bit, I want to test it a bit since the xstatic imports/patch have changed quite a bit
[13:09] how many times can I say bit
[13:09] coreycb, ta - thanks for taking that one
[13:09] * jamespage least favourite
[13:11] i think it's everyone's least favorite
[13:12] Anyone here from Ubuntu who maintains the community AWS AMIs? It looks like all instance store AMIs in us-east-1 are gone/invalid
[13:12] * ddellav thinks horizon is a great candidate for snapping
[13:13] coreycb, could we add ubuntu-proposed to the version tracker?
[13:13] that might be useful for spotting blockages
[13:13] jamespage, I'll look into it, should be fairly simple
[13:14] coreycb, ddellav: oslo's look up-to-date now - do we have some work to do on clients as well still?
[13:14] I think we do, looking at the report
[13:15] we're seeing errors when launching new instances like: Client.InvalidManifest: HTTP403 (Forbidden) for URL ubuntu-us-east-1/images/hvm-instance/ubuntu-trusty-14.04-amd64-server-20160620.manifest.xml
[13:15] jamespage, yeah, they could use some work. they should be fairly small bumps. I took a pass earlier this week but they've released new versions since.
[13:15] coreycb please review lp:~ddellav/ubuntu/+source/neutron-lbaas
[13:19] coreycb, I'll take a run through
[13:20] jamespage, ok thanks
[13:24] ddellav, neutron-lbaas pushed/uploaded, thanks
[13:24] I'm seeing the same problem as drfritznunkie in us-east-1 for instance store AMIs. All (even some over a year old) are gone.
[13:25] 14.04 LTS, that is
[13:33] coreycb, ddellav: cinder and congress clients uploaded
[13:36] jamespage, ddellav: working on backporting some things to the uca in the background here
[13:49] ddellav, coreycb: python-ironic* uploaded
[13:49] fwiw I'm prepping them as uploads for experimental and pushing to git, then versioning for ubuntu and direct uploading
[13:49] I'm not pushing the ubuntu versions to git
[13:50] jamespage, ok that's what I've done as well
[13:51] coreycb, ok
[14:01] jamespage, ddellav: I'm working on os-brick
[14:03] coreycb, doing clients still
[14:03] coreycb, ddellav: python-keystone* done
[14:03] auth and middleware
[14:03] doing magnum now
[14:11] Odd_Bloke: I found a workaround: installing the vagrant-vbguest plugin
[14:12] jamespage, ddellav: os-brick uploaded. looking at oslo.db/messaging.
[14:12] coreycb, ddellav: magnumclient done
[14:14] manilaclient next
[14:15] jamespage, oh yay, no more merging of oslo.messaging, thanks
[14:17] coreycb, \o/
[14:17] yeah, that should help a bit
[14:23] coreycb, you might want to hold on oslo.db - .1 broke gnocchi
[14:23] jamespage, ah ok will do
[14:24] jamespage, I'll just push it
[14:28] coreycb, ddellav: manila and murano client libs done
[14:28] * jamespage looks for next
[14:29] neutron and nova clients on my list next
[14:29] anyone else seeing 403s with ubuntu amis from cloud-images.ubuntu.com in us-east-1?
[14:30] dlloyd: Yep, it's a known problem; we're working to fix it.
[14:30] ok cool, thanks
[14:33] dlloyd: It should only affect instance-store AMIs, so you could switch to using EBS whilst we sort it out. :)
[14:33] hah, fair
[14:33] it's also on the mile-long todo list to vendor our own ;)
[14:34] out of curiosity, is this the best forum to see status about similar issues in the future?
[14:37] coreycb can you review the liberty branch of lp:~ddellav/ubuntu/+source/neutron for sru?
[14:37] the AMI outage in us-east-1 has been ongoing since 5AM EDT
[14:37] where would be the best place, as dlloyd mentions, for problems like this? here?
[14:39] Odd_Bloke: any idea how soon the AMIs will be back up in us-east-1?
[14:41] ddellav, coreycb: nova and neutron clients uploaded
[14:41] that's the lot I think
[14:42] jamespage, \o/
[14:43] coreycb, ok
[14:44] jamespage, ddellav: oslo.messaging uploaded
[14:44] jamespage :D
[14:46] aodhclient got missed - doing that now
[15:21] ddellav, neutron 7.1.2 uploaded to liberty-staging
[15:21] coreycb thanks
[15:26] drfritznunkie: We're restoring now, it's just a matter of data transfer. :)
[15:33] Odd_Bloke: the AMI ids are not going to change, correct?
[15:33] drfritznunkie: Correct.
[15:48] drfritznunkie: dlloyd: You should be seeing AMIs becoming usable again now; we're doing August, then 2016, then everything.
[16:01] Odd_Bloke: awesome, thanks for the update
[16:06] Looks like August and June are back up Odd_Bloke, thanks for the hard work!
[16:07] Any idea what happened?
[16:08] drfritznunkie: Yep, a job accidentally "cleaned up" all the files in our us-east-1 S3 bucket. :)
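As an aside, the vagrant-vbguest workaround mentioned at 14:11 for the ubuntu/xenial64 synced-folder bug (LP #1565985) is a one-liner; the plugin rebuilds the VirtualBox guest additions inside the box so the vboxsf shared folders can mount again. A minimal sketch:

    vagrant plugin install vagrant-vbguest
    vagrant up    # with a Vagrantfile using config.vm.box = "ubuntu/xenial64"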
[16:09] ha, BTDT, I feel your pain
[16:13] Odd_Bloke: this is pushing up our backlog for vendoring our own images, do you all have your build scripts/process posted somewhere? I can find anything that is recent
[16:13] ...can't find anything recent
[16:16] drfritznunkie: Unfortunately not, it's a lot of Jenkins and shell scripts.
[16:17] drfritznunkie: And our case is pretty specialised (i.e. publish the same image in _every_ region for every storage type and every virt type), so you probably wouldn't need a lot of the complexity that we have. :)
[16:21] I figured as much, but our AMI rolling scripts are 3+ years old at this point, and I didn't know if we'd missed the state-of-the-art in AMI rolling ;)
=== iberezovskiy_ is now known as iberezovskiy_off
[16:58] ddellav, jamespage: I uploaded a couple of remaining client packages for designate, mistral, senlin, openstacksdk, and will be uploading zaqarclient, openstackclient shortly.
[17:02] coreycb, glance-store as well?
[17:02] tbh I'm done for today - I'll leave the rest in yours and ddellav's capable hands
[17:05] jamespage, I'll take a look, see ya
[17:25] anyone had problems upgrading from 14.04 to 16.04?
[17:28] quite possibly ;)
[17:29] renatosilva: sometimes some libs get messed up, especially if you're using 3rd party repos
[17:29] but mostly it just works
[17:42] is it maybe time to remove the last section of the topic?
[17:47] ducasse: perhaps ;)
[19:26] Urgh, running my root partition on top of an mdadm raid1 device is destroying my life. Does anyone else here experience terribly slow performance with this?
[19:27] Running apt-get update just sends dpkg into a D state for minutes at a time.
[19:33] All the drives are returning good info via smartctl
[19:33] Nothing in the logs is saying anything bad.
[19:34] But a dist upgrade that hits the linux kernel/headers packages takes more than 15-30 minutes. :(
[19:37] ouch
[19:40] Maybe your /boot is getting small
[19:42] misaligned sectors come to mind as a possibility, but I have trouble seeing how even that could force twenty minute dist-upgrades
[19:55] raid1 shouldn't be slow
[19:55] you could test it by breaking the mirror and running on one disk
[20:16] blizzow: check your syslogs for disk errors as well, smartctl sees most things, but not always.
[20:35] if it's SATA controller / SATA communication errors, SMART might not see it, I guess
[20:40] smart log would show it
[20:40] at least if the error made it to the cable
[20:41] blizzow, what drive are you using, and what is its age?
[20:41] a full filesystem will make it slow
[20:41] bad sectors will make it slow, until they actually fail, and they can take 45min to fail
[20:42] using one of these new shingled drives and a non-cow filesystem will be painful
[20:42] oww
[20:43] cow isn't so important, as much as writing data as a log is
[20:44] sbeattie: I've looked in syslog. The only weird thing I see is this: dev-disk-by\x2dpartlabel-EFI\x5cx20System\x5cx20Partition.device: Dev dev-disk-by\x2dpartlabel-EFI\x5cx20System\x5cx20Partition.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:04.0/0000:02:00.0/host8/port-8:7/end_device-8:7/target8:0:7/8:0:7:0/block/sdi/sdi1 and /sys/devices/pci0000:00/0000:00:04.0/0000:02:00.0/host8/
[20:45] patdk-lap: brand new SAS 7200 RPM seagate 4TB drives.
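The checks being suggested in this exchange, gathered into one sketch — the device and array names are illustrative, and the "break the mirror" test is only sensible with the array healthy and backups in hand:

    # array state and per-member health
    cat /proc/mdstat
    mdadm --detail /dev/md0              # /dev/md0 is illustrative
    smartctl -x /dev/sda                 # SAS drives report SCSI log pages rather
    smartctl -x /dev/sdb                 # than raw SATA attributes, as noted below

    # watch per-disk latency while dpkg is grinding (iostat is in the sysstat
    # package); one member much slower than the other points at that disk
    # rather than at md itself
    iostat -dx 5

    # the "break the mirror" test: run degraded on one member at a time
    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1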
[20:45] standard systemdism, those happen everywhere :(
[20:45] that looks fine
[20:45] ah, so scsi smart, that is completely different from sata smart
[20:46] better and worse at the same time, the drive has smarts in it, so it doesn't show you raw info
[20:47] Filesystems are super clean. 2.8GB of 15GB used for the root partition. 3.6MB of 511MB used for /boot/efi
[20:52] sarnold: I did an alignment check on all the drives, they're optimal.
[20:53] topped up on blinker fluid?
[20:53] :)
[20:53] The pokey bits are plugged into the receptacles and the electrical faeries are flowing as far as I can tell.
[20:55] hehe
[20:56] * patdk-lap goes to change his headlight fluid
[21:10] make sure the switch is set to MORE MAGIC, etc