[06:21] <Prabakaran> Hello Team, I have created a new bug for my charm and after linking the bug ID to the trunk branch in Launchpad I am not able to see my charm in the review queue. Could you please advise on the same?
[12:22] <gnuoy> jamespage, charmhelper mp as discussed https://code.launchpad.net/~gnuoy/charm-helpers/service-framework-loader/+merge/276519
[13:13] <jamespage> gnuoy, landed - thank you!
[13:14] <jamespage> coreycb, hey - thanks for the tox landing yesterday - I updated the two borked branches and generated a new one for ceph-radosgw
[13:15] <jamespage> I think that just leaves swift stuff and cinder-ceph
[13:36] <gnuoy> jamespage, follow up for the charm itself, sync + minor change to services.py https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api-odl/ch-sync/+merge/276520
[13:37] <jamespage> gnuoy, +1
[13:37] <gnuoy> ta
[14:14] <gnuoy> jamespage, In the interests of not duplicating effort I'm going to start on the unit and amulet tests for odl-controller
[14:14] <jamespage> gnuoy, ack
[14:15] <jamespage> gnuoy, I'll deploy ovs-odl and start looking at the north/south traffic issues BjornT hit
[14:15] <gnuoy> kk
[14:27] <coreycb> jamespage, no problem, those other two branches are landed now
[14:28] <jamespage> coreycb, thanks
[14:28] <coreycb> jamespage, do you have swift/cinder-ceph in the works?  if not I can pick those up if you want.
[14:29] <jamespage> coreycb, yeah done this morning - links on the pad - http://pad.ubuntu.com/tox-charm-landings
[14:29] <coreycb> jamespage, great I'll review them
[15:26] <gnuoy> jamespage, git+ssh://git.launchpad.net/~sdn-charmers/charms/+source/openvswitch-odl is proving problematic with amulet. There's no series in there and I'm going to need to update python-charmworldlib to be able to parse it
[15:27] <jamespage> gnuoy, oh I pushed that to lp:~openstack-charmers/charms/trusty/openvswitch-odl/trunk this morning
[15:27] <gnuoy> jamespage, \o/
[15:27] <jamespage> that sounds like a nasty blocker to git migration...
[15:28] <gnuoy> ah well, let's stay on bzr then
[15:44] <bdx> hey what's going on? I'm testing out "enable-local-dhcp-and-metadata"
[15:45] <gnuoy> jamespage, Using http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/ovs-odl/ovs-odl.yaml relates neutron-gateway and openvswitch-odl with the juju-info interface, is that right?
[15:45] <jamespage> gnuoy, it is
[15:45] <gnuoy> ok
[15:46] <bdx> I'm having a hard time locating the metadataproxy datasource that instances get cloud-init userdata from when enable-local-dhcp-and-metadata is set
[15:46] <bdx> has anyone had luck using enable-local-dhcp-and-metadata ?
[15:47] <gnuoy> bdx, I have had lots of luck with it
[15:47] <bdx> I'm guessing the metadata api endpoint is also the dhcp port then yea?
[15:48] <bdx> or the dhcp port will at least forward traffic to 169.254.169.254/32 correct?
[15:49] <gnuoy> bdx, if I remember correctly the dhcp request returns a static route pointing 169.254.169.254 at the nic inside the dhcp namespace
[15:50] <gnuoy> bdx, give me a minute and I'll fire up a deploy and remind myself
[15:51] <bdx> gnuoy: that is consistent with what I'm seeing....
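gnuoy's recollection above can be checked directly on the node hosting the DHCP agent: with local DHCP and metadata enabled, the metadata proxy and the static route both live inside the qdhcp namespace. A hedged sketch (the network UUID is a placeholder, and exact listener details depend on your neutron configuration):

```shell
# Illustrative only: run on the neutron-gateway/compute node hosting the
# DHCP agent. List the DHCP namespaces neutron created (one per network).
sudo ip netns list

# Inside a qdhcp namespace (UUID below is a placeholder), the dnsmasq
# port carries the 169.254.169.254 address, and DHCP option 121 pushes
# a static route for it to each guest.
sudo ip netns exec qdhcp-<network-uuid> ip addr show
sudo ip netns exec qdhcp-<network-uuid> netstat -tlnp

# From a guest that has taken a DHCP lease, the pushed route is visible:
ip route show
```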
[15:51] <bdx> gnuoy: when you do, create your tenant networks as vlan networks
[15:51] <bdx> if you don't mind
[15:51] <gnuoy> bdx, I do
[15:51] <bdx> ok
[15:51] <gnuoy> bdx, haha that was out of sync
[15:52] <gnuoy> I meant 'I do create them as vlan networks'
[15:52] <gnuoy> bdx, ^ I didn't mean I minded
[15:52] <bdx> totally
[15:52] <bdx> ha
[15:52] <bdx> gotcha
[16:07] <mattyw> jose, ping?
[16:08] <jose> mattyw: pong!
[16:25] <bdx> Nodes fail to reboot/powercycle when ceph-osd + (nova-compute + hugepages) due to deadlock
[16:25] <bdx> I'll be filing a bug for it in a few
[16:25] <bdx> it's had me stopped up for the last few days
[16:31] <Prabakaran> Hello Team, I have created a new bug for my charm and after linking the bug ID to the trunk branch in Launchpad I am not able to see my charm in the review queue. Could you please advise on the same?
[16:32] <gnuoy> bdx, ok, I'm up and running with enable-local-dhcp-and-metadata
[16:33] <gnuoy> bdx, http://paste.ubuntu.com/13092618/
[16:33] <bdx> gnuoy: nice!
[16:34] <gnuoy> so those networks are provider networks
[16:34] <bdx> entirely.....
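For reference, a kilo-era VLAN provider network like the ones gnuoy describes could be created along these lines (network name, physnet label, VLAN ID, and CIDR are placeholders, not taken from the paste):

```shell
# Hypothetical example: create a shared VLAN provider network plus a
# subnet with DHCP enabled, so enable-local-dhcp-and-metadata can serve
# leases and metadata from the local qdhcp namespace.
neutron net-create vlan216 --shared \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 216
neutron subnet-create vlan216 10.216.0.0/24 \
    --name vlan216-subnet --enable-dhcp
```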
[16:34] <Prabakaran> My charm link is "https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-platform-rtm/trunk"
[16:35] <gnuoy> bdx, so your guests get an ip but fail to retrieve metadata?
[16:36] <bdx> gnuoy: yes.....I just found an inconsistency: on one of my physical nodes the port <-> vlan trunk was on eth4 ....whereas the rest of my nodes use eth3...grrr
[16:36] <gnuoy> bdx, ah !
[16:38] <gnuoy> bdx, you can use mac addresses in the data-port setting for neutron-openvswitch
[16:38] <gnuoy> to cope with inconsistencies
[16:38] <bdx> gnuoy: oh no way, that's huge!!!!!!
[16:38] <bdx> hah
[16:38] <bdx> I must have missed that
[16:38] <gnuoy> bdx, http://paste.ubuntu.com/13092664/
[16:39] <bdx> oh taht is awesome!
[16:39] <bdx> that*
[16:39] <gnuoy> yeah, it's really useful
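The data-port trick gnuoy mentions could look something like this (bridge name and MAC addresses are made up; check the neutron-openvswitch charm's config description for the exact accepted syntax on your charm revision):

```shell
# Hypothetical: map br-data to a physical NIC per host by MAC address
# instead of by interface name, so an eth3-vs-eth4 inconsistency across
# nodes doesn't matter. Entries are space-delimited bridge:port pairs,
# where the port may be given as a MAC.
juju set neutron-openvswitch \
    data-port="br-data:3c:97:0e:aa:bb:01 br-data:3c:97:0e:aa:bb:02"
```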
[17:29] <tvansteenburgh> hazmat, you around?
[17:55] <Prabakaran> Hello Team, I have created a new bug for my charm and after linking the bug ID to the trunk branch in Launchpad I am not able to see my charm in the review queue. Could you please advise on the same? Charm link is "https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-platform-rtm/trunk"
[17:58] <tvansteenburgh> Prabakaran: see https://jujucharms.com/docs/stable/authors-charm-store#recommended-charms, point #3
[17:59] <tvansteenburgh> Prabakaran: i think you need to subscribe the charmers team to your bug
[18:08] <tvansteenburgh> Prabakaran: i subscribed charmers for you, so your bug should show up in the Review Queue shortly
[18:19] <Prabakaran> Thanks tvansteenburgh
[20:02] <firl> any openstackers on able to help me diagnose something?
[20:16] <catbus1> firl: Hi, if you ask about the specific issue you're encountering, the chances are higher that someone who can help will respond.
[20:17] <firl> catbus1, yeah I am trying to diagnose why I can’t spawn an instance; I still haven't found the relevant log file, and haven’t been able to get anything more than a vague python error
[20:21] <catbus1> firl: so you have got openstack cloud deployed using juju, and are you using juju to deploy a workload to this openstack environment? or are you using openstack dashboard to try to launch an instance?
[20:21] <firl> I have a juju deployed kilo setup
[20:21] <firl> last night I had to reboot the state machines
[20:22] <firl> and now when I launch a vm from horizon I get this error:
[20:22] <firl> http://pastebin.com/RZjcMzUz
[20:22] <firl> been trying to find the issue in the log files, but 42 nodes and multiple log files lends itself to taking a while
[20:28] <catbus1> firl: Thanks for the error info. I personally can't help... wait to see if others on this channel can help.
[20:28] <firl> :)
[20:28] <firl> it might be related to neutron / rabbitmq but no idea how to make sure
[20:29] <catbus1> firl: do you think this is related to Juju? if it's openstack kilo, wanna also check #openstack for help?
[20:30] <firl> I was hoping from the juju side to help figure out how to diagnose it since the bundle created it, and I can upgrade via juju if there is a patch etc
[20:30] <firl> but no I haven't
[20:31] <catbus1> understood.
[20:32] <firl> like, I think it’s rabbitmq but I don’t know how to check the credentials, because I didn’t set it up - juju did
[20:34] <catbus1> firl: can you share the kilo bundle yaml file you used?
[20:35] <firl> http://pastebin.com/T9uE9yJB
[22:39] <bdx> hey what's going on? Quick question on deploys using enable-local-dhcp-and-metadata....
[22:40] <bdx> If enable-local-dhcp-and-metadata is set, then floating ip assignment can't happen, because there's no neutron router, right?
[22:41] <bdx> and to add to that, neutron routers can't exist either, right?
[22:45] <bdx> core, dev:^^
[22:52] <bdx> any takers^^?