[06:21] Hello Team, I have created a new bug for my charm, and after linking the bug ID to the trunk branch in Launchpad I am not able to see my charm in the review queue. Could you please advise?
=== axino` is now known as axino
[12:22] jamespage, charm-helpers MP as discussed: https://code.launchpad.net/~gnuoy/charm-helpers/service-framework-loader/+merge/276519
[13:13] gnuoy, landed - thank you!
[13:14] coreycb, hey - thanks for the tox landing yesterday - I updated the two borked branches and generated a new one for ceph-radosgw
[13:15] I think that just leaves the swift stuff and cinder-ceph
[13:36] jamespage, follow-up for the charm itself, sync + minor change to services.py: https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api-odl/ch-sync/+merge/276520
[13:37] gnuoy, +1
[13:37] ta
[14:14] jamespage, in the interests of not duplicating effort I'm going to start on the unit and amulet tests for odl-controller
[14:14] gnuoy, ack
[14:15] gnuoy, I'll deploy ovs-odl and start looking at the north/south traffic issues BjornT hit
[14:15] kk
[14:27] jamespage, no problem, those other two branches are landed now
[14:28] coreycb, thanks
[14:28] jamespage, do you have swift/cinder-ceph in the works? If not, I can pick those up if you want.
[14:29] coreycb, yeah, done this morning - links on the pad - http://pad.ubuntu.com/tox-charm-landings
[14:29] jamespage, great, I'll review them
=== mwenning is now known as mwenning-wfh
[15:26] jamespage, git+ssh://git.launchpad.net/~sdn-charmers/charms/+source/openvswitch-odl is proving problematic with amulet. There's no series in there, and I'm going to need to update python-charmworldlib to be able to parse it
[15:27] gnuoy, oh, I pushed that to lp:~openstack-charmers/charms/trusty/openvswitch-odl/trunk this morning
[15:27] jamespage, \o/
[15:27] that sounds like a nasty blocker to git migration...
[15:28] ah well, let's stay on bzr then
[15:44] hey, what's going on? I'm testing out enable-local-dhcp-and-metadata
[15:45] jamespage, using http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/ovs-odl/ovs-odl.yaml relates neutron-gateway and openvswitch-odl with the juju-info interface, is that right?
[15:45] gnuoy, it is
[15:45] ok
[15:46] I'm having a hard time locating the metadata-proxy data source that instances get cloud-init user data from when enable-local-dhcp-and-metadata is set
[15:46] has anyone had luck using enable-local-dhcp-and-metadata?
[15:47] bdx, I have had lots of luck with it
[15:47] I'm guessing the metadata API endpoint is also the DHCP port then, yeah?
[15:48] or the DHCP port will at least forward traffic to 169.254.169.254/32, correct?
[15:49] bdx, if I remember correctly the DHCP response includes a static route pointing 169.254.169.254 at the NIC inside the DHCP namespace
[15:50] bdx, give me a minute and I'll fire up a deploy and remind myself
[15:51] gnuoy: that is consistent with what I'm seeing...
[15:51] gnuoy: when you do, create your tenant networks as VLAN networks
[15:51] if you don't mind
[15:51] bdx, I do
[15:51] ok
[15:51] bdx, haha, that was out of sync
[15:52] I meant 'I do create them as VLAN networks'
[15:52] bdx, ^ I didn't mean I minded
[15:52] totally
[15:52] ha
[15:52] gotcha
[16:07] jose, ping?
[16:08] mattyw: pong!
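A minimal sketch of how the static route gnuoy describes at 15:49 could be checked from inside a booted guest; the addresses and device names below are hypothetical examples, not output from this deployment:

    # the DHCP lease should have installed a host route to the metadata IP,
    # pointing at the DHCP port inside the qdhcp namespace
    ip route | grep 169.254.169.254
    #   e.g. 169.254.169.254 via 10.5.0.2 dev eth0
    # the metadata service should then answer on that address
    curl http://169.254.169.254/latest/meta-data/instance-id

On the compute host itself, "sudo ip netns list" should show a qdhcp-<network-uuid> namespace holding the dnsmasq and metadata-serving processes, since enable-local-dhcp-and-metadata runs those agents locally on the compute nodes.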
[16:25] Nodes fail to reboot/power-cycle when running ceph-osd with (nova-compute + hugepages), due to a deadlock
[16:25] I'll be filing a bug for it in a few
[16:25] it's had me blocked for the last few days
[16:31] Hello Team, I have created a new bug for my charm, and after linking the bug ID to the trunk branch in Launchpad I am not able to see my charm in the review queue. Could you please advise?
[16:32] bdx, ok, I'm up and running with enable-local-dhcp-and-metadata
[16:33] bdx, http://paste.ubuntu.com/13092618/
[16:33] gnuoy: nice!
[16:34] so those networks are provider networks
[16:34] entirely...
[16:34] My charm link is "https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-platform-rtm/trunk"
[16:35] bdx, so your guests get an IP but fail to retrieve metadata?
[16:36] gnuoy: yes... I just found an inconsistency on one of my physical nodes: the port <-> VLAN trunk was on eth4, whereas the rest of my nodes use eth3... grrr
[16:36] bdx, ah!
[16:38] bdx, you can use MAC addresses in the data-port setting for neutron-openvswitch
[16:38] to cope with inconsistencies
[16:38] gnuoy: oh no way, that's huge!!!!!!
[16:38] hah
[16:38] I must have missed that
[16:38] bdx, http://paste.ubuntu.com/13092664/
[16:39] oh, that is awesome!
[16:39] yeah, it's really useful
[17:29] hazmat, you around?
[17:55] Hello Team, I have created a new bug for my charm, and after linking the bug ID to the trunk branch in Launchpad I am not able to see my charm in the review queue. Could you please advise? Charm link is "https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-platform-rtm/trunk"
[17:58] Prabakaran: see https://jujucharms.com/docs/stable/authors-charm-store#recommended-charms, point #3
[17:59] Prabakaran: I think you need to subscribe the charmers team to your bug
[18:08] Prabakaran: I subscribed charmers for you, so your bug should show up in the Review Queue shortly
[18:19] Thanks
[20:02] any openstackers around able to help me diagnose something?
[20:16] firl: Hi, if you ask about the specific issue you're encountering, the chances are higher that someone who can help will respond.
[20:17] catbus1, yeah, I am trying to diagnose why I can't spawn an instance and am still trying to find the actual log file, but I haven't been able to get anything more than a vague Python error
[20:21] firl: so you have an OpenStack cloud deployed using Juju; are you using Juju to deploy a workload to this OpenStack environment, or are you using the OpenStack dashboard to try to launch an instance?
[20:21] I have a Juju-deployed Kilo setup
[20:21] last night I had to reboot the state machines
[20:22] and now when I launch a VM from Horizon I get this error:
[20:22] http://pastebin.com/RZjcMzUz
[20:22] I've been trying to find the issue in the log files, but 42 nodes and multiple log files lends itself to taking a while
[20:28] firl: Thanks for the error info. I personally can't help; wait to see if others on this channel can.
[20:28] :)
[20:28] it might be related to neutron/rabbitmq, but I have no idea how to make sure
[20:29] firl: do you think this is related to Juju? If it's OpenStack Kilo, want to also check #openstack for help?
[20:30] I was hoping to figure out how to diagnose it from the Juju side, since the bundle created it, and I can upgrade via Juju if there is a patch, etc.
[20:30] but no, I haven't
[20:31] understood.
[20:32] like, I think it's rabbitmq, but I don't know how to check the credentials because I didn't set it up; Juju did
[20:34] firl: can you share the Kilo bundle YAML file you used?
[20:35] http://pastebin.com/T9uE9yJB
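A hedged sketch of how firl's question at 20:32 could be approached, reading back the RabbitMQ credentials Juju wrote out; it assumes juju 1.x-era commands, default Kilo config paths, and unit names typical of the OpenStack bundles (substitute whatever the bundle actually deployed):

    # nova's broker credentials are written into its config on each unit
    juju ssh nova-cloud-controller/0 "sudo grep -E 'rabbit_(host|userid|password)' /etc/nova/nova.conf"
    # check the broker itself is up and the expected user exists
    juju ssh rabbitmq-server/0 "sudo rabbitmqctl status"
    juju ssh rabbitmq-server/0 "sudo rabbitmqctl list_users"

With those credentials in hand, a broker problem would typically surface as AMQP connection errors in the nova logs under /var/log/nova/ on the controller and compute units.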
=== natefinch is now known as natefinch-afk
[22:39] hey, what's going on? Quick question on deploys using enable-local-dhcp-and-metadata...
[22:40] If enable-local-dhcp-and-metadata is set, then floating IP assignment can't happen, because there's no Neutron router, right?
[22:41] and to add to that, Neutron routers can't exist either, right?
[22:45] core, dev: ^^
[22:52] any takers ^^?
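For context on bdx's question: in the all-provider-network topology discussed earlier (16:34), guests sit directly on the provider VLANs and are reached at their fixed IPs, so no Neutron router is in the path and floating IPs don't come into play. A hedged sketch of creating such a network with the Kilo-era neutron client; the network name, physical network label, VLAN ID, and subnet range are all hypothetical:

    # a shared provider VLAN network; guests attach to it directly,
    # so no Neutron router is involved
    neutron net-create vlan216 --shared \
        --provider:network_type vlan \
        --provider:physical_network physnet1 \
        --provider:segmentation_id 216
    neutron subnet-create vlan216 10.20.16.0/24 --name vlan216-subnet \
        --allocation-pool start=10.20.16.50,end=10.20.16.200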