[00:44] <magicaltrout> what the hell
[00:45] <magicaltrout> why is juju deploy trying to open a browser
[00:45] <magicaltrout> on a server which runs headless
[00:49] <magicaltrout> balls
[00:49] <magicaltrout> now i ran juju logout
[00:49] <magicaltrout> and can't juju login
[00:53] <magicaltrout> okay logged back in
[00:53] <magicaltrout> still getting a browser prompt
[01:24] <lazypower> magicaltrout: that should give you the url you need to copy/paste if its in headless mode
[07:08] <magicaltrout> lazypower: yes it did after a delay
[07:08] <magicaltrout> but then when i run that url on a server that isn't my remote one, what difference does it make?
[07:08] <magicaltrout> or do I curl it?
[07:09] <magicaltrout> in the end i logged in with lynx
[09:42] <lazypower> magicaltrout: should be token based. its polling/waiting on a socket to get that auth code back. Pasting that into your workstation browser should have gotten you through.
[09:42] <lazypower> if its not, we need to tag and bag that bug
[10:15] <rock> Hi. Can we install juju 2.0 on ubuntu 14.04.1 (trusty)?
[10:17] <rock> If we can, could anyone please provide me a reference link for that?
[10:32] <rock> Actually, I want to do a [MAAS+OpenStack base bundle] setup on physical servers. I have taken 5 servers. On one server I configured MAAS 1.9.4 on trusty (14.04). From the MAAS UI, I commissioned all remaining four nodes successfully.
[10:33] <rock> Now I want to deploy the OpenStack base bundle. What exactly do I need to do?
[10:35] <rock> I integrated our cinder-storage driver charm with the OpenStack bundle, and I pushed this bundle to the juju charm store as our own bundle. So now I want to deploy our bundle on the MAAS nodes. What exactly do I need to do here? Please provide me clear information.
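For readers following along, the usual juju 2.0-era workflow for rock's scenario looks roughly like this; the cloud name, credential, and bundle URL below are placeholders, not values from this conversation.

```shell
# Hypothetical sketch (juju 2.0-era CLI); "mymaas" and the bundle URL are
# placeholders. maas-cloud.yaml describes the MAAS API endpoint.
juju add-cloud mymaas maas-cloud.yaml
juju add-credential mymaas          # paste the MAAS API key when prompted

# Bootstrap a controller onto one of the commissioned nodes
juju bootstrap mymaas maas-controller

# Deploy the custom bundle from the charm store into a fresh model
juju add-model openstack
juju deploy cs:~yournamespace/your-openstack-bundle
```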
[12:18] <junaidali> Hi everyone, I'm having an issue with juju 2.0 on xenial. The lxds are getting IPs from lxdbr0 bridge rather than the Openstack management network. Any idea what might be the cause?
[12:18] <junaidali> I'm using maas 2.1.0alpha3 and juju 2.0-beta18
[12:18] <beisner> hi junaidali, is this when deploying on top of openstack using juju?
[12:18] <junaidali> yes
[12:20] <junaidali> in /var/log/lxd/<lxd-name>/lxc.conf, lxc.network.link = lxdbr0
[12:20] <junaidali> might this be the issue?
[12:21] <beisner> junaidali, it is a known issue.  https://bugs.launchpad.net/juju/+bug/1615917  we do a lot of juju deploys on top of openstack, and have great success in just placing all units in their own nova instance.
[12:21] <mup> Bug #1615917: juju openstack provider --to lxd results in unit behind NAT (unreachable) <openstack-provider> <uosci> <juju:Triaged> <https://launchpad.net/bugs/1615917>
[12:28] <junaidali> beisner, when you say deploying on top of openstack, do you mean deploying openstack over openstack?
[12:30] <beisner> junaidali, are you deploying to maas with juju and getting that network issue?  or do you have an openstack deployed, where you are deploying some other thing on top of that with juju?
[12:30] <junaidali> i'm deploying to maas with juju
[12:30] <junaidali> Actually this is happening in a fresh  deployment
[12:30] <junaidali> sorry for the confusion
[12:30] <beisner> ok, so i'm confused
[12:31] <beisner> the openstack networking won't be in play at that point
[12:31] <rick_h_> junaidali: if MAAS is setup to provide dhcp the lxd containers should come up with IP addresses on the network and be reachable across the hosts.
[12:31] <junaidali> machines are getting correct IPs, it's just the lxds that have the issue
[12:33] <junaidali> yes, but in my case, it is not getting IP from maas.
[12:34] <junaidali> I was previously using an RC release of maas. The error came up when I upgraded to maas 2.1.0 alpha3
[12:35] <junaidali> I deleted the maas and recreated the whole environment but it didn't help
[12:36] <junaidali> lxc.conf for an lxd (/var/log/lxd/<lxd-name>/lxc.conf) http://paste.ubuntu.com/23202676/
[12:37] <rick_h_> junaidali: maybe check out https://bugs.launchpad.net/juju/+bug/1566791 where not all interfaces get bridged ootb
[12:37] <mup> Bug #1566791: VLANs on an unconfigured parent device error with "cannot set link-layer device addresses of machine "0": invalid address <2.0> <4010> <cpec> <network> <juju:In Progress by dimitern> <https://launchpad.net/bugs/1566791>
[12:37] <rick_h_> junaidali: a fix for that is on the way, but not 100% sure it's what you're hitting
[12:38] <coreycb> junaidali, I'm hitting similar issues but not sure it's the same as yours.  here's something you can fix possibly: https://lists.ubuntu.com/archives/juju/2016-September/007801.html
[12:39] <coreycb> junaidali, also when creating your model try this: juju add-model --config enable-os-upgrade=false --config enable-os-refresh-update=false <model-name>
[12:40] <junaidali> thanks coreycb, let me try your suggestion
[12:41] <coreycb> junaidali, hopefully it helps. I've not gotten around my issue yet so I'll keep you posted on any results.
[12:42] <marcoceppi> coreycb junaidali you can also make that the default in beta18 `juju model-defaults`
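marcoceppi's suggestion, sketched out (syntax as of the 2.0 betas; check `juju help model-defaults` on your version):

```shell
# Make the settings controller-wide defaults so every new model inherits
# them, instead of passing --config to each add-model:
juju model-defaults enable-os-upgrade=false enable-os-refresh-update=false

# Inspect what a key currently defaults to
juju model-defaults enable-os-upgrade
```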
[12:43] <coreycb> junaidali, fyi I think rc1 comes out tomorrow for juju and from what I understand the above bugs are fixed in it
[12:43] <coreycb> junaidali, actually the model issue may be an images or cloud-init bug, not sure
[15:05] <junaidali> I tried the suggestions, still hitting the same issue.  I hope this is fixed in rc1 now
[15:52] <kjackal> cory_fu kwmonroe: I will be doing the kafka ingestion bundle again with bigtop charms this time. Where should this bundle live? I think it should be outside the bigtop source tree since it will have apache-flume in it. What do you think?
[15:54] <kwmonroe> kjackal: i think it should live in bigtop-deploy, because i expect it to be updated to use bigtop-flume once we get that charmed
[15:54] <kwmonroe> i don't think there will be too much concern if we have a non bigtop charm in a bigtop bundle, as long as the expectation is that all charms will eventually be bigtopped
[15:55] <kjackal> kwmonroe: I see your point, so we put it inside bigtop and as soon as we get flume ready we create a PR
[15:55] <kjackal> sounds good
[15:57] <cory_fu> kjackal: In particular, I would include a comment in the bundle yaml next to the apache-flume charm URLs saying that they will be swapped out for the Bigtop versions ASAP
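cory_fu's suggestion would look something like this in the bundle yaml (the service name, charm URL, and unit count are illustrative, not from the actual bundle):

```yaml
services:
  flume-syslog:
    # NOTE: non-Bigtop stop-gap; swap for the bigtop-flume charm once it
    # is charmed, per the plan discussed above
    charm: cs:~bigdata-dev/apache-flume-syslog
    num_units: 1
```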
[17:02] <lazypower> http://imgur.com/a/xrOmO -- not sure if juju is telling prophecies or coincidence
[17:08] <cory_fu> @kwmonroe @kjackal How do you feel about changing our repo naming convention to follow the charm-, layer-, interface- prefix convention (where we currently use layer- for both charm layers and base layers)?
[17:09] <kjackal> cory_fu: isn't there a case where we might build a new charm over what we consider a charm layer now?
[17:10] <kjackal> cory_fu: for example could someone at any point use the client layer to put something on top?
[17:11] <cory_fu> kjackal: Yes, there is a possibility that a charm (layered or non-layered) could serve as a base layer, but it is less common and I think it's still useful to indicate that the "base" is a charm in its own right
[17:11] <cory_fu> i.e., layer- would be reserved for things that could never function as a charm on their own
[17:14] <cory_fu> Hrm.  I thought there were more repos in juju-solutions that followed that convention, but I only actually see one.
[17:15] <cory_fu> marcoceppi: Was I correct in recalling that we wanted to encourage that convention?  ^
[17:15] <marcoceppi> cory_fu: we never really have enforced /that/
[17:15] <marcoceppi> cory_fu: I've been doing layer-* reuse and charm-* as either a top layer or built charm
[17:16] <cory_fu> marcoceppi: I was suggesting charm-* as top layer.
[17:16] <marcoceppi> we also have some layers in the index that are actually charm layers and probably shouldn't be
[17:16] <marcoceppi> cory_fu: right, but I also use it for classic charms that have not been layered
[17:16] <cory_fu> Indeed.  IOW, anything that can be deployed (possibly after running through `charm build`)
[17:17] <marcoceppi> cory_fu: this whole "options not defined" bug is killing me
[17:17] <smgoller> Hey all, I'm trying to deploy openstack on maas 2 with juju 2 beta 18 using ubuntu 16.04 everywhere. Everything comes up, but I can't get networking to work. The bundle configuration references eth1 as presumably the physical interface connected to the provider network, which is eno2 in xenial-land. I can't seem to figure out what's going on. any of gnuoy, thedac, or tinwood awake?
[17:17] <marcoceppi> yes
[17:17] <cory_fu> marcoceppi: Really?  I thought we sorted that out?
[17:17] <marcoceppi> cory_fu: we just found where it happens, never patched it
[17:17] <marcoceppi> smgoller: I hit that earlier today, but someone else fixed it and I'm not sure how
[17:18] <thedac> smgoller: I'll get back to you in just a minute when my meeting is done. OK?
[17:18] <lazypower> cory_fu: marcoceppi - +1 to options undefined bug
[17:18] <smgoller> thedac: awesome! thanks
[17:18] <lazypower> workaround works for now, but its not obvious
[17:18] <cory_fu> marcoceppi, lazypower: Somebody should fix that.  >_>
[17:18] <smgoller> afk for a sec to run to the toilet
[17:19] <lazypower> cory_fu: you fix it in the repo, i'll fix it in charmbox :D
[17:21] <smgoller> back
[17:24] <kjackal> cory_fu: kwmonroe: I am going to push to bigdata-dev a build of the plugin in trusty with openjdk optional. The reason is that we end up with this deployment http://pastebin.ubuntu.com/23203686/ where the plugin must be in trusty to relate to flume and it also needs openjdk
[17:25] <cory_fu> kjackal: I think the plugin might require some additional work, since it currently expects java to be proxied through the principal
[17:25] <kwmonroe> hold up kjackal
[17:26] <kjackal> cory_fu: kwmonroe: didn't we make openjdk optional in the base layer?
[17:26] <kjackal> then I might need to use the apache plugin
[17:26] <cory_fu> We did, but the plugin was previously a special case (because we had a misunderstanding about how relations between two subordinates work)
[17:27] <cory_fu> kjackal, kwmonroe: Apparently, we can't have a bundle and a charm with the same name.  What should we call the insightedge bundle?  insightedge-core?
[17:29] <marcoceppi> cory_fu: I tried, and failed, the logic is too complex for me
[17:29] <thedac> smgoller: Hi, so Xenial has slightly more unpredictable interface names based on the device type. There are a handful of configuration knobs you can set in the charms to work around this. Let me get you a list. One sec
[17:29] <kwmonroe> kjackal: can you try this one? http://jujucharms.com/u/bigdata-dev/bigtop-plugin-trusty
[17:30] <kjackal> thanks kwmonroe.
[17:30] <kwmonroe> kjackal: cory_fu: ^^ that's a bigtop plugin, specifically built for trusty.. it pulls in puppet from puppetlabs since the trusty archive won't have a new enough puppet.
[17:30] <smgoller> thedac: yeah, maas tries to keep it a little more consistent, but xenial instances i've brought up on esxi have names like ens160, ens192...
[17:31] <kjackal> cory_fu: how about insightedge-bundle or solution? the -core sounds like "limited functionality, only for internal use"
[17:32] <cory_fu> kjackal: I wish we could rename the charm and just call the bundle "insightedge" but that's impossible (within the bigdata-dev namespace) now
[17:32] <kwmonroe> kjackal: cory_fu, bigtop-plugin-trusty should *not* need openjdk.  that's the charm i was using during the summit specifically because some of our big data charms weren't xenial... i'm like 92% sure it worked without java.
[17:33] <kjackal> ah that is my doing (I think) we cannot rename a charm :(
[17:33] <cory_fu> kjackal: Also, I was thinking "insightedge-core" like "kubernetes-core" since we will want to build on the bundle to include things other than just Spark
[17:34] <cory_fu> kwmonroe: We can't mix Bigtop and vanilla plugins, can we?
[17:34] <thedac> smgoller: In neutron-gateway set the ext-port:
[17:34] <thedac> https://github.com/openstack/charm-neutron-gateway/blob/master/config.yaml#L85
[17:34] <thedac> If you are running HA set vip_iface and ha-bindiface
[17:34] <thedac> https://github.com/openstack/charm-keystone/blob/master/config.yaml#L186
[17:34] <kwmonroe> cory_fu: i didn't try
[17:34] <thedac> https://github.com/openstack/charm-keystone/blob/master/config.yaml#L198
[17:34] <thedac> And the corosync_bindiface for hacluster
[17:34] <cory_fu> I'm sure we can't
[17:34] <thedac> https://github.com/openstack/charm-hacluster/blob/master/config.yaml#L30
[17:35] <cory_fu> kwmonroe: I was somehow under the impression that kjackal was dealing with vanilla Hadoop
[17:35] <smgoller> thedac: we're not doing HA, this is just a basic install to get things rolling
[17:35] <cory_fu> I think I read something backwards, though
[17:35] <cory_fu> kwmonroe: Thoughts on the insightedge bundle name?
[17:35] <thedac> smgoller: ok, then it should just be the neutron-gateway ext-port.
[17:36] <smgoller> ok, so i've tried a bunch of different bundles, and the last bundle i tried was your stable one on github.
[17:36] <smgoller> I've done the ext-port change and that one didn't seem to make a difference.
[17:36] <thedac> smgoller: oh, ok, then I need more info on *how* things are breaking
[17:37] <kwmonroe> cory_fu: what's in the bundle?  spark + insightedge?
[17:37] <smgoller> thedac: heh, that's a good question. I'm trying to figure that out myself.
[17:37] <cory_fu> kwmonroe: And Zeppelin
[17:37] <cory_fu> kwmonroe: Basically, it recreates what the InsightEdge release provides.
[17:37] <smgoller> but first, this is the one i've been using most recently. https://github.com/openstack-charmers/openstack-bundles/tree/master/stable/openstack-base
[17:38] <cory_fu> (But in a more modular way)
[17:38] <thedac> smgoller: and you are changing ext-port for the gateway charm? That is needed.
[17:38] <kwmonroe> cory_fu: i think i agree with kjackal.. -core kinda implies just the minimum.  i wouldn't expect zepp in a core bundle.  unless that's the only interface into insightedge
[17:39] <smgoller> http://pastebin.com/peLb4Zme
[17:39] <smgoller> do I need to set ext-port as well?
[17:39] <smgoller> because the docs say that was deprecated in favor of the options in this one
[17:40] <smgoller> which is why I switched to this bundle over the one on jujucharms.com
[17:40] <thedac> ah, let me validate that for you. One sec
[17:44] <smgoller> Yeah, that data-port line is the only thing I changed from github, from eth1 to eno2
[17:52] <thedac> smgoller: sorry, I had to read the docs myself. I am trying to figure out if data-port is *only* used in a flat-network setup. I'd like to run a quick test on my end.
[17:53] <smgoller> ok
[17:53] <thedac> smgoller: in the meantime, can you say one way or the other that only external networking is failing?
[17:53] <smgoller> well, I can't ping the openstack router provider interface.
[17:54] <thedac> ok
[17:54] <thedac> smgoller: `ip netns exec qrouter-$ID ping $INSTANCE_IP`
[17:54] <thedac> try that ^^. That will tell us if the internal ovs is working
[17:55] <smgoller> on the compute node?
[17:55] <thedac> on the gateway node
[17:55] <smgoller> got it
[17:56] <smgoller> nope
[17:56] <smgoller> no pings
[17:57] <thedac> ok
[17:57] <thedac> smgoller: let me run this test then and get back to you in just a bit
[17:57] <smgoller> ok
[17:57] <smgoller> thanks!
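For reference, thedac's namespace test expands to something like the following on the gateway node (the router ID and instance IP are placeholders):

```shell
# List the virtual-router namespaces neutron has created on this host
sudo ip netns list | grep qrouter

# Ping an instance from inside the router namespace; success means the
# OVS bridges and tunnels between gateway and compute are passing traffic
sudo ip netns exec qrouter-<router-id> ping -c 3 <instance-ip>
```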
[18:29] <kwmonroe> cory_fu: back to your naming convention, we would then have 'layer-bigtop-base' and 'charm-hadoop-namenode'?
[18:29] <cory_fu> kwmonroe: charm-hadoop-namenode because it is deployed (after being built)
[18:30] <cory_fu> kwmonroe: It's unclear whether we would have charm-hadoop-datanode or layer-hadoop-datanode, since we use it as a base layer and don't publish it in the store, but it could conceivably be built and deployed directly
[18:31] <cory_fu> kwmonroe: Actually, to be pedantic, we wouldn't have either charm-hadoop-namenode nor layer-hadoop-namenode because that lives in the Bigtop repo.  ;)
[18:33] <cory_fu> kwmonroe, kjackal: I went ahead and used the charm- prefix for https://github.com/juju-solutions/charm-insightedge-core since I was renaming it anyway
[18:33] <kwmonroe> very well
[18:33] <kwmonroe> i'll let you decide how much confusion you've just injected into the datanode/nodemgr naming.
[18:34] <cory_fu> :)
[18:34] <cory_fu> I don't plan on changing them
[18:34] <cory_fu> And anyway, I was just considering this for future repos
[18:35] <cory_fu> Though all of our charm repos should live upstream anyway, so it shouldn't really make any difference
[18:37] <thedac> smgoller: ok, so our test setup does use data-port however, it sets it to the MAC address rather than the interface name ie: br-ex:fa:16:3e:ec:79:d5 Can you test setting the MAC address of the "external" interface? And we can go from there
[18:40] <smgoller> ok, how would I do that?
[18:40] <smgoller> oh
[18:40] <smgoller> i see
[18:41] <smgoller> the config line explicitly sets a mac address? or do I need it to match to something else
[18:41] <thedac> With MAAS you can tag a host as the gateway. Then use constraints: tags=$GATEWAY_TAG. Then find the MAC address of the interface and in the bundle have data-port: br-ext:$MAC
[18:42] <smgoller> ok, but since i've already got something deployed i should just ssh in and grab the mac and redeploy?
[18:42] <smgoller> or do i need to tear this down and deploy from scratch?
[18:43] <thedac> smgoller: you could test with the current deploy as a first step
[18:43] <smgoller> i guess i can just set config on gateway-api?
[18:43] <thedac> Grab the mac and juju set neutron-gateway data-port="br-ext:$MAC"
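A hedged sketch of that change; the interface name follows smgoller's setup (eno2), the bridge name must match whatever the bundle's data-port config uses, and the 2.0 betas spelled the set command `juju set-config` rather than `juju set`:

```shell
# On the gateway host: read the MAC of the external interface
MAC=$(cat /sys/class/net/eno2/address)

# From the juju client: map the external bridge to that MAC
# (bridge name "br-ex" is the usual convention, adjust to your bundle)
juju set-config neutron-gateway data-port="br-ex:$MAC"
```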
[18:44] <smgoller> ok, that's done.
[18:45] <smgoller> do I need to do anything to the charm to get it to reconfigure?
[18:45] <thedac> Let that settle a moment and see if you can ping the neutron router
[18:46] <smgoller> still no go. you mean the mac address of eno2, the physical interface connected to the external physical network, yes?
[18:46] <smgoller> just to be clear.
[18:46] <thedac> yes, correct
[18:47] <smgoller> should i reboot the machine?
[18:47] <thedac> ok, can I see `ovs-vsctl show` from the gateway and one of the compute nodes?
[18:47] <thedac> I don't think that will help
[18:48] <smgoller> http://paste.ubuntu.com/23203969/
[18:48] <smgoller> that's the gateway
[18:48] <coreycb> junaidali, fyi I just tested with a pre-release of juju rc1 and it fixed the lxd bridge issues I was hitting
[18:48] <thedac> ok, looking
[18:49] <smgoller> http://paste.ubuntu.com/23203974/
[18:49] <smgoller> compute node
[18:53] <thedac> smgoller: simple test. From 172.21.0.4 can you ping 172.21.0.7? Just making sure the tunnels should be expected to work.
[18:54] <smgoller> yes
[18:54] <thedac> thanks
[18:59] <thedac> smgoller: so I am not finding a smoking gun; the ovs output looks good.
[19:00] <smgoller> ok.
[19:00] <thedac> here is a doc I often follow from ODS for troubleshooting neutron http://www.slideshare.net/SohailArham/troubleshoot-cloud-networking-like-a-pro
[19:00] <thedac> I think the next step would be a re-deploy and if that still does not work follow this doc until we have a smoking gun
[19:01] <smgoller> Sounds good. Thank you so much for your help! I'll report back once it's redeployed.
[19:01] <thedac> smgoller: great. And do set the data-port with the MAC in the bundle
[19:01] <smgoller> thedac: yup, i just set that.
[19:01] <thedac> cool, I'll hear from you soon
[19:04] <kjackal> cory_fu: kwmonroe: After some debugging... Seems we are hitting a permission problem on HDFS "Permission denied: user=root, access=WRITE, inode="/user/flume/...."
[19:05] <kjackal> cory_fu: kwmonroe: I do not see any way of configuring the output directory of flume-hdfs https://jujucharms.com/u/bigdata-dev/apache-flume-hdfs/trusty/34
[19:06] <kjackal> Is it acceptable to ask the user to change the permissions of that dir or should we use the apache-hadoop?
[19:06] <cory_fu> kjackal: I remember when kwmonroe originally hit that permissions issue and it was sorted out, I thought in the flume charm.
[19:06] <cory_fu> kjackal: That's not an on-disk path, that's an HDFS path.  The Flume charm should be creating and managing that directory inside HDFS
[19:07] <kwmonroe> kjackal: do an 'hdfs dfs -ls -R /user' and see if the /user/flume dir is there
[19:07] <cory_fu> kjackal: https://github.com/juju-solutions/layer-apache-flume-base/blob/master/lib/charms/layer/apache_flume_base.py#L122
[19:08] <kjackal> cory_fu: true. but bigtop hdfs pre-creates all directories with permissions we do not like for this usecase
[19:08] <kjackal> kwmonroe: yes, /user/flume is there and is owned by flume in the hadoop group
[19:09] <kwmonroe> cool kjackal, now you need to find out why 'root' is trying to write there.. writes from flume-hdfs should be coming from the 'flume' user
[19:10] <kjackal> I see!
[19:10] <kjackal> Back to debugging :)
[19:10] <kwmonroe> fwiw kjackal, the flume source will be setting the output dir based on the 'event_dir'.  so apache-flume-syslog defaults that to 'flume-syslog', which would appear in hdfs as '/user/flume/flume-syslog'
[19:11] <kjackal> kwmonroe: yeap
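Two quick checks for the permissions error above, assuming a typical setup where `hdfs` is the HDFS superuser (both commands are a sketch, not from the log):

```shell
# Confirm who owns the Flume landing directory (expected: flume, group hadoop)
hdfs dfs -ls -R /user/flume

# If ownership is wrong it can be fixed as the HDFS superuser; though per
# the discussion above, the real fix is making the agent write as 'flume'
sudo -u hdfs hdfs dfs -chown -R flume:hadoop /user/flume
```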
[19:26] <junaidali> thanks coreycb
[20:37] <smgoller> so i have a maas machine that failed deployment according to maas. Can I recover from this from juju's standpoint or do I need to destroy the model and start over?
[20:37] <smgoller> juju doesn't think the machine is in an error state but maas is pissed
[20:47] <rick_h_> smgoller: so if it's a deployment you can destroy the application and redeploy?
[20:47] <rick_h_> smgoller: need more info on what happened I guess.
[20:47] <rick_h_> smgoller: does juju status --format=yaml show more details on the machine?
[20:47] <smgoller> so I did a deployment of a bundle, which required 4 machines
[20:47] <smgoller> i blew the model away and redeployed.
[20:48] <smgoller> but for the record, one of the machines failed to deploy according to maas
[20:48] <rick_h_> smgoller: k, if it happens again let us know and we can try to help see what's up.
[20:49] <smgoller> and juju status showed that machine as "error" state. but I'm sorry I was too impatient. :) If it happens again i'll leave it :)
[21:17] <kjackal> kwmonroe: cory_fu: need some help with bundles and the store
[21:17] <cory_fu> kjackal: Sure, what's up?
[21:17] <kjackal> there is no charm build step for bundles, right?
[21:17] <kjackal> cory_fu: charm show cs:~bigdata-dev/bundle/kafka-ingestion-0
[21:18] <kjackal> cory_fu: I pushed the bundle but I am not sure where it went in the store
[21:18] <kjackal> cory_fu: I do not see it here: https://jujucharms.com/q/bigdata-dev?type=bundle
[21:19] <kjackal> I must be missing something, e.g. a metadata.yaml
[21:19] <cory_fu> kjackal: 1) Did you do `charm release cs:~bigdata-dev/bundle/kafka-ingestion-0`?  2) Did you do `charm grant cs:~bigdata-dev/bundle/kafka-ingestion everyone`?
[21:20] <cory_fu> kjackal: (Hint, you forgot to grant)
[21:20] <cory_fu> https://jujucharms.com/u/bigdata-dev/kafka-ingestion/0
[21:20] <kjackal> http://pastebin.ubuntu.com/23204426/
[21:22] <cory_fu> kjackal: Odd.  http://pastebin.ubuntu.com/23204431/
[21:22] <kjackal> So cory_fu, is this a permissions issue?
[21:23] <kjackal> Is the series = bundle correct?
[21:23] <cory_fu> kjackal: Oh, I think what happened is that you granted before you released, and the stable channel didn't exist yet, so the grant ended up being a no-op
[21:23] <cory_fu> kjackal: Try granting again
[21:23] <kjackal> ok
[21:23] <cory_fu> kjackal: Yes, the commands look fine to me
[21:24] <kjackal> https://jujucharms.com/u/bigdata-dev/kafka-ingestion
[21:24] <kjackal> Awesome, thanks!
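The full sequence that tripped kjackal up, for the record (charm-tools 2.x syntax): grant must come after release, because granting against a channel that does not exist yet is silently a no-op.

```shell
charm push . cs:~bigdata-dev/bundle/kafka-ingestion    # upload a new revision
charm release cs:~bigdata-dev/bundle/kafka-ingestion-0 # publish to stable
charm grant cs:~bigdata-dev/bundle/kafka-ingestion everyone
```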
[21:54] <smgoller> thedac: ok, I've got it redeployed. I've created the external network according to the docs again. I've also got a manually deployed host as well.
[21:55] <smgoller> thedac: at this point, the manual host can ping the router. however, pinging from the qrouter netns (via ip netns exec) fails.
[21:55] <smgoller> the manual host is mainly just to validate layer1 connectivity
[21:56] <thedac> smgoller: hmm, ok.
[21:56] <thedac> I guess I need you to unpack "manually deployed host".
[21:56] <thedac> Is that a booted instance on the cloud or something else
[21:56] <smgoller> a host manually deployed with maas with ubuntu on it
[21:56] <smgoller> outside openstack
[21:57] <thedac> but outside the software defined network .. ok
[21:57] <smgoller> basically something else connected to the provider network that's not the router
[21:57] <smgoller> i have not created a internal network
[21:58] <thedac> Being able to ping the router at all is a good sign. I would finish the config and boot an instance and we can debug from there
[21:58] <smgoller> this is what i used to create the external network "./neutron-ext-net -g 10.118.28.1 -c 10.118.28.0/24 -f 10.118.28.10:10.118.28.254 ext_net"
[21:59] <thedac> smgoller: remember there are probably secgroups also at play. You might consider making the default secgroup wide open for early testing.
[22:00] <smgoller> the default secgroup seems to be wide open by default
[22:02] <smgoller> downloading an ubuntu image so i can launch an instance.
[22:03] <smgoller> thedac: do you want me to go ahead and create an internal network and launch an instance? Or should we continue trying to debug the external part of this?
[22:04] <thedac> I would go ahead and create the internal network and boot an instance.
[22:05] <thedac> Having said that, when you say the manual host can ping the router, do you mean 10.118.28.1 or 10.118.28.10 (the likely SDN router IP)?
[22:05] <stokachu> cory_fu: https://github.com/battlemidget/charm-layer-ghost/pull/5
[22:05] <stokachu> when you get a chance
[22:08] <smgoller> 10.118.28.1
[22:09] <smgoller> thedac: the upstream router, not the router in openstack. my apologies.
[22:09] <thedac> smgoller: which is an external device, correct? ... ok, I was confused
[22:09] <cory_fu> stokachu: LGTM, but I didn't test it
[22:09] <smgoller> yes
[22:09] <stokachu> cory_fu: thanks, im testing it now
[22:09] <thedac> smgoller: what does neutron router-list show as the IP of the SDN router?
[22:10] <smgoller> 10.118.28.10
[22:10] <stokachu> cory_fu: im noticing 5 minute waits between the APT layer running ensure_package_status
[22:10] <stokachu> not sure what that's about
[22:10] <thedac> smgoller: ok, and can you ping that?
[22:10] <thedac> either from the manual host or inside the netns?
[22:10] <smgoller> sudo ip netns exec qrouter-c2a5b6f2-8006-4a78-ad07-c82f8c1fd7ef ping 10.118.28.10
[22:10] <smgoller> succeeds.
[22:10] <stokachu> you can see that here: http://paste.ubuntu.com/23204650/
[22:10] <smgoller> thedac: fails from the manual host
[22:11] <thedac> and what is the IP of the manual host?
[22:11] <cory_fu> stokachu: Strange.  Sounds like maybe it's queueing it but not acting on it during that hook, so it gets processed during the next hook (presumably update-status, after 5 min)
[22:11] <smgoller> 10.118.28.8
[22:11] <thedac> smgoller: ok, on the neutron-gateway host are eth0 and eth1 in the same VLAN?
[22:12] <thedac> smgoller: they would have to be for that to work
[22:12] <stokachu> cory_fu: im not sure what package it's queueing as everything was installed on line 973
[22:12] <stokachu> stub: ^ any idea on that?
[22:12] <smgoller> thedac: you mean the physical interfaces on the machine running neutron-gateway?
[22:12] <thedac> yes
[22:13] <cory_fu> stokachu: Where do you see the 5 minute gap?
[22:13] <smgoller> the physical interfaces are both untagged, but they are on separate vlans on the physical switch they're connected to
[22:13] <stokachu> cory_fu: line 1118 and line 1119
[22:14] <thedac> smgoller: Because 10.118.28.8 and 10.118.28.10 are both in the 10.118.28.0/24 network they need to be in the same broadcast domain. Or you need a different set of network address space for the ext interface
[22:14] <thedac> make sense?
[22:15] <cory_fu> stokachu: It's not actually installing anything there.  That's just update-status running after 5 min of doing nothing.  The apt layer always logs that it's initializing at the start of every hook, even if there's nothing for it to do
[22:15] <smgoller> thedac: I'm misinterpreting something. the manual host's ethernet and the "external" interface (eno2) on the machine running neutron-gateway are in the same vlan
[22:15] <cory_fu> stokachu: Probably worth filing a bug for stub to remove that log message, as it doesn't seem useful
[22:15] <stokachu> cory_fu: hmm, ghost is sitting at installing NPM dependencies
[22:16] <smgoller> thedac: my initial answer was for the two physical interfaces on the machine running neutron-gateway
[22:16] <thedac> smgoller: ooh, ok, so it has a second ethernet interface in the same vlan as the neutron-gateway's external interface?
[22:16] <cory_fu> stokachu: Guessing that there's a handler order dependency in how you update your status
[22:16] <smgoller> it being the manual host? yes.
[22:16] <thedac> yes, ok
[22:16] <cory_fu> stokachu: Give me a few and I'll take a closer look.  It's probably actually ready
[22:16] <stokachu> cory_fu: ok
[22:17] <thedac> smgoller: so we expect that ping to work. But to humor me. Will you set the default secgroup wide open with: http://pastebin.ubuntu.com/23204687/
[22:17] <smgoller> thedac: on it
[22:17] <thedac> in particular the icmp bit
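For readers without the pastebin, wide-open default rules of that era would have looked roughly like this (the group name `default` and the exact CLI flavor are assumptions):

```shell
# Allow all ICMP and TCP into instances in the 'default' security group
neutron security-group-rule-create --protocol icmp --direction ingress default
neutron security-group-rule-create --protocol tcp \
    --port-range-min 1 --port-range-max 65535 --direction ingress default
```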
[22:18] <smgoller> thedac: still no go
[22:19] <thedac> ok, ... /me thinks for a bit
[22:20] <cory_fu> marcoceppi: Is this the issue you were hitting with resources not downloading?  http://pastebin.ubuntu.com/23204673/
[22:20] <cory_fu> If so, will it eventually recover?
[22:20] <marcoceppi> cory_fu: I didn't hit it, lazypower and mbruzek did
[22:21] <marcoceppi> cory_fu: and no, there was no recover from what I remember
[22:21] <cory_fu> Ok, then mbruzek, how would you handle that?
[22:22] <cory_fu> i.e., do I have to redeploy the app, start a new model, or tear down the entire controller?
[22:22] <mbruzek> cory_fu:  is that with juju 2.0 ?
[22:22] <cory_fu> Yes
[22:23] <mbruzek> Did you juju attach the resource
[22:24] <cory_fu> mbruzek: Yes, it's in the store.  This worked several times before this one choked
[22:24] <cory_fu> Oh, wait
[22:24] <cory_fu> I deployed from local this time
[22:24] <cory_fu> ha!
[22:24] <cory_fu> Thanks, mbruzek
[22:25] <mbruzek> we have code that checks the size of a resource
[22:25] <mbruzek> and I do resource-get in a try catch
[22:25] <cory_fu> mbruzek: Does it sometimes return an empty file?
[22:25] <mbruzek> try/except
[22:25] <mbruzek> We did this so we can specify zero byte resources
[22:28] <thedac> smgoller: ok, looking at the neutron-ext-net script it defaults to network-type GRE. For your external network you actually want --network-type flat. So, I would remove the router and networks and re-run neutron-ext-net with --network-type flat
[22:28] <smgoller> aha
[22:29] <thedac> smgoller: you can prove this to yourself with neutron net-show ext-net
[22:29] <thedac> look at  provider:network_type
[22:30] <smgoller> the version of the script i have doesn't support that argument
[22:31] <thedac> just to clear up. Are you using the openstack-charm-testing repo to get that script?
[22:31] <smgoller> no, but I can grab it.
[22:32] <smgoller> is that on launchpad or github?
[22:32] <thedac> interesting, so before I send you there, what does neutron net-show ext-net show for provider:network_type?
[22:33] <thedac> lp:~ost-maintainers/openstack-charm-testing/trunk/
[22:33] <thedac> That has our version of the script in bin
[22:33] <smgoller> | provider:network_type     | gre                                  |
[22:34] <thedac> ok, so that is still a problem.
[22:34] <thedac> Let me track down the neutron commands directly so we are not depending on a script. One sec
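The direct commands thedac is referring to would be roughly these (mitaka-era neutron CLI; the subnet values echo smgoller's earlier neutron-ext-net invocation, and `physnet1` is the usual provider label but depends on charm config):

```shell
neutron net-create ext_net --router:external=True \
    --provider:network_type flat --provider:physical_network physnet1
neutron subnet-create ext_net 10.118.28.0/24 --disable-dhcp \
    --gateway 10.118.28.1 \
    --allocation-pool start=10.118.28.10,end=10.118.28.254
```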
[22:35] <stokachu> cory_fu: yea something is causing dpkg to go into an unconfigured state
[22:35] <smgoller> thedac: to be clear, that was the old one. I'm checking out that one from launchpad now
[22:35] <thedac> ok
[22:37] <cory_fu> marcoceppi, tvansteenburgh: To get the new version of jujuclient on Xenial, do you recommend `pip install --upgrade` over the one provided by python-jujuclient, or is there a ppa I should use instead?
[22:38] <smgoller> thedac: that was it. I can now ping from the netns to the provider router
[22:38] <thedac> ok, great. I think that should get you unblocked
[22:38] <smgoller> and i can ping the manual host as well
[22:39] <smgoller> thank you so much. So should I be basing my work off the bundle there as well?
[22:39] <stokachu> huh, charm build layer-nginx puts the built dir in ~/charms/trusty, whereas building my ghost charm places it in ~/charms/build/ghost
[22:39] <thedac> smgoller: no, the one you are working with is the state of the art
[22:39] <smgoller> thedac: roger that.
[22:39] <marcoceppi> cory_fu: think about what you just said.
[22:40] <smgoller> but the internal network as gre should be fine, yes?
[22:40] <thedac> that is correct
[22:40] <smgoller> awesome.
[22:40] <cory_fu> marcoceppi: pip install, got it.  ;)
[22:40] <cory_fu> marcoceppi: What is the ppa, then?
[22:41] <marcoceppi> https://launchpad.net/~tvansteenburgh/+archive/ubuntu/ppa
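Either route should work on xenial. Assuming the PyPI package name is `jujuclient` (the deb is `python-jujuclient`), the two options look like:

```shell
# Option 1: upgrade the library via pip
# (PyPI package name "jujuclient" is an assumption)
sudo pip install --upgrade jujuclient

# Option 2: install the packaged version from the PPA linked above
sudo add-apt-repository ppa:tvansteenburgh/ppa
sudo apt-get update
sudo apt-get install python-jujuclient
```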
[22:41] <smgoller> thedac: you rock sir, thank you so much for your time.
[22:41] <thedac> smgoller: in that repo you can look at profiles/default for the commands run and use that as a crib sheet
[22:41] <thedac> no problem
[22:41] <smgoller> thedac: will do
[22:41] <cory_fu> marcoceppi: Thank you!
[22:42] <cory_fu> marcoceppi: Just for that, I'll go ahead and fix dhx for 2.0
[22:42] <marcoceppi> cory_fu: what about charm build ;)
[22:42] <cory_fu> marcoceppi: meh
[22:43] <cory_fu> marcoceppi: I probably won't get dhx fixed tonight either, actually.
[22:49] <bdx_> hows it going all?
[22:49] <bdx_> can someone point me at an example of how to bootstrap to rackspace pls?
[22:50] <stokachu> cory_fu: im thinking it's something with the apt layer, i can't get my nginx layer to deploy either
[22:52] <cargill> hi, is there an example how to use charmhelpers.core.services.helpers.TemplateCallback or an equivalent?
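In a charm, `TemplateCallback` is normally used via its `render_template` alias as a `data_ready` callback in a `ServiceManager` definition: it merges the contexts collected for the service and renders a Jinja template to a target path. Since charmhelpers isn't importable here, this is a stdlib analog of the same pattern (using `string.Template` rather than Jinja) to show the shape of it:

```python
import string

def render_template(source_text, target_path, contexts):
    """Stdlib analog of charmhelpers' render_template/TemplateCallback:
    merge the service's required_data contexts and render the template
    to its target path when the service's data is ready."""
    context = {}
    for data in contexts:          # each context is a dict-like object
        context.update(data)
    rendered = string.Template(source_text).substitute(context)
    with open(target_path, 'w') as f:
        f.write(rendered)

# The real helper is wired into a ServiceManager definition roughly as:
#   {'service': 'myapp',
#    'required_data': [...],
#    'data_ready': [helpers.render_template(source='myapp.conf.j2',
#                                           target='/etc/myapp.conf')]}
```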
[22:52] <bdx_> stokachu: one thing that got me hung up with layer apt when included by a bottom layer, is that the configs specified in config.yaml for the bottom layer weren't making it into the built artifact
[22:53] <stokachu> hmm
[22:54] <bdx_> stokachu: I had to add them to the top layer to get them to persist through to the built charm
[22:55] <stokachu> ugh
[22:55]  * stokachu sad
[22:55] <bdx_> was that it?
[22:55] <stokachu> bdx_: no lol
[22:56] <stokachu> it pulls the package from my layer.yaml but then sits in this loop checking package_ensure_status
[22:56] <stokachu> nothing ever finishes after the apt layer does its thing
[22:58] <bdx_> stokachu: I modeled layer-nginx-passenger after your nginx layer, it uses layer-apt -> https://github.com/jamesbeedy/layer-nginx-passenger
[22:58] <bdx_> stokachu: are you using it differently than I am?
[22:58] <stokachu> bdx_: have you done a charm build/juju deploy recently?
[22:58] <bdx_> errr, not in the last day or so
[22:58] <stokachu> nah i just built the nginx layer and tried to deploy it locally
[22:59] <bdx_> ahh changes in layer-apt then
[22:59] <stokachu> going to download yours and try it
[22:59] <bdx_> layer apt hasn't changed
[22:59] <stokachu> bdx_: yea so im not sure whats going on
[23:01] <bdx_> this is how im using it -> https://github.com/jamesbeedy/layer-nginx-passenger/blob/master/reactive/nginx_passenger.py#L22,L24
[23:01] <bdx_> not sure if that is correct or not, but it works
[23:01] <stokachu> bdx_: deploying your layer now to see what happens
[23:04] <stokachu> bdx_: yours fails too
[23:05] <stokachu> http://paste.ubuntu.com/23204830/
[23:07]  * stokachu super sad
[23:07] <cholcombe> anyone remember the cmd to get the reactive states that are set when debugging?
[23:13] <smgoller> thedac: launched an instance and was able to ssh into it just fine.
[23:13] <thedac> fantastic!
[23:16] <smgoller> thedac: so the host can't really resolve anything, including its own hostname. I'm guessing my options are either define a DNS server to use when I create the internal network, or maybe install designate and designate-bind to get route53-like functionality?
[23:16] <smgoller> s/DNS server/external DNS server/
[23:17] <thedac> smgoller: yes, when you define the internal network set a DNS server. As long as it can route there that will work.
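In neutron-CLI terms, thedac's suggestion is the `--dns-nameserver` option at subnet creation; the names, CIDR, and nameserver address below are illustrative.

```shell
# Give the internal subnet a DNS server at creation time
# (network/subnet names, CIDR and nameserver are illustrative)
neutron subnet-create int-net 192.168.21.0/24 --name int-subnet \
    --gateway 192.168.21.1 \
    --dns-nameserver 8.8.8.8
```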
[23:17] <smgoller> would designate/designate-bind fulfill that as well?
[23:17] <thedac> designate is a whole other ballgame. More if you want to serve DNS as a service
[23:17] <smgoller> ok
[23:19] <smgoller> thedac: it seems like if I set designate up and link it to nova-compute, then when instances came up, their names would resolve, plus they'd be able to resolve things on the internet normally? Like, I have an instance called "xenial-test". It believes that is its hostname, but it doesn't resolve to anything. designate seems like it would allow that to work.
[23:19] <smgoller> "out of the box" that is
[23:20] <bdx_> stokachu: mine fails similarly to yours?
[23:20] <thedac> smgoller: I am not going to stop you from using designate. It is a good solution but it is adding complexity to a fairly simple problem
[23:20] <thedac> if you find that interesting, go for it
[23:20] <smgoller> thedac: a very diplomatic answer. Thank you. :) I'll stick with defining DNS.
[23:20] <smgoller> and leave designate for another day
[23:22] <smgoller> thedac: one last question before I let you escape: Is it possible to configure the bundle such that the openstack console for instances works?
[23:23] <smgoller> I see you can pass custom configuration to nova-compute, but it feels like the configuration for console needs knowledge of its own ip address for it to work, which I don't know if you can model in a bundle
[23:23] <thedac> smgoller: it has been a while since I have played with that.
[23:23] <smgoller> ok, then i won't worry about it.
[23:23] <smgoller> thanks!
[23:24] <thedac> smgoller: if the charm is missing anything, patches are welcome :)
[23:24] <smgoller> oh for sure
[23:24] <smgoller> If I come up with anything I'll contribute back, definitely.
[23:25] <bdx_> stokachu: I figured it out to some extent
[23:26] <bdx_> stokachu: add a series tag to your metadata.yaml
[23:26] <bdx_> stokachu: xenial is what I used that worked
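For reference, bdx_'s fix is a top-level `series` list in the layer's metadata.yaml; with it, `charm build` drops the result under build/<charm> (as with the ghost charm above) instead of a per-series directory. A minimal illustrative fragment:

```yaml
# illustrative metadata.yaml fragment for the nginx layer
name: nginx
summary: HTTP server layer
series:
  - xenial
```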