[00:44] what the hell
[00:45] why is juju deploy trying to open a browser
[00:45] on a server which runs headless
[00:49] balls
[00:49] now i juju logout
[00:49] and can't juju login
[00:53] okay logged back in
[00:53] still getting a browser prompt
[01:24] magicaltrout: that should give you the url you need to copy/paste if it's in headless mode
=== frankban|afk is now known as frankban
[07:08] lazypower: yes it did after a delay
[07:08] but then when i run that url on a server that isn't my remote one, what difference does it make?
[07:08] or do I curl it?
[07:09] in the end i logged in with lynx
[09:42] magicaltrout: should be token based. its polling/waiting on a socket to get that auth code back. Pasting that into your workstation browser should have gotten you through.
[09:42] if it's not, we need to tag and bag that bug
=== saibarspeis is now known as saibarAuei
[10:15] Hi. Can't we install juju 2.0 on ubuntu 14.04.1 (trusty)?
[10:17] If we can, could anyone please provide me a reference link for that.
[10:32] Actually, I want to do a [MAAS + OpenStack base bundle] setup on physical servers. I have taken 5 servers. On one server I configured MAAS 1.9.4 on trusty (14.04). From the MAAS UI, I commissioned all remaining four nodes successfully.
[10:33] Now I want to deploy the OpenStack base bundle. What exactly do I need to do?
[10:35] I integrated our cinder-storage driver charm with the OpenStack bundle. And I pushed this bundle to the juju store as our own bundle. So now I want to deploy our bundle on the MAAS nodes. What do I need to do exactly here? Please provide me clear information.
[12:18] Hi everyone, I'm having an issue with juju 2.0 on xenial. The lxds are getting IPs from the lxdbr0 bridge rather than the OpenStack management network. Any idea what might be the cause?
[12:18] I'm using maas 2.1.0alpha3 and juju 2.0-beta18
[12:18] hi junaidali, is this when deploying on top of openstack using juju?
[12:18] yes
[12:20] in /var/log/lxd//lxc.conf, lxc.network.link = lxdbr0
[12:20] could this be the issue?
[12:21] junaidali, it is a known issue. https://bugs.launchpad.net/juju/+bug/1615917 we do a lot of juju deploys on top of openstack, and have great success in just placing all units in their own nova instance.
[12:21] Bug #1615917: juju openstack provider --to lxd results in unit behind NAT (unreachable)
[12:28] beisner, when you say deploying on top of openstack, do you mean deploying openstack over openstack?
[12:30] junaidali, are you deploying to maas with juju and getting that network issue? or do you have an openstack deployed, where you are deploying some other thing on top of that with juju?
[12:30] i'm deploying to maas with juju
[12:30] Actually this is happening in a fresh deployment
[12:30] sorry for the confusion
[12:30] ok, so i'm confused
[12:31] the openstack networking won't be in play at that point
[12:31] junaidali: if MAAS is set up to provide dhcp the lxd containers should come up with IP addresses on the network and be reachable across the hosts.
[12:31] machines are getting the correct IP, it's just the lxds that have the issue
[12:33] yes, but in my case, it is not getting an IP from maas.
[12:34] I was previously using an RC release of maas. The error came up when I upgraded to maas 2.1.0 alpha3
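A minimal diagnostic sketch for the symptom above (not a command from the log): confirm which bridge the LXD containers were wired to on the affected MAAS-deployed host, as junaidali did at [12:20]. The machine number is a placeholder.

    juju ssh 0 'grep -r "lxc.network.link" /var/log/lxd/'
    # lxc.network.link = lxdbr0  -> containers are NATed behind the host (the symptom above)
    # lxc.network.link = br-eth0 or a similar juju-created bridge -> bridging worked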
[12:35] I deleted the maas and recreated the whole environment but it didn't help
=== saibarAuei is now known as saibarspeis
[12:36] lxc.conf for an lxd (/var/log/lxd//lxc.conf) http://paste.ubuntu.com/23202676/
[12:37] junaidali: maybe check out https://bugs.launchpad.net/juju/+bug/1566791 where not all interfaces get bridged ootb
[12:37] Bug #1566791: VLANs on an unconfigured parent device error with "cannot set link-layer device addresses of machine "0": invalid address <2.0> <4010>
[12:37] junaidali: a fix for that is on the way, but not 100% sure it's what you're hitting
[12:38] junaidali, I'm hitting similar issues but not sure it's the same as yours. here's something you can fix possibly: https://lists.ubuntu.com/archives/juju/2016-September/007801.html
[12:39] junaidali, also when creating your model try this: juju add-model --config enable-os-upgrade=false --config enable-os-refresh-update=false
[12:40] thanks coreycb, let me try your suggestion
[12:41] junaidali, hopefully it helps. I've not gotten around my issue yet so I'll keep you posted on any results.
[12:42] coreycb junaidali you can also make that the default in beta18 `juju model-defaults`
[12:43] junaidali, fyi I think rc1 comes out tomorrow for juju and from what I understand the above bugs are fixed in it
[12:43] junaidali, actually the model issue may be an images or cloud-init bug, not sure
=== petevg is now known as petevg_afk
[15:05] I tried the suggestions, still hitting the same issue. I hope this is fixed in rc1 now
[15:52] cory_fu kwmonroe: I will be doing the kafka ingestion bundle again with bigtop charms this time. Where should this bundle live? I think it should be outside the bigtop source tree since it will have apache-flume in it. What do you think?
[15:54] kjackal: i think it should live in bigtop-deploy, because i expect it to be updated to use bigtop-flume once we get that charmed
[15:54] i don't think there will be too much concern if we have a non bigtop charm in a bigtop bundle, as long as the expectation is that all charms will eventually be bigtopped
[15:55] kwmonroe: I see your point, so we put it inside bigtop and as soon as we get flume ready we create a PR
[15:55] sounds good
[15:57] kjackal: In particular, I would include a comment in the bundle yaml next to the apache-flume charm URLs saying that they will be swapped out for the Bigtop versions ASAP
=== dames is now known as thedac
[17:02] http://imgur.com/a/xrOmO -- not sure if juju is telling prophecies or coincidence
[17:08] @kwmonroe @kjackal How do you feel about changing our repo naming convention to follow the charm-, layer-, interface- prefix convention (where we currently use layer- for both charm layers and base layers)?
[17:09] cory_fu: isn't there a case where we might build a new charm over what we consider a charm layer now?
[17:10] cory_fu: for example could someone at any point use the client layer to put something on top?
[17:11] kjackal: Yes, there is a possibility that a charm (layered or non-layered) could serve as a base layer, but it is less common and I think it's still useful to indicate that the "base" is a charm in its own right
[17:11] i.e., layer- would be reserved for things that could never function as a charm on their own
[17:14] Hrm. I thought there were more repos in juju-solutions that followed that convention, but I only actually see one.
[17:15] marcoceppi: Was I correct in recalling that we wanted to encourage that convention?
^
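Spelling out the workaround coreycb and marcoceppi point to at [12:39] and [12:42] above; a sketch only, where the model name is a placeholder and the key=value form for `juju model-defaults` is an assumption rather than something quoted from the log:

    # per-model, at creation time (as suggested at [12:39]):
    juju add-model mymodel --config enable-os-upgrade=false --config enable-os-refresh-update=false
    # or make it the default for new models on the controller (per [12:42]); syntax assumed:
    juju model-defaults enable-os-upgrade=false enable-os-refresh-update=false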
[17:15] cory_fu: we never really have enforced /that/
[17:15] cory_fu: I've been doing layer-* reuse and charm-* as either a top layer or built charm
[17:16] marcoceppi: I was suggesting charm-* as top layer.
[17:16] we also have some layers in the index that are actually charm layers and probably shouldn't be
[17:16] cory_fu: right, but I also use it for classic charms that have not been layered
[17:16] Indeed. IOW, anything that can be deployed (possibly after running through `charm build`)
[17:17] cory_fu: this whole "options not defined" bug is killing me
[17:17] Hey all, I'm trying to deploy openstack on maas 2 with juju 2 beta 18 using ubuntu 16.04 everywhere. Everything comes up, but I can't get networking to work. The bundle configuration references eth1 as presumably the physical interface connected to the provider network, which is eno2 in xenial-land. I can't seem to figure out what's going on. any of gnuoy, thedac, or tinwood awake?
[17:17] yes
[17:17] marcoceppi: Really? I thought we sorted that out?
[17:17] cory_fu: we just found where it happens, never patched it
[17:17] smgoller: I hit that earlier today, but someone else fixed it and I'm not sure how
[17:18] smgoller: I'll get back to you in just a minute when my meeting is done. OK?
[17:18] cory_fu: marcoceppi - +1 to the options undefined bug
[17:18] thedac: awesome! thanks
[17:18] workaround works for now, but it's not obvious
[17:18] marcoceppi, lazypower: Somebody should fix that. >_>
[17:18] afk for a sec to run to the toilet
[17:19] cory_fu: you fix it in the repo, i'll fix it in charmbox :D
[17:21] back
[17:24] cory_fu: kwmonroe: am going to push to bigdata-dev a build of the plugin in trusty with openjdk optional. The reason is that we end up with this deployment http://pastebin.ubuntu.com/23203686/ where the plugin must be in trusty to relate to flume and it also needs openjdk
[17:25] kjackal: I think the plugin might require some additional work, since it currently expects java to be proxied through the principal
[17:25] hold up kjackal
[17:26] cory_fu: kwmonroe: didn't we make openjdk optional in the base layer?
[17:26] then I might need to use the apache plugin
[17:26] We did, but the plugin was previously a special case (because we had a misunderstanding about how relations between two subordinates work)
[17:27] kjackal, kwmonroe: Apparently, we can't have a bundle and a charm with the same name. What should we call the insightedge bundle? insightedge-core?
[17:29] cory_fu: I tried, and failed, the logic is too complex for me
=== frankban is now known as frankban|afk
[17:29] smgoller: Hi, so Xenial has slightly more unpredictable interface names based on the device type. There are a handful of configuration knobs you can set in the charms to work around this. Let me get you a list. One sec
[17:29] kjackal: can you try this one? http://jujucharms.com/u/bigdata-dev/bigtop-plugin-trusty
[17:30] thanks kwmonroe.
[17:30] kjackal: cory_fu: ^^ that's a bigtop plugin, specifically built for trusty.. it pulls in puppet from puppetlabs since the trusty archive won't have a new enough puppet.
[17:30] thedac: yeah, maas tries to keep it a little more consistent, but xenial instances i've brought up on esxi have names like ens160, ens192...
[17:31] cory_fu: how about insightedge-bundle or solution? the core sounds like "limited functionality, only for internal use"
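Since the predictable interface names discussed at [17:17] and [17:30] above vary by platform (eno2 on bare metal, ens160/ens192 on esxi, ...), a quick way to list what a deployed node actually has before editing the bundle; a sketch, with the unit name as an example:

    juju ssh neutron-gateway/0 'ip -o link show'
    # newer iproute2 also has a brief form:
    juju ssh neutron-gateway/0 'ip -br link'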
[17:32] kjackal: I wish we could rename the charm and just call the bundle "insightedge" but that's impossible (within the bigdata-dev namespace) now
[17:32] kjackal: cory_fu, bigtop-plugin-trusty should *not* need openjdk. that's the charm i was using during the summit specifically because some of our big data charms weren't xenial... i'm like 92% sure it worked without java.
[17:33] ah that is my doing (I think) we cannot rename a charm :(
[17:33] kjackal: Also, I was thinking "insightedge-core" like "kubernetes-core" since we will want to build on the bundle to include things other than just Spark
[17:34] kwmonroe: We can't mix Bigtop and vanilla plugins, can we?
[17:34] smgoller: In neutron-gateway set the ext-port:
[17:34] https://github.com/openstack/charm-neutron-gateway/blob/master/config.yaml#L85
[17:34] If you are running HA set vip_iface and ha-bindiface
[17:34] https://github.com/openstack/charm-keystone/blob/master/config.yaml#L186
[17:34] cory_fu: i didn't try
[17:34] https://github.com/openstack/charm-keystone/blob/master/config.yaml#L198
[17:34] And the corosync_bindiface for hacluster
[17:34] I'm sure we can't
[17:34] https://github.com/openstack/charm-hacluster/blob/master/config.yaml#L30
[17:35] kwmonroe: I was somehow under the impression that kjackal was dealing with vanilla Hadoop
[17:35] thedac: we're not doing HA, this is just a basic install to get things rolling
[17:35] I think I read something backwards, though
[17:35] kwmonroe: Thoughts on the insightedge bundle name?
[17:35] smgoller: ok, then it should just be the neutron-gateway ext-port.
[17:36] ok, so i've tried a bunch of different bundles, and the last bundle i tried was your stable one on github.
[17:36] I've done the ext-port change and that one didn't seem to make a difference.
[17:36] smgoller: oh, ok, then I need more info on *how* things are breaking
[17:37] cory_fu: what's in the bundle? spark + insightedge?
[17:37] thedac: heh, that's a good question. I'm trying to figure that out myself.
[17:37] kwmonroe: And Zeppelin
[17:37] kwmonroe: Basically, it recreates what the InsightEdge release provides.
[17:37] but first, this is the one i've been using most recently. https://github.com/openstack-charmers/openstack-bundles/tree/master/stable/openstack-base
[17:38] (But in a more modular way)
[17:38] smgoller: and you are changing ext-port for the gateway charm? That is needed.
[17:38] cory_fu: i think i agree with kjackal.. -core kinda implies just the minimum. i wouldn't expect zepp in a core bundle. unless that's the only interface into insightedge
[17:39] http://pastebin.com/peLb4Zme
[17:39] do I need to set ext-port as well?
[17:39] because the docs say that was deprecated in favor of the options in this one
[17:40] which is why I switched to this bundle over the one on jujucharms.com
[17:40] ah, let me validate that for you. One sec
[17:44] Yeah, that data-port line is the only thing I changed from github, from eth1 to eno2
[17:52] smgoller: sorry, I had to read the docs myself. I am trying to figure out if data-port is *only* used in a flat-network setup. I'd like to run a quick test on my end.
[17:53] ok
[17:53] smgoller: in the meantime, can you say one way or the other that only external networking is failing?
[17:53] well, I can't ping the openstack router provider interface.
[17:54] ok
[17:54] smgoller: `ip netns exec qrouter-$ID ping $INSTANCE_IP`
[17:54] try that ^^. That will tell us if the internal ovs is working
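Expanded slightly, the check suggested at [17:54] looks like this on the gateway unit; a sketch, where the namespace UUID and instance IP are placeholders:

    sudo ip netns list                                   # find the qrouter-<uuid> namespace
    sudo ip netns exec qrouter-<uuid> ip addr show       # addresses the router actually holds
    sudo ip netns exec qrouter-<uuid> ping -c 3 <instance-ip>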
[17:55] on the compute node?
[17:55] on the gateway node
[17:55] got it
[17:56] nope
[17:56] no pings
[17:57] ok
[17:57] smgoller: let me run this test then and get back to you in just a bit
[17:57] ok
[17:57] thanks!
[18:29] cory_fu: back to your naming convention, we would then have 'layer-bigtop-base' and 'charm-hadoop-namenode'?
[18:29] kwmonroe: charm-hadoop-namenode because it is deployed (after being built)
[18:30] kwmonroe: It's unclear whether we would have charm-hadoop-datanode or layer-hadoop-datanode, since we use it as a base layer and don't publish it in the store, but it could conceivably be built and deployed directly
[18:31] kwmonroe: Actually, to be pedantic, we wouldn't have either charm-hadoop-namenode or layer-hadoop-namenode because that lives in the Bigtop repo. ;)
[18:33] kwmonroe, kjackal: I went ahead and used the charm- prefix for https://github.com/juju-solutions/charm-insightedge-core since I was renaming it anyway
[18:33] very well
[18:33] i'll let you decide how much confusion you've just injected into the datanode/nodemgr naming.
[18:34] :)
[18:34] I don't plan on changing them
[18:34] And anyway, I was just considering this for future repos
[18:35] Though all of our charm repos should live upstream anyway, so it shouldn't really make any difference
[18:37] smgoller: ok, so our test setup does use data-port however, it sets it to the MAC address rather than the interface name ie: br-ex:fa:16:3e:ec:79:d5 Can you test setting the MAC address of the "external" interface? And we can go from there
[18:40] ok, how would I do that?
[18:40] oh
[18:40] i see
[18:41] the config line explicitly sets a mac address? or do I need it to match to something else
[18:41] With MAAS you can tag a host as the gateway. Then use constraints: tags=$GATEWAY_TAG. Then find the MAC address of the interface and in the bundle have data-port: br-ext:$MAC
[18:42] ok, but since i've already got something deployed i should just ssh in and grab the mac and redeploy?
[18:42] or do i need to tear this down and deploy from scratch?
[18:43] smgoller: you could test with the current deploy as a first step
[18:43] i guess i can just set config on gateway-api?
[18:43] Grab the mac and juju set neutron-gateway data-port="br-ext:$MAC"
[18:44] ok, that's done.
[18:45] do I need to do anything to the charm to get it to reconfigure?
[18:45] Let that settle a moment and see if you can ping the neutron router
[18:46] still no go. you mean the mac address of eno2, the physical interface connected to the external physical network, yes?
[18:46] just to be clear.
[18:46] yes, correct
[18:47] should i reboot the machine?
[18:47] ok, can I see `ovs-vsctl show` from the gateway and one of the compute nodes?
[18:47] I don't think that will help
[18:48] http://paste.ubuntu.com/23203969/
[18:48] that's the gateway
[18:48] junaidali, fyi I just tested with a pre-release of juju rc1 and it fixed the lxd bridge issues I was hitting
[18:48] ok, looking
[18:49] http://paste.ubuntu.com/23203974/
[18:49] compute node
[18:53] smgoller: simple test. From 172.21.0.4 can you ping 172.21.0.7? Just making sure the tunnels should be expected to work.
[18:54] yes
[18:54] thanks
[18:59] smgoller: so I am not finding a smoking gun; the ovs output looks good.
[19:00] ok.
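When reading the `ovs-vsctl show` pastes above ([18:48]-[18:49]), the quickest sanity check is whether the external NIC actually got plugged into the external bridge; a sketch, where the bridge names follow the charm defaults seen at [18:37] and may differ on other setups:

    sudo ovs-vsctl list-ports br-ex    # should list the data-port NIC (e.g. eno2)
    sudo ovs-vsctl list-ports br-int   # tenant-side plumbing lives here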
[19:00] here is a doc I often follow from ODS for troubleshooting neutron http://www.slideshare.net/SohailArham/troubleshoot-cloud-networking-like-a-pro
[19:00] I think the next step would be a re-deploy and if that still does not work follow this doc until we have a smoking gun
[19:01] Sounds good. Thank you so much for your help! I'll report back once it's redeployed.
[19:01] smgoller: great. And do set the data-port with the MAC in the bundle
[19:01] thedac: yup, i just set that.
[19:01] cool, I'll hear from you soon
[19:04] cory_fu: kwmonroe: After some debugging... Seems we are hitting a permission problem on HDFS "Permission denied: user=root, access=WRITE, inode="/user/flume/...."
[19:05] cory_fu: kwmonroe: I do not see any way of configuring the output directory of flume-hdfs https://jujucharms.com/u/bigdata-dev/apache-flume-hdfs/trusty/34
[19:06] Is it acceptable to ask the user to change the permissions of that dir or should we use the apache-hadoop?
[19:06] kjackal: I remember when kwmonroe originally hit that permissions issue and it was sorted out, I thought in the flume charm.
[19:06] kjackal: That's not an on-disk path, that's an HDFS path. The Flume charm should be creating and managing that directory inside HDFS
[19:07] kjackal: do an 'hdfs dfs -ls -R /user' and see if the /user/flume dir is there
[19:07] kjackal: https://github.com/juju-solutions/layer-apache-flume-base/blob/master/lib/charms/layer/apache_flume_base.py#L122
[19:08] cory_fu: true. but bigtop hdfs pre-creates all directories with permissions we do not like for this use case
[19:08] kwmonroe: yes, /user/flume is there and is owned by flume in the hadoop group
[19:09] cool kjackal, now you need to find out why 'root' is trying to write there.. writes from flume-hdfs should be coming from the 'flume' user
[19:10] I see!
[19:10] Back to debugging :)
[19:10] fwiw kjackal, the flume source will be setting the output dir based on the 'event_dir'. so apache-flume-syslog defaults that to 'flume-syslog', which would appear in hdfs as '/user/flume/flume-syslog'
[19:11] kwmonroe: yeap
[19:26] thanks coreycb
[20:37] so i have a maas machine that failed deployment according to maas. Can I recover from this from juju's standpoint or do I need to destroy the model and start over?
[20:37] juju doesn't think the machine is in an error state but maas is pissed
[20:47] smgoller: so if it's a deployment you can destroy the application and redeploy?
[20:47] smgoller: need more info on what happened I guess.
[20:47] smgoller: does juju status --format=yaml show more details on the machine?
[20:47] so I did a deployment of a bundle, which required 4 machines
[20:47] i blew the model away and redeployed.
[20:48] but for the record, one of the machines failed to deploy according to maas
[20:48] smgoller: k, if it happens again let us know and we can try to help see what's up.
[20:49] and juju status showed that machine as "error" state. but I'm sorry I was too impatient. :) If it happens again i'll leave it :)
=== tris- is now known as tris
[21:17] kwmonroe: cory_fu: need some help with bundles and the store
[21:17] kjackal: Sure, what's up?
[21:17] there is no charm build step for bundles, right?
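For context: bundles do skip `charm build` entirely, and publishing is just push, then release, then grant, as cory_fu walks through below. A sketch, with the local directory path as a placeholder:

    charm push ./kafka-ingestion cs:~bigdata-dev/bundle/kafka-ingestion
    # then `charm release ...` and `charm grant ... everyone` as shown at [21:19] below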
[21:17] cory_fu: charm show cs:~bigdata-dev/bundle/kafka-ingestion-0
[21:18] cory_fu: I pushed the bundle but I am not sure where it went in the store
[21:18] cory_fu: I do not see it here: https://jujucharms.com/q/bigdata-dev?type=bundle
[21:19] I must be missing something, eg a metadata.yaml
[21:19] kjackal: 1) Did you do `charm release cs:~bigdata-dev/bundle/kafka-ingestion-0`? 2) Did you do `charm grant cs:~bigdata-dev/bundle/kafka-ingestion everyone`?
[21:20] kjackal: (Hint, you forgot to grant)
[21:20] https://jujucharms.com/u/bigdata-dev/kafka-ingestion/0
[21:20] http://pastebin.ubuntu.com/23204426/
[21:22] kjackal: Odd. http://pastebin.ubuntu.com/23204431/
[21:22] So cory_fu, is this a permissions issue?
[21:23] Is the series = bundle correct?
[21:23] kjackal: Oh, I think what happened is that you granted before you released, and the stable channel didn't exist yet, so the grant ended up being a no-op
[21:23] kjackal: Try granting again
[21:23] ok
[21:23] kjackal: Yes, the commands look fine to me
[21:24] https://jujucharms.com/u/bigdata-dev/kafka-ingestion
[21:24] Awesome, thanks!
[21:54] thedac: ok, I've got it redeployed. I've created the external network according to the docs again. I've also got a manually deployed host as well.
[21:55] thedac: at this point, the manual host can ping the router. however, pinging via ip netns exec on the qrouter fails to ping the router.
[21:55] the manual host is mainly just to validate layer1 connectivity
[21:56] smgoller: hmm, ok.
[21:56] I guess I need you to unpack "manually deployed host".
[21:56] Is that a booted instance on the cloud or something else
[21:56] a host manually deployed with maas with ubuntu on it
[21:56] outside openstack
[21:57] but outside the software defined network .. ok
[21:57] basically something else connected to the provider network that's not the router
[21:57] i have not created an internal network
[21:58] Being able to ping the router at all is a good sign. I would finish the config and boot an instance and we can debug from there
[21:58] this is what i used to create the external network "./neutron-ext-net -g 10.118.28.1 -c 10.118.28.0/24 -f 10.118.28.10:10.118.28.254 ext_net"
[21:59] smgoller: remember there are probably secgroups also at play. You might consider making the default secgroup wide open for early testing.
[22:00] the default secgroup seems to be wide open by default
[22:02] downloading an ubuntu image so i can launch an instance.
[22:03] thedac: do you want me to go ahead and create an internal network and launch an instance? Or should we continue trying to debug the external part of this?
[22:04] I would go ahead and create the internal network and boot an instance.
[22:05] Having said that, when you say the manual host can ping the router, do you mean 10.118.28.1 or 10.118.28.10 (the likely SDN router IP)
[22:05] cory_fu: https://github.com/battlemidget/charm-layer-ghost/pull/5
[22:05] when you get a chance
[22:08] 10.118.28.1
[22:09] thedac: the upstream router, not the router in openstack. my apologies.
[22:09] smgoller: which is an external device correct? ... ok, I was confused
[22:09] stokachu: LGTM, but I didn't test it
[22:09] yes
[22:09] cory_fu: thanks, im testing it now
[22:09] smgoller: what does neutron router-list show as the IP of the SDN router?
[22:10] 10.118.28.10
[22:10] cory_fu: im noticing 5 minute waits between the APT layer running ensure_package_status
[22:10] not sure what that's about
[22:10] smgoller: ok, and can you ping that? either from the manual host or inside the netns?
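"Wide open for early testing" (thedac at [21:59] above) usually amounts to allowing at least ICMP and SSH into the default security group; the actual rules used here are in the paste at [22:17] below. A sketch in the neutron CLI of that era:

    neutron security-group-rule-create --protocol icmp --direction ingress default
    neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default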
[22:10] sudo ip netns exec qrouter-c2a5b6f2-8006-4a78-ad07-c82f8c1fd7ef ping 10.118.28.10
[22:10] succeeds.
[22:10] you can see that here: http://paste.ubuntu.com/23204650/
[22:10] thedac: fails from the manual host
[22:11] and what is the IP of the manual host?
[22:11] stokachu: Strange. Sounds like maybe it's queueing it but not acting on it during that hook, so it gets processed during the next hook (presumably update-status, after 5 min)
[22:11] 10.118.28.8
[22:11] smgoller: ok, on the neutron-gateway host are eth0 and eth1 in the same VLAN?
[22:12] smgoller: they would have to be for that to work
[22:12] cory_fu: im not sure what package it's queueing as everything was installed on line 973
[22:12] stub: ^ any idea on that?
[22:12] thedac: you mean the physical interfaces on the machine running neutron-gateway?
[22:12] yes
[22:13] stokachu: Where do you see the 5 minute gap?
[22:13] the physical interfaces are both untagged, but they are on separate vlans on the physical switch they're connected to
[22:13] cory_fu: line 1118 and line 1119
[22:14] smgoller: Because 10.118.28.8 and 10.118.28.10 are both in the 10.118.28.0/24 network they need to be in the same broadcast domain. Or you need a different set of network address space for the ext interface
[22:14] make sense?
[22:15] stokachu: It's not actually installing anything there. That's just update-status running after 5 min of doing nothing. The apt layer always logs that it's initializing at the start of every hook, even if there's nothing for it to do
[22:15] thedac: I'm misinterpreting something. the manual host's ethernet and the "external" interface (eno2) on the machine running neutron-gateway are in the same vlan
[22:15] stokachu: Probably worth filing a bug for stub to remove that log message, as it doesn't seem useful
[22:15] cory_fu: hmm, ghost is sitting at installing NPM dependencies
[22:16] thedac: my initial answer was for the two physical interfaces on the machine running neutron-gateway
[22:16] smgoller: ooh, ok, so it has a second ethernet interface in the same vlan as the neutron-gateway's external interface?
[22:16] stokachu: Guessing that there's a handler order dependency in how you update your status
[22:16] it being the manual host? yes.
[22:16] yes, ok
[22:16] stokachu: Give me a few and I'll take a closer look. It's probably actually ready
[22:16] cory_fu: ok
[22:17] smgoller: so we expect that ping to work. But to humor me, will you set the default secgroup wide open with: http://pastebin.ubuntu.com/23204687/
[22:17] thedac: on it
[22:17] in particular the icmp bit
[22:18] thedac: still no go
[22:19] ok, ... /me thinks for a bit
[22:20] marcoceppi: Is this the issue you were hitting with resources not downloading? http://pastebin.ubuntu.com/23204673/
[22:20] If so, will it eventually recover?
[22:20] cory_fu: I didn't hit it, lazypower and mbruzek did
[22:21] cory_fu: and no, there was no recovery from what I remember
[22:21] Ok, then mbruzek, how would you handle that?
[22:22] i.e., do I have to redeploy the app, start a new model, or tear down the entire controller?
[22:22] cory_fu: is that with juju 2.0 ?
[22:22] Yes
[22:23] Did you juju attach the resource
[22:24] mbruzek: Yes, it's in the store. This worked several times before this one choked
[22:24] Oh, wait
[22:24] I deployed from local this time
[22:24] ha!
[22:24] Thanks, mbruzek
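What mbruzek's question uncovered: the charm was deployed from a local directory this time, so the store-side resource never came along and resource-get had nothing to fetch. A sketch of supplying the resource by hand; the application name, resource name and path are examples:

    # at deploy time:
    juju deploy ./builds/mycharm --resource myresource=./path/to/blob
    # or for an already-deployed application:
    juju attach mycharm myresource=./path/to/blob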
[22:25] we have code that checks the size of a resource
[22:25] and I do resource-get in a try catch
[22:25] mbruzek: Does it sometimes return an empty file?
[22:25] try/except
[22:25] We did this so we can specify zero byte resources
[22:28] smgoller: ok, looking at the neutron-ext-net script it defaults to network-type GRE. For your external network you actually want --network-type flat. So, I would remove the router and networks and re-run neutron-ext-net with --network-type flat
[22:28] aha
[22:29] smgoller: you can prove this to yourself with neutron net-show ext-net
[22:29] look at provider:network_type
[22:30] the version of the script i have doesn't support that argument
[22:31] just to clear up. Are you using the openstack-charm-testing repo to get that script?
[22:31] no, but I can grab it.
[22:32] is that on launchpad or github?
[22:32] interesting, so before I send you there. What does neutron net-show ext-net show for provider:network_type
[22:33] lp:~ost-maintainers/openstack-charm-testing/trunk/
[22:33] That has our version of the script in bin
[22:33] | provider:network_type | gre |
[22:34] ok, so that is still a problem.
[22:34] Let me track down the neutron commands directly so we are not depending on a script. One sec
[22:35] cory_fu: yea something is causing dpkg to go into an unconfigured state
[22:35] thedac: to be clear, that was the old one. I'm checking out that one from launchpad now
[22:35] ok
[22:37] marcoceppi, tvansteenburgh: To get the new version of jujuclient on Xenial, do you recommend `pip install --upgrade` over the one provided by python-jujuclient, or is there a ppa I should use instead?
[22:38] thedac: that was it. I can now ping from the netns to the provider router
[22:38] ok, great. I think that should get you unblocked
[22:38] and i can ping the manual host as well
[22:39] thank you so much. So should I be basing my work off the bundle there as well?
[22:39] huh, charm build layer-nginx puts the built dir in ~/charms/trusty, where building my ghost charm places it in ~/charms/build/ghost
[22:39] smgoller: no, the one you are working with is the state of the art
[22:39] thedac: roger that.
[22:39] cory_fu: think about what you just said.
[22:40] but the internal network as gre should be fine, yes?
[22:40] that is correct
[22:40] awesome.
[22:40] marcoceppi: pip install, got it. ;)
[22:40] marcoceppi: What is the ppa, then?
[22:41] https://launchpad.net/~tvansteenburgh/+archive/ubuntu/ppa
[22:41] thedac: you rock sir, thank you so much for your time.
[22:41] smgoller: in that repo you can look at profiles/default for the commands run and use that as a crib sheet
[22:41] no problem
[22:41] thedac: will do
[22:41] marcoceppi: Thank you!
[22:42] marcoceppi: Just for that, I'll go ahead and fix dhx for 2.0
[22:42] cory_fu: what about charm build ;)
[22:42] marcoceppi: meh
[22:43] marcoceppi: I probably won't get dhx fixed tonight either, actually.
[22:49] hows it going all?
[22:49] can someone point me at an example of how to bootstrap to rackspace pls?
[22:50] cory_fu: im thinking it's something with the apt layer, i can't get my nginx layer to deploy either
[22:52] hi, is there an example how to use charmhelpers.core.services.helpers.TemplateCallback or an equivalent?
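For reference, the fix thedac identifies at [22:28] above (a flat rather than GRE external network) expressed as raw neutron commands instead of the script. A sketch only: the physnet label depends on the gateway charm's bridge-mappings, and the addressing is the range smgoller used at [21:58]:

    neutron net-create ext_net --router:external=True \
        --provider:network_type flat --provider:physical_network physnet1
    neutron subnet-create ext_net 10.118.28.0/24 --name ext_net_subnet \
        --gateway 10.118.28.1 --disable-dhcp \
        --allocation-pool start=10.118.28.10,end=10.118.28.254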
[22:52] stokachu: one thing that got me hung up with layer apt when included by a bottom layer, is that the configs specified in config.yaml for the bottom layer weren't making it into the built artifact
[22:53] hmm
[22:54] stokachu: I had to add them to the top layer to get them to persist through to the built charm
[22:55] ugh
[22:55] * stokachu sad
[22:55] was that it?
[22:55] bdx_: no lol
[22:56] it pulls the package from my layer.yaml but then sits in this loop checking package_ensure_status
[22:56] nothing ever finishes after the apt layer does its thing
[22:58] stokachu: I modeled layer-nginx-passenger after your nginx layer, it uses layer-apt -> https://github.com/jamesbeedy/layer-nginx-passenger
[22:58] stokachu: are you using it differently than I?
[22:58] bdx_: have you done a charm build/juju deploy recently?
[22:58] errr, not in the last day or so
[22:58] nah i just built the nginx layer and tried to deploy it locally
[22:59] ahh changes in layer-apt then
[22:59] going to download yours and try it
[22:59] layer apt hasn't changed
[22:59] bdx_: yea so im not sure whats going on
[23:01] this is how im using it -> https://github.com/jamesbeedy/layer-nginx-passenger/blob/master/reactive/nginx_passenger.py#L22,L24
[23:01] not sure if that is correct or not, but it works
[23:01] bdx_: deploying your layer now to see what happens
[23:04] bdx_: yours fails too
[23:05] http://paste.ubuntu.com/23204830/
[23:07] * stokachu super sad
[23:07] anyone remember the cmd to get the reactive states that are set when debugging?
[23:13] thedac: launched an instance and was able to ssh into it just fine.
[23:13] fantastic!
[23:16] thedac: so the host can't really resolve anything, including its own hostname. I'm guessing my options are either define an external DNS server to use when I create the internal network, or maybe install designate and designate-bind to get route53-like functionality?
[23:17] smgoller: yes, when you define the internal network set a DNS server. As long as it can route there that will work.
[23:17] would designate/designate-bind fulfill that as well?
[23:17] designate is a whole other ballgame. More if you want to serve DNS as a service
[23:17] ok
[23:19] thedac: it seems like if I set designate up and link it to nova-compute, then when instances came up, their names would resolve, plus they'd be able to resolve things on the internet normally? Like, I have an instance called "xenial-test". It believes that is its hostname, but it doesn't resolve to anything. designate seems like it would allow that to work.
[23:19] "out of the box" that is
[23:20] stokachu: mine fails similarly to yours?
[23:20] smgoller: I am not going to stop you from using designate. It is a good solution but it is adding complexity to a fairly simple problem
[23:20] if you find that interesting, go for it
[23:20] thedac: a very diplomatic answer. Thank you. :) I'll stick with defining DNS.
[23:20] and leave designate for another day
[23:22] thedac: one last question before I let you escape: Is it possible to configure the bundle such that the openstack console for instances works?
[23:23] I see you can pass custom configuration to nova-compute, but it feels like the configuration for console needs knowledge of its own ip address for it to work, which I don't know if you can model in a bundle
[23:23] smgoller: it has been a while since I have played with that.
[23:23] ok, then i won't worry about it.
[23:23] thanks!
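The "define a DNS server on the internal network" suggestion from [23:17] above, in neutron CLI terms; a sketch, where the network name, CIDR, nameserver and router name are all examples:

    neutron net-create private
    neutron subnet-create private 192.168.21.0/24 --name private_subnet \
        --dns-nameserver 10.118.28.1
    # attach the subnet to the existing router so instances get a path out:
    neutron router-interface-add provider-router private_subnet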
[23:24] smgoller: if the charm is missing anything, patches are welcome :)
[23:24] oh for sure
[23:24] If I come up with anything I'll contribute back, definitely.
[23:25] stokachu: I figured it out to some extent
[23:26] stokachu: add a series tag to your metadata.yaml
[23:26] stokachu: xenial is what I used that worked
=== rockstar_ is now known as rockstar
=== alexisb is now known as alexisb-afk
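The series fix bdx_ describes at [23:26], spelled out; a sketch, with the charm name as a placeholder. Declaring a series in the top layer's metadata.yaml is also likely why cory_fu saw `charm build` output land in ~/charms/trusty for one charm and ~/charms/build/ghost for another at [22:39]:

    # metadata.yaml of the top layer, the relevant addition:
    #   series:
    #     - xenial
    charm build
    juju deploy ./path/to/built/charm   # wherever charm build reports it wrote the output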