=== frankban|afk is now known as frankban
[08:45] @ryebot hi
[08:48] @ryebot here is my history: i installed the juju controller and added the machine manually, then i downloaded the charms and installed the applications with juju with the machines set to 0, and manually created all the relations
[08:48] @ryebot then i got stuck with missing resources like flannel-amd64
[08:49] @ryebot so the resources i downloaded from the juju charm page and attached to the application
[08:49] @ryebot but the charms i downloaded using snap download, kube-proxy for example
[08:50] @ryebot i got the etcd cluster and the master with flannel installed, but the worker isn't loading kube-proxy and kubelet because of configuration issues
[10:50] hello. I'm trying to deploy a bundle with a charm with terms and this is the error I'm getting. https://gist.github.com/gizmo693/5a4fc5235da987a4f64e378e1850dd62
[13:31] bdx: Not sure if you're around this early, but I have an update on the Endpoints branch of reactive. We're going to cut a dev release today, run it through CI for a week, and then release it for real. However, we're going to make one small change that will break things. We're going to rename Endpoint.flag to Endpoint.expand_name to make it more clear.
[13:33] bdx: If need be, I can deprecate the existing flag method for a bit so that it's not a hard break
=== freyes__ is now known as freyes
[14:43] cory_fu: nah, it's cool, I'll update the bits I have
[15:25] beisner: Hey, I just tagged 0.6.0rc1 a.k.a. proposed for charms.reactive. I'll let that stew for this week, but is there anything else we need to do to get it run through, e.g., the OpenStack CI?
[16:00] hi
[16:00] question: does anyone have the problem that when a container is created in lxd, the container can only ping the host and not the rest of the network?
=== disposable3 is now known as disposable2
=== frankban is now known as frankban|afk
[19:39] hi cory_fu - thx for the heads up. we'll discuss in our daily standup.
[19:42] kwmonroe: sup
[19:42] kwmonroe: do you hit this http://paste.ubuntu.com/26066506/
[19:43] with your graylog bundle?
[19:43] I feel like I've filed a bug on that before for the elasticsearch charm
[19:43] I want that thing gone
[19:43] It's a huge burden that constantly causes me issues
[19:44] every corner, eh... I think I have the fix for this in my fork of the upstream charm
[19:45] don't know why I thought the upstream elasticsearch charm would work
[19:45] yup yup bdx
[19:45] that's https://bugs.launchpad.net/elasticsearch-charm/+bug/1714393
[19:45] Bug #1714393: ERROR! lookup plugin (dns) not found
[19:46] oh I think it's different
[19:46] bdx: the dns plugin packed into elasticsearch is too old and doesn't conform to the new plugin api, which means ES can't find it, which causes the firewall logic to fail.
[19:46] ahh right
[19:46] ok
[19:46] you worked around it with a dig plugin (iirc)
[19:47] i worked around it by disabling the ES firewall, which skips the firewall logic.
[19:47] proving once again that firewalls are stupid and we should all just trust one another with our public ipv6 addresses.
[19:48] i see
[19:48] coke, stop looking at pepsi traffic!
[19:48] "ok". problem solved ;)
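A minimal sketch of the workaround kwmonroe describes above (disabling the charm-managed firewall so the broken dns lookup plugin is never exercised). The firewall_enabled key is an assumption about the elasticsearch charm's options, not confirmed by the log; list the real config with `juju config elasticsearch` before relying on it.

    # Assumed config key -- check `juju config elasticsearch` for the actual name.
    juju config elasticsearch firewall_enabled=false
    # Wait for the units to settle back to active/idle.
    juju status elasticsearch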
[19:48] yea, I just wasn't seeing the dig error in my logs
[19:49] bdx: you may have made other changes to the firewaller in the elasticsearch charm that doesn't fail if/when the dns/dig plugins fail
[19:49] ahh ok
[19:49] http://paste.ubuntu.com/26066536/
[19:49] running it manually exposes the underlying error
[19:50] bdx: line 726 of your first paste shows the underlying error too :) http://paste.ubuntu.com/26066506/
[19:50] ahh I see now, thx thx thx
[19:51] possibly I'll get some tests and polish in my new elasticsearch charm and we can look to get it swapped with upstream after the new endpoints stuff lands
[19:52] +100 bdx
[20:11] hey, kwmonroe
[20:14] thanks for the +100
[20:14] but also
[20:14] https://imgur.com/a/lUAGr
[20:14] I think I see the disconnect
[20:15] that is leading to graylog seeming like it's not working
[20:16] https://imgur.com/a/XzOZW
[20:16] the elasticsearch node that graylog sees is itself
[20:16] lol
[20:16] "hey there are no logs!"
[20:16] go figure
[20:17] kwmonroe: not sure if you have gotten past that or if you are hitting that too
[20:18] just for kicks, I'm going to point filebeat at graylog and see what gives
[20:19] http://paste.ubuntu.com/26066668/ <- from graylog
[20:19] it's listening
[20:20] yeah bdx, you'll need to do "juju config filebeat logstash_hosts=GRAYLOG_IP:5044"
[20:21] ohh, not 9200?
[20:21] negative bdx, you want to link filebeat to the graylog beats input
[20:22] bdx: if you go to the graylog interface, System->Inputs, you'll see a beats input
[20:22] ahhhh
[20:22] I see it
[20:22] and that'll be bound to 0.0.0.0:5044
[20:24] bdx: i just learned this today. i assumed graylog would pull logs out of ES, so the path would go Filebeat->ES->graylog, but that's not how it works. graylog is more like a logstash replacement, so it goes Filebeat->graylog->ES
[20:24] meaning filebeat needs to connect to the graylog beats input (which is done by the filebeat logstash_hosts config, and not via relation... yet)
[20:24] got it got it
[20:25] then the elasticsearch charm/application is not needed
[20:25] ?
[20:25] ok, I see logs!
[20:25] yes
[20:26] I have been eyeing this thing for a few months now, trialing it every few days when it catches my interest and always failing for like 1 of 50 reasons
[20:26] lol
[20:26] this is great to know the full path
[20:26] :)
[20:27] kwmonroe: priceless collab on that, thank you
[20:27] now we just have to figure out how to make it better
[20:33] bdx: graylog does require ES, so you can't just get rid of it. if the internet taught me anything today, it's that graylog presents itself as an ES cluster node to take advantage of ES indexing. as a cluster member, it can also read/write really fast to ES (non cluster members would have to hit the api and (de)serialize json all the time).
[20:34] right right
[20:34] but it runs es
[20:34] whatchu talkin bout willis?
[20:34] oh, so what you are saying is just use juju to deploy an es cluster next to it to hook it up to
[20:34] so, like
[20:34] if you deploy graylog
[20:34] and look at the running processes
[20:35] the java/elasticsearch is running on graylog
[20:35] and it only seems to know about the elasticsearch node that is itself
[20:35] that ain't because of graylog bdx. did you deploy both gl and es to the same unit?
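Pulling the filebeat wiring from the exchange above into one place, as a sketch: the filebeat application name and the 5044 beats input come from the log, while graylog/0 as the unit name is an assumption.

    # Find the graylog unit's address and point filebeat at the graylog beats
    # input on 5044 (not at elasticsearch on 9200).
    GRAYLOG_IP=$(juju run --unit graylog/0 'unit-get private-address')
    juju config filebeat logstash_hosts="${GRAYLOG_IP}:5044"
    # Read the value back to confirm it stuck.
    juju config filebeat logstash_hosts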
[20:35] no
[20:35] don't you lie to me
[20:35] it gets that automatically
[20:36] that's what I was trying to show you ^^^^^
[20:36] i have a graylog deployed, and i don't see any elastic java procs on my gl unit
[20:36] with the http://paste.ubuntu.com/26066668/
[20:36] really
[20:36] ok
[20:36] so
[20:37] right bdx -- i'm assuming you're running that netstat on 172.31.103.25, and your ES node is 172.31.103.161
[20:38] also, lol @ -peanut. i've never seen that
[20:39] anyway bdx, that connection to 9200 is on a separate machine. graylog is connecting to it, but it's not running an embedded ES or anything like that
[20:39] ok
[20:39] I think I follow
[20:39] so, how do you explain this
[20:41] ooooo
[20:41] I think I see
[20:42] this https://imgur.com/a/3lD5G
[20:42] is not indicative of an elasticsearch node, but a graylog node
[20:42] because graylog is a clustering type service
[20:42] ok
[20:43] I was so backwards
[20:43] thank you for enlightening me
[20:43] you got it
[20:43] the elasticsearch config is in there somewhere
[20:44] that's right bdx -- in the graylog interface, System->Overview will show you the ES config
[20:45] ahh I see it now
[20:45] which graylog knows about because it's an ES cluster member
[20:45] I was looking in the wrong place initially
[20:45] sudo cat /var/snap/graylog/common/server.conf | grep elasticsearch
[20:45] I see
[20:45] that totally makes sense
[20:58] is it possible, when i deploy containers with juju, to use a local image instead of copying the image for juju/xenial/amd64 from https://cloud-images.ubuntu.com/releases?
[21:43] navinsridharan: ping
[21:46] hi
[21:47] Is this Heather
[21:47] navinsridharan: yes - it's heather
[21:47] Okay great
[21:47] navinsridharan: can I get a pastebin of the debug juju bootstrap output?
[21:47] Yeah will send you in a sec
[21:50] https://pastebin.com/QMkrhahm
[21:51] navinsridharan: looking
[21:51] Thanks
[21:55] navinsridharan: what do the nova logs say about instance 7833ed83-345a-4796-89cb-086bf01bc78b? is there more information about why the "No valid host was found" error occurred?
[21:56] navinsridharan: to confirm, the uuid of the "private" network is 383fd64b-4c4c-497d-809d-3bcf8ed72e1c?
[21:57] Yes that's correct
[21:57] I checked by logging into the Openstack GUI
[21:58] In the case of instance 7833ed83-345a-4796-89cb-086bf01bc78b, I don't see any log written into the nova-compute.log file
[21:58] Is there any other log file that I should be checking?
[21:59] navinsridharan: check all the /var/log/nova/*.log files
[22:00] navinsridharan: "No valid host" should be in the logs also
[22:00] I only see two log files under /var/log/nova --> nova-compute.log and privsep-helper.log (empty)
[22:01] I don't see "No valid host" written into the log
[22:01] navinsridharan: so the question is where is it... hmmm
[22:01] but if I boot an instance manually in the Openstack cloud using the GUI, I see the log written into nova-compute.log
[22:03] navinsridharan: are the credentials and openstack endpoint given to juju the same as what you're using in the OpenStack GUI?
[22:04] Yes I did..
[22:04] If juju was not able to contact the endpoint, then it wouldn't be able to resolve the private network's UUID
[22:04] navinsridharan: true... i'm just wondering if it's the same openstack cloud...
[22:04] but we do see the UUID of the "private" network in the --debug log
[22:05] navinsridharan: the instance juju created should be in the log
[22:05] Where do I check for this?
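A sketch of the log search hml suggests above (checking every nova log for the scheduler failure). The /var/log/nova path is from the conversation; the juju ssh form assumes the openstack-on-juju layout that is confirmed later in the log.

    # Search every nova log on a nova host for the scheduler failure.
    sudo grep "No valid host" /var/log/nova/*.log
    # When openstack itself was deployed with juju, the same search on the cloud
    # controller unit (named later in the log) looks like:
    juju ssh nova-cloud-controller/0 'sudo grep "No valid host" /var/log/nova/*.log'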
[22:05] navinsridharan: same place as the other logs - where you found the other instance.
[22:06] Yeah, but it's just weird that it doesn't write this into that log
[22:06] navinsridharan: try the juju bootstrap with --keep-broken; this will cause the instance juju created not to be deleted
[22:06] I am kind of completely stuck at this issue for about 2 weeks now, not able to move forward
[22:06] navinsridharan: then we might be able to see more info from the cli or the gui
[22:06] navinsridharan: sorry - this is frustrating, I know.
[22:07] navinsridharan: I've been asking a few others for some hints; so far it looks like this should work... we just need to find the little thing that's different
[22:07] like instead of "--debug" use "--keep-broken"
[22:07] navinsridharan: use both
[22:07] That's so nice of you, thank you so much
[22:08] Let me try and get back in a sec
[22:11] https://www.irccloud.com/pastebin/sI8pg2oB/
[22:12] I have copied from the point it says "using network id....."
[22:13] navinsridharan: now look at the openstack juju instance in the GUI - can you see details of the failure?
[22:14] Quick question though ---> should I enter the credentials for the Openstack cloud in here (/home/ubuntu/.local/share/juju/controllers.yaml)?
[22:15] navinsridharan: i don't recommend editing the files - run `juju autoload-credentials` after sourcing your novarc file
[22:15] I don't see any instance failure in the Openstack GUI
[22:16] ubuntu@ubuntu-ProLiant-DL380-G6:~$ sudo juju autoload-credentials
[22:16] Looking for cloud and credential information locally...
[22:16] No cloud credentials found.
[22:16] navinsridharan: do you have a novarc file you can source
[22:17] I do have one sitting under /joid_config
[22:17] in the name "admin-openrc"
[22:18] navinsridharan: the autoload-credentials command will look for the environment variables used for OpenStack authentication and import them for juju
[22:18] navinsridharan: though it should have been done already to get as far as you have
[22:19] True, but it looks for a ".yaml" file, correct?
[22:19] I manually fed the credentials using "juju add-cloud"
[22:20] navinsridharan: that's all under the covers of juju, so to speak. as a user, you can verify them with "juju credentials --format yaml --show-secrets"
[22:21] ubuntu@ubuntu-ProLiant-DL380-G6:~/joid_config$ juju credentials --format yaml --show-secrets
[22:21] credentials:
[22:21]   openstack:
[22:21]     openstack:
[22:21]       auth-type: userpass
[22:21]       password: openstack
[22:21]       project-domain-name: admin_domain
[22:21]       tenant-name: admin
[22:21]       user-domain-name: admin_domain
[22:21]       username: admin
[22:21]   opnfv-virtualpod1-maas:
[22:21]     opnfv-credentials:
[22:22] navinsridharan: those should be fine.
[22:23] navinsridharan: i know why we couldn't see the instance in the gui - there's a juju bug :-/ for openstack.
[22:24] Ohh I see, I thought this bug was fixed in Juju 2.0?
[22:24] navinsridharan: this one is specific to keep-broken
[22:24] navinsridharan: not the rest of it
[22:24] Ohh I see, okay
[22:25] Counting on you..... :)
[22:26] navinsridharan: hold on a sec
[22:26] sure
[22:36] navinsridharan: my personal openstack is busted, but there should be a bunch of other nova logs - could they be on a different VM? from nova-compute.log - they are in my config
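The credential and bootstrap flow from the exchange above, gathered into one sketch. The openrc path and the cloud name "openstack" come from the log; the controller name is a placeholder.

    # Import the OpenStack credentials from the environment rather than editing
    # juju's YAML files by hand.
    source /joid_config/admin-openrc
    juju autoload-credentials
    juju credentials --format yaml --show-secrets
    # Re-run the bootstrap with debug output and keep the failed instance around
    # for inspection ("test" is a placeholder controller name).
    juju bootstrap openstack test --debug --keep-broken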
[22:43] sorry, missed your message
[22:44] navinsridharan: i filed a bug on the one keep-broken problem: https://bugs.launchpad.net/juju/+bug/1735013
[22:44] Bug #1735013: openstack provider deletes instance when keep-broken used during bootstrap
[22:44] There are only two VMs where the control and compute logs are hosted
[22:44] I checked both the locations
[22:44] navinsridharan: hrmm...
[22:47] navinsridharan: can you look at the security groups with the admin-openrc credentials from the CLI? the juju-created ones should still be there.
[22:48] Yes I do see them there
[22:49] navinsridharan: that's good news... do they show up in the neutron logs?
[22:51] neutron-api/0* active idle 2/lxd/2 192.168.122.183 9696/tcp Unit is ready
[22:51] neutron-gateway/0* active idle 0 192.168.122.174 Unit is ready
[22:52] I see two units in the name of neutron
[22:52] which one am I supposed to log into?
[22:55] navinsridharan: try both? i'm blanking on the specific one
[22:55] navinsridharan: did you deploy openstack with juju?
[22:56] Yes I did..
[22:57] navinsridharan: the nova logs with instance info would be on nova-cloud-controller/0
[22:58] I see a bunch of *.log files under /var/log/nova on nova-cloud-controller/0
[22:58] Is there any specific file you expect me to check?
[22:59] got it
[22:59] navinsridharan: i'd just grep all of them
[22:59] I see "No valid host found"
[22:59] by grepping
[22:59] Yeah did the same
[22:59] I see them in the nova-conductor.log file
[23:00] navinsridharan: so then that file should have the info we're looking for around where the "No valid host" is located
[23:03] https://www.irccloud.com/pastebin/W2vCpvmn/nova-conductor.log
[23:04] navinsridharan: now i have to laugh a little - reason="" :-)
[23:04] is it empty??
[23:05] navinsridharan: there's more info - i can hopefully track it down by the trace
[23:05] navinsridharan: let me talk to some folks with this info and get back to you by email, hopefully tomorrow, ok?
[23:06] Thanks, just can't wait for you to kill this :)
[23:06] navinsridharan: me too!
[23:06] Sure
[23:06] Is this info more than enough or would you be needing anything else, Heather?
[23:07] navinsridharan: ttyl
[23:07] navinsridharan: nothing specific is coming to mind right now
[23:07] Sure, thanks once again for guiding me through, appreciate it
[23:07] take care
[23:07] Hoping to hear from you on something positive by tomorrow :)
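A sketch of pulling the relevant conductor trace off the controller unit so it can be shared. The unit name and log path come from the log above; the grep context sizes and the /tmp copy step are assumptions for illustration.

    # Show some context around each scheduler failure in the conductor log.
    juju ssh nova-cloud-controller/0 'sudo grep -B 5 -A 20 "No valid host" /var/log/nova/nova-conductor.log'
    # To copy the whole log off the unit, make a world-readable copy first, then scp it.
    juju ssh nova-cloud-controller/0 'sudo cp /var/log/nova/nova-conductor.log /tmp/ && sudo chmod a+r /tmp/nova-conductor.log'
    juju scp nova-cloud-controller/0:/tmp/nova-conductor.log ./nova-conductor.log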