[08:45] <assaf> @ryebot hi
[08:48] <assaf> @ryebot here is my history: I installed the juju controller and added machines manually, then I downloaded the charms, installed the applications with juju (with machines set to 0), and manually created all the relations
[08:48] <assaf> @ryebot then i got stuck with missing resources like flannel-amd64
[08:49] <assaf> @ryebot so resources i downloaded from the juju charm page and attached to the application
[08:49] <assaf> @ryebot but charms i downloaded using snap download kube-proxy for example
[08:50] <assaf> @ryebot i got the etcd cluster and master with flannel installed, but the worker isn't loading kube-proxy and kubelet because of configuration issues
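[editor's note] A sketch of the manual flow assaf describes, assuming the Juju 2.x-era `charm pull` and `juju attach` commands (the latter was later renamed `juju attach-resource`); the charm and resource file names below are illustrative only. Note that `snap download` fetches snap packages, not charms, which may be part of the problem described above:

```shell
# Sketch only -- names and paths are illustrative, not verified.
# Download the charm itself from the charm store (not via `snap download`):
charm pull cs:~containers/flannel

# Deploy to an existing, manually added machine (machine 0 here):
juju deploy ./flannel --to 0

# Attach a resource downloaded from the charm's page:
juju attach flannel flannel-amd64=./flannel-amd64.tar.gz
```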
[10:50] <gizmo__> hello. I'm trying to deploy a bundle with a charm with terms, and this is the error I'm getting. https://gist.github.com/gizmo693/5a4fc5235da987a4f64e378e1850dd62
[13:31] <cory_fu> bdx: Not sure if you're around this early, but I have an update on the Endpoints branch of reactive.  We're going to cut a dev release today, run it through CI for a week, and then release it for real.  However, we're going to make one small change that will break things.  We're going to rename Endpoint.flag to Endpoint.expand_name to make it more clear.
[13:33] <cory_fu> bdx: If need be, I can deprecate the existing flag method for a bit so that it's not a hard break
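[editor's note] To illustrate the rename cory_fu describes, here is a toy sketch of what an `expand_name`-style helper does: it substitutes the endpoint's relation name into a flag template. This is NOT the real charms.reactive implementation; the class and behavior here are assumptions for illustration only, including the deprecated-alias idea from the message above.

```python
# Toy illustration of the Endpoint.flag -> Endpoint.expand_name rename
# discussed above. This is not the real charms.reactive code; it only
# sketches the name-expansion idea.
class Endpoint:
    def __init__(self, endpoint_name):
        self.endpoint_name = endpoint_name

    def expand_name(self, flag_template):
        # Substitute this endpoint's relation name into the flag template.
        return flag_template.replace('{endpoint_name}', self.endpoint_name)

    # Keeping a deprecated alias would soften the break, as suggested:
    flag = expand_name


ep = Endpoint('db')
print(ep.expand_name('endpoint.{endpoint_name}.joined'))  # endpoint.db.joined
```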
[14:43] <bdx> cory_fu: nah, its cool, I'll update the bits I have
[15:25] <cory_fu> beisner: Hey, I just tagged 0.6.0rc1 a.k.a. proposed for charms.reactive.  I'll let that stew for this week, but is there anything else we need to do to get it run through, e.g. the OpenStack CI?
[16:00] <jose-phi_> hi
[16:00] <jose-phi_> question: does anyone have the problem that when the container
[16:00] <jose-phi_> is created in lxd, the container can only ping the host
[16:00] <jose-phi_> and not the rest of the network?
[19:39] <beisner> hi cory_fu - thx for the heads up.  we'll discuss in our daily standup.
[19:42] <bdx> kwmonroe: sup
[19:42] <bdx> kwmonroe: do you hit this http://paste.ubuntu.com/26066506/
[19:43] <bdx> with your graylog bundle?
[19:43] <bdx> I feel like I've filed a bug on that before for the elasticsearch charm
[19:43] <bdx> I want that thing gone
[19:43] <bdx> It's a huge burden that constantly causes me issues
[19:44] <bdx> at every corner, eh ... I think I have the fix for this in my fork of the upstream charm
[19:45] <bdx> don't know why I thought the upstream elasticsearch charm would work
[19:45] <kwmonroe> yup yup bdx
[19:45] <kwmonroe> that's https://bugs.launchpad.net/elasticsearch-charm/+bug/1714393
[19:45] <mup> Bug #1714393: ERROR! lookup plugin (dns) not found <conjure> <Elasticsearch Charm:New> <https://launchpad.net/bugs/1714393>
[19:46] <bdx> oh I think its different
[19:46] <kwmonroe> bdx: the dns plugin packed into elasticsearch is too old and doesn't conform to the new plugin api, which means ES can't find it, which causes the firewall logic to fail.
[19:46] <bdx> ahh right
[19:46] <bdx> ok
[19:46] <kwmonroe> you worked around it with a dig plugin (iirc)
[19:47] <kwmonroe> i worked around it by disabling the ES firewall, which skips the firewall logic.
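[editor's note] kwmonroe's workaround could presumably be applied through charm config. Assuming the elasticsearch charm exposes a `firewall_enabled` option (the option name is a guess based on the discussion, not verified against the charm), it would look something like:

```shell
# Assumed workaround sketch -- `firewall_enabled` is an unverified
# option name; check `juju config elasticsearch` for the real one.
juju config elasticsearch firewall_enabled=false
```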
[19:47] <kwmonroe> proving once again that firewalls are stupid and we should all just trust one another with our public ipv6 addresses.
[19:48] <bdx> i see
[19:48] <kwmonroe> coke, stop looking at pepsi traffic!
[19:48] <kwmonroe> "ok".  problem solved ;)
[19:48] <bdx> yea, I just wasn't seeing the dig error in my logs
[19:49] <kwmonroe> bdx: you may have made other changes to the firewaller in the elasticsearch charm so that it doesn't fail if/when the dns/dig plugins fail
[19:49] <bdx> ahh ok
[19:49] <bdx> http://paste.ubuntu.com/26066536/
[19:49] <bdx> running it manually exposes the underlying error
[19:50] <kwmonroe> bdx: line 726 of your first paste shows the underlying error too :)  http://paste.ubuntu.com/26066506/
[19:50] <bdx> ahh I see now, thx thx thx
[19:51] <bdx> possibly I'll get some tests and polish into my new elasticsearch charm and we can look to get it swapped with upstream after the new endpoints stuff lands
[19:52] <kwmonroe> +100 bdx
[20:11] <bdx> hey, kwmonroe
[20:14] <bdx> thanks for the +100
[20:14] <bdx> but also
[20:14] <bdx> https://imgur.com/a/lUAGr
[20:14] <bdx> I think I see the disconnect
[20:15] <bdx> that is leading to graylog seeming like it's not working
[20:16] <bdx> https://imgur.com/a/XzOZW
[20:16] <bdx> the elasticsearch node that graylog sees is itself
[20:16] <bdx> lol
[20:16] <bdx> "hey there are no logs!"
[20:16] <bdx> go figure
[20:17] <bdx> kwmonroe: not sure if you have gotten past that or if you are hitting that too
[20:18] <bdx> just for kicks, I'm going to point filebeat at graylog and see what gives
[20:19] <bdx> http://paste.ubuntu.com/26066668/ <- from graylog
[20:19] <bdx> it's listening
[20:20] <kwmonroe> yeah bdx, you'll need to do "juju config filebeat logstash_hosts=GRAYLOG_IP:5044"
[20:21] <bdx> ohh, not 9200?
[20:21] <kwmonroe> negative bdx, you want to link filebeat to the graylog beats input
[20:22] <kwmonroe> bdx: if you go to the graylog interface, System->Inputs, you'll see a beats input
[20:22] <bdx> ahhhh
[20:22] <bdx> I see it
[20:22] <kwmonroe> and that'll be bound to 0.0.0.0:5044
[20:24] <kwmonroe> bdx: i just learned this today.  i assumed graylog would pull logs out of ES, so the path would go Filebeat->ES->graylog, but that's not how it works.  graylog is more like a logstash replacement, so it goes Filebeat->graylog->ES
[20:24] <kwmonroe> meaning filebeat needs to connect to the graylog beats input (which is done by filebeat logstash_hosts config, and not via relation... yet)
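[editor's note] The wiring kwmonroe describes, as a sketch (`GRAYLOG_IP` is a placeholder for the graylog unit's address; port 5044 is the beats input mentioned above):

```shell
# Point filebeat at the graylog beats input rather than ES (port 9200).
# Data path: filebeat -> graylog (beats input on :5044) -> elasticsearch.
juju config filebeat logstash_hosts=GRAYLOG_IP:5044
```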
[20:24] <bdx> got it got it
[20:25] <bdx> then the elasticsearch charm/application is not needed
[20:25] <bdx> ?
[20:25] <bdx> ok, I see logs!
[20:25] <bdx> yes
[20:26] <bdx> I have been eyeing this thing for a few months now, trialing it every few days when it catches my interest, and just always failing due to like 1 of 50 reasons
[20:26] <bdx> lol
[20:26] <bdx> this is great to know the full path
[20:26] <bdx> :)
[20:27] <bdx> kwmonroe: priceless collab on that, thank you
[20:27] <bdx> now we just have to figure out how to make it better
[20:33] <kwmonroe> bdx: graylog does require ES, so you can't just get rid of it.  if the internet taught me anything today, it's that graylog presents itself as an ES cluster node to take advantage of ES indexing.  as a cluster member, it can also read/write really fast to ES (non cluster members would have to hit the api and (de)serialize json all the time).
[20:34] <bdx> right right
[20:34] <bdx> but it runs es
[20:34] <kwmonroe> whatchu talkin bout willis?
[20:34] <bdx> oh, so what you are saying is just use juju to deploy an es cluster next to it to hook it up to
[20:34] <bdx> so, like
[20:34] <bdx> if you deploy graylog
[20:34] <bdx> and look at the running processes
[20:35] <bdx> the java/elasticsearch is running on graylog
[20:35] <bdx> and it only seems to know about the elasticsearch node that is itself
[20:35] <kwmonroe> that ain't because of graylog bdx.  did you deploy both gl and es to the same unit?
[20:35] <bdx> no
[20:35] <kwmonroe> don't you lie to me
[20:35] <bdx> it gets that automatically
[20:36] <bdx> that's what I was trying to show you ^^^^^
[20:36] <kwmonroe> i have a graylog deployed, and i don't see any elastic java procs on my gl unit
[20:36] <bdx> with the http://paste.ubuntu.com/26066668/
[20:36] <bdx> really
[20:36] <bdx> ok
[20:36] <bdx> so
[20:37] <kwmonroe> right bdx -- i'm assuming you're running that netstat on 172.31.103.25, and your ES node is 172.31.103.161
[20:38] <kwmonroe> also, lol @ -peanut.  i've never seen that
[20:39] <kwmonroe> anyway bdx, that connection to 9200 is on a separate machine.  graylog is connecting to it, but it's not running an embedded ES or anything like that
[20:39] <bdx> ok
[20:39] <bdx> I think I follow
[20:39] <bdx> so, how do you explain this
[20:41] <bdx> ooooo
[20:41] <bdx> I think I see
[20:42] <bdx> this https://imgur.com/a/3lD5G
[20:42] <bdx> is not indicative of an elasticsearch node, but a graylog node
[20:42] <bdx> because graylog is a clustering type service
[20:42] <bdx> ok
[20:43] <bdx> I was so backwards
[20:43] <bdx> thank you for enlightening me
[20:43] <kwmonroe> you got it
[20:43] <bdx> the elasticsearch config is in there somewhere
[20:44] <kwmonroe> that's right bdx -- in the graylog interface, System->Overview will show you the ES config
[20:45] <bdx> ahh I see it now
[20:45] <kwmonroe> which graylog knows about because it's an ES cluster member
[20:45] <bdx> I was looking in the wrong place initially
[20:45] <bdx> sudo cat /var/snap/graylog/common/server.conf | grep elasticsearch
[20:45] <bdx> I see
[20:45] <bdx> that totally makes sense
[20:58] <jose-phi_> is it possible, when I deploy containers with juju,
[20:58] <jose-phi_> to use a local image instead of copying the image for juju/xenial/amd64 from https://cloud-images.ubuntu.com/releases
[20:58] <jose-phi_> ?
[21:43] <hml> navinsridharan: ping
[21:46] <navinsridharan> hi
[21:47] <navinsridharan> Is this Heather
[21:47] <hml> navinsridharan: yes - it’s heather
[21:47] <navinsridharan> Okay great
[21:47] <hml> navinsridharan: can I get a pastebin of the debug juju bootstrap output?
[21:47] <navinsridharan> Yeah will send you in a sec
[21:50] <navinsridharan> https://pastebin.com/QMkrhahm
[21:51] <hml> navinsridharan: looking
[21:51] <navinsridharan> Thanks
[21:55] <hml> navinsridharan: what do the nova logs say about instance 7833ed83-345a-4796-89cb-086bf01bc78b?  is there more information about why the ‘No valid host was found’ error?
[21:56] <hml> navinsridharan: to confirm the uuid of the “private” network is 383fd64b-4c4c-497d-809d-3bcf8ed72e1c?
[21:57] <navinsridharan> Yes that's correct
[21:57] <navinsridharan> I checked by logging into Openstack GUI
[21:58] <navinsridharan> In case of instance 7833ed83-345a-4796-89cb-086bf01bc78b , I don't see any log written into nova-compute.log file
[21:58] <navinsridharan> Is there any other log file that I should be checking for??
[21:59] <hml> navinsridharan: check all the /var/log/nova/*.log files
[22:00] <hml> navinsridharan: “No valid host” should be in the logs also
[22:00] <navinsridharan> I only see two log files under /var/log/nova -- > nova-compute.log and privsep-helper.log ( empty)
[22:01] <navinsridharan> I don't see "No valid host" written into the log
[22:01] <hml> navinsridharan: so the question is where is it…  hmmm
[22:01] <navinsridharan> but if I boot an instance manually in Openstack cloud using GUI, I see the log written into nova-compute.log
[22:03] <hml> navinsridharan: are the credentials and openstack endpoint given to juju the same as what you’re using in the OpenStack GUI?
[22:04] <navinsridharan> Yes I did..
[22:04] <navinsridharan> If Juju was not able to contact the endpoint, then it wouldn't be able to resolve the private network's UUID
[22:04] <hml> navinsridharan: true… i’m just wondering if it’s the same openstack cloud…
[22:04] <navinsridharan> but we do see UUID of "private" network in the --debug log
[22:05] <hml> navinsridharan: the instance juju created should be in the log
[22:05] <navinsridharan> Where do I check for this?
[22:05] <hml> navinsridharan: same place as the other logs - where you found the other instance.
[22:06] <navinsridharan> Yeah, but it's just weird that it doesn't write this into that log
[22:06] <hml> navinsridharan: try the juju bootstrap with —keep-broken, this will cause the instance juju created not to be deleted
[22:06] <navinsridharan> I have been completely stuck on this issue for about 2 weeks now, not able to move forward
[22:06] <hml> navinsridharan: then we might be able to see from the cli or the gui more info
[22:06] <hml> navinsridharan: sorry - this is frustrating I know.
[22:07] <hml> navinsridharan: I’ve been asking a few others for some hints; so far it looks like this should work… we just need to find the little thing that’s different
[22:07] <navinsridharan> like instead of "--debug" use "--keep-broken"
[22:07] <hml> navinsridharan: use both
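[editor's note] Combining the two flags as hml suggests, the retry would look something like the following (the cloud and controller names are placeholders):

```shell
# --debug gives verbose bootstrap output; --keep-broken is meant to
# leave the failed instance around for inspection instead of deleting it.
juju bootstrap openstack mycontroller --debug --keep-broken
```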
[22:07] <navinsridharan> That's so nice of you, thank you so much
[22:08] <navinsridharan> Let me try and get back in a sec
[22:11] <navinsridharan> https://www.irccloud.com/pastebin/sI8pg2oB/
[22:12] <navinsridharan> I have copied from the point it says "using network id....."
[22:13] <hml> navinsridharan: now look at the openstack juju instance in the GUI - can you see details of the failure?
[22:14] <navinsridharan> Quick question though --- > should I enter the credentials for Openstack cloud in here (  /home/ubuntu/.local/share/juju/controllers.yaml)
[22:15] <hml> navinsridharan: i don’t recommend editing the files - run `juju autoload-credentials` after sourcing your novarc file
[22:15] <navinsridharan> I don't see any instance failure in the  Openstack GUI
[22:16] <navinsridharan> ubuntu@ubuntu-ProLiant-DL380-G6:~$ sudo juju autoload-credentials
[22:16] <navinsridharan> Looking for cloud and credential information locally...
[22:16] <navinsridharan> No cloud credentials found.
[22:16] <hml> navinsridharan: do you have a novarc file you can source
[22:17] <navinsridharan> I do have one sitting under /joid_config
[22:17] <navinsridharan> in the name "admin-openrc"
[22:18] <hml> navinsridharan: the autoload-credentials command will look for the environment variables used for OpenStack authentication to use and import them for juju
[22:18] <hml> navinsridharan: though it should have been done already to get as far as you have
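[editor's note] The sequence hml describes, sketched below; the openrc path comes from navinsridharan's earlier message. One possible reason the earlier attempt found nothing is that it was run under `sudo`, which resets the environment, so the sourced `OS_*` variables would not be visible:

```shell
# Source the OpenStack rc file so the OS_* variables are set, then let
# juju pick them up -- run as the normal user, not under sudo.
source /joid_config/admin-openrc
juju autoload-credentials
```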
[22:19] <navinsridharan> True, but it looks for a ".yaml" file, correct?
[22:19] <navinsridharan> I manually fed the credentials using "juju add-cloud"
[22:20] <hml> navinsridharan: that’s all under the covers of juju so to speak.  as a user, you can verify them with “juju credentials --format yaml --show-secrets”
[22:21] <navinsridharan> ubuntu@ubuntu-ProLiant-DL380-G6:~/joid_config$ juju credentials --format yaml --show-secrets
[22:21] <navinsridharan> credentials:
[22:21] <navinsridharan>   openstack:
[22:21] <navinsridharan>     openstack:
[22:21] <navinsridharan>       auth-type: userpass
[22:21] <navinsridharan>       password: openstack
[22:21] <navinsridharan>       project-domain-name: admin_domain
[22:21] <navinsridharan>       tenant-name: admin
[22:21] <navinsridharan>       user-domain-name: admin_domain
[22:21] <navinsridharan>       username: admin
[22:21] <navinsridharan>   opnfv-virtualpod1-maas:
[22:21] <navinsridharan>     opnfv-credentials:
[22:22] <hml> navinsridharan:   those should be fine.
[22:23] <hml>  navinsridharan:  i know why we couldn’t see the instance in the gui - there’s a juju bug  :-/ for openstack.
[22:24] <navinsridharan> Ohh I see, I thought this bug was fixed in Juju 2.0?
[22:24] <hml> navinsridharan:  this one is specific to keep-broken
[22:24] <hml> navinsridharan:   not the rest of it
[22:24] <navinsridharan> Ohh I see, okay
[22:25] <navinsridharan> Counting on you..... :)
[22:26] <hml> navinsridharan:  hold on a sec
[22:26] <navinsridharan> sure
[22:36] <hml> navinsridharan: my personal openstack is busted, but there should be a bunch of other nova logs besides nova-compute.log - could they be on a different VM? they are there in my config
[22:43] <navinsridharan> sorry, missed your message
[22:44] <hml> navinsridharan: i filed a bug on the one keep-broken problem: https://bugs.launchpad.net/juju/+bug/1735013
[22:44] <mup> Bug #1735013: openstack provider deletes instance when keep-broken used during bootstrap <openstack-provider> <juju:Triaged> <https://launchpad.net/bugs/1735013>
[22:44] <navinsridharan> There are only two VMs where the control and compute logs are hosted
[22:44] <navinsridharan> I checked both the locations
[22:44] <hml> navinsridharan: hrmm…
[22:47] <hml> navinsridharan: can you look at the security groups with the admin-openrc credentials from the CLI?  the juju created ones should still be there.
[22:48] <navinsridharan> Yes I do see them there
[22:49] <hml> navinsridharan: that’s good news… do they show up in the neutron logs?
[22:51] <navinsridharan> neutron-api/0*            active    idle   2/lxd/2  192.168.122.183  9696/tcp                                 Unit is ready
[22:51] <navinsridharan> neutron-gateway/0*        active    idle   0        192.168.122.174                                           Unit is ready
[22:52] <navinsridharan> I see  two units in the name of neutron
[22:52] <navinsridharan> which one am I supposed to login?
[22:55] <hml> navinsridharan: try both?  i’m blanking on the specific one
[22:55] <hml> navinsridharan: did you deploy openstack with juju?
[22:56] <navinsridharan> Yes I did..
[22:57] <hml> navinsridharan:the nova logs with instance info would be on nova-cloud-controller/0
[22:58] <navinsridharan> I see a bunch of *.log under /var/log/nova on nova-cloud-controller/0
[22:58] <navinsridharan> Is there any specific file you expect me to check??
[22:59] <navinsridharan> got it
[22:59] <hml> navinsridharan: i’d just grep all of them
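[editor's note] hml's suggestion, sketched; this assumes `juju ssh` access to the unit, and `grep -l` just lists which log files contain the match:

```shell
# Search every nova log on the nova-cloud-controller unit for the error.
juju ssh nova-cloud-controller/0 'sudo grep -l "No valid host" /var/log/nova/*.log'
```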
[22:59] <navinsridharan> I see No valid host found
[22:59] <navinsridharan> by grepping
[22:59] <navinsridharan> Yeah did the same
[22:59] <navinsridharan> I see them in nova-conductor.log file
[23:00] <hml> navinsridharan: so then that file should have the info we’re looking for around where the No valid host is located
[23:03] <navinsridharan> https://www.irccloud.com/pastebin/W2vCpvmn/nova-conductor.log
[23:04] <hml> navinsridharan: now i have to laugh a little - reason=“”  :-)
[23:04] <navinsridharan> is it empty??
[23:05] <hml> navinsridharan: there’s more info - i can track it down hopefully by the trace
[23:05] <hml> navinsridharan: let me talk to some folks with this info and get back to you by email hopefully tomorrow ok?
[23:06] <navinsridharan> Thanks, just can't wait for you to kill this :)
[23:06] <hml> navinsridharan: me too!
[23:06] <navinsridharan> Sure
[23:06] <navinsridharan> Is this info more than enough, or would you be needing anything else, Heather?
[23:06] <hml> navinsridharan: ttyl
[23:07] <hml> navinsridharan: nothing specific is coming to mind right now
[23:07] <navinsridharan> Sure, thanks once again for guiding me through, appreciate it
[23:07] <navinsridharan> take care
[23:07] <navinsridharan> Hoping to hear something positive from you by tomorrow :)