/srv/irclogs.ubuntu.com/2017/11/28/#juju.txt

=== frankban|afk is now known as frankban
[08:45] <assaf@ryebot> hi
[08:48] <assaf@ryebot> here is my history: i installed the juju controller and add-machine manually, then i downloaded the charms and installed the applications with juju with machines set to 0, and manually created all the relations
[08:48] <assaf@ryebot> then i got stuck with missing resources like flannel-amd64
[08:49] <assaf@ryebot> so the resources i downloaded from the juju charm page and attached to the application
[08:49] <assaf@ryebot> but the charms i downloaded using snap download kube-proxy, for example
[08:50] <assaf@ryebot> i got the etcd cluster and master with flannel installed, but the worker isn't loading kube-proxy and kubelet because of configuration issues
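
In juju 2.x of this era, a downloaded resource is attached to a deployed application with `juju attach`. A minimal sketch of the workflow described above, assuming the worker is deployed as kubernetes-worker and the resource names match the charm store page (both names are assumptions):

    # download the snap-packaged pieces, then attach them as charm resources
    snap download kube-proxy
    snap download kubelet
    # application and resource names are assumptions; adjust to match `juju resources`
    juju attach kubernetes-worker kube-proxy=./kube-proxy_*.snap
    juju attach kubernetes-worker kubelet=./kubelet_*.snap
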
[10:50] <gizmo__> hello. I'm trying to deploy a bundle with a charm with terms and this is the error I'm getting. https://gist.github.com/gizmo693/5a4fc5235da987a4f64e378e1850dd62
[13:31] <cory_fu> bdx: Not sure if you're around this early, but I have an update on the Endpoints branch of reactive.  We're going to cut a dev release today, run it through CI for a week, and then release it for real.  However, we're going to make one small change that will break things.  We're going to rename Endpoint.flag to Endpoint.expand_name to make it more clear.
[13:33] <cory_fu> bdx: If need be, I can deprecate the existing flag method for a bit so that it's not a hard break
=== freyes__ is now known as freyes
[14:43] <bdx> cory_fu: nah, it's cool, I'll update the bits I have
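
The rename is mechanical on the charm side. A sketch of finding and updating call sites in a local layer tree (the paths and sed pattern are assumptions; review the diff before committing):

    # locate uses of the old Endpoint.flag method
    grep -rn '\.flag(' reactive/ lib/
    # mechanically rename them to expand_name
    sed -i 's/\.flag(/.expand_name(/g' reactive/*.py
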
[15:25] <cory_fu> beisner: Hey, I just tagged 0.6.0rc1 a.k.a. proposed for charms.reactive.  I'll let that stew for this week, but is there anything else we need to do to get it run through, e.g., the OpenStack CI?
[16:00] <jose-phi_> hi
[16:00] <jose-phi_> question: has anyone had the problem that when the container
[16:00] <jose-phi_> is created in lxd, the container can only ping the host
[16:00] <jose-phi_> and not the rest of the network?
=== disposable3 is now known as disposable2
=== frankban is now known as frankban|afk
[19:39] <beisner> hi cory_fu - thx for the heads up.  we'll discuss in our daily standup.
[19:42] <bdx> kwmonroe: sup
[19:42] <bdx> kwmonroe: do you hit this http://paste.ubuntu.com/26066506/
[19:43] <bdx> with your graylog bundle?
[19:43] <bdx> I feel like I've filed a bug on that before for the elasticsearch charm
[19:43] <bdx> I want that thing gone
[19:43] <bdx> it's a huge burden that constantly causes me issues
[19:44] <bdx> at every corner, eh ... I think I have the fix for this in my fork of the upstream charm
[19:45] <bdx> don't know why I thought the upstream elasticsearch charm would work
[19:45] <kwmonroe> yup yup bdx
[19:45] <kwmonroe> that's https://bugs.launchpad.net/elasticsearch-charm/+bug/1714393
[19:45] <mup> Bug #1714393: ERROR! lookup plugin (dns) not found <conjure> <Elasticsearch Charm:New> <https://launchpad.net/bugs/1714393>
[19:46] <bdx> oh I think it's different
[19:46] <kwmonroe> bdx: the dns plugin packed into elasticsearch is too old and doesn't conform to the new plugin api, which means ES can't find it, which causes the firewall logic to fail.
[19:46] <bdx> ahh right
[19:46] <bdx> ok
[19:46] <kwmonroe> you worked around it with a dig plugin (iirc)
[19:47] <kwmonroe> i worked around it by disabling the ES firewall, which skips the firewall logic.
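
For reference, the workaround kwmonroe describes is a one-liner, assuming the elasticsearch charm exposes a firewall_enabled option (run `juju config elasticsearch` to confirm it exists before relying on this):

    # skip the firewall logic that trips over the outdated dns plugin
    juju config elasticsearch firewall_enabled=false
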
[19:47] <kwmonroe> proving once again that firewalls are stupid and we should all just trust one another with our public ipv6 addresses.
[19:48] <bdx> i see
[19:48] <kwmonroe> coke, stop looking at pepsi traffic!
[19:48] <kwmonroe> "ok".  problem solved ;)
[19:48] <bdx> yea, I just wasn't seeing the dig error in my logs
[19:49] <kwmonroe> bdx: you may have made other changes to the firewaller in the elasticsearch charm so it doesn't fail if/when the dns/dig plugins fail
[19:49] <bdx> ahh ok
[19:49] <bdx> http://paste.ubuntu.com/26066536/
[19:49] <bdx> running it manually exposes the underlying error
[19:50] <kwmonroe> bdx: line 726 of your first paste shows the underlying error too :)  http://paste.ubuntu.com/26066506/
[19:50] <bdx> ahh I see now, thx thx thx
[19:51] <bdx> possibly I'll get some tests and polish in my new elasticsearch charm and we can look to get it swapped with upstream after the new endpoints stuff lands
[19:52] <kwmonroe> +100 bdx
[20:11] <bdx> hey, kwmonroe
[20:14] <bdx> thanks for the +100
[20:14] <bdx> but also
[20:14] <bdx> https://imgur.com/a/lUAGr
[20:14] <bdx> I think I see the disconnect
[20:15] <bdx> that is leading to graylog seeming like it's not working
[20:16] <bdx> https://imgur.com/a/XzOZW
[20:16] <bdx> the elasticsearch node that graylog sees is itself
[20:16] <bdx> lol
[20:16] <bdx> "hey there are no logs!"
[20:16] <bdx> go figure
[20:17] <bdx> kwmonroe: not sure if you have gotten past that or if you are hitting that too
[20:18] <bdx> just for kicks, I'm going to point filebeat at graylog and see what gives
[20:19] <bdx> http://paste.ubuntu.com/26066668/ <- from graylog
[20:19] <bdx> it's listening
[20:20] <kwmonroe> yeah bdx, you'll need to do "juju config filebeat logstash_hosts=GRAYLOG_IP:5044"
[20:21] <bdx> ohh, not 9200?
[20:21] <kwmonroe> negative bdx, you want to link filebeat to the graylog beats input
[20:22] <kwmonroe> bdx: if you go to the graylog interface, System->Inputs, you'll see a beats input
[20:22] <bdx> ahhhh
[20:22] <bdx> I see it
[20:22] <kwmonroe> and that'll be bound to 0.0.0.0:5044
[20:24] <kwmonroe> bdx: i just learned this today.  i assumed graylog would pull logs out of ES, so the path would go Filebeat->ES->graylog, but that's not how it works.  graylog is more like a logstash replacement, so it goes Filebeat->graylog->ES
[20:24] <kwmonroe> meaning filebeat needs to connect to the graylog beats input (which is done by the filebeat logstash_hosts config, and not via relation... yet)
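
Putting that together, pointing filebeat at the graylog beats input could look like the following sketch (resolving the address with unit-get is just one option; any way of finding the graylog unit's IP works):

    # look up the graylog unit's address, then aim filebeat's logstash output at the beats input
    GRAYLOG_IP=$(juju run --unit graylog/0 'unit-get public-address')
    juju config filebeat logstash_hosts="${GRAYLOG_IP}:5044"
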
[20:24] <bdx> got it got it
[20:25] <bdx> then the elasticsearch charm/application is not needed
[20:25] <bdx> ?
[20:25] <bdx> ok, I see logs!
[20:25] <bdx> yes
[20:26] <bdx> I have been eyeing this thing for a few months now, trialing it every few days when it catches my interest and always failing due to like 1 of 50 reasons
[20:26] <bdx> lol
[20:26] <bdx> it's great to know the full path
[20:26] <bdx> :)
[20:27] <bdx> kwmonroe: priceless collab on that, thank you
[20:27] <bdx> now we just have to figure out how to make it better
[20:33] <kwmonroe> bdx: graylog does require ES, so you can't just get rid of it.  if the internet taught me anything today, it's that graylog presents itself as an ES cluster node to take advantage of ES indexing.  as a cluster member, it can also read/write really fast to ES (non-cluster members would have to hit the api and (de)serialize json all the time).
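
So the working topology is Filebeat -> graylog -> ES, with graylog writing into a separate ES cluster. A minimal sketch of wiring that up with juju, assuming the graylog charm offers an elasticsearch relation:

    # deploy a standalone ES cluster and let graylog index into it
    juju deploy elasticsearch
    juju add-relation graylog elasticsearch
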
[20:34] <bdx> right right
[20:34] <bdx> but it runs es
[20:34] <kwmonroe> whatchu talkin bout willis?
[20:34] <bdx> oh, so what you are saying is just use juju to deploy an es cluster next to it to hook it up to
[20:34] <bdx> so, like
[20:34] <bdx> if you deploy graylog
[20:34] <bdx> and look at the running processes
[20:35] <bdx> the java/elasticsearch is running on graylog
[20:35] <bdx> and it only seems to know about the elasticsearch node that is itself
[20:35] <kwmonroe> that ain't because of graylog bdx.  did you deploy both gl and es to the same unit?
[20:35] <bdx> no
[20:35] <kwmonroe> don't you lie to me
[20:35] <bdx> it gets that automatically
[20:36] <bdx> that's what I was trying to show you ^^^^^
[20:36] <kwmonroe> i have a graylog deployed, and i don't see any elastic java procs on my gl unit
[20:36] <bdx> with the http://paste.ubuntu.com/26066668/
[20:36] <bdx> really
[20:36] <bdx> ok
[20:36] <bdx> so
[20:37] <kwmonroe> right bdx -- i'm assuming you're running that netstat on 172.31.103.25, and your ES node is 172.31.103.161
[20:38] <kwmonroe> also, lol @ -peanut.  i've never seen that
[20:39] <kwmonroe> anyway bdx, that connection to 9200 is to a separate machine.  graylog is connecting to it, but it's not running an embedded ES or anything like that
[20:39] <bdx> ok
[20:39] <bdx> I think I follow
[20:39] <bdx> so, how do you explain this
[20:41] <bdx> ooooo
[20:41] <bdx> I think I see
[20:42] <bdx> this https://imgur.com/a/3lD5G
[20:42] <bdx> is not indicative of an elasticsearch node, but a graylog node
[20:42] <bdx> because graylog is a clustering type service
[20:42] <bdx> ok
[20:43] <bdx> I was so backwards
[20:43] <bdx> thank you for enlightening me
[20:43] <kwmonroe> you got it
[20:43] <bdx> the elasticsearch config is in there somewhere
[20:44] <kwmonroe> that's right bdx -- in the graylog interface, System->Overview will show you the ES config
[20:45] <bdx> ahh I see it now
[20:45] <kwmonroe> which graylog knows about because it's an ES cluster member
[20:45] <bdx> I was looking in the wrong place initially
[20:45] <bdx> sudo cat /var/snap/graylog/common/server.conf | grep elasticsearch
[20:45] <bdx> I see
[20:45] <bdx> that totally makes sense
[20:58] <jose-phi_> is it possible, when i deploy containers with juju,
[20:58] <jose-phi_> to use a local image instead of copying the image for juju/xenial/amd64 from https://cloud-images.ubuntu.com/releases
[20:58] <jose-phi_> ?
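
jose-phi_'s question goes unanswered in the channel, but one commonly cited approach, offered strictly as a sketch: pre-seed the local lxd image store under the alias juju looks up (the alias format is inferred from the message above), so juju reuses the cached image instead of downloading it:

    # cache the xenial image locally under the alias juju expects
    lxc image copy ubuntu:16.04 local: --alias juju/xenial/amd64
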
[21:43] <hml> navinsridharan: ping
[21:46] <navinsridharan> hi
[21:47] <navinsridharan> Is this Heather?
[21:47] <hml> navinsridharan: yes - it’s heather
[21:47] <navinsridharan> Okay great
[21:47] <hml> navinsridharan: can I get a pastebin of the debug juju bootstrap output?
[21:47] <navinsridharan> Yeah, will send you in a sec
[21:50] <navinsridharan> https://pastebin.com/QMkrhahm
[21:51] <hml> navinsridharan: looking
[21:51] <navinsridharan> Thanks
[21:55] <hml> navinsridharan: what do the nova logs say about instance 7833ed83-345a-4796-89cb-086bf01bc78b?  is there more information about why the ‘No valid host was found’ error occurred?
[21:56] <hml> navinsridharan: to confirm, the uuid of the “private” network is 383fd64b-4c4c-497d-809d-3bcf8ed72e1c?
[21:57] <navinsridharan> Yes that's correct
[21:57] <navinsridharan> I checked by logging into the Openstack GUI
[21:58] <navinsridharan> In the case of instance 7833ed83-345a-4796-89cb-086bf01bc78b, I don't see any log written into the nova-compute.log file
[21:58] <navinsridharan> Is there any other log file that I should be checking?
[21:59] <hml> navinsridharan: check all the /var/log/nova/*.log files
[22:00] <hml> navinsridharan: “No valid host” should be in the logs also
[22:00] <navinsridharan> I only see two log files under /var/log/nova --> nova-compute.log and privsep-helper.log (empty)
[22:01] <navinsridharan> I don't see "No valid host" written into the log
[22:01] <hml> navinsridharan: so the question is where is it…  hmmm
[22:01] <navinsridharan> but if I boot an instance manually in the Openstack cloud using the GUI, I see the log written into nova-compute.log
[22:03] <hml> navinsridharan: are the credentials and openstack endpoint given to juju the same as what you’re using in the OpenStack GUI?
[22:04] <navinsridharan> Yes, they are..
[22:04] <navinsridharan> If juju were not able to contact the endpoint, then it wouldn't be able to resolve the private network's UUID
[22:04] <hml> navinsridharan: true… i’m just wondering if it’s the same openstack cloud…
[22:04] <navinsridharan> but we do see the UUID of the "private" network in the --debug log
[22:05] <hml> navinsridharan: the instance juju created should be in the log
[22:05] <navinsridharan> Where do I check for this?
[22:05] <hml> navinsridharan: same place as the other logs - where you found the other instance.
[22:06] <navinsridharan> Yeah, but it's just weird that it doesn't write this into that log
[22:06] <hml> navinsridharan: try the juju bootstrap with --keep-broken, this will cause the instance juju created not to be deleted
[22:06] <navinsridharan> I have been completely stuck on this issue for about 2 weeks now, not able to move forward
[22:06] <hml> navinsridharan: then we might be able to see more info from the cli or the gui
[22:06] <hml> navinsridharan: sorry - this is frustrating, I know.
[22:07] <hml> navinsridharan: I’ve been asking a few others for some hints; so far it looks like this should work… we just need to find the little thing that's different
[22:07] <navinsridharan> like instead of "--debug" use "--keep-broken"?
[22:07] <hml> navinsridharan: use both
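
Concretely, the retry hml suggests might look like this (the cloud and controller names are placeholders, and teeing the output makes the pastebin step easier):

    # --keep-broken leaves the failed instance up for post-mortem inspection
    juju bootstrap openstack mycontroller --debug --keep-broken 2>&1 | tee bootstrap.log
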
[22:07] <navinsridharan> That's so nice of you, thank you so much
[22:08] <navinsridharan> Let me try and get back in a sec
[22:11] <navinsridharan> https://www.irccloud.com/pastebin/sI8pg2oB/
[22:12] <navinsridharan> I have copied from the point it says "using network id....."
[22:13] <hml> navinsridharan: now look at the openstack juju instance in the GUI - can you see details of the failure?
[22:14] <navinsridharan> Quick question though ---> should I enter the credentials for the Openstack cloud in here (/home/ubuntu/.local/share/juju/controllers.yaml)?
[22:15] <hml> navinsridharan: i don’t recommend editing the files - run `juju autoload-credentials` after sourcing your novarc file
[22:15] <navinsridharan> I don't see any instance failure in the Openstack GUI
[22:16] <navinsridharan> ubuntu@ubuntu-ProLiant-DL380-G6:~$ sudo juju autoload-credentials
[22:16] <navinsridharan> Looking for cloud and credential information locally...
[22:16] <navinsridharan> No cloud credentials found.
[22:16] <hml> navinsridharan: do you have a novarc file you can source?
[22:17] <navinsridharan> I do have one sitting under /joid_config
[22:17] <navinsridharan> in the name "admin-openrc"
[22:18] <hml> navinsridharan: the autoload-credentials command will look for the environment variables used for OpenStack authentication and import them for juju
[22:18] <hml> navinsridharan: though it should have been done already to get as far as you have
[22:19] <navinsridharan> True, but it looks for a ".yaml" file, correct?
[22:19] <navinsridharan> I manually fed the credentials using "juju add-cloud"
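
The sequence hml recommends, using the openrc file mentioned above (path taken from the conversation):

    # export the OS_* authentication variables, then let juju import them
    source /joid_config/admin-openrc
    juju autoload-credentials
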
[22:20] <hml> navinsridharan: that’s all under the covers of juju, so to speak.  as a user, you can verify them with “juju credentials --format yaml --show-secrets”
[22:21] <navinsridharan> ubuntu@ubuntu-ProLiant-DL380-G6:~/joid_config$ juju credentials --format yaml --show-secrets
[22:21] <navinsridharan> credentials:
[22:21] <navinsridharan>   openstack:
[22:21] <navinsridharan>     openstack:
[22:21] <navinsridharan>       auth-type: userpass
[22:21] <navinsridharan>       password: openstack
[22:21] <navinsridharan>       project-domain-name: admin_domain
[22:21] <navinsridharan>       tenant-name: admin
[22:21] <navinsridharan>       user-domain-name: admin_domain
[22:21] <navinsridharan>       username: admin
[22:21] <navinsridharan>   opnfv-virtualpod1-maas:
[22:21] <navinsridharan>     opnfv-credentials:
[22:22] <hml> navinsridharan: those should be fine.
[22:23] <hml> navinsridharan: i know why we couldn’t see the instance in the gui - there’s a juju bug :-/ for openstack.
[22:24] <navinsridharan> Ohh I see, I thought this bug was fixed in juju 2.0?
[22:24] <hml> navinsridharan: this one is specific to keep-broken
[22:24] <hml> navinsridharan: not the rest of it
[22:24] <navinsridharan> Ohh I see, okay
[22:25] <navinsridharan> Counting on you..... :)
[22:26] <hml> navinsridharan: hold on a sec
[22:26] <navinsridharan> sure
[22:36] <hml> navinsridharan: my personal openstack is busted, but there should be a bunch of other nova logs - could they be on a different VM from nova-compute.log?  they are in my config
[22:43] <navinsridharan> sorry, missed your message
[22:44] <hml> navinsridharan: i filed a bug on the keep-broken problem: https://bugs.launchpad.net/juju/+bug/1735013
[22:44] <mup> Bug #1735013: openstack provider deletes instance when keep-broken used during bootstrap <openstack-provider> <juju:Triaged> <https://launchpad.net/bugs/1735013>
[22:44] <navinsridharan> There are only two VMs where the control and compute logs are hosted
[22:44] <navinsridharan> I checked both the locations
[22:44] <hml> navinsridharan: hrmm…
[22:47] <hml> navinsridharan: can you look at the security groups with the admin-openrc credentials from the CLI?  the juju-created ones should still be there.
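
Checking for the juju-created security groups from the CLI might look like this sketch (it assumes the openstack client is installed where the openrc is sourced):

    source /joid_config/admin-openrc
    openstack security group list | grep juju
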
[22:48] <navinsridharan> Yes, I do see them there
[22:49] <hml> navinsridharan: that’s good news… do they show up in the neutron logs?
[22:51] <navinsridharan> neutron-api/0*            active    idle   2/lxd/2  192.168.122.183  9696/tcp                                 Unit is ready
[22:51] <navinsridharan> neutron-gateway/0*        active    idle   0        192.168.122.174                                           Unit is ready
[22:52] <navinsridharan> I see two units in the name of neutron
[22:52] <navinsridharan> which one am I supposed to log in to?
[22:55] <hml> navinsridharan: try both?  i’m blanking on the specific one
[22:55] <hml> navinsridharan: did you deploy openstack with juju?
[22:56] <navinsridharan> Yes I did..
[22:57] <hml> navinsridharan: the nova logs with instance info would be on nova-cloud-controller/0
[22:58] <navinsridharan> I see a bunch of *.log files under /var/log/nova on nova-cloud-controller/0
[22:58] <navinsridharan> Is there any specific file you expect me to check?
[22:59] <navinsridharan> got it
[22:59] <hml> navinsridharan: i’d just grep all of them
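
A sketch of that grep, run from the juju client against the unit hml names:

    juju ssh nova-cloud-controller/0 "sudo grep -rn 'No valid host' /var/log/nova/"
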
[22:59] <navinsridharan> I see "No valid host found"
[22:59] <navinsridharan> by grepping
[22:59] <navinsridharan> Yeah, did the same
[22:59] <navinsridharan> I see them in the nova-conductor.log file
[23:00] <hml> navinsridharan: so that file should have the info we’re looking for, around where the "No valid host" is located
[23:03] <navinsridharan> https://www.irccloud.com/pastebin/W2vCpvmn/nova-conductor.log
[23:04] <hml> navinsridharan: now i have to laugh a little - reason=""  :-)
[23:04] <navinsridharan> is it empty??
[23:05] <hml> navinsridharan: there’s more info - i can hopefully track it down by the trace
[23:05] <hml> navinsridharan: let me talk to some folks with this info and get back to you by email, hopefully tomorrow, ok?
[23:06] <navinsridharan> Thanks, just can't wait for you to kill this :)
[23:06] <hml> navinsridharan: me too!
[23:06] <navinsridharan> Sure
[23:06] <navinsridharan> Is this info more than enough or would you be needing anything else, Heather?
[23:06] <hml> navinsridharan: ttyl
[23:07] <hml> navinsridharan: nothing specific is coming to mind right now
[23:07] <navinsridharan> Sure, thanks once again for guiding me through, appreciate it
[23:07] <navinsridharan> take care
[23:07] <navinsridharan> Hoping to hear from you on something positive by tomorrow :)
