[02:40] <blr> hey there, does anyone have any advice for debugging actions? When my action status is 'failed', I can see a line in my debug-log that reads "Traceback", however there's no traceback.
[08:53] <cnf> morning
[10:41] <Neepu> Hi @stokachu i had trouble with the proxy and juju bootstrap yesterday. I had trouble even launching test lxd containers with ubuntu, but resolved that today. Any chance you can help me today?
[10:41] <Neepu> Given that lxd now works, i assume juju is the remaining half.
[12:48] <ybaumy> having a very strange problem
[12:49] <ybaumy> since the last update i lose connections in juju status from services and machines
[12:50] <ybaumy> status is either unknown or down
[12:50] <ybaumy> but when i log onto openstack
[12:50] <ybaumy> everything seems to be working
[12:50] <ybaumy> anyone had this too?
[12:50] <ybaumy> root@openstacksa2:~/kube# juju version
[12:50] <ybaumy> 2.2.1-xenial-amd64
[12:52] <ybaumy> agent status is lost
[12:55] <wpk> jam: ^^^
[12:55] <ybaumy> ah and when i restart the whole machines they reconnect agents and everything is green for a day or so
[12:56] <wpk> ybaumy: do you have controllers in HA setup?
[12:56] <ybaumy> wpk: yes
[12:57] <ybaumy> Machine  State    DNS            Inst id  Series  AZ       Message
[12:57] <ybaumy> 0        down     10.14.162.72   mt4cyg   xenial  default  Deployed
[12:57] <ybaumy> 1        down     10.14.162.78   g6kfkb   xenial  default  Deployed
[12:57] <ybaumy> 2        started  10.14.151.121  w4qxnp   xenial  default  Deployed
[12:58] <ybaumy> ic
[12:58] <ybaumy> ;)
[12:59] <ybaumy> somehow my controllers seem to be  dead
[12:59] <ybaumy> but machines are running
[12:59] <ybaumy> and strangely they seem to be on different subnets
[12:59] <ybaumy> hmm
[12:59] <ybaumy> they have the same vlan
[12:59] <ybaumy> i think at least
[12:59] <jam> ybaumy: bug 1703526 is tracking this issue
[12:59] <mup> Bug #1703526: agent lost, even though agents are still responding <juju:Triaged by jameinel> <juju 2.2:Triaged by jameinel> <https://launchpad.net/bugs/1703526>
[13:00] <jam> ybaumy: is it possible for you to add some debugging info to that bug?
[13:00] <ybaumy> jam: of course .. what kind of information should i provide
[13:01] <ybaumy> there also seems to be a problem with maas dhcp
[13:01] <ybaumy> because of the different subnets
[13:01] <jam> ybaumy: so doing: juju model-config -m controller logging-config="<root>=INFO;juju.state.presence=TRACE" and then saving a few minutes of "juju debug-log -m controller" output would probably be useful
[13:01] <ybaumy> but those vlans are interconnected
[13:01] <ybaumy> so connections are possible
[13:02] <ybaumy> jam: will do
[13:02] <jam> ybaumy: from what we can tell, all the agents are connected, its just that our 'Ping to keep alive' logic is wrong
[13:02] <jam> so we start reporting 'agent is lost' even though it is happily responding to requests
[13:03] <ybaumy> jam: but i will have to wait for now since i restarted all machines like 30 minutes ago. the error is not yet happening
[13:04] <ybaumy> jam: or doesnt it matter
[13:04] <jam> ybaumy: well, maybe data-from-now and then data-from-when-it-happens will be useful for us to track what's going on
[13:04] <jam> it can be useful to see the delta
[13:05] <ybaumy> jam: how much data is generated in a day when trace is on?
[13:05] <jam> ybaumy: well, every ping from every agent gets a line in the log file
[13:05] <jam> and we ping every 20s or so
[13:05] <jam> (and there are some other bits of info)
[13:06] <wpk> ybaumy: it was working on 2.2.0 without problems?
[13:06] <jam> and there is a line every second from PingBatcher about how much stuff it had to write out
[13:06] <ybaumy> wpk: yes
[13:06] <jam> so, its fairly noisy
[13:07] <ybaumy> jam: well then i will attach a volume from my nas in order to store the data
[13:07] <ybaumy> i dont have that much space on OS
[13:08] <jam> ybaumy: napkin math says about 500MB
[13:08] <jam> ybaumy: but I don't think we need the entire time
[13:08] <jam> ybaumy: if you could do say 30min now, and then 30min tomorrow might be sufficient
[13:08] <ybaumy> ok
[13:08] <jam> or even 10min or so now, and then 10min when you see the issue present itself
[13:09] <ybaumy> jam: last question how to turn logging back to default after trace was enabled
[13:10] <jam> ybaumy: default logging is logging-config="<root>=WARNING" I believe
[13:10] <jam> (though we'd *like* it to be INFO)
[13:10] <ybaumy> jam: ok
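A consolidated sketch of the logging steps jam gives above; the commands are the ones from the chat, while the redirect to a file is only an assumption about how to save the output:

    # turn on presence tracing on the controller model
    juju model-config -m controller logging-config="<root>=INFO;juju.state.presence=TRACE"
    # save a few minutes of controller logs (file name is only an example)
    juju debug-log -m controller > presence-trace.log
    # afterwards, drop back to the default level
    juju model-config -m controller logging-config="<root>=WARNING"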
[13:15] <ybaumy> but that thing with the IP addresses is also very very strange
[13:15] <ybaumy> 10.14.151 vlan is 500 ... and 10.14.162 is vlan 503
[13:16] <ybaumy> normally there shouldnt be a conflict or addresses from one net showing up in the other one
[13:17] <ybaumy> never seen this until now on the juju controllers
[13:18] <ybaumy> hmm where does it get the vlan 503 from
[13:18] <ybaumy> i dont get it
[13:20] <jam> ybaumy: generally we get the vlan ids from the MAAS config
[13:20] <ybaumy> hmm
[13:20] <ybaumy> jam: i know it must be maas who is responsible for this
[13:21] <ybaumy> jam: but i dont see the error. its just the first 2 controller nodes who are in the wrong subnet
[14:22] <ak_dev> hello, seems that the charm store is down, says service unavailable
[14:22] <ak_dev> while deploying
[14:22] <ak_dev> same while searching for a charm on web
[14:23] <ak_dev> ERROR cannot deploy bundle: cannot resolve URL "cs:~containers/easyrsa-12": cannot resolve charm URL "cs:~containers/xenial/easyrsa-12": cannot get "/~containers/xenial/easyrsa-12/meta/any?include=id&include=supported-series&include=published": cannot unmarshal error response "<html><body><h1>503 Service Unavailable</h1>\nNo server is available to handle this request.\n</body></html>\n": invalid character '<' looking for beginning of value
[14:23] <ak_dev> ^^ error
[14:23] <rick_h> ak_dev: there's a current outage the team is chasing.
[14:24] <ak_dev> oh okay cool :-)
[14:24] <rick_h> ak_dev: will keep the topic up to date and update on twitter as we get things back and going. https://twitter.com/jujuui
[14:24] <ak_dev> Okay! looking out for it, thanks
[14:25] <rick_h> np, apologies for the unexpected issue
[15:07] <rick_h> ak_dev: should be back now
[15:11] <ak_dev> Hey thanks, will check soon :-)
[15:24] <cnf> hmz
[15:24] <cnf> openstack with juju is torture
[15:25] <ybaumy> cnf: i feel you
[15:26] <cnf> what a load of bs
[15:26] <ybaumy> im already using redhat for production
[15:26] <ybaumy> im testing suse currently
[15:26] <cnf> we are in RFI rounds
[15:27] <cnf> but canonical is looking unlikely atm
[15:27] <ybaumy> whats RFI if i may ask?
[15:28] <cnf> Request For Information
[15:28] <ybaumy> ah ok i thought so.
[15:29] <ybaumy> cnf: i only had problems with juju with scale out
[15:29] <ybaumy> cnf: and other stuff
[15:29] <cnf> redhat and mirantis had a strong story, canonical next week will really have to step up.
[15:30] <ybaumy> cnf: maybe im too stupid but os-charmers should put out documentation IMO
[15:30] <cnf> i have had nothing but trouble with juju / charms myself :(
[15:30] <ybaumy> cnf: im a ubuntu guy normally but this just didnt work out
[15:30] <cnf> "Placement API service is not responding" is what i'm stuck at, atm
[15:31] <cnf> ybaumy: yeah, i'm thinking the same atm
[15:32] <ybaumy> cnf: im using juju and openstack at home for personal training but thats all.
[15:35] <cnf> uhu
[15:36] <ybaumy> at work i also moved from maas to https://theforeman.org
[15:37] <cnf> maas i do rather like
[15:39] <ybaumy> well, too many bugs came up and i was tired of getting them fixed. currently i have a bug open in relation to juju and im waiting now over a week for a response. i mean how can i work with something like that
[15:40] <cnf> oh yeah, juju is just tiresome
[15:40] <kwmonroe> hey tvansteenburgh, any objection to a PR that turns down the noise on nvidia-cuda?  the -x is making a big haystack in my logs:  https://github.com/juju-solutions/layer-nvidia-cuda/blob/master/reactive/cuda.sh#L2
[15:40] <ybaumy> i like it
[15:40] <cnf> maas is much quicker in reacting and fixing
[15:40] <ybaumy> but ...
[15:40] <tvansteenburgh> kwmonroe: no objection
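For context, a sketch of the kind of change kwmonroe's PR would make, assuming the -x in question is bash's xtrace option set on line 2 of cuda.sh (the actual file contents are not shown in the chat):

    # before: -x echoes every command into the unit log
    set -ex
    # after: keep fail-fast behaviour, drop the tracing
    set -e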
[15:42] <ybaumy> i found the integration of chef and puppet with foreman very useful
[15:42] <cnf> hmz, what the hell is the problem with this BS >,<
[15:42] <beisner_> hi cnf ybaumy - are there specific issues that you're running into with openstack charms?
[15:43] <cnf> tons, atm my compute nodes apparently can't talk to the placement API
[15:45] <ybaumy> beisner_: the reason i switched from juju os-charms to redhat cloud was scale out/down and HA. i had problems adding nodes and removing nodes from the config. then HA setup from the charms should be default. i had problems with mysql HA and a lot of other stuff.
[15:46] <ybaumy> beisner_: i must say i put in a lot of work to get it to run. but then there is no single piece of good documentation .. a piece here on the net, then go somewhere else to find something out...
[15:46] <ybaumy> beisner_: and so on. i would really like to give it another chance but in the end its management who says ok this is useful or not
[15:46] <cnf> all i get atm is "Placement API service is not responding", and I can't figure out why
[15:47] <cnf> ybaumy: it's a mess
[15:47] <beisner_> interesting.  appreciate the input.  these things are in circulation in production in lots of places, perhaps there are configuration issues which might be more clear with more docs.
[15:47] <ybaumy> beisner_: and in my presentation i had problems with scale out, and they said ok, how should this work in production if it doesnt even work in a presentation
[15:47] <beisner_> LP bug pointers with logs are most helpful in these scenarios
[15:47] <cnf> in my experience, when something goes wrong with a juju deploy, you are on your own figuring out what went wrong
[15:48] <ybaumy> beisner_: and that was it. management said the project is dead and i had to switch to redhat and im happy with it
[15:48] <cnf> well, canonical RFI presentation next week, we'll see what they say
[15:48] <beisner_> sorry to hear that didn't work out for you.  if there are specific technical issues that need attention, please do raise a bug and ping here or on #openstack-charms
[15:49] <ybaumy> beisner_: i will. im still using it at home only though, but at least i havent given up on it
[15:49] <cnf> it's not the most active channel during european time
[15:50] <ybaumy> cnf: true CEST its like dead
[15:50] <beisner_> if there is already a canonical person working with you, i'd be happy to help connect people.  lmk in a priv msg if so.
[15:50] <beisner_> but also, LP bugs are how issues are raised/tracked/prioritized with the charms.
[15:51] <cnf> beisner_: i find my issues take weeks if not longer to even get looked at
[15:51] <cnf> i don't bother anymore
[15:51] <ybaumy> the thing is .. if only one person has a bug or problem, it doesnt get much attention ..
[15:51]  * D4RKS1D3 hi!
[15:52] <beisner_> if there is a specific bug or issue, please holler here with a link.  thank you.
[15:53] <cnf> right now, i just want a working openstack
[15:53] <cnf> never gotten to a functional openstack with juju
[15:55] <cnf> right, i'm done
[15:55] <cnf> i'll look at it more tomorrow
[15:56] <ybaumy> https://bugs.launchpad.net/maas/+bug/1666521 this is my maas bug .. but related to juju ... no response since 4th of july
[15:56] <mup> Bug #1666521: MAAS does not detect RAM in newly commissioned VMWARE VM's <MAAS:Incomplete> <https://launchpad.net/bugs/1666521>
[15:57] <ybaumy> there is a workaround but its just plain annoying
[15:57] <cnf> you are using MaaS to manage vmware vm's? o,O
[15:58] <ybaumy> then  bug 1703526
[15:58] <mup> Bug #1703526: agent lost, even though agents are still responding <juju:Triaged by jameinel> <juju 2.2:Triaged by jameinel> <https://launchpad.net/bugs/1703526>
[15:58] <ybaumy> im still collecting logs here
[15:58] <cnf> anyway, i'm going home
[15:58] <cnf> enough frustration for one day
[15:58] <ybaumy> cnf: bye good luck with your setup
[15:59] <ybaumy> beisner_: i will submit my openstack issues this week. i will collect the problems im having with the charm setup
[16:00] <ybaumy> to be honest without jamespage's help i would have gotten nowhere
[16:27] <julen> Hi there!
[16:28] <julen> I got stuck trying to bootstrap juju with MaaS
[16:28] <julen> from the juju command line I get the error:
[16:29] <julen> ERROR unable to contact api server after 1 attempts: unable to connect to API: Forbidden
[16:29] <ybaumy> did you juju add-cloud .. and add credentials correctly?
[16:30] <ybaumy> julen: you have to add the api key
[16:31] <julen> ybaumy, I think I did that already...
[16:31] <ybaumy> julen: the error message suggests that you didnt, but im not the expert here
[16:31] <julen> while doing "juju add-cloud"
[16:33] <ybaumy> julen: you need to provide username and api key
[16:33] <ybaumy> root@maas:~# cat .local/share/juju/credentials.yaml
[16:33] <ybaumy> credentials:
[16:33] <ybaumy>   homemaas:
[16:33] <ybaumy>     baum:
[16:33] <ybaumy>       auth-type: oauth1
[16:33] <ybaumy>       maas-oauth:
[16:33] <ybaumy> this is an example of the credentials file
[16:34] <ybaumy> maas-oauth: <apikey>
[16:34] <ybaumy> i didnt paste that
[16:34] <julen> hmm... so I have to create the credentials.yaml file manually?
[16:34] <ybaumy> julen: no, normally not, but you can check whether the file is correct
[16:35] <julen> and, it should be generated during bootstrap, right?
[16:35] <ybaumy> no during add-cloud
[16:36] <julen> oh! .. true.. yes
[16:36] <ybaumy> you can always make juju add-credential
[16:37] <julen> well... I got a little confused. Sure, I have the credentials file; what I was missing is the old "environments.yaml" file, but I think that it should not be there in the newer versions
[16:37] <julen> and the credentials file does look like your example
[16:37] <ybaumy> julen: ok
[16:37] <ybaumy> hmm
[16:38] <ybaumy> did you enter the endpoint correctly
[16:38] <ybaumy> endpoint: http://10.14.151.4/MAAS in ~/.local/share/juju/clouds.yaml
[16:39] <julen> and the key string on the credentials file does match the one on the "admin"  site or the "maas apikey --username admin"
[16:40] <julen> ybaumy: I think so... http://192.168.122.139:5240/MAAS
[16:40] <julen> I also tried without the port, with the same result
[16:40] <ybaumy> well that looks good
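Putting ybaumy's pastes together, a minimal sketch of the two juju client files for a MAAS cloud; the cloud name, credential name and endpoint are taken from the chat, and the api key is a placeholder:

    # ~/.local/share/juju/clouds.yaml
    clouds:
      homemaas:
        type: maas
        auth-types: [oauth1]
        endpoint: http://10.14.151.4/MAAS

    # ~/.local/share/juju/credentials.yaml
    credentials:
      homemaas:
        baum:
          auth-type: oauth1
          maas-oauth: <api key from `maas apikey --username admin`>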
[16:41] <julen> the only clue I get from the juju instance is in /var/log/juju/logsink.log
[16:41] <julen> machine-0 2017-07-11 16:09:56 DEBUG juju.apiserver.logsink logsink.go:205 logsink receive error: websocket: close 1006 (abnormal closure): unexpected EOF
[16:42] <ybaumy> well here you have to ask the developers  then
[16:42] <ybaumy> if nobody answers here open up a bug report
[16:42] <ybaumy> sorry
[16:44] <julen> I'll try... but the problem is, that I don't even know if it is a bug. Probably I'm doing something wrong, but it's been already a few days, and I can't manage to work around it
[16:45] <ybaumy> julen: you can also write to the mailing list
[16:46] <julen> uff... too many mailing lists already... I'll try here first
[16:46] <ybaumy> good luck
[16:47] <julen> beisner, beisner_, jamespage? any idea about where would I look?
[16:47] <julen> ybaumy: thanks :)
[17:17] <cnf> aaand home
[17:53] <Budgie^Smore> o/ juju world
[18:21] <julen> The juju-gui seems to get stuck while trying to "Deploy changes". The "Choose a cloud to deploy to" dialog has the circle spinning.
[18:22] <julen> It probably has something to do with the http-proxy. Isn't there a way to configure the juju-gui to work properly behind a proxy?
[18:23] <julen> I didn't find anything in the config section, and the server already has the http/s/no_proxy variables set in /etc/environment
[18:24] <tvansteenburgh> hatch ^
[18:24] <julen> (I mean the server/instance running the juju-gui)
[18:24] <hatch> julen what version of Juju are you using?
[18:24] <hatch> thanks for the ping tvansteenburgh
[18:25]  * tvansteenburgh waves
[18:25] <julen> hatch: juju  2.2.1
[18:26] <hatch> julen ok and are you using the GUI charm, or the one that ships with Juju?
[18:26] <hatch> ala `juju gui`
[18:26] <julen> the juju-gui on a MaaS instance
[18:28] <hatch> julen in the top right of the GUI there should be a circle with a question mark in it, can you click there and then click "Keyboard Shortcuts" it'll tell you the version of the GUI
[18:29] <julen> on the top right of my juju-gui there is only "search store" and "Logout"
[18:30] <hatch> ok then the version of the GUI you're using is very old
[18:30] <hatch> you'll want to run the one that ships with Juju and not a charm
[18:30] <hatch> you can get the path to that by running `juju gui` in the cli
[18:31] <julen> GUI 2.7.4
[18:32] <hatch> and does that version work? It will have displayed a URL and a u/p
[18:32] <julen> yes, that's the one I am logged into
[18:33] <hatch> hmm
[18:33] <julen> but the version 2.7.4 is at least relatively recent.. isn't it?
[18:33] <hatch> it's the current release yes
[18:34] <hatch> which does not have Logout in the top right :)
[18:34] <rick_h> hatch: julen if the GUI is on a MAAS and the controller needs a proxy to get out, it might not be configured in Juju yet, causing things like add-charm to fail to get the charms from the store?
[18:34] <julen> I believe it has something to do with the proxy again... It has been driving me crazy
[18:34] <rick_h> julen: have you tried to see if you can juju deploy from the CLI?
[18:35] <rick_h> julen: and what are you looking to deploy?
[18:35] <hatch> rick_h that may very well be - but the GUI that julen is using is quite old - so trying to figure out why it's not running 2.7.4 as the CLI states
[18:35] <julen> juju deploy did work for the juju-gui
[18:35] <hatch> oh
[18:35] <rick_h> hatch: :)
[18:35] <hatch> so then you're not connected to the one from the CLI
[18:35] <julen> I just wanted to test the openstack charm
[18:36] <hatch> julen from the CLI when you type `juju gui` it'll print a URL and a username/password - visit the url listed there and log in using the credentials provided
[18:37] <julen> but the instance running the juju-gui does have the http/s/no-proxy variables properly set in /etc/environment...
[18:37] <hatch> you can also run `juju remove-application juju-gui` since that version is not supported on the Juju 2   you have
[18:38] <julen> hatch: Allright! this looks better! :D
[18:38] <julen> now I seem to have the GUI you mentioned
[18:38] <hatch> haha there we go!
[18:38] <hatch> now can you please re-try what you were doing before to see if it's still not working
[18:38] <julen> for some reason, I just went to the plain IP, without the port and the rest
[18:39] <hatch> no problem at all
[18:40] <julen> hatch: Aha!! not it totally works! :D
[18:40] <hatch> assuming s/not/now
[18:40] <hatch> \o/
[18:41] <julen> it's funny that the URL without the port also gives you a web interface, almost like the good one
[18:41] <julen> thank you very much hatch !!
[18:41] <hatch> no problem at all, sorry about that - we'll update the charm to fail obviously in unsupported versions of Juju to avoid this problem for others :)
[18:42] <hatch> julen also, when new GUI versions are released you can simply run `juju upgrade-gui` and it'll upgrade for you :)
[18:42] <julen> that's useful, thanks!
[18:43] <hatch> glad it all worked out for you
[18:44] <hatch> issues and feature requests can be filed here if you run into any more things - https://github.com/juju/juju-gui/issues
[18:45] <julen> sure, I was just a little bit unsure, because I kind of knew that I was doing something (relatively trivial) wrong
[18:47] <hatch> :) no problem
[19:35] <jam> ybaumy: I have a potential patch https://github.com/juju/juju/pull/7626 assuming the way I triggered the bug is representative of what is going on for you
[19:36] <jam> namely, if the worker that writes pings to the database gets restarted, it loses sync with the Pingers that were feeding it data
[19:36] <jam> I caused it by manually injecting bad data, so I can't guarantee that it is the same root cause that you are seeing
[19:36] <jam> but it might be a fix
[20:32] <ybaumy> jam: will look into it tomorrow and will give you feedback
[20:35] <jam> ybaumy: so 'juju debug-log -m controller --level ERROR' might show something about pingbatcher worker dying
[20:36] <jam> if it does, then it fits the symptoms really well
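A quick way to check for that, based on the command jam gives above; the --replay flag and the grep pattern for the worker name are assumptions, not from the chat:

    juju debug-log -m controller --level ERROR --replay | grep -iE 'presence|pingbatcher'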
[20:40] <tvansteenburgh> rick_h: you still around?
[20:43] <rick_h> tvansteenburgh: out and about. What's up?
[20:44] <tvansteenburgh> rick_h: wanted to ask some questions about `charm set extra-info`, but if it's a bad time i can wait :)
[20:53] <rick_h> tvansteenburgh: getting the boy from summer camp. Can you email questions and I can poke at it when I get back?
[20:53] <tvansteenburgh> rick_h: yeah np
[20:54] <tvansteenburgh> rick_h: if you don't hear from me it means i poked until i figured it out myself :D