[02:40] hey there, does anyone have any advice for debugging actions? When my action status is 'failed', I can see a line in my debug-log that reads "Traceback", however there's no traceback.
=== frankban|afk is now known as frankban
[08:53] morning
=== gurmble is now known as grumble
[10:41] Hi @stokachu i had trouble with proxy and juju bootstrapper yesterday. I had trouble with even launching test lxd containers with ubuntu, but resolved that today. Any chance you can help me today?
[10:41] Given that lxd now works, i assume juju is the remaining half.
[12:48] having a very strange problem
[12:49] since the last update i lose connections in juju status from services and machines
[12:50] status is either unknown or down
[12:50] but when i log onto openstack
[12:50] everything seems to be working
[12:50] anyone had this too?
[12:50] root@openstacksa2:~/kube# juju version
[12:50] 2.2.1-xenial-amd64
[12:52] agent status is lost
[12:55] jam: ^^^
[12:55] ah and when i restart the whole machines they reconnect agents and everything is green for a day or so
[12:56] ybaumy: do you have controllers in HA setup?
[12:56] wpk: yes
[12:57] Machine  State    DNS            Inst id  Series  AZ       Message
[12:57] 0        down     10.14.162.72   mt4cyg   xenial  default  Deployed
[12:57] 1        down     10.14.162.78   g6kfkb   xenial  default  Deployed
[12:57] 2        started  10.14.151.121  w4qxnp   xenial  default  Deployed
[12:58] ic
[12:58] ;)
[12:59] somehow my controllers seem to be dead
[12:59] but machines are running
[12:59] and strangely they seem to be on different subnets
[12:59] hmm
[12:59] they have the same vlan
[12:59] i think at least
[12:59] ybaumy: bug 1703526 is tracking this issue
[12:59] Bug #1703526: agent lost, even though agents are still responding
[13:00] ybaumy: is it possible for you to add some debugging info to that bug?
[13:00] jam: of course .. what kind of information should i provide
[13:01] there also seems to be a problem with maas dhcp
[13:01] because of the different subnets
[13:01] ybaumy: so doing: juju model-config -m controller logging-config="=INFO;juju.state.presence=TRACE" and then saving a few minutes of "juju debug-log -m controller" output would probably be useful
[13:01] but those vlans are interconnected
[13:01] so connections are possible
[13:02] jam: will do
[13:02] ybaumy: from what we can tell, all the agents are connected, its just that our 'Ping to keep alive' logic is wrong
[13:02] so we start reporting 'agent is lost' even though it is happily responding to requests
[13:03] jam: but i will have to wait for now since i restarted all machines like 30 minutes ago. the error is not yet happening
[13:04] jam: or doesnt it matter
[13:04] ybaumy: well, maybe data-from-now and then data-from-when-it-happens will be useful for us to track what's going on
[13:04] it can be useful to see the delta
[13:05] jam: how much data is generated in a day when trace is on?
[13:05] ybaumy: well, every ping from every agent gets a line in the log file
[13:05] and we ping every 20s or so
[13:05] (and there are some other bits of info)
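
(For reference, jam's suggestion boils down to three commands; the output filename below is only illustrative, and the "=WARNING" default he mentions a bit further down restores normal logging afterwards.)

    juju model-config -m controller logging-config="=INFO;juju.state.presence=TRACE"
    juju debug-log -m controller > presence-trace.log    # capture roughly 10-30 minutes of output
    juju model-config -m controller logging-config="=WARNING"    # put logging back to the default
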
[13:06] ybaumy: it was working on 2.2.0 without problems?
[13:06] and there is a line every second from PingBatcher about how much stuff it had to write out
[13:06] wpk: yes
[13:06] so, its fairly noisy
[13:07] jam: well then i will attach a volume from my nas in order to store the data
[13:07] i dont have that much space on OS
[13:08] ybaumy: napkin math says about 500MB
[13:08] ybaumy: but I don't think we need the entire time
[13:08] ybaumy: if you could do say 30min now, and then 30min tomorrow might be sufficient
[13:08] ok
[13:08] or even 10min or so now, and then 10min when you see the issue present itself
[13:09] jam: last question how to turn logging to default after trace was enabled
[13:10] ybaumy: default logging is logging-config="=WARNING" I believe
[13:10] (though we'd *like* it to be INFO)
[13:10] jam: ok
[13:15] but that thing with the IP addresses is also very strange
[13:15] 10.14.151 vlan is 500 ... and 10.14.162 is vlan 503
[13:16] normally there shouldnt be a conflict or addresses from one net in the other one
[13:17] never seen this until now on the juju controllers
[13:18] hmm where does it get the vlan 503 from
[13:18] i dont get it
[13:20] ybaumy: generally we get the vlan ids from the MAAS config
[13:20] hmm
[13:20] jam: i know it must be maas who is responsible for this
[13:21] jam: but i dont see the error. its just the first 2 controller nodes who are in the wrong subnet
=== rick_h changed the topic of #juju to: STATUS: charmstore is currently experiencing errors, the team is correcting | #juju Juju as a Service Beta now available at https://jujucharms.com/jaas | https://review.jujucharms.com/ | https://jujucharms.com/docs/ | http://goo.gl/MsNu4I || https://www.youtube.com/c/jujucharms
[14:22] hello, seems that the charm store is down, says service unavailable
[14:22] while deploying
[14:22] same while searching for a charm on web
[14:23] ERROR cannot deploy bundle: cannot resolve URL "cs:~containers/easyrsa-12": cannot resolve charm URL "cs:~containers/xenial/easyrsa-12": cannot get "/~containers/xenial/easyrsa-12/meta/any?include=id&include=supported-series&include=published": cannot unmarshal error response "503 Service Unavailable\nNo server is available to handle this request.\n\n": invalid character '<' looking for beginning of value
[14:23] ^^ error
[14:23] ak_dev: there's a current outage the team is chasing.
[14:24] oh okay cool :-)
[14:24] ak_dev: will keep the topic up to date and update on twitter as we get things back and going. https://twitter.com/jujuui
[14:24] Okay! looking out for it, thanks
[14:25] np, apologies for the unexpected issue
=== rick_h changed the topic of #juju to: Juju as a Service Beta now available at https://jujucharms.com/jaas | https://review.jujucharms.com/ | https://jujucharms.com/docs/ | http://goo.gl/MsNu4I || https://www.youtube.com/c/jujucharms
[15:07] ak_dev: should be back now
[15:11] Hey thanks, will check soon :-)
[15:24] hmz
[15:24] openstack with juju is torture
[15:25] cnf: i feel you
[15:26] what a load of bs
[15:26] im already using redhat for production
[15:26] im testing suse currently
[15:26] we are in RFI rounds
[15:27] but canonical is looking unlikely atm
[15:27] whats RFI if i may ask?
[15:28] Request For Information
[15:28] ah ok i thought so.
[15:29] cnf: i only had problems with juju with scale out
[15:29] cnf: and other stuff
[15:29] redhat and mirantis had a strong story, canonical next week will really have to step up.
[15:30] cnf: maybe im too stupid but os-charmers should put out documentation IMO
[15:30] i have had nothing but trouble with juju / charms myself :(
[15:30] cnf: im a ubuntu guy normally but this just didnt work out
[15:30] "Placement API service is not responding" is what i'm stuck at, atm
[15:31] ybaumy: yeah, i'm thinking the same atm
[15:32] cnf: im using juju and openstack at home for personal training but thats all.
[15:35] uhu
[15:36] at work i also moved from maas to https://theforeman.org
[15:37] maas i do rather like
[15:39] well too many bugs came up and i was tired of getting them fixed. currently i have a bug open in relation to juju and im waiting now over a week for a response. i mean how can i work with something like that
[15:40] oh yeah, juju is just tiresome
[15:40] hey tvansteenburgh, any objection to a PR that turns down the noise on nvidia-cuda? the -x is making a big haystack in my logs: https://github.com/juju-solutions/layer-nvidia-cuda/blob/master/reactive/cuda.sh#L2
[15:40] i like it
[15:40] maas is much quicker in reacting and fixing
[15:40] but ...
[15:40] kwmonroe: no objection
[15:42] i found the integration of chef and puppet with foreman very useful
[15:42] hmz, what the hell is the problem with this BS >,<
[15:42] hi cnf ybaumy - are there specific issues that you're running into with openstack charms?
[15:43] tons, atm my compute nodes apparently can't talk to the placement API
[15:45] beisner_: the reason i switched from juju os-charms to redhat cloud was scale out/down and HA. i had problems adding nodes, removing nodes from the config. then HA setup from the charms should be default. i had problems with mysql HA and lots of other stuff.
[15:46] ybaumy: i must say i put in a lot of work to get it to run. but then there is no piece of good documentation .. a piece here on the net, then go there to find something else out...
[15:46] beisner_: and so on. i would really like to give it another chance but in the end its management who says ok this is useful or not
[15:46] all i get atm is "Placement API service is not responding", and I can't figure out why
[15:47] ybaumy: it's a mess
[15:47] interesting. appreciate the input.
[15:47] these things are in circulation in production in lots of places; perhaps there are configuration issues which might be clearer with more docs.
[15:47] beisner_: and in my presentation i had problems with scale out and they said ok how should this work in production if not in a presentation
[15:47] LP bug pointers with logs are most helpful in these scenarios
[15:47] in my experience, when something goes wrong with a juju deploy, you are on your own figuring out what went wrong
[15:48] beisner_: and that was it. management said the project is dead and i had to switch to redhat and im happy with it
[15:48] well, canonical RFI presentation next week, we'll see what they say
[15:48] sorry to hear that didn't work out for you. if there are specific technical issues that need attention, please do raise a bug and ping here or on #openstack-charms
[15:49] beisner_: i will. im still using it at home only though, but at least i havent given up on it
[15:49] it's not the most active channel during european time
[15:50] cnf: true, CEST its like dead
[15:50] if there is already a canonical person working with you, i'd be happy to help connect people. lmk in a priv msg if so.
[15:50] but also, LP bugs are how issues are raised/tracked/prioritized with the charms.
[15:51] beisner_: i find my issues take weeks if not longer to even get looked at
[15:51] i don't bother anymore
[15:51] the thing is .. if only one person has a bug or problem, attention is not good ..
[15:51] * D4RKS1D3 hi!
[15:52] if there is a specific bug or issue, please holler here with a link. thank you.
[15:53] right now, i just want a working openstack
[15:53] never gotten to a functional openstack with juju
[15:55] right, i'm done
[15:55] i'll look at it more tomorrow
[15:56] https://bugs.launchpad.net/maas/+bug/1666521 this is my maas bug .. but related to juju ... no response since 4th of july
[15:56] Bug #1666521: MAAS does not detect RAM in newly commissioned VMWARE VM's
[15:57] there is a workaround but its just plain annoying
[15:57] you are using MaaS to manage vmware vm's? o,O
[15:58] then bug 1703526
[15:58] Bug #1703526: agent lost, even though agents are still responding
[15:58] im still collecting logs here
[15:58] anyway, i'm going home
[15:58] enough frustration for one day
[15:58] cnf: bye, good luck with your setup
[15:59] beisner_: i will submit my openstack issues this week. i will collect my problems im having with the charm setup
[16:00] to be honest without jamespage's help i would have gotten nowhere
=== scuttle|afk is now known as scuttlemonkey
[16:27] Hi there!
[16:28] I got stuck trying to bootstrap juju with MaaS
[16:28] from the juju command line I get the error:
[16:29] ERROR unable to contact api server after 1 attempts: unable to connect to API: Forbidden
[16:29] did you juju add-cloud .. and add credentials correctly?
[16:30] julen: you have to add the api key
[16:31] ybaumy, I think I did that already...
[16:31] julen: the error message points that you didnt, but im not the expert here
[16:31] while doing "juju add-cloud"
[16:33] julen: you need to provide username and api key
[16:33] root@maas:~# cat .local/share/juju/credentials.yaml
[16:33] credentials:
[16:33] homemaas:
[16:33] baum:
[16:33] auth-type: oauth1
[16:33] maas-oauth:
[16:33] this is an example of the credentials file
[16:34] maas-oauth:
[16:34] i didnt paste that
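
(For context, a properly indented version of the credentials.yaml ybaumy is quoting above; the cloud name "homemaas" and credential name "baum" come from his paste, and the maas-oauth value is a placeholder for the MAAS API key, which is typically a consumer-key:token-key:token-secret string. Running juju add-credential homemaas fills in the same data interactively, as ybaumy notes below.)

    credentials:
      homemaas:
        baum:
          auth-type: oauth1
          maas-oauth: <consumer-key>:<token-key>:<token-secret>
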
[16:34] hmm... so I have to create the credentials.yaml file manually?
[16:34] julen: no, normally not, but you can check the file to see if its correct
[16:35] and, it should be generated during bootstrap, right?
[16:35] no, during add-cloud
[16:36] oh! .. true.. yes
[16:36] you can always run juju add-credential
[16:37] well... I got a little confused. Sure, I have the credentials file, what I was missing is the old "environment.yaml" file, but I think that it should not be there in the newer versions
[16:37] and the credentials file does look like your example
[16:37] julen: ok
[16:37] hmm
[16:38] did you enter the endpoint correctly
[16:38] endpoint: http://10.14.151.4/MAAS is what i have in ~/.local/share/juju/clouds.yaml
[16:39] and does the key string in the credentials file match the one on the "admin" site or from "maas apikey --username admin"
[16:40] ybaumy: I think so... http://192.168.122.139:5240/MAAS
[16:40] I also tried without the port, with the same result
[16:40] well that looks good
[16:41] the only clue I get from the juju instance is in /var/log/juju/logsink.log:
[16:41] machine-0 2017-07-11 16:09:56 DEBUG juju.apiserver.logsink logsink.go:205 logsink receive error: websocket: close 1006 (abnormal closure): unexpected EOF
[16:42] well here you have to ask the developers then
[16:42] if nobody answers here, open up a bug report
[16:42] sorry
[16:44] I'll try... but the problem is that I don't even know if it is a bug. Probably I'm doing something wrong, but it's been a few days already, and I can't manage to work around it
[16:45] julen: you can also write to the mailing list
[16:46] uff... too many mailing lists already... I'll try here first
[16:46] good luck
[16:47] beisner, beisner_, jamespage? any idea where I should look?
[16:47] ybaumy: thanks :)
[17:17] aaand home
=== frankban is now known as frankban|afk
[17:53] o/ juju world
[18:21] The juju-gui seems to get stuck while trying to "Deploy changes". The "Choose a cloud to deploy to" dialog has the circle spinning.
[18:22] It probably has something to do with the http-proxy. Isn't there a way to configure the juju-gui to work properly behind a proxy?
[18:23] I didn't find anything in the config section, and the server does already have the http/s/no_proxy variables set in /etc/environment
[18:24] hatch ^
[18:24] (I mean the server/instance running the juju-gui)
[18:24] julen what version of Juju are you using?
[18:24] thanks for the ping tvansteenburgh
[18:25] * tvansteenburgh waves
[18:25] hatch: juju 2.2.1
[18:26] julen ok and are you using the GUI charm, or the one that ships with Juju?
[18:26] ala `juju gui`
[18:26] the juju-gui on a MaaS instance
[18:28] julen in the top right of the GUI there should be a circle with a question mark in it, can you click there and then click "Keyboard Shortcuts"? it'll tell you the version of the GUI
[18:29] on the top right of my juju-gui there is only "search store" and "Logout"
[18:30] ok then the version of the GUI you're using is very old
[18:30] you'll want to run the one that ships with Juju and not a charm
[18:30] you can get the path to that by running `juju gui` in the cli
[18:31] GUI 2.7.4
[18:32] and does that version work? It will have displayed a URL and a u/p
[18:32] yes, that's the one I am logged into
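
(As an aside, a sketch of the two commands that come up here. `juju gui` prints the URL and login credentials of the GUI served by the controller itself, which is what hatch is steering julen towards. Juju also has http-proxy/https-proxy/no-proxy model settings that are often the place to put a proxy for the controller rather than /etc/environment; the proxy address below is a placeholder, and whether that is relevant to julen's setup is not established here.)

    juju gui    # prints the URL plus username/password of the controller's built-in GUI
    juju model-config http-proxy=http://proxy.example:3128 \
                      https-proxy=http://proxy.example:3128 \
                      no-proxy=localhost,127.0.0.1
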
[18:33] hmm
[18:33] but the version 2.7.4 is at least relatively recent.. isn't it?
[18:33] it's the current release, yes
[18:34] which does not have Logout in the top right :)
[18:34] hatch: julen if the GUI is on a MAAS and the controller needs a proxy to get out, it might not be configured in Juju yet, causing things like add-charm to fail to get the charms from the store?
[18:34] I believe it has something to do with the proxy again... It has been making me crazy
[18:34] julen: have you tried to see if you can juju deploy from the CLI?
[18:35] julen: and what are you looking to deploy?
[18:35] rick_h that may very well be - but the GUI that julen is using is quite old - so trying to figure out why it's not running 2.7.4 as the CLI states
[18:35] juju deploy did work for the juju-gui
[18:35] oh
[18:35] hatch: :)
[18:35] so then you're not connected to the one from the CLI
[18:35] I just wanted to test the openstack charm
[18:36] julen from the CLI when you type `juju gui` it'll print a URL and a username/password - visit the url listed there and log in using the credentials provided
[18:37] but the instance running the juju-gui does have the http/s/no-proxy variables properly set in /etc/environment...
[18:37] you can also run `juju remove-application juju-gui` since that version is not supported on the Juju 2 you have
[18:38] hatch: Alright! this looks better! :D
[18:38] now I seem to have the GUI you mentioned
[18:38] haha there we go!
[18:38] now can you please re-try what you were doing before to see if it's still not working
[18:38] for some reason, I just went to the plain IP, without the port and the rest
[18:39] no problem at all
[18:40] hatch: Aha!! not it totally works! :D
[18:40] assuming s/not/now
[18:40] \o/
[18:41] it's funny that the URL without the port also gives you a web interface, almost like the good one
[18:41] thank you very much hatch !!
[18:41] no problem at all, sorry about that - we'll update the charm to fail obviously on unsupported versions of Juju to avoid this problem for others :)
[18:42] julen also, when new GUI versions are released you can simply run `juju upgrade-gui` and it'll upgrade for you :)
[18:42] that's useful, thanks!
[18:43] glad it all worked out for you
[18:44] issues and feature requests can be filed here if you run into any more things - https://github.com/juju/juju-gui/issues
[18:45] sure, I was just a little bit unsure, because I kind of knew that I was doing something (relatively trivial) wrong
[18:47] :) no problem
[19:35] ybaumy: I have a potential patch https://github.com/juju/juju/pull/7626 assuming the way I triggered the bug is representative of what is going on for you
[19:36] namely, if the worker that writes pings to the database gets restarted, it loses sync with the Pingers that were feeding it data
[19:36] I caused it by manually injecting bad data, so I can't guarantee that it is the same root cause that you are seeing
[19:36] but it might be a fix
=== scuttlemonkey is now known as scuttle|afk
[20:32] jam: will look into it tomorrow and will give you feedback
[20:35] ybaumy: so 'juju debug-log -m controller --level ERROR' might show something about the pingbatcher worker dying
[20:36] if it does, then it fits the symptoms really well
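
(A compact form of that check; the log filename is only illustrative.)

    juju debug-log -m controller --level ERROR | tee controller-errors.log
    # errors about the pingbatcher worker dying would fit jam's theory above
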
[20:40] rick_h: you still around?
[20:43] tvansteenburgh: out and about. What's up?
[20:44] rick_h: wanted to ask some questions about `charm set extra-info`, but if it's a bad time i can wait :)
[20:53] tvansteenburgh: getting the boy from summer camp. Can you email questions and I can poke at it when I get back?
[20:53] rick_h: yeah np
[20:54] rick_h: if you don't hear from me it means i poked until i figured it out myself :D