[05:24] is there a way to switch back to using a local charm?
[05:24] I switched to use haproxy from the charmstore but can't seem to switch back
[05:25] $ juju upgrade-charm --switch local:xenial/haproxy haproxy
[05:25] ERROR unknown schema for charm URL "local:xenial/haproxy"
[05:25] $ juju upgrade-charm --switch xenial/haproxy haproxy
[05:25] ERROR already running latest charm "cs:haproxy-38"
[05:54] hmm, the code says local: is meant to be supported
[05:54] https://github.com/juju/juju/blob/staging/cmd/juju/application/upgradecharm.go#L278
[06:00] ok, worked it out, had to specify the full path, so /srv/mojo/...
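For reference, a minimal sketch of the switch back to a local charm as worked out above: --switch is pointed at the charm's directory on disk rather than at a charm URL. The full /srv/mojo path was elided in the log, so the directory below is a hypothetical example:

    # switch the deployed application back from the charm store charm to a local charm tree
    # (the path is a made-up example; use the real checkout location)
    juju upgrade-charm --switch /srv/mojo/charms/xenial/haproxy haproxy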
[07:01] good morning juju world!
[07:01] morning!
[07:14] night over here :)
=== markusfluer_ is now known as markusfluer
=== frankban|afk is now known as frankban
[10:58] mattyw, hi
[10:59] Ankammarao, morning
[11:00] mattyw, still we are not able to create terms
[11:00] Ankammarao, what's the error you get?
[11:00] error: cannot get discharge from "https://api.jujucharms.com/identity/v1/discharger": third party refused discharge: cannot discharge: user is not a member of required groups
[11:01] Ankammarao, what's the name of the term you're uploading?
[11:01] mattyw, we have tried with different users which are already members of that group
[11:01] ibm-platform-ac
[11:02] Ankammarao, and the group you're uploading to?
[11:02] ibmcharmers
[11:04] mattyw, /snap/bin/charm push-term /root/ibm-platform-ac.txt ibmcharmers/ibm-platform-ac is the command we are using to push terms
[11:04] Ankammarao, and you're a member of the ibmcharmers group?
[11:04] Ankammarao: can you see the groups returned with "charm whoami"?
[11:05] no, i am just seeing the user-name logged in
[11:05] sorry, i am getting this: "root@islrpbeixv685:~# charm whoami User: rajith-pv Group membership: ibmcharmers"
[11:06] mattyw: ^ this means that the group is ok ... please check access on the terms side
[11:07] mattyw, i am not a member of the ibmcharmers group, but my team member tried with his userid (already a member of that group)
[11:08] mattyw: ping?
[11:08] Ankammarao, so rajith should be able to push the terms then from his machine?
[11:08] Ankammarao, and to get access you will need to contact one of the admins for that team
[11:08] mattyw, no, he is unable to push terms
[11:09] Ankammarao, what error does he get?
[11:09] Ankammarao, the same one?
[11:09] mattyw, the same error
[11:10] ashipika, can you help Ankammarao work out what's going on? they're trying to push terms but keep getting "user is not a member of required group"
[11:10] mattyw, Ankammarao: just trying to verify that it works for me.. one minute, please
[11:10] Ankammarao, how long has rajith been a member of that group?
[11:11] ashipika, is it possible they'd need to log in again?
[11:11] mattyw: yes, perhaps
[11:12] mattyw, he has been a member for 6 months or more
[11:12] mattyw, root@islrpbeixv685:~# /snap/bin/charm version charm 2.2.0 charm-tools 2.1.8
[11:13] mattyw, is that version fine?
[11:13] Ankammarao, looks fine to me - there's some kind of issue with login, ashipika is the expert there so he'll be able to help out
[11:13] Ankammarao: interesting.. so which term are you trying to push? what is the term id?
[11:13] ashipika, ibmcharmers/ibm-platform-ac
[11:14] ashipika, and charm whoami shows ibmcharmers membership
[11:14] mattyw: that is unusual
[11:14] Ankammarao, when you ran charm whoami was that running /snap/bin/charm whoami?
[11:14] Ankammarao, would be interesting to see if they return the same information
[11:15] mattyw: for me "charm whoami" returns "not logged into https://api.jujucharms.com/charmstore" :)
[11:15] mattyw, showing a different user: root@islrpbeixv685:~# /snap/bin/charm whoami User: achittet
[11:16] Ankammarao: could you please try removing your .go-cookies file?
[11:16] ashipika, be careful...
[11:16] mattyw: ?
[11:16] ashipika, there's some confusion between the go-cookies being used by the charm command and the one being used by the snap charm command
[11:16] ashipika, and only the snap charm command contains the commands for pushing terms
[11:16] ashipika, so we need to remove go-cookies for that command
[11:17] ashipika, which might not be ~/.go-cookies
[11:17] mattyw: ah, interesting.. i wonder why charm whoami does not work for me
[11:18] Ankammarao: launchpad says you are not a member of any group..
[11:18] mattyw: ^
[11:19] ashipika, they're using the login for rajith-pv
[11:19] ashipika: yes, i am not a member of that group, but my team member rajith is a member
[11:20] Ankammarao: could you please go to https://jujucharms.com and log in as rajith-pv? and then try again?
[11:20] ashipika: ok
[11:20] Ankammarao: thank you
[11:26] mattyw, ashipika: it's working now
[11:26] Ankammarao: \o/
[11:26] Ankammarao: it's because of the way our identity manager works.. it cannot get user information until you log in with jujucharms.com at least once..
[11:27] Ankammarao: sorry for the inconvenience
[11:28] ashipika: there is a conflict between charm login and /snap/bin/charm login
[11:28] ashipika: both are different users
[11:29] mattyw, ashipika: root@islrpbeixv685:~# sudo /snap/bin/charm push-term /root/ibm-platform-ac.txt ibmcharmers/ibm-platform-ac ibmcharmers/ibm-platform-ac/1
[11:29] ashipika: getting output like "ibmcharmers/ibm-platform-ac/1"
[11:29] Ankammarao: did you do snap login first?
[11:30] yes
[11:30] Ankammarao: now you can use the ibmcharmers/ibm-platform-ac/1 term in your charms..
[11:30] mattyw: any idea why the two users should be different?
[11:31] ashipika: but we have mentioned the term name like "ibm-platform-ac/1" in metadata.yaml
[11:31] ashipika: no, both users should be the same; earlier i got the error because the two users were different
[11:32] folks, one simple question... update-status... what should it return?
[11:32] it's like a health check?
[11:32] so if i run curl to see that the website is working, and return an error code ... that should be fine?
[11:32] Ankammarao: you need to use ibmcharmers/ibm-platform-ac/1 in metadata.yaml.. because the term id consists of the namespace (ibmcharmers) and the term name (ibm-platform-ac).
[11:33] Ankammarao: if you want your charm to require agreement to a specific term, you need to include the term revision number.. otherwise your charm will always require agreement to the latest revision of the term document
[11:36] ashipika: ok, thank you
[11:37] Ankammarao: you're welcome.. if you have any further issues pushing or releasing terms, please ping me
[11:41] ashipika: sure, i'll ping you
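On the update-status question asked above: update-status is Juju's periodically fired hook, and using it as a lightweight health check along the lines of the curl idea is a common pattern. A minimal sketch, assuming a workload that serves HTTP on localhost; the URL and status messages are assumptions, while curl and the status-set hook tool are standard:

    #!/bin/bash
    # hooks/update-status - hypothetical periodic health check for an HTTP workload
    if curl -fsS --max-time 10 http://localhost/ > /dev/null; then
        status-set active "website responding"
    else
        status-set blocked "website not responding"
    fi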
[15:25] hey guys. What is the status of this PR? https://code.launchpad.net/~dbuliga/charms/trusty/nagios/nagios/+merge/288614 Nothing has changed on it since 2016-10-07. Is it possible to get it reviewed? Thx. Denis
[15:47] Hi, I have installed a VNC server and firefox on an ubuntu 16.0 s390x machine. When I tried to start firefox through the vnc client, it's giving me the error: sementation fault (core dumped).
[15:48] *Segmentation fault
=== freyes__ is now known as freyes
=== Guest36_ is now known as uglyboxer
[17:13] hey, what's up guys, I'm having issues connecting to my controller, see -> https://bugs.launchpad.net/juju/+bug/1644634
[17:13] Bug #1644634: cannot access controller
[17:15] this has basically rendered all of the models I have provisioned on that controller inaccessible
[17:16] any ideas on what to do here?
[17:19] bdx: so you're not able to restart the mongodb process?
[17:20] marcoceppi: I'm not able to get any correspondence from the controller
[17:20] marcoceppi: my initial inclination was that the controller is just mad iowaiting
[17:21] but my crude monitoring through the aws console suggests it's not under much load
[17:21] too many file descriptors open?
[17:21] bdx: you're not able to SSH or anything?
[17:22] omg
[17:22] my ssh just connected
[17:22] after like 5+ minutes
[17:22] I was right
[17:22] controller is slammed
[17:23] I couldn't connect all weekend
[17:23] it would just time out
[17:23] I'm actually surprised it is still running
[17:23] marcoceppi: https://s22.postimg.org/gfo71ug4h/Screen_Shot_2016_11_28_at_9_26_25_AM.png
[17:24] okay, that's a lot of mongod processes
[17:24] cloudwatch metrics are poop
[17:24] bdx: can you cleanly shut down those processes?
[17:24] I guess ...
[17:25] I want to grab logs, it seems like systemd is just going crazy spawning mongod procs
[17:25] rick_h: ^ ?
[17:25] like `sudo kill `
[17:25] `killall -15 mongod`
[17:25] might be quicker
[17:25] right lol
[17:27] bdx: service juju-db stop might be good too
[17:27] bdx: then getting logs and restarting
[17:27] restart the controller?
[17:28] I must have hit a memory increase, my ssh session froze
[17:28] eeehh, waiting for ssh again omp
[17:29] bdx: marcoceppi sorry was otp, looking
[17:29] marcoceppi: ok, back on the controller
[17:30] bdx: restarting the services, though at this point, restarting the controller VM might not be a bad idea, though grabbing logs will be super helpful
[17:30] https://s14.postimg.org/65u1rqcc1/Screen_Shot_2016_11_28_at_9_33_37_AM.png
[17:30] bdx: k, yea worst case reset the VM and it should come up if things are working.
[17:30] grabbing logs now
[17:30] bdx: ouch, so load of 30 but cpu/memory is ok... disk thrashing to no end?
[17:31] bdx: are you running LXD workloads on here?
[17:34] marcoceppi: no, but my controller was initially throwing a bunch of errors due to not having lxd it seemed, so I installed lxd simply to negate the noise
[17:35] rick_h, marcoceppi: my tar of the controller logs is ~100MB, how can I get this to you guys?
[17:36] bdx: is that also gzipped?
[17:36] ya
[17:36] dang. I don't know if LP will let you upload that much, but the bug you linked might be a good place
[17:37] bdx: if not, try slicing it into 10 chunks perhaps? or a dropbox link or whatever.
[17:37] bdx: get me a url and I'll get it and help get it to folks if needed
[17:40] rick_h, marcoceppi: https://bugs.launchpad.net/juju/+bug/1644634/comments/1
[17:40] Bug #1644634: cannot access controller
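A minimal sketch of the triage sequence suggested above for the overloaded controller: shut the runaway mongod down cleanly, stop juju-db so it is not respawned, then grab and, if necessary, split the logs for upload. The file names and chunk size are assumptions:

    sudo killall -15 mongod                       # ask the mongod processes to exit cleanly
    sudo service juju-db stop                     # stop Juju's database service
    sudo tar czf /tmp/controller-logs.tar.gz /var/log/juju
    split -b 10M /tmp/controller-logs.tar.gz controller-logs.part.   # slice the tarball for upload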
[17:41] bdx: <3 ty
[17:41] bdx: are things somewhat sane post-restart?
[17:41] bdx: or still nuts?
[17:41] rick_h, marcoceppi: no, the controller is iowaiting again, fully maxed
[17:41] bdx: is iotop installed on the machine?
[17:42] rick_h, marcoceppi: I have a redis cluster deployed supporting a staging and QA env, other than that I could ditch this controller; unfortunately I need to keep that QA env up at all costs because it's getting beaten on pretty heavily right now
[17:43] bdx: can you restart the instance?
[17:43] bdx: or did you do that? /me glanced through the backlog but might have missed it
[17:44] iotop -> https://s21.postimg.org/58l6v7787/Screen_Shot_2016_11_28_at_9_47_38_AM.png
[17:44] rick_h: no, i didn't restart .... do you think that will send my redis cluster into a state of disarray?
[17:45] bdx: shouldn't affect it at all.
[17:45] bdx: I mean the controller is not the same machine as the redis cluster, right?
[17:45] correct
[17:45] bdx: restarting the controller should just have agents on the redis machines time out for a bit while it comes back up
[17:46] bdx: no damage to the running redis processes at all
[17:46] rick_h: ok, restarting controller now
[17:47] bdx: poking at the logs but will take a sec
[17:47] lol 1.7G of logs
[17:47] rick_h: I've previously had a whole openstack go tits up after losing the controller ... only reason I ask
[17:47] rick_h: right ...
[17:48] bdx: really? I'd be curious about that story. We've had folks deploy OS with juju and then go in and shut down all jujud on each machine/etc
[17:48] bdx: openstack still ran and they made it a manually run openstack at that point
[17:49] rick_h: yeah, after I shut down all of the juju agents everywhere I was able to move forward and save it all ... but if I remember correctly my db ended up borked
[17:50] bdx: the controller db? or the OS mysql db?
[17:50] rick_h: openstack mysql db
[17:50] bdx: can you add info to the bug about the cloud/instances running the controller/etc?
[17:51] yea, omp
[17:54] ahhhh shoot, wrong issue
[17:55] rick_h: comment #3
[17:55] bdx: k, ty
[17:56] bdx: can you note the instance/disk and such of the controller nodes. So there's a storm in the logs, but based on that top output it seems like it's not ram/cpu but disk, and so wondering if we can draw some lessons on disk performance and the amount of logs/etc going on
[17:57] bdx: there's an openstack in here as well as the redis?
[17:57] how many models with what running?
[17:57] rick_h: no, the openstack was yesteryear
[17:58] rick_h: http://paste.ubuntu.com/23549554/
[17:58] bdx: hmm, seeing this in the log
[17:58] 2016-11-28 17:32:16 DEBUG juju.apiserver request_notifier.go:140 -> [55] unit-openstack-dashboard-0 795.567548ms {"request-id":44,"error":"watcher has been stopped","error-code":"stopped","response":"'body redacted'"} NotifyWatcher["10"].Next
[17:59] rick_h: yeah, I've been using the dashboard to help troubleshoot keystone v3 ops I'm trying to solidify
[17:59] bdx: oh ok, so not all of openstack but that is there, ok
[17:59] rick_h: I've deployed keystone + barbican as standalone secrets provisioning
[17:59] yea
[18:00] bdx: gotcha, just trying to understand the logs as I go through, sorry
[18:00] np
[18:02] bdx: is the restart back up?
[18:05] rick_h: yes
[18:05] bdx: sane or slammed?
[18:06] rick_h: looking much better now -> https://s22.postimg.org/gkfbj32ld/Screen_Shot_2016_11_28_at_10_09_21_AM.png
[18:06] bdx: ok, I've got to run to a meeting. I've got the logs and alexisb is getting someone to look at the bug/details when they come online in a bit.
[18:06] rick_h: thanks man
[18:06] bdx: thanks for the added details, we're currently working on a lot of this so this is good extra data.
[18:06] rick_h: appreciate it
[18:07] do I get a guinea pig award?
[18:09] bdx you get the juju ecosystem rockstar award :)
[18:09] thanks for all the info in the bug
[18:09] we will get updates in it tonight
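For the instance/disk details requested above, a minimal sketch of what to collect from a juju 2.x client and the controller machine itself; these are stock commands and nothing here is specific to this deployment:

    juju show-controller                    # cloud/region and agent version of the controller
    juju machines -m controller             # instance id and hardware of the controller machine(s)
    df -h && sudo du -sh /var/log/juju      # how much disk the controller logs are consuming
    sudo iotop -obn 3                       # batch-mode iotop to confirm the load is disk I/O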
=== sg is now known as Guest34504
[18:12] Is it possible to use Juju resources with a bundle? https://jujucharms.com/docs/stable/developer-resources
[18:12] If so, please let me know the way to do it
[18:14] alexisb: awesome, thx
[19:04] after adding and removing a machine multiple times from the manual provider, I am now not able to add the machine anymore, I'm getting this -> http://paste.ubuntu.com/23549809/
[19:04] even though the machine doesn't exist on my controller in any models
[19:05] I've tried ssh'ing in and `rm -rf /var/{log,lib}/juju`, and `sudo deluser --remove-all-files ubuntu`
[19:05] but I still get the same result
[19:05] could the machine be hanging around in the controller somewhere, even though it's not shown via `juju status`?
[19:06] rick_h, marcoceppi: ^
[19:07] bdx: looking, want to check what juju looks for to make that determination
=== frankban is now known as frankban|afk
[19:08] rick_h: can you use resources in a bundle?
[19:09] marcoceppi: not a local resource atm, it's on the 17.04 roadmap list
[19:09] rick_h: ack, ta
[19:09] rick_h: https://bugs.launchpad.net/juju/+bug/1645446
[19:09] Bug #1645446: juju incorrectly reports machine already provisioned
[19:11] I'm going to hit every corner case today, fyi
[19:15] bdx: wheeee, replied with a request for as much detail on the before as we can get, please
[19:15] bdx: only workaround I could think of atm is to manually remove the doc from the db, or change the ip address/hostname of the machine.
[19:16] bdx: but looks like some bug in the removal that you hit with the add/remove several times.
[19:17] rick_h: ok, I don't have dynamic provisioning spinning these up and down ... I could kill-controller and reprovision it all if that would be easier?
[19:17] bdx: yea, unfortunately that's the non-hackiest path forward.
[19:17] bdx: since it's in the mongodb that the doc is hanging around that is triggering that error
[19:22] rick_h: alright, updated, thanks
[19:22] ty bdx
[19:27] rick_h: ehh, even after killing my controller and re-bootstrapping, I'm getting the same thing
[19:28] bdx: ?! /me goes back to the code he thought he understood
[19:29] bdx: when you kill-controller in manual... I wonder if it clears the db? I mean that would be nuts... but.
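A minimal sketch of the reset path proposed above: kill the controller, re-bootstrap, and re-add the machine through the manual provider. As the log below shows, this alone did not clear the machine-side leftovers; the controller name, cloud, and host address here are hypothetical placeholders:

    juju kill-controller my-manual-controller        # tear down the controller and its models
    juju bootstrap <cloud> my-manual-controller      # re-bootstrap (cloud left as a placeholder)
    juju add-machine ssh:ubuntu@<machine-ip>         # re-add the machine via the manual provider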
[19:30] rick_h: yea, I'm wondering if I need to manually tear all that ishk outta there?
[19:30] bdx: sec, can you grab the logs please?
[19:31] bdx: looking for lines like: "Checking if %s is already provisioned"
[19:32] bdx: looks like there's another place that can hit this if the juju service exists on the system
[19:32] bdx: so that's in upstart or systemd depending on series
[19:33] bdx: so it's probably not the juju directory, but the services directory that's the issue.
[19:34] rick_h: ok, yeah, it's a trusty controller
[19:34] bdx: but the machine is xenial?
[19:34] rick_h: no, the machine is trusty too
[19:34] ooh, the services dir on the machine ... my b
[19:35] bdx: right, the systemd script for jujud on that machine is what's "already provisioned"
[19:35] yea, I was looking around in there and didn't see anything that said juju .... oh, where is that?
[19:35] * rick_h is looking
[19:35] /etc/systemd/system ?
[19:36] anything juju in there?
[19:36] rick_h: http://paste.ubuntu.com/23549930/
[19:36] bdx: yea, there we go: https://github.com/juju/juju/blob/6cf1bc9d917a4c56d0034fd6e0d6f394d6eddb6e/provider/manual/environ_test.go#L106
[19:36] bah
[19:37] bdx: any others from that code link hit?
[19:37] when did trusty get systemd?
[19:37] bdx: /etc/init/juju ?
[19:37] I checked there too .. omp
[19:37] jrwren: no, but the controller is trusty, but the machine is xenial (the add-machine one that is)
[19:38] oh, ok.
[19:38] bdx: ok, what's omp? /me is ignorant
[19:38] http://paste.ubuntu.com/23549940/
[19:39] nothing
[19:39] rick_h: no, it's trusty
[19:39] ok, so the controller was killed/restarted, the machine in question has no reference to the juju service...
[19:40] rick_h: here we go -> http://paste.ubuntu.com/23549954/
[19:40] rick_h: exactly
[19:40] rick_h: should I kill the controller, then ssh in and rm everything juju and mongo from the controller?
[19:41] bdx: not yet, /me is chasing the code from that log output
[19:41] ok, that is definitely from https://github.com/juju/juju/blob/6cf1bc9d917a4c56d0034fd6e0d6f394d6eddb6e/environs/manual/init.go#L34 then
[19:42] at least the log lines match up
[19:42] so time to figure out wtf we run on systemd to get the list of services
[19:44] bdx: can you run this on the machine? -- /bin/systemctl list-unit-files --no-legend --no-page -t service | grep -o -P '^\w[\S]*(?=\.service)'
[19:47] rick_h: bash: /bin/systemctl: No such file or directory
[19:47] bdx: this is on the xenial host?
[19:47] rick_h: there is no xenial host
[19:47] bdx: on that pastebin it said xenial?
[19:47] wha?
[19:48] http://paste.ubuntu.com/23549833/
[19:48] bdx: ok sorry, I thought you had a trusty controller with a xenial machine you were trying to add
[19:48] bdx: I see now that your other paste said it was --trusty
[19:48] bdx: sorry, my confusion. Ok, looking in the wrong place then
[19:50] bdx: so new command for trusty: sudo initctl list | awk '{print $1}' | sort | uniq
[19:50] rick_h: http://paste.ubuntu.com/23549991/
[19:50] it's there
[19:51] bdx: there you go, so Juju thinks the machine's provisioned because of the juju entries in there
[19:51] awesome, I'm going to rm the jujud-machine-17
[19:51] bdx: rgr
[19:52] bdx: the unit one as well if that's done/gone
[19:52] bdx: if any line has "juju" in it it's considered provisioned so need both gone: provisioned := strings.Contains(output, "juju")
[19:52] yeah, just the machine-17 didn't do the trick
[19:53] there we go, removing both did it
[19:53] bdx: <3 ok... updating the bug with these notes
[19:54] thanks rick
[19:54] bdx: np, see you in an hour or so for round 3? :P
[19:57] rick_h: I just got elasticsearch (patched with the addition of the python-yaml dep) to successfully deploy to that machine
[19:58] rick_h: that was my last standing issue in the bunch, for the time being :-)
[19:58] bdx: ok, I'm going to take a nap for now then :P
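Pulling together the manual-provider fix worked out above: the provisioning check greps the machine's service list for anything containing "juju", so the leftover agent jobs have to be removed. A minimal sketch for a trusty (upstart) machine; the machine number 17 comes from the log, and the unit-job glob is an assumption:

    sudo initctl list | awk '{print $1}' | sort | uniq | grep juju   # any line matching "juju" trips the check
    sudo rm /etc/init/jujud-machine-17.conf                          # leftover machine agent job
    sudo rm /etc/init/jujud-unit-*.conf                              # any leftover unit agent jobs
    # re-running juju add-machine should now get past the "already provisioned" error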
[20:35] mbruzek https://github.com/juju-solutions/jujubox/pull/20#pullrequestreview-10408603 - i had one minor comment. if you don't want to take any action on that i'm +1 to this as it is.
[20:40] lazyPower: I still think the juju-1 packaging is not final, and I don't want to create a sym-link at this time
[20:40] If you want to create a bug we can track it that way.
[20:40] ack, not that serious
[20:40] i'm +1 as it is, merging now
[20:45] mbruzek - both jujubox PRs landed. You made mention of a charmbox PR but i don't see one on the repo. Did it land already or is it MIA?
[20:46] mbruzek - i assume it was this one? https://github.com/juju-solutions/charmbox/commit/3df12bf82ce6bd16d519bb812c710831e799e6fb
[22:46] yes that is it
[22:46] awesome. landed that one over the weekend
[22:47] so i think we're good on the box images now, just need to kick the builders and get the flavors set up
[22:47] plus get this back in CI
[22:47] want to batcave it and get that done real quick? it'll only take a few minutes
[22:49] mbruzek ^
[22:50] yes I am there?
[22:52] lazyPower: i see a juju-1 and latest tag now in jujubox. is the 'devel' branch now deprecated?
[22:53] kwmonroe - i believe mbruzek sent a mail to the list, but yes, devel is now :latest
[22:53] and the devel tag is in the process of being deprecated
[22:53] juju-1 will be the best-effort tag to keep a juju-1 compliant charmbox around
[22:55] word lazyPower -- i see mbruzek's note now (spelling errors and all). thanks!
[22:55] kwmonroe np happy to help
[23:25] 'juju bootstrap' always seems to hang (on both xenial and yakkety hosts), the console claims it's doing "apt-get upgrade", but the process list shows a
[23:25] 100000 15822 15821 0 22:58 ? 00:00:00 sudo /bin/bash -c /bin/bash -c set -e tmpfile=$(mktemp) trap "rm -f $tmpfile" EXIT cat > $tmpfile /bin/bash $tmpfile
[23:25] which has a bash child and bash grandchild with no arguments
[23:45] tych0: using juju 2.0 and lxd, how can i specify that it should use an image i create? (i need to drop open-iscsi so juju bootstrap won't hang)
[23:45] hallyn: sec,
[23:46] hallyn: make an image called ubuntu-$series
[23:46] or, one that has a tag called that
[23:46] it'll use that one
[23:46] oh, ok. that'll suffice for now,
[23:46] but so there's no way to say "use image xenial-base" that you know of?
[23:47] tych0: (trying that - thanks)
[23:47] hallyn: no, i don't think so
[23:47] the ubuntu-$series alias is always used for exactly this reason
[23:47] i didn't think about making the name custom at the time :)
[23:48] kewl - easy enough, re-bootstrapping, let's see
[23:50] tych0: success, thx :)
[23:51] hallyn: cool, glad it worked!
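A minimal sketch of the image-alias workaround described at the end: point the ubuntu-<series> alias Juju looks for at a custom image already imported into the local LXD image store. The fingerprint, controller name, and the assumption that an old alias already exists are hypothetical; 'localhost' is juju 2.0's built-in LXD cloud:

    lxc image alias delete ubuntu-xenial                       # drop the existing alias, if any
    lxc image alias create ubuntu-xenial <custom-image-fingerprint>
    juju bootstrap localhost lxd-test                          # bootstrap now picks up the aliased image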