=== frankban|afk is now known as frankban
[10:19] Hello there
[10:19] i have juju 2.1.2 running. And i needed to remove a relation, but it isn't working
[10:20] it told me the relation is removed
[10:20] also if i retry the remove-relation, it tells me there is no relation
[10:20] but the subordinate charm is still there
[10:20] is there some way of forcing this?
[10:42] Hi guys!
[10:44] How can i completely re-deploy a bundle?
[10:47] utking: juju destroy-model
[10:47] after that, juju add-model
[10:47] ah ok, thanks :)
[10:47] and then you can deploy again :)
[10:53] BlackDex: what's your definition of "the subordinate charm is still there"?
[10:53] well, that it still is listed under the keystone charm in this case ;)
[10:54] juju remove-relation keystone nrpe-xenial
[10:54] ERROR relation "nrpe-xenial:nrpe-external-master keystone:nrpe-external-master" not found (not found)
[10:54] juju remove-relation keystone nrpe-xenial
[10:54] ERROR relation "nrpe-xenial:nrpe-external-master keystone:nrpe-external-master" not found (not found)
[10:54] oops
[10:54] sorry
[10:54] https://paste.ubuntu.com/p/39jnBQ8tyZ/
[10:56] if i start a debug-hooks on one of those nrpe charms it sometimes starts a hook. And i tried running hooks/stop myself, but that doesn't do anything
[10:56] juju run etc. is also working on these units
[10:56] but i can't seem to remove them
[10:56] tried restarting that container also, no effect
[11:10] can you show us the full juju status BlackDex
[11:10] ?
=== cnfer is now known as cnf
=== stokachu_ is now known as stokachu
=== stub` is now known as stub
[12:42] kjackal: o/
[12:43] hey pekkari
[12:43] hey, what is the best branch for contributions in apache/bigtop on github?
[12:44] I see a bunch of BIGTOP branches, master, some hadoops, some branch-*
[12:47] kjackal: one moment
[12:47] let me see
[12:48] pekkari: kwmonroe is your man for that
[12:51] pekkari: the PRs should reach the apache bigtop repo
[12:52] pekkari: here is the repo https://github.com/apache/bigtop
[12:52] kjackal: http://paste.openstack.org/show/mA0DWpKLQl875T1WnUdG/
[12:53] magicaltrout: thanks!
[12:53] pekkari: the process for submitting a PR is described here: https://github.com/juju-solutions/bigdata-community/wiki/Contributing and https://github.com/juju-solutions/bigdata-community/wiki/Bigtop-Patch-Process
[12:55] BlackDex: I do not see the nrpe-xenial:nrpe-external-master keystone:nrpe-external-master relation in the relations list
[12:55] kjackal: no indeed
[12:55] kjackal: indeed, sorry for the noise, just quick-read and destroyed, that strategy doesn't work :)
[12:55] since i removed it ;)
[12:56] thanks again!
[12:56] But the subs are still there
[12:56] on a big data note, kjackal, just tidying up my relations stuff for saiku->apache drill->hadoop connections
[12:56] so it's zero command line
[12:56] outside of getting data into hadoop
[12:56] magicaltrout: :)
[12:59] kjackal: And, if i do an upgrade-charm, they also are triggered.
[12:59] but i can't remove them any more
[12:59] The only option i think i have now is adding new keystone units
[13:00] remove the old ones, if that will even work, since i think it will get stuck on removing the nrpe sub
[13:02] BlackDex: what is the error in the nrpe-xenial charm?
[13:02] can you show us the logs?
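For reference, the re-deploy flow suggested to utking above, as a minimal sketch; the model and bundle names here are placeholders, not taken from the conversation:

    juju destroy-model mymodel      # tears down the model and everything in it
    juju add-model mymodel          # creates a fresh, empty model
    juju deploy ./mybundle.yaml     # deploys the bundle again into the new model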
[13:02] let me see
[13:02] it had something to do with opening/closing of a port
[13:02] i think 5666
[13:03] but that it had the value ICMP
[13:03] so what i did, i added "return" at the function, to skip that part
[13:03] that didn't resolve it
[13:03] i think something went wrong with the upgrade
[13:05] BlackDex: I would juju ssh onto that node and do a tail -f /var/log/juju/unit-nrpe-xenial-25.log . Then I would do a juju resolved nrpe-xenial/25 . And see in the logs what we get this time
[13:05] i'll get the output for you
[13:05] one moment
[13:06] If it is a bug in the charm we'd better open a ticket
[13:07] http://paste.openstack.org/show/K73G7vbWDsriJ2ZUg2tC/
[13:07] it is that, or the upgrade went wrong
[13:07] i could try to overwrite the files /var/lib/juju/agent/x/x/x with a specific version
[13:07] and see if that will help
[13:15] BlackDex: I am not sure about this charm. You should open a bug with https://api.jujucharms.com/charmstore/v5/nrpe/archive/metadata.yaml . What you could do for now is to replace /var/lib/juju/agent/unit-nrpe-xenial-25/charm/hook/upgrade-charm with something that is a noop
[13:15] it fails with update-config also
[13:15] for example do a return on line 2 I think
[13:15] so that doesn't work
[13:16] how does it fail now? What is the error msg?
[13:16] same error message
[13:16] oke
[13:17] i have removed ICMP from the .nrpe-config.ports
[13:17] that seems to get rid of the error
[13:17] hehe
[13:17] :)
[13:19] but it still doesn't remove the unit
[13:20] is it in error state?
[13:21] no
[13:21] no error state
[13:21] but it should be removed
[13:21] stopped
[13:21] killed
[13:21] whatever
[13:21] not still being there
[13:28] oke
[13:28] thx for the help in the right direction!
[13:29] let's see if i can create a report of this. because this had something to do with an upgrade error, and not something else
[13:29] all the others were fine
[15:16] hi, how do i use juju to make my ephemeral storage using ceph instead of local disk
[15:51] ejat: i haven't used it myself yet. But this page should help you: https://jujucharms.com/docs/stable/charms-storage
[15:51] Not, you need the latest version of juju v2.3
[15:51] s/Not/Note/
[16:21] BlackDex: ok thanks .. i just tested each ceph-mon node manually ..
[16:22] the nova compute by default creates volumes on ceph, not local disk
=== frankban is now known as frankban|afk
[18:23] charmstore down?
[18:24] https://imgur.com/a/X6ayw
[18:24] https://jujucharms.com/q/nova-compute
[18:29] bdx, here too
[18:32] bdx: pmatulis some sort of outage tracking in the data center.
[18:33] just verifying that it's true/folks on it
=== rick_h changed the topic of #juju to: Juju as a Service Beta now available at https://jujucharms.com/jaas | https://review.jujucharms.com/ | https://jujucharms.com/docs/ | http://goo.gl/MsNu4I || https://www.youtube.com/c/jujucharms
[18:34] bah
=== rick_h changed the topic of #juju to: Charmstore outage currently in progress | Juju as a Service Beta now available at https://jujucharms.com | https://jujucharms.com/docs/ | http://goo.gl/MsNu4I | https://www.youtube.com/c/jujucharms
=== rick_h changed the topic of #juju to: Charmstore outage currently in progress | Get started with Juju https://jujucharms.com/docs/ | The Juju Show https://www.youtube.com/c/jujucharms
=== xavpaice_ is now known as xavpaice
[20:08] if jujucharms.com shafted?
[20:08] if/is
[20:09] apparently it's out on the town with rick_h
[20:09] magicaltrout: :( I'm taking my ball and going home.
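The debugging flow kjackal walked BlackDex through above, collected into one sketch; the unit name nrpe-xenial/25 comes from the conversation, and the exact agent/log paths may differ between Juju versions:

    juju ssh nrpe-xenial/25                           # land on the machine hosting the subordinate
    tail -f /var/log/juju/unit-nrpe-xenial-25.log     # watch the unit agent log for the failing hook
    # then, from another terminal, retry the failed hook and watch what the log says this time:
    juju resolved nrpe-xenial/25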
[20:10] <_thumper_> ha
=== _thumper_ is now known as thumper
[20:10] kwmonroe stole it
[20:12] magicaltrout: jujucharms.com is perfectly fine. it's only when you want to, ya know, do something besides look at the front page where you might have trouble.
[20:13] hehe
[20:13] magicaltrout: this is a perfect opportunity to polish up those drill bits.
[20:13] okay the juju api is down
[20:13] much like the snapcraft api about 6 hours ago
[20:13] not my day
[20:14] magicaltrout: rolling reboot for Spectre and Meltdown...
[20:14] i was actually wanting to record a quick video of my drill -> saiku no commandline data discovery thing
[20:14] :P
[20:14] So just not a good day
[20:14] i also want to fix up my hdfs drillbit relation so we can do bigtop -> drill -> saiku no commandline hacks data analysis
[20:24] do the hadoop charms support network spaces?
[20:27] kwmonroe magicaltrout: what do I do if I want data-node <-> data-node traffic on a network other than my maas mgmt net
[20:28] looking at the layer code I can't find any network bindings in any of the metadata.yaml
[20:29] having a hard time thinking networking was overlooked on those .... possibly I'm missing something ... or everyone is just using infiniband for their primary interface and I'm just behind the times ....
[20:55] bdx: sorry for the delay, and sorrier for this reply: the big data charms don't support network spaces.
[20:55] no worries, thx thx
=== chat is now known as Guest64380
[21:36] oooh she lives
[21:40] rick_h: by the way since this afternoon
[21:40] juju gui on jaas seems completely hosed
[21:41] or i'm doing something that prevents it working cause it's been fine
[21:41] but now it's not putting workloads onto machines
[21:51] kwmonroe rick_h or someone
[21:51] https://www.dropbox.com/s/katu466hlbcbnf2/jujugui.mp4?dl=0
[21:51] can you just validate my stupidity
[21:52] i dunno if I'm doing something wrong, but i'm pretty sure I'm just doing what I've done 100 times
[21:52] https://gist.github.com/buggtb/27de532516ce54764a8700671bba7e39 that video gave me this as a result of my deployment :)
[21:58] everyone's out at rick's party :'(
[22:01] ah this time it was even crazier
[22:02] oh no, this time it worked
[22:02] but I deployed 1 zk by itself then scaled that and added more charms
[22:02] magicaltrout: i'm still watching your movies. seems fine to me.
[22:03] kwmonroe it doesn't deploy any software
[22:03] which, in a software management solution, seems a slight glitch
[22:04] no software can't fail. closing as designed
[22:05] har hars aside magicaltrout... machines are started, but no apps deployed?
[22:05] yeah
[22:05] just sits there with 5 nodes
[22:05] repeated that this afternoon and now
[22:06] magicaltrout: which juju version? 2.3.3?
[22:07] yeah
[22:09] magicaltrout: i just made a new thing from jujucharms.com, and it seems to be good: https://paste.ubuntu.com/p/DBBw3tFmQC/
[22:09] magicaltrout: was this from a "juju gui" on a pre-existing env?
[22:10] it's just jaas
[22:10] on a new model
[22:10] hmph.. i'm using jaas for my controller too
[22:11] yeah but if i launch 1 zk it works
[22:11] but if i launch 3 scaled at deployment time it doesn't seem to
[22:11] just trying again to test my reasoning
[22:12] well that's fscked
[22:12] this time i asked for 3 units
[22:12] it launched 3 machines
[22:12] and it's scheduling 2 nodes
[22:13] living the dream
[22:14] https://imagebin.ca/v/3sd1YIOgJKcJ
[22:14] check that kwmonroe
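For comparison, the same scale-out done from the CLI rather than the GUI, as a rough sketch; the zookeeper charm name comes from the conversation and the unit counts are just illustrative:

    juju deploy zookeeper -n 3      # deploy with three units straight away
    # or deploy a single unit and scale it afterwards:
    juju deploy zookeeper
    juju add-unit zookeeper -n 2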
[22:14] ack, adding 3 zks.. gimme a minute
[22:15] oh
[22:15] the 3rd one has now shown up
[22:15] weird
[22:15] ugh, out of money. magicaltrout, please send me a bitcoin
[22:16] send me one now
[22:16] so i can help you
[22:16] hey
[22:16] NASA haven't paid me in 3 months
[22:16] i'm about to go out of business, don't ask me for cash :P
[22:16] lolcry
[22:17] okay, 3 zk nodes works
[22:17] 3 zk nodes with a relation
[22:17] fails
[22:17] it's just done it again
[22:17] that is 100% repeatable
[22:18] i'll file a bug
[22:18] magicaltrout: i may be seeing the same.. i just added 3 zks to my model, i see machines coming up -- https://paste.ubuntu.com/p/PKC45GN5Ns/
[22:19] but no zk
[22:19] let's see if that changes after the machines come up
[22:19] yeah
[22:19] ...
[22:19] it won't :P
[22:20] in any normal deployment i see services going into a waiting state before the machines go live
[22:20] I've got another 4 here now just doing nothing
[22:21] also kwmonroe if you look at the application status in Juju GUI
[22:21] it says "uncommitted", whatever that means
[22:23] also kwmonroe IRC needs a joy emoticon....
[22:23] https://gist.github.com/buggtb/f7299a2116e96e63dc1dac20bfe5b137
[22:23] juju gui refuses to delete the models
[22:23] and juju cli thinks they have 0 machines
[22:24] but juju cli does let me destroy the models
[22:26] ah! So when deploying, if I scale and select 1 unit per machine
[22:26] it deploys the services correctly
[22:26] but if I deploy and select manually place, and then in the swimlanes select auto placement
[22:26] it doesn't deploy the services
[22:30] yeah, confirmed here magicaltrout
[22:30] rick_h: haaaalp
[22:30] ha
[22:30] want me to file it on github?
[22:31] * rick_h walks back to the computer
[22:31] what's up?
=== rick_h changed the topic of #juju to: Get started with Juju https://jujucharms.com/docs/ | The Juju Show https://www.youtube.com/c/jujucharms
[22:31] rick_h: start a new model at jujucharms.com, add zookeeper (or anything), update the deployment to 3 units, commit.
[22:32] 3 machines will spin up, no apps get deployed
[22:32] right magicaltrout? is that about the simplest way to do it?
[22:33] i confirmed with both zk and mongo charms
[22:33] yeah, although when I did it I had to relate it to something
[22:33] kwmonroe: k, what do the logs on the machine that comes up say?
[22:33] but kev already had kibana running so who knows
[22:34] also rick_h https://gist.github.com/buggtb/f7299a2116e96e63dc1dac20bfe5b137
[22:35] the messed up models report 0 machines for the broken services
[22:35] magicaltrout: yea, you might have to log out and back in to see something
[22:35] magicaltrout: because of the outage/back up and such
[22:35] even though my model and mymodel2 had 4 or 5 running
[22:35] magicaltrout: I had errors until I re-authed
[22:35] rick_h: this happened at 3pm
[22:35] well before the downtime
[22:35] magicaltrout: oic
[22:36] and then post downtime
[22:36] k, testing out kwmonroe's issue
[22:36] aww shit kwmonroe even gets credited with it
[22:36] today is a bad day
[22:37] heh, lemme cycle my login and try again
[22:37] sorry but it's my birthday and I'm packing for a trip. I only came back because of "haaaaalp"
[22:37] so I was lazy and only read the backlog a little ways :P
[22:38] hehe, i was just gonna file it
[22:38] blame kev
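The CLI fallback magicaltrout mentions above for the models the GUI refuses to delete, as a minimal sketch; mymodel2 is a name from the conversation, and any model listed by juju models works the same way:

    juju models                    # list models on the controller, with machine/unit counts
    juju status -m mymodel2        # confirm what the broken model actually reports
    juju destroy-model mymodel2    # tear the broken model down from the CLI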
[22:38] i saw the bike getting packed, somewhere nice?
[22:38] magicaltrout: was hoping so, ozark mountains (arkansaw)
[22:38] magicaltrout: but it's been raining a solid week and raining tomorrow/sat
[22:38] magicaltrout: so not sure how much riding we'll get to do...maybe some mudding?
[22:39] plan was to get out of MI winter and it's warmer here than there right now lol
[22:39] yea, something odd here
[22:39] units still show as uncommitted
[22:39] but the machines came up
[22:39] ha! mongo just came up for me. maybe it's a zk thing.
[22:39] hah, and hatch bailed
[22:39] bastard
[22:40] and i have to change locations for .. be back asap.
[22:40] k, I'll file a bug and get folks to look in the AM...damn. We've got a new gui that's hit release and wonder if this is still in there...
[22:41] magicaltrout: what's your GH username?
[22:41] you know why? because you didn't hit autoplace?
[22:41] or did you do some placement stuff?
[22:42] buggtb
[22:42] well I hit autoplace on one of the tests I did
[22:42] and it made no difference rick_h
[22:42] so it works if I manually place
[22:43] works if I add and click "one unit per machine"
[22:43] and works if I hit manually place and then later click "auto place"
[22:43] magicaltrout: do you have solid failure repro steps?
[22:44] basically the bad thing is that if I scale 3 new units it says it'll add-unit, but since I never placed it that changelog is a lie
[22:44] in that case it's "broken" but it's because I said manually place and then didn't
[22:46] so i've literally just tried this
[22:46] i've stuck 1 apache-drill and 3 zookeepers on my canvas
[22:46] related them
[22:46] said manual place
[22:47] but then in the machines view i clicked the auto place button
[22:47] oh i see what it's doing
[22:47] sorta
[22:47] that's messed up
[22:48] screen grab coming
[22:49] https://imagebin.ca/v/3sdC6OOnVLBJ
[22:49] look at this rick_h
[22:49] i stuck the stuff on the canvas then clicked the auto place button
[22:49] and hit deploy
[22:49] it fired up 4 machines
[22:49] which are empty
[22:50] then did something pretty weird with 2 zookeeper nodes
[22:50] which don't exist
[22:50] and where's the other zk node and where's the apache drill?
[22:50] they literally don't exist
[22:51] magicaltrout: hmm, can you file a bug in https://github.com/juju/juju-gui/issues/new with the step by step and I'll get someone to look into it please?
[22:52] sure thing
[22:52] ty!
[22:55] filed
[22:55] have fun packing
[22:55] thanks for the help kwmonroe
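For what it's worth, the manual-placement path that does work, expressed from the CLI as a hedged sketch; apache-drill and zookeeper are the charm names from the conversation, and the exact store names and machine numbers may differ in a real model:

    juju add-machine -n 4                       # bring up four empty machines first
    juju deploy zookeeper -n 3 --to 0,1,2       # place the three zookeeper units explicitly
    juju deploy apache-drill --to 3             # place drill on the remaining machine
    juju add-relation apache-drill zookeeper    # relate them, as on the GUI canvas
    juju status                                 # machines and units should now both show up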