=== frankban|afk is now known as frankban | ||
BlackDex | Hello there | 10:19 |
---|---|---|
BlackDex | i have juju 2.1.2 running. And i needed to remove a relation, but it isn't working | 10:19 |
BlackDex | it told me the relation is removed | 10:20 |
BlackDex | also if i retry the remove-relation, it tells me there is no relation | 10:20 |
BlackDex | but the subordinate charm is still there | 10:20 |
BlackDex | is there some way of forcing this? | 10:20 |
utking | Hi guys! | 10:42 |
utking | How can i completely re-deploy a bundle? | 10:44 |
BlackDex | utking: juju destroy-model <model-name> | 10:47 |
BlackDex | after that, juju add-model <model-name> | 10:47 |
utking | ah ok, thanks :) | 10:47 |
BlackDex | and then you can deploy again :) | 10:47 |
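[Editor's note: a minimal sketch of the redeploy sequence described above, assuming Juju 2.x; `my-model` and `bundle.yaml` are placeholder names.]

```
juju destroy-model my-model     # removes every application and machine in the model
juju add-model my-model         # recreate an empty model with the same name
juju deploy ./bundle.yaml       # redeploy the bundle (a charm store bundle name also works)
```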
magicaltrout | BlackDex: whats your definition of "the subordinate charm is still there"? | 10:53 |
BlackDex | well, that it still is listed under the keystone charm in this case ;) | 10:53 |
BlackDex | juju remove-relation keystone nrpe-xenial | 10:54 |
BlackDex | ERROR relation "nrpe-xenial:nrpe-external-master keystone:nrpe-external-master" not found (not found) | 10:54 |
BlackDex | juju remove-relation keystone nrpe-xenial | 10:54 |
BlackDex | ERROR relation "nrpe-xenial:nrpe-external-master keystone:nrpe-external-master" not found (not found) | 10:54 |
BlackDex | oops | 10:54 |
BlackDex | sorry | 10:54 |
BlackDex | https://paste.ubuntu.com/p/39jnBQ8tyZ/ | 10:54 |
BlackDex | if i start a debug-hooks on one of those nrpe charms it sometimes starts a hook. And i tried running hooks/stop myself, but that doesn't do anything | 10:56 |
BlackDex | juju run etc.. is also working on these units | 10:56 |
BlackDex | but i can't seem to remove them | 10:56 |
BlackDex | tried restarting that container also, no effect | 10:56 |
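[Editor's note: a hedged sketch of the usual last resorts in Juju 2.x when a subordinate will not go away. Juju 2.1 has no direct force-remove for subordinate units, so removing the relation is the supported path; the unit and machine ids below are placeholders.]

```
juju resolved nrpe-xenial/0           # clear a hook error so relation departure can finish cleaning up
juju remove-machine 3/lxd/2 --force   # drastic: tears down the container, taking the principal unit with it
```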
kjackal | can you show us the full juju status BlackDex | 11:10 |
kjackal | ? | 11:10 |
=== cnfer is now known as cnf | ||
=== stokachu_ is now known as stokachu | ||
=== stub` is now known as stub | ||
pekkari | kjackal: o/ | 12:42 |
kjackal | hey pekkari | 12:43 |
pekkari | hey, what is the best branch for contributions in apache/bigtop on github? | 12:43 |
pekkari | I see a bunch of BIGTOP branches, master, some hadoops, some branch-* | 12:44 |
BlackDex | kjackal: one moment | 12:47 |
BlackDex | let me see | 12:47 |
magicaltrout | pekkari: kwmonroe is your man for that | 12:48 |
kjackal | pekkari: the PRs should reach the apache bigtop repo | 12:51 |
kjackal | pekkari: here is the repo https://github.com/apache/bigtop | 12:52 |
BlackDex | kjackal: http://paste.openstack.org/show/mA0DWpKLQl875T1WnUdG/ | 12:52 |
pekkari | magicaltrout: thanks! | 12:53 |
kjackal | pekkari: the process for submitting a PR is described here: https://github.com/juju-solutions/bigdata-community/wiki/Contributing and https://github.com/juju-solutions/bigdata-community/wiki/Bigtop-Patch-Process | 12:53 |
kjackal | BlackDex: I do not see the nrpe-xenial:nrpe-external-master keystone:nrpe-external-master relation in the relations list | 12:55 |
BlackDex | kjackal: no indeed | 12:55 |
pekkari | kjackal: indeed, sorry for the noise, just quick-read and destroyed, that strategy doesn't work :) | 12:55 |
BlackDex | since i removed it ;) | 12:55 |
pekkari | thanks again! | 12:56 |
BlackDex | But the subs are still there | 12:56 |
magicaltrout | on a big data note, kjackal, just tidying up my relations stuff for saiku->apache drill->hadoop connections | 12:56 |
magicaltrout | so its zero command line | 12:56 |
magicaltrout | outside of getting data into hadoop | 12:56 |
kjackal | magicaltrout: :) | 12:56 |
BlackDex | kjackal: And, if i do an upgrade-charm, they are also triggered. | 12:59 |
BlackDex | but i can't remove them any more | 12:59 |
BlackDex | The only option i think i have now is adding new keystone units | 12:59 |
BlackDex | remove the old ones, if that will even work, since i think it will get stuck on removing the nrpe sub | 13:00 |
kjackal | BlackDex: what is the error in the nrpe-xenial charm? | 13:02 |
kjackal | can you show us the logs? | 13:02 |
BlackDex | let me see | 13:02 |
BlackDex | it had something to do with opening/closing of a port | 13:02 |
BlackDex | i think 5666 | 13:02 |
BlackDex | but that it had the value ICMP | 13:03 |
BlackDex | so what i did, i added a "return" in the function, to skip that part | 13:03 |
BlackDex | that didn't resolve it | 13:03 |
BlackDex | i think something went wrong with the upgrade | 13:03 |
kjackal | BlackDex: I would juju ssh onto that node and do a tail -f /var/log/juju/unit-nrpe-xenial-25.log . Then I would do a juju resolved nrpe-xenial/25 . And see in the logs what we get this time | 13:05 |
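[Editor's note: the diagnostic loop kjackal describes, spelled out; unit 25 is the one from this conversation, substitute your own.]

```
# terminal 1: follow the unit agent log on the machine hosting the unit
juju ssh nrpe-xenial/25 'tail -f /var/log/juju/unit-nrpe-xenial-25.log'
# terminal 2: retry the failed hook and watch what the log says this time
juju resolved nrpe-xenial/25
```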
BlackDex | ill get the output for you | 13:05 |
BlackDex | one moment | 13:05 |
kjackal | If it is a bug in the charm we had better open a ticket | 13:06 |
BlackDex | http://paste.openstack.org/show/K73G7vbWDsriJ2ZUg2tC/ | 13:07 |
BlackDex | it is that, or the upgrade went wrong | 13:07 |
BlackDex | i could try to overwrite the files /var/lib/juju/agent/x/x/x with a specific version | 13:07 |
BlackDex | and see if that will help | 13:07 |
kjackal | BlackDex: I am not sure about this charm. You should open a bug with https://api.jujucharms.com/charmstore/v5/nrpe/archive/metadata.yaml . What you could do for now is to replace /var/lib/juju/agents/unit-nrpe-xenial-25/charm/hooks/upgrade-charm with something that is a noop | 13:15 |
BlackDex | it fails with update-config also | 13:15 |
kjackal | for example do a return on line 2 I think | 13:15 |
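[Editor's note: a hedged sketch of the workaround kjackal suggests, replacing the failing hook with a no-op on the unit's machine. The path assumes the usual Juju 2.x agent layout and may differ; back the original hook up first.]

```
juju ssh nrpe-xenial/25
sudo cp /var/lib/juju/agents/unit-nrpe-xenial-25/charm/hooks/upgrade-charm{,.bak}
sudo tee /var/lib/juju/agents/unit-nrpe-xenial-25/charm/hooks/upgrade-charm <<'EOF'
#!/bin/bash
exit 0
EOF
sudo chmod +x /var/lib/juju/agents/unit-nrpe-xenial-25/charm/hooks/upgrade-charm
```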
BlackDex | so that doesn't work | 13:15 |
kjackal | how does it fail now? What is the error msg? | 13:16 |
BlackDex | same error message | 13:16 |
BlackDex | okay | 13:16 |
BlackDex | i have removed ICMP from the .nrpe-config.ports | 13:17 |
BlackDex | that seems to get rid of the error | 13:17 |
BlackDex | hehe | 13:17 |
kjackal | :) | 13:17 |
BlackDex | but it still doesn't remove the unit | 13:19 |
kjackal | is it in error state? | 13:20 |
BlackDex | no | 13:21 |
BlackDex | no error state | 13:21 |
BlackDex | but it should be removed | 13:21 |
BlackDex | stopped | 13:21 |
BlackDex | killed | 13:21 |
BlackDex | whatever | 13:21 |
BlackDex | not still being there | 13:21 |
BlackDex | okay | 13:28 |
BlackDex | thx for the help in the right direction! | 13:28 |
BlackDex | lets see if i can create a report of this, because this had something to do with an upgrade error, and not something else | 13:29 |
BlackDex | all the others were fine | 13:29 |
ejat | hi, how do i use juju to make my ephemeral storage use ceph instead of local disk | 15:16 |
BlackDex | ejat: i haven't used it myself yet. But this page should help you: https://jujucharms.com/docs/stable/charms-storage | 15:51 |
BlackDex | Note, you need the latest version of juju, v2.3 | 15:51 |
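[Editor's note: a hedged illustration of the storage syntax from the linked docs page, not specific to ceph-backed nova ephemeral disks; the pool, charm, and storage label names are examples.]

```
juju create-storage-pool my-pool cinder              # storage provider depends on the cloud
juju deploy postgresql --storage pgdata=my-pool,32G  # request 32G from that pool for the charm's storage label
```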
ejat | BlackDex: ok thanks .. i just tested each ceph-mon node manually .. | 16:21 |
ejat | the nova compute by default creates volumes on ceph, not local disk | 16:22 |
=== frankban is now known as frankban|afk | ||
bdx | charmstore down? | 18:23 |
bdx | https://imgur.com/a/X6ayw | 18:24 |
bdx | https://jujucharms.com/q/nova-compute | 18:24 |
pmatulis | bdx, here too | 18:29 |
rick_h | bdx: pmatulis some sort of outage tracking in the data center. | 18:32 |
rick_h | just verifying that it's true/folks on it | 18:33 |
=== rick_h changed the topic of #juju to: Juju as a Service Beta now available at https://jujucharms.com/jaas | https://review.jujucharms.com/ | https://jujucharms.com/docs/ | http://goo.gl/MsNu4I || https://www.youtube.com/c/jujucharms | ||
rick_h | bah | 18:34 |
=== rick_h changed the topic of #juju to: Charmstore outage currently in progress | Juju as a Service Beta now available at https://jujucharms.com | https://jujucharms.com/docs/ | http://goo.gl/MsNu4I | https://www.youtube.com/c/jujucharms | ||
=== rick_h changed the topic of #juju to: Charmstore outage currently in progress | Get started with Juju https://jujucharms.com/docs/ | The Juju Show https://www.youtube.com/c/jujucharms | ||
=== xavpaice_ is now known as xavpaice | ||
magicaltrout | is jujucharms.com shafted? | 20:08 |
magicaltrout | apparently it's out on the town with rick_h | 20:09 |
rick_h | magicaltrout: :( I'm taking my ball and going home. | 20:09 |
_thumper_ | ha | 20:10 |
=== _thumper_ is now known as thumper | ||
magicaltrout | kwmonroe stole it | 20:10 |
kwmonroe | magicaltrout: jujucharms.com is perfectly fine. it's only when you want to, ya know, do something besides look at the front page where you might have trouble. | 20:12 |
magicaltrout | hehe | 20:13 |
kwmonroe | magicaltrout: this is a perfect opportunity to polish up those drill bits. | 20:13 |
magicaltrout | okay the juju api is down | 20:13 |
magicaltrout | much like the snapcraft api about 6 hours ago | 20:13 |
magicaltrout | not my day | 20:13 |
rick_h | magicaltrout: rolling reboot for Spectre and meltdown... | 20:14 |
magicaltrout | i was actually wanting to record a quick video of my drill -> saiku no commandline data discovery thing | 20:14 |
magicaltrout | :P | 20:14 |
rick_h | So just not a good day | 20:14 |
magicaltrout | i also want to fix up my hdfs drillbit relation so we can do bigtop -> drill -> saiku no commandline hacks data analysis | 20:14 |
bdx | do the hadoop charms support network spaces? | 20:24 |
bdx | kwmonroe magicaltrout: what do I do if I want data-node <-> data-node traffic on a network other than my maas mgmt net | 20:27 |
bdx | looking at the layer code I can't find any network bindings in any of the metadata.yaml | 20:28 |
bdx | having a hard time thinking networking was overlooked on those .... possibly I'm missing something ... or everyone is just using infiniband for their primary interface and I'm just behind the times .... | 20:29 |
kwmonroe | bdx: sorry for the delay, and sorrier for this reply: the big data charms don't support network spaces. | 20:55 |
bdx | no worries, thx thx | 20:55 |
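[Editor's note: for contrast, a hedged sketch of what deploy-time space binding looks like for charms that do declare endpoint bindings; the charm, endpoint, and space names are illustrative.]

```
juju deploy mycharm --bind "cluster=data-space"   # bind one endpoint to a space
juju deploy mycharm --bind "data-space"           # or bind every endpoint to one space
```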
=== chat is now known as Guest64380 | ||
magicaltrout | oooh she lives | 21:36 |
magicaltrout | rick_h: by the way since this afternoon | 21:40 |
magicaltrout | juju gui on jaas seems completely hosed | 21:40 |
magicaltrout | or i'm doing something that prevents it working, because it's been fine | 21:41 |
magicaltrout | but now its not putting workloads onto machines | 21:41 |
magicaltrout | kwmonroe rick_h or someone | 21:51 |
magicaltrout | https://www.dropbox.com/s/katu466hlbcbnf2/jujugui.mp4?dl=0 | 21:51 |
magicaltrout | can you just validate my stupidity | 21:51 |
magicaltrout | i dunno if I'm doing something wrong, but i'm pretty sure I'm just doing what I've done 100 times | 21:52 |
magicaltrout | https://gist.github.com/buggtb/27de532516ce54764a8700671bba7e39 that video gave me this as a result of my deployment :) | 21:52 |
magicaltrout | everyone's out at ricks party :'( | 21:58 |
magicaltrout | ah this time it was even crazier | 22:01 |
magicaltrout | oh no this time worked | 22:02 |
magicaltrout | but I deployed 1 zk by itself then scaled that and added more charms | 22:02 |
kwmonroe | magicaltrout: i'm still watching your movies. seems fine to me. | 22:02 |
magicaltrout | kwmonroe it doesn't deploy any software | 22:03 |
magicaltrout | which, in a software management solution, seems like a slight glitch | 22:03 |
kwmonroe | no software can't fail. closing as designed | 22:04 |
kwmonroe | har hars aside magicaltrout... machines are started, but no apps deployed? | 22:05 |
magicaltrout | yeah | 22:05 |
magicaltrout | just sits there with 5 nodes | 22:05 |
magicaltrout | repeated that this afternoon and now | 22:05 |
kwmonroe | magicaltrout: which juju version? 2.3.3? | 22:06 |
magicaltrout | yeah | 22:07 |
kwmonroe | magicaltrout: i just made a new thing from jujucharms.com, and it seems to be good: https://paste.ubuntu.com/p/DBBw3tFmQC/ | 22:09 |
kwmonroe | magicaltrout: was this from a "juju gui" on a pre-existing env? | 22:09 |
magicaltrout | its just jaas | 22:10 |
magicaltrout | on a new model | 22:10 |
kwmonroe | hmph.. i'm using jaas for my controller too | 22:10 |
magicaltrout | yeah but if i launch 1 zk it works | 22:11 |
magicaltrout | but if i launch 3 scaled at deployment time it doesn't seem to | 22:11 |
magicaltrout | just trying again to test my reasoning | 22:11 |
magicaltrout | well that's fscked | 22:12 |
magicaltrout | this time i asked for 3 units | 22:12 |
magicaltrout | it launched 3 machines | 22:12 |
magicaltrout | and it's scheduling 2 nodes | 22:12 |
magicaltrout | living the dream | 22:13 |
magicaltrout | https://imagebin.ca/v/3sd1YIOgJKcJ | 22:14 |
magicaltrout | check that kwmonroe | 22:14 |
kwmonroe | ack, adding 3 zks.. gimme a minute | 22:14 |
magicaltrout | oh | 22:15 |
magicaltrout | the 3rd one has now shown up | 22:15 |
magicaltrout | weird | 22:15 |
kwmonroe | ugh, out of money. magicaltrout, please send me a bitcoin | 22:15 |
kwmonroe | send me one now | 22:16 |
kwmonroe | so i can help you | 22:16 |
magicaltrout | hey | 22:16 |
magicaltrout | NASA haven't paid me in 3 months | 22:16 |
magicaltrout | i'm about to go out of business don't ask me for cash :P | 22:16 |
kwmonroe | lolcry | 22:16 |
magicaltrout | okay 3 zk nodes works | 22:17 |
magicaltrout | 3 zk nodes with a relation | 22:17 |
magicaltrout | fails | 22:17 |
magicaltrout | its just done it again | 22:17 |
magicaltrout | that is 100% repeatable | 22:17 |
magicaltrout | i'll file a bug | 22:18 |
kwmonroe | magicaltrout: i may be seeing the same.. i just added 3 zks to my model, i see machines coming up -- https://paste.ubuntu.com/p/PKC45GN5Ns/ | 22:18 |
kwmonroe | but no zk | 22:19 |
kwmonroe | let's see if that changes after the machines come up | 22:19 |
magicaltrout | yeah | 22:19 |
magicaltrout | ... | 22:19 |
magicaltrout | it wont :P | 22:19 |
magicaltrout | in any normal deployment i see services going into a waiting state before the machines go live | 22:20 |
magicaltrout | I've got another 4 here now just doing nothing | 22:20 |
magicaltrout | also kwmonroe if you look at the application status in Juju GUI | 22:21 |
magicaltrout | it says "uncommitted" whatever that means | 22:21 |
magicaltrout | also kwmonroe IRC needs a joy emoticon.... | 22:23 |
magicaltrout | https://gist.github.com/buggtb/f7299a2116e96e63dc1dac20bfe5b137 | 22:23 |
magicaltrout | juju gui refuses to delete the models | 22:23 |
magicaltrout | and juju cli thinks they have 0 machines | 22:23 |
magicaltrout | but juju cli does let me destroy the models | 22:24 |
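[Editor's note: the CLI cleanup described above; the model name is a placeholder.]

```
juju models                   # the stuck models still list, showing 0 machines
juju destroy-model mymodel2   # succeeds from the CLI even when the GUI refuses
```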
magicaltrout | ah! So when deploying, if I scale and select 1 unit per machine | 22:26 |
magicaltrout | it deploys the services correctly | 22:26 |
magicaltrout | but if I deploy and select manually place, and then in the swimlanes select auto placement | 22:26 |
magicaltrout | it doesn't deploy the services | 22:26 |
kwmonroe | yeah, confirmed here magicaltrout | 22:30 |
kwmonroe | rick_h: haaaalp | 22:30 |
magicaltrout | ha | 22:30 |
magicaltrout | want me to file it on github? | 22:30 |
* rick_h walks back to the computer | 22:31 | |
rick_h | what's up? | 22:31 |
=== rick_h changed the topic of #juju to: Get started with Juju https://jujucharms.com/docs/ | The Juju Show https://www.youtube.com/c/jujucharms | ||
kwmonroe | rick_h: start a new model at jujucharms.com, add zookeeper (or anything), update the deployment to 3 units, commit. | 22:31 |
kwmonroe | 3 machines will spin up, no apps get deployed | 22:32 |
kwmonroe | right magicaltrout? is that about the simplest way to do it? | 22:32 |
kwmonroe | i confirmed with both zk and mongo charms | 22:33 |
magicaltrout | yeah, although when I did it I had to relate it to something | 22:33 |
rick_h | kwmonroe: k, what's the logs on the machine that comes up say? | 22:33 |
magicaltrout | but kev already had kibana running so who knows | 22:33 |
magicaltrout | also rick_h https://gist.github.com/buggtb/f7299a2116e96e63dc1dac20bfe5b137 | 22:34 |
magicaltrout | the messed up models report 0 machines for the broken services | 22:35 |
rick_h | magicaltrout: yea, you might have to log out and back in to see something | 22:35 |
rick_h | magicaltrout: because of the outage/back up and such | 22:35 |
magicaltrout | even though my model and mymodel2 had 4 or 5 running | 22:35 |
rick_h | magicaltrout: I had errors until I re-authed | 22:35 |
magicaltrout | rick_h: this happened at 3pm | 22:35 |
magicaltrout | well before the downtime | 22:35 |
rick_h | magicaltrout: oic | 22:35 |
magicaltrout | and then post downtime | 22:36 |
rick_h | k, testing out kwmonroe's issue | 22:36 |
magicaltrout | aww shit kwmonroe even gets credited with it | 22:36 |
magicaltrout | today is a bad day | 22:36 |
kwmonroe | heh, lemme cycle my login and try again | 22:37 |
rick_h | sorry but it's my birthday and I'm packing for a trip. I only came back because of "haaaaalp" | 22:37 |
rick_h | so I was lazy and only read the backlog a little ways :P | 22:37 |
magicaltrout | hehe, i was just gonna file it | 22:38 |
magicaltrout | blame kev | 22:38 |
magicaltrout | i saw the bike getting packed, somewhere nice? | 22:38 |
rick_h | magicaltrout: was hoping so, ozark mountains (Arkansas) | 22:38 |
rick_h | magicaltrout: but it's been raining a solid week and raining tomorrow/sat | 22:38 |
rick_h | magicaltrout: so not sure how much riding we'll get to do...maybe some mudding? | 22:38 |
rick_h | plan was to get out of MI winter and it's warmer here than there right now lol | 22:39 |
rick_h | yea, something odd here | 22:39 |
rick_h | units still show as uncommitted | 22:39 |
rick_h | but the machines came up | 22:39 |
kwmonroe | ha! mongo just came up for me. maybe it's a zk thing. | 22:39 |
rick_h | hah, and hatch bailed | 22:39 |
rick_h | bastard | 22:39 |
kwmonroe | and i have to change locations for <reasons>.. be back asap. | 22:40 |
rick_h | k, I'll file a bug and get folks to look in the AM...damn. We've got a new gui that's hit release and I wonder if this is still in there... | 22:40 |
rick_h | magicaltrout: what's your GH username? | 22:41 |
rick_h | you know why? because you didn't hit autoplace? | 22:41 |
rick_h | or did you do some placement stuff? | 22:41 |
magicaltrout | buggtb | 22:42 |
magicaltrout | well I hit autoplace on one of the tests I did | 22:42 |
magicaltrout | and it made no difference rick_h | 22:42 |
rick_h | so it works if I manually place | 22:42 |
rick_h | works if I add and click "one unit per machine" | 22:43 |
rick_h | and works if I hit manually place and then later click "auto place" | 22:43 |
rick_h | magicaltrout: do you have a solid failure repro steps? | 22:43 |
rick_h | basically the bad thing is that if I scale 3 new units it says it'll add-unit but since I never placed it that changelog is a lie | 22:44 |
rick_h | in that case it's "broken" but it's because I said manually place and then didn't | 22:44 |
magicaltrout | so i've literally just tried this | 22:46 |
magicaltrout | i've stuck 1 apache-drill and 3 zookeepers on my canvas | 22:46 |
magicaltrout | related them | 22:46 |
magicaltrout | said manual place | 22:46 |
magicaltrout | but then in the machines view i clicked the auto place button | 22:47 |
magicaltrout | oh i see what its doing | 22:47 |
magicaltrout | sorta | 22:47 |
magicaltrout | thats messed up | 22:47 |
magicaltrout | screen grab coming | 22:48 |
magicaltrout | https://imagebin.ca/v/3sdC6OOnVLBJ | 22:49 |
magicaltrout | look at this rick_h | 22:49 |
magicaltrout | i stuck the stuff on the canvas then clicked the auto place button | 22:49 |
magicaltrout | and then hit deploy | 22:49 |
magicaltrout | it fired up 4 machines | 22:49 |
magicaltrout | which are empty | 22:49 |
magicaltrout | then did something pretty weird with 2 zookeeper nodes | 22:50 |
magicaltrout | which don't exist | 22:50 |
magicaltrout | and where's the other zk node and where's the apache drill? | 22:50 |
magicaltrout | they literally don't exist | 22:50 |
rick_h | magicaltrout: hmm, can you file a bug in https://github.com/juju/juju-gui/issues/new with the step by step and I'll get someone to look into it please? | 22:51 |
magicaltrout | sure thing | 22:52 |
rick_h | ty! | 22:52 |
magicaltrout | filed | 22:55 |
magicaltrout | have fun packing | 22:55 |
magicaltrout | thanks for the help kwmonroe | 22:55 |