[00:13] <veebers> thumper: yay sorted. /me updates bug now
[00:21] <veebers> KingJ: Are you still online? FYI those agents are available now. Sorry for the hassle
[01:47] <babbageclunk> So we haven't renamed juju to krustykrab then?
[01:51] <babbageclunk> wallyworld: when you get a chance could you please have a squiz at https://github.com/juju/juju/pull/8695?
[01:52] <babbageclunk> wallyworld: it's not blocking me though so no hurry
[01:52] <wallyworld> babbageclunk: ok, np, after talking to tim
[01:52] <babbageclunk> ta
[02:07] <veebers> wallyworld: public-clouds have been updated
[02:10] <wallyworld> veebers: great! ty
[02:31] <wallyworld> babbageclunk: done!
[02:46] <thumper> wallyworld: here is that rework I was telling you about: https://github.com/juju/bundlechanges/pull/39
[02:46] <wallyworld> ok
[03:12] <babbageclunk> thanks wallyworld
[03:13] <wallyworld> thumper: done
[03:29] <thumper> wallyworld: awesome, thanks
[04:57] <thumper> wallyworld_: https://github.com/juju/juju/pull/8687 - this is the work to pass policy into deploy and add unit api calls
[04:57] <thumper> there was some refactoring to rework the api structures in the facade
[04:57] <thumper> but I think it now represents what we consider best practice
[08:29] <srihas> hi, with help from the channel I have installed OpenStack, but when I try to open the Horizon GUI after exposing the service
[08:29] <srihas> I got internal server error
[08:31] <srihas> I just did "sudo chown www-data /var/lib/openstack-dashboard/secret_key" and restarted apache2 in one of the lxd containers and it started working from all the endpoints
[08:31] <srihas> is it fine to make the change on just one of the lxd hosts? does it get synced to the others?
[08:32] <srihas> but now "Unable to establish connection to http://127.0.0.1:5000/v2.0/tokens: HTTPConnectionPool(host='127.0.0.1', port=5000): Max retries exceeded with url: /v2.0/tokens (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f80619967d0>: Failed to establish a new connection: [Errno 111] Connection refused',))" horizon is trying to connect on the localhost endpoint for keystone, which is not true in this case as the
[08:38] <manadart> jam: In HO.
[08:38] <jam> manadart: I'm kind of head deep in the netplan bonds stuff, and I don't think he's done an update yet, has he?
[08:42] <manadart> jam: Doesn't appear so.
[08:42] <manadart> jam: Happy to punt.
[08:44] <manadart> jam: Let me know if you get a natural break sooner than our 1:1. Need a quick powwow.
[08:44] <jam> manadart: k. making food right now and then I'll ping you when I'm back
[08:44] <manadart> externalreality: G'morning. Are you able to hop on a quick HO?
[09:01] <jam> manadart: now is a good time before I get back into it.
[09:01] <jam> 1:1 ?
[09:01] <manadart> K.
[09:11] <wallyworld> rogpeppe: did you still need input on cmr macaroons?
[10:16] <TheAbsentOne> The layer-index repo does not have the cassandra interface. Could someone explain to me how juju then works with that interface? How is the communication done?
[11:30] <cnp> I have installed PyCharm and cloned juju-gui into my IDE. I then ran setup.py, which was successful. Now when I try to run development.ini from the IDE I am receiving:
[11:30] <cnp> C:\Users\CG3\venv\juju-gui-2\Scripts\python.exe C:/Users/CG3/PycharmProjects/juju-gui/development.ini File "C:/Users/CG3/PycharmProjects/juju-gui/development.ini", line 6 [app:main] ^ SyntaxError: invalid syntax Process finished with exit code 1
[11:31] <cnp> please help
[11:52] <rick_h_> Hmm, wonder what cnp was trying to do
[12:19] <TheAbsentOne> true he didn't really stay long
[12:19] <TheAbsentOne> rick_h_: do you have an idea about my interface question? Also how do I receive the app-name of the charm actually? Couldn't find it in the docs and I thought I read it somewhere in other charms
[12:23] <stub> TheAbsentOne: Nobody has written a Cassandra interface yet, or if they have they haven't published it. It predates reactive charms.
[12:23] <stub> TheAbsentOne: You either need to write one, wait for someone to write one, or use relation_get/relation_set like in the good ol' days before charms.reactive
[12:23] <jam> manadart: reviewing 8696
[12:24] <TheAbsentOne> Ah I see, I never used the relation_get/set stuff thanks for the info stub
[12:24] <TheAbsentOne> I also took the liberty to put a feature request on the pgsql interface, I hope you don't mind
[12:24] <jam> stub: TheAbsentOne: I started working on one here: https://github.com/jameinel/interface-cassandra
[12:24] <stub> TheAbsentOne: Writing the interface is on my todo list along with a reactive rewrite of the Cassandra charm and supporting the very latest Cassandra versions (like DSE 6, which was reported as failing the other week)
[12:24] <manadart> jam: Thanks. Didn't want to break your stride. Was wondering how it looked against the CI test for Bionic.
[12:24] <jam> but I didn't get to the point of knowing it worked for the case I needed, and got sidetracked with other things.
[12:24] <jam> It is likely close, IIRC
[12:27] <TheAbsentOne> ah that looks nice jam, depending on a lot of things I might take a look at it too, but I'm afraid I'm too much of a noob to write a decent cassandra charm, thanks for the info and the link!
[12:28] <jam> manadart: commented. btw you are likely fixing bug #1668547
[12:28] <mup> Bug #1668547: juju doesn't configure lxdbr0 properly with new LXD (>2.3) <bridge> <lxd> <network> <juju:Triaged by jameinel> <https://launchpad.net/bugs/1668547>
[12:30] <jam> manadart: btw, you have a couple of nodes allocated on GUIMAAS. are you still using them?
[12:30] <manadart> jam: That's a LXD cluster I set up. Easy to do again when I need it. All yours.
[12:37] <jam> manadart: I don't, just saw it running and was checking. I'm doing a bionic netplan test, but I should only need 1 node.
[12:39] <bobeo> goooood morning!
[12:40] <rick_h_> jam: feel free to kill mine off if I still have any
[12:41] <rick_h_> bobeo: morning
[12:43] <bobeo> rick_h_: another beautiful day! getting the morning started here. I realized this weekend I couldn't ssh via juju ssh into my models on one system, but I can on another. does juju have a specific set of ssh keys it needs, IE the ones from maas, or does it use its own keys? and if so, would that be juju add-ssh-key --model <ModelName>
[12:44] <bobeo> rick_h_: also, to clarify, I mean I have access from one machine via juju client, but not from the new one
[12:44] <rick_h_> bobeo: yea, so in the MAAS case, if MAAS has a key it's normally pulled in and used. If you need to add a key then you can add-ssh-key or import-ssh-key
[12:45] <rick_h_> bobeo: right, so juju will set up a key on bootstrap so that things work, and if you go to another machine you'll need to make sure a key on that machine is available to juju (well, known to it)
[12:47] <bobeo> rick_h_: ok, so I'm a bit lost at this part. I do juju import-ssh-key --model <ModelName> but then it repeats the options on the command with --model again, but this time offers different options, though they are the same models. maybe I'm misinterpreting this? I deployed via conjure-up, and the initial option offers both all the conjure-up options as well as all the model options, but the second iteration only offers all the models. does it
[12:48] <bobeo> rick_h_: do those first, and then the specified model, or does one option, just as the conjure-up option, allow me to target all models in that instance?
[12:48] <bobeo> rick_h_: if that's not the case, can we get that to be the case in some future option? because that would be really awesome! One import for all models in a cloud instance.
[12:48] <rick_h_> bobeo: so no, right now ssh keys are per model, which is a :( thing that we've got on the roadmap to clean up
[12:49] <bobeo> YAY! rick_h_  I GUESSED A FEATURE! I get a COOKIE!
[12:49] <rick_h_> bobeo: so what you have to do is, for each model, juju import-ssh-key -m xxxx $lp_or_gh_username
[12:49] <bobeo> so juju import-ssh-key -m <modelname> <myusername>
[12:49] <bobeo> rick_h_: correct?
[12:50] <rick_h_> bobeo: correct
[12:51] <bobeo> rick_h_: ok, so I think I missed something again, do you have any docs on lp or gh? I'm not familiar with either abbreviation, and it didn't like it when I simply used my username; it gave me a "prefix in key ID "<username>" not supported, only lp: and gh: are allowed".
[12:51] <rick_h_> bobeo: sorry, that's the github or launchpad username that has public keys for you to use
[12:52] <bobeo> rick_h_: OOOH! that's what those mean! lemme update my list real quick!
[12:52] <rick_h_> bobeo: so I've got my public keys on both sites that I know my laptop will work with so it's easier for me to just import them from those sites vs upload it manually
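The flow rick_h_ describes looks roughly like this; the model name and usernames are placeholders, and the loop at the end assumes `jq` is installed:

```shell
# Import the public keys a user has published on Launchpad or GitHub
# into one model ("mymodel" and the usernames are placeholders):
juju import-ssh-key -m mymodel lp:my-launchpad-user
juju import-ssh-key -m mymodel gh:my-github-user

# Or add a specific public key directly:
juju add-ssh-key -m mymodel "$(cat ~/.ssh/id_rsa.pub)"

# Since keys are per-model (as noted above), repeating the import for
# every model can be scripted; this assumes jq is available:
for m in $(juju models --format=json | jq -r '.models[].name'); do
    juju import-ssh-key -m "$m" gh:my-github-user
done

# Verify which keys a model now accepts:
juju ssh-keys -m mymodel
```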
[12:53] <bobeo> rick_h_: is that safe?
[12:53] <rick_h_> bobeo: so it's your public key you'd normally use. It's not the private key. Neither launchpad nor github have that. The private key only exists on your laptop
[12:53] <bobeo> bobeo: that doesn't sound safe. I know you can login with ssh without a pass depending on how it's configured and built. wait..ignore that, stupid question. you would have set a password
[12:53] <rick_h_> bobeo: so basically, you're saying that the juju models should accept the same key that you'd use to push changes to github or launchpad
[12:54] <rick_h_> bobeo: never send your private key off your laptop anywhere
[12:54] <bobeo> rick_h_: so toss <username> pub key, pull pub key, use priv key + pass to auth. I see where that's headed
[12:54] <rick_h_> well, unless you're backing it up or copying it to another machine you use or the like
[12:54] <rick_h_> bobeo: right
[12:55] <bobeo> rick_h_: righto, so I'm guessing toss it in...wait, where does juju store the key?
[12:55] <rick_h_> bobeo: so it adds the public key to the normal ssh place of ~/.ssh/authorized_keys
[12:56] <bobeo> rick_h_: btw, are you guys ok with me creating ASCII docs on all the stuff you guys teach me? It helps me remember, and it would help you provide links on how-to's.
[12:56] <rick_h_> bobeo: when you use a command like ssh-copy-id or ssh-import-id or the like on a linux system, the public key is put into a list in that ^ file, and then when you ssh, it checks whether the key you're providing matches up with one of the public keys in that list
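In other words, what ssh-copy-id does under the hood is roughly the following; the host, user, and key path are placeholders:

```shell
# Manually: append your public key to the remote authorized_keys list.
cat ~/.ssh/id_rsa.pub | ssh ubuntu@10.0.0.5 \
    'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'

# Or let the helper do the same thing:
ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@10.0.0.5
```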
[12:56] <bobeo> rick_h_: ok, I was wondering about that
[12:59] <bobeo> rick_h_: ok, so I'm gonna try this with a git command. this might take a while, I don't use it nearly often enough. I'm tryin to get better. I'll be back sooner than later if I level my git by accident XD
[12:59] <bobeo> and whatever you do, dont give me the answer o.o
[12:59] <rick_h_> bobeo: :)
[12:59] <rick_h_> bobeo: good stuff to learn
[13:01] <cnp> I have installed PyCharm and cloned juju-gui into my IDE. I then ran setup.py, which was successful. Now when I try to run development.ini from the IDE I am receiving the error below:  C:\Users\CG3\venv\juju-gui-2\Scripts\python.exe C:/Users/CG3/PycharmProjects/juju-gui/development.ini File "C:/Users/CG3/PycharmProjects/juju-gui/development.ini", line 6 [app:main] ^ SyntaxError: invalid syntax Process finished with exit code 1
[13:01] <cnp> can someone please help me in making development.ini run?
[13:02] <rick_h_> cnp: what are you doing? Is this from the github.com/juju/juju-gui repo?
[13:03] <cnp> I'm trying to create my own GUI version for juju. And yes, it is from the juju-gui repo.
[13:06] <cnp> am I missing anything to make development.ini run?
[13:37] <rick_h_> cnp: so https://github.com/juju/juju-gui/blob/develop/docs/hacking.md walks you through hacking on the project
[13:38] <rick_h_> cnp: it's not been tested on windows so I'm not sure if the dev tools will work happily there to be honest
[14:22] <manadart> jam: Can you cast an eye over the new commit for 8696 (https://github.com/juju/juju/pull/8696/commits/4861a263479c9cbd2a029237fe945486372413de) ?
[14:22] <manadart> Worked like a champ when I deleted eth0 and the default bridge.
[14:23] <manadart> I can address other sundries as next work and move on to these other bugs.
[14:31] <jam> manadart: I think I made comments
[14:31] <jam> manadart: if you didn't get them, let me know, and I'll see if I failed to submit somehow
[14:38] <jam> manadart: tit-for-tat can you review my WIP: https://github.com/juju/juju/pull/8697 ?
[14:39] <manadart> jam: Ja.
[14:39] <jam> manadart: sorry it's already 1k lines long. damn I suck :)
[14:39] <jam> fortunately a good portion of that should just be the 'examples'. (I hope)
[15:17] <srihas> hi guys, can someone tell if juju charm for neutron-api supports "neutron-plugin: aci" option?
[15:25] <rick_h_> beisner: ^ ?
[15:36] <rick_h_> evilnick: ping, how goes?
[15:37] <evilnick> rick_h_, hi, what's up?
[15:37] <rick_h_> evilnick: I'm looking to send the email on the 2.4-beta2 announcement and I wanted to see what I needed to do for us to sync up
[15:38] <rick_h_> evilnick: in looking at the checklist, there's notes on syncing up with docs folks and as it's my first time going through it I'm not sure what "normal" is these days
[15:39] <evilnick> heh. Yes, we usually sync up so we can make sure any references are in the dev docs and any release notes are published in dev too. pmatulis would probably be better to sync with
[15:39] <evilnick> he is more day to day on juju and closer to your timezone
[15:39] <rick_h_> evilnick: gotcha, pmatulis howdy :)
[15:41] <evilnick> rick_h_, he is around but may be on lunch right now
[15:41] <rick_h_> evilnick: k, np
[15:53] <pmatulis> rick_h_, b2 you say? i see the gdoc RelNotes have been updated. are they complete?
[15:53] <rick_h_> pmatulis: well, complete as they're going to get today I think
[15:53] <rick_h_> pmatulis: I'm sure I'm messing something up and someone will correct me at some point heh
[16:03] <manadart> jam: Approved.
[16:28] <pmatulis> rick_h_, no mistakes allowed
[16:28] <rick_h_> pmatulis: boooooo
[16:29] <rick_h_> I have a reputation to uphold
[16:31] <pmatulis> rick_h_, i'm ready to publish. good to go?
[16:36] <rick_h_> pmatulis: sure, I'm heading home atm but will send the email when I get home
[16:36] <rick_h_> ty pmatulis
[16:46] <bobeo> hey kwmonroe is it possible to deploy openjdk as a container? I see that you mantain that one.
[16:46] <pmatulis> rick_h_, k, it will be there in 15 minutes (max)
[16:47] <kwmonroe> bobeo: openjdk is a subordinate charm, meaning it needs a principal to relate to.. as long as that principal can be deployed into a container, openjdk will happily go along with it.
[16:48] <kwmonroe> bobeo: all the openjdk charm really does is add a repo (if needed) and let you easily switch between versions via juju config (vs apt reinstalling on a unit)
[16:48] <bobeo> kwmonroe: OOOH! thats what that means, ok. So I need to simply deploy the principal first.
[16:50] <kwmonroe> yup bobeo, note that the principal must provide a java relation -- you can't just attach openjdk to any 'ol thing.  here's a list of stuff openjdk can relate to: https://jujucharms.com/q/?provides=java
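As a concrete sketch of the subordinate pattern kwmonroe describes; zookeeper is just one example of a principal that provides the java relation, and the config key name is an assumption (check `juju config openjdk` for the real options):

```shell
# Deploy a principal that provides the "java" relation, then attach
# the openjdk subordinate to it.
juju deploy zookeeper
juju deploy openjdk
juju add-relation zookeeper openjdk

# Switch Java versions via charm config instead of apt on the unit
# ("java-major" is assumed here; verify with: juju config openjdk)
juju config openjdk java-major=8
```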
[17:09] <JoeJulian> Is there a way to make juju partition a maas node?
[17:26] <bobeo> rick_h_: I found a thing lool http://paste.ubuntu.com/p/6NTG85V6N5/
[17:26] <bobeo> no machines, no applications, it states its already resolved, no units, and still an error
[17:27] <rick_h_> bobeo: quit breaking things!
[17:28] <bobeo> LOOL
[17:28] <bobeo> I'll provide you a copy of my history commands to be able to reproduce
[17:28] <rick_h_> bobeo: oh I believe you
[17:29] <rick_h_> bobeo: a bug with those steps would be awesome and we can see how to make sure you can get out of jail there
[17:30] <bobeo> oh absolutely! I know you do. lying to you guys is like picking up vipers and smacking them, sure to bite you in a bad way.
[17:30] <bobeo> hopefully with the history you can reproduce easily. deployment method was conjure-up
[17:33] <JoeJulian> Ok, no answer about juju partitioning maas so the other way I can solve this is by adding a loopback image. I see instructions for how to do that from a command line but can it just be configured in the bundle? I tried adding something to the machine section and it didn't error - but I also didn't get a device.
[17:40] <kwmonroe> JoeJulian: not sure what you mean by provisioning a maas node, but if you have juju bootstrapped in your maas env (https://jujucharms.com/docs/stable/clouds-maas), things like "juju deploy foo" should ask maas for an available machine and do whatever needs to be done to put "foo" on it.
[17:42] <JoeJulian> Yeah, everything but give me a block device I can use with heketi. So I need to either partition the machine drive instead of allocating the whole thing, or add a loopback image that heketi can put a thin-provision lvm on.
[17:42] <kwmonroe> bobeo: you're using bdx's elasticsearch charm, aren't you?
[17:43] <bobeo> kwmonroe: yes o.o
[17:43] <bobeo> kwmonroe: today I've tried I think 4 different types of elasticsearch
[17:44] <bobeo> one gave me no feedback from a direct curl query, one gave me a failed install, another worked just fine, and bdx's, well it's bdx's, and I trust him enough to give him the keys to my car, and my own ssh keys.
[17:44] <kwmonroe> bobeo: i ask because http://paste.ubuntu.com/p/6NTG85V6N5/ shows ES and storage -- i'm not sure how (or if) storage works in containers.  bdx, has this worked for you?
[17:44] <bobeo> kwmonroe: my error message was that it didn't.
[17:44] <bdx> nah
[17:45] <bdx> I steered bobeo in the wrong direction
[17:45] <kwmonroe> good thing you've got his car keys
[17:45] <bdx> bobeo: neither elasticsearch charm will work on lxd currently as far as I know
[17:45] <bobeo> kwmonroe: my thought is to use a separate elasticsearch outside of his in the interim, and simply deploy the other two modules which worked
[17:45] <bobeo> bdx: ive gotten elastic to work in lxd
[17:45] <bdx> bobeo: you have ? elastic 2.x?
[17:45] <bobeo> my thoughts are mesh the two solutions
[17:46] <kwmonroe> 2.x??? tell me you're not rocking ES 2.x...
[17:47] <bdx> elastic 5.x will hit this container bug
[17:47] <bdx> 2.x does not
[17:47] <kwmonroe> boooooo
[17:47] <bdx> so, bobeo: you very well may have deployed es to container previously
[17:47] <bdx> see https://github.com/elastic/elasticsearch/commit/32df032c5944326e351a7910a877d1992563f791
[17:50] <bobeo> just to be clear, I don't think conjure-up is broken, nor relevant. I am currently working on my terminology, and am in no way blaming the creators or contributors for what is most likely my fault
[17:52] <bobeo> also, is there a good tutorial on how conjure-up works? I use it a lot for my openstack deployment, and it's absolutely amazing, but I'd also like to understand how it works and why it works better.
[17:52] <bdx> hmm ... I'm not sure, you should ask in #juju for that
[17:52] <bdx> :p
[17:53] <bdx> lol whoops
[17:53] <bdx> I see we are there
[17:53] <bdx> @bobeo, I think there is a conjure-up channel, or slack or something too
[17:54] <bdx> @bobeo #conjure-up
[17:55] <rick_h_> bobeo: so conjure-up works by adding the idea of "spells", which allows it to hook up to and do more than a basic bundle.
[17:55] <rick_h_> bobeo: and with the UX bits in there it basically lets you "build a bundle" so that you can tweak things in a more interactive fashion
[17:55] <rick_h_> bobeo: search for openstack spell conjure-up and you can see what goes into it
[17:56] <bobeo> rick_h_: so are you saying it's a bit like a bundle of bundles? like a bundle book?
[17:56] <rick_h_> https://github.com/conjure-up/spells
[17:56] <rick_h_> bobeo: heh, it's a bit more like a "choose your own bundle adventure"
[17:57] <bobeo> rick_h_: I'll definitely earmark those for reading later. as an update to the issue, I also discovered I can't remove the model, as it hangs up on removing the application.
[17:57] <bobeo> sorry, I can't destroy* the model
[17:57] <rick_h_> bobeo: right...normally we'd use the juju remove-machine --force to clean it up
[17:57] <rick_h_> bobeo: but....since you have no machines I'm not sure how we'd clean that out :/
[17:57] <bobeo> rick_h_: yea, I tried that
[17:58] <bobeo> rick_h_: I figured if I destroyed the machine, it would go away. I could try to reboot the controllers?
[17:58] <bobeo> rick_h_: maybe they are just hung, and a reboot might resolve the issue?
[17:59] <kwmonroe> bobeo: have you tried "juju remove-application xxx"?
[17:59] <rick_h_> bobeo: have you tried to resolve it with --no-retry and see if you can get it to remove?
[17:59] <bobeo> kwmonroe: yea I tried that. rick_h_: yes I tried that as well, it states it's been resolved.
[17:59] <rick_h_> bobeo: k, so it states it's resolved, but then you can't remove it then?
[18:00] <bobeo> rick_h_: correct, and when I run the command again, the juju resolved --no-retry, it states it's already resolved
[18:00] <kwmonroe> bobeo: does remove-application give you an error?
[18:03] <bobeo> kwmonroe: strangely no, it says "removing application elasticsearch" and then returns to prompt
[18:05] <kwmonroe> hmph bobeo, and destroy-model just hangs?
[18:08] <bdx> kwmonroe, rick_h: I think I just realized something, is it true that if a charm defines storage, it can no longer be deployed to lxd (on any provider other than lxd)?
[18:09] <rick_h_> kwmonroe: so storage is always optional
[18:09] <bdx> I guess in short, I'm wondering if lxd deploys and juju storage are mutually exclusive?
[18:10] <bdx> it seems they are
[18:10] <rick_h_> kwmonroe: by default it'll just use a path on disk
[18:10] <bobeo> kwmonroe: what it does is go into a state of "Waiting on model to be removed, 1 application(s)..."
[18:10] <rick_h_> kwmonroe: and the storage lxd bits are 18.10 goals so it'll be coming
[18:10] <bdx> ok thanks
[18:12] <kwmonroe> bdx: i assume when rick_h_ was talking to me, you knew he meant you.
[18:12] <rick_h_> kwmonroe: bdx well I guess both of you
[18:12] <rick_h_> oh, I see what I did there. bdx was asking you
[18:12] <rick_h_> whoops
[18:12] <kwmonroe> :)
[18:12] <bdx> seriously monday
[18:13] <rick_h_> :P
[18:13] <bobeo> *sits back and nods head as if knowing whats going on*
[18:13] <rick_h_> multi-tasking
[18:13] <bobeo> caffeine my friends, caffeine. I know little about linux, but I know much of the sorcery of caffeine.
[18:14] <KingJ> veebers: Apologies, only just had a chance to try things out. All good now though - been able to deploy a 2.4-beta2 controller and the agent downloaded.
[18:15] <rick_h_> KingJ: yay
[18:16] <kwmonroe> bobeo: how much would it hurt to tear down your whole controller?  if you're able to do that without much angst, this is a pretty big hammer that might get rid of that model "juju destroy-controller --destroy-all-models"
[18:16] <bobeo> kwmonroe: I trust you, rick_h_ and bdx enough to buy you plane tickets, hand you the keys to my office, and leave for a 2 week vacation. if you tell me to burn the whole thing to the ground, I'd do it and not blink an eye.
[18:16] <kwmonroe> bobeo: otherwise, if destroy-model is looping forever on  "Waiting on model to be removed, 1 application(s)...", i'm not really sure what to do.  rick_h_, do you have anything less drastic to try other than taking out the controller?
[18:17] <bobeo> although I did something a bit simpler, and simply built another model called secappclusters-001-000002
[18:17] <rick_h_> bobeo: kwmonroe no, because of the fact that it's not on a machine and that Juju thinks it's resolved there's no other hammer I have at my disposal :(
[18:17] <bobeo> I figured I'd burn through a few dozen models, so I planned ahead
[18:18] <rick_h_> bobeo: yea, I mean basically just put a note on the wall that this model is a dead model killed by an infestation of bugs
[18:18] <rick_h_> bobeo: and carry on, it's not hurting anything
[18:18] <rick_h_> bobeo: but if you want a clean pristine setup controller death is the only way forward I've got for you
[18:18] <rick_h_> bobeo: kwmonroe well model migration is the other path
[18:18] <rick_h_> just bootstrap, migrate any models you want to keep, and kill off this controller
[18:18] <bobeo> rick_h_: I've already got a new model built, with a new system already building
[18:18]  * rick_h_ should have put quotes around "just"
[18:18] <rick_h_> bobeo: cool
[18:18] <bobeo> also, I have redundant controllers, or at least I think I do?
[18:19] <bobeo> I definitely have 3 controllers
[18:19] <rick_h_> bobeo: juju status -m controller
[18:19] <rick_h_> bobeo: will show you controller machines/etc
[18:21] <bobeo> rick_h_: that brings up a good point. how do I validate if controllers are clustered?
[18:21] <bobeo> IE in HA mode
[18:21] <kwmonroe> bobeo: shut down the controller and see if things still work.
[18:21] <bobeo> I had a controller fail on me in the past without it, it ended...poorly.
[18:21] <bobeo> kwmonroe: LOOL
[18:21] <kwmonroe> kidding btw ;)
[18:22] <bobeo> kwmonroe: that definitely is a good way to do it XD
[18:22] <rick_h_> lol
[18:22] <rick_h_> bobeo: juju show-controller
[18:22] <rick_h_> bobeo: should show HA status details
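A quick sketch of checking (and, if needed, enabling) controller HA; the controller name "mycontroller" is a placeholder:

```shell
# Show controller details, including the list of controller machines:
juju show-controller mycontroller

# The controller model lists each controller machine and its status:
juju status -m controller

# If only one controller machine exists, HA can be enabled with:
juju enable-ha -n 3
```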
[18:23] <bobeo> rick_h_:  api-endpoints: ['10.0.0.99:17070', '10.0.0.17:17070', '10.0.0.20:17070']
[18:23] <bobeo> thats extremely reassuring
[18:23] <bobeo> bare minimum, the failure is also replicated?
[18:23] <rick_h_> bobeo: and I think with juju status -m controller the machines list will give you some details
[18:24] <rick_h_> lol
[18:24] <bobeo> either way, I'm not afraid of a dead model
[18:24] <bobeo> also, would it be helpful if I built a port-based diagram for juju, maas, etc?
[18:25] <kwmonroe> bdx: good call on ES 5.x busted on containers -- systemd[1]: Failed to reset devices.list on /system.slice/elasticsearch.service: Operation not permitted :/
[18:25] <bobeo> it seems to be a popular use, and it would help me better understand as well.
[18:29] <kwmonroe> yeah bobeo, that would be awesome.  you're right that the question of how those pieces fit together comes up a lot.
[18:42] <bobeo> kwmonroe: rick_h_ bdx ok so I think I found a way to MacGyver it together.
[18:42] <bobeo> I went with a generic elastic on bare metal, with kibana and logstash from bdx installed in an lxd instance. I'll let you know how it goes. so far it looks good.
[18:49] <bobeo> ok so I've got everything working minus the relationships, what are the requirements for the relationships? kwmonroe bdx rick_h_ do you have a link for this?
[18:50] <bobeo> I checked the ELK, elasticsearch, and kibana pages, but they didn't state the relations
[19:16] <bobeo> ok so I got all three of them working, I just need to configure them now, and it's fully operational (I hope)
[19:16] <bobeo> but status wise, all green http://paste.ubuntu.com/p/hBXqCwHxvb/
[20:03] <ascend> If I do not like the ceph.conf file that came out of a juju build, or any other service build, how can I change that persistently?
[20:05] <rick_h_> ascend: most of the charms have some sort of "user extra config" you can try like https://jujucharms.com/ceph/#charm-config-config-flags
[20:06] <rick_h_> ascend: but honestly, you can't tell Juju that the config is X while the application is actually running Y; all the charm hooks that read config details and make decisions (scripts to execute, packages to install, information to tell related applications) need those to match for things to work out well
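A sketch of the config-flags approach, so the charm (rather than a hand edit) owns ceph.conf; the specific setting shown is illustrative, consult the ceph charm's config-flags documentation for the exact format:

```shell
# Push extra ceph.conf settings through the charm's config-flags option
# (the key/value here is just an example):
juju config ceph config-flags="{'global': {'osd max object name len': 256}}"

# Check what the charm currently has set:
juju config ceph config-flags
```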
[20:13] <TheAbsentOne> ascend, or you could hack your way into the charm and change it if the configs or actions are not sufficient x)
[20:16] <rick_h_> TheAbsentOne: ascend except juju runs various hooks just as a matter of course and you'll probably find your changes overwritten from time to time
[20:17] <TheAbsentOne> totally!
[20:23] <TheAbsentOne> rick_h_, if an interface layer only has a requires (like mongo), can I still use it with the endpoint pattern?
[20:23] <rick_h_> TheAbsentOne: I'm not sure, I've not messed with the endpoint pattern stuff atm
[20:24] <bobeo_> o/
[20:26] <TheAbsentOne> ah np rick_h_ I might try a Minimal working thing for mongo tomorrow, can't find any recent (reactive) charm using mongodb
[20:26] <TheAbsentOne> hi bobeo_
[20:27] <rick_h_> TheAbsentOne: yea, there's not one atm
[20:27] <bobeo_> hey TheAbsentOne!
[20:28] <bobeo_> I'm gonna start working on that document today kwmonroe and rick_h_, I'll let you know as things progress
[20:32] <kwmonroe> +1 bobeo_ -- hey you dropped earlier, but i was gonna say a couple things about your ELK status.. (1) it looks like you'll need an elasticsearch:client logstash:elasticsearch relation (i didn't see any logstash relations in your paste), and (2) putting kibana in a container may not work in all clouds if the container network isn't available -- in your case, it looks like you'll be fine across the 10.0.x.x network, but if
[20:32] <kwmonroe> you moved that deployment to AWS, for example, kibana would have a private address that wouldn't be accessible to the outside world without some proxy/forwarder.
[20:34] <kwmonroe> if you want kibana accessible in something like a public cloud, you could hulk smash it on a public machine.  that means placing both ES and kibana on the same public host with something like "juju deploy kibana --to 0".  it's not generally recommended to smash multiple charms on the same host, but it's not illegal to do so.
[20:34] <bobeo_> kwmonroe: I push to it via haproxy for load balancing, external to my juju instances
[20:35] <kwmonroe> ha bobeo_!  that was gonna be my next suggestion :)  haproxy can certainly handle the forwarding from public units to containers in the model.
[20:35] <bobeo_> kwmonroe: I'll put those other relations in place, one moment please
[20:37] <bobeo_> kwmonroe: relations added. Also, how did you know that relation was missing? is there a command to see all the available relations, or the relations required to make a charm fully functional?
[20:46] <bobeo_> kwmonroe: what's the best way for me to make config changes inside of juju with a standard install guide?
[20:49] <bobeo_> I ask because according to the system everything is deployed, but when I load the login page, or try to, it's only the nginx default landing page.
[20:53] <TheAbsentOne> what are you trying to deploy bobeo_
[20:58] <kwmonroe> bobeo_: i knew your logstash relation was missing because i didn't see anything in the Relation section related to logstash in your paste (http://paste.ubuntu.com/p/hBXqCwHxvb/)
[20:59] <kwmonroe> bobeo_: i didn't follow you on that last question... what login page are you loading?  kibana?
[21:00] <bobeo_> TheAbsentOne: kwmonroe deploying ELK, and yes kwmonroe , kibana.
[21:01] <bobeo_> I'm digging around in the kibana config, and I don't see much with regard to kibana for the web UI
[21:03] <kwmonroe> hm, bobeo_, not sure.. maybe try http://10.0.0.79/kibana
[21:03] <bobeo_> I did notice it didn't deploy with ES hosts configured, so I'll have to add all the es hosts, but other than that, it gives me the default password and the default listen port, but it needs to be forwarding 80 to something
[21:04] <kwmonroe> bdx: does your kibana have a different url path?  as in /kibana vs /, or maybe /james-is-the-best?
[21:04] <bobeo_> it doesn't show, which is the weird thing
[21:05] <bobeo_>  http://paste.ubuntu.com/p/fNPgZXp55R/
[21:05] <kwmonroe> bobeo_: gimme a minute to deploy the omni kibana.
[21:06] <bobeo_> kwmonroe: Omni kibana? that sounds badass
[21:06] <bobeo_> kwmonroe: I'll have what he's having
[21:06] <kwmonroe> bobeo_: you're using omnivector's kibana, right?  this one? https://jujucharms.com/u/omnivector/kibana/
[21:06] <kwmonroe> vs this one: https://jujucharms.com/kibana/
[21:06] <bobeo_> oh yea! I am having what you're having XD
[21:07] <bobeo_> kwmonroe: yea that's correct
[21:09] <kwmonroe> ok bobeo_, so bdx was sick and tired of the ELK stack development pace (which is fair), so he made ~omnivector versions of the ELK charms.  at some point, those will all merge to be the best of both worlds, but for now, it means i need to ask things like "which kibana are you deploying" so i know which one to try on my side.
[21:11] <bobeo_> kwmonroe: that's totally fine, I'll find you whatever information I can. honestly I like a lot of the things he did. merging out openjdk was genius
[21:16] <bobeo_> so this made me realize, omnivector isn't a username bdx uses, it's a type of charm? So what's the difference between a regular charm and an omnivector charm?
[21:16] <bobeo_> kwmonroe:
[21:28] <ascend> Sorry, I was away for a while. Thanks for the feedback on the above query.
[21:29] <zeestrat> bobeo_: It's one of bdx's namespaces on the charms.
[21:40] <kwmonroe> yeah bobeo_, what zeestrat said ^.  the charm store supports multiple namespaces, so for example, you and i could both have an openjdk charm.. one would be deployed with "juju deploy ~kwmonroe/openjdk" and the other with "juju deploy ~bobeo/openjdk" (or whatever namespace you wanted).  omnivector is bdx's company name, so he has a set of charms that they use published in the ~omnivector namespace.
[21:44] <kwmonroe> bobeo_: the charm store also supports a top level namespace, which means you can omit the ~namespace and just do "juju deploy openjdk".  that will deploy whatever namespace is promulgated as the top level charm.  there can only be 1 instance of a top level (or "promulgated") charm.
[21:46] <kwmonroe> bobeo_: so right now, https://jujucharms.com/kibana/ is the promulgated version of kibana.  it comes from the ~elasticsearch-charmers namespace.  both "juju deploy kibana" and "juju deploy ~elasticsearch-charmers/kibana" will deploy the same thing.  it's just that the former saves you some typing and is the default charm that shows up when people search for kibana.
[21:46] <kwmonroe> bobeo_: bdx has an alternate version of the kibana charm in the ~omnivector namespace, which you get when you type "juju deploy ~omnivector/kibana".
[21:47] <kwmonroe> and that's the one you're currently using
[21:48] <kwmonroe> there are 17 other versions of the kibana charm, btw.. you can see them by clicking "show 17 community results" here: https://jujucharms.com/q/kibana
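[Editor's note: the namespace rules kwmonroe describes above can be summarized in command form. This is a sketch assuming a bootstrapped Juju controller; the charm names are the ones from the discussion.]

```shell
# Promulgated (top-level) charm: the community default, shortest to type.
juju deploy kibana

# Equivalent to the above -- the promulgated kibana currently comes
# from the ~elasticsearch-charmers namespace:
juju deploy ~elasticsearch-charmers/kibana

# An alternate version of the same charm in another namespace
# (bdx's company namespace, the one bobeo_ is using):
juju deploy ~omnivector/kibana
```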
[21:53] <TheAbsentOne> kwmonroe, since you are here did you encounter a charm using mongodb recently by any chance? I'm looking for a charm (reactive framework/endpoint pattern) that uses mongod charm/interface
[21:56] <bobeo_> kwmonroe: I'm guessing the promulgated charm is assigned to the official project owner? As in, the one who created the project associated with the charm, IE Elastic would be the promulgated elasticsearch instance?
[21:59] <kwmonroe> TheAbsentOne: graylog uses mongo.. it's reactive, but not endpointy.  see around line 466: https://git.launchpad.net/graylog-charm/tree/reactive/graylog.py
[22:00] <Guest75873> wow I got dc'd, saw last comment, thanks for the link kwmonroe I will give it a look tomorrow, time to catch some zzz's goodnight all!
[22:01] <kwmonroe> TheAbsentOne: to make that endpointy, you'd say "@when('endpoint.mongodb.available'), def configure_mongodb_connection(), mongo = endpoint_from_name('mongodb'), for mongo_host in mongo.connection_strings():"
[22:02] <kwmonroe> Guest75873: ^^
[22:02] <Guest75873> ah kwmonroe so that's where I was wrong I tried to fetch it by flag as well
[22:02] <Guest75873> instead of name
[22:03] <kwmonroe> Guest75873: there is a _from_flag too.. i think it'd be "endpoint_from_flag('mongodb.available')".  but we can check that after your nap ;)
[22:04] <Guest75873> kwmonroe, https://github.com/Ciberth/charm-mongo-consumer/blob/master/reactive/charm-mongo-consumer.py
[22:04] <Guest75873> it's okay my sleeping schedule is just as broken as my very first charm I wrote x)
[22:05] <kwmonroe> Guest75873: cory_fu may have to correct me here, but i think you want mongodb = endpoint_from_flag('mongodb.available') instead of mongodb = endpoint_from_flag('endpoint.mongodb.available') -- omit the 'endpoint.' prefix.
[22:05] <kwmonroe> maybe not though -- i know there was a _from_name vs _from_flag discussion recently, i just don't know which way it's supposed to go :)
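[Editor's note: expanded into a reactive handler file, the one-liner kwmonroe pasted above would look roughly like this. A sketch only: it assumes the charms.reactive endpoint API and runs only inside a deployed charm, and the handler/relation names come straight from the chat. Whether `endpoint_from_flag` takes the flag with or without the `endpoint.` prefix is exactly the open question in the discussion.]

```python
# reactive/mycharm.py -- sketch of consuming a mongodb endpoint
from charms.reactive import when, endpoint_from_name

@when('endpoint.mongodb.available')
def configure_mongodb_connection():
    # Look the endpoint up by its relation name, per kwmonroe's snippet.
    mongo = endpoint_from_name('mongodb')
    for mongo_host in mongo.connection_strings():
        # Write each connection string into the application's config here.
        ...
```

[The `endpoint_from_flag` variant takes the flag itself rather than the relation name; as noted above, the exact flag spelling was left to verify.]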
[22:06] <Guest75873> yeah I wanted to check what flag was set right when I stopped working on it. Thanks kwmonroe I'll be back soon ^^
[22:06] <kwmonroe> +1
[22:06] <Guest75873> pretty sure I prefer flag x)
[22:06] <Guest75873> gnight!
[22:06] <kwmonroe> goodnight Guest75873!
[22:08] <wallyworld> thumper: did you see i reviewed your pr?
[22:08] <thumper> yes, I'd like to have a quick chat about it later
[22:08] <wallyworld> sure
[22:08] <thumper> but I'm on calls for ages now
[22:10] <kwmonroe> bobeo_: the promulgated charm is the one that the community decides is the best suited as a default charm.  anyone can have the top level / promulgated namespace.. you just have to mail the juju mailing list and ask for it.  currently ~elasticsearch-charmers (which is *not* actually affiliated with Elastic Co) has the promulgated elasticsearch charm, but there's nothing stopping bdx from requesting a takeover.  if he wanted to,
[22:10] <kwmonroe> he'd make a case for that on the list, and anyone that has a stake in the elasticsearch charm would have a chance to ack/nak the transfer.  that said, lots of people don't bother promulgating these days -- if you know bdx's omnivector charms work best for you, you simply deploy with ~omnivector/<foo>.
[23:36] <thumper> babbageclunk: fyi race in raft stuff
[23:36] <babbageclunk> thumper: ah d'oh - ok, chasing.
[23:36] <thumper> babbageclunk: thanks