[04:08] <thumper> simple review for someone... https://github.com/juju/juju/pull/10681
[04:18] <Sou> Hey all, I am pretty new to the entire juju thing. We used juju charms to set up OpenStack
[04:19] <Sou> Is there any way to regenerate a configuration file for a unit using juju?
[04:20] <Sou> any help would be great
[04:21] <thumper> Sou: I'm not sure I understand what you are asking for
[04:22] <thumper> what sort of configuration file are you expecting or wanting?
[04:24]  * thumper needs to head off for the week
[04:25] <thumper> Sou: you might want to consider asking the question on our discourse (link in topic)
[04:25] <thumper> https://discourse.jujucharms.com
[04:26] <Sou> Ohh. Lemme elaborate what I meant. I had set up a vault unit (number of containers: 3). For vault HA to work, I had to add etcd (count 3) and easyrsa (count 1). I had to reinstall the OS on the host machine which was running the easyrsa container. But that broke the etcd cluster. It was throwing a bad tls error. So I removed all etcd units, and re-added them. But the
[04:26] <Sou> vault configuration file (which uses etcd) has old details about etcd. So I was finding out if there is a way to regenerate the configuration files of a container via juju. Another option is to edit it manually. But then I am not sure if juju will create any issues
[04:28] <thumper> Sou: if vault didn't update the config for the new etcd it smells like a bug in the vault charm
[04:28] <Sou> ohh
[04:29] <thumper> To get the right eyes on it, either file a bug in launchpad against the vault charm, or ask in discourse and I can tag the openstack charmers
[04:29] <Sou> okay thanks
[04:29] <thumper> juju doesn't hold the config of apps
[04:29] <thumper> that is the responsibility of the charms themselves
[04:30] <thumper> have a good weekend folks
[04:30]  * thumper out
[04:31] <Sou> Ohh okay. Please correct me if I am understanding it wrong. Juju charms are used to set up the application in containers. And after that, do the charms still keep an eye on the changes being made?
[04:33] <Sou> also, does it mean we shouldn't manage anything inside the containers created by juju?
[04:43] <babbageclunk> Sou: yes, the charm also manages the running application and lets you configure it using juju commands
[04:44] <babbageclunk> Sou: In general, you shouldn't be changing things in the container directly because then the charm might be out of sync with what you've changed.
[04:48] <Sou> Okay. Thanks a lot babbageclunk. Wrt the openstack nova-compute charm, there are many nova-related configuration options which I can't see when I do a "juju config <app_name>"
[04:48] <Sou> Is there any way to add such configuration options to the containers which run the unit?
[04:50] <babbageclunk> Sou: it might be that some of those are managed by the charm in response to other units being related to the application.
[04:51] <babbageclunk> What kinds of options do you mean? (I'm not an openstack expert though)
[04:53] <Sou> $ sudo juju config nova-compute  | grep instance_name_template~$
[04:53] <Sou> My apologies for the typo
[04:53] <Sou> one variable name is instance_name_template
[04:55] <Sou> I can't modify that variable via charms
[04:56] <Sou> in a big setup, if I want to make sure such a variable is managed, I might have to integrate the containers (or units) created by juju with ansible or puppet
[04:56] <Sou> But then it would make the setup complex
[04:57] <Sou> Is that a suggested way of doing things?
[04:59] <babbageclunk> Sou: I don't think I understand what you're trying to do - you want to have Juju-created machines be managed with ansible or puppet? I'm not sure how that would work.
[05:02] <Sou> Managing juju-created machines with ansible or puppet came to mind when I was not able to manage a config parameter of an application via juju.
[05:02] <babbageclunk> Sou: I think people in #openstack-charmers would be able to help you with your instance_name_template question
[05:02] <Sou> Thanks @babbageclunk, I will post the same in that channel
[05:06] <babbageclunk> Sou: I think I see what you mean - use ansible to make post-deployment changes to a unit to tweak that setting? I think it would be better to change the charm to expose the config you need.
[05:08] <Sou> Yeah, I think making changes in charm will make things easier
[09:02] <nammn_de> stickupkid: currently looking into the bug where a user calls "juju /foo" and our code tries to create a fork and fails. You worked on the "similar" cmds last time. I could either return a proper error "file does not exist" or we could run your "similar" code again. What would you prefer?
[09:09] <stickupkid> don't remember what I did, any pointers?
[09:13] <nammn_de> https://github.com/juju/juju/blob/develop/cmd/juju/commands/plugin.go#L77
[09:13] <nammn_de> stickupkid:
[09:14] <nammn_de> stickupkid: ^ the code where, if a command is not found, you try to find the most similar command and return something like "foo does not exist, did you mean gui"
[09:15] <stickupkid> nammn_de, yeah, that imo, but best to ask rick_h
[09:16] <nammn_de> okay, makes sense. rick_h: ^ above, but I will update launchpad to have it written down
[09:32] <achilleasa> manadart: I have finished reviewing your bridge policy PR and will start the QA steps next
[09:32] <manadart> achilleasa: Great; ta.
[09:37] <stickupkid> achilleasa, thumper pointed out an issue with the introspection stuff https://github.com/juju/juju/pull/10682
[09:42] <achilleasa> stickupkid: I have a question (see comment)
[09:45] <stickupkid> achilleasa, responded
[09:51] <achilleasa> stickupkid: are you sure that the command is interpreted as 'xargs "CMD" > out' instead of 'xargs "CMD > out"'?
[09:51] <stickupkid> achilleasa, tested it locally :D
[09:52] <achilleasa> bash or zsh?
[09:52] <stickupkid> juju bootstrap lxd test
[09:52] <stickupkid> juju enable-ha
[09:52] <stickupkid> well "sh"
[09:52] <stickupkid> let me triple check
[09:54] <achilleasa> No, you are actually right: you have to explicitly quote the commands to get the redirect bit for each command
[09:54] <achilleasa> wait, let me double-check this :D
[09:56] <achilleasa> yes, it works as you expect. Sorry for the confusion
[09:59] <stickupkid> achilleasa, yeah don't worry, I also had to check
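The quoting question discussed above can be reproduced locally without juju; the file paths below are arbitrary examples, not anything from the PR:

```shell
# Unquoted redirect: the shell performs it once, for the xargs pipeline
# as a whole, so the output of every invocation lands in one file.
printf 'a\nb\n' | xargs -I{} echo {} > /tmp/xargs_whole.txt

# Redirect inside a quoted sh -c command: each invocation runs its own
# redirect, so the second write truncates the first and only "b" survives.
printf 'a\nb\n' | xargs -I{} sh -c 'echo "$1" > /tmp/xargs_each.txt' _ {}

cat /tmp/xargs_whole.txt   # a, then b
cat /tmp/xargs_each.txt    # b only
```

This matches the conclusion reached above: the redirect applies per command only when the whole command, redirect included, is quoted.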
[10:03] <stickupkid> nammn_de, updated per your comments https://github.com/juju/juju/pull/10675
[10:07] <nammn_de> stickupkid: ensure will create a model in the bootstrapped controller as well, if it does not exist, right?
[10:07] <stickupkid> sort of, it'll name the default model something else
[10:08] <stickupkid> i'll add that
[10:08] <nammn_de> stickupkid: 🦸‍♂️
[10:12] <stickupkid> nammn_de, done
[10:13] <nammn_de> stickupkid: approved
[10:14] <stickupkid> achilleasa, regarding the series stuff, you can't use head as that's for 2.7, I believe you'll need to make a 2.6 branch and add the new macOS version there
[10:15] <stickupkid> unless we back port what we did to 2.7 to 2.6, which i don't think is wise
[10:15] <achilleasa> stickupkid: I think 2.6.10 will be the last release and the sha is already out. We can fix it for 2.7 though...
[10:15] <stickupkid> achilleasa, sure sure
[10:16] <achilleasa> they did merge the PR btw
[10:16] <stickupkid> ah, that's fine then
[10:16] <achilleasa> yeah, I saw the response this morning. Yesterday I was thinking that they would say we cannot accept this as the tests don't pass :D
[10:17] <stickupkid> probably don't care for betas
[11:24] <rick_h> nammn_de: what's the link to the bug number again? I want to see the use case in the bug that caused folks to file it
[11:25] <nammn_de> rick_h: https://bugs.launchpad.net/juju/+bug/1747040
[11:25] <mup> Bug #1747040: Invoking juju with no verb but a path results in confusing error messages <bitesize> <cli> <ui> <juju:Triaged by nammn> <https://launchpad.net/bugs/1747040>
[11:25] <nammn_de> rick_h: added the PR for more description. Can always update the PR. Just open for discussion
[11:26] <rick_h> nammn_de:  that works for me, ty
[11:30] <nammn_de> If that's the case, I would love to get a review from someone. Pretty small one, fast to test. rick_h stickupkid https://github.com/juju/juju/pull/10683
[14:29] <manadart> I think we may have a problem here.
[14:30] <manadart> If we need to run an upgrade to a version that causes a break in the allwatcher/modelcache code without upgrade steps having been run, we get into a deadlock.
[14:31] <manadart> modelcache error-cycles getting a new watcher, API can't come up, machine agent can't connect to API. Upgrade does not run.
[18:35] <gQuigs> does anyone have any tricks for referencing the machine_name in a juju_run command?
[18:36]  * gQuigs wants to do juju run --all "command --batch <machine_name>"
[18:44] <pmatulis> gQuigs, i guess you would need to translate machine name to machine ID prior to 'juju run'
[18:46] <gQuigs> pmatulis: hoping to make it more like a one liner, so we don't have to give a script to customers..   command in question is sosreport :)
[18:48] <gQuigs> I guess I could just hope the machine name consistently has the id in it..
[18:51] <pmatulis> gQuigs, and i guess a "support charm" on all machines is too heavy, right?
[18:51] <pmatulis> but such a thing could be useful in other imaginative ways i suppose
[18:55] <fallenour> hey, when watching juju status, what color settings should I use in order to ensure the colors stay the same with watch as they do with juju status?
[18:55] <fallenour> it all comes back grey when I do: watch -c color=auto juju status
[19:00] <gQuigs> pmatulis: yea
[19:14] <pmatulis> gQuigs, then you could have various actions ('sos-all', 'sos-maas', 'misc-support')
[19:14] <gQuigs> sos already determines what plugins to run automatically :)
[19:14] <pmatulis> bah :)
[19:14] <gQuigs> I think I'll just use run and ask them to provide a list of names and machine ids
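One way to avoid handing customers a script, sketched under assumptions: juju run executes its command through a shell on each target machine, so single-quoting the command defers `$(hostname)` expansion to the remote side. The sosreport flags shown are illustrative, not verified.

```shell
# Hypothetical one-liner (not executed here): $(hostname) stays literal
# inside single quotes locally and expands on each remote machine.
#   juju run --all 'sosreport --batch --name "$(hostname)"'

# The deferred-expansion principle, demonstrated locally:
deferred='echo name=$(hostname)'   # $( ) is still literal in this string
sh -c "$deferred"                  # expands only when the command runs
```

This only helps if the hostname is the "machine name" wanted; if not, a lookup table of names to machine IDs, as suggested above, is the fallback.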
[19:47] <Fallenour> hey guys, I keep getting this message from juju: failed to start machine 1/lxd/3 (acquiring LXD image: no matching image found), retrying in 10s (10 more attempts)
[19:47] <Fallenour> I'm using MAAS, and I have all of the 18.04 LTS images downloaded. Does anyone have any idea what causes this issue? One machine already built out 3 lxd containers, so I don't know why it's giving this error.
[19:49] <Fallenour> I'm currently using juju version 2.6.9
[20:02] <Fallenour> I found the issue. It's a DNS error with juju. Where can I go or what can I do to fix it... I think I fixed it.
[20:03] <Fallenour> I updated the MAAS DNS addresses, and it ... nope, not fixed. How do I update juju DNS info for lxd containers?
[20:17] <Fallenour> WARNING juju.provisioner incomplete DNS config found, discovering host's DNS config
[20:17] <Fallenour>   is the error I keep seeing in juju debug-log. I keep finding a lot of complaints about this, but no solution. Does anyone have an idea about a work around?
[21:31] <Fallenour> Does conjure-up allow me to manage the systems built via juju, or do I need to manage those somewhere else?
[21:50] <davecore> Fallenour: Once you've deployed using conjure-up, the rest of the management is done with juju