[00:13] <ss_juju> 'juju resolved' command runs the hook if there are any errors
[00:13] <ss_juju> Is there a way to run all the hooks in a charm from the beginning even if there are no errors?
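A minimal sketch of the `juju resolved` usage under discussion (the unit name "mysql/0" is hypothetical):

```shell
# Mark a unit's failed hook resolved; the hook is re-run as part of this
# (unit name "mysql/0" is hypothetical)
juju resolved mysql/0            # Juju 2.x re-runs the failed hook by default
# juju resolved --retry mysql/0  # Juju 1.x needs the --retry flag
```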
[00:33] <stormmore> anyone able to get helm to work with CDK?
[06:19] <stub> kwmonroe: The apt layer only knows about things you install via its API or layer.yaml. It does not know about any other packages (or you would end up with a few thousand unwanted states for all the packages in the base system)
[06:20] <kwmonroe> ah, yeah, makes sense stub, thx!
[06:22] <stub> kwmonroe, icey : It would be possible to make it set the states for a declared set of packages though without automatically installing them. But I haven't seen a real use case for adding that extra complexity.
[08:22] <kjackal> Good morning Juju world!
[09:14] <Zic> ryebot: received, do you think it's a k8s bug or a Juju one? just to know where I should file the bug entry :)
[09:33] <Zic> ryebot: what is weird is that if the key changed, why are 90% of the other requests OK :x
[10:21] <Zic> ryebot: I'm beginning to wonder if putting the EasyRSA charm on the same machine as kube-api-loadbalancer could be a bad idea in this problem
[11:17] <admcleod> asdasdad
[11:17] <admcleod> .~.
[11:19] <magicaltrout> that's the most insightful thing you've said in a long time admcleod
[11:20] <admcleod> magicaltrout: asdasd!
[11:21] <magicaltrout> thanks!
[11:33] <admcleod> magicaltrout: asdasdasd?
[11:34] <magicaltrout> ¡sᴉɥʇ sɐ lnɟǝsn sɐ ʎlɹɐǝu
[11:37] <admcleod> so clever
[13:15] <icey> stub:  it wasn't really about the specific state name, the question was about mixing bash and python layers in a built charm (which worked beautifully!)
[14:56] <ryebot> Zic: I think we can start it out as a k8s bug for now, we can redirect it if it turns out to be Juju. What are your thoughts on colocating easyrsa and loadbalancer?
[15:43] <Zic> ryebot: I placed the easyrsa charm and the kube-api-loadbalancer on the same machine (by default, they are split across two machines)
[15:44] <Zic> ryebot: is it a bad idea?
[15:46] <ryebot> Zic: I'm not sure. I'm unaware of potential conflicts there.
[15:46] <Zic> ryebot: because I think that's the only part that I customized (except the scaling) on CDK
[15:46] <Zic> and I'm beginning to think that's maybe what you can't reproduce in your lab
[15:47] <Zic> (regarding my random problem of "rsa error")
[15:47] <ryebot> Zic: it may be worth investigating
[15:48] <Zic> and as I'm reinstalling the whole cluster currently, and have resources to spin up one other VM... maybe I will go with the easyrsa charm completely separated :)
[15:48] <ryebot> Zic: Okay, cool. Can you let me know if that solves the problem? I'll be around :)
[15:49] <Zic> even if it changes nothing, at least I'm sticking more to your defaults
[16:16] <jacekn> hello. I have 2 questions. 1. https://bugs.launchpad.net/juju-core/+bug/1613992 says fix released but juju 1.25.8 is not available in trusty. Any idea when it will be? 2. Is it a supported config to have juju 1.25.6 managing 1.25.8 machines?
[16:16] <mup> Bug #1613992: 1.25.6 "ERROR juju.worker.uniter.filter filter.go:137 tomb: dying" <canonical-is> <cdo-qa-blocker> <landscape> <juju-core:Fix Released> <juju-core 1.25:Fix Released> <https://launchpad.net/bugs/1613992>
[16:23] <Spaulding> hi jacekn
[16:36] <cory_fu> petevg: https://github.com/juju-solutions/matrix/pull/71 is merged.  Thanks
[16:37] <petevg> cory_fu: awesome, thank you. You have the wrong link to your PR on your ticket, but I have it open, anyway :-)
[16:37] <cory_fu> Oh, carp.  Because it cut off the last digit
[16:38] <cory_fu> Fixed.  Thanks
[16:38] <petevg> np
[16:42] <cory_fu> kwmonroe: Can you weigh in on https://github.com/juju/juju-crashdump/pull/3
[16:44] <cory_fu> lutostag: Your input on ^ also welcome.  The motivation for this is that charms that use a venv or resources end up with very large crashdumps.  I started to suggest that /var/lib/juju be excluded by default, but I think petevg's right that it's useful info when running manually.
[16:45] <jacekn> Spaulding: IDK if you were going to reply but I filed a bug: https://bugs.launchpad.net/juju-core/+bug/1661681
[16:45] <mup> Bug #1661681: Broken agent complaints about tomb: dying <juju-core:New> <https://launchpad.net/bugs/1661681>
[17:03] <cory_fu> lutostag, petevg, kwmonroe, kjackal: I wasn't aware of the 5MB file limit in crashdump.   Maybe it makes more sense to just blacklist the .venv directory instead of adding a -s flag?
[17:05] <kwmonroe> cory_fu: or do both: -s when you *know* you'll never need /var/lib/juju (are there such occasions?), and tar --exclude .venv
[17:07] <cory_fu> kwmonroe: I think petevg's point that verifying the charm code and unitdata state could be useful in debugging is a good one.  If we exclude .venv and the dump is a reasonable size, why not include it?
[17:07] <petevg> cory_fu, kwmonroe, lutostag: another approach would be to only run crashdump when there has been a test failure, and then be more relaxed about the size of the resultant tarball.
[17:07] <lutostag> petevg: I think only getting a dump on failure is probably best practice in general
[17:08] <kwmonroe> +1 ^
[17:08] <lutostag> petevg: and I think a small flag is a good idea, and should be extended, so I'll +1 on this PR too, it makes sense to have in general, but we will probably need a better heuristic than just leaving out the juju data in the long run
[17:09] <cory_fu> Also +1 to that but I think we should still consider excluding the venv.  At most, we'd want the output of `pip freeze`
[17:09] <cory_fu> Ok, that's a reasonable point, lutostag
[17:10] <cory_fu> Alright.  I think we're all +1 on this PR, then, and changing the charm to only capture crashdump on failure.
[17:12] <petevg> Cool. Thx, everyone.
[17:12] <lutostag> (just means I should probably upstream my sos-report inclusion, https://github.com/lutostag/plugins/tree/crashdumps/sosreport, now, and change the flags :)
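A rough sketch of the venv exclusion discussed above, using a throwaway directory; the layout and file names are invented for the demo and are not juju-crashdump's actual logic:

```shell
# Build a crashdump-style tarball that skips the charm's virtualenv.
# The demo/charm layout below is illustrative only.
mkdir -p demo/charm/.venv demo/charm/hooks
echo "charm code" > demo/charm/hooks/install
echo "big wheel"  > demo/charm/.venv/wheel.whl

# --exclude='.venv' drops the venv directory (and everything under it)
tar --exclude='.venv' -czf crashdump.tar.gz -C demo charm

# In a real run, `pip freeze` output could be recorded instead of shipping the venv
tar -tzf crashdump.tar.gz
```

Listing the archive should show charm/hooks/install but nothing under .venv.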
[17:16] <petevg> cory_fu: merged your PR.
[17:17] <cory_fu> petevg: Thanks
[17:17] <petevg> np
[17:20] <cory_fu> stokachu: I implemented the convention we discussed in Cape Town for enabling bundles to generate end-to-end work to verify the deployment, if you would take a look: https://github.com/juju-solutions/matrix/pull/75
[17:20] <cory_fu> See if that convention would work in conjure-up as well
[17:22] <stokachu> cory_fu, i think that would work, we just define end_to_end and put whatever tests we need in there
[17:23] <stokachu> cory_fu, just some additional documentation around using this would be good
[17:23] <cory_fu> stokachu: Note that the convention we need for matrix is for the bundle-provided end_to_end function or script to run forever, continually generating some sort of load.  Only if it returns / exits is it considered "failed."  That ok with you?
[17:23] <cory_fu> stokachu: +1 on docs.  Not sure where the best place for them would be
[17:23] <stokachu> cory_fu, yea that works for me
[18:00] <cory_fu> stokachu, kwmonroe, petevg: https://github.com/juju-solutions/matrix/pull/76 for README docs about end-to-end and the other ways to extend Matrix
[19:01] <curious-george> if you deploy a subordinate charm and say the parent charm is in 'blocked' state.. will the subordinate charm begin deployment?
[19:02] <curious-george> I am not seeing the install hook of the subordinate charm getting hit
[19:22] <stokachu> cory_fu, cool ui :D
[21:10] <siva> I am not able to add a juju model
[21:10] <siva> ubuntu@juju-api-client:~$ juju add-model default
[21:10] <siva> ERROR failed to create new model: model "default" for admin@local already exists (already exists)
[21:10] <siva> ubuntu@juju-api-client:~$ juju list-models
[21:10] <siva> CONTROLLER: juju-controller
[21:10] <siva> MODEL        OWNER        STATUS     MACHINES  CORES  ACCESS  LAST CONNECTION
[21:10] <siva> controller*  admin@local  available         1      4  admin   just now
[21:11] <Guest80176> No default exists
[21:11] <Guest80176> Any help is much appreciated
[21:27] <petevg> Guest80176: You could try running "juju destroy-model default" again, to see if that cleans things up, but I have a feeling that it won't.
[21:27] <petevg> Guest80176: I'd be tempted to destroy the controller, since it looks like a local controller, with no models, and re-bootstrap.
[21:27] <Guest80176> @petevg, yes it does not clean up. I already tried that
[21:28] <Guest80176> @petevg, I don't want to do that operation as it is destructive for me
[21:28] <Guest80176> is there a bug?
[21:29] <petevg> Guest80176: None that I know of. The next thing that I was going to do was ask you to file a bug against launchpad.net/juju
[21:29] <Guest80176> @petevg, for now I created another model and proceeding
[21:29] <Guest80176> Is there a way to manually clean this up?
[21:30] <Guest80176> I meant going into the file system and removing certain entries
[21:30] <petevg> Guest80176: creating another model makes sense. As for manual cleanup, that would depend on what broke, and I'm afraid that I'm not expert enough in the juju internals to know where to begin to look. :-/
[21:31] <petevg> (My guess is that you have a stray entry somewhere in mongodb, though I haven't seen the model just not show up when you call list-models.)
[21:31] <petevg> Anyone else have any ideas?
[21:39] <Guest80176> @petevg, is there any log file I can take a look at to find the cause of this issue?
[21:41] <petevg> Guest80176: you can run "juju debug-log --replay > somefile.log", and take a look at that. It may or may not have useful info, though.
[21:43] <petevg> Guest80176: switch to the "controller" model before running it.
[21:43] <petevg> (The controller lives in a juju model just like everything else, so you can use a lot of the standard juju debugging tools on it.)
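The steps petevg describes amount to the following (the output file name is arbitrary):

```shell
# Switch to the controller model, then replay its entire log history to a file
juju switch controller
juju debug-log --replay > somefile.log
```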
[23:13] <Guest80176> @petevg, I see the following error in the juju logs
[23:13] <Guest80176> machine-0: 23:10:56 ERROR juju.worker.dependency "mgo-txn-resumer" manifold worker returned unexpected error: cannot resume transactions: cannot find document {settings 1df4e638-3914-45ef-821c-317041a73aec:r#12#peer#neutron-contrail/0} for applying transaction 5894ef0f13a69524b49d0710_5cd72db0
[23:15] <petevg> Guest80176: that sounds like a mongodb error. Please file a bug at https://bugs.launchpad.net/juju -- that's the best way to get in front of someone who might be able to offer a workaround (or a fix).
[23:24] <Guest80176> @petevg, do you need the full log?
[23:25] <petevg> Guest80176: attaching the full log definitely won't do any harm :-)
[23:27] <Guest80176> @petevg, I meant for you to take a look at it now?
[23:30] <petevg> Guest80176: right now, I am afraid that I'm running around packing for Config Management Camp (I'm typing this on my phone). I think that it's safe to assume that you've run into a genuine bug, though, so filing it is probably the next best step.
[23:30] <petevg> I don't know the mongodb bits well enough to walk you through fixing the error, if that's where it is.
[23:35] <Guest80176> @petevg, OK. Sounds good
[23:35] <Guest80176> Thanks
[23:35] <petevg> You're welcome!