[00:13] 'juju resolved' command runs the hook if there are any errors
[00:13] Is there a way to run all the hooks in a charm from the beginning even if there are no errors?
[00:33] anyone able to get helm to work with CDK?
[06:19] kwmonroe: The apt layer only knows about things you install via its API or layer.yaml. It does not know about any other packages (or you would end up with a few thousand unwanted states for all the packages in the base system)
[06:20] ah, yeah, makes sense stub, thx!
[06:22] kwmonroe, icey: It would be possible to make it set the states for a declared set of packages without automatically installing them, but I haven't seen a real use case for adding that extra complexity.
[08:22] Good morning Juju world!
=== frankban|afk is now known as frankban
[09:14] ryebot: received, do you think it's a k8s bug or a Juju one? just to know where I should file the bug entry :)
[09:33] ryebot: what is weird is that if the key changed, why are 90% of the other requests OK :x
[10:21] ryebot: I'm beginning to wonder if putting the EasyRSA charm on the same machine as the kube-api-loadbalancer could be a bad idea and related to this problem
[11:17] asdasdad
[11:17] .~.
[11:19] that's the most insightful thing you've said in a long time, admcleod
[11:20] magicaltrout: asdasd!
[11:21] thanks!
[11:33] magicaltrout: asdasdasd?
[11:34] ¡sᴉɥʇ sɐ lnɟǝsn sɐ ʎlɹɐǝu
[11:37] so clever
[13:15] stub: it wasn't really about the specific state name, the question was about mixing bash and Python layers in a built charm (which worked beautifully!)
=== mskalka|afk is now known as mskalka
[14:56] Zic: I think we can start it out as a k8s bug for now; we can redirect it if it turns out to be Juju. What are your thoughts on colocating easyrsa and loadbalancer?
[15:43] ryebot: I placed the easyrsa charm and the kube-api-loadbalancer on the same machine (by default, they are split onto two machines)
[15:44] ryebot: is it a bad idea?
[15:46] Zic: I'm not sure. I'm unaware of potential conflicts there.
[15:46] ryebot: because I think that's the only part of CDK that I customized (apart from the scaling)
[15:46] and I'm beginning to think that may be why you can't reproduce it in your lab
[15:47] (regarding my random "rsa error" problem)
[15:47] Zic: it may be worth investigating
[15:48] and as I'm reinstalling the whole cluster currently and have resources to pop one more VM... maybe I will go with the easyrsa charm completely separated :)
[15:48] Zic: Okay, cool. Can you let me know if that solves the problem? I'll be around :)
[15:49] even if it changes nothing, at least I'm sticking closer to your defaults
[16:16] hello. I have 2 questions. 1. https://bugs.launchpad.net/juju-core/+bug/1613992 says fix released, but juju 1.25.8 is not available in trusty. Any idea when it will be? 2. Is it a supported config to have juju 1.25.6 managing 1.25.8 machines?
[16:16] Bug #1613992: 1.25.6 "ERROR juju.worker.uniter.filter filter.go:137 tomb: dying"
[16:23] hi jacekn
[16:36] petevg: https://github.com/juju-solutions/matrix/pull/71 is merged. Thanks
[16:37] cory_fu: awesome, thank you. You have the wrong link to your PR on your ticket, but I have it open anyway :-)
[16:37] Oh, carp. Because it cut off the last digit.
[16:38] Fixed. Thanks
[16:38] np
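For the [00:13] question about replaying hooks when nothing has failed: there does not appear to be a built-in "run every hook again" command, but individual hooks can be invoked by hand via "juju run --unit", which executes a command in the unit's hook context. A minimal sketch, assuming a Juju 2.x client; the unit name and hook name are placeholders, and the relative hooks/ path assumes the command runs from the unit's charm directory:

    # Placeholders: myapp/0 and config-changed. If the working directory is not
    # the charm directory, use the absolute path
    # /var/lib/juju/agents/unit-myapp-0/charm/hooks/config-changed instead.
    # Re-running hooks by hand can confuse charm state, so treat this as a
    # debugging aid rather than a supported workflow.
    juju run --unit myapp/0 'hooks/config-changed'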
[16:42] kwmonroe: Can you weigh in on https://github.com/juju/juju-crashdump/pull/3
[16:44] lutostag: Your input on ^ also welcome. The motivation for this is that charms that use a venv or resources end up with very large crashdumps. I started to suggest that /var/lib/juju be excluded by default, but I think petevg's right that it's useful info when running manually.
[16:45] Spaulding: IDK if you were going to reply, but I filed a bug: https://bugs.launchpad.net/juju-core/+bug/1661681
[16:45] Bug #1661681: Broken agent complaints about tomb: dying
[17:03] lutostag, petevg, kwmonroe, kjackal: I wasn't aware of the 5MB file limit in crashdump. Maybe it makes more sense to just blacklist the .venv directory instead of adding a -s flag?
[17:05] cory_fu: or do both: -s when you *know* you'll never need /var/lib/juju (are there such occasions?), and tar --exclude .venv
[17:07] kwmonroe: I think petevg's point that verifying the charm code and unitdata state could be useful in debugging is a good one. If we exclude .venv and the dump is a reasonable size, why not include it?
[17:07] cory_fu, kwmonroe, lutostag: another approach would be to only run crashdump when there has been a test failure, and then be more relaxed about the size of the resultant tarball.
[17:07] petevg: I think only getting a dump on failure is probably best practice in general
[17:08] +1 ^
[17:08] petevg: and I think a small flag is a good idea and should be extended, so I'll +1 this PR too; it makes sense to have in general, but we will probably need a better heuristic than just leaving out the juju data in the long run
[17:09] Also +1 to that, but I think we should still consider excluding the venv. At most, we'd want the output of `pip freeze`
[17:09] Ok, that's a reasonable point, lutostag
[17:10] Alright. I think we're all +1 on this PR, then, and on changing the charm to only capture a crashdump on failure.
[17:12] Cool. Thx, everyone.
[17:12] (just means my sos-report inclusion, https://github.com/lutostag/plugins/tree/crashdumps/sosreport, is something I should probably upstream now, and change the flags :)
[17:16] cory_fu: merged your PR.
[17:17] petevg: Thanks
[17:17] np
[17:20] stokachu: I implemented the convention we discussed in Cape Town for enabling bundles to generate end-to-end work to verify the deployment, if you would take a look: https://github.com/juju-solutions/matrix/pull/75
[17:20] See if that convention would work in conjure-up as well
[17:22] cory_fu, I think that would work; we just define end_to_end and put whatever tests we need in there
[17:23] cory_fu, just some additional documentation around using this would be good
[17:23] stokachu: Note that the convention we need for matrix is for the bundle-provided end_to_end function or script to run forever, continually generating some sort of load. Only if it returns / exits is it considered "failed." That ok with you?
[17:23] stokachu: +1 on docs. Not sure where the best place for them would be
[17:23] cory_fu, yea that works for me
=== frankban is now known as frankban|afk
[18:00] stokachu, kwmonroe, petevg: https://github.com/juju-solutions/matrix/pull/76 for README docs about end-to-end and the other ways to extend Matrix
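A hypothetical end_to_end script following the convention described at [17:23]: it keeps generating load indefinitely, and exiting at all is what matrix treats as a failure. The endpoint, interval, and use of curl are placeholders rather than anything matrix itself prescribes:

    #!/bin/bash
    # Hypothetical end-to-end load generator; the URL is a stand-in for
    # whatever the deployed bundle actually exposes.
    ENDPOINT="${ENDPOINT:-http://10.0.0.10/healthz}"
    while true; do
        # Any failed request exits the script, which matrix interprets as a
        # failed end-to-end test; otherwise it loops forever generating load.
        curl --fail --silent --show-error "$ENDPOINT" > /dev/null || exit 1
        sleep 5
    done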
[19:01] if you deploy a subordinate charm and say the parent charm is in 'blocked' state... will the subordinate charm begin deployment?
[19:02] I am not seeing the install hook of the subordinate charm getting hit
[19:22] cory_fu, cool ui :D
[21:10] I am not able to add a juju model
[21:10] ubuntu@juju-api-client:~$ juju add-model default
ERROR failed to create new model: model "default" for admin@local already exists (already exists)
[21:10] ubuntu@juju-api-client:~$ juju list-models
CONTROLLER: juju-controller
MODEL        OWNER        STATUS     MACHINES  CORES  ACCESS  LAST CONNECTION
controller*  admin@local  available  1         4      admin   just now
=== siva is now known as Guest80176
[21:11] No "default" model exists
[21:11] Any help is much appreciated
[21:27] Guest80176: You could try running "juju destroy-model default" again, to see if that cleans things up, but I have a feeling that it won't.
[21:27] Guest80176: I'd be tempted to destroy the controller, since it looks like a local controller with no models, and re-bootstrap.
[21:27] @petevg, yes, it does not clean up; I already tried that
[21:28] @petevg, I don't want to do that operation as it is destructive for me
[21:28] is there a bug?
[21:29] Guest80176: None that I know of. The next thing I was going to do was ask you to file a bug against launchpad.net/juju
[21:29] @petevg, for now I created another model and am proceeding
[21:29] Is there a way to manually clean this up?
[21:30] I meant going into the file system and removing certain entries
[21:30] Guest80176: creating another model makes sense. As for manual cleanup, that would depend on what broke, and I'm afraid that I'm not expert enough in the juju internals to know where to begin looking. :-/
[21:31] (My guess is that you have a stray entry somewhere in mongodb, though I haven't seen the model just not show up when you call list-models.)
[21:31] Anyone else have any ideas?
[21:39] @petevg, is there any log file I can take a look at to find the cause of this issue?
[21:41] Guest80176: you can run "juju debug-log --replay > somefile.log" and take a look at that. It may or may not have useful info, though.
[21:43] Guest80176: switch to the "controller" model before running it.
[21:43] (The controller lives in a juju model just like everything else, so you can use a lot of the standard juju debugging tools on it.)
=== frankban is now known as frankban|afk
[23:13] @petevg, I see the following error in the juju logs
[23:13] machine-0: 23:10:56 ERROR juju.worker.dependency "mgo-txn-resumer" manifold worker returned unexpected error: cannot resume transactions: cannot find document {settings 1df4e638-3914-45ef-821c-317041a73aec:r#12#peer#neutron-contrail/0} for applying transaction 5894ef0f13a69524b49d0710_5cd72db0
[23:15] Guest80176: that sounds like a mongodb error. Please file a bug at https://bugs.launchpad.net/juju -- that's the best way to get in front of someone who might be able to offer a workaround (or a fix).
[23:24] @petevg, do you need the full log?
[23:25] Guest80176: attaching the full log definitely won't do any harm :-)
[23:27] @petevg, I meant for you to take a look at it now?
[23:30] Guest80176: right now, I'm afraid I'm running around packing for Config Management Camp (I'm typing this on my phone). I think it's safe to assume that you've run into a genuine bug, though, so filing it is probably the next best step.
[23:30] I don't know the mongodb bits well enough to walk you through fixing the error, if that's where it is.
[23:35] @petevg, OK. Sounds good
[23:35] Thanks
[23:35] You're welcome!
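Putting petevg's [21:41]/[21:43] suggestion together as a short, hedged sequence for collecting the controller's own logs and spotting the error quoted at [23:13] (the output filename and grep pattern are just illustrative):

    # The controller lives in its own model, so switch to it first.
    juju switch controller
    # Replay everything the controller has logged into a local file.
    juju debug-log --replay > controller.log
    # Look for the transaction-resumer error reported above.
    grep -i "mgo-txn-resumer" controller.log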