[00:00] is it possible to delete a charm from the charm store? https://discourse.jujucharms.com/t/-/2682
[00:48] timClicks: have to go through an RT to IS
[00:48] that's what I thought
[00:48] thanks for confirming
[00:50] timClicks: replied
[01:02] Does juju have any concept of operation timeouts? e.g. when I deploy a change and it takes longer than ${SOME_CONFIGURABLE_DURATION}, the app/unit/whatever is put into error status automatically?
[01:02] Apart from bootstrap and actions, I mean. More a general one for charm operations.
[01:21] hpidcock: hey btw, is the work for aborting an action finished?
[01:22] timClicks: no, the bulk of it has been, just the last pieces will be finished this week
[01:22] all good (just writing that doc)
[02:43] evhan: no
[02:44] evhan: when the charm can do entirely arbitrary things in hooks, it is virtually impossible to come up with any sane default
[04:04] A few PRs if people are bored...
[04:04] https://github.com/juju/juju/pull/11244
[04:04] https://github.com/juju/juju/pull/11197
[04:21] lgtm thumper, added one comment to the second PR
[04:22] tlm: thanks
[04:22] wow tlm is bored already?
[04:23] it was a trap
[04:23] :)
[04:23] I never get bored. Just break something
[04:24] tlm: the constant is Superuser, but yes, I agree and will update
[04:30] wow... make format is a mistake
[04:30] spending all its time going through the vendor directory
[10:59] stickupkid: is it possible to specify an lxd profile when bootstrapping? I am trying to limit the disk assigned to the containers
[10:59] (and having "disk" in a charm lxd profile fails validation)
[10:59] achilleasa, modify your default or juju-default
[11:00] achilleasa, those are the profile names
[11:00] achilleasa, but the answer to your original question is nope
[11:01] that won't work on CI where the profiles are shared... hmmm I need to find another way...
[11:29] achilleasa, https://github.com/juju/juju/pull/11245
[11:36] stickupkid: '-t' is for tee?
[11:36] -t for test
[11:37] achilleasa, we can't use tee, as we're shell and not bash and we can't do piping
[11:37] with failures
[11:38] stickupkid: note that you have a static analysis failure for copyright ;-)
[11:39] achilleasa, argh, no tput in github
[11:40] achilleasa, me https://live.staticflickr.com/3438/4593531893_f67a757fa1_n.jpg
[12:39] achilleasa, fixed, turns out TERM isn't set in github, and I can't find what to use instead
[12:39] achilleasa, I'd rather use tput etc rather than some weird echo setup... ah well, i'll fix that another day
[12:40] stickupkid: what if you export TERM=xterm?
[12:40] it hates me
[12:41] I'm sure I tried that, but I'll give it another go
[12:41] stickupkid: ahhhh, don't feel so unloved :P
[12:41] hahaha
[12:41] and morning party folks
[12:42] forcing tty might be an option
[12:42] stickupkid: so my workaround for limiting disk space for logs is, stop jujud-*, mount tmpfs as /var/log/juju and restart jujud... let's see if I can get the acceptance test to run faster :D
[12:45] achilleasa: cheater lol
[13:18] achilleasa, i feel sick and amazed at the same time
[13:47] help. I wrote a charm for an app, and up until now I've only deployed the app with one unit. I tried deploying it with two units and it can't handle the db connection properly.
[13:48] one of the two units will have the expected state for a bit, then they both eventually settle into a state where they report the db not being connected
[13:48] Here's the code that checks for connections and sets a connected state, https://paste.ubuntu.com/p/PvGsNVPN7K/
[13:49] could someone help by reviewing that?
[13:49] note, I wrote that code 2 years ago
[14:10] I went ahead and made a post for it https://discourse.jujucharms.com/t/my-charm-cannot-handle-a-db-relation-when-it-is-deployed-to-multiple-units/2685
[14:11] skay, best way to get eyes on it +1
=== narindergupta is now known as narinderguptamac
[14:49] skay: yea, best thing would be to look at something like it
[14:49] rick_h: any recommendations?
[14:49] skay: sec thinking...first thought was keystone but that might be a big beast to jump into
[14:49] (I'm looking through noisy logs right now)
[14:51] how often do unit logs get rotated?
[14:52] X days or Xmb but can't recall the numbers off the top of my head
[14:55] I wonder if that workaround I have in there to use db.master.available instead of db.available is the problem
[14:55] I haven't worked on the code in 2 odd years and I have a link to a thread on the forums from back then
[14:56] (thank goodness I left comments and have a readme file with stuff in it.)
[14:56] comments ftw
[14:56] skay: what db are you talking to?
[14:58] rick_h: my memory of the postgresql charm is very foggy, but when it first joins, it creates a database according to the config settings for the database name and role?
[14:58] rick_h: so, that one. and then my charm figures out it is connected and sets up django and runs migrations.
[14:58] skay: no, I thought it created a new db, user, and password and sends it back on the relation data
[14:58] by any chance do you know a good django charm I could look at?
[14:58] skay: so it can be used more than once (one db serves many applications)
[14:59] rick_h: ah, so, it uses the new db, user, and password for the app I've related it to
[14:59] that's what I meant
[14:59] skay: right, and then I think (and here's where it's just what I think) you'd use your charm to pass that info into any units that needed it as peer relation data
[15:00] skay: but maybe not, you'd just get the same relation data on each unit
[15:00] rick_h: brb, short standup
[15:00] rgr
[15:01] skay: https://jaas.ai/search?requires=pgsql (I'd look at landscape, mailman, maybe vault)
[15:22] rick_h: I'll take a look
[15:23] rick_h: I was assuming that each unit for the same app would just be able to get the same relation data
=== Sean is now known as Guest61795
[17:03] rick_h: the mailman3-web-charm is pretty readable. It checks for leadership in a few instances - e.g. before running django migrations. in other cases it does not. One big difference is that it does not use hooks except in one case, upgrade-charm.
[17:03] rick_h: I'm assuming that with multiple units, the only unit-specific code that would run would be when checking for leadership
[17:04] I'm thinking that my unit may not be reporting state accurately, and that I should rethink how I'm setting/unsetting flags and reporting status
[17:11] what's the recommended practice right now? use reactive states rather than hooks?
[17:25] howdy do... feel like such a noob, been years since I've used irc
[17:27] dvntstph: howdy
[20:48] I have a question about the postgresql charm. I deployed it to an environment without altering the default backup_dir.
[20:48] and then, when I realized that storage didn't point to the parent directory of that, I changed the config to point to a location in storage
[20:49] my question is, should I have to do anything other than change the config? if not, I have to report that it didn't work.
backups are still going to the old directory
[20:51] skay: hmm, can look at the "submit a bug" in https://jaas.ai/postgresql and see if there's something to it
[20:55] rick_h: https://bugs.launchpad.net/postgresql-charm/+bug/1864549
[20:55] Bug #1864549: changing backup_dir config did not result in backups going to the new value
[21:50] quick review for that juju/utils/tar missing dir bug? https://github.com/juju/utils/pull/309
[21:58] babbageclunk: already done?
[21:58] anastasiamac: thanks!
[21:58] no worries i did it when u proposed ;D
[21:59] hmm, looks like no mergebot watching juju/utils... adding
[21:59] thnx!
[22:01] babbageclunk: there should be jobs for that… i remember the pain of adding them. :-)
[22:04] hml: hmm, you're right! Why aren't they working then?
[22:04] babbageclunk: that, i'm not sure of.
[22:12] maybe cred rolling? seems unlikely though - would have been noticed yesterday
[22:13] setup looks the same as on the juju-restore ones that I know were working last week
[22:17] babbageclunk: looks like https://jenkins.juju.canonical.com/view/github/job/github-check-merge-juju-utils/ is running, did you nudge something?
[22:18] hml: yeah, I fired that off - not sure what it's going to try to build, I expected it to ask for some parameters
[22:19] babbageclunk: 13 is me, i aborted it. expected it to ask for parameters, not to start. :-D
[22:20] hah sounds like we're both trying that
[22:20] babbageclunk: i need to run away, so i'll let you have all the fun
[22:21] hml: thanks! ;) catch you tomorrow
[22:40] do we have an example of a util function for tests that can check the type of an error before I make something?
[22:42] tlm: m not sure wot u mean... we have smth like c.Assert(err, jc.Satisfies, os.IsNotExit)
[22:42] tlm: is it wot u r after?
[22:42] Exist*
[22:43] it is, cheers anastasiamac
[22:43] \o/
[23:29] wallyworld: got this PR to upgrade podspec v3, could u take a look? thanks https://github.com/juju/juju/pull/11240
[23:43] sure
[23:56] kelvinliu: +1.
i think we should rename the envConfig etc in this branch as well