[00:00] <timClicks> is it possible to delete a charm from the charm store? https://discourse.jujucharms.com/t/-/2682
[00:48] <rick_h> timClicks:  you have to go through an RT to IS
[00:48] <timClicks> that's what I thought
[00:48] <timClicks> thanks for confirming
[00:50] <rick_h> timClicks:  replied
[01:02] <evhan> Does juju have any concept of operation timeouts? e.g. when I deploy a change and it takes longer than ${SOME_CONFIGURABLE_DURATION}, the app/unit/whatever is put into error status automatically?
[01:02] <evhan> Apart from bootstrap and actions, I mean. More a general one for charm operations.
[01:21] <timClicks> hpidcock: hey, btw, is the work on aborting actions finished?
[01:22] <hpidcock> timClicks: no, the bulk of it is done; just the last pieces will be finished this week
[01:22] <timClicks> all good (just writing that doc)
[02:43] <thumper> evhan: no
[02:44] <thumper> evhan: when the charm can do entirely arbitrary things in hooks, it is virtually impossible to come up with any sane default
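Since Juju itself won't time a hook out, a charm that wants a limit has to enforce one itself. A minimal sketch of that idea, assuming a hypothetical long-running setup script and an arbitrary 600-second cap:

```python
# Sketch: a charm hook enforcing its own timeout, since Juju applies none.
# scripts/do_expensive_setup.sh and the 600s limit are hypothetical.
import subprocess

from charmhelpers.core.hookenv import status_set

try:
    subprocess.run(
        ["scripts/do_expensive_setup.sh"],
        check=True,
        timeout=600,  # give up rather than let the hook hang indefinitely
    )
except subprocess.TimeoutExpired:
    status_set("blocked", "setup timed out after 600s")
    raise  # an unhandled exception puts the unit into error state
```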
[04:04] <thumper> A few PRs if people are bored...
[04:04] <thumper> https://github.com/juju/juju/pull/11244
[04:04] <thumper> https://github.com/juju/juju/pull/11197
[04:21] <tlm> lgtm thumper, added one comment to the second PR
[04:22] <thumper> tlm: thanks
[04:22] <anastasiamac> wow tlm is bored already?
[04:23] <tlm> it was a trap
[04:23] <anastasiamac> :)
[04:23] <tlm> I never get bored. I just break something
[04:24] <thumper> tlm: the constant is Superuser, but yes, I agree and will update
[04:30] <thumper> wow... make format is a mistake
[04:30] <thumper> it's spending all its time going through the vendor directory
[10:59] <achilleasa> stickupkid: is it possible to specify an lxd profile when bootstrapping? I am trying to limit the disk assigned to the containers
[10:59] <achilleasa> (and having "disk" in a charm lxd profile fails validation)
[10:59] <stickupkid> achilleasa, modify your default or juju-default
[11:00] <stickupkid> achilleasa, those are the profile names
[11:00] <stickupkid> achilleasa, but the answer to your original question: nope
[11:01] <achilleasa> that won't work on CI where the profiles are shared... hmmm I need to find another way...
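For reference, the host-level change stickupkid suggests can be made with the lxc CLI; a sketch of that approach, wrapped in Python for consistency with the other examples here. The profile name and 20GB figure are illustrative, and the exact argument form of `lxc profile device set` varies between LXD releases:

```python
# Sketch: cap container root-disk size via the host's LXD profile, since
# a "disk" device in a charm lxd-profile fails Juju's validation.
# Profile name and size are illustrative; check your LXD version's syntax.
import subprocess

subprocess.run(
    ["lxc", "profile", "device", "set", "juju-default", "root", "size", "20GB"],
    check=True,
)
```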
[11:29] <stickupkid> achilleasa, https://github.com/juju/juju/pull/11245
[11:36] <achilleasa> stickupkid: '-t' is for tee?
[11:36] <stickupkid> -t for test
[11:37] <stickupkid> achilleasa, we can't use tee: we're running sh, not bash, so we can't pipe and still catch failures
[11:38] <achilleasa> stickupkid: note that you have a static analysis failure for copyright ;-)
[11:39] <stickupkid> achilleasa, argh, no tput in GitHub
[11:40] <stickupkid> achilleasa, me https://live.staticflickr.com/3438/4593531893_f67a757fa1_n.jpg
[12:39] <stickupkid> achilleasa, fixed, turns out TERM isn't set in GitHub, and I can't find what to use instead
[12:39] <stickupkid> achilleasa, I'd rather use tput etc. than some weird echo setup... ah well, I'll fix that another day
[12:40] <achilleasa> stickupkid: what if you export TERM=xterm?
[12:40] <stickupkid> it hates me
[12:41] <stickupkid> I'm sure I tried that, but I'll give it another go
[12:41] <rick_h> stickupkid:  ahhhh, don't feel so unloved :P
[12:41] <stickupkid> hahaha
[12:41] <rick_h> and morning party folks
[12:42] <stickupkid> forcing tty might be an option
[12:42] <achilleasa> stickupkid: so my workaround for limiting disk space for logs is, stop jujud-*, mount tmpfs as /var/log/juju and restart jujud... let's see if I can get the acceptance test to run faster :D
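A sketch of that workaround under the assumptions achilleasa states (systemd-managed jujud agents; the tmpfs size is purely illustrative):

```python
# Sketch of the workaround above: bound the disk Juju logs can consume by
# backing /var/log/juju with a small tmpfs. Logs are lost on unmount or
# reboot, which is acceptable for throwaway CI runs.
import subprocess

def cap_juju_log_disk(size="256m"):
    subprocess.run("sudo systemctl stop 'jujud-*'", shell=True, check=True)
    subprocess.run(
        ["sudo", "mount", "-t", "tmpfs", "-o", f"size={size}",
         "tmpfs", "/var/log/juju"],
        check=True,
    )
    subprocess.run("sudo systemctl start 'jujud-*'", shell=True, check=True)
```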
[12:45] <rick_h> achilleasa:  cheater lol
[13:18] <stickupkid> achilleasa, I feel sick and amazed at the same time
[13:47] <skay> help. I wrote a charm for an app, and up until now I've only deployed the app with one unit. I tried deploying it with two units and it can't handle the db connection properly.
[13:48] <skay> one of the two units will have the expected state for a bit, then they both eventually settle into a state where they report the db not being connected
[13:48] <skay> Here's the code that checks for connections and sets a connected state, https://paste.ubuntu.com/p/PvGsNVPN7K/
[13:49] <skay> could someone help by reviewing that?
[13:49] <skay> note, I wrote that code 2 years ago
[14:10] <skay> I went ahead and made a post for it https://discourse.jujucharms.com/t/my-charm-cannot-handle-a-db-relation-when-it-is-deployed-to-multiple-units/2685
[14:11] <stickupkid> skay, best way to get eyes on it +1
[14:49] <rick_h> skay:  yea, best thing would be to look at something like it
[14:49] <skay> rick_h: any recommendations?
[14:49] <rick_h> skay:  sec, thinking... first thought was keystone, but that might be a big beast to jump into
[14:49] <skay> (I'm looking through noisy logs right now)
[14:51] <skay> how often do unit logs get rotated?
[14:52] <rick_h> X days or X MB, but I can't recall the numbers off the top of my head
[14:55] <skay> I wonder if that workaround I have in there to use db.master.available instead of db.available is the problem
[14:55] <skay> I haven't worked on the code in 2-odd years, and I have a link to a thread on the forums from back then
[14:56] <skay> (thank goodness I left comments and have a readme file with stuff in it.)
[14:56] <rick_h> comments ftw
[14:56] <rick_h> skay:  what db are you talking to?
[14:58] <skay> rick_h: my memory of the postgresql charm is very foggy, but when it first joins, it creates a database according to the config settings for the database name and role?
[14:58] <skay> rick_h: so, that one. and then my charm figures out it is connected and sets up django and runs migrations.
[14:58] <rick_h> skay:  no, I thought it created a new db, user, and password and sends it back on the relation data
[14:58] <skay> by any chance do you know a good django charm I could look at?
[14:58] <rick_h> skay:  so it can be used more than once (one db serves many applications)
[14:59] <skay> rick_h: ah, so, it uses the new db, user, and password for the app I've related it to
[14:59] <skay> that's what I meant
[14:59] <rick_h> skay:  right, and then I think (and here's where it's just what I think) you'd use your charm to pass that info into any units that needed it as peer relation data
[15:00] <rick_h> skay:  but maybe not, you'd just get the same relation data on each unit
[15:00] <skay> rick_h: brb, short standup
[15:00] <rick_h> rgr
[15:01] <rick_h> skay:  https://jaas.ai/search?requires=pgsql (I'd look at landscape, mailman, maybe vault)
[15:22] <skay> rick_h: I'll take a look
[15:23] <skay> rick_h: I was assuming that each unit for the same app would just be able to get the same relation data
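For context, the pattern under discussion looks roughly like this in a reactive charm: every unit of the related application sees the same credentials the postgresql charm publishes, so nothing extra has to travel over a peer relation. The `render_db_config` helper and the `myapp.*` flag name are hypothetical:

```python
# Sketch: consuming the db/user/password the postgresql charm sends back
# on the relation. Every unit receives the same relation data.
from charms.reactive import set_flag, when


@when('db.master.available')
def db_connected(pgsql):
    conn = pgsql.master  # connection details: host, dbname, user, password
    render_db_config(conn)  # hypothetical helper writing the app settings
    set_flag('myapp.db.configured')
```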
[17:03] <skay> rick_h: the mailman3-web-charm is pretty readable. It checks for leadership in a few instances, e.g. before running Django migrations; in other cases it does not. One big difference is that it does not use hooks except in one case, upgrade-charm.
[17:03] <skay> rick_h: I'm assuming that with multiple units, the only unit-specific code that would run would be when checking for leadership
[17:04] <skay> I'm thinking that my unit may not be reporting state accurately, and that I should rethink how I'm setting/unsetting flags and reporting status
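A sketch of the leadership gate skay describes, so that only one unit runs migrations against the shared schema; the flag and helper names are again hypothetical:

```python
# Sketch: gate one-time work (e.g. Django migrations) on leadership, as
# the mailman3-web charm does before touching the database.
from charmhelpers.core.hookenv import is_leader
from charms.reactive import set_flag, when, when_not


@when('db.master.available')
@when_not('myapp.migrated')
def run_migrations(pgsql):
    if not is_leader():
        return  # only the leader touches the shared database schema
    django_migrate()  # hypothetical wrapper around `manage.py migrate`
    set_flag('myapp.migrated')
```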
[17:11] <skay> what's the recommended practice right now? use reactive states rather than hooks?
[17:25] <dvntstph> howdy do... feel like such a noob, been years since I've used irc
[17:27] <rick_h> dvntstph:  howdy
[20:48] <skay> I have a question about the postgresql charm. I deployed it to an environment without altering the default backup_dir.
[20:48] <skay> and then, when I realized that storage didn't point to the parent directory of that, I changed the config to point to a location in storage
[20:49] <skay> my question is, should I have to do anything other than change the config? if not, I have to report that it didn't work. backups are still going to the old directory
[20:51] <rick_h> skay:  hmm, you can look at "submit a bug" in https://jaas.ai/postgresql and see if there's something to it
[20:55] <skay> rick_h: https://bugs.launchpad.net/postgresql-charm/+bug/1864549
[20:55] <mup> Bug #1864549: changing backup_dir config did not result in backups going to the new value <PostgreSQL Charm:New> <https://launchpad.net/bugs/1864549>
[21:50] <babbageclunk> quick review for that juju/utils/tar missing dir bug? https://github.com/juju/utils/pull/309
[21:58] <anastasiamac> babbageclunk: already done?
[21:58] <babbageclunk> anastasiamac: thanks!
[21:58] <anastasiamac> no worries, I did it when you proposed it ;D
[21:59] <babbageclunk> hmm, looks like no mergebot watching juju/utils... adding
[21:59] <anastasiamac> thnx!
[22:01] <hml> babbageclunk: there should be jobs for that… I remember the pain of adding them.  :-)
[22:04] <babbageclunk> hml: hmm, you're right! Why aren't they working then? <digs>
[22:04] <hml> babbageclunk: that, i’m not sure of.
[22:12] <babbageclunk> maybe cred rolling? seems unlikely though - would have been noticed yesterday
[22:13] <babbageclunk> setup looks the same as on juju-restore ones that I know were working last week
[22:17] <hml> babbageclunk: looks like https://jenkins.juju.canonical.com/view/github/job/github-check-merge-juju-utils/ is running, did you nudge something?
[22:18] <babbageclunk> hml: yeah, I fired that off - not sure what it's going to try to build, I expected it to ask for some parameters
[22:19] <hml> babbageclunk: build 13 is me, I aborted it. I expected it to ask for parameters, not to start.  :-D
[22:20] <babbageclunk> hah sounds like we're both trying that
[22:20] <hml> babbageclunk: i need to run away, so i’ll let you have all the fun
[22:21] <babbageclunk> hml: thanks! ;) catch you tomorrow
[22:40] <tlm> do we have an example util function for tests that can check the type of an error, before I make something?
[22:42] <anastasiamac> tlm: I'm not sure what you mean... we have something like c.Assert(err, jc.Satisfies, os.IsNotExist)
[22:42] <anastasiamac> tlm: is that what you're after?
[22:43] <tlm> it is, cheers anastasiamac
[22:43] <anastasiamac> \o/
[23:29] <kelvinliu> wallyworld: got this PR to upgrade podspec to v3, could you take a look? thanks https://github.com/juju/juju/pull/11240
[23:43] <wallyworld> sure
[23:56] <wallyworld> kelvinliu: +1. I think we should rename the envConfig etc. in this branch as well