[01:30] hpidcock: https://github.com/juju/juju/pull/11360
[01:39] babbageclunk: lgtm, ty
[01:39] wallyworld: thanks!
[02:45] wallyworld: oops sorry, was lying down with a headache
[02:45] all good, hope you're ok
[03:45] does anyone know why we start certificates 7 days before now and not 5 minutes in the past?
[03:45] * wallyworld shrugs
[03:46] ¯\_(ツ)_/¯
[06:36] wallyworld: these two PRs add storage support for non-stateful apps: https://github.com/juju/charm/pull/308 https://github.com/juju/juju/pull/11362 - could u take a look?
[07:44] Good Morning Juju!
[09:05] manadart, got this for CR https://github.com/juju/juju/pull/11355
[09:05] stickupkid: can you do a quick sanity check on a 2.7 -> dev merge PR https://github.com/juju/juju/pull/11363?
[09:05] stickupkid: Yep; will look.
[09:06] achilleasa, done
[09:14] stickupkid: https://github.com/juju/juju/pull/11364
[09:34] manadart, https://media.giphy.com/media/OCgTKYSVnf7iM/giphy.gif
[09:35] manadart, i have no idea what i was trying to say - "Subnet defines the nes" - nes what? nintendo nes? what was I doing?
[09:44] manadart, quick ho?
[09:44] stickupkid: OMW.
[10:23] jam: I have replied to your comments in 11341 and pushed two additional commits (the second deals with that odd c-tor for the relation resolvers); can you take another look?
[10:37] manadart, this looks wrong to me; I would expect this to just check for != 1, as you want to know why it wasn't removed https://github.com/juju/juju/blob/develop/api/spaces/spaces.go#L196-L198
[10:37] hi all
[10:37] It is me again
[10:37] :)
[10:38] stickupkid: Yes.
[10:38] flxfoo, hi o/
[10:39] So while trying to instantiate `juju add-machine` (with constraints - that works fine)... two things: first, juju returns `11 pending pending bionic - instance "31c020be-c4b9-440d-aaf0-d2c1b33ea218" has status BUILD, wait 10 seconds before retry, attempt 8`, and just loops until I do a `juju remove-machine X --force`
[10:40] so then after a little delay the list of instances is empty... on the rackspace side though, I have a list of 11 instances with the same name (different IDs) listed as building (90% at most); after some time the instances start to appear as Ready
[10:41] So I suspect something wrong with juju; maybe it does not receive the right "return", so it keeps calling for server instantiation...
[10:42] stickupkid: hi :)
[10:43] flxfoo, so this comes from https://github.com/juju/juju/blob/develop/provider/openstack/provider.go#L1240
[10:44] flxfoo, we're trying to provision a machine, but we're not getting back one that meets the constraints OR the provider doesn't have enough resources for the machine to be created
[10:45] flxfoo, I've seen this locally when testing with multipass+microstack and it was the latter case for me
[10:49] stickupkid: this is the only place right?
[10:49] flxfoo, let me check
[10:51] stickupkid: on the rackspace side (as I said) each instance has the same name but a different ID (which makes sense), but juju always returns the same ID to retry... why do I have several instances created on rackspace though?
[10:51] flxfoo, that's correct https://paste.ubuntu.com/p/pk76srRk37/
[10:52] flxfoo, that I don't know
[10:53] flxfoo, if you think you've got a reproducer then I would create a bug - https://bugs.launchpad.net/juju/+bugs
[10:53] stickupkid: no idea yet, just trying to put things together
[10:55] stickupkid: do you know if explicitly passing `-n 1` would make a difference?
[10:56] flxfoo, tbh, I wouldn't know personally
[11:04] stickupkid: Would you know how a single `add-machine` could end up looping in endless server creation?
[11:04] flxfoo, not getting back one that meets the constraints OR the provider doesn't have enough resources for the machine to be created
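The retry behaviour described above (an instance stuck in BUILD, a fixed wait, a bounded number of attempts) roughly follows the polling pattern sketched below. This is a simplified, hypothetical Go sketch, not the actual provider/openstack/provider.go code; the function names, timings and attempt count are illustrative only.

```go
package main

import (
	"fmt"
	"time"
)

const (
	retryDelay  = 10 * time.Second // matches the "wait 10 seconds" message above
	maxAttempts = 10               // illustrative bound, not juju's real value
)

// instanceStatus stands in for a call to the cloud API asking for the
// server's current status; here it always reports BUILD.
func instanceStatus(id string) string {
	return "BUILD"
}

// waitForActive polls until the instance leaves BUILD or the attempts run out.
func waitForActive(id string) error {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		status := instanceStatus(id)
		if status == "ACTIVE" {
			return nil
		}
		fmt.Printf("instance %q has status %s, wait %s before retry, attempt %d\n",
			id, status, retryDelay, attempt)
		time.Sleep(retryDelay)
	}
	// If a caller reacts to this error by provisioning again from scratch,
	// duplicate servers with the same name could pile up on the provider,
	// much like the behaviour reported above (unconfirmed speculation).
	return fmt.Errorf("instance %q still not active after %d attempts", id, maxAttempts)
}

func main() {
	if err := waitForActive("31c020be-c4b9-440d-aaf0-d2c1b33ea218"); err != nil {
		fmt.Println(err)
	}
}
```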
[11:50] manadart, haha, there are so many issues in remove space, trying to work it out now
[11:50] in the cmd/remove.go
[12:13] anyone know how to test that something was written to a cmd/ctx log?
[12:40] manadart, that was a pig
[12:40] https://github.com/juju/juju/pull/11365
[14:23] achilleasa: 11356 looks good here, but is still marked as a draft.
[14:51] manadart: marked it as a draft because it needs a rebase/force-push once the relation-created one lands
[14:51] achilleasa: OK. I've approved it.
[14:52] manadart: tyvm
[15:58] is there a juju-idiomatic way of accessing controller config options inside state? I was thinking of having the facade fetch them and pass them as arguments to the state method I am working on, but I am wondering whether we use a different pattern
[17:39] rick_h_: do you think 640k for charm data and 512k for uniter data are reasonable defaults for the quota limits? 640k ought to be enough for everyone, right? ;-)
[17:40] note that the operator will still be able to set the limits to 0 and bypass the quota checks if they are feeling yolo
[17:41] achilleasa: hmmm, ideally folks will never know/hit this. I'd prefer to start with bigger defaults like 512k for juju and 1 or 2M for units?
[17:44] rick_h_: OK, though 2M seems too generous to me. The absolute max (combined) is 16M
[17:45] achilleasa: right, personally I'd almost default to the max. I mean we store charms with 1GB of resources/etc
[17:45] achilleasa: it seems silly to make folks open the faucet ever more and only close it if they hit issues/care
[17:46] rick_h_: but those end up in the blob store, right?
[17:46] achilleasa: but I can go with some limit to start I guess. We know most controllers in the wild don't have > 3 models and most models are 3-5 applications
[17:46] achilleasa: yes, this is true
[17:46] achilleasa: but it's on the same disk as the rest of mongodb
[17:47] rick_h_: we can also do a 14M/1.5M split as the default and allow operators to fine-tune it if they need to
[17:48] my concern is basically that if we set it too high, charm authors will end up abusing it
[17:48] to store binary blobs or something
[17:48] (logs)
[17:48] achilleasa: yea, that's why I'm ok with going roomy, but not all the way. I feel like the 512k/2M split means we're talking 2.5MB per unit on the machine MAX. Realistically not all your charms will be abusers.
[17:49] achilleasa: but we definitely need a metric for the size of this collection so we can add it to graphana and, if there's a disk usage rise, track it directly to this change
[17:49] grafana doh
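On the [15:58] question about reading controller config inside state: the approach floated there, having the facade fetch the config and pass just the values state needs as arguments, might look roughly like the hypothetical Go sketch below. The type names, the `max-charm-state-size` key and the 512k default are assumptions for illustration, not the real juju APIs or config keys; the 0-means-disabled behaviour mirrors the quota discussion above.

```go
package quota

import "fmt"

// ControllerConfig stands in for the controller config map.
type ControllerConfig map[string]interface{}

// MaxCharmStateSize returns the configured quota limit in bytes; 0 disables
// the check, mirroring the "set the limit to 0 to bypass" idea above.
// The key name and the 512k default are assumptions for illustration.
func (c ControllerConfig) MaxCharmStateSize() int {
	if v, ok := c["max-charm-state-size"].(int); ok {
		return v
	}
	return 512 * 1024
}

// State stands in for the state layer; it receives the limit as an argument
// instead of looking up controller config itself.
type State struct{}

// SetCharmState enforces whatever quota the caller hands it.
func (st *State) SetCharmState(data map[string]string, maxBytes int) error {
	if maxBytes == 0 {
		return nil // quota checks disabled
	}
	total := 0
	for k, v := range data {
		total += len(k) + len(v)
	}
	if total > maxBytes {
		return fmt.Errorf("charm state of %d bytes exceeds the %d byte quota", total, maxBytes)
	}
	// ... persist the data ...
	return nil
}

// CharmStateFacade shows the facade side of the pattern: it is the layer that
// knows about controller config and forwards only the value state needs.
type CharmStateFacade struct {
	cfg ControllerConfig
	st  *State
}

func (f *CharmStateFacade) SetState(data map[string]string) error {
	return f.st.SetCharmState(data, f.cfg.MaxCharmStateSize())
}
```

Passing the limit in explicitly keeps state free of config lookups and makes the bypass case easy to unit test.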
[19:37] * babbageclunk waves
[19:38] are u using IRC for project development and discord for app usage? or migrating everything to discourse?
[19:45] ventura: irc for real time chat but discourse for async/news/published details
[19:49] the most simple charm is ubuntu heh, just get a machine and set up the charm environment
[19:55] rick_h_: turn off/on feature flags on the backend for mobile clients
[19:58] if it were possible to create a git repo with config changes, juju would allow easily git-reverting bad flags, with the benefit of always keeping track of changes
[20:00] hi again, :)
[20:00] ventura: that does some work, but as Juju is more than configuration management there's a lot more to "changes" than flipping config flags.
[20:00] ventura: there are resources, or binaries provided to run/use. There are actions, which trigger administrative functions like adding users, backing up the db, etc. There are relations, which instruct applications to pass details about themselves back and forth.
[20:01] ventura: a lot more moving parts and "live system" than can easily be git committed/rolled back
[20:02] so I can confirm that when a machine is created (juju add-unit / add-machine) the process on the provider finishes (takes time but finishes)... except that there is more than one machine created with the same name (different IDs)...
[20:03] I had to do a `juju remove-machine XX --force`, because after 11 instances... it is too much :)
[20:03] I think there is something here as well, where rackspace needs time to allocate resources, versus the frequency at which a new server will be spawned...
[20:04] rick_h_: i mean "a simple charm that shows something to my manager to allow using juju" :-)
[20:04] TL;DR: we lost all machine configs during Black Friday due to the Bolsonaro Bug (i.e. crazy daylight saving time changes in Brazil)
[20:04] I don't know if I could put in a delay like 30 minutes instead of 10s
[21:27] Anyone on the fact that calling one `juju add-unit/add-machine` would end up creating multiple instances with the same name (different IDs)?
[21:29] flxfoo: I'm not sure I understand what you mean
[21:29] wallyworld: https://github.com/juju/lumberjack/pull/1
[21:33] @thumper: when I perform a `juju add-unit`, juju loops saying that the instance is in state BUILD, retrying... probably due to rackspace lagging... but then a few minutes later I have another instance created (with the same name, different id) and juju reports the same message (with the different ID)
[21:33] until I `juju remove-machine --force`, it just goes on
[21:34] flxfoo: that definitely sounds like a bug
[21:34] after removing on the juju side... all the instances go from building state to ready state
[21:34] perhaps due to the slow nature of rackspace
[21:34] we don't see it on our other openstacks
[21:35] yeah that sounds very much like something linked to rackspace..
[21:35] a few weeks ago it was not doing that, for sure
[23:20] thumper: lgtm
[23:22] thumper: if the password we use to connect to mongo for juju-restore is always oldpassword from controller machine 0, what happens if machine 0 has gone away? ie, the controller has machines 1, 2, 3?
[23:23] what does the juju-db plugin do?
[23:23] we should use that
[23:23] should we be connecting as a different user, so we can use the oldpassword from the machine we're on?
[23:23] ooh good call - looking
[23:25] ah, ok - it uses statepassword and the tag
[23:25] I think that might have been the problem with how we were doing it before - trying with the tag but oldpassword, not statepassword
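To round off the juju-restore thread: dialling mongo as the local machine agent (its tag plus the statepassword from that machine's agent.conf), rather than relying on machine 0's oldpassword, might look roughly like the Go sketch below. This is an assumption-heavy illustration, not the juju-restore or juju-db plugin code: the agent.conf field names, the file path, the 37017 port and the plain (non-TLS) dial are guesses or simplifications, and a real connection to juju's mongod would also need TLS with the controller's CA certificate.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"time"

	"gopkg.in/mgo.v2"
	"gopkg.in/yaml.v2"
)

// agentConf holds only the agent.conf fields this sketch cares about;
// the key names are assumed, so double-check them against a real agent.conf.
type agentConf struct {
	Tag           string `yaml:"tag"`
	StatePassword string `yaml:"statepassword"`
}

// dialJujuDB authenticates as the local machine agent (tag + statepassword)
// rather than as machine 0 with oldpassword.
func dialJujuDB(agentConfPath string) (*mgo.Session, error) {
	raw, err := ioutil.ReadFile(agentConfPath)
	if err != nil {
		return nil, err
	}
	var conf agentConf
	if err := yaml.Unmarshal(raw, &conf); err != nil {
		return nil, err
	}
	info := &mgo.DialInfo{
		Addrs:    []string{"localhost:37017"}, // assumed juju mongod port
		Database: "admin",
		Username: conf.Tag,           // e.g. machine-1, whichever machine we're on
		Password: conf.StatePassword, // statepassword, not oldpassword
		Timeout:  30 * time.Second,
	}
	// NOTE: a real juju mongod requires TLS with the controller CA cert,
	// which would be wired in via DialServer; it is omitted here.
	session, err := mgo.DialWithInfo(info)
	if err != nil {
		return nil, fmt.Errorf("cannot dial juju mongo: %v", err)
	}
	return session, nil
}

func main() {
	session, err := dialJujuDB("/var/lib/juju/agents/machine-1/agent.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer session.Close()
	fmt.Println("connected to juju's mongod")
}
```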