[05:46] <Dmitrii-Sh> o/
[06:16] <jam> Dmitrii-Sh, morning. (I'm not really here, but there were comments left by someone on https://github.com/canonical/operator/pull/212)
[06:17] <Dmitrii-Sh> jam: thanks, just responded there
[06:18] <Dmitrii-Sh> I think extending add_relation by allowing it to accept initial data would be a good way to move some checks away from update_relation_data and make it less overloaded than it is right now
[06:19] <jam> Dmitrii-Sh, maybe we have both options written out and can compare the tests that result and what we feel is easier to understand
[06:20] <Dmitrii-Sh> jam: ok, I'll make a branch locally so that I have the current implementation preserved
[06:20] <Dmitrii-Sh> and see what I can come up with
[06:21] <jam> I had initially done the initial_app_data form, and then was trying to streamline and saw that you could use update for self. but I wasn't convinced which way would be better.
[06:24] <Dmitrii-Sh> jam: also, if I look at this signature `def add_relation(self, app, *, initial_unit_data=None, initial_app_data=None, remote_app_data=None):`, it gets confusing on which argument is for which purpose for peer relations
[06:24] <jam> app is redundant for peer but the rest isn't
[06:25] <jam> I would say for peers we just make one of them illegal to set
[06:26] <Dmitrii-Sh> hmm, there is a case where our unit comes up and there's already something in peer app relation data
[06:29] <Dmitrii-Sh> just thinking about a later addition of `relation-created` events - add_relation should trigger "relation-created" for our unit but during that there may already be some app relation data present for that relation-id and a "remote app" (which may be local on a peer rel)
[06:31] <Dmitrii-Sh> I think it should be illegal to use the `remote_app_data` argument for peer relations - I am going to raise an exception if that happens and see how it looks
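The add_relation variant under discussion can be sketched as follows. This is a hypothetical toy harness to illustrate the peer-relation check, not the actual code in PR #212; the class name, constructor, and internal storage are all assumptions.

```python
# Toy sketch of a test harness whose add_relation accepts initial data and
# rejects remote_app_data for peer relations (the behavior Dmitrii-Sh
# proposes). Not the real ops.testing.Harness API.
class Harness:
    def __init__(self, app_name, peer_relations):
        self._app_name = app_name
        self._peers = set(peer_relations)   # relation names declared as peers
        self._relations = {}                # relation_id -> data buckets
        self._next_id = 0

    def add_relation(self, relation_name, remote_app, *,
                     initial_unit_data=None, initial_app_data=None,
                     remote_app_data=None):
        if relation_name in self._peers and remote_app_data is not None:
            # On a peer relation the "remote" app is our own app, so this
            # argument is ambiguous; reject it outright.
            raise ValueError(
                'remote_app_data cannot be used with peer relations')
        rel_id = self._next_id
        self._next_id += 1
        self._relations[rel_id] = {
            'unit_data': dict(initial_unit_data or {}),
            'app_data': dict(initial_app_data or {}),
            'remote_app_data': dict(remote_app_data or {}),
        }
        return rel_id
```

Seeding the buckets at creation time also gives a natural place to later emit a `relation-created` event with data already present, as discussed above.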
[06:39] <t0mb0> Hi, I've been struggling with some issues with actions. Every time I try to retrieve some data from StoredState in my action, I keep getting the initial value of my variable, even though this variable gets populated with data as part of the configuration process and I can confirm the variable is set. It seems that when I call my action it is referencing a different state object?? https://pastebin.canonical.com/p/jWT6bDcRVR/
[06:39] <t0mb0> am I missing something ^^ or should I raise a bug
[06:39] <jam> t0mb0, can you link the code?
[06:40] <t0mb0> jam, https://pastebin.canonical.com/p/7BnhqRxDfG/
[06:41] <jam> t0mb0, so #1 I would suggest moving from setting all the attributes in on_start to a set_default() call in __init__
[06:42] <jam> t0mb0, the on_start doing apt-get stuff is fine
[06:42] <t0mb0> jam, I had done a set_default in __init__ for the data I'm trying to retrieve
[06:43] <jam> t0mb0, that isn't the code you linked
[06:43] <jam> hard to debug code that isn't what I'm looking at :)
[06:43] <t0mb0> jam, yeah I know, I'm just saying it's something I've already tested
[06:43] <t0mb0> I've been trying all number of things to try and isolate the issue :P
[06:43] <t0mb0> I'll move it back to __init__ though
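The set_default() pattern jam recommends can be sketched with a toy stand-in. This is not the real ops.framework.StoredState, just a model of its set_default semantics: defaults are applied on every dispatch but never overwrite values already stored, which is why __init__ is a safe place for them.

```python
# Toy model of StoredState.set_default(): setdefault-like semantics, so
# calling it on every dispatch never clobbers previously stored values.
class StoredStateStub:
    def __init__(self):
        self._data = {}

    def set_default(self, **defaults):
        for key, value in defaults.items():
            self._data.setdefault(key, value)

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        if name == '_data':
            super().__setattr__(name, value)
        else:
            self._data[name] = value
```

For example: set_default(admin_password='') in __init__, assign a real value in the config-changed handler, and a later dispatch's set_default call leaves the stored value intact.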
[06:44] <jam> t0mb0, ah, you're on k8s, right? Charms fire hooks in the Operator pod, but fire actions in the Workload pod.
[06:44] <jam> (currently)
[06:44] <jam> I believe there is a flag
[06:44] <t0mb0> :O
[06:44] <t0mb0> I am in k8s yup
[06:47] <jam> t0mb0, sigh, the flag is only in dev mode. So if you set "export JUJU_DEV_FEATURE_FLAG=juju-v3" then "juju run-action" is renamed to "juju run" and has a "juju run --operator" to run the action in the operator pod.
[06:47] <jam> t0mb0, I'd bring that up in #juju about not being able to supply --operator to 'juju run-action'
[06:49] <t0mb0> jam, does this work with microk8s juju?
[06:49] <t0mb0> I don't seem to have that flag when I export the variable there
[06:49] <jam> t0mb0, it might be a 2.8 feature, not sure.
[06:50] <jam> t0mb0, with any k8s
[06:50] <t0mb0> hmm
[06:50] <jam> t0mb0, submitted https://bugs.launchpad.net/juju/+bug/1870487 for you.
[06:52] <t0mb0> thanks!
[06:52] <t0mb0> jam, I do have juju run --operator
[06:52] <t0mb0> but I'm not trying to run a shell cmd, I'm trying to run a charm action
[06:53] <t0mb0> ah - on another read of that bug report you're saying there is no reason not to also include --operator to "run-action"
[06:54] <jam> t0mb0, ah, it was juju run that grew --operator, not run-action
[06:55] <jam> t0mb0, bug updated
[06:56] <t0mb0> jam, shouldn't state be the same whether it's running in the operator pod or on the workload?
[06:56] <jam> t0mb0, state is read from the local disk atm, no way to read it from the other pod
[06:57] <t0mb0> jam, so if the operator pod was to die then you'd lose that state?
[06:57] <jam> t0mb0, operator pods are currently stateful sets
[06:58] <jam> t0mb0, so they ask k8s for a disk mount that will float with a restart of the pod
[06:58] <jam> t0mb0, in 2.8 there are some hook tools that will let us save state back in the controller, and work that would allow operator pods to become stateless
[06:58] <jam> the operator framework doesn't support that yet. its on our 'todo next' list
[06:58] <t0mb0> jam, I also noticed that if I initialised admin_password in on_start, then my action was able to read it, it was just when that value was overwritten in the configure hook
[06:59] <jam> t0mb0, once you declare a pod spec, the Juju code copies the charm in an init container to the application pod
[06:59] <jam> t0mb0, currently the charm state is stored in the state dir, causing it to get copied
[06:59] <jam> but that is 'state as it exists at one point in time'
[06:59] <jam> which is a quirk of how it is all done and definitely shouldn't be relied upon
[07:07] <Dmitrii-Sh> that also has an effect of not being able to update StoredState from actions such that regular hooks can see those updates
[07:08] <Dmitrii-Sh> as jam says, it should work when we have controller-side state storage used in the framework with juju 2.8
[08:14] <t0mb0> am I correct in understanding that by emitting an event like this https://github.com/dshcherb/cockroachdb-operator/blob/master/src/charm.py#L192 emit() will block or should it be asynchronous?
[08:16] <Chipaca> mo'in
[08:16] <niemeyer> t0mb0: That call dispatches the event to handlers
[08:16] <niemeyer> Morning
[08:18] <t0mb0> niemeyer, so if I was to do something like this https://pastebin.canonical.com/p/hpzGSrTSGh/ it shouldn't cause a stack overflow or hit maximum recursion limits?
[08:20] <niemeyer> If you emit an event inside the handler of that event, yes bad things can happen if that takes place forever
[08:21] <niemeyer> It's also not super nice to read and maintain despite anything else
[08:22] <niemeyer> It's awkward to say "Hey, it's now initialized!" inside the thing that handles "Oh, is it initialized? Cool, let me handle that.".. makes sense?
[08:22] <t0mb0> niemeyer, it shouldn't take place forever in that eventually the pod would have loaded (this is an interim solution until juju notifies us when the pod is alive).
[08:23] <niemeyer> It's better that it's known to be going away, but it doesn't make that nice.. I'd still look for a different solution
[08:24] <niemeyer> t0mb0: You might just defer the event, for example
[08:25] <t0mb0> niemeyer, yeah that's my preferred solution atm however it's kind of annoying having to wait around for an update-status hook to fire for that event to be processed again
[08:27] <niemeyer> t0mb0: Okay, let me think about this and I will try to suggest something
[08:28] <t0mb0> niemeyer, it's not super critical, I think we're happy to just wait for an update-status for the time being until we have more k8s hooks to work off
[08:29] <t0mb0> I was just wondering if the emit() loop was viable
[08:45] <niemeyer> t0mb0: Viable it is.. it's just not a very nice way to organize things.. it's a very complex way to run an unbounded loop
[08:46] <Dmitrii-Sh> t0mb0: .emit() is not asynchronous but handling of that event may be deferred by whoever receives it
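The emit-vs-defer distinction can be sketched with a toy dispatch loop. This is not the operator framework's actual machinery; it only models the semantics being described: emit() delivers synchronously (so re-emitting inside a handler recurses), while defer() hands the event back to be re-delivered on the next dispatch.

```python
# Toy model of event deferral: a deferred event survives the current
# dispatch and is re-delivered on the next one, instead of recursing.
class Event:
    def __init__(self):
        self.deferred = False

    def defer(self):
        self.deferred = True


def dispatch(handler, queue):
    """Run handler over queued events; return the ones it deferred."""
    remaining = []
    for event in queue:
        event.deferred = False
        handler(event)          # synchronous, like emit()
        if event.deferred:
            remaining.append(event)
    return remaining
```

In a real charm the handler body would look like: if the pod isn't ready yet, call event.defer() and return; the framework re-runs the handler on the next hook (e.g. update-status), which matches t0mb0's "wait for update-status" plan without any recursion.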
[08:48] <Dmitrii-Sh> t0mb0: from what I understand you would like to update the unit status based on whether a workload pod is actually ready or not
[08:50] <Dmitrii-Sh> this is a little odd to do from the operator pod because (in my view) its status is determined by whether it has enough data to produce a pod spec
[08:51] <Dmitrii-Sh> I can see the use case where you need to, for example, expose a service URL only when your services are actually up because otherwise a remote unit might fail trying to connect using that URL immediately after receiving it
[08:52] <Dmitrii-Sh> ideally a client would retry on connection failures because we cannot guarantee that a service is up and will stay up when we expose a URL over a relation
[08:53] <niemeyer> Dmitrii-Sh: That seems to be well understood. It's exactly the retrying that is being discussed here.
[10:12] <Chipaca> how do we define when a charm is 'done'?
[10:13] <Chipaca> in my mind that's a case-by-case thing, with a process for eking out what done is for each charm
[10:13] <niemeyer> What would it mean for a charm to be done?
[10:15] <Chipaca> niemeyer: that the person assigned with doing it can tick it off a list :)
[10:16] <niemeyer> Chipaca: This is moving the question to what was the person aiming to do with that list..
[10:16] <Chipaca> we can't say "we need you to do this thing" without saying what that thing being "done" actually is
[10:16] <niemeyer> Chipaca: The charm manages the entire lifecycle.. it's never done
[10:16] <niemeyer> It's an event consumer.. it's always waiting for the next reason to do something
[10:17] <Chipaca> heh
[10:17] <niemeyer> Chipaca: Exactly.. and we can't say what "done" means without us knowing what "we need you to do this thing" means
[10:17] <niemeyer> Chipaca: The charm can be done handling a hook
[10:18] <niemeyer> Chipaca: "heh" is pretty much never the right answer, though
[10:18] <Chipaca> I thought you made a joke
[10:20] <Chipaca> niemeyer: we _have_ told people "we need you to do this charm". So I'm asking, how do they, and how do we, know when they're done?
[10:21] <niemeyer> Chipaca: So the actual question is "When are people done writing a charm?".. okay, that's something else altogether
[10:23] <niemeyer> Chipaca: The charm is supposed to manage the lifecycle of the applications involved. If we can start, stop, and handle the applications' relationships so it can actually work, and communicate the necessary information for the underlying software to be useful.. it seems like we've got something tangible in hand
[10:58] <Chipaca> niemeyer: so questions of vert/horiz scalability, managing secrets, upgrades, metrics and logs, are all beyond the scope of a basic "charm is done"?
[10:59] <Dmitrii-Sh> jam: (for when you're around). Updated https://github.com/canonical/operator/pull/212 per our discussion and will have a look at debug-code.
[11:01] <niemeyer> Chipaca: I'm assuming you're talking specifically about the conversation we had yesterday, which was in the context of getting the Charmcraft developers to come up with working examples so that they can benefit from having actual experience on how to write a charm in the framework, and so that we can develop experience to help other people
[11:02] <Chipaca> niemeyer: yes but also for teams beyond just ours, that are also being tasked with charming things
[11:02] <niemeyer> Chipaca: If that's the case, then we don't need something fancy and comprehensive to start documenting these charms in our catalog of examples
[11:03] <niemeyer> Chipaca: If it's something else, then I cannot possibly answer that question without knowing what that something else is..
[11:03] <niemeyer> Chipaca: Yes, sometimes managing secrets, scalability, logs, metrics, are fundamental to consider basic work on a charm done
[11:04] <facubatista> Good morning everyone!
[11:04] <Chipaca> facubatista: good morning!
[11:04] <Chipaca> facubatista: happy Friday, even :)
[11:04] <facubatista> hi Chipaca! :)
[11:09] <niemeyer> facubatista: How's it going?
[11:13] <facubatista> niemeyer, hi! all fine, I already did some gym, have my mate, the sun is coming in through the window...
[11:13] <facubatista> you?
[11:13] <niemeyer> That sounds nice
[11:13] <niemeyer> I just finished my morning bottle of mate
[11:14] <facubatista> good
[11:23] <facubatista> niemeyer, Chipaca, in my mind, we should have a comprehensive set of charms that shows how to do each of those things (vert/horiz scalability, managing secrets, upgrades, metrics, logs, etc) but we should NOT do all those things on each example
[11:24] <facubatista> (of course, we need a couple of "big fully done" examples)
[11:25] <facubatista> but the small ones are very useful for other people writing charms... when a developer writing a charm says "I'm doing my charm, have much done, need to add 'secrets' here", we can point to docs, and an example where it is easy to find *that*
[11:26] <facubatista> (note that even though they should be very small and focused examples, they will be fully functional in the sense that they can be deployed, we run functional tests on them, etc)
[11:33] <niemeyer> facubatista: The medium term task for the charmcraft team in that regard should be to identify what patterns need to be enabled.. it's not a goal to be maintaining a comprehensive charm that can e.g. do everything one might possibly want to do in PostgreSQL in production
[11:34] <facubatista> yeap
[11:34] <niemeyer> facubatista: For this example, stub would be much better positioned to do that.. but what we must do is ensure that he can indeed do that and isn't running short on support from the underlying framework
[11:35] <facubatista> perfect
[12:04] <Chipaca> facubatista: could you look at #196 again?
[12:05] <facubatista> Chipaca, yes, I'm on that right now
[12:05] <Chipaca> facubatista: 👍
[12:23] <Chipaca> I went to look at https://github.com/juju/juju/pull/11389 and at the bottom of all the discussion, it's merged :-D
[12:28] <facubatista> Chipaca, done with #196
[12:28] <facubatista> (not sure what you were expecting from that)
[12:29] <Chipaca> facubatista: what you've done seems fine
[12:30] <Chipaca> facubatista: i'm just allergic to "changes requested" and then nothing
[12:30] <facubatista> sure
[12:31] <facubatista> we probably need to interact more here, regarding PRs... like "answered your comments, pushed a commit, something still open to discussion"
[12:31] <facubatista> otherwise we (I) get to those changes after reading other 12496 issues
[12:40] <Dmitrii-Sh> relation-created in a to-be-published 2.8-beta1:
[12:40] <Dmitrii-Sh> juju show-status-log cockroachdb/0
[12:40] <Dmitrii-Sh> Time                        Type       Status       Message
[12:40] <Dmitrii-Sh> 03 Apr 2020 15:25:20+03:00  juju-unit  executing    running proxy-listen-tcp-relation-created hook
[12:40] <Dmitrii-Sh> 03 Apr 2020 15:25:21+03:00  juju-unit  executing    running cluster-relation-created hook
[12:40] <Dmitrii-Sh> 03 Apr 2020 15:25:22+03:00  juju-unit  executing    running leader-elected hook
[12:40] <Dmitrii-Sh> 03 Apr 2020 15:25:23+03:00  juju-unit  executing    running config-changed hook
[12:40] <Dmitrii-Sh> 03 Apr 2020 15:25:24+03:00  juju-unit  executing    running start hook
[12:49] <Chipaca> Dmitrii-Sh: nice
[12:51] <Chipaca> Dmitrii-Sh, niemeyer, did I understand right yesterday that juju changed the command we need to run to set a pod spec?
[12:53] <Dmitrii-Sh> Chipaca: https://github.com/juju/juju/pull/11323
[12:53] <Dmitrii-Sh> yes, that's correct
[12:54] <Chipaca> what does the deprecation look like? are we ok to carry on using the old one, or do we need to use jujuversion to run one or the other?
[13:08] <crodriguez> Good morning everyone! Just wanted to say thank you for yesterday, I was able to convert my pod_spec to v3 and incorporate the secret successfully :)
[13:09] <crodriguez> so is the whole charm-tech team EMEA?
[13:10] <facubatista> good morning crodriguez!
[13:10] <crodriguez> happy friday too !
[13:14] <Chipaca> crodriguez: part EMEA, part americas
[14:14] <Dmitrii-Sh> crodriguez: glad to hear it!
[14:40] <Dmitrii-Sh> facubatista: https://paste.ubuntu.com/p/kvNvF45Jm4/ - seems to work fine
[14:40] <Dmitrii-Sh> (first impression)
[14:47] <facubatista> Dmitrii-Sh, great!
[14:47] <facubatista> Dmitrii-Sh, just checking the value of the envvar is enough, for the different cases
[14:49] <facubatista> (I'm actually writing the Python code to react on that)
[14:54] <Dmitrii-Sh> facubatista: need to split the string if multiple breakpoints are specified https://paste.ubuntu.com/p/QfTSwnnjXg/
[14:54] <facubatista> Dmitrii-Sh, yeap
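The envvar handling being discussed can be sketched like this. The variable name JUJU_DEBUG_AT and the comma-separated format are assumptions based on the pastes above; the real framework code may differ.

```python
# Sketch: parse a comma-separated list of breakpoint names out of an
# environment variable (name assumed), tolerating whitespace and empties.
import os


def active_breakpoints(environ=None):
    environ = os.environ if environ is None else environ
    raw = environ.get('JUJU_DEBUG_AT', '')
    return {name.strip() for name in raw.split(',') if name.strip()}
```

Returning a set makes the "is this breakpoint active?" check in the framework a simple membership test.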
[15:05] <crodriguez> design question: My charm deploys a mssql container and its operator container. If I want to deploy (optionally) another container that contains tools to interact with the main container, should I create a new class for it? I see that if I simply add the container info in the same pod_spec, it includes it in the main mssql pod, which is not what I want, I would like a separate mssql-tools pod
[15:09] <crodriguez> or should it be a separate charm altogether and relate them in a bundle? (I hope not)
[15:12] <niemeyer> crodriguez: It's a pretty good question.. I don't have a good answer myself because I don't yet understand the design of the k8s integration as well as I want to
[15:12] <niemeyer> crodriguez: What I'd like to enable is a situation where the main charm container lives right next to the managed container, and you might have any number of workload containers next to it, but that's not there yet
[15:13] <niemeyer> crodriguez: Would be nice to have the view of someone who is comfortable with the way things currently work and might provide some advice on that
[15:13] <niemeyer> crodriguez: From the charm team, the best person would be jam.. tvansteenburgh might also have a good view on this from the k8s perspective
[15:14] <niemeyer> crodriguez: jam is off today
[15:15] <crodriguez> niemeyer: yes, maybe they could help. This is my first k8s charm so I'm not always sure of best practices . I was thinking that I'd try to create a 2nd charm class that is triggered optionally, and see how that does
[15:16] <Dmitrii-Sh> crodriguez: just trying to understand the use-case a little bit more: is there anything specific that you would like to do from a separate pod with mssql-tools?
[15:18] <crodriguez> Dmitrii-Sh: mostly just make it visible. Otherwise it's hidden, the pod/mssql-0 gets 2 containers instead of 1, like this (ignore the crashloopbackoff lol) https://usercontent.irccloud-cdn.com/file/be7sDsYu/image.png
[15:20] <crodriguez> for the use case, I am also going to add other sql components in this charm, such as a Machine Learning component, and I will have to decide if I want to extend the base mssql container or if I would prefer to deploy these components in separate containers
[15:21] <Dmitrii-Sh> crodriguez: hmm, the first impression is that a machine learning component would be a separate application
[15:22] <Dmitrii-Sh> and tools to access mssql would be, for example, a part of its container image
[15:23] <Dmitrii-Sh> when Juju deploys an application to k8s it creates a Deployment in k8s with a pod template specified
[15:24] <Dmitrii-Sh> that pod template may have multiple containers
[15:24] <Dmitrii-Sh> but, as far as Juju is concerned, a combination of a workload pod and an operator pod is considered to be a unit
[15:24] <Dmitrii-Sh> so individual containers in that unit are not visible
[15:26] <crodriguez> Okay, so there is not really a way to display more than one pod with one k8s charm right now, is that correct?
[15:29] <Dmitrii-Sh> crodriguez:  1 app can have multiple units which would correspond to multiple pods
[15:29] <Dmitrii-Sh> each pod can have multiple containers - that part is within the scope of 1 unit
[15:29] <Dmitrii-Sh> so, yes, that's correct
[15:30] <Dmitrii-Sh> crodriguez: containers within a pod share networking and storage, for example
[15:30] <crodriguez> Dmitrii-Sh: ok! so to make multiple units in this framework, would it be to create multiple charm classes in charm.py ?
[15:32] <Dmitrii-Sh> Similar to non-k8s charms, all units share the same code and have one Charm class. Except for actions, that code gets executed in operator pods, not workload pods (where multiple containers would be)
[15:32] <Dmitrii-Sh> so you'd still have one class providing the same code to run multiple units
[15:34] <Dmitrii-Sh> a leader unit would be able to use pod-spec-set, other units would mostly react to lifecycle events and, if a leader is gone, one of the units would take leadership over and be able to use pod-spec-set
[15:35] <Dmitrii-Sh> crodriguez: what are the component-specific things that you need to do in the charm code?
[15:35] <Dmitrii-Sh> is it about feeding some parameters to the container spec?
[15:36] <Dmitrii-Sh> crodriguez: if so, I think you could have multiple classes responsible for different parts of the final podspec
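What Dmitrii-Sh describes, one pod spec whose pod template carries multiple containers, composed from per-component helpers, could be sketched like this. The image names, port, and dict shape are illustrative assumptions; check the pod-spec documentation for your Juju version before relying on any key.

```python
# Sketch: compose one pod spec from per-component helpers, so both
# containers land in the same pod (shared network/storage, one Juju unit).
# Image names and keys are illustrative, not verified against Juju.
def mssql_container(config):
    return {
        'name': 'mssql-server',
        'image': 'mcr.microsoft.com/mssql/server',   # assumed image
        'ports': [{'containerPort': 1433, 'protocol': 'TCP'}],
    }


def tools_container(config):
    return {
        'name': 'mssql-tools',
        'image': 'mcr.microsoft.com/mssql-tools',    # assumed image
    }


def build_pod_spec(config):
    return {'containers': [mssql_container(config), tools_container(config)]}
```

In the charm itself, only the leader would apply it, along the lines of: if self.model.unit.is_leader(): self.model.pod.set_spec(build_pod_spec(config)) — matching the leadership behavior described above.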
[15:44] <crodriguez> Dmitrii-Sh:  I'm not sure yet. I was working with mssql-tools image first as it helps me see how the charm structure works with different containers in one charm
[15:46] <crodriguez> you said one app can have multiple units, which would correspond to multiple pods, and that this would be only one class in the code. I do not see that. With using the same class (so the same pod spec), it deploys only one pod.
[15:52] <Dmitrii-Sh> crodriguez: just to demonstrate what I mean https://paste.ubuntu.com/p/rgFzBtpSWC/
[15:53] <Dmitrii-Sh> let me get some sample output from k8s as well to demonstrate multiple units
[15:57] <crodriguez> ok thanks Dmitrii-Sh
[16:13] <Dmitrii-Sh> crodriguez: https://paste.ubuntu.com/p/2P93kyFxd9/
[16:14] <Dmitrii-Sh> so in this example there is one charm that gets deployed, the app is called "wp" with 2 units. As a result you get 1 operator pod and 2 workload pods
[16:17] <Dmitrii-Sh> crodriguez: if you exec into the operator pod, you will see that there are multiple directories under /var/lib/juju
[16:17] <Dmitrii-Sh> microk8s.kubectl exec -n controller-microk8s-localhost wp-operator-0 -it -- bash
[16:17] <Dmitrii-Sh> root@wp-operator-0:/var/lib/juju# ls /var/lib/juju/agents/
[16:17] <Dmitrii-Sh> application-wp  unit-wp-0  unit-wp-1
[16:18] <Dmitrii-Sh> so the charm code for different units gets executed in the same operator pod, however, it runs with a different context and in different charm directories
[16:18] <Dmitrii-Sh> the operator pod is the same, the code is the same, the stored state is different, leadership status is different
[17:13] <facubatista> I need help for a error message
[17:13] <facubatista> this looks too long: "breakpoint names must start and end with lowercase alphanumeric characters, and only contain lowercase alphanumeric characters, the hyphen '-' or full stop '.'"
[17:14] <niemeyer> facubatista: 'breakpoint names must look like "foo" or "foo-bar"'
[17:14] <niemeyer> facubatista: I'd drop the dot..
[17:14] <niemeyer> facubatista: Reserve that for namespacing if we have to
[17:14] <facubatista> niemeyer, I just used the same rule as for actions
[17:14] <facubatista> but totally agree
[17:14]  * facubatista fixes the spec
[17:15] <niemeyer> facubatista: These are not actions.. if we have to add namespacing, we should have a good plan in mind
[17:16] <facubatista> niemeyer, and I like the error with the examples; developer can go for documentation if we want the "full rules" anyway
[17:16] <niemeyer> facubatista: Agreed
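The rule agreed above can be captured in a short regex. This is a sketch of the validation, not the actual operator-framework code; the error wording follows niemeyer's suggestion.

```python
# Sketch: breakpoint names must start and end with a lowercase
# alphanumeric, with only lowercase alphanumerics and hyphens in between
# (the dot is dropped, reserved for possible namespacing later).
import re

_BREAKPOINT_NAME = re.compile(r'^[a-z0-9]([a-z0-9-]*[a-z0-9])?$')


def validate_breakpoint_name(name):
    if not _BREAKPOINT_NAME.match(name):
        raise ValueError('breakpoint names must look like "foo" or "foo-bar"')
```

Note the regex also rejects leading/trailing hyphens, uppercase, and the empty string, while allowing single-character names like "a".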
[19:09] <crodriguez> Dmitrii-Sh: Sorry if it's late for you, we can continue this conversation Monday morning. So I think we misunderstand each other, maybe it's because we do not reference the same thing when we say "unit". The example with wp still shows the same wordpress pod deployed multiple times. It's the same container image, etc. What I am trying to achieve is to have something like:
[19:09] <crodriguez> - 1x mssql-server pod
[19:09] <crodriguez> - 1x mssql-tools container (which depends on a different image than mssql-server)
[19:09] <crodriguez> - 1x mssql-machine-learning-components
[19:09] <crodriguez> - 1 operator pod
[19:09] <crodriguez> Since they are dependent on a different image, it's a different unit to deploy. I'm not looking to just scale my mssql-server to 2 containers.
[19:09] <crodriguez> But then, maybe my approach is wrong, and that I should combine all these components in a unique container image.
[19:20] <Dmitrii-Sh> crodriguez: hmm, I think I need to read on https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-machine-learning-docker?toc=%2Fsql%2Fadvanced-analytics%2Ftoc.json&view=sql-server-ver15
[19:20] <Dmitrii-Sh> skimming through it I think I understand it better. mssql-machine-learning-components is just an add-on service to MSSQL to be able to execute python and R scripts in the DB itself.
[19:21] <Dmitrii-Sh> it feels like ML services should be a separate container in the mssql-server pod
[19:21]  * facubatista loves `watch -n 5 -d juju status`
[19:21] <Dmitrii-Sh> mssql-tools could be as well just for the case where you want to exec into a container and access it through the CLI client
[19:22] <crodriguez> yeah I'm reading more about it and it might actually be a bigger image that includes mssql-server itself. From https://github.com/Microsoft/mssql-docker/tree/master/linux/preview/examples/mssql-mlservices
[19:23] <crodriguez> My point remains that if I wanted to deploy separate pods from one charm, there's no clear answer on how to do that
[19:24] <crodriguez> and mssql-tools was just a practice to figure out how to do this. because it's actually included by default in the mssql-server, I do not need to deploy it separately. But I would like to understand how to do it if I wanted to go that route, you know?
[19:29] <Dmitrii-Sh> crodriguez: multiple pods, each with different sets of container specs in one charm - no. It's not possible to do that with deployment objects in k8s IIRC (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment) and Juju apps result in deployment objects being created
[19:30] <Dmitrii-Sh> I think we need to work out the target yaml for K8s that we would like to see
[19:30] <Dmitrii-Sh> and then map it to a podspec
[19:31] <Dmitrii-Sh> crodriguez: "would like to understand how to do it if I wanted to go that route", - understood, I sent out an invite for Monday to go over this
[19:31] <Dmitrii-Sh> let me know if the time is OK for you
[19:33] <crodriguez> ok thank you! yes that time works. Thanks for the additional info
[19:35] <Dmitrii-Sh> great, np
[19:36] <Dmitrii-Sh> Have a good weekend!
[19:38] <crodriguez> you too Dmitrii-Sh !
[20:29] <facubatista> reviews appreciated! Added breakpoint manual call to the framework - https://github.com/canonical/operator/pull/213
[20:30] <niemeyer> Thanks, will look on Monday
[20:30] <niemeyer> For now, have a great weekend
[20:53]  * facubatista eods
[20:53] <facubatista> Bye all! have a nice weekend
[20:56] <Chipaca> have a good weekend, all