#smooth-operator 2020-03-30
<Chipaca> moin moin
<niemeyer> Good morning
<jam> morning
<Chipaca> Dmitrii-Sh4: you around?
<Dmitrii-Sh4> Chipaca: +
<Chipaca> Dmitrii-Sh: welcome! how was the break?
<Dmitrii-Sh> Chipaca: mostly busy and isolated :^)
<Chipaca> Dmitrii-Sh: :-)  busy with what tho
<Dmitrii-Sh> accounting & legal things
<Chipaca> Dmitrii-Sh: aw :-(
<Dmitrii-Sh> but there's significant progress there, I think I am almost done and I don't need to do much from my side anymore
<Chipaca> looking at our pics on launchpad, there's a gameshow in the works, "how many years old is this photo" (or, "was this person legally allowed to drive at the time they took this photo")
<jam> Chipaca, as I get older, my pictures look younger and younger
<Chipaca> i'm just losing saturation
<Chipaca> (ignoring the whole "is that a hairline" thing)
<Chipaca> taking a break
<Chipaca> what would be the right order to present our modules in the API docs? alphabetical?
<facubatista> Good morning, everyone!
<Chipaca> facubatista: good day, sir
<facubatista> hello hello Chipaca
<facubatista> Chipaca, shall I land that code, or we want to implement deprecation policies first?
<facubatista> Chipaca, we should have releases and versions, before a deprecation policy
<facubatista> Chipaca, do we have the PyPI name settled? Let's release 0.1
<facubatista> Chipaca, we need a versioning scheme, will we be using "semantic versioning"?
<facubatista> https://semver.org/
 * facubatista uses a more firefox-like approach to versions in his projects, but snapd and snapcraft look like they use SemVer
<Chipaca> facubatista: what's the firefox-like approach?
<Chipaca> I thought they were saying each release is probably incompatible :)
<Chipaca> oh dang i was rushing through lunch because of the standup, forgetting i'm an hour ahead now :-D
 * Chipaca finishes his drink and gets coffee
<Chipaca> facubatista: go ahead and land
<facubatista> Chipaca, ack
<facubatista> Chipaca, by firefox-like I mean basically merging major and minor together
<facubatista> I currently have Firefox 74.0, for example
<Chipaca> yeah
<facubatista> I expect 74.1 to be only patches; a new release with lots of changes would be 75.0
<Chipaca> chrome does that also, right?
<facubatista> no idea
<Chipaca> k
<Dmitrii-Sh> hmm, it seems like we still need to rely on leader-settings-changed (https://discourse.jujucharms.com/t/charm-hooks/1040#heading--leader-settings-changed) for doing things on leadership loss
<Dmitrii-Sh> even though we don't use leader settings anymore and rely on peer app relation data
<Dmitrii-Sh> jam: IIRC there was no code added to fire a -changed event for peer app relation data, do you remember?
<jam> Dmitrii-Sh, so we fire -changed when the peer data changes, but you're asking for a -changed if who-the-leader-is changes ?
<Dmitrii-Sh> jam: yes, potentially
<jam> Dmitrii-Sh, so what specifically do you need to do if you lose leadership?
<jam> I can understand a lack of symmetry, but given you are no longer the leader, a lot of things aren't accessible to you anymore.
<Dmitrii-Sh> in a keepalived charm I need to change a vrrp instance priority
<jam> Dmitrii-Sh, charm leadership should not be used for application leadership
<Dmitrii-Sh> true, I guess I shouldn't re-implement the behavior of the existing charm in this case
<Chipaca> facubatista: if we had a Makefile I'd add a target to it, but we don't, so I wonder what to do
<Chipaca> facubatista: add a "make_docs" script?
<facubatista> Chipaca, "build" isn't a better verb for docs?
<facubatista> Chipaca, `build-docs`?
<Chipaca> facubatista: we're in the building-stuff business, so no, an unqualified "build" is never a better verb for any single thing
<Chipaca> hmm, my kernel now thinks I have 0 cpus online
<Chipaca> I think I'll reboot now
<Chipaca> before the hardware agrees
<Chipaca> jam: standup?
<jam> sorry, just finishing my other, will brt
<jam> Chipaca, I have you listed as part of the Canonical group, which seems to have read/write/edit to the docs-wip. Can you check if you can get to https://discourse.jujucharms.com/c/docs/docs-wip ?
<facubatista> something I forgot to mention: after https://github.com/canonical/operator/issues/194 I found that we have no tests for "marshaling the event code so we know that all types are simple", so I'll be proposing a PR for that
<jam> facubatista, sgtm
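The check facubatista describes — that event state round-trips through simple types only — can be sketched with the stdlib `marshal` module, which rejects anything beyond built-in simple types. This is a hypothetical helper for illustration, not the code from the actual PR:

```python
import marshal

def assert_simple(value):
    # marshal only accepts built-in "simple" types (None, bool, int, float,
    # str, bytes, and containers of those), so a successful round-trip is a
    # cheap proof that a snapshot holds no custom objects. Hypothetical
    # helper, not the actual test code.
    marshal.loads(marshal.dumps(value))

assert_simple({"relation_name": "db", "unit_id": 3})   # fine

try:
    assert_simple({"handle": object()})                # custom object: rejected
except ValueError as err:
    print("rejected:", err)
```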
<Chipaca> jam: it looks like I can, ys
<jam> Chipaca, great. I don't have any clue how that will interact with wanting to use Sphinx for a more natural documentation, so I'll wait to hear what we as a project want to do
<jam> certainly some amount of API reference will be generated from the source tree, but maybe the bigger docs are just discourse posts?
<Chipaca> that's the idea at the moment, yes
<jam> (or maybe it is all Sphinx for Charmhub as separate from juju.is)
<Chipaca> just apidocs from sphinx, "story" docs from discourse
<Chipaca> keeps the barrier to entry low
<jam> makes sense to me.
<jam> the one bit that discourse makes clumsy is 'talking about the doc'. You miss having google-doc Comments, or github review inline comments.
<jam> You can reply to the post, but it doesn't tell you where your comment is.
<Chipaca> yep
 * jam EOD
 * facubatista brb
<facubatista> reviews appreciated: https://github.com/canonical/operator/pull/195
<facubatista> (Test event marshalling, and improved its error message.)
 * facubatista is back
<Chipaca> niemeyer: WRT having example repositories, some of the charms we're looking at will be perfectly functional and useful so having 'example' in the name feels wrong (and I'm not sure you were advocating for having example in the name)
<Chipaca> niemeyer: would having e.g. canonical/charm-<foo> be reasonable? e.g. canonical/charm-haproxy
<Chipaca> niemeyer: not sure canonical is the best org for this so if there's another you'd rather have it under, also speak up :)
<niemeyer> Chipaca: Wasn't advocating for something in particular, no.. just something standardized and reviewed, in a predictable location
<Chipaca> niemeyer: it being in the same org as operator is good
<Chipaca> niemeyer: and it being charm-<foo> seems reasonable to me
<niemeyer> haproxy-operator?
<Chipaca> oo, could be
<Chipaca> i like that
<Chipaca> niemeyer: what about the ones that get k8s in their name, how would those look?
<Chipaca> haproxy-k8s-operator?
<Chipaca> hakpr8xys
 * Chipaca takes that last one back -- to use it for something else
<niemeyer> Do they need that in their names?
<Chipaca> niemeyer: dunno, seems to be a pattern that exists
<Chipaca> not sure if they're operator framework charms tho
<Chipaca> niemeyer: there's mysql and mysql-k8s i think
<Chipaca> that sort of thing
<niemeyer> That first suggestion seems reasonable (foo-k8s-operator)
<Chipaca> OK! We'll run with that and see how far we get
<facubatista> Chipaca, but also we will have pure examples, not really functional charms
<facubatista> "functional" in the sense of "people will use this in production because it provides a functionality for something"
<niemeyer> Why? Isn't a functional charm a better example?
<Chipaca> facubatista: yes, we'll have an incremental bit-by-bit example as part of a tutorial
<Chipaca> facubatista: but that's not executable code, really
<Chipaca> that is, not code you check out and run
<Chipaca> niemeyer: it depends what the example is for
<Chipaca> niemeyer: an example can be something for you to base your work on, in which case yes a fully worked out, working example is best
<niemeyer> It seems fine to have a charm that is not *useful*, but it should be functional in the sense that it actually works
<Chipaca> niemeyer: an example can be a learning tool, in which case a lot of the stuff you do for a production-ready one you omit
<niemeyer> It probably needs to do something as well, whatever it is.. something that does nothing is not very interesting as an example or otherwise
<Chipaca> also, when you use an example as a didactic tool for a tutorial, you build on it … the example changes throughout the tutorial
<Chipaca> the tutorial isn't "read this code and be enlightened"
<Chipaca> most people don't learn that way
<facubatista> Chipaca, I've seen tutorials where they build the final code step by step, and each step is a commit of the final project
<facubatista> niemeyer, "a charm that is not *useful*, but it should be functional in the sense that it actually works" <-- yes, exactly, this is what I meant
<niemeyer> That's a bit problematic because if you want to change a step you need to break people's checkouts
<facubatista> my point is I don't know whether it's worth putting up a whole project for something that is not useful
<niemeyer> One option might be a repository with directories named step-1, step-2, etc..
<niemeyer> But it's unclear whether there's much value in it by itself
<niemeyer> Perhaps for testing purposes, ensuring the tutorial remains valid
<niemeyer> But otherwise, the tutorial can live on its own
<Chipaca> I'm going to go for a run before the clouds break down
<Chipaca> brb
 * facubatista brb
 * facubatista is back
<facubatista> currently, are our launchpad builders doing anything windows-related?
<Chipaca> I don't think so
<Chipaca> but maybe wgrant knows more :)
<facubatista> Chipaca, I just asked Colin, yes
 * facubatista keeps improving the doc
<Chipaca> I'm EODing, mostly. Might pop back later tonight if my brain bubbles up some docs insight :)
<facubatista> unit-my-super-charm-8: 15:06:18 DEBUG unit.my-super-charm/8.upgrade-charm     from ops.framework import Object, EventSource, EventBase, EventsBase, StoredState
<facubatista> unit-my-super-charm-8: 15:06:18 DEBUG unit.my-super-charm/8.upgrade-charm ImportError: cannot import name 'EventsBase'
<facubatista> heh :(
 * facubatista eods
<facubatista> see you all tomorrow!
<niemeyer> Have a good evening, Facundo
<facubatista> niemeyer, thanks! same for you
<niemeyer> Thanks!
<Dmitrii-Sh> https://trello.com/c/nPJqB61l  - I added the information on the progress so far to this note. A bundle with the current charms is available here along with some instructions: https://github.com/dshcherb/bundle-cockroachdb-ha
#smooth-operator 2020-03-31
<jam> So the reason some charms end up with '-k8s' in the name, is because in the current charm store, you can't have the same name for different purposes.
<jam> (k8s vs normal iaas charms)
<jam> The intent with the new charm store was to leverage a field like Arch to allow you to 'juju deploy mysql' which if you are in a k8s model is a different charm than if you are in an AWS model.
<jam> So the goal is to get rid of -k8s, but it is a necessary wart today.
<jam> thanks for the update Dmitrii-Sh . You really should sleep earlier. :)
<Dmitrii-Sh> jam: you're right
<Dmitrii-Sh> jam: regarding the charms, one thing I noticed is that VIPs make functional testing harder
<Dmitrii-Sh> there is an expectation that something will be pre-allocated and is not modeled as an abstraction
<jam> Dmitrii-Sh, we talked in Juju about having a way for a charm to say "get me another IP from the provider" which it would then use for its purpose.
<jam> In practice, it was just a testing thing, because all the charms that have VIP end up assigning real meaning to them in the rest of the system
<jam> so an arbitrary value wasn't good enough.
<jam> They *cared* that it was 10.100.20.50 because that was the firewall rule that they set in some other system.
<Dmitrii-Sh> I agree, that was true in every case I've seen so far. Plus there was a domain name associated with that VIP
<Dmitrii-Sh> jam: w.r.t https://github.com/canonical/operator/issues/193, instead of inheriting in ops/testing.py I could recreate the same type with a different name and __bases__ and __dict__ preserved
<Dmitrii-Sh> that wouldn't help with mutable class attributes though
<Dmitrii-Sh> (inheritance doesn't help either so it's not better or worse in a sense)
<jam> Dmitrii-Sh, it sounds like we're doing something bad with StoredState if we can't subclass a type that has it and have it work.
<jam> Dmitrii-Sh, that said, we'd rather not subclass and just fix the attr thing to change attrs on the instance rather than the class
<Dmitrii-Sh> jam: so far we have .on, .state (or whichever other name a dev picks) and .handle_kind as class attributes. define_event works with types that inherit from EventSetBase so define_event would have to be done for every instance of a type that inherits from EventSetBase.
<Dmitrii-Sh> Do you have any ideas on where that binding to instances would take place?
<jam> Dmitrii-Sh, I haven't worked out exactly how it would hold together. Maybe EventSetBase.
<jam> We could do something like CharmBase.__init__, but I don't think we want people to have to re-implement that for everything they want to create that emits events.
 * jam lunches
<Chipaca> gooood (belated) morning peeps!
<niemeyer> Good morning
<Dmitrii-Sh> Chipaca: https://trello.com/c/mc1YggOd/9-top-10-15-charms-in-op-fmk there is a card to identify top 15 charms to be written in the operator framework. I wrote down the ones that I think would be worthwhile to consider based on the charm store stats.
<Chipaca> Dmitrii-Sh: nice
<Chipaca> Dmitrii-Sh: missing two inputs into that: what field thinks is hawt-and-upcoming, and any special requests :-)
<Chipaca> also, where are my glasses
<Dmitrii-Sh> The most popular ones in the store are the OpenStack charms but I excluded most of them (except for rabbitmq, mysql and hacluster).
<Chipaca> thank you
<Dmitrii-Sh> mysql is one of the charms that OpenStack eng recently rewrote in reactive so I am not sure if we need to rewrite it yet again as a first candidate
<Chipaca> :-) no need to start there, indeed
<Dmitrii-Sh> Chipaca: as for input from the field, I can go and ask them in their channel. I know that Arno was keen on having OSM k8s charms in the new framework.
<Dmitrii-Sh> I am sure they'd appreciate better landscape charms
<Chipaca> Dmitrii-Sh: ask them for a top 5 (and make it clear it's not a promise)
<Dmitrii-Sh> ok
<jam> Chipaca, can you see the message above about -k8s ? I can repeat it if you cannot
<Chipaca> jam: not with this account, could you repeat?
<jam> <jam> So the reason some charms end up with '-k8s' in the name, is because in the current charm store, you can't have the same name for different purposes.
<jam> <jam> (k8s vs normal iaas charms)
<jam> <jam> The intent with the new charm store was to leverage a field like Arch to allow you to 'juju deploy mysql' which if you are in a k8s model is a different charm than if you are in an AWS model.
<jam> <jam> So the goal is to get rid of -k8s, but it is a necessary wart today.
<Chipaca> jam: yep
<Chipaca> jam: but we're not getting platform for a while yet
<Chipaca> jam: also, also
<Chipaca> jam: i understand the k8s charm and the non-k8s charm are frequently authored by different teams
<Chipaca> jam: in that case, -k8s in the name will continue to be a thing
<jam> Chipaca, indeed. I'm just pointing that -k8s feels like a wart, when you look at "kubectl get all" and see a bunch of foo-k8s-operator objects.
<Chipaca> or as per gustavo's idea yesterday <foo>-k8s-operator
<Chipaca> jam: that is fair. There is nothing in the store's roadmap to fix that wart though.
<niemeyer> My idea was -operator, to be fair :)
<niemeyer> Chipaca: Not sure I get the relationship between that wart and the store?  How's that a wart in the store?
<Chipaca> niemeyer: the wart can only be fixed by store-side work
<niemeyer> That's what I understood the statement to mean, but I don't get what that's actually referring to
<Chipaca> niemeyer: ownership is by name, you can't have different owners for different platforms under a single name
<niemeyer> Chipaca: We do have delegation in the roadmap, and that seems like an aside.. we have support for independent channels for different architectures, and will have for different platforms as well
<Chipaca> niemeyer: where is delegation in the roadmap? (is this what delegation is? seems a strange name for it)
<Chipaca> niemeyer: i don't understand what you mean by it being an aside
<niemeyer> Chipaca: Delegation of authority
<Chipaca> in on-prem enterprise stores, yes
<Chipaca> that does not sound like this
<niemeyer> Chipaca: It's an aside in the sense that when it becomes an issue we can prioritize the work.. the critical piece of that work is that we need to support orthogonal channels for orthogonal platforms, and that's part of the design right now
<niemeyer> Chipaca: It'll be an excellent problem to have when entirely different authors are fighting to maintain the same charm in different platforms
<Dmitrii-Sh> Chipaca: I sent an e-mail to their list (CC-ed) and a message to their channel soliciting some feedback. Hopefully we can get some stats out of that.
<Chipaca> Dmitrii-Sh: thank you
<jam> https://github.com/canonical/operator/pull/196 pulls out some of the stuff from my test_model.py patch, as the Harness changes were relevant for interface-pgsql
<Chipaca> jam: did you see i tagged you on issue #193?
<jam> Chipaca, yeah. Dmitrii-Sh and I were chatting about it earlier.
<Chipaca> ah ok
<jam> We can change how we create the class, but it feels like we're doing something wrong if you can't subclass something that has state.
<jam> that said, we'd rather not subclass if we can change how the accessors are done.
<niemeyer> How to subclass something that does _not_ have state?  Isn't a class without state basically a function? :)
<jam> niemeyer, if you do `class MyClass: state = StoredState()`, then having `class NewClass(MyClass)` will break.
<jam> at least, that is how I read https://github.com/canonical/operator/issues/193
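A minimal sketch of the failure being discussed, using a hypothetical descriptor much simplified from the real StoredState: a descriptor that searches only the owner class's own `__dict__` for itself works on the defining class, but breaks on a subclass, because the attribute lives in the parent's `__dict__`:

```python
class BrokenDescriptor:
    """Finds its own attribute name, but only in owner.__dict__ --
    a simplified stand-in for the flawed lookup in issue #193."""

    def __get__(self, instance, owner):
        for name, value in owner.__dict__.items():
            if value is self:
                return name
        raise RuntimeError(
            "descriptor not found in %s.__dict__" % owner.__name__)


class MyClass:
    state = BrokenDescriptor()


class NewClass(MyClass):
    pass


print(MyClass().state)   # works: the descriptor is in MyClass.__dict__
try:
    NewClass().state     # owner is NewClass; 'state' is only in MyClass.__dict__
except RuntimeError as err:
    print("broken:", err)
```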
<niemeyer> Should we fix that?
<jam> yes. my argument was we should fix that regardless, even if we don't want to create a subclass for testing.
<niemeyer> I'm not suggesting we should encourage subclassing of everything, but it sounds like a slightly awkward hard constraint
<niemeyer> Yeah, same
<Chipaca> will make writing components as mixins hard for example :)
<niemeyer> It just feels like it's an artificial constraint.. we never considered it an intended limitation
<niemeyer> We can discuss when and whether mixins/subclassing/etc make sense, but right now we can't
<niemeyer> It's probably an easy fix as well.. did you look into it jam?
<Chipaca> StoredState is gnarly, undocumented code
<Chipaca> is the attr_name thing it's doing just doing what we do with set_name elsewhere?
<niemeyer> It is indeed.. it exists so that people can not worry about that sort of thing
<jam> I haven't looked deep into it yet.
<Chipaca> I'll see if I can't make the code a bit clearer, unless you're already looking at it
<jam> I'm not right now. looking at interface-pgsql currently
<Chipaca> niemeyer: so the intent is for any class to be able to have a stored state, even if it doesn't inherit from Object?
<niemeyer> Chipaca: Object is not as important as it looks like for the framework.. logic generally check for what they need, not for inheritance
<niemeyer> Chipaca: Object is there for developer convenience
<Chipaca> niemeyer: so it doesn't have to be Object, but it does have to look like Object
<niemeyer> Chipaca: If that was the intention then we'd check for inheritance
<niemeyer> Chipaca: The point being that Object may "look like" things that we don't care about
<niemeyer> Chipaca: Makes sense?
<Chipaca> it's duck typing, but without being explicit about what's needed
<Chipaca> no it does not make sense :)
<Chipaca> currently if you just try to use StoredState like this,
<Chipaca> class MyC:  stored = StoredState()
<Chipaca> and then do  c = MyC(); c.foo = 42
<Chipaca> it fails, because it tried to call register_type on a 'framework' attribute
<niemeyer> Chipaca: If class C1 depends on attributes A1 and A2, and class C2 offers A1, 2, 3, 4, 5, 6, and 7..  does class C1 depend on what class C2 looks like, or on A1 and A2?
<Chipaca> niemeyer: if class C1 does not document what it expects of the things it's usually used with, and fails in obscure ways as soon as you try to use it outside of that, it might as well depend on every detail of the implementation
<niemeyer> Chipaca: If the water is dirty, do you throw the baby out too?
<niemeyer> Chipaca: Or do you document the baby and clean it up?
<Chipaca> why are you talking to me like this last thing isn't _EXACTLY_ why i started asking the question, dammit
<niemeyer> Chipaca: I'll wait until you relax so we can continue the conversation..
 * niemeyer steps out and sips some mate..
<facubatista> Good morning, everyone!
<facubatista> Dmitrii-Sh, hola! question about gh#193, you say that "having a state attribute on a charm currently raises an exception."... that means that if you have a real charm class, with a class attribute `state = StoredState()`, it will crash?
<facubatista> Ah, I see now that that conversation continued later
 * facubatista keeps reading
<Dmitrii-Sh> facubatista: yes, pretty much
<facubatista> Dmitrii-Sh, sounds like something we should allow and test, not something that should be worked around
<facubatista> Dmitrii-Sh, otherwise we're signaling to developers that it is a bad charm idiom
<facubatista> jam, ^
<jam> facubatista, that seems to be the consensus. The specific issue is that testing.Harness uses a subclass, which turns out to break.
<jam> and we don't really want to use a subclass for testing, but we also don't want to break if someone does subclass.
<facubatista> jam, perfect, thanks
<Dmitrii-Sh> Sorry for lagging - had a call with David from the OSM team.
<Dmitrii-Sh> Yes this is definitely something we need to have built-in. All of my current charms use state in some way at the moment, for example.
<jam> Dmitrii-Sh, yeah, the component I'm testing uses state, but isn't a complete charm, so I hadn't run into it yet.
<jam> The 'charm' I'm interacting with is just CharmBase, and then I use that to fire events into my component.
<facubatista> niemeyer, I tried to build a simple class that uses StoredState... I end up depending on having a Framework... yes, Object is not that important per se, but this "implicit framework needed floating around" is what makes me uncomfortable in some situations
<facubatista> Not saying we should change that, but maybe we could improve something about it
<facubatista> for example:
<facubatista> class DBLoggerInterface(Object):
<facubatista>     def __init__(self, charm, name):
<facubatista>         super().__init__(charm, name)
<facubatista> Having to call Object with the `charm` looks arbitrary
<facubatista> and that's only because "or the parent is the framework, or the parent has a framework"
<facubatista> so, it looks like the rule is "there must be a framework floating around for everything" (which is fine, this is the operator framework), but "it's not really floating around, everybody should have it", maybe we need to make that more explicit
 * facubatista is just thinking out loud, doesn't have a concrete proposal
<niemeyer> facubatista: Let's cover that in the standup in a bit.. first thing we need to figure out is why specifically subclassing was breaking down. This is unrelated to what is passed in or whether the object has a framework or not. If the base class worked, the subclass should work too, right?
<facubatista> niemeyer, absolutely, that's out of the question for me; subclassing must work
<Dmitrii-Sh> The attribute lookup (https://git.io/Jv5uv) is performed in a __dict__ of the type that is supposed to hold the StoredState attribute. When we subclass the type that holds '.state = StoredState()', the new type no longer holds '.state' attribute in its __dict__ (its parent type does).
<Dmitrii-Sh> what I have in the pdb output in the bug:
<Dmitrii-Sh> (Pdb++) parent_type.__dict__.items()
<Dmitrii-Sh> dict_items([('__module__', 'ops.testing'), ('on', <ops.testing.Harness.begin.<locals>.TestEvents object at 0x7f935be611f0>), ('__doc__', None)])
<Dmitrii-Sh> (Pdb++) parent_type.__bases__[0].__dict__.items()
<Dmitrii-Sh> dict_items([('__module__', 'src.charm'), ('state', <ops.framework.StoredState object at 0x7f935c486400>), ('HAPROXY_ENV_FILE', PosixPath('/etc/default/haproxy')), ('__init__', <function HaproxyCharm.__init__ at 0x7f935c06faf0>), ('on_install', <function HaproxyCharm.on_install at 0x7f935befc160>), ('on_start', <function HaproxyCharm.on_start at
<Dmitrii-Sh> 0x7f935befc1f0>), ('on_stop', <function HaproxyCharm.on_stop at 0x7f935befc280>), ('on_config_changed', <function HaproxyCharm.on_config_changed at 0x7f935befc310>), ('on_backends_changed', <function HaproxyCharm.on_backends_changed at 0x7f935befc3a0>), ('__doc__', None)])
<jam> Dmitrii-Sh, then could we just do getattr(obj) instead of __dict__ ?
<jam> or if we're intentionally avoiding __get__ tricks, then object.__getattribute__ ?
<Dmitrii-Sh> going to try that now
<Dmitrii-Sh> hmm, self.attr_name can be None
<Dmitrii-Sh> jam: the first __get__ that will be run is going to have self.attr_name == None
<Dmitrii-Sh> and then the for loop will be invoked to look up an attribute name by an attribute value
<Dmitrii-Sh> so we also need to take care of this: `parent_type.__dict__.items()`
<Dmitrii-Sh> jam: e.g. `for attr_name, attr_value in inspect.getmembers(parent_type): ...`
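A sketch of the fix being discussed, walking the class dictionaries of the MRO directly. Unlike `inspect.getmembers`, this never calls `getattr`, so it cannot recurse back into `__get__` (the infinite recursion Dmitrii-Sh hit). Hypothetical simplified code, not the actual operator patch:

```python
class MroDescriptor:
    """Finds its own attribute name by searching every class in the MRO,
    not just owner.__dict__, so subclasses work too."""

    def __get__(self, instance, owner):
        # Reading cls.__dict__ directly avoids getattr(), which would
        # re-trigger this __get__ and recurse forever.
        for cls in owner.__mro__:
            for name, value in cls.__dict__.items():
                if value is self:
                    return name
        raise RuntimeError(
            "descriptor not bound on %s" % owner.__name__)


class MyClass:
    state = MroDescriptor()


class NewClass(MyClass):
    pass


print(MyClass().state)   # 'state'
print(NewClass().state)  # also 'state': found via MyClass in NewClass.__mro__
```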
<niemeyer> Folks, we don't need to *all* be working on the exact same issue
<jam> afaict Dmitrii is the only one actively coding, I'm giving feedback on how I think it could work.
<Dmitrii-Sh> using getmembers leads to an infinite recursion in some framework test cases so I'll need to explore more
<jam> so for things that do getattr hacks (as we tend to do here), you can often use object.__getattribute__(member) to avoid the class's __get__ method (IIRC).
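One caveat worth noting on that suggestion: `object.__getattribute__` still runs the descriptor protocol (what it bypasses is a custom `__getattribute__` override); reading the attribute straight out of the class `__dict__` is what actually skips `__get__`. A quick demonstration with hypothetical names:

```python
class Desc:
    def __get__(self, instance, owner):
        return "via __get__"


class C:
    attr = Desc()


c = C()
print(c.attr)                              # descriptor runs
print(object.__getattribute__(c, "attr"))  # descriptor STILL runs
print(type(C.__dict__["attr"]).__name__)   # raw descriptor object, no __get__
```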
<facubatista> jam, these methods are not implemented in the testing backend, e.g.
<facubatista>     def storage_list(self, name):
<facubatista>         raise NotImplementedError(self.storage_list)
<facubatista> jam, who should implement them? where should self.storage_list come from?
<jam> facubatista, they are part of the ModelBackend, and should be implemented in the Harness, just didn't get to the full interface
<jam> sorry, should  be implemented in _TestModelBackend, I had NotImplementedError there for us to know what things to add.
<facubatista> ah, for the future we should have a comment there (ideally, with name and date, like: XXX 2020-03-25 jam: need to implement this in future branches)
<facubatista> jam, do we have an issue open or a trello card to track this missing work?
<jam> not that I'm aware of.
<facubatista> jam, we should, otherwise we'll forget about it
<facubatista> jam, would you please? thanks!
 * facubatista is not really sure if in English that "would you please" is like in Spanish having it implicitly the "take care of", or sounds harsh -- if it sounded bad, was not my intention
<jam> facubatista, no, it is a genuine gentle request.
<facubatista> thanks
<jam> or at least I interpreted it as such. (probably you could do it otherwise :)
<Dmitrii-Sh> Created draft PRs for the charms I have at the moment: https://github.com/canonical/cockroachdb-operator/pull/1 https://github.com/canonical/keepalived-vrrp-operator/pull/1 https://github.com/canonical/haproxy-operator/pull/1
<Dmitrii-Sh> the repositories under canonical are empty except the license file
 * facubatista brb
<karimsye> I am rewriting the charmed-osm/mariadb-k8s charm with the operator framework and I am having some trouble providing the mysql relation and relation data.
<karimsye> Dmitrii-Sh: I tried to implement it by following the gitlab-k8s example but couldn't get it to work.
<Dmitrii-Sh> karimsye: I am up for a call to give you some guidance if you have time
 * jam eod
<facubatista> karimsye, can you pastebin what you currently have? thanks!
<karimsye> Dmitrii-Sh: ok thanks. I have to run an errand, I should be back in about 15 minutes
<Dmitrii-Sh> karimsye: ok, sgtm
<Dmitrii-Sh> I'll wait
<karimsye> Dmitrii-Sh: https://meet.google.com/uck-qvcz-aoq
<Dmitrii-Sh> karimsye: 2 min, need to send an e-mail
<karimsye> Dmitrii-Sh: np, I am trying to setup my environment as well
<karimsye> facubatista: the git repo for my charm (the files in src are not the latest) - https://github.com/SMAK1993/charmed-osm-mariadb-k8s
<karimsye> facubatista: my latest src/charm.py - https://pastebin.canonical.com/p/R8sKN7FWW3/
<karimsye> facubatista: another file being used src/interface_mysql_provides.py - https://pastebin.canonical.com/p/ZCVP43qBcV/
<karimsye> interface_mysql_provides.py is my version of https://github.com/johnsca/interface-http/blob/master/interface_http.py
<karimsye> but trying to implement the mysql interface/relation
<facubatista> karimsye, reading it
<karimsye> thanks
<karimsye> facubatista: in charm.py the class MariaDbProvides at the bottom was my attempt to imitate https://github.com/tvansteenburgh/charm-kine/blob/master/lib/charm.py. The current charm.py isn't using it but when I tried to use it in a similar manner to the charm-kine example, I wasn't able to successfully relate to cs:~charmed-osm/keystone-k8s
<facubatista> karimsye, let me better understand the behaviour you're NOT getting... if you deploy this code, where or when does the problem arise? or did you not get to deploy it yet?
<karimsye> facubatista: I deployed it and the error I got was:
<karimsye> https://pastebin.canonical.com/p/TDZD7yhNMq/
<karimsye> basically EventBase does not inherit from Object
 * facubatista checks the error
<Dmitrii-Sh> facubatista: on a call with karimsye and Narinder now
<facubatista> karimsye, ok, the code looks good AFAICT (just by looking at it) for everything related to hooking the charm and the interface together and having stuff flying around
<facubatista> going in
<Dmitrii-Sh> events are not Objects themselves so there's an issue in using shortcuts
<niemeyer> facubatista was going to look into this.. we should be able to make this work as well
<Chipaca> niemeyer: do you have a moment to go over StoredState?
<niemeyer> I do in about an hour or so
<Chipaca> niemeyer: works for me
<facubatista> niemeyer, regarding the "having shortcuts in events" thing, I never reached a conclusion, got lost in the code; I can take another pass at it
<facubatista> niemeyer, regarding the current problem, it was something else really; funnily enough, we could locate it after applying a patch from this branch https://github.com/canonical/operator/pull/195/files
<Chipaca> facubatista: ping?
<facubatista> Chipaca, pong
<Chipaca> facubatista: meet plz
<Chipaca> hmm
<Chipaca> niemeyer: not sure if you saw my ping, or if you replied
<facubatista> Chipaca, nothing from you since the "meet plz" to me and your "hmm"
<Chipaca> something seriously wrong with my thunderbolt ethernet thing
<Chipaca> facubatista: ack, thanks
<Chipaca> niemeyer: ping :)
<Chipaca> nice tracebacks in dmesg
<niemeyer> Chipaca: Just grabbing a cuppa and will be with you
<Chipaca> augh, machine is now all weird
<Chipaca> rebooting, bbiab
<niemeyer> > niemeyer: not sure if you saw my ping, or if you replied
<niemeyer> Chipaca: There was no ping here, FWIW
<Chipaca> niemeyer: yep, facu told me so (so then i pung)
<niemeyer> Chipaca: I'm ready when you are
<Chipaca> now back, without the thunderbolt ethernet
<Chipaca> niemeyer: standup meet work for you?
<niemeyer> Yeah
<Chipaca> ok
<Dmitrii-Sh> facubatista: alright, I am done with the call. For the purposes of this charm, storing their names is enough.
<facubatista> Dmitrii-Sh, wonderful!
<facubatista> Dmitrii-Sh, I wonder about the other code, though, that was storing the apps
<Dmitrii-Sh> the idea there is that .serve needs to be called once per relation-id
<Dmitrii-Sh> Cory's original code did it by storing app names
<Dmitrii-Sh> so if you've seen an app already, you don't need to write relation data to your own unit's data bag multiple times
<Dmitrii-Sh> it is an idempotent operation which doesn't generate new -changed events for remote units if there are no data changes but this code seems to be avoiding extra relation-set calls
<karimsye> Dmitrii-Sh, facubatista thanks again!
<facubatista> karimsye, anytime!
<Chipaca> niemeyer: I need to go make dinner, but thought I'd point you at this before going
<Chipaca> https://github.com/canonical/operator/compare/master...chipaca:heritage#diff-6753f94ec7178cc8798db3ae94d35100R1255
<Chipaca> need to figure out what's going on, but if I uncomment that line, the test suite hangs
<Chipaca> that's generally considered a bad thing
<niemeyer> Chipaca: Test is creating a framework at the end without closing it
<niemeyer> Chipaca: It must be hanging while opening the next framework, waiting for the former one to go away
<Chipaca> niemeyer: THANK YOU, that was it
<Chipaca> i think we should split the stored state out into its own file; it deserves its own book in the docs :)
<Chipaca> EOD here
<Chipaca> supermarket has changed its timetable, so that was a waste of time. I'll have to go tomorrow morning early.
<facubatista> jam, there I commented on #196
<facubatista> jam, beyond that, we need to talk about _TestingModelBackend... some of its private structures (_relation_ids_map, _relation_names, _relation_list_map, _relation_data) are being used/changed from *outside* the class
<facubatista> jam, that could potentially lead to inconsistencies between those structures (like a relation id being in _relation_ids_map but not in _relation_data) that would lead to super subtle bugs in the future
<facubatista> jam, it's kind of fine to have several structures to access the same "data corpus", but we need to ensure consistency
<facubatista> jam, anyway, let's talk on Thursday
<facubatista> (I'm off tomorrow)
<facubatista> so! see you all on Thursday!
 * facubatista eods
#smooth-operator 2020-04-01
<t0mb0> Hey stub, in the old reactive way of doing things if I wanted to execute a piece of code that configures something I'd just set some reactive state and configure a @when handler. Is the right pattern in the operator framework to create a custom EventBase and then register it with framework.observe? Are there examples of creating a custom event that isn't due to a juju hook or relationship state change?
<stub> One way is just to call the function directly.
<stub> Otherwise, yeah, you define an event, register an observer, then fire the event when you want the observer to run
<t0mb0> stub, right but atm I have a chicken and the egg problem where I need to wait for the pod to be ready before I can call the function
<stub> I'm just deferring events in that case
<t0mb0> stub, yeah that's what I'm doing too :(
<stub> 'if not_ready: event.defer(); return'
<t0mb0> yeah I'm doing that too but it means I have to pass this event object around
<t0mb0> i'd much rather just just set some state then signal a new event
<t0mb0> than have to pass the event around and have massive callbacks
<t0mb0> er callstack even
<stub> Why fire and event to trigger the observer, instead of calling it directly?
<stub> oh, I guess you want to defer it there if things are not ready
<t0mb0> yeah
<t0mb0> I don't want to configure my application in the same function where I spin up the pod
<t0mb0> I want to spin up the pod and issue a new event, say "init", then have my init handler just defer loop until the pod is live
<t0mb0> I can get it to work, I wondered if there was a cleaner way than passing the even down the stack
<stub> https://pastebin.ubuntu.com/p/pgGQKdkFVb/ is roughly how it should go
<stub> It gets more complex if you want to add behavior to your events or extra __init__ arguments
<stub> https://pastebin.ubuntu.com/p/RHMcTQPJBv/ sorry (typo)
<t0mb0> stub, so line 19 is where I am signalling an on.ants_in_my_pants event?
<stub> Yup
<t0mb0> awesome thanks!
<stub> So the emit would happen after the pod_set_spec
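The pattern stub sketches in the pastebin can be illustrated roughly like this. The tiny `FakeEvent`/`FakeEmitter` classes below are stand-ins for the framework's EventBase/EventSource machinery, not the real API, written so the defer/re-emit control flow is visible in isolation:

```python
# Minimal stand-in sketch: define a custom event, observe it, emit it after
# the pod spec is set, and defer while the pod isn't ready. Re-emission of
# deferred events (which the real framework does on the next hook) is
# simulated here by calling emit() again.

class FakeEvent:
    def __init__(self):
        self.deferred = False

    def defer(self):
        self.deferred = True

class FakeEmitter:
    def __init__(self):
        self._observers = []

    def observe(self, fn):
        self._observers.append(fn)

    def emit(self):
        event = FakeEvent()
        for fn in self._observers:
            fn(event)
        return event

pod_ready = False
log = []

def on_init(event):
    # 'if not_ready: event.defer(); return'
    if not pod_ready:
        event.defer()
        return
    log.append("configured")

init = FakeEmitter()
init.observe(on_init)

first = init.emit()    # pod not up yet: handler defers
pod_ready = True
second = init.emit()   # emitted again later: handler runs
```

The real framework version would subclass `EventBase`, declare the event on the charm's events object, and call `self.framework.observe(...)`, but the defer-then-retry flow is the same.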
<Chipaca> good moring!
<Chipaca> morning, also
 * Chipaca decides the typo is ominous and cancels the whole day
<jam> at least it wasn't "mourning"
<Chipaca> indeed
<niemeyer> Good morning
 * niemeyer reads stub's pastebin and notices facubatista recent change will break people up
<niemeyer> Events was renamed to EventSet
<Chipaca> niemeyer: which pastebin?
<niemeyer> Chipaca: There was a conversation here in the channel shortly before you joined
<mthaddon> https://pastebin.ubuntu.com/p/RHMcTQPJBv/
<mthaddon> ^ that's the one
<Chipaca> ta
<jam> niemeyer, actually it doesn't break that code, as it is using CharmEvents and EventBase which didn't change.
<jam> but yes, custom events on types that use EventsBase break unless we do something like "@deprecated\nEventsBase = EventSetBase"
<niemeyer> jam: If it didn't change it must change, or we need to revert it.. CharmEvents needs to be CharmEventSet
<Chipaca> niemeyer: https://github.com/canonical/operator/pull/191#issuecomment-605875917
<niemeyer> Otherwise we're neither here nor there
<niemeyer> Chipaca: Well, yeah, that sounds like an awkward non-agreement
<niemeyer> Chipaca: You raised the inconsistency, jam said he liked the old naming better, facundo merged it
<t0mb0> I have a requirement to store some secrets, previously we stored them as files inside the leader pod but I'd like to change that so we store these as k8s secrets, does the framework support that yet? I see in this example charm the developer has written his own k8s API https://github.com/majduk/charm-k8s-mongodb/tree/b0e55efd53d38abe26ea9deb271af9922b88b238/src
<t0mb0> will the StoredState support this kind of thing?
<stub> Personally I'd revert given they are not sets (like set(), frozenset() or collections.abc.Set)
<Chipaca> stub: the issue is EventBase vs EventsBase
<niemeyer> stub: Yeah, I think we should just revert it.. it sounds like multiple people don't agree the naming is better
<niemeyer> t0mb0: What API do you need to access?  We definitely want to grow the supported APIs over time
<stub> If we have CharmEvents, we can have ObjectEvents
<t0mb0> niemeyer, the kubernetes secrets and configmaps
<Chipaca> stub: ooo
<Chipaca> niemeyer: wdyt?
<t0mb0> we're initialising a wordpress deployment with a random password and it'd be nice if we could expose that so operators can retrieve it if the pod dies
<niemeyer> Chipaca, stub: Sounds fine to me
<Chipaca> PR coming up
<Chipaca> niemeyer: another thing: the check for having multiple names for a single instance of a StoredState is bogus, never worked AFAICT, and is untested. Should I drop it, or change it to work and add tests?
<Chipaca> this is for class C: foo=StoredState(); bar=foo
<jam> Chipaca, did you see my suggestion?
<Chipaca> jam: I did. I'll PR that up as well so we can discuss.
<jam> (when iterating attributes, just don't exit on first discovery)
<Chipaca> ah
<Chipaca> no i did not see that one
<niemeyer> Chipaca: I see it as preventing an obvious invariant from being broken.. in which case would it be fine to ignore the invariant being wrong?
<jam> the real question is whether that actually works.
<jam> niemeyer, the problem is that it doesn't actually find the invariant
<niemeyer> jam: Invariants don't have to be looked for :)
<niemeyer> jam: If the attribute name is wrong, this is all bogus..
<niemeyer> jam: But I agree with your statement, we might look further to detect a bug
<Chipaca> so I can change the code to not break out of the loop, and if it finds itself twice it'll catch that
<Chipaca> I'll do that, and then run to the shops
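A rough sketch of that fix, with stand-in names (`StoredStub`, `find_attr_name`); the point is only that the attribute scan collects every matching name instead of returning on the first hit, so `foo = StoredState(); bar = foo` is caught:

```python
# Hedged sketch: scan the class dict without breaking out of the loop, so a
# single instance bound under two attribute names violates the invariant.

class StoredStub:
    """Stand-in for StoredState; only object identity matters here."""

def find_attr_name(cls, instance):
    # keep scanning past the first match instead of returning immediately
    names = [name for name, value in vars(cls).items() if value is instance]
    if len(names) > 1:
        raise RuntimeError("{} shared by {}".format(
            type(instance).__name__, " and ".join(names)))
    return names[0]

class Ok:
    foo = StoredStub()

class Broken:
    foo = StoredStub()
    bar = foo   # two names, one instance: the invariant is violated
```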
<stub> Can we have relids being relname:16 format again? I just spent ages tracking down a bug, where I was failing to retrieve data by relid because somewhere thought it should be str(15) instead of int(15)
<Chipaca> I'll look at @deprecated after
<mthaddon> t0mb0: essentially we're talking about interacting with kubernetes secrets here (being able to store and retrieve them). I think filing an issue about that be the best way to go for now
<niemeyer> stub: The relation ID is really the number.. the name is there to aid people when they are in a hook context. But in Python we have both the name and the ID at hand.
<niemeyer> mthaddon, t0mb0: +1
<niemeyer> And also +1 in supporting the API internally somehow
 * mthaddon nods - sounds good
<stub> yeah, and now I need to ensure nothing casts it to a string or it becomes a different key according to dictionary lookups. Just stuffing in a pile of asserts to see what fed me a string instead of an int.
<niemeyer> stub: Good question.. please let me know when you find out. If we have other APIs manipulating these IDs independently, we definitely need to look into it.
<t0mb0> stub, do you know if there is a maximum number of times an event will be reemitted?
 * stub shrugs
<jam> t0mb0, under what circumstance? as in if you keep defer() then will it even not reemit ?
<niemeyer> stub: This should only be an issue if you're running juju CLI commands next to the framework, in which case the answer would be to use the APIs in the framework
<niemeyer> We'll soon start talking directly to the agent, and allow iterating over multiple hook events if desired.. this will not work if one is interacting with the agent externally
<t0mb0> jam, so i'm basically defer looping until self.model.get_binding('website').network.ingress_addresses[0] returns something
<t0mb0> I'd expect juju debug-log to be spamming my logger statements but it doesn't
<jam> t0mb0, the only thing that causes deferred events to reemit is another hook invocation. if you are deferring and then exiting the hook, you are waiting for the next hook to tell you to check.
<jam> that hook might  be 'config-changed' or 'update-status', but nothing that would 'spam the logs' as update-status is on a 5-min timer.
<t0mb0> oh
<stub> t0mb0: (and you won't see log.debug() level messages atm if they are missing entirely)
<jam> t0mb0, generally state that you read from Juju won't change during the lifetime of a hook anyway (otherwise hooks have to check at the end of the hook if something is different than at the beginning of the hook)
<jam> juju generally fires another hook to let you know if stuff that you responded to changed (eg, you get 'config-changed' when config is different than the last time config-changed ran)
<t0mb0> jam, so if I'm waiting for my pod to be active before resuming the rest of the config-changed stuff what should I be doing?
<jam> (also, I think set-pod-spec doesn't take effect until the hook exits, either, but I know there was some question there)
<jam> t0mb0, so I thought there was supposed to be an event if your IP address changes, which might match what you're hoping to do?
<jam> Juju doesn't currently feed the charm operator the full k8s events, IIRC
<stub> Sounds like check if things are ready in a config-changed hook, and if not just wait for the next config-changed hook
<t0mb0> jam, https://github.com/johnsca/charm-gitlab-k8s/blob/master/src/charm.py this charm makes use of a custom HTTPInterface is that what you're talking about?
<t0mb0> stub, but there won't be another config-changed hook right? because you're returning from the on_config_changed function?
<t0mb0> unless you do a self.on.config_changed.emit() or something similar?
<t0mb0> if the state isn't set
<stub> There will be another config-changed hook when something changes that Juju wants to tell you about. So if it is sending one when the pod IP addresses change, you can wait until that config-changed hook is fired. Might not be the first, won't be the last.
<niemeyer> t0mb0: Also, this is normal code.. you don't have to (and most likely should not) have all of your logic living _inside_ the on_config_changed method
<niemeyer> Instead, ideally code should be organized in methods and types that properly reflect what they are doing, and then you simply call them by their name whenever an appropriate situation shows up
<t0mb0> niemeyer, that's how I roughly have it now, I'm checking to see if various variables on my state object are set and calling methods when they are and returning an event.defer() if they're not
<niemeyer> t0mb0: Yeah, if I get what you're saying that's a pattern I've seen repeatedly in charms, but it doesn't feel like a very nice way to organize code in general.. what I've seen looks like:
<niemeyer> 1) Ensure everything in the state and in the machine about everything the charm could possibly want to do is as it should be
<niemeyer> 2) Do everything the charm possibly would like to do based on other checks previously done
<niemeyer> The problem with this approach is that those several checks are detached from the code they actually "protect", and local encapsulation is broken down
<niemeyer> A much better approach is to do things in smaller pieces, with local information.. for example, we want small components that are responsible for their own knowledge, and emit high-level events that are meaningful for what they do
<t0mb0> niemeyer, so in that case I'd need k8s to tell Juju the pod is alive and ready so I can observe that event right?
<niemeyer> t0mb0: Yes, that definitely sounds like something we want to have more comfortable access to
<t0mb0> niemeyer, i'm guessing that doesn't exist yet and I should raise an issue?
<niemeyer> t0mb0: Yes please.. but also, meanwhile let's find a nice way for you to move forward with your needs without being blocked on that issue
<t0mb0> I feel like for initial setups of wordpress we don't mind waiting 5 minutes or so for another hook to run
<niemeyer> t0mb0: Also note that what I described above says more about the overall organization of the charm itself than about external needs
<niemeyer> t0mb0: In other words, improving things in that way doesn't need external events
<t0mb0> niemeyer, yeah it makes sense. that's why i asked (s)tub about writing custom events earlier
<niemeyer> t0mb0: Right, we made events very easy to create and use precisely so we can reuse that form of encapsulation into our own custom components as well
<niemeyer> t0mb0: As a random example, it would make sense for example to have a MySQLServer endpoint handler that would consume the relation events, and emit higher-level on_database_available, etc
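That endpoint-handler idea might look roughly like this. `DatabaseEndpoint`, `observe_database_available` and the credential keys are made-up illustrative names, and the event plumbing is a plain-Python stand-in rather than the framework's:

```python
# Hedged sketch of a small component that consumes low-level relation
# events and only surfaces a higher-level "database available" event once
# the data it is responsible for is complete.

class DatabaseEndpoint:
    def __init__(self):
        self._observers = []

    def observe_database_available(self, fn):
        self._observers.append(fn)

    def on_relation_changed(self, relation_data):
        # the component keeps its own knowledge local: only emit the
        # high-level event once the credentials are complete
        if "host" in relation_data and "password" in relation_data:
            for fn in self._observers:
                fn(relation_data)

seen = []
db = DatabaseEndpoint()
db.observe_database_available(seen.append)

db.on_relation_changed({"host": "10.0.0.1"})                    # incomplete
db.on_relation_changed({"host": "10.0.0.1", "password": "s3"})  # fires
```

The charm then observes the one meaningful event instead of re-checking raw relation data in every hook handler.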
<niemeyer> I'm somewhat surprised that the k8s support in juju doesn't inform the charm of the pod being ready.. I wonder what kind of pattern people have been using so far
<niemeyer> Do people just fork off a daemon that keeps trying to do things with it?
<niemeyer> I will be digging further into k8s in the coming days
<t0mb0> niemeyer, it seems like the existing charms shy away from doing application initialisation and it's left to the user. I imagine you could bake initialisation logic into the docker image and then have some kind of lock to stop non leader pods from racing
<stub> So.... using the relation id as a dictionary key isn't great if you are going to convert it to json, cause it will be cast into a string and won't round trip
<niemeyer> stub: Where's the json conversion taking place
<niemeyer> ?
<stub> my code
<stub> stuffing a data structure into leadership settings in this case
<niemeyer> I see
<stub> >>> assert json.loads(json.dumps({1: True})) == {"1": True}
<stub> solution... stick to yaml :)
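stub's round-trip surprise in runnable form, plus one possible workaround (casting keys back after loading):

```python
import json

# json object keys must be strings, so int keys are silently coerced on
# dump and come back as strings on load -- the dict doesn't round-trip.
data = {1: True}
loaded = json.loads(json.dumps(data))
assert loaded == {"1": True}   # key type changed under us
assert loaded != data

# one workaround when all keys are known to be ints: cast them back
restored = {int(k): v for k, v in loaded.items()}
assert restored == data
```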
<niemeyer> stub: Yeah, I'm surprised the json package doesn't complain about it
<niemeyer> "Oh, sure.. here is your data completely different now sir."
<niemeyer> t0mb0: I'd like to significantly improve the control charms have over their workload containers
<niemeyer> t0mb0: That's one of my key goals over the next couple of cycles
<t0mb0> niemeyer, I mean something like an "pod-ready" event/hook would be nice
<t0mb0> once the readinessProbe is green
<niemeyer> I need to get my hands dirty to get a better view of how we'll accomplish that
<niemeyer> t0mb0: Yeah, something like that as well
<Chipaca> ok, now yes, to the shops
 * Chipaca really really needs to go
<stub> git+ssh://git.launchpad.net/~stub/interface-pgsql/+git/operator seems to be working now if anyone wants a look. Unit tests next week.
<stub> And the 'operator' branch of git+ssh://git.launchpad.net/~stub/plinth/+git/plinth-charm is a (private) non-k8s operator framework charm using it, still needing its actions wired up and a tidy
 * stub wanders off into the long weekend
<niemeyer> stub: Thanks for the branches, and have fun!
<niemeyer> stub: Or have a good isolation I guess :)
<stub> I'm shopping for everyone tomorrow... got to keep my ancestors fed.
<niemeyer> stub++
<Chipaca> jam: was @deprecated a suggestion to use https://pypi.org/project/deprecation/, or was it something else?
<jam> Chipaca, I think I looked at 2 or 3 different ways of doing deprecation. It feels a bit overdone to do deprecation before we hit a 1.0 but certainly I do think we want a *plan* around deprecation.
<jam> If we feel we need it from now, then we can
<jam> Chipaca, given the simplicity of deprecation, it would be nice to not need an external lib
<Dmitrii-Sh> Added unit tests for a peer relation to one of my charm PRs: https://github.com/canonical/cockroachdb-operator/pull/1/commits/95c61ebf40eaf812eb46e4b90055f47f8545704e (they need PR #196 to be merged to pass because of the issue #202)
<Dmitrii-Sh> https://git.io/JvdO4 - TestCharmClass here is something worth discussing I think. The peer relation interface code expects a parent charm class to provide an event so that it can expose cluster id details on a relation when it happens. However, using the concrete CockroachDBCharm type in interface-specific unit tests is problematic because
<Dmitrii-Sh> interface component events can trigger CockroachDBCharm handlers that do real work (such as writing config). Instead of using mocking, I am using CharmBase here but with an event set for CockroachDBCharm.
<Dmitrii-Sh> This provides me with a way to still trigger CockroachDBCharm-specific events for the interface code to react to but without invoking the charm logic I don't need.
<jam> Dmitrii-Sh, so why isn't the code that depends on an event/generates an event a separate class? it doesn't have to be, but it might be interesting to think about keeping the Charm as only generating Juju events, and having something else that generates higher level events.
<Dmitrii-Sh> jam:  *thinking about how to rework it*. The first impression is that I would still have to avoid triggering handlers but only on a different type.
<Chipaca> so many reviews to do
<Chipaca> so little coffee
<Chipaca> also, PEP 9999
<jam> Chipaca, so we should use foo,bar,baz,quux as animal-friendly alternatives?
<Chipaca> jam: I say we go anti-pep-9999 and make all our variables refer to animals being mutilated in 'orrible ways
<jam> deadbeef we have
<jam> slaughterhouse
<Chipaca> 'foo' can become 'gib'
<jam> Dmitrii-Sh, so I would think you'd want abstractions for places that your code interacts with "other things" like the filesystem, and then in tests you would substitute an alternate implementation of those that doesn't actually interact with the system.
<Dmitrii-Sh> jam: ty, I see what you mean. The charm itself would only poke an implementation of this abstraction which I would replace in the test in some way.
<jam> right
<jam> certainly you don't want your actual charm rewriting /etc/postgresql.conf while you are testing.
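One way that substitution could look, as a minimal sketch; `RealConfigWriter`, `FakeConfigWriter` and the config path are invented for illustration:

```python
# Hedged sketch of jam's suggestion: put interactions with "other things"
# (the filesystem here) behind a small abstraction, and substitute a fake
# in tests so the charm never rewrites the real config file.

class RealConfigWriter:
    def write(self, path, content):
        with open(path, "w") as f:
            f.write(content)

class FakeConfigWriter:
    """Test double: records writes instead of touching disk."""
    def __init__(self):
        self.writes = {}

    def write(self, path, content):
        self.writes[path] = content

def configure_postgres(writer, max_connections):
    # charm logic only talks to the abstraction, never to open() directly
    writer.write("/etc/postgresql.conf",
                 "max_connections = %d\n" % max_connections)

fake = FakeConfigWriter()
configure_postgres(fake, 100)
```

In production the charm passes a `RealConfigWriter`; the test asserts against `fake.writes` without the system ever being touched.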
<niemeyer> Is that PEP-9999: https://www.blackcomb-shop.eu/en-PT/skirt-craft-1904867pep-9999black/
<niemeyer> It's the first hit on Google at least..
<niemeyer> It's also no longer available, so we don't need to worry
<Chipaca> niemeyer: https://github.com/gerritholl/peps/blob/animal-friendly/pep-9999.rst
<jam> http://www.montypython.net/scripts/fruit.php "Self Defense Against Fruit" I don't know that one nearly as well.
<Chipaca> niemeyer: aw, but bugs with side effects are my favourite!
<Chipaca> they're the gift that carry on giving!
<niemeyer> I know.. I also like how colorful they are :)
<Chipaca> niemeyer: got a couple of questions about debug-hook, after talking with rick h and reading through the discussion on the spec
<Chipaca> niemeyer: if you have 15 minutes to a half hour that'd be good
<Chipaca> (non-overlapping holidays + feature freeze means I'm trying to get this sorted myself, rather than wait for facubatista to be back)
<niemeyer> Chipaca: I'm just off a call.. if you have time in ~5 minutes, I'll prepare a mate and we can dive into it
<Chipaca> niemeyer: sgtm
<niemeyer> Alright
<niemeyer> Chipaca: Standup?
<Chipaca> niemeyer: omw
<Chipaca> niemeyer: i'll ping you for a meet as soon as i get out of my next one
<jam> Chipaca, oth
<jam> Chipaca, anything I can help with?
<Chipaca> jam: if you're around, you're welcome to join
<Chipaca> jam: but it is rather late for you
<jam> true enough, though I wouldn't worry terribly about having the Charmcraft code finished for Juju feature freeze.
<Chipaca> jam: so I hear :)
<Chipaca> jam, niemeyer, standup meet if you could
<Chipaca> jam: very optional for you
<jam> I can give it a few mins
<niemeyer> I need a couple of minutes, but will be there
<Chipaca> crazy day today
<Chipaca> i got to do 0 of Dmitrii-Sh's reviews
<Chipaca> i also got to run 0
<Chipaca> i'm going to go run because daylight is nearly over, then make dinner, then review some things
<Chipaca> ttfn!
<niemeyer> Enjoy!
<Dmitrii-Sh> Would be good to get more eyes on this one - https://github.com/canonical/operator/issues/206. It seems to ask for `goal-state`-like functionality but I suggested a different idea.
<Dmitrii-Sh> this seems to be a common problem when it comes to setting up stateful applications - you need to know whether to have one or multiple units at the beginning of a unit lifetime to initialize a cluster state from the initial cluster unit. In some cases this is merely about setting an initial replication factor to 1 vs 3 (or some other value).
<jam> Dmitrii-Sh, I'm of the mind that we should let them know about it. But the other option is to have 'we've told you what we're going to tell you for now' events that let you make progress.
<jam> eg, "start" will be called after all the relation-created
<jam> so having some form of "we're done telling you about relation-joined for now"
<jam> but relation-joined is blocked on the units actually coming up, and doesn't let you know what the admin's *intent* of deployment is
<Dmitrii-Sh> jam: yes, it feels like a unit needs reasonable checkpoints with enough data about the overall model to make progress. Most importantly, at the beginning of a lifetime where initial state is created.
<Dmitrii-Sh> one other use-case that comes to mind besides clustering is setting up TLS
<Dmitrii-Sh> at the beginning it is important not to have any exchanges without TLS between units if the end goal is to have TLS enabled
<Dmitrii-Sh> you might later decide to remove a relation with a particular CA charm that generates certs/keys and choose another but this is unlikely to change quickly
<niemeyer> There are different ideas intermixed here
<niemeyer> We already discussed this (jam and myself) a while ago and we both seem to agree that having a number of units as a hint, and a created relation hook, sorts out a good part of the desire
<niemeyer> We should avoid having something like goal state, though, as it creates a mixed and confusing reality for the unit
<niemeyer> Another part of what's described above sounds like just configuration
#smooth-operator 2020-04-02
<jam> morning all
<Chipaca> good morning!
 * Chipaca stayed up too late and now needs more coffee
<Chipaca> is there a good reason for _is_our_unit and _is_our_app to be called like that, as opposed to something more unified like _is_ours (or _is_self)?
<niemeyer> Morning Chipaca
<niemeyer> Chipaca: Reviews mainly.. it wasn't easy to get to those names either :)
<jam> morning Chipaca and niemeyer
<niemeyer> "Is ours what?" "self is something else" and it goes on..
<niemeyer> Morning jam
<jam> Chipaca, niemeyer : if the parameter of 'juju debug-code' is now '--at', should the env var still be JUJU_BREAKPOINT?
<jam> 'JUJU_DEBUG_AT' or something along those lines feels like it matches better.
<jam> The doc is also now significantly out of date, and I'm not sure I was around for all the conversations.
<jam> I pretty much just understood changing "juju debug-hook --breakpoint" to "juju debug-code --at" but the rest seem to stay the same.
<niemeyer> Sounds reasonable and generic enough to be useful in multiple ways.. can't think of better alternatives right away
<Chipaca> niemeyer: are you talking about AT or about BREAKPOINT there
<Chipaca> can be read both ways
<niemeyer> The suggestion
<niemeyer> (AT)
<Dmitrii-Sh> Chipaca: added a link to the spec doc here https://trello.com/c/vcZTwoOI
<Dmitrii-Sh> I am going to update it with more details
<niemeyer> We had an agreement to have 160 as the line length
<niemeyer> I also asked to have "sandbox" into the git ignore list
<niemeyer> Looks like both have been undone by 907672c0901168903bf17c94165b034881bd35bd
<niemeyer> Looking at the diff here https://github.com/canonical/operator/pull/162/files
<niemeyer> The new lines on the right are consistently less readable
<niemeyer> https://usercontent.irccloud-cdn.com/file/5g77t1LD/image.png
<niemeyer> Do you really prefer reading the one on the right?
<niemeyer> There are several of those...
<niemeyer> Can we please revert this PR?
<Chipaca> niemeyer: 160 is much too wide
<niemeyer> Chipaca: I don't care about the number as much as I care about the result
<Chipaca> niemeyer: yes, i prefer the one on the right
<niemeyer> We can make it 1000 if it looks better
<niemeyer> Chipaca: Really!? Are you seriously saying you read that completely broken up text and code better than what you have on the left side?
<Chipaca> niemeyer: and sandbox is still in git ignore
<Chipaca> niemeyer: it's unclear what sandbox is in the gitignore for, but it's still there
<Chipaca> without a comment or anything it's a prime candidate for cleanup
<niemeyer> Chipaca: Both the line length and sandbox have been discussed before you were in the team
<niemeyer> Chipaca: The line length is 160 precisely so that it reads better.. we got several examples and decided that in practice it ends up looking nicer with longer lines than with arbitrarily broken down indentations
<niemeyer> Chipaca: The sandbox is a directory I personally use to put things that should not be in the tree
<jam> So for my personal use, I prefer narrower than 160 for line lengths, as I tend to have 2 or 3 files open concurrently (one for the code, one for the tests of the code, sometimes another as a reference for the code I'm implementing)
<jam> https://www.python.org/dev/peps/pep-0008/#maximum-line-length itself recommends up to 99 chars,
<jam> I don't prefer the 'indent to where the (' opens, as it tends to waste a lot of horizontal space
<niemeyer> jam: How do broken down lines look like in your terminal?  Does it fit more than 99 chars, or is it broken down further?
<jam> so I tend to do the return immediately after the (which causes it to indent just a single extra indent level)
<niemeyer> jam: Because reviews look terrible..  it's neither 99 nor 160
<niemeyer> jam: Which means we get the manual line breaking, and then the enforced breaking on top of that either way
<niemeyer> jam: (as we had discussed many months ago)
<jam> I believe the actual line length discussion predates me as well.
<jam> In my editor, it tends to horizontal scroll, in a terminal, it wraps the line
<niemeyer> jam: Or maybe it just wasn't interesting enough to be talking in the channel.. ;)
<niemeyer> jam: Okay, so 99 will *also* wrap and horizontal scroll..
<niemeyer> Yeah, just barely.. conversation happened on Oct 3rd, you joined on Oct 2nd
<jam> fair enough. I remember it being 160, but certainly didn't have a strong rationale for defending it staying there.
<niemeyer> It's okay..  the rationale is not super insightful other than we looked at code and it looked worse when indenting too much
<niemeyer> Reading the delta seems to confirm that still..
<niemeyer> Sometimes it's like this:
<niemeyer> https://www.irccloud.com/pastebin/LDdXKI2x/
<niemeyer> Sometimes like that:
<niemeyer> https://www.irccloud.com/pastebin/nIyHc8G3/
<niemeyer> And sometimes this:
<niemeyer> https://www.irccloud.com/pastebin/1ex3TAbJ/
<niemeyer> Versus stuff like this:
<niemeyer> -                        raise RuntimeError("StoredState shared by {}.{} and {}.{}".format(parent_tname, self.attr_name, parent_tname, attr_name))
<niemeyer> https://www.irccloud.com/pastebin/cy32xfyL/
<niemeyer> I easily prefer the latter, maybe because I'm unusual in that I don't appreciate jumping my eyes around  with no semantic involved and do appreciate its tightness. We can discuss further in the standup.
<jam> so personally, while I don't prefer arbitrarily wrapped differently each time, I do prefer the wrapped form. I think it is because it decreases the linear distance
<jam> I think there was also an issue with github rendering, at least IIRC that is what Chipaca put the PR up for.
<niemeyer> The screenshot I first posted above was from GH
<facubatista> Good morning everyone!
<facubatista> So, you decided to go -vvvvvv mode the day I was off? :p
 * facubatista has a lot of stuff to read
<Chipaca> facubatista: it's _because_ you were off
<Chipaca> facubatista: it's you that's been inhibiting us
 * Chipaca just out of a meeting and needs to read up
<facubatista> :)
<jam> I run a 27" monitor, and while I can go wider, I tend not to maximize browser windows because they are too wide.
<jam> In my normal width: https://imgur.com/cSVJFWU the old one wraps but the new one doesn't.
<niemeyer> jam: To be clear, my point was never that the monitor was wide enough to fit 160 chars
<jam> If I max the window, neither wraps https://imgur.com/kfjK3IR
<jam> but I actually find it hard to review at that size, because the two sides are too far apart
<jam> it is hard visually to follow from left to right for me
<niemeyer> jam: The point is rather that this is not a great way to force the formatting of code, because in practice it ends up looking worse in multiple places
<jam> and if I make it narrow https://imgur.com/1YEpDMK it looks awful in the new way
<niemeyer> jam: Extremely long lines can usually be reorganized to not be extremely long by using proper coding practices
<niemeyer> jam: And we can always *choose* to break lines down where really appropriate
<niemeyer> jam: What the limit does is to force people to break them down whatever it takes.. and we end up with the stuff mentioned
<niemeyer> That's subjective, though.. so let's just raise that up in the next standup and see where the team stands
<niemeyer> jam: This "narrow" view is actually what I get naturally over here, and I'm on a high-resolution (4k) 14 inch monitor
<jam> for whatever reason my 'natural' window size is about 1200x1200 pix. it fits most content cleanly (bugs on launchpad, google docs, etc)
<niemeyer> My resolution is 2560x1440 (2k actually)
<niemeyer> I use a tiling WM, but for that sort of content the window is always full screen
<jam> Chipaca, facubatista : I tried to encapsulate the discussions that happened yesterday around "Debugging live charms" and brought up a couple more questions
<jam> https://docs.google.com/document/d/1zKwzo26bNVFgohaRB_kdUCsFPY3OcjfcpQklGpkOHpc/edit#
<jam> I also have 'juju debug-code' sketched up, testing it now.
<facubatista> jam, will go there when I finish all the reading here, 5'
<facubatista> I don't like this breaking lines style:
<facubatista>   some_call(foobar,
<facubatista>             other,
<facubatista> not only because of the totally wasted space there, but also because it's indented (75% chance) to a weird column
<facubatista> and pathologically leads to code like this: https://pastebin.canonical.com/p/fb7XW8yJWx/
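The two wrapping styles under discussion, sketched with a made-up function (`do_something_with` and its arguments are invented for illustration):

```python
# Comparing the two ways of breaking a long call: aligning arguments to the
# opening parenthesis (the style facubatista dislikes) versus breaking
# right after the '(' with a single extra indent level (the style jam
# describes). Both produce the same result; only the layout differs.

def do_something_with(first_argument, other_argument, third_argument=None):
    return (first_argument, other_argument, third_argument)

# aligned to the open parenthesis: wastes horizontal space, and the indent
# column shifts whenever the function name changes
result_a = do_something_with("some value",
                             "another value",
                             third_argument="third")

# break immediately after the '(': one predictable extra indent level
result_b = do_something_with(
    "some value", "another value", third_argument="third")

assert result_a == result_b
```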
<facubatista> niemeyer, for me code at the left here is totally broken: https://usercontent.irccloud-cdn.com/file/5g77t1LD/image.png
<niemeyer> facubatista: Broken in which sense? You're probably not using that word in ways I normally read it :)
<niemeyer> Dmitrii-Sh: https://github.com/canonical/cockroachdb-operator/pull/1 has an initial review
<facubatista> the idea is also to never end up looking like the right side; that's why a sensible column limit like 99 is chosen, so lines are not too wide but you also avoid those line breaks
<facubatista> niemeyer, I mean, my eyes can not parse that; the moment I have a line that wraps and gets too long, I lose the indentation
<niemeyer> facubatista: Agreed.. and that's my main concern, to be clear. 160 is an arbitrary number to prevent such formatting.. I don't have anything locally that would make 160 special.
<niemeyer> facubatista: But, that formatting on the right is what I consistently get..
<niemeyer> I probably need a wider monitor, which is ironic given that I'm the one advocating for wider lines
<Dmitrii-Sh> thanks, also submitted the tls-certificates interface for review https://github.com/canonical/operator/pull/210
<niemeyer> ANyway, lunch time here
<niemeyer> And we have a meeting in just a few
<niemeyer> See you soon
<Dmitrii-Sh> need to figure out what to do about python-cryptography for pull/210
<facubatista> aghhh, canonical's pastebin messes with the indentation
<facubatista> niemeyer, btw, what you see on the right there is also because it's indented where the parentheses break! that should be like this: http://linkode.org/#A3zKfjmPofCP2jgBR5eo5
<facubatista> (or like the second version there)
<jam> facubatista, I also prefer that wrapping
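For readers without access to the pastes, the two wrapping styles under discussion can be sketched roughly as follows (function names are illustrative only, not from the framework):

```python
def some_call(a, b):
    return (a, b)

# The "align to the open paren" wrapping facubatista dislikes: the
# continuation column depends on the call name's length, so it lands
# on an arbitrary column and wastes horizontal space.
def aligned_style(foobar, other):
    return some_call(foobar,
                     other)

# The preferred alternative: break right after the opening parenthesis
# and indent the arguments one level, so the indentation is predictable.
def hanging_style(foobar, other):
    return some_call(
        foobar,
        other,
    )
```

Both forms are equivalent at runtime; the disagreement is purely about readability at different column limits.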
<facubatista> niemeyer, stub, regarding the surprising json key: https://bugs.python.org/issue34972
<jam> facubatista, "it's how the format works", in that it doesn't support ints for keys, not that it treats ints as strings...
<jam> IMO
<facubatista> jam, yes, it was surprising for me too! I actually found the issue when going to report the issue, a couple of months ago :)
 * facubatista had it fresh because of that
<facubatista> jam, thanks for the massive editing in the Debugger doc
<jam> facubatista, It ended up a bit ugly, but I was hoping to show the changes as discussed, and make sure it was clear what was changing.
<facubatista> jam, yes, it's way better for me to see that, instead of having to read everything and diff current text to what was in my memory
<narinderguptamac> Hello all
<facubatista> hello narinderguptamac
<narinderguptamac> facubatista, was talking to Dmitrii about peer relations and got it clarified that any peer event will fire after the start event
<facubatista> narinderguptamac, nice!
 * facubatista -> lunch
<crodriguez> hello hello
<crodriguez> anyone tried to attach secrets with this new feature from juju 2.8? https://discourse.juju.is/t/k8s-spec-v3-changes/2698. It doesn't work for me and I'm unsure if it points to juju or the framework itself
<crodriguez> also, is there an example somewhere with the import of the k8s yaml manifest in the charm.py, instead of having to write the spec in a yaml format ourselves?  I believe calvin did it for his cdk-cats charm, but I don't think it was fully functional
<Chipaca> crodriguez: o/!
<Chipaca> crodriguez: we haven't done the work to support the new secrets thing AFAIK
<Chipaca> crodriguez: we will shortly
<crodriguez> Chipaca: aah ok then I'll wait before integrating that. Thanks! Is there a newsletter / discourse / etc. where updates, new features, etc are published?
<jam> Chipaca, crodriguez : so any pod spec that they give us we pass on through, I think you're crossing 2 different things.
<jam> crodriguez, the first thing that stands out from the thread is making sure your pod spec is version 3, beyond that *I* don't know what the values of the fields should be.
<jam> crodriguez, the actual bug you have is in your code, though
<jam> application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed if self.state.spec != new_spec:
<jam> that says you are looking at 'self.state.spec' but you never initialized the field
<jam> crodriguez, you'll want to have something like "self.state.set_default(spec=None)" in your charm's __init__.py
<jam> sorry def __init__():
<jam> crodriguez, I'm actually past EOD here, but if you can put your actual code somewhere we can give you feedback on it.
<crodriguez> thank you jam! I'll look into the init section of my charm... it might be missing things related to updates that have been made to the operator framework. My code is here https://github.com/camille-rodriguez/charm-mssql/tree/add-secret , feel free to look at it tomorr
<crodriguez> tomorrow*. I'll check the state of the spec like you mentioned
<Chipaca> crodriguez: what is _write_state in that code?
<jam> crodriguez, yeah, I think you used to have initialization in _write_state that was called by on_install, but when you got rid of that, you removed your initialization
<crodriguez> mhh, because on_install wasn't working until recently. Most of that code was written with the operator framework in late December/early Jan, and I'm getting back to it now. I need to clean up the on_start if on_install is the right way to go now
<jam> crodriguez, so in __init__ having it do self.state.set_default() is the recommended way
<jam> that way it is guaranteed to happen even after an upgrade/etc.
<crodriguez> okay I'll add that
<jam> but also won't ever overwrite the values that you've actually set.
<Chipaca> jam: you go have a good evening now :)
<jam> Chipaca, well, I have to hang around to get my PR reviewed. Might as well offer some tech support.
<Chipaca> aw
<crodriguez> mmh  "AttributeError: attribute 'set_default' is not stored"
<niemeyer> defaultS?
<niemeyer> Hi Camille :)
<jam> niemeyer, crodriguez : it's 'set_default', but it sounds like you have an out-of-date copy of ops
<niemeyer> I'm not 100% sure as it's been a few weeks, but that's what came to mind when you mentioned the error
<jam> if you're using submodules you'll need to 'git pull' in the subdir.
<niemeyer> Ah, and jam just proved me wrong.. efficiency! :)
<crodriguez> in the operator folder? it doesn't have info on the upstream repo
<crodriguez> I thought it was a command like update submodules or something similar..
<Dmitrii-Sh> crodriguez: if you have submodules in your repository, you can use this to update all of them: git submodule update --recursive --remote
<Dmitrii-Sh> but a prerequisite to that is actually adding it
<Dmitrii-Sh> git submodule add https://github.com/canonical/operator mod/operator
<jam> Dmitrii-Sh, ah, the --remote is the trick that I didn't realize so always just did 'git pull' which worked for me.
<crodriguez> okay that recursive update command worked, thanks. I moved on to another bug with the framework haha
<crodriguez> well with my code
<Dmitrii-Sh> jam: yes it's not very intuitive which reminds me of https://imgs.xkcd.com/comics/git_2x.png
<Dmitrii-Sh> crodriguez: what is it? Can I help?
<jam> Dmitrii-Sh, git submodule update --help isn't particularly... helpful
<Dmitrii-Sh> jam: heh, it's rather "--quiet"
<crodriguez> Are we supposed to use the on_install or the on_start hook now? or both? install being for the first deploy, and start for when you restart the nodes?
<crodriguez> so if I change my pod_spec version to 3, the 'rules' and 'global' keywords are not recognised anymore. Is that from a change in the framework or in juju?
<crodriguez> lol sorry I'm bombing y'all with questions
<Dmitrii-Sh> https://bugs.launchpad.net/juju/+bug/1854635 It's still "start" for now while we use 2.7. This fix was committed to 2.8 for k8s charms to get the install event
<Dmitrii-Sh> but then "dispatch" will also be in 2.8 https://github.com/juju/juju/pull/11257/files#diff-95b793849a6a72c6122da04f68ecb586
<jam> crodriguez, so the framework makes use of on_install to do some initialization stuff, but I wouldn't think you have the information you need to start a pod until after config-changed, and probably around start.
<jam> Dmitrii-Sh, I actually thought that landed in 2.7.4, doesn't look like the bug was updated.
<jam> "bake and land in 2.7.3" is pretty far away from 2.7.4/5 so not sure where its at.
<Dmitrii-Sh> jam: hmm, I looked at #5 https://bugs.launchpad.net/juju/+bug/1854635/comments/5 for reference, haven't looked at the actual commits
<Dmitrii-Sh> crodriguez: regarding keywords: you probably mean "roles". There's an example of a spec here in "QA steps" https://github.com/juju/juju/pull/11293 - how different is it to what you have?
<crodriguez>         spec = {
<crodriguez>             'version': 2,
<crodriguez>             'serviceAccount': {
<crodriguez>                 'global': True,
<crodriguez>                 'rules': [
<crodriguez>                     {
<crodriguez>                         'apiGroups': ['apps'],
<crodriguez>                         'resources': ['statefulsets', 'deployments'],
<crodriguez>                         'verbs': ['*'],
<crodriguez>                     },
<crodriguez>                     {
<crodriguez> oops
<crodriguez> so in this spec, if I change version to 3, I get this :
<crodriguez> https://www.irccloud.com/pastebin/2qFlNy81/
<crodriguez> same for unknown field 'rules'
<jam> crodriguez, offhand I don't think you're allowed to set global service accounts in a charm. They affect people who aren't in your namespace
<jam> I'm not the one who has the definitive answer to that, though.
<jam> I know there were discussions
<Dmitrii-Sh> the error seems to be coming from the framework though
<Dmitrii-Sh> b'ERROR json: unknown field "global"\n'
<Dmitrii-Sh> hmm, actually that doesn't look like something we'd emit
<Dmitrii-Sh> I just tried using that spec as an input in a unit test and it didn't error out (because we just write to a file)
<crodriguez> Hmm. Okay, I'll look more into that. I need to step out for an hour, will circle back about this. Thanks everyone!
<Dmitrii-Sh> version: 3
<Dmitrii-Sh> serviceAccount:
<Dmitrii-Sh>   automountServiceAccountToken: true
<Dmitrii-Sh>   roles:
<Dmitrii-Sh>     - global: true
<Dmitrii-Sh>       rules:
<Dmitrii-Sh>         - apiGroups: [""]
<Dmitrii-Sh>           resources: ["pods"]
<Dmitrii-Sh>           verbs: ["get", "watch", "list"]
<Dmitrii-Sh>         - nonResourceURLs: ["*"]
<Dmitrii-Sh>           verbs: ["*"]
<Dmitrii-Sh> crodriguez: it seems like you need to have "global: true" under roles
<Dmitrii-Sh> and then rules under roles as well
<jam> Dmitrii-Sh, ModelError(e.stderr) means it is coming from Juju
<jam> Dmitrii-Sh, hm, seems v2 is quite a bit different than v3. https://discourse.juju.is/t/updated-podspec-yaml-new-features/2124
<jam> vs https://github.com/juju/juju/pull/11293
<Dmitrii-Sh> jam: I think charms need to specify min-juju-version: 2.8.0 and use a v3 spec if they support v3 only
 * facubatista is back
<Dmitrii-Sh> but this is slightly unfriendly to a user
<jam> Dmitrii-Sh, so AIUI, you can stick with V2, but the goal is to get to something we want to recommend quickly vs a lot of polish on migration
<Dmitrii-Sh> jam: agreed, there is nothing restricting the usage of v3 via pod.spec_set AFAIKS
<jam> Dmitrii-Sh, we need to add 'k8s-spec-set' to the list of 2.8-isms. btw
<jam> Chipaca, ^^ juju is recommending a different name for setting the definition of your pod in 2.8
<Dmitrii-Sh> jam: right, the command got renamed
<facubatista> is there a reason we're not sending DEBUG logs from the framework/charms to juju
<facubatista> ?
<niemeyer> I don't know what behavior you are observing, but the goal is to make debug logs visible if the user asks to see debug logs
<Dmitrii-Sh> I originally wanted to use DEBUG: https://github.com/canonical/operator/pull/133/commits/2c778fc3f128b088790229bc3cc55bf7f7daedd6#diff-a76991ba26c9c9b312be489400d3f481L21
<Dmitrii-Sh> IIRC the reason not to use it was to encourage people not to use "debug" for everything. It would be nice to retrieve the debug level from the level set for a given model but there's no way to do that yet https://bugs.launchpad.net/juju/+bug/1861987
<niemeyer> What we don't want is for the norm to be seeing debug logs at all times, otherwise what we have is really info logs
<niemeyer> I see debug logs as a fire hose, where even the smallest detail is fine
<niemeyer> That's not the kind of thing that should be on by default.. if it is on, we'll learn not to log things at all to not overload people with boring details, and we no longer have debug logs
<facubatista> niemeyer, agree with you, that's why I wrote in https://github.com/canonical/operator/issues/198 that the logger needs to be in INFO by default (the user can easily change that) -- the problem is that the *handler* is in INFO, and we should be in DEBUG so if the logger is set to debug, those messages will reach Juju
<facubatista> *it* should be in DEBUG...
<niemeyer> Again, I'm not claiming it's correct.. just that this is what I hope we get
<Dmitrii-Sh> I think we should change the JujuLogHandler level to DEBUG. Otherwise even if one overrides the root logger level in the charm, the messages are still filtered at the handler level
<Dmitrii-Sh> that would be changing the JujuLogHandler constructor to be like this:     def __init__(self, model_backend, level=logging.DEBUG):
<facubatista> Dmitrii-Sh, yeap, that's what issue #198 talks about
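The handler-vs-logger distinction at the heart of issue #198 can be shown with the standard library alone: a handler set to INFO drops DEBUG records even when the charm author sets the logger itself to DEBUG. (ListHandler below is a test helper invented for the demo, not part of ops.)

```python
import logging

class ListHandler(logging.Handler):
    """Collects emitted messages so we can inspect what got through."""
    def __init__(self, level=logging.NOTSET):
        super().__init__(level=level)
        self.messages = []

    def emit(self, record):
        self.messages.append(record.getMessage())

logger = logging.getLogger("charm-demo")
logger.setLevel(logging.DEBUG)                   # charm author opts into debug...

info_handler = ListHandler(level=logging.INFO)   # ...but a handler at INFO
logger.addHandler(info_handler)                  # still filters the records out

logger.debug("dropped: filtered at the handler")
logger.info("kept: at or above the handler level")

debug_handler = ListHandler(level=logging.DEBUG) # a DEBUG handler lets the
logger.addHandler(debug_handler)                 # logger's own level decide
logger.debug("now this one gets through")
```

This is why Dmitrii-Sh's suggestion is to put the handler at DEBUG: the effective verbosity is then controlled by the logger level, which the charm can override, rather than being capped at the handler.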
<niemeyer> Tentatively calling it a day.. o/
<tvansteenburgh1> Hi everyone. The Canonical k8s team will soon publish (as a preview release) our first k8s charm based on the operator framework. Source code is at https://github.com/charmed-kubernetes/charm-multus. Feedback welcome.
<Chipaca> tvansteenburgh1: ack
<Dmitrii-Sh> crodriguez: the charm tvansteenburgh1 linked has a v3 spec example ^ https://github.com/charmed-kubernetes/charm-multus/blob/master/src/charm.py#L87-L125
<Dmitrii-Sh> seems to confirm the theory I had
<crodriguez> Dmitrii-Sh: thanks for your help. I will compare and adjust to use the v3
 * facubatista eods
<Dmitrii-Sh> https://github.com/canonical/operator/issues/211 - found during testing
<Dmitrii-Sh> https://github.com/canonical/operator/pull/212 - would be helpful for progressing #210
#smooth-operator 2020-04-03
<Dmitrii-Sh> o/
<jam> Dmitrii-Sh, morning. (I'm not really here, but there were comments left by someone on https://github.com/canonical/operator/pull/212)
<Dmitrii-Sh> jam: thanks, just responded there
<Dmitrii-Sh> I think extending add_relation by allowing it to accept initial data would be a good way to move some checks away from update_relation_data and make it less overloaded than it is right now
<jam> Dmitrii-Sh, maybe we have both options written out and can compare the tests that result and what we feel is easier to understand
<Dmitrii-Sh> jam: ok, I'll make a branch locally so that I have the current implementation preserved
<Dmitrii-Sh> and see what I can come up with
<jam> I had initially done the initial_app_data form, and then was trying to streamline and saw that you could use update for self, but I wasn't convinced which way would be better.
<Dmitrii-Sh> jam: also, if I look at this signature `def add_relation(self, app, *, initial_unit_data=None, initial_app_data=None, remote_app_data=None):`, it gets confusing on which argument is for which purpose for peer relations
<jam> app is redundant for peer but the rest isn't
<jam> I would say for peers we just make one of them illegal to set
<Dmitrii-Sh> hmm, there is a case where our unit comes up and there's already something in the peer app relation data
<Dmitrii-Sh> just thinking about a later addition of `relation-created` events - add_relation should trigger "relation-created" for our unit but during that there may already be some app relation data present for that relation-id and a "remote app" (which may be local on a peer rel)
<Dmitrii-Sh> I think it should be illegal to use the `remote_app_data` argument for peer relations - I am going to raise an exception if that happens and see how that looks
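A rough sketch of the validation being proposed. The keyword names come from the signature quoted in the discussion, not from a released Harness API, and the function below is a free-standing stand-in, not the real testing helper:

```python
# Hypothetical sketch: on a peer relation the "remote" app is our own
# app, so passing remote_app_data would be ambiguous and is rejected.
class ModelError(Exception):
    pass

def add_relation(app, our_app, *, initial_unit_data=None,
                 initial_app_data=None, remote_app_data=None):
    is_peer = (app == our_app)
    if is_peer and remote_app_data is not None:
        raise ModelError(
            'remote_app_data is meaningless on a peer relation: '
            'the remote app is our own app')
    return {
        'app': app,
        'unit_data': initial_unit_data or {},
        'app_data': initial_app_data or {},
        'remote_app_data': remote_app_data or {},
    }
```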
<t0mb0> Hi, I've been struggling with some issues with actions. It seems every time I try to retrieve some data from StoredState with my action I keep getting the initial value of my variable; however, this variable gets populated with some data as part of the configuration process and I can confirm the variable is set. It seems that when I call my action it is referencing a different state object?? https://pastebin.canonical.com/p/jWT6bDcRVR/
<t0mb0> am I missing something ^^ or should I raise a bug
<jam> t0mb0, can you link the code?
<t0mb0> jam, https://pastebin.canonical.com/p/7BnhqRxDfG/
<jam> t0mb0, so #1 i would suggest moving from an on_start setting all the attributes to a 'set_default()' call in __init__
<jam> t0mb0, the on_start doing apt-get stuff is fine
<t0mb0> jam, I had done a set_default for in __init__ for the data i'm trying to retrieve
<jam> t0mb0, that isn't the code you linked
<jam> hard to debug code that isn't what I'm looking at :)
<t0mb0> jam, yeah I know, I'm just saying it's something I've already tested
<t0mb0> I've been trying all number of things to try and isolate the issue :P
<t0mb0> I'll move it back to __init__ though
<jam> t0mb0, ah, you're on k8s, right? Charms fire hooks in the Operator pod, but fire actions in the Workload pod.
<jam> (currently)
<jam> I believe there is a flag
<t0mb0> :O
<t0mb0> I am in k8s yup
<jam> t0mb0, sigh, the flag is only in dev mode. So if you set "export JUJU_DEV_FEATURE_FLAG=juju-v3" then "juju run-action" is renamed to "juju run" and has a "juju run --operator" to run the action in the operator pod.
<jam> t0mb0, I'd bring that up in #juju about not being able to supply --operator to 'juju run-action'
<t0mb0> jam, does this work with microk8s juju?
<t0mb0> I don't seem to have that flag when I export the variable there
<jam> t0mb0, it might be a 2.8 feature, not sure.
<jam> t0mb0, with any k8s
<t0mb0> hmm
<jam> t0mb0, submitted https://bugs.launchpad.net/juju/+bug/1870487 for you.
<t0mb0> thanks!
<t0mb0> jam, I do have juju run --operator
<t0mb0> but I'm not trying to run a shell cmd, I'm trying to run a charm action
<t0mb0> ah - on another read of that bug report you're saying there is no reason not to also include --operator to "run-action"
<jam> t0mb0, ah, it was juju run that grew --operator, not run-action
<jam> t0mb0, bug update
<jam> updated
<t0mb0> jam, shouldn't state be the same whether it's running in the operator pod or on the workload?
<jam> t0mb0, state is read from the local disk atm, no way to read it from the other pod
<t0mb0> jam, so if the operator pod was to die then you'd lose that state?
<jam> t0mb0, operator pods are currently stateful stets
<jam> sets
<jam> t0mb0, so they ask k8s for a disk mount that will float with a restart of the pod
<jam> t0mb0, in 2.8 there are some hook tools that will let us save state back in the controller, and work that would allow operator pods to become stateless
<jam> the operator framework doesn't support that yet. its on our 'todo next' list
<t0mb0> jam, I also noticed that if I initialised admin_password in on_start, then my action was able to read it; it was just the value overwritten in the configure hook that it couldn't see
<jam> t0mb0, once you declare a pod spec, the Juju code copies the charm in an init container to the application pod
<jam> t0mb0, currently the charm state is stored in the state dir, causing it to get copied
<jam> but that is 'state as it exists at one point in time'
<jam> which is naughty of how it is all done and definitely shouldn't be relied upon
<Dmitrii-Sh> that also has the effect that you can't update StoredState from actions in a way that regular hooks can see
<Dmitrii-Sh> as jam says, it should work when we have controller-side state storage used in the framework with juju 2.8
<t0mb0> am I correct in understanding that by emitting an event like this https://github.com/dshcherb/cockroachdb-operator/blob/master/src/charm.py#L192 emit() will block or should it be asynchronous?
<Chipaca> mo'in
<niemeyer> t0mb0: That call dispatches the event to handlers
<niemeyer> Morning
<t0mb0> niemeyer, so if I was to do something like this https://pastebin.canonical.com/p/hpzGSrTSGh/ it shouldn't cause a stack overflow or hit maximum recursion limits?
<niemeyer> If you emit an event inside the handler of that event, yes bad things can happen if that takes place forever
<niemeyer> It's also not super nice to read and maintain despite anything else
<niemeyer> It's awkward to say "Hey, it's now initialized!" inside the thing that handles "Oh, is it initialized? Cool, let me handle that.".. makes sense?
<t0mb0> niemeyer, it shouldn't take place forever in that eventually the pod would have loaded (this is an interim solution until juju notifies us when the pod is alive).
<niemeyer> It's better that it's known to be going away, but it doesn't make they nice.. I'd still look for a different solution
<niemeyer> *that
<niemeyer> t0mb0: You might just defer the event, for example
<t0mb0> niemeyer, yeah that's my preferred solution atm however it's kind of annoying having to wait around for an update-status hook to fire for that event to be processed again
<niemeyer> t0mb0: Okay, let me think about this and I will try to suggest something
<t0mb0> niemeyer, it's not super critical, I think we're happy to just wait for an update-status for the time being until we have more k8s hooks to work off
<t0mb0> I was just wondering if the emit() loop was viable
<niemeyer> t0mb0: Viable it is.. it's just not a very nice way to organize things.. it's a very complex way to run an unbounded loop
<Dmitrii-Sh> t0mb0: .emit() is not asynchronous but handling of that event may be deferred by whoever receives it
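The difference between the two strategies can be sketched with a tiny stand-in (Bus below is not the real ops machinery): re-emitting an event from inside its own handler recurses until the condition clears, while defer() just queues the event so the next dispatch replays it.

```python
# Simplified stand-in for the defer-vs-reemit discussion. emit() is
# synchronous: the handler runs to completion before emit() returns,
# which is why a handler that re-emits its own event can recurse.
# defer() instead records the event for replay on the next dispatch
# (e.g. the next update-status hook).
class Bus:
    def __init__(self):
        self.deferred = []

    def emit(self, handler, event):
        handler(self, event)

    def defer(self, handler, event):
        self.deferred.append((handler, event))

    def reemit(self):
        # Called at the start of the next dispatch.
        pending, self.deferred = self.deferred, []
        for handler, event in pending:
            handler(self, event)


pod_ready = False
handled = []

def on_initialized(bus, event):
    if not pod_ready:
        bus.defer(on_initialized, event)   # wait for a later hook
        return
    handled.append(event)

bus = Bus()
bus.emit(on_initialized, "initialized")    # pod not ready: deferred
pod_ready = True
bus.reemit()                               # next hook: now it completes
```

The cost t0mb0 notes is real: with defer, nothing happens until some hook (like update-status) fires and triggers the replay.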
<Dmitrii-Sh> t0mb0: from what I understand you would like to update the unit status based on whether a workload pod is actually ready or not
<Dmitrii-Sh> this is a little odd to do from the operator pod because (in my view) its status is determined by whether it has enough data to produce a pod spec
<Dmitrii-Sh> I can see the use case where you need to, for example, expose a service URL only when your services are actually up because otherwise a remote unit might fail trying to connect using that URL immediately after receiving it
<Dmitrii-Sh> ideally a client would retry on connection failures because we cannot guarantee that a service is up and will stay up when we expose a URL over a relation
<niemeyer> Dmitrii-Sh: That seems to be well understood. It's exactly the trying that is being discussed here.
<Chipaca> how do we define when a charm is 'done'?
<Chipaca> in my mind that's a case-by-case thing, with a process for eking out what done is for each charm
<niemeyer> What would it mean for a charm to be done?
<Chipaca> niemeyer: that the person assigned with doing it can tick it off a list :)
<niemeyer> Chipaca: This is moving the question to what was the person aiming to do with that list..
<Chipaca> we can't say "we need you to do this thing" without saying what that thing being "done" actually is
<niemeyer> Chipaca: The charm manages the entire lifecycle.. it's never done
<niemeyer> It's an event consumer.. it's always waiting for the next reason to do something
<Chipaca> heh
<niemeyer> Chipaca: Exactly.. and we can't say what "done" means without us knowing what "we need you to do this thing" means
<niemeyer> Chipaca: The charm can be done handling a hook
<niemeyer> Chipaca: "heh" is pretty much never the right answer, though
<Chipaca> I thought you made a joke
<Chipaca> niemeyer: we _have_ told people "we need you to do this charm". So I'm asking, how do they, and how do we, know when they're done?
<niemeyer> Chipaca: So the actual question is "When are people done writing a charm?".. okay, that's something else altogether
<niemeyer> Chipaca: The charm is supposed to manage the lifecycle of the applications involved. If we can start, stop, and handle the applications' relationships so it can actually work, and communicate the necessary information for the underlying software to be useful.. it seems like we've got something tangible in our hands
<Chipaca> niemeyer: so questions of vert/horiz scalability, managing secrets, upgrades, metrics and logs, are all beyond the scope of a basic "charm is done"?
<Dmitrii-Sh> jam: (for when you're around). Updated https://github.com/canonical/operator/pull/212 per our discussion and will have a look at debug-code.
<niemeyer> Chipaca: I'm assuming you're talking specifically about the conversation we had yesterday, which was in the context of getting the Charmcraft developers to come up with working examples so that they can benefit from having actual experience on how to write a charm in the framework, and so that we can develop experience to help other people
<Chipaca> niemeyer: yes but also for teams beyond just ours, that are also being tasked with charming things
<niemeyer> Chipaca: If that's the case, then we don't need something fancy and comprehensive to start documenting these charms in our catalog of examples
<niemeyer> Chipaca: If it's something else, then I cannot possibly answer that question without knowing what that something else is..
<niemeyer> Chipaca: Yes, sometimes managing secrets, scalability, logs, metrics, are fundamental to consider basic work on a charm donw
<niemeyer> *done
<facubatista> Good morning, everyone!
<Chipaca> facubatista: good morning!
<Chipaca> facubatista: happy Friday, even :)
<facubatista> hi Chipaca! :)
<niemeyer> facubatista: How's it going?
<facubatista> niemeyer, hola! all fine, I already did some gym, have my mate, sun enters through the window...
<facubatista> you?
<niemeyer> That sounds nice
<niemeyer> I just finished my morning bottle of mate
<facubatista> good
<facubatista> niemeyer, Chipaca, in my mind, we should have a comprehensive set of charms that shows how to do each of those things (vert/horiz scalability, managing secrets, upgrades, metrics, logs, etc) but we should NOT do all those things on each example
<facubatista> (of course, we need a couple of "big fully done" examples)
<facubatista> but the small ones are very useful for other people writing charms... when a developer writing a charm says "I'm doing my charm, have much done, need to add 'secrets' here", we can point to docs, and an example where it's easy to find *that*
<facubatista> (note that even though they should be very small and focused examples, they will be fully functional in the sense that they can be deployed, we run functional tests on them, etc)
<niemeyer> facubatista: The medium term task for the charmcraft team in that regard should be to identify what patterns need to be enabled.. it's not a goal to be maintaining a comprehensive charm that can e.g. do everything one might possibly want to do with PostgreSQL in production
<facubatista> yeap
<niemeyer> facubatista: For this example, stub would be much better positioned to do that.. but what we must do is ensure that he can indeed do that and isn't running short on support from the underlying framework
<facubatista> perfect
<Chipaca> facubatista: could you look at #196 again?
<facubatista> Chipaca, yes, I'm on that right now
<Chipaca> facubatista: 👍
<Chipaca> I went to look at https://github.com/juju/juju/pull/11389 and at the bottom of all the discussion, it's merged :-D
<facubatista> Chipaca, done with #196
<facubatista> (not sure what you were expecting from that)
<Chipaca> facubatista: what you've done seems fine
<Chipaca> facubatista: i'm just allergic to "changes requested" and then nothing
<facubatista> sure
<facubatista> we probably need to interact more here, regarding PRs... like "answered your comments, pushed a commit, something still open to discussion"
<facubatista> otherwise we (I) get to those changes after reading other 12496 issues
<Dmitrii-Sh> relation-created in a to-be-published 2.8-beta1:
<Dmitrii-Sh> juju show-status-log cockroachdb/0
<Dmitrii-Sh> Time                        Type       Status       Message
<Dmitrii-Sh> 03 Apr 2020 15:25:20+03:00  juju-unit  executing    running proxy-listen-tcp-relation-created hook
<Dmitrii-Sh> 03 Apr 2020 15:25:21+03:00  juju-unit  executing    running cluster-relation-created hook
<Dmitrii-Sh> 03 Apr 2020 15:25:22+03:00  juju-unit  executing    running leader-elected hook
<Dmitrii-Sh> 03 Apr 2020 15:25:23+03:00  juju-unit  executing    running config-changed hook
<Dmitrii-Sh> 03 Apr 2020 15:25:24+03:00  juju-unit  executing    running start hook
<Chipaca> Dmitrii-Sh: nice
<Chipaca> Dmitrii-Sh, niemeyer, did I understand right yesterday that juju changed the command we need to run to set a pod spec?
<Dmitrii-Sh> Chipaca: https://github.com/juju/juju/pull/11323
<Dmitrii-Sh> yes, that's correct
<Chipaca> what does the deprecation look like? are we ok to carry on using the old one, or do we need to use jujuversion to run one or the other?
<crodriguez> Good morning everyone! Just wanted to say thank you for yesterday, I was able to convert my pod_spec to v3 and incorporate the secret successfully :)
<crodriguez> so is the whole charm-tech team EMEA?
<facubatista> good morning crodriguez!
<crodriguez> happy friday too !
<Chipaca> crodriguez: part EMEA, part americas
<Dmitrii-Sh> crodriguez: glad to hear it!
<Dmitrii-Sh> facubatista: https://paste.ubuntu.com/p/kvNvF45Jm4/ - seems to work fine
<Dmitrii-Sh> (first impression)
<facubatista> Dmitrii-Sh, great!
<facubatista> Dmitrii-Sh, just checking the value of the envvar is enough, for the different cases
<facubatista> (I'm actually writing the Python code to react on that)
<Dmitrii-Sh> facubatista: need to split the string if multiple breakpoints are specified https://paste.ubuntu.com/p/QfTSwnnjXg/
<facubatista> Dmitrii-Sh, yeap
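The splitting facubatista agrees to can be sketched in a few lines. The environment variable name "JUJU_DEBUG_AT" is an assumption here (the pastes don't name it); the shape of the parsing is the point:

```python
import os

# Sketch of parsing the breakpoint list from the environment variable.
# "JUJU_DEBUG_AT" is an assumed name for illustration. Multiple
# breakpoint names arrive comma-separated; an empty or unset variable
# means no breakpoints are requested.
def parse_breakpoints(environ=os.environ):
    raw = environ.get("JUJU_DEBUG_AT", "")
    return {name.strip() for name in raw.split(",") if name.strip()}
```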
<crodriguez> design question: My charm deploys a mssql container and its operator container. If I want to deploy (optionally) another container that contains tools to interact with the main container, should I create a new class for it? I see that if I simply add the container info in the same pod_spec, it includes it in the main mssql pod, which is not what I want, I would like a separate mssql-tools pod
<crodriguez> or should it be a separate charm altogether and relate them in a bundle? (I hope not)
<niemeyer> crodriguez: It's a pretty good question.. I don't have a good answer myself because I don't yet understand the design of the k8s integration as well as I want to
<niemeyer> crodriguez: What I'd like to enable is a situation where the main charm container lives right next to the managed container, and you might have any number of workload containers next to it, but that's not there yet
<niemeyer> crodriguez: Would be nice to have the view of someone who is comfortable with the way things currently work and might provide some advice on that
<niemeyer> crodriguez: From the charm team, the best person would be jam.. tvansteenburgh might also have a good view on this from the k8s perspective
<niemeyer> crodriguez: jam is off today
<crodriguez> niemeyer: yes, maybe they could help. This is my first k8s charm so I'm not always sure of best practices . I was thinking that I'd try to create a 2nd charm class that is triggered optionally, and see how that does
<Dmitrii-Sh> crodriguez: just trying to understand the use-case a little bit more: is there anything specific that you would like to do from a separate pod with mssql-tools?
<crodriguez> Dmitrii-Sh: mostly just make it visible. Otherwise it's hidden, the pod/mssql-0 gets 2 containers instead of 1, like this (ignore the crashloopbackoff lol) https://usercontent.irccloud-cdn.com/file/be7sDsYu/image.png
<crodriguez> for the use case, I am also going to add other sql components in this charm, such as a Machine Learning component, and I will have to decide if I want to extend the base mssql container or if I would prefer to deploy these components in separate containers
<Dmitrii-Sh> crodriguez: hmm, the first impression is that a machine learning component would be a separate application
<Dmitrii-Sh> and tools to access mssql would be, for example, part of its container image
<Dmitrii-Sh> when Juju deploys an application to k8s it creates a Deployment in k8s with a pod template specified
<Dmitrii-Sh> that pod template may have multiple containers
<Dmitrii-Sh> but, as far as Juju is concerned, a combination of a workload pod and an operator pod is considered to be a unit
<Dmitrii-Sh> so individual containers in that unit are not visible
<crodriguez> Okay, so there is not really a way to display more than one pod with one k8s charm right now, is that correct?
<Dmitrii-Sh> crodriguez:  1 app can have multiple units which would correspond to multiple pods
<Dmitrii-Sh> each pod can have multiple containers - that part is within the scope of 1 unit
<Dmitrii-Sh> so, yes, that's correct
<Dmitrii-Sh> crodriguez: containers within a pod share networking and storage, for example
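The mapping described above — one Juju unit is one pod, and that pod's spec can list several containers — can be sketched roughly like this. The container names, images, and dict shape are illustrative assumptions, not a verbatim Juju podspec:

```python
# Hypothetical pod spec for a single Juju unit. The unit maps to one
# pod; the pod may hold several containers, which share networking
# and storage. Names and images below are illustrative only.
pod_spec = {
    "containers": [
        {
            "name": "mssql-server",
            "image": "mcr.microsoft.com/mssql/server",
            "ports": [{"containerPort": 1433, "protocol": "TCP"}],
        },
        {
            # Sidecar container in the same pod; it is not shown as a
            # separate unit in `juju status`.
            "name": "mssql-tools",
            "image": "mcr.microsoft.com/mssql-tools",
        },
    ],
}

# Juju treats the whole pod (all containers) as a single unit, so
# individual containers are not visible from `juju status`.
assert len(pod_spec["containers"]) == 2
```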
<crodriguez> Dmitrii-Sh: ok! so to make multiple units in this framework, would it be to create multiple charm classes in charm.py ?
<Dmitrii-Sh> Similar to non-k8s charms, all units share the same code and have one Charm class. Except for actions, that code gets executed in operator pods, not workload pods (where multiple containers would be)
<Dmitrii-Sh> so you'd still have one class providing the same code to run multiple units
<Dmitrii-Sh> a leader unit would be able to use pod-spec-set, other units would mostly react to lifecycle events and, if the leader is gone, one of the remaining units would take over leadership and be able to use pod-spec-set
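The leadership pattern just described can be sketched as below. The function and parameter names are illustrative, not the real operator-framework API: only the leader submits the pod spec, while other units wait.

```python
# Sketch of the leadership pattern: only the leader unit calls
# pod-spec-set; non-leaders just react to lifecycle events. If the
# leader goes away, one of the remaining units takes over and runs
# the leader path. Helper names here are assumptions for illustration.
def configure_pod(is_leader, set_spec, build_spec):
    """Submit the pod spec if (and only if) this unit is the leader."""
    if not is_leader:
        # Non-leader units must not call pod-spec-set.
        return "waiting for leadership"
    set_spec(build_spec())
    return "pod spec set"
```

For example, a non-leader invocation returns without touching the spec, while the leader invocation hands the built spec to `set_spec`.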
<Dmitrii-Sh> crodriguez: what are the component-specific things that you need to do in the charm code?
<Dmitrii-Sh> is it about feeding some parameters to the container spec?
<Dmitrii-Sh> crodriguez: if so, I think you could have multiple classes responsible for different parts of the final podspec
<crodriguez> Dmitrii-Sh:  I'm not sure yet. I was working with mssql-tools image first as it helps me see how the charm structure works with different containers in one charm
<crodriguez> you said one app can have multiple units, which would correspond to multiple pods, and that this would be only one class in the code. I don't see that: using the same class (so the same pod spec), it deploys only one pod.
<Dmitrii-Sh> crodriguez: just to demonstrate what I mean https://paste.ubuntu.com/p/rgFzBtpSWC/
<Dmitrii-Sh> let me get some sample output from k8s as well to demonstrate multiple units
<crodriguez> ok thanks Dmitrii-Sh
<Dmitrii-Sh> crodriguez: https://paste.ubuntu.com/p/2P93kyFxd9/
<Dmitrii-Sh> so in this example there is one charm that gets deployed, the app is called "wp" with 2 units. As a result you get 1 operator pod and 2 workload pods
<Dmitrii-Sh> crodriguez: if you exec into the operator pod, you will see that there are multiple directories under /var/lib/juju
<Dmitrii-Sh> microk8s.kubectl exec -n controller-microk8s-localhost wp-operator-0 -it -- bash
<Dmitrii-Sh> root@wp-operator-0:/var/lib/juju# ls /var/lib/juju/agents/
<Dmitrii-Sh> application-wp  unit-wp-0  unit-wp-1
<Dmitrii-Sh> so the charm code for different units gets executed in the same operator pod, however, it runs with a different context and in different charm directories
<Dmitrii-Sh> the operator pod is the same, the code is the same, the stored state is different, leadership status is different
<facubatista> I need help with an error message
<facubatista> this looks too long: "breakpoint names must start and end with lowercase alphanumeric characters, and only contain lowercase alphanumeric characters, the hyphen '-' or full stop '.'"
<niemeyer> facubatista: 'breakpoint names must look like "foo" or "foo-bar"'
<niemeyer> facubatista: I'd drop the dot..
<niemeyer> facubatista: Reserve that for namespacing if we have to
<facubatista> niemeyer, I just used the same rule as for actions
<facubatista> but totally agree
 * facubatista fixes the spec
<niemeyer> facubatista: These are not actions.. if we have to add namespacing, we should have a good plan in mind
<facubatista> niemeyer, and I like the error with the examples; developers can go to the documentation for the "full rules" anyway
<niemeyer> facubatista: Agreed
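The naming rule being agreed on — names like "foo" or "foo-bar", starting and ending with a lowercase alphanumeric, hyphens allowed in between, and the dot dropped (reserved for possible namespacing) — could be checked with a regex along these lines. This is a sketch of the rule as discussed, not the actual code shipped in the operator framework:

```python
import re

# Illustrative check for the breakpoint naming rule: names look like
# "foo" or "foo-bar" -- start and end with a lowercase alphanumeric,
# hyphens allowed in the middle, no dots. The exact regex is an
# assumption based on the discussion, not the framework's code.
NAME_RE = re.compile(r"^[a-z0-9](?:[a-z0-9-]*[a-z0-9])?$")

def valid_breakpoint_name(name):
    return NAME_RE.match(name) is not None
```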
<crodriguez> Dmitrii-Sh: Sorry if it's late for you, we can continue this conversation Monday morning. So I think we misunderstand each other, maybe it's because we do not reference the same thing when we say "unit". The example with wp still shows the same wordpress pod deployed multiple times. It's the same container image, etc. What I am trying to achieve is to have something like:
<crodriguez> - 1x mssql-server pod
<crodriguez> - 1x mssql-tools container (which depends on a different image than mssql-server)
<crodriguez> - 1x mssql-machine-learning-components
<crodriguez> - 1 operator pod
<crodriguez> Since they are dependent on different images, each is a different unit to deploy. I'm not looking to just scale my mssql-server to 2 containers.
<crodriguez> But then, maybe my approach is wrong, and I should combine all these components in a single container image.
<Dmitrii-Sh> crodriguez: hmm, I think I need to read up on https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup-machine-learning-docker?toc=%2Fsql%2Fadvanced-analytics%2Ftoc.json&view=sql-server-ver15
<Dmitrii-Sh> skimming through it I think I understand it better. mssql-machine-learning-components is just an add-on service to MSSQL to be able to execute python and R scripts in the DB itself.
<Dmitrii-Sh> it feels like ML services should be a separate container in the mssql-server pod
 * facubatista loves `watch -n 5 -d juju status`
<Dmitrii-Sh> mssql-tools could be as well just for the case where you want to exec into a container and access it through the CLI client
<crodriguez> yeah I'm reading more about it and it might actually be a bigger image that includes mssql-server itself. From https://github.com/Microsoft/mssql-docker/tree/master/linux/preview/examples/mssql-mlservices
<crodriguez> My point remains that if I wanted to deploy separate pods from one charm, there's no clear answer on how to do that
<crodriguez> and mssql-tools was just a practice to figure out how to do this. because it's actually included by default in the mssql-server, I do not need to deploy it separately. But I would like to understand how to do it if I wanted to go that route, you know?
<Dmitrii-Sh> crodriguez: multiple pods, each with different sets of container specs in one charm - no. It's not possible to do that with deployment objects in k8s IIRC (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment) and Juju apps result in deployment objects being created
<Dmitrii-Sh> I think we need to work out the target yaml for K8s that we would like to see
<Dmitrii-Sh> and then map it to a podspec
<Dmitrii-Sh> crodriguez: "would like to understand how to do it if I wanted to go that route" - understood, I sent out an invite for Monday to go over this
<Dmitrii-Sh> let me know if the time is OK for you
<crodriguez> ok thank you! yes that time works. Thanks for the additional info
<Dmitrii-Sh> great, np
<Dmitrii-Sh> Have a good weekend!
<crodriguez> you too Dmitrii-Sh !
<facubatista> reviews appreciated! Added breakpoint manual call to the framework - https://github.com/canonical/operator/pull/213
<niemeyer> Thanks, will look on Monday
<niemeyer> For now, have a great weekend
 * facubatista eods
<facubatista> Bye all! have a nice weekend
<Chipaca> have a good weekend, all
