[05:22] <jam> thanks for the explanation vultaire. You're right that Harness takes out the Juju side but doesn't change anything else about the environment
[10:50] <facubatista> Good morning, everyone!
[11:25]  * facubatista is back for a while
[11:25] <facubatista> jam, I put the router on the UPS and myself in the laptop... I may have "systems running" for a while
[11:26] <facubatista> jam, let's do a HO to sync?
[11:26] <jam> facubatista, sure, bug HO or standup?
[11:27] <facubatista> jam, whatever, but please DM me the link
[15:48] <crodriguez_> vultaire (or jam) so to mock an action, you need to create a FakeActionEvent class ? Would this class be created in the same file as the unit tests?
[15:49] <crodriguez_> I tried to just use action_event = Mock() from this example https://paste.ubuntu.com/p/67Hcd5yPT3/ but I get `TypeError: 'Mock' object is not subscriptable`
[15:50] <jam> crodriguez_, so ceph-iscsi went with https://github.com/openstack-charmers/charm-ceph-iscsi/blob/master/unit_tests/test_ceph_iscsi_charm.py#L192
[15:50] <jam> They use a MagicMock so they can easily populate the params that are returned by the action event
[15:52] <jam> The interface of ActionEvent is that it has a 'params' that is a dict containing the content from 'action-get'
[15:53] <jam> If you don't want to create your own, you can just use ActionEvent itself, but the backend would complain if you called log or set_results in your charm.
[15:54] <crodriguez_> so in MagicMock, should I set results (result of the event) as a param ?
[15:55] <crodriguez_> the FakeActionEvent class that vultaire shared defines set_results at least, so that doesn't fail. I can add log() as well
[15:55] <vultaire> you set return values on magic mock functions via setting the .return_value property, in the general case
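A minimal sketch of the MagicMock approach being described (illustrative names, not the ceph-iscsi code itself): set `params` to the dict a human would pass via `juju run-action`, and then assert on what the handler reported back.

```python
from unittest.mock import MagicMock

# Fake the action event: a MagicMock with a real params dict attached.
action_event = MagicMock()
action_event.params = {"target": "unit/0"}

# What a (hypothetical) charm action handler might do with it:
target = action_event.params["target"]
action_event.set_results({"status": "ok", "target": target})

# The test can then verify what the charm reported to the human:
action_event.set_results.assert_called_once_with(
    {"status": "ok", "target": "unit/0"})
```

Because every attribute of a MagicMock is itself a MagicMock, `set_results` records its calls automatically; there is no need to define the method up front.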
[15:56] <vultaire> let me look at the slide deck, been awhile since I did this...
[15:56] <jam> crodriguez_, params is the content that the Human passed when they did "juju run-action"
[15:56] <jam> set_results is what *your action* tells the human once it completes
[15:58] <vultaire> ok, now I'm a little better contexted :)
[15:58] <crodriguez_> mhm... well it works if I do it like that https://paste.ubuntu.com/p/csDbCRNpDH/ , it's just heavy with a whole class to replicate the event
[15:58] <jam> generally there isn't a return value from set_results because you are setting them for someone else to read
[15:59] <jam> crodriguez_, what you have seems reasonable and easier to understand than Mock (IMO).
[15:59] <crodriguez_> jam: I don't understand that last statement. I should just not set any set_result in my charm ?
[16:00] <jam> Ideally we'd have something like Harness.run_action(action_name, params) -> (results, logs) possibly raising an exception on Fail
[16:01] <vultaire> crodriguez_: I tend to agree with jam.  Mock and MagicMock definitely have their place, but in this case it seems like you simply need something to take the place of an action event object - having a class which mimics the interface at a very basic level (FakeActionEvent) isn't necessarily a bad pattern
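For reference, a class mimicking the ActionEvent interface at that very basic level might look roughly like the following (a hedged sketch; the actual class in the paste may differ):

```python
class FakeActionEvent:
    """Stand-in for ops ActionEvent, covering only the surface a
    typical action handler touches: params, set_results, log, fail."""

    def __init__(self, params=None):
        self.params = params or {}   # what the human passed to run-action
        self.results = None          # what the charm reports back
        self.logs = []               # messages passed to log()
        self.failed = False          # whether fail() was called

    def set_results(self, results):
        self.results = results

    def log(self, message):
        self.logs.append(message)

    def fail(self, message=""):
        self.failed = True
```

The trade-off versus MagicMock is explicitness: the fake only accepts the calls you wrote, so a typo in the charm (e.g. `set_result`) fails loudly instead of being silently absorbed by a mock.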
[16:01] <crodriguez_> okay thanks, i'll keep it like that then
[16:01] <vultaire> although I think it's fair to suggest that the operator framework perhaps offer some primitives to make this type of testing easier?
[16:02] <crodriguez_> honestly.. I don't even see the point really in testing something like that. I'm basically overwriting the subprocess call with mock, forcing it to return true, and asserting it reported true.  I'm forcing a positive result
[16:02] <crodriguez_> maybe I need to take a class on the benefits of unit tests lol
[16:02] <vultaire> :)  unit testing is something that takes practice to get good value out of
[16:03] <jam> crodriguez_, you *are* testing that you don't have typos in your code, and you have a way to test that if the subprocess succeeds the code reports success (doesn't call fail, etc)
[16:03] <vultaire> the point is that you exercise your code and make sure that it flows the way you expect and gives the results (or exceptions) that you expect
[16:03] <jam> I don't see a huge value of lots of unit tests here, if you aren't actually setting content
[16:03]  * jam eods
[16:03] <crodriguez_> true true :)
[16:04] <vultaire> the value is certainly more obvious when you don't have to mock anything :)
[16:04] <vultaire> for example, if you're doing a unit test of a factorial function and making sure that factorial(3) returns 6...  that's a very obvious test with clear value
[16:04] <crodriguez_> it's not pointless. It's just hard to know what I should do unit tests against or not in a new charm
[16:04] <vultaire> because you're testing that your function gives the correct result
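The factorial example spelled out, to show the no-mocking-needed case where the test's value is obvious:

```python
def factorial(n):
    """Iterative factorial: factorial(0) == 1, factorial(3) == 6."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Direct checks of input -> output, no test doubles involved.
assert factorial(0) == 1
assert factorial(3) == 6
assert factorial(5) == 120
```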
[16:04] <vultaire> so...  regarding that
[16:05] <vultaire> well, granted, there's a number of different schools of thought here :)
[16:05] <vultaire> one school of thought is test-driven (or test-first) development
[16:05] <vultaire> which in a nutshell is where you write unit tests first (which will fail since you haven't written any code), and then you write the implementation which will make those tests pass
[16:06] <vultaire> ...it's kind of an odd method for first-timers, but some people swear by it
[16:06] <vultaire> I do use it, at least sometimes
[16:06] <vultaire> in that case, you write unit tests just as a side-effect of developing
[16:07] <crodriguez_> yeah I've heard about test-driven dev, haven't done it yet in my work though
[16:07] <vultaire> ...if you write tests after the fact, then you could take a few different tacks... unit testing particular use cases, or unit testing to exercise certain branches of code
[16:08] <vultaire> ...and in some cases, if you're just mocking too much, it may be better to do a more end-to-end test.  I'd say experience is a good guide for that.
[16:09] <vultaire> as in the slide deck, while I believe most testing should be done at the unit test level (because the tests are small and *very* fast, typically), there's still absolutely value in e.g. deploying a model and interacting with actual units to make sure they do what you want
[16:09] <vultaire> ...I hope this helps :)
[16:10] <crodriguez_> yeah well I already have functional tests ready for this charm. I just thought I'm *supposed* to do unit tests too to make it a good charm :P
[16:10] <crodriguez_> I agree that func tests take a lot of time though
[16:12] <vultaire> well, one last bit is: have someone do a code review of your charm.  Get feedback from others, and let that guide you as you get more familiarity with all of this
[16:12] <crodriguez_> so maybe I'll leave out "render_config" out of the unit tests.. It is the main thing tested by the func tests
[16:12] <crodriguez_> ah yes of course
[21:56]  * facubatista eods