[00:53] <lucidone> How would I log from a k8s charm using the operator framework? (So that the logs show up in the pod logs e.g kubectl -n test-k8s logs -f test-charm-operator-0)
[00:55] <lucidone> Is that something debugf in ops/main.py will handle? (The function definition is currently just 'pass')
[01:02] <lucidone> Ah scratch that, looks like print() does show up in logs - a separate issue meant that chunk of code wasn't getting hit :)
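What lucidone observed can be sketched with the stdlib logging module: anything the charm process writes to stdout or stderr is captured by the container runtime and shows up in `kubectl logs`. A minimal sketch, assuming nothing about the operator framework's own logging; the helper name and format string are illustrative:

```python
import logging
import sys

def setup_pod_logging(level=logging.DEBUG):
    """Route charm log messages to stdout, where the container
    runtime captures them for `kubectl logs`."""
    logger = logging.getLogger("charm")
    handler = logging.StreamHandler(sys.stdout)  # stdout ends up in the pod log
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(level)
    return logger

logger = setup_pod_logging()
logger.info("charm hook starting")  # visible via kubectl -n test-k8s logs ...
```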
[02:10] <thumper> Here is a relatively boring PR for someone: https://github.com/juju/juju/pull/11124
[02:11] <thumper> lucidone: oh, I don't know if we have any way to write into the pod logs
[02:11] <thumper> wallyworld, kelvinliu: do you know?
[02:11] <thumper> lucidone: we have a juju-log command that has things show up in the juju debug-log
[02:11] <thumper> but I don't know if there is a specific one for the pod log
[02:11] <wallyworld> you can also use kubectl log
[02:11] <thumper> wallyworld: from the charm?
[02:11] <wallyworld> not really
[02:12] <wallyworld> kubectl is for outside the cluster
[02:12] <thumper> wallyworld: do you think that we should have juju-log commands for k8s models go into the pod log?
[02:12] <evhan> I think just writing to stdout does the right thing.
[02:12] <wallyworld> they do go to pod log
[02:13] <wallyworld> the juju logs go to both the k8s log and the juju log
[02:15] <kelvinliu> didn't find a juju-log helper method, but you can do this https://github.com/canonical/operator/blob/dd2a0d4537ff9ef2f19354a3fd7a9598e4b4c076/ops/model.py#L605
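The backend method linked above ultimately shells out to the `juju-log` hook tool, which a charm can also invoke directly. A hedged sketch; the stdout fallback is an illustrative assumption for running the snippet outside a Juju hook context, not part of any juju API:

```python
import shutil
import subprocess

def juju_log(message, level="INFO"):
    """Send a message to `juju debug-log` via the juju-log hook tool.

    Falls back to plain stdout when juju-log is not on PATH,
    i.e. when not running inside a Juju hook context.
    """
    if shutil.which("juju-log"):
        subprocess.run(["juju-log", "--log-level", level, message], check=True)
    else:
        print(f"{level}: {message}")

juju_log("hook starting", level="DEBUG")
```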
[02:30] <evhan> lucidone: For debug-log you can use charmhelpers.core.hookenv.log from https://pypi.org/project/charmhelpers/, works fine with operator.
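evhan's suggestion in code form: charmhelpers exposes `hookenv.log(message, level=None)` plus level constants such as `DEBUG`. A sketch; the ImportError fallback is only so the snippet runs where charmhelpers is not installed:

```python
try:
    from charmhelpers.core.hookenv import log, DEBUG
except ImportError:
    # charmhelpers not installed; minimal stand-in for illustration only
    DEBUG = "DEBUG"

    def log(message, level=None):
        print(f"{level or 'INFO'}: {message}")

log("relation data updated", level=DEBUG)
```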
[02:30] <evhan> In case it hasn't been reported elsewhere, https://discourse.jujucharms.com is down from here.
[02:45] <wallyworld> seems to be back now
[02:46] <tlm> last job!!!!
[02:47] <wallyworld> whoot
[02:48] <tlm> i have run them all separately wallyworld like we discussed to avoid the wait
[02:49] <wallyworld> tlm: i'm still seeing red dots?
[02:49] <tlm> yeah that was the job from this morning
[02:49] <tlm> if you click through you can see the latest ones
[02:50] <tlm> no way to re-run something in jenkins and have the latest result trickle up
[02:50] <wallyworld> ah, didn't realise that
[02:50] <tlm> but
[02:50] <tlm> 2.7.1 is released
[02:50] <tlm> I think
[02:51] <wallyworld> tlm: snap store thinks so!
[02:51] <wallyworld> i'll check simplestreams also
[02:51] <tlm> party time
[02:51] <wallyworld> and we should check the ppa manually too
[02:51] <wallyworld> using apt query blah
[02:51] <tlm> ok i'll jump on one of my ubuntu boxes
[02:53] <wallyworld> tlm: seem to be missing released simple streams http://streams.canonical.com/juju/tools/streams/v1/
[02:53] <wallyworld> metadata is in proposed but not released
[02:54] <wallyworld> maybe there's one more job still to run
[02:55] <tlm> it's all green when I check
[02:56] <wallyworld> maybe it takes a few minutes to wash through
[02:56] <wallyworld> needs to have a job run on jerff
[02:56] <tlm> ah that guy. It passed
[02:57] <tlm> https://jenkins.juju.canonical.com/job/release-juju-request-streams-update/120/
[16:05] <nammn_de> manadart: so i created the initial mocking for the apiserver/spaces tests https://github.com/juju/juju/pull/11088/  Your comment https://github.com/juju/juju/pull/11088#discussion_r367868986 I could not fully address. I cannot remove all of my changes from StubNetwork, because it needs to implement Backing, so I just left them there. That can be fixed
[16:05] <nammn_de> later when we move more and more tests from stubs to gomock
[16:15] <manadart> nammn_de: I think we could wrap the StubNetwork with empty stubs for those methods, and remove them. They won't be called from the old tests.
[16:16] <manadart> Remove them from StubNetwork that is.
[16:18] <nammn_de> manadart: how would you do that? My initial idea would be to just remove all of the code I added and just add those two stub methods:
[16:18] <nammn_de> AllEndpointBindings and AllMachines, without any logic, that just return
[16:18] <manadart> nammn_de: Yes.
[16:22] <nammn_de> manadart: the other "problem" i've encountered was: because we already have a test file in the spaces_test package, I could either put the new mock-based tests in a new folder or leave them in spaces for now; I put them in spaces for now
[16:24] <manadart> nammn_de: In the spaces package is OK. As long as it is suffixed with _test.go, it will only be built for tests. We do this in other places.
[16:24] <manadart> spaces_mock_test.go would follow the convention.
[16:26] <manadart> That's the mock. The tests can stay in spaces_test and import the mock.
[16:30] <nammn_de> manadart: what i found out from looking at the other packages is the following: mocks are under ../mocks/ and follow the naming of <interface/topic>_mock.go and the test itself is under <api>_test.go, but as we have both (stub and mock tests) I need to choose a new name. This would lead to spaces_mock_test.go, right?
[16:31] <nammn_de> and later when we have moved all from spaces_test, spaces_mock_test will be spaces_test