[00:17]  * thumper off to the vet
[00:17] <perrito666> So anyone can point me to the nice path to go from controller/config.go to, let's say, state? I feel like I'm missing something
[00:27] <babbageclunk> thumper's getting a cone of shame applied.
[00:45] <perrito666> Oh I would so pay to see that
[00:52] <babbageclunk> axw: yay, adding in an individual get works, thanks!
[00:52] <axw> babbageclunk: sweet :)
[00:53] <babbageclunk> axw: I've got one other thing that is a bit nasty in my standalone code now though - maybe you can tell me a better way?
[00:54] <babbageclunk> axw: In order to pass the right APIVersion values I get all of the providers first and get the latest version from the ProviderClient.
[00:54] <babbageclunk> axw: Basically building a big map of resource type -> version.
[00:56] <babbageclunk> axw: uh, that was clumsily phrased - I mean, I get all the providers from the provider client and build a big map of the latest api version for each type.
[00:56] <babbageclunk> Then I need to look up the version before each call for the different resources.
[00:57] <axw> babbageclunk: that's probably the right thing to do, I'm not really sure tho - haven't done this before
[00:57] <axw> babbageclunk: I don't think we'd want to hard-code, because we're using the generic resource API
[00:57] <axw> so I'm presuming we get the most recent version
[00:58] <axw> babbageclunk: are you sure it's not meant to be the API version for the generic resource API?
[00:59] <babbageclunk> axw: Yeah, I'm sure - I get errors if I leave it as the resource API version - for some resource types it says there's no provider registered.
[00:59] <axw> babbageclunk: ok
[01:00] <babbageclunk> axw: The error message nicely says which are available, so I could compare that to the output from the azure CLI and check.
[01:02] <axw> babbageclunk: I think listing providers is the way to go
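The approach babbageclunk describes (list all the providers once, then build a map of resource type to latest API version) can be sketched roughly as below. This is a hedged illustration only: the dict shape loosely mirrors Azure Resource Manager's "list providers" response, but the data, function name, and helper structure are mine, not the actual Juju or Azure SDK code.

```python
def latest_api_versions(providers):
    """Build a {namespace/resourceType: newest API version} map.

    `providers` is assumed to be a list shaped like an ARM
    "list providers" response (namespace, resourceTypes, apiVersions).
    """
    versions = {}
    for provider in providers:
        namespace = provider["namespace"]
        for rt in provider["resourceTypes"]:
            key = "{}/{}".format(namespace, rt["resourceType"])
            # API versions are date strings like "2016-03-30", so the
            # lexicographic maximum is also the most recent.
            versions[key] = max(rt["apiVersions"])
    return versions

# Illustrative data, not real provider output:
providers = [
    {"namespace": "Microsoft.Compute",
     "resourceTypes": [
         {"resourceType": "virtualMachines",
          "apiVersions": ["2015-06-15", "2016-03-30"]},
     ]},
    {"namespace": "Microsoft.Network",
     "resourceTypes": [
         {"resourceType": "publicIPAddresses",
          "apiVersions": ["2016-09-01", "2015-06-15"]},
     ]},
]

version_map = latest_api_versions(providers)
# Each subsequent call would then look up, e.g.,
# version_map["Microsoft.Compute/virtualMachines"] before hitting the API.
```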
[01:02] <babbageclunk> axw: Cool. Anyway, thanks for all your help! I think it's a simple matter of porting what I've got into the environ and putting tests in.
[01:02] <axw> babbageclunk: cool. I replied to your query from the other day about reentrancy btw
[01:03] <babbageclunk> axw: oh great - I'll check it out.
[01:04] <redir> is there a way to specify series for a controller: e.g. juju bootstrap cloud controller --config series=trusty
[01:04] <redir> i know you can set default-series, but can you specify it just for the controller?
[01:15] <babbageclunk> axw: replied - I think it'll handle that ok.
[01:16] <axw> babbageclunk: yes, I was really just suggesting an optimisation. feel free to ignore, since this is going to happen infrequently
[01:25] <babbageclunk> axw: Oh, I'll add the check. I guess the extra thing would be to filter by the controller tag (with the old uuid) in the call to ListResources? But that would require the old uuid being passed in.
[01:26] <axw> babbageclunk: I think we can do without that for now
[01:26] <axw> babbageclunk: and trust that the migration worker does the right thing :)
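The optional filter discussed above (restrict ListResources to resources tagged with the old controller's UUID) could look roughly like this. The tag key and resource shape are illustrative assumptions, not the actual Juju or Azure types:

```python
# Assumed tag key; Juju's Azure provider tags resources with a
# controller-UUID tag, but the exact key here is illustrative.
CONTROLLER_TAG = "juju-controller-uuid"

def resources_for_controller(resources, old_uuid):
    """Keep only resources tagged with the old controller's UUID."""
    return [r for r in resources
            if r.get("tags", {}).get(CONTROLLER_TAG) == old_uuid]

# Illustrative data:
resources = [
    {"name": "vm-0", "tags": {CONTROLLER_TAG: "1111"}},
    {"name": "vm-1", "tags": {CONTROLLER_TAG: "2222"}},
    {"name": "disk-untagged", "tags": {}},
]

matched = resources_for_controller(resources, "1111")
```

As axw notes, this requires the old UUID to be passed in, which is why skipping the filter and trusting the migration worker is the simpler first cut.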
[01:34] <axw> babbageclunk: ping? coming to perf meeting?
[01:35] <blahdeblah> Hi all, could I have some quick advice, please?  I'm trying to work out why this env is just sitting in "waiting for machine" state across the board: https://pastebin.canonical.com/177303/
[01:35] <blahdeblah> Where's the best place to look for debug info?
[02:03] <babbageclunk> blahdeblah: I'd look at the controller logs - juju debug-log -m controller
[02:04] <babbageclunk> blahdeblah: Can you see the machines in the underlying cloud?
[02:04] <blahdeblah> babbageclunk: no, apparently it never gets that far
[02:05] <babbageclunk> blahdeblah: it's weird though - normally the machines reporting "down" indicates that the machines have been provisioned but the agent isn't running.
[02:06] <blahdeblah> babbageclunk: the CI job has just timed out and destroyed the model; all I got relevant from the controller logs was this: https://pastebin.canonical.com/177304/
[02:07] <blahdeblah> I'll run the job again and try the debug log while it's starting up
[02:07] <blahdeblah> babbageclunk: can we run the debug log before the model we're interested in exists?
[02:08] <blahdeblah> Oh, looks like that's just the equivalent of juju switch controller; juju debug-log, so I'm going to assume yes.
[02:08] <babbageclunk> blahdeblah: yeah, that's what I was about to say :)
[02:09] <blahdeblah> :-)
[02:09] <blahdeblah> OK - another run under way now
[02:11] <babbageclunk> blahdeblah: if it happens again, are you able to ssh into the provisioned machines without using `juju ssh`?
[02:13] <blahdeblah> babbageclunk: It doesn't seem to be getting that far
[02:14] <blahdeblah> OK, here's progress: https://pastebin.canonical.com/177306/
[02:14] <babbageclunk> blahdeblah: hmm. When I've seen machines stuck in down, it's because there's a problem running cloud-init and so it hasn't been able to start jujud.
[02:14] <blahdeblah> Yeah - I think it's failing before that
[02:15] <babbageclunk> blahdeblah: yeah, I see what you mean
[02:18] <babbageclunk> blahdeblah: I don't know much about openstack - does that error message help?
[02:18] <blahdeblah> babbageclunk: yeah - it does; just checking our build scripts to see what's responsible for specifying it
[02:18] <blahdeblah> thanks for the help
[02:19] <babbageclunk> blahdeblah: no worries!
[02:20]  * babbageclunk goes for a run
[02:20] <blahdeblah> excellent choice
[02:20] <blahdeblah> Although, depending on what part of the world you're in, maybe not. :-)
[02:38]  * redir goes to make dinner and eods
[03:15] <thumper> babbageclunk: https://github.com/juju/juju/pull/6879
[03:16] <blahdeblah> babbageclunk: Thanks for all the help; adding the correct openstack network sorted that out
[03:19] <thumper> babbageclunk: hold off on that review for a few minutes, found a few more test failures to fix
[03:31] <babbageclunk> blahdeblah: glad to hear it! (I'm in NZ, so it was a great choice!)
[03:35] <blahdeblah> babbageclunk: I went running with some folks at LCA2017, and one is a Linux kernel engineer who works for Microsoft somewhere in Canada.  He reckons running anywhere down to about -30C is fine, except for the ice beard.  I reckon he's nuts. :-)
[03:36] <babbageclunk> blahdeblah: ouch! I've been running at -2C (during a cold snap in London) but I can't imagine running in that kind of cold.
[03:37] <blahdeblah> I'm too wimpy to run below about 10.  :-)
[03:37] <babbageclunk> blahdeblah: also I had to read "linux kernel engineer working for Microsoft" a few times.
[03:38] <blahdeblah> Yeah - pretty funny
[03:38] <blahdeblah> But apparently true
[03:38] <babbageclunk> nice
[03:38] <babbageclunk> a former colleague of mine always says "no such thing as bad weather, just the wrong clothes"
[03:40] <babbageclunk> thumper: let me know when it's good to look at that PR
[03:57] <thumper> babbageclunk: it's there now
[03:57]  * thumper heads off to jitz
[04:00] <bradm> anyone about?  I'm getting a weird message constantly streaming in my juju state server logs - http://pastebin.ubuntu.com/23872991/
[05:33] <thumper> babbageclunk: thanks for the review, I won't land it just yet as I'm about to head off for a week
[05:33] <thumper> babbageclunk: stack.Debug is *way* faster than all the DB access we do
[05:34] <thumper> so any db access will completely hide any cost to stack.Debug
[05:34] <thumper> bradm: that looks badish, what version?
[05:35] <thumper> bradm: I'm guessing pre 2.0
[05:36] <bradm> thumper: nope, 2.0
[05:36] <thumper> bradm: hmm...
[05:37] <bradm> thumper: I had some weird transaction issues, so I cleaned those up with mgopurge, which seemed to work ok
[05:37] <thumper> bradm: what is the background with the controller / models?
[05:37] <bradm> thumper: I've had to end up re-bootstrapping :-/
[05:37] <thumper> I'm guessing... it was mongo data loss...
[05:37] <thumper> which would have caused transaction issues
[05:37] <bradm> thumper: but the background is it's a stack being deployed, so it's being taken up and down to test out the deployment
[05:37] <thumper> and missing setting doc
[05:38] <thumper> well... poo
[05:38] <bradm> thumper: so long running controller, with the model being deployed, torn down, remade with same name, then deployed again
[05:38] <thumper> hmm
[05:38] <thumper> there shouldn't be a problem
[05:39] <bradm> thumper: the fun part was a juju list-models didn't show it, but a list-models --all whinged about missing data
[05:39] <thumper> the only thing I have seen that causes transaction issues is mongo data loss
[05:39] <thumper> hmm... sounds like a half torn down model
[05:39] <bradm> yeah, I wonder if some corruption happened during tearing down the model
[05:40] <bradm> huh, this is actually 2.0.1, I should probably update to 2.0.2
[05:49] <thumper> good luck folks
[05:49]  * thumper out
[17:39] <deanman> jcastro, Are you available? Following up on the action that needs to be completed by the charmers program owner so charmers can test their centos charms.
[17:40] <mup> Bug #1587644 changed: jujud and mongo cpu/ram usage spike <canonical-bootstack> <canonical-is> <eda> <performance> <juju:Fix Committed by axwalk> <juju-core:Fix Released by axwalk> <juju-core 1.25:Fix Released by axwalk> <https://launchpad.net/bugs/1587644>
[17:40] <mup> Bug #1654528 changed: log sending broke between 1.25.6 and 1.25.9 on trusty <canonical-is> <juju-core:Fix Released by rogpeppe> <https://launchpad.net/bugs/1654528>
[17:40] <jcastro> deanman: sure, I'm free for the next 1h20m, how can I help?
[17:40] <deanman> Using private simplestreams from sinzui I get the following error http://paste.ubuntu.com/23875938/ and I think it's just a simple action from you or Marco
[17:41] <jcastro> marcoceppi_: have you seen that before? ^^
[17:41] <marcoceppi_> jcastro deanman nope.
[17:41] <marcoceppi_> that's weird
[17:41] <jcastro> I wonder if it's a new account that's hit a limit?
[17:41] <jcastro> but we've gotten that before and the error looks nothing like that
[17:43] <deanman> I could search for the chat transcripts we had at the end of December; sinzui was basically letting me use his private simplestreams with access to centos images. Would that help?
[17:44] <jcastro> I wonder if you need to accept some eula or something to use those images?
[17:45] <deanman> well, from the error message description I understand that I have to, but since I don't have access to the AWS console, I can't see that through?
[17:47] <jcastro> does the account holder have access to the console?
[17:48] <jcastro> balloons: yo anyone else on your team know anything about these centos images?
[17:48] <jcastro> I've never even seen this error before
[17:50] <deanman> marcoceppi_, Did you try that link https://aws.amazon.com/marketplace/pp?sku=aw0evgkw8e5c1q413zgy5pjce ?
[17:51] <marcoceppi_> deanman: OH I SEE
[17:51] <marcoceppi_> deanman: okay, I can help, probably
[17:51] <marcoceppi_> deanman: what region?
[17:52] <deanman> I'm on eu-west-1
[17:54] <marcoceppi_> deanman: try now?
[17:59] <deanman> thank you sir, it worked!
[17:59] <deanman> some weird other behavior though....
[18:06] <deanman> marcoceppi_, I could see the machine being created but it gets stuck at pending with the following http://paste.ubuntu.com/23876093/. Does that look normal to you?
[18:06] <marcoceppi_> deanman: that's beyond me
[18:08] <deanman> marcoceppi_, no worries, I'll try to look into this more and get help at some other time
[23:02] <mskalka> Not sure if here or #juju is the right place for this, but does the reactive charm framework support @when('config-changed') or similar? My google-fu is failing
[23:17] <cmars> mskalka, it does.. looking up the state names for this
[23:18] <cmars> mskalka, https://github.com/juju-solutions/layer-basic#reactive-states
[23:20] <mskalka> cmars: Thanks! That's exactly what I was looking for.
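The pattern cmars points to can be sketched as follows: layer-basic sets reactive states such as `config.changed` (and per-key variants like `config.changed.<key>`) when charm config changes, and handlers decorated with `@when` fire on them. The tiny `when` registry and `dispatch` loop below are stand-ins for charms.reactive so the sketch runs standalone; in a real charm you would import `when` from `charms.reactive` instead.

```python
# Stub registry standing in for charms.reactive (illustrative only).
_handlers = []

def when(state):
    """Register a handler to run when `state` is active (stub)."""
    def register(fn):
        _handlers.append((state, fn))
        return fn
    return register

@when("config.changed")
def reconfigure():
    # In a real charm this would re-render config files, etc.
    return "reconfigured"

@when("config.changed.port")
def restart_on_port_change():
    return "restarted"

def dispatch(active_states):
    """Run every registered handler whose state is active (stub loop)."""
    return [fn() for state, fn in _handlers if state in active_states]

results = dispatch({"config.changed", "config.changed.port"})
```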