[02:17] <wallyworld> babbageclunk: hey, i know you'd love a caas review to finish your week :-D https://github.com/juju/juju/pull/8318
[02:51] <babbageclunk> wallyworld: hey, looking now! Sorry, Tim popped around to say hello.
[02:52] <wallyworld> no rush :-) you may need to ask questions about how it all hangs together. i'm interviewing again for the next hour though
[02:52] <babbageclunk> wallyworld: when you're not interviewing can you look at this? https://github.com/juju/juju/pull/8315
[02:53] <wallyworld> will do
[04:18] <wallyworld> babbageclunk: almost done review; i assume you cribbed from the model config command?
[04:19] <babbageclunk> wallyworld: yup - I looked at consolidating them but there were enough differences that it didn't seem worth it.
[04:19] <wallyworld> no worries, just helps me skim a bit :-)
[04:19] <babbageclunk> cool cool
[04:28] <wallyworld> babbageclunk: lgtm, just a little iffy on the []interface{} setting values
[04:30] <babbageclunk> wallyworld: thanks - I think we need to be able to handle those because that's what schema.List.Coerce returns.
[04:32] <wallyworld> babbageclunk: yeah, i think previously stuff stayed as a string when stored and was coerced on the way back out
[04:32] <wallyworld> can't recall the specifics
[04:33] <wallyworld> but i know there were issues around this stuff (corner cases etc)
[04:36] <babbageclunk> wallyworld: yeah, I could see some pieces of that in the ConfigFlag code
[04:36] <wallyworld> yeah, so there be dragons is all
[04:37] <wallyworld> it would be good to perhaps stick with our existing conventions - i'm sure some model config stuff is a map after being coerced
[04:37] <wallyworld> but is stored as a string
[04:37] <wallyworld> or maybe time to make a change, not sure
[04:38] <babbageclunk> Really? I'll have a look but I didn't change the way the values were stored in controller config (ie they were already being coerced, I just added a list value).
[04:45] <babbageclunk> wallyworld: It looks like we use SetPreserveStringValue for the config options on the deploy command (so I guess that's charm config) but not anywhere else.
[04:47] <wallyworld> babbageclunk: ok, let's get this in - perhaps an email to initiate a wider discussion would be good just to make sure we're all ok with the change. the fact that == changed to reflect.DeepEqual() suggests things used to work one way and now don't, or there was a bug
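For context on the []interface{} concern above, here is a minimal sketch, not the PR's actual code. Assuming the github.com/juju/schema package (the source of the schema.List mentioned earlier), it shows that a coerced list comes back as []interface{}, and that comparing such values needs reflect.DeepEqual rather than ==.

    // Sketch: why list-valued settings end up as []interface{} after
    // coercion, and why == no longer works for comparing stored values.
    package main

    import (
        "fmt"
        "reflect"

        "github.com/juju/schema"
    )

    func main() {
        checker := schema.List(schema.String())

        // A list value as it might arrive from the CLI/YAML layer.
        coerced, err := checker.Coerce([]string{"a", "b"}, nil)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%T\n", coerced) // []interface {}

        stored := []interface{}{"a", "b"}
        // Slices held in interface{} values cannot be compared with ==
        // (it panics at run time), so equality checks on settings need
        // reflect.DeepEqual.
        fmt.Println(reflect.DeepEqual(coerced, stored)) // true
    }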
[04:51] <babbageclunk> wallyworld: ok, will to
[04:51] <babbageclunk> s/t/d
[04:51] <babbageclunk> ha
[08:36] <rogpeppe1> anastasiamac: thanks for your quick fix!
[08:37] <anastasiamac> rogpeppe1: nws - need to land it now and see the difference! thank you for the analysis
[08:37] <rogpeppe1> anastasiamac: BTW a good point was raised by axino in #is - do you know why the dead model entry is staying around in the model pool for so long?
[08:38] <anastasiamac> rogpeppe1: it has also sparked an interesting and invigorating discussion about what to do with connections that pool manages ;) thank you for keeping us on our toes?.. ;)
[08:38] <anastasiamac> rogpeppe1: there must b a model connection that is not closed...
[08:39] <anastasiamac> rogpeppe1: we r considering force closing these on dead models after a timeout period (default 5mins but potentially configurable)
[08:39] <anastasiamac> rogpeppe1: i have suspected that we r not closing something somewhere but tracking it down is really really hard in asynch and http context
[08:40] <anastasiamac> rogpeppe1: it's easier to just close connections when needed
[08:40] <rogpeppe1> anastasiamac: there are techniques that can be used to track down stuff like that
[08:41] <anastasiamac> rogpeppe1: yes and i've tried... from what i have seen it gets hard because our connections are long lived and are reused for more than one biz concern/function... anyway, m planning to focus on it next week (tomorrow is a public holiday in OZ and am about eod)
[08:42] <rogpeppe1> anastasiamac: i'm trying to remember/find the Go package that i think might help with this
[08:42] <anastasiamac> rogpeppe1: would love to have ur opinion once i come up with solution(s) :D
[08:42] <anastasiamac> rogpeppe1: ohh u'd make my day!
[08:42] <rogpeppe1> anastasiamac: ah, this is what i was thinking of: https://godoc.org/golang.org/x/net/trace
[08:43] <rogpeppe1> anastasiamac: not sure if it would actually help though
[08:43] <anastasiamac> rogpeppe1: niice :) i'll see what i can do :)
[08:43] <rogpeppe1> anastasiamac: i'm not sure that records context of where existing objects are used though.
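A hedged sketch of how golang.org/x/net/trace could be applied to the long-lived connections being discussed: an EventLog attached to each connection records every use, and the package serves the results at /debug/requests and /debug/events on the default mux. The modelConn type and its methods are hypothetical, not juju code.

    // Sketch: per-connection event logs via golang.org/x/net/trace.
    package main

    import (
        "log"
        "net/http"

        "golang.org/x/net/trace"
    )

    type modelConn struct {
        events trace.EventLog
    }

    func newModelConn(modelUUID string) *modelConn {
        // Entries are grouped under the "state.Connection" family on
        // the /debug/events page.
        return &modelConn{events: trace.NewEventLog("state.Connection", modelUUID)}
    }

    func (c *modelConn) use(what string) {
        c.events.Printf("used by %s", what)
    }

    func (c *modelConn) close() {
        c.events.Printf("closed")
        c.events.Finish()
    }

    func main() {
        c := newModelConn("model-uuid-1234")
        c.use("apiserver request")
        c.close()

        // Importing the trace package registers the /debug/requests and
        // /debug/events handlers on http.DefaultServeMux.
        log.Fatal(http.ListenAndServe("localhost:6060", nil))
    }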
[08:44] <anastasiamac> rogpeppe1: last time i checked i had a list 2 pages long of different calls that had been made on one connection... i needed to pair up with someone at the time but TZs are not always in my favor :) maybe next week :)
[08:44] <rogpeppe1> anastasiamac: i might be inclined to store current user (perhaps the pcs of the stack trace) in the object when it's used
[08:45] <anastasiamac> rogpeppe1: i can record that context myself ;) i was thinking of adding more logging at trace for non-released connections so that we can track what's going on say in jimm...
[08:45] <rogpeppe1> anastasiamac: are connections shared concurrently between multiple things?
[08:45] <anastasiamac> rogpeppe1: i do not believe so
[08:46] <anastasiamac> rogpeppe1: i think it's at least one connection per http connection... but i'd need to check and m starting to check out: hungry mouths to feed, etc ;)
[08:49] <rogpeppe1> anastasiamac: so, looking at StatePool, i think it might be worth recording the stack pcs as well as the sourceKey in the referenceSources map
[08:50] <rogpeppe1> anastasiamac: then it would be possible to find out where all the Get calls for currently outstanding unreleased State instances were allocated
[08:50] <anastasiamac> rogpeppe1: yep, that's what i have on a branch that i abandoned last year.. was going to pick it up again next week :) and have stuff running off it
[08:51] <rogpeppe1> anastasiamac: you'd need some way of interrogating that too though
[08:53] <anastasiamac> rogpeppe1: my plan was to log it and run locally then just observe the log.. but i like the interrogation idea better since it can help in the field too :)
[08:54] <rogpeppe1> anastasiamac: ooo, there's already IntrospectionReport
[08:54] <rogpeppe1> anastasiamac: don't know where it's used tho
[08:56] <rogpeppe1> anastasiamac: cool, looks like you can just GET /statepool
[08:56] <anastasiamac> rogpeppe1: yep
[08:57] <rogpeppe1> anastasiamac: BTW do you know when jam/wallyworld/thumper are back online?
[08:57] <jam> rogpeppe1: wallyworld and thumper are gone this week and I think next. I'm around.
[08:58] <anastasiamac> rogpeppe1: wallyworld is sprinting next week and thumper should b back towards 2nd half next week
[08:58] <rogpeppe1> jam: ah, cool. i've a couple of PRs I wonder if you could take a look at.
[08:59] <rogpeppe1> anastasiamac: https://golang.org/pkg/runtime/#Callers - you could store a []uintptr in each referenceSources entry; relatively low cost, but provides the capability to get full stack trace out
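A minimal sketch of the runtime.Callers idea: record the caller PCs when a State is taken from the pool, then resolve them with runtime.CallersFrames when reporting on unreleased references. The names here (referenceSource, get, report) are illustrative, not the real StatePool code.

    // Sketch: capture PCs on acquisition, render them later for a report.
    package main

    import (
        "fmt"
        "runtime"
    )

    type referenceSource struct {
        sourceKey uint64
        stack     []uintptr // PCs captured when the reference was taken
    }

    var referenceSources = map[uint64]referenceSource{}

    func get(sourceKey uint64) {
        pcs := make([]uintptr, 32)
        // Skip runtime.Callers and get itself so the trace starts at the caller.
        n := runtime.Callers(2, pcs)
        referenceSources[sourceKey] = referenceSource{sourceKey, pcs[:n]}
    }

    // report turns the stored PCs into readable call sites, e.g. for an
    // introspection endpoint like the /statepool one mentioned earlier.
    func report() {
        for key, src := range referenceSources {
            fmt.Printf("reference %d acquired at:\n", key)
            frames := runtime.CallersFrames(src.stack)
            for {
                frame, more := frames.Next()
                fmt.Printf("  %s\n    %s:%d\n", frame.Function, frame.File, frame.Line)
                if !more {
                    break
                }
            }
        }
    }

    func main() {
        get(1)
        report()
    }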
[09:00] <rogpeppe1> jam: https://github.com/juju/juju/pull/8304 and https://github.com/juju/juju/pull/8305
[09:00] <anastasiamac> rogpeppe1: full stack trace in prev look only returned http context not actual calls but i'll have a look
[09:00] <rogpeppe1> anastasiamac: i mean recording the stack trace when Get is called
[09:00] <rogpeppe1> anastasiamac: or... maybe that's called too early to be of use :-(
[09:02] <anastasiamac> rogpeppe1: yeah, i found it was a bit early, i had to add some fun callbacks to get the real trace :0 i'll need to revisit my shelf to refresh my memory but m about to switch off my machine
[09:02] <anastasiamac> rogpeppe1: so next week :)
[09:03] <rogpeppe1> anastasiamac: have a great weekend!
[09:03] <rogpeppe1> anastasiamac: and thanks for the quick fixes
[09:03] <anastasiamac> rogpeppe1: my pleasure :)
[16:55] <balloons> vern, were you able to conquer jjb and get a new job created?
[16:56] <vern> actually, I forgot and was looking at the joyent job yesterday
[16:57] <vern> I remembered late last night. am tackling jjb now :)
[16:58] <balloons> vern, no worries :-)
[16:59] <balloons> vern, it also seems we should be fairly safe from any big overhauls that would affect the simple use case for backup restore
[17:00] <balloons> I spoke with wallyworld about intentions to confirm
[17:01] <vern> cool. thanks for checking, balloons
[17:58] <hml> anyone tried to bootstrap localhost with juju snap (edge channel) today?
[17:58] <hml> i’m using the lxd snap too
[17:58] <hml> ERROR detecting credentials for "localhost" cloud provider: failed to connect to local LXD: not found
[17:59] <hml> stable channel is okay
[18:00] <rick_h> hml: sec, I can test it on my machine
[18:00] <hml> rick_h: thank you
[18:00] <hml> my controller from yesterday - can’t connect to the containers any longer - they lost their ip addresses
[18:01] <rick_h> hml: hmm, I can't get lxd init to go getting "error: Error inserting default into database: attempt to write a readonly database"
[18:01] <hml> i saw that too at one point
[18:02] <hml> i think the lxd snap updated yesterday
[18:02] <hml> i wonder how 2.3.2 can’t bootstrap lxd?
[18:02] <hml> s/can’t/can
[18:03] <rick_h> hml: so I've got installed:  lxd 2.21 from yesterday. Did you have a later one?
[18:03] <hml> rick_h: i’m on the same - it refreshed yesterday
[18:05] <hml> rick_h: lxc init does work for me but the new container is STOPPED
[18:07] <rick_h> hml: sorry, can't help atm as mine is busted up. Looking into what's up.
[18:07] <hml> rick_h: i’m in the same state :-) though knowing lxd is giving others issues helps
[19:02] <hml> rick_h: any luck? looks like something with the lxc networking, the juju failures lead there - and the container which bootstrapped today lost its ip address and juju can no longer reach it
[19:03] <rick_h> hml: no, this error seems to be around a sqlite db not being writable, and the only bug I see atm is around not having enough inodes?
[19:03] <rick_h> hml: sorry, was just rebooting the thing and then got sidetracked while it rebooted
[19:03] <hml> :-)
[19:03] <hml> rick_h: i haven’t been able to find any errors in logs around this
[19:04] <hml> rick_h: i tried to revert the snap, but that failed
[19:09] <rick_h> hml: so after reboot my inode issue is ok, lxd init works, and bootstrapping with --debug now
[19:11] <hml> rick_h: hmmmm
[19:11] <rick_h> hml: getting images...wheeee
[19:12] <hml> rick_h: good that it’s back up and running - not cool if it requires a reboot
[19:24] <rick_h> hml: so all done and bootstrapped here
[19:24] <rick_h> 2.4-beta1+develop-dc661f7
[19:25] <hml> rick_h: okay - i’m taking the opportunity to improve on the “ERROR not found” message
[19:25] <rick_h> hml: lol why? I mean that's clearly ... umm ...
[19:25] <hml> lol
[20:04]  * balloons reads backlog
[20:05] <balloons> I thought it was just me this morning
[20:05] <balloons> sounds like if I try again things will work :)
[20:05] <hml> balloons: reboot first
[20:05] <hml> :-/
[20:06] <balloons> my system was rebooting from the microcode issues
[20:06] <balloons> that was annoying. I think it's stopped today finally
[20:06]  * hml fingers crossed
[20:08] <hml> quick pr review anyone?  https://github.com/juju/juju/pull/8320
[20:12] <balloons> I feel like we might end up playing with the message
[20:16] <hml> balloons: word play suggestions are welcome
[20:17] <balloons> hml, I left a comment and approved
[20:20] <hml> balloons: ty
[20:21] <rick_h> "there once was an error from nantucket..."
[20:22] <hml> ha!
[20:40] <hml> rick_h: balloons: i think my ubuntu box has gremlins :-) - previously failing lxc commands have started working again. and restarted containers have ip addrs (so far)
[21:28] <hml> vern: balloons : do you know where I can look to see why this failed: http://ci.jujucharms.com/job/github-merge-juju/800/console  ?  ty
[21:33] <vern> hml: good question... I'd expect right there but not much more in that output than "ERROR: Build failure, reporting on proposal"
[21:34] <vern> checking the jenkins job...
[21:35] <veebers> hml: go install github.com/juju/juju/acceptancetests/repository/charms/fill-logs/actions: build output "/var/lib/jenkins/workspace/github-merge-juju@tmp/bug1745459-800/tmp.4wz9MQ0PI7/juju-core_2.4-beta1/bin/actions" already exists and is not an object file
[21:35] <veebers> I'm not sure why that's the case though :-\ Although I am currently updating that job
[21:35] <veebers> I lie, I'm not updating that job
[21:35] <veebers> I'm doing the merge check job
[21:36] <hml> veebers: :-) it was the 2nd attempt at merge???  but that “shouldn’t” matter
[21:43] <veebers> hml: shouldn't have, not sure why it choked. Seems happier this time around, hopefully those unit tests pass
[21:46] <hml> veebers: 3rd time’s the charm?  first was a known transient failure
[21:46] <vern> hml: for what it's worth: the newer job is already past the point the other failed
[21:46] <hml> good news
[22:03] <veebers> *sigh* any time the config for the check-merge job is saved it re-scans and re-runs for old PRs :-\
[22:04] <veebers> If you see a comment on an older PR pop up this is why