/srv/irclogs.ubuntu.com/2018/01/25/#juju-dev.txt

[02:17] <wallyworld> babbageclunk: hey, i know you'd love a caas review to finish your week :-D https://github.com/juju/juju/pull/8318
[02:51] <babbageclunk> wallyworld: hey, looking now! Sorry, Tim popped around to say hello.
[02:52] <wallyworld> no rush :-) you may need to ask questions about how it all hangs together. i'm interviewing again for the next hour though
[02:52] <babbageclunk> wallyworld: when you're not interviewing can you look at this? https://github.com/juju/juju/pull/8315
[02:53] <wallyworld> will do
[04:18] <wallyworld> babbageclunk: almost done review; i assume you cribbed from the model config command?
[04:19] <babbageclunk> wallyworld: yup - I looked at consolidating them but there were enough differences that it didn't seem worth it.
[04:19] <wallyworld> no worries, just helps me skim a bit :-)
[04:19] <babbageclunk> cool cool
[04:28] <wallyworld> babbageclunk: lgtm, just a little iffy on the []interface{} setting values
[04:30] <babbageclunk> wallyworld: thanks - I think we need to be able to handle those because that's what schema.List.Coerce returns.
[04:32] <wallyworld> babbageclunk: yeah, i think previously stuff stayed as a string when stored and was coerced on the way back out
[04:32] <wallyworld> can't recall the specifics
[04:33] <wallyworld> but i know there were issues around this stuff (corner cases etc)
[04:36] <babbageclunk> wallyworld: yeah, I could see some pieces of that in the ConfigFlag code
[04:36] <wallyworld> yeah, so there be dragons is all
[04:37] <wallyworld> it would be good to perhaps stick with our existing conventions - i'm sure some model config stuff is a map after being coerced
[04:37] <wallyworld> but is stored as a string
[04:37] <wallyworld> or maybe time to make a change, not sure
[04:38] <babbageclunk> Really? I'll have a look but I didn't change the way the values were stored in controller config (ie they were already being coerced, I just added a list value).
[04:45] <babbageclunk> wallyworld: It looks like we use SetPreserveStringValue for the config options on the deploy command (so I guess that's charm config) but not anywhere else.
[04:47] <wallyworld> babbageclunk: ok, let's get this in - perhaps an email to initiate a wider discussion would be good just to make sure we're all ok with the change. the fact that == changed to reflect.DeepEqual suggests things used to work one way and now not, or there was a bug
[04:51] <babbageclunk> wallyworld: ok, will to
[04:51] <babbageclunk> s/t/d
[04:51] <babbageclunk> ha
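(Editor's aside on the []interface{} discussion above: a minimal Go sketch of why a settings comparison has to move from == to reflect.DeepEqual once values go through a list checker. The schema.List(schema.String()) field below is a hypothetical stand-in, not the exact checker from the PR.)

```go
package main

import (
	"fmt"
	"reflect"

	"github.com/juju/schema"
)

func main() {
	// Hypothetical list-typed controller-config field.
	checker := schema.List(schema.String())

	a, _ := checker.Coerce([]interface{}{"alpha", "beta"}, nil)
	b, _ := checker.Coerce([]interface{}{"alpha", "beta"}, nil)

	// a == b compiles (both are interface{} values), but panics at run
	// time because the dynamic type []interface{} is not comparable;
	// reflect.DeepEqual compares the elements instead.
	fmt.Println(reflect.DeepEqual(a, b)) // true
}
```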
[08:36] <rogpeppe1> anastasiamac: thanks for your quick fix!
[08:37] <anastasiamac> rogpeppe1: nws - need to land it now and see the difference! thank you for the analysis
[08:37] <rogpeppe1> anastasiamac: BTW a good point was raised by axino in #is - do you know why the dead model entry is staying around in the model pool for so long?
[08:38] <anastasiamac> rogpeppe1: it has also sparked an interesting and invigorating discussion about what to do with connections that pool manages ;) thank you for keeping us on our toes?.. ;)
[08:38] <anastasiamac> rogpeppe1: there must b a model connection that is not closed...
[08:39] <anastasiamac> rogpeppe1: we r considering to force close these on dead models after a timeout period (default 5mins but potentially configurable)
[08:39] <anastasiamac> rogpeppe1: i have suspected that we r not closing something somewhere but tracking it down is really really hard in asynch and http context
[08:40] <anastasiamac> rogpeppe1: it's easier to just close connections when needed
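(Editor's aside: a rough sketch of the "force close after a timeout" idea described above. None of the names match Juju's real StatePool; the 5-minute default comes from the conversation, everything else is invented for illustration.)

```go
package pool

import (
	"sync"
	"time"
)

// item is a hypothetical pooled connection that may still have outstanding
// references when its model becomes dead.
type item struct {
	mu        sync.Mutex
	refs      int // outstanding references; a forced close ignores them
	dead      bool
	deadSince time.Time
	closeConn func() // closes the underlying connection
}

// reap force-closes entries whose model has been dead for longer than
// maxIdle (e.g. 5 minutes), even if callers never released their references.
// It would be driven by a periodic ticker in the pool.
func reap(items map[string]*item, maxIdle time.Duration) {
	now := time.Now()
	for uuid, it := range items {
		it.mu.Lock()
		expired := it.dead && now.Sub(it.deadSince) > maxIdle
		it.mu.Unlock()
		if expired {
			it.closeConn()
			delete(items, uuid)
		}
	}
}
```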
[08:40] <rogpeppe1> anastasiamac: there are techniques that can be used to track down stuff like that
[08:41] <anastasiamac> rogpeppe1: yes and i've tried... from what i have seen it grew for me as our connections are long lived and are reused for more than one biz concern/function... anyway, m planning to focus on it next week (tomorrow is public holiday in OZ and am about eod)
[08:42] <rogpeppe1> anastasiamac: i'm trying to remember/find the Go package that i think might help with this
[08:42] <anastasiamac> rogpeppe1: would love to have ur opinion once i come up with solution(s) :D
[08:42] <anastasiamac> rogpeppe1: ohh u'd make my day!
[08:42] <rogpeppe1> anastasiamac: ah, this is what i was thinking of: https://godoc.org/golang.org/x/net/trace
[08:43] <rogpeppe1> anastasiamac: not sure if it would actually help though
[08:43] <anastasiamac> rogpeppe1: niice :) i'll see what i can do :)
[08:43] <rogpeppe1> anastasiamac: i'm not sure that records context of where existing objects are used though.
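(Editor's aside: for readers who have not used the golang.org/x/net/trace package linked above, a minimal sketch of the usual pattern, one trace per request, viewable on the /debug/requests page. The handler and family name are invented; as rogpeppe notes, this traces request lifetimes rather than recording who currently holds a pooled object.)

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/trace"
)

func handle(w http.ResponseWriter, req *http.Request) {
	// One trace per request; finished traces show up (sampled) on the
	// /debug/requests page.
	tr := trace.New("apiserver.Request", req.URL.Path)
	defer tr.Finish()

	tr.LazyPrintf("handling %s", req.URL.Path)
	// ... real work would go here ...
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/", handle)
	// The trace package registers /debug/requests and /debug/events on the
	// default mux, so browsing there shows recent and in-flight traces.
	log.Fatal(http.ListenAndServe("localhost:8080", nil))
}
```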
[08:44] <anastasiamac> rogpeppe1: last time i've checked i had a list 2 pages long of different calls that have been made on one connection... i needed to pair up with someone at the time but TZs is not always in my favor :) maybe next week :)
[08:44] <rogpeppe1> anastasiamac: i might be inclined to store current user (perhaps the pcs of the stack trace) in the object when it's used
[08:45] <anastasiamac> rogpeppe1: i can record that context myself ;) i was thinking of adding more logging at trace for non-released connections so that we can track what's going on say in jimm...
[08:45] <rogpeppe1> anastasiamac: are connections shared concurrently between multiple things?
[08:45] <anastasiamac> rogpeppe1: i do not believe so
[08:46] <anastasiamac> rogpeppe1: i think it's at least one connection per http connection... but i'd need to check and m starting to checkout: hungry mouths to feed, etc ;)
[08:49] <rogpeppe1> anastasiamac: so, looking at StatePool, i think it might be worth recording the stack pcs as well as the sourceKey in the referenceSources map
[08:50] <rogpeppe1> anastasiamac: then it would be possible to find out where all the Get calls for currently outstanding unreleased State instances were allocated
[08:50] <anastasiamac> rogpeppe1: yep, that's what i have on a branch that i abandoned last year.. was going to pick it up again next week :) and have stuff running off it
[08:51] <rogpeppe1> anastasiamac: you'd need some way of interrogating that too though
[08:53] <anastasiamac> rogpeppe1: my plan was to log it and run locally then just observe the log.. but i like the interrogation idea better since it can help in the field too :)
[08:54] <rogpeppe1> anastasiamac: ooo, there's already IntrospectionReport
[08:54] <rogpeppe1> anastasiamac: don't know where it's used tho
[08:56] <rogpeppe1> anastasiamac: cool, looks like you can just GET /statepool
[08:56] <anastasiamac> rogpeppe1: yep
[08:57] <rogpeppe1> anastasiamac: BTW do you know when jam/wallyworld/thumper are back online?
[08:57] <jam> rogpeppe1: wallyworld and thumper are gone this week and I think next. I'm around.
[08:58] <anastasiamac> rogpeppe1: wallyworld is sprinting next week and thumper should b back towards 2nd half next week
[08:58] <rogpeppe1> jam: ah, cool. i've a couple of PRs I wonder if you could take a look at.
[08:59] <rogpeppe1> anastasiamac: https://golang.org/pkg/runtime/#Callers - you could store a []uintptr in each referenceSources entry; relatively low cost, but provides the capability to get full stack trace out
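(Editor's aside: a sketch of the []uintptr idea above, capturing the caller's program counters when a State is checked out of the pool and only resolving them to file:line when a report is requested. The types are invented for illustration; they are not the real referenceSources entries.)

```go
package pool

import (
	"bytes"
	"fmt"
	"runtime"
)

// source records where one outstanding Get call came from.
type source struct {
	pcs []uintptr
}

// capture grabs the program counters of the caller's stack. Storing raw PCs
// is cheap; they are only resolved when a report is requested. skip=3
// assumes capture is called directly from the pool's Get method, so the
// recorded stack starts at Get's caller.
func capture() source {
	pcs := make([]uintptr, 32)
	n := runtime.Callers(3, pcs)
	return source{pcs: pcs[:n]}
}

// String resolves the stored PCs into a readable stack trace.
func (s source) String() string {
	var b bytes.Buffer
	frames := runtime.CallersFrames(s.pcs)
	for {
		frame, more := frames.Next()
		fmt.Fprintf(&b, "%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
		if !more {
			break
		}
	}
	return b.String()
}
```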
[09:00] <rogpeppe1> jam: https://github.com/juju/juju/pull/8304 and https://github.com/juju/juju/pull/8305
[09:00] <anastasiamac> rogpeppe1: full stack trace in prev look only returned http context not actual calls but i'll have a look
[09:00] <rogpeppe1> anastasiamac: i mean recording the stack trace when Get is called
[09:00] <rogpeppe1> anastasiamac: or... maybe that's called too early to be of use :-(
[09:02] <anastasiamac> rogpeppe1: yeah, i found it was a bit early, i had to add some fun callbacks to get the real trace :0 i'll need to revisit my shelf to refresh my memory but m about to switch off my machine
[09:02] <anastasiamac> rogpeppe1: so next week :)
[09:03] <rogpeppe1> anastasiamac: have a great weekend!
[09:03] <rogpeppe1> anastasiamac: and thanks for the quick fixes
[09:03] <anastasiamac> rogpeppe1: my pleasure :)
=== rogpeppe1 is now known as rogpeppe
=== Spads_ is now known as Spads
[16:55] <balloons> vern, were you able to conquer jjb and get a new job created?
[16:56] <vern> actually, I forgot and was looking at the joyent job yesterday
[16:57] <vern> I remembered late last night. am tackling jjb now :)
[16:58] <balloons> vern, no worries :-)
[16:59] <balloons> vern, it also seems we should be fairly safe from any big overhauls that would affect the simple use case for backup restore
[17:00] <balloons> I spoke with wallyworld about intentions to confirm
[17:01] <vern> cool. thanks for checking, balloons
[17:58] <hml> anyone tried to bootstrap localhost with juju snap (edge channel) today?
[17:58] <hml> i’m using the lxd snap too
[17:58] <hml> ERROR detecting credentials for "localhost" cloud provider: failed to connect to local LXD: not found
[17:59] <hml> stable channel is okay
[18:00] <rick_h> hml: sec, I can test it on my machine
[18:00] <hml> rick_h: thank you
[18:00] <hml> my controller from yesterday - can’t connect to the containers any longer - they lost their ip addresses
[18:01] <rick_h> hml: hmm, I can't get lxd init to go, getting "error: Error inserting default into database: attempt to write a readonly database"
[18:01] <hml> i saw that too at one point
[18:02] <hml> i think the lxd snap updated yesterday
[18:02] <hml> i wonder how 2.3.2 can’t bootstrap lxd?
[18:02] <hml> s/can’t/can
[18:03] <rick_h> hml: so I've got installed: lxd 2.21 from yesterday. Did you have a later one?
[18:03] <hml> rick_h: i’m on the same - it refreshed yesterday
[18:05] <hml> rick_h: lxc init does work for me but the new container is STOPPED
[18:07] <rick_h> hml: sorry, can't help atm as mine is busted up. Looking into what's up.
[18:07] <hml> rick_h: i’m in the same state :-) though knowing lxd is giving others issues helps
[19:02] <hml> rick_h: any luck?  looks like something with the lxc networking, the juju failures lead there - and the container which bootstrapped today lost its ip address and juju can no longer reach it
[19:03] <rick_h> hml: no, this error seems to be around a sqlite db not being able to be written and the only bug I see atm is around enough inodes?
[19:03] <rick_h> hml: sorry, was just rebooting the thing and then got sidetracked while it rebooted
[19:03] <hml> :-)
[19:03] <hml> rick_h: i haven’t been able to find any errors in logs around this
[19:04] <hml> rick_h: i tried to revert the snap, but that failed
[19:09] <rick_h> hml: so after reboot my inode issue is ok, lxd init works, and bootstrapping with --debug now
[19:11] <hml> rick_h: hmmmm
[19:11] <rick_h> hml: getting images...wheeee
[19:12] <hml> rick_h: good that it’s back up and running - not cool if it requires a reboot
[19:24] <rick_h> hml: so all done and bootstrapped here
[19:24] <rick_h> 2.4-beta1+develop-dc661f7
[19:25] <hml> rick_h: okay - i’m taking the opportunity to improve on the “ERROR not found” message
[19:25] <rick_h> hml: lol why? I mean that's clearly ... umm ...
[19:25] <hml> lol
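(Editor's aside: not the actual change in the PR linked below, just a sketch of the usual github.com/juju/errors pattern for improving a bare "not found": give the error a subject and annotate it with the context known at the call site. The provider function names are invented.)

```go
package lxdprovider

import (
	"github.com/juju/errors"
)

// connectLocal is a stand-in for whatever dials the local LXD socket; the
// name and behaviour are invented for this sketch.
func connectLocal() error {
	// NotFoundf gives the error a subject, so the user sees
	// "LXD socket not found" rather than a bare "not found".
	return errors.NotFoundf("LXD socket")
}

// DetectCredentials shows the annotation pattern: wrap the low-level error
// with the context available here instead of letting it bubble up unadorned.
func DetectCredentials(cloudName string) error {
	if err := connectLocal(); err != nil {
		return errors.Annotatef(err,
			"detecting credentials for %q cloud provider: connecting to local LXD", cloudName)
	}
	return nil
}
```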
=== agprado_ is now known as agprado
[20:04] * balloons reads backlog
[20:05] <balloons> I thought it was just me this morning
[20:05] <balloons> sounds like if I try again things will work :)
[20:05] <hml> balloons: reboot first
[20:05] <hml> :-/
[20:06] <balloons> my system was rebooting from the microcode issues
[20:06] <balloons> that was annoying. I think it's stopped today finally
[20:06] * hml fingers crossed
[20:08] <hml> quick pr review anyone?  https://github.com/juju/juju/pull/8320
[20:12] <balloons> I feel like we might end up playing with the message
[20:16] <hml> balloons: word play suggestions are welcome
[20:17] <balloons> hml, I left a comment and approved
[20:20] <hml> balloons: ty
[20:21] <rick_h> "there once was an error from nantucket..."
=== agprado is now known as agprado|afk
[20:22] <hml> ha!
[20:40] <hml> rick_h: balloons: i think my ubuntu box has gremlins :-) - previously failing lxc commands have started working again. and restarted containers have ip addrs (so far)
[21:28] <hml> vern: balloons : do you know where I can look to see why this failed: http://ci.jujucharms.com/job/github-merge-juju/800/console  ?  ty
=== agprado|afk is now known as agprado
[21:33] <vern> hml: good question... I'd expect right there but not much more in that output than "ERROR: Build failure, reporting on proposal"
[21:34] <vern> checking the jenkins job...
[21:35] <veebers> hml: go install github.com/juju/juju/acceptancetests/repository/charms/fill-logs/actions: build output "/var/lib/jenkins/workspace/github-merge-juju@tmp/bug1745459-800/tmp.4wz9MQ0PI7/juju-core_2.4-beta1/bin/actions" already exists and is not an object file
[21:35] <veebers> I'm not sure why that's the case though :-\ Although I am currently updating that job
[21:35] <veebers> I lie, I'm not updating that job
[21:35] <veebers> I'm doing the merge check job
[21:36] <hml> veebers: :-) it was the 2nd attempt at merge???  but that “shouldn’t” matter
[21:43] <veebers> hml: shouldn't have, not sure why it choked. Seems happier this time around, hopefully those unit tests pass
[21:46] <hml> veebers: 3rd time the charm?  first was a known transient failure
[21:46] <vern> hml: for what it's worth: the newer job is already past the point the other failed
[21:46] <hml> good news
[22:03] <veebers> *sigh* any time the config for the check-merge job is saved it re-scans and re-runs for old PRs :-\
[22:04] <veebers> If you see a comment on an older PR pop up this is why
