[01:24] thumper, menn0 I'm getting a _bunch_ of log messages like this when trying to get the status: "INFO juju.api apiclient.go:507 dialing "wss://10.0.8.193:17070/model/25914173-c6bf-4a50-86b3-4ba6060f5d59/api""
[01:24] I can see that the lxd containers still exist (my first thought was that it was being wiped from underneath). While the env is still up, what debugging can I do?
[01:27] veebers: context please :)
[01:27] veebers: what led up to this?
[01:29] menn0: oh sorry, sure. I bootstrap, I add a new model, I deploy a charm (CI dummy charm), then I add a unit to it. I then wait for it to be considered 'started' and with 'workloads'. At this point I check the status in a loop for a couple of minutes with a bit of a sleep in between checks
[01:30] The first status check succeeded, it output something; the second attempt is now sitting there with that message repeating over and over
[01:33] menn0: it's now failed with this: http://pastebin.ubuntu.com/23245012/
[01:35] veebers: the apiclient "dialing" messages are attempts to connect to the Juju API server(s)
[01:35] veebers: it keeps trying and then has given up
[01:35] veebers: that upgrade-juju thing should be fixed in master now
[01:35] veebers: this might mean the controller was unhappy
[01:36] menn0: when you're free, can you please review https://github.com/juju/juju/pull/6336
[01:36] axw: sweet, I've just built and am about to try it out.
[01:37] veebers: can you get logs for the controller machine(s)?
[01:37] menn0: I have the lxd containers still (have paused the test from cleaning up). Is there anything I can grab off them of interest while it's running? (normal log collection will occur once I un-pause it)
[01:37] ?
[01:37] veebers: the normal log collection should be enough
[01:37] menn0: oh, I took too long typing :-) Yep, I can grab them. I'll just let it complete and grab the logs from there
[01:38] axw: will look shortly
[01:39] it will be some minutes, as part of the cleanup is to check status, so we'll go through this timeout process again
=== thumper is now known as thumper-dogwalk
[02:22] menn0: ok, I now have the log files, you want them?
[02:23] veebers: sure. dropbox?
[02:23] veebers: or email if they're not too big
[02:24] menn0: which log? or do you want them all? You could probably scp them off the machine itself
=== thumper-dogwalk is now known as thumper-coffee
=== thumper-coffee is now known as thumper
[04:54] axw: Where would I find juju stuff on the controller machine? I'm logged into an lxd machine that "apparently" is the controller based on the bootstrap output, but /var/log/juju is empty and "ps aux | grep -i juju" returns nothing. I find this quite odd
[04:55] I've logged in because I have this issue again with "INFO juju.api apiclient.go:507 dialing "wss://10.0.8.180:17070/api"" being repeated over and over (I imagine because the controller is borked somehow)
[04:56] veebers: that's where you would find the logs. agent and other data is in /var/lib/juju. if the agent failed to come up, there should be something in /var/log/cloud-init-output.log
[04:57] axw, this is really odd as the system had been working before, I added models, deployed charms etc. and now it's not responding. /var/lib/juju doesn't even exist on that machine :-\
[04:57] veebers: juju 2.0?
[04:58] axw: yep, 2.0-rc2 (although a day or so old)
[04:58] veebers: there have been bugs where juju would uninstall itself. I thought it was sorted for 2.0, but possibly not...
[04:58] I don't *think* it wipes /var/log/juju on uninstall though
[04:58] axw: did that uninstall bug include wiping all logs etc.?
[04:58] heh
[04:59] /var/log/juju exists, but it's empty
[05:00] axw: juju was totally running on this machine beforehand, a grep for juju in /var/log/syslog says so, also "status stopping juju" etc.
[05:01] veebers: the uninstall code doesn't touch logs. so it's something else
[05:11] axw: Would you have a couple of minutes to help me work out if it's an existing bug or needs a new one?
[05:12] not sure what I can tell you if there's nothing there. I have looked at the uninstall code and it doesn't touch logs, so it's not the same old one I referred to
[05:13] axw: ok. I'll have a dig around and see what details I can uncover
[05:15] axw: oh btw your fix unblocks me and the functional upgrade test, cheers :-)
[05:15] cool
=== aluria` is now known as aluria
[08:15] dimitern: ping
[08:25] voidspace: pong, otp though - might be slow to respond
[08:25] dimitern: ok
[08:26] dimitern: currently the network.Select*Address|HostPort functions return the first address matching the requested scope
[08:26] dimitern: we have a "bug" where IPv6 addresses are being returned when users want an IPv4 one
[08:27] dimitern: so I'm changing the functions to prefer IPv4 (but still return IPv6 if that's all that is available as an exact match for the scope)
[08:27] dimitern: so the choice is, should the order of preference be:
[08:27] dimitern: IPv4, hostname, IPv6
[08:27] dimitern: or
=== tasdomas` is now known as tasdomas
[08:27] dimitern: IPv4, IPv6, hostname
[08:28] dimitern: or alternatively:
[08:29] dimitern: IPv4, hostname or IPv6 (so *only* prefer IPv4 - no weighting on IPv6 or hostname, just whichever appears first)
[08:29] frobware: ping
[08:29] dimitern: whatever we pick I have to adjust some tests
[08:29] dimitern: I like just preferring IPv4 (option 3)
[08:30] option 3 sounds the most reasonable
[08:31] hoenir: cool, thanks
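For reference, a minimal sketch of the "option 3" behaviour settled on above: prefer the first IPv4 address matching the scope, otherwise fall back to the first match of any kind, with no weighting between hostname and IPv6. The types here are hypothetical stand-ins, not juju's actual network package.

```go
package main

import "fmt"

// AddrType is a hypothetical stand-in for juju's address type enum.
type AddrType int

const (
	IPv4 AddrType = iota
	IPv6
	Hostname
)

type Address struct {
	Value string
	Type  AddrType
}

// selectAddress returns the first IPv4 address for which match returns
// true; if there is none, it returns the first matching address of any
// other type (hostname or IPv6), whichever appears first -- "option 3".
func selectAddress(addrs []Address, match func(Address) bool) (Address, bool) {
	var fallback *Address
	for i, a := range addrs {
		if !match(a) {
			continue
		}
		if a.Type == IPv4 {
			return a, true
		}
		if fallback == nil {
			fallback = &addrs[i]
		}
	}
	if fallback != nil {
		return *fallback, true
	}
	return Address{}, false
}

func main() {
	addrs := []Address{
		{"fe80::1", IPv6},
		{"host.internal", Hostname},
		{"10.0.8.193", IPv4},
	}
	matchAll := func(Address) bool { return true }
	if a, ok := selectAddress(addrs, matchAll); ok {
		fmt.Println(a.Value) // 10.0.8.193: IPv4 preferred over earlier matches
	}
}
```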
[08:59] rogpeppe: yt?
[09:02] redir: yt?
[09:02] redir: ah, "you there?"
[09:02] redir: yes
[09:32] voidspace: sorry, we just finished now - reading scrollback
[09:32] dimitern: I'm doing option 3, just fixing tests
[09:32] dimitern: so you can ignore it if you like
[09:33] voidspace: option 3 sounds reasonable atm
[09:34] however we should really move away from the notion of a single preferred address
[09:35] voidspace: please ping me with the PR when you're done, I'd like to have a look
[09:35] dimitern: sure, it's pretty straightforward really
[09:39] here's an update to juju-core that fixes it for the changed Clock interface and updates dependencies: https://github.com/juju/juju/pull/6337
[09:47] thanks rogpeppe
[10:46] dimitern: a consequence is that the Select*Addresses/HostPorts functions (note the plural) only return IPv4 addresses if any are available - not IPv6 at all
[10:46] dimitern: I think that's still ok as we'll want to use IPv4 if they're available - still treating IPv6 as a fallback
[10:46] dimitern: we can change that if we need to
[10:56] dimitern: PR https://github.com/juju/juju/pull/6338
[11:03] voidspace: looking
[11:04] dimitern: I'm doing a "juju deploy ubuntu -n 15" on lxd
[11:04] dimitern: so far all of the machines have IPv4 addresses
[11:04] voidspace: that'll only work if you bump up the limits on the number of open files
[11:05] also running a full test suite just to check no tests depend on the old behaviour
[11:05] dimitern: working fine so far
[11:05] voidspace: please, double check what actually runs in each container
[11:05] dimitern: well, 9 started so far
[11:05] now 10
[11:05] dimitern: what do you mean?
[11:05] voidspace: if it's only /sbin/init and a couple of other processes, that's the limit issue - I'll try to find the bug
[11:12] dimitern: 14 of the 15 have started ok and are reporting IPv4 addresses
[11:12] some fixes to make tests pass under Go tip; review appreciated: https://github.com/juju/juju/pull/6339
[11:14] voidspace: ok, if you hit the open file limit issue, they should be stuck in pending
[11:15] dimitern: nope, all started fine - killing it now because it has ground my machine to a crawl
[11:15] running tests at the same time!
[11:37] jam: https://github.com/babbageclunk/juju/tree/mongo-ssl
[11:46] very interesting babbageclunk, it seems all the mongo testing infrastructure already supported not having an SSL cert. I'm a little surprised that juju connecting to mongo isn't complaining that it isn't trusted.
[11:46] voidspace: I'll review your PR after lunch, if that's ok - we've been on a HO with jam till now
[11:47] ah, but maybe we don't check the SSL of mongo to the same degree that we check the API Conn.
[11:47] babbageclunk: have you looked at all to see if we are using an SSL cert on the Controller as well for things like JujuConnSuite, and whether we are connecting to ourselves over an SSL connection?
[11:48] jam: Ah, no - didn't think to check that, sorry.
[11:49] jam: Can take a look at that now, if you like?
[11:49] babbageclunk: might be worthy of a look at least.
[11:50] jam: Ok, I'll have a go now.
[11:50] given how small your patch is, and that it is ~10% across the board, I'm starting to lean more towards it being worth doing.
[11:52] dimitern: http://paste.ubuntu.com/23246440/
[12:04] anyone wanna sign off on this before i land it? (trivial-ish changes to make Juju tests pass under Go tip) https://github.com/juju/juju/pull/6339
[12:05] rogpeppe: won't that break the tests with other go versions?
[12:16] perrito666: it shouldn't, no
[12:16] perrito666: the CI bot should make sure that's OK
[12:17] perrito666: i tried to change things so that it would work under all versions
[12:26] rogpeppe: cool, as long as you tested with other versions, ship it
[12:42] voidspace: reviewed, with some questions
[12:43] voidspace: are you able to help review/QA macgreagoir's branch so we can try to get it into the next rc please?
[13:04] rick_h_: are we planning on including the maas bridge fixes in rc2?
[13:05] dimitern: yes, that's why the EOD timeline
[13:05] dimitern: they have to be there to allow openstack to go back to being install-able by folks
[13:05] rick_h_: I was also going to try macgreagoir's change today; largely because voidspace reported that he could not repro
[13:05] frobware: k, all good
[13:05] rick_h_: I've tested the changes, as agreed, on maas 1.9, testing on 2.0 now
[13:05] dimitern: k, ty
=== niedbalski_ is now known as niedbalski
[13:06] voidspace frobware: I retested earlier this week and can repro :-) Comment in the review.
[13:06] rick_h_: I'll propose the PR with the changes; also the last CI run (of the same branch, minus the "configured" handling) passed OK
[13:07] dimitern: k
[13:14] here's the PR https://github.com/juju/juju/pull/6341
[13:17] natefinch: if the rax stuff works properly now, can you file a bug against the docs/rackspace setup with any notes/etc please?
[13:17] natefinch: and check the cloud config docs around that, as I know there was some question over a domain name and such in the past
[13:18] voidspace: assigned a card your way for next, please.
[13:19] dooferlad: how goes the return? Have you gotten through catch-up enough to pick up the hostname/lxd issues today?
[13:19] rick_h_: naughty me - didn't move the cards
[13:20] dooferlad: cool, ty. Just checking :)
[13:20] rick_h_: will do
[13:24] dooferlad: hey there, long time no see ;)
[13:24] dimitern: hi
[13:25] rick_h_: I did QA it and it didn't work for me
[13:26] voidspace: ic, ok
[13:27] rick_h_: I see you put me on the list-spaces card - I was thinking about picking that one up
[13:27] rick_h_: is the kanban board ordered?
[13:27] rick_h_: I actually think it's a bug in our code... juju add-credential rackspace asks for a domain name, and shouldn't
[13:27] voidspace: somewhat, I do try to stick critical at the top
[13:27] natefinch: k, I want to make sure we're polishing off that experience there so that this works ootb.
[13:27] kk
[13:27] natefinch: if that's a bug in asking for it then we need a card to remove that please
[13:28] rick_h_: grabbing coffee before our sync up - be with you in 5
[13:28] voidspace: but it tends to wander sometimes when I don't keep at it every day
[13:28] voidspace: rgr
[13:28] rick_h_: making the bug then the card, right now
[13:28] natefinch: ty
[13:31] * dimitern can confirm https://github.com/juju/juju/pull/6341 works as expected on MAAS 1.9 + 2.0 by manual testing
[13:37] dimitern: http://reviews.vapour.ws/r/5715/
[13:52] rick_h_: ugh, so this seems to be a problem for openstack as well - the interactive add-credentials code is very dumb, and just prompts for everything you could possibly set, which doesn't take into account whether or not it's v2 or v3. Ideally we'd ask the user if it's v2 or v3 and then only prompt for domain name for v3... but there's no way to do that right now. I *can* fix it for rackspace by overriding the entire schema for rackspace to exclude domain-name.
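A toy illustration of the override being proposed here: reuse the openstack credential schema but filter out domain-name. The types below are made up for the example; the real provider schema types in juju differ.

```go
package main

import "fmt"

// CredentialAttr describes one prompted credential field (a hypothetical
// stand-in for the provider schema types under discussion).
type CredentialAttr struct {
	Name   string
	Secret bool
}

// withoutAttr returns a copy of a credential schema with one attribute
// removed -- the shape of the rackspace override described above, not
// the actual juju provider code.
func withoutAttr(schema []CredentialAttr, drop string) []CredentialAttr {
	out := make([]CredentialAttr, 0, len(schema))
	for _, a := range schema {
		if a.Name != drop {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	openstack := []CredentialAttr{
		{Name: "username"},
		{Name: "password", Secret: true},
		{Name: "tenant-name"},
		{Name: "domain-name"}, // only meaningful for Keystone v3
	}
	rackspace := withoutAttr(openstack, "domain-name")
	fmt.Println(rackspace) // prompts no longer include domain-name
}
```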
[13:53] natefinch: since it's different enough from openstack I'd be +1 to a rackspace-specific config
[13:55] rick_h_: cool
[13:56] rick_h_: I'll make a separate bug for the openstack issue. It's addressable via documentation, but just not user-friendly that way.
[13:56] natefinch: understand, ty
[13:56] natefinch: look for an existing bug though, I know this domain-name issue has come up and thought it was in bug form.
[13:56] but I could be wrong and it was in email or something
[13:58] rick_h_: https://bugs.launchpad.net/juju/+bug/1577776
[13:59] rick_h_: seems like we just added domain-name to openstack config, which might be why other things around it are breaking
[14:00] natefinch: gotcha
[14:00] natefinch: ok, so let's update rackspace so we can try to get it smooth ootb, and we'll have to revisit the openstack case
[14:01] yep
[14:02] rick_h_: ping for standup ;)
[14:03] dimitern: I answered your questions by the way
[14:05] rick_h_: ping for standup...
[14:05] voidspace: cheers, looking
[14:06] voidspace: doh, omw
[14:07] dimitern: thanks!
[14:09] voidspace: LGTM
[14:10] katco: we need to develop one universal standard error package that covers both juju errors and errgo
[14:10] natefinch: "and then you have 3 problems"
[14:12] katco`: just reminded me of that xkcd with 14 competing standards :)
[14:12] natefinch: not sure if you were serious :)
[14:12] katco`: no no :)
[14:12] haha
[14:13] dooferlad: show us your shirt
[14:14] dooferlad: thought that was Nathan Fillion at first
[14:14] natefinch: https://en.wikipedia.org/wiki/Con_Man_(web_series)
[14:15] oh, it is, ok :)
=== katco` is now known as katco
[14:20] dooferlad: weird, I recognize the title font and stuff, but somehow never looked deeper than that. It sounds amazing
[14:20] natefinch: it is rather good. I haven't seen it all yet due to new children.
[14:21] dooferlad: totally understand that
[14:22] dooferlad: I only recently got past season 3 of Game of Thrones for the same reason :)
[14:24] katco: holy crap. According to godoc.org, Dave's github.com/pkg/errors package is imported by 623 packages
[14:36] natefinch: yep. he brought the lessons learned on juju to the whole community
[14:39] katco: there have been other errors packages, but I guess star power has its benefits :)
[14:40] :)
[14:42] Bug #1628155 changed: cmd/juju: juju deploy "see also" refers to non-existent command
[14:54] babbageclunk: ping
[14:54] voidspace: pong
[14:57] dooferlad: how's double-dadhood going?
[14:58] babbageclunk: OK. The expected problem (not enough sleep) but from an unexpected source (elder daughter).
[14:59] babbageclunk: Mostly loving it though!
[15:00] dooferlad: yeah, we found that too - you don't get to take advantage of all the time little babies spend asleep to catch up on some yourself!
[15:00] babbageclunk: definitely not when you are working :-)
[15:01] oh man, just used go rename in vscode... this is game-changing
[15:02] natefinch: how so?
[15:02] I mean, go rename is an external tool :p
[15:03] perrito666: right right, but the CLI is kind of hard to use... in vscode I just click the thing I want to rename and hit F2, type in the new name, and it just works.
[15:04] ah, never used it in the cli, I use vim-go
[15:04] I would guess katco has a similar one for emacs, I don't think anyone uses go rename in the cli
[15:05] indeed i do
[15:05] comes standard in the emacs go-mode
[15:06] i can also visually debug with delve
[15:06] yeah, I just started debugging with delve a few days ago, also amazing
[15:06] yeah i missed visually debugging hehe
[15:06] yep
[15:07] it's still not my go-to bc it's still a little more cumbersome than i'd like, but it's there and very helpful
[15:07] it's pretty great in vscode. Very easy to set breakpoints, give it a command line to run, inspect local variables, etc.
[15:07] anyway, gotta run, back after lunch
=== natefinch is now known as natefinch-afk
[15:08] redir: ping
[15:18] yo
[15:18] otp perrito666
[15:22] redir: just ping me when you hang up
[15:22] will do perrito666
[15:22] tx
[15:24] babbageclunk: this is what it gives me :D http://juju-ci.vapour.ws:8080/job/github-merge-juju/9349/artifact/artifacts/windows-out.log
[15:31] dimitern: rick_h_: ping
[15:32] voidspace: pong
[15:32] rick_h_: landing onto develop now, right?
[15:32] voidspace: starting monday
[15:32] ah
[15:32] voidspace: so not today, getting the ducks in a row to prepare
[15:32] rick_h_: so just land this branch on Monday then
[15:32] I mean on master
[15:33] voidspace: yea, on master please
[15:33] rick_h_: I think you did say this in standup...
[15:33] kk
[15:33] voidspace: yep, all good
[15:33] voidspace: pong
[15:33] dimitern: unping :-)
[15:33] dimitern: landing the prefer-IPv4 branch
[15:33] bridge everything... LXD not getting on the right subnet https://www.irccloud.com/pastebin/0h06Df4K/
[15:33] voidspace: go for it :)
[15:34] dimitern, frobware: ^^
[15:34] dooferlad: which branch are you testing?
[15:35] dooferlad: let's do the testing on the master-lp1627037-final branch I am about to push.
[15:35] dooferlad, dimitern: it's not clear to me we're all testing the final thing
[15:35] This is without your latest changes. Just master + a couple of changes of my own that shouldn't do anything to break this.
[15:36] dooferlad: ohhhh....
[15:36] it has bridged everything
[15:36] but LXD gave the container a 10. address, not something from the host's 192.168.1.0/24...
[15:37] dooferlad: that might happen if the 192.168.1.0/24 has no available IPs
[15:37] dimitern: there are plenty
[15:37] dooferlad: I had this case when I used a subnet where I forgot that I had reserved .1-.254 previously
[15:38] dimitern: I launched machines since that are fine
[15:38] dooferlad: any errors in the log of the host?
[15:39] dimitern: not that I have seen yet
[15:40] oh, fun: "ERROR juju.worker.proxyupdater proxyupdater.go:160 lxdbr0 has no ipv4 or ipv6 subnet enabled"
[15:41] It looks like your lxdbr0 has not yet been configured. Please configure it via:
[15:41] sudo dpkg-reconfigure -p medium lxd
[15:41] and then bootstrap again.
[15:44] dooferlad: I always get that, and then it works
[15:44] dooferlad: I *think* it's normal
[15:44] deceiving though, because it takes a while to download the template, so it seems like it hasn't worked and there is this error message in the logs...
[15:44] and then about ten minutes later it completes...
[15:44] voidspace: you are right - I get it on working machines too.
[15:45] annoying, especially at error level
[15:45] what a rubbish message!
[15:46] perrito666: did you have a look at https://github.com/juju/juju/pull/6339? i'd quite like to land it if poss.
[15:48] it's total crap, yeah
[15:48] but it's fine otherwise
[15:48] doesn't stop it from working
[15:48] Well, the address is wrong for the container... https://www.irccloud.com/pastebin/7BdR7noH/
[15:48] dimitern: or maybe you might have a look - it's basically trivial. https://github.com/juju/juju/pull/6339
[15:49] rogpeppe: we'll be leaving the office in a minute or so, sorry :/
[15:50] dimitern: ok
[15:50] dimitern: i might land it anyway
[15:50] dimitern: i have one review (not from core though)
[16:00] perrito666: what's up?
[16:03] rogpeppe: I did, sorry, I lgtm'd in irc and forgot to do it in gh
[16:03] redir: priv
[16:03] perrito666: thanks
=== natefinch-afk is now known as natefinch
[16:35] perrito666: where are you based again?
[16:35] voidspace: argentina
[16:35] * perrito666 looks out of the window and sees a rocket from voidspace approaching
[16:36] perrito666: heh
[16:36] perrito666: fancy bringing me a large rock to the next sprint?
[16:37] voidspace: too many variables in that sentence
[16:37] perrito666: there are some lovely amethyst geodes in your part of the world and I would really like one
[16:37] define large and rock
[16:37] perrito666: especially a sphere
[16:37] perrito666: a few kilos...
[16:38] I presume customs wouldn't have an issue with it, so I don't see why not
[16:38] perrito666: :-)
[16:38] send me more details and I'll try to procure one
[16:38] perrito666: I'll send you a link to an example and see if you can find one - I think it would be much *cheaper* if you find one than me buying it from a non-indigenous provider...
[16:39] certainly, I think these things are sort of easy to find here, hippies use them to do all kinds of crafts
[16:39] hippies are great :-)
[16:40] if you say so
[16:40] I do, and I'm glad you consider me an authority
[16:40] I'll go to the flea market and find out once you send me a pic
[16:40] lol
[16:40] perrito666: this is the sort of thing I would love, the purpler the better
[16:40] perrito666: http://www.ebay.co.uk/itm/TOP-GRADE-HUGE-DARK-PURPLE-AMETHYST-GEODE-SPHERE-FROM-URUGUAY-/232091390623?hash=item3609b9929f:g:5E8AAOSwTA9X4-8m
[16:40] perrito666: that size is wonderful, that price is not
[16:41] should this be a perfectly shaped ball?
[16:41] perrito666: ideally but not necessarily; to be fair, I love any and all beautiful minerals
[16:42] perrito666: as they're cut from a geode they're usually ground to a shape like a sphere with a "bite" taken out of it
[16:42] perrito666: so not a perfect sphere - I'd much prefer one with some of the crystals intact
[16:43] do search "amatista" in mercadolibre.com
[16:43] .com.ar
[16:45] perrito666: cool, looking
[16:50] perrito666: this is the closest, not really exactly what I'm after (would prefer more purple and more spiky) http://articulo.mercadolibre.com.ar/MLA-629608847-esfera-de-agata-y-amatista-hermosa-115mm-1600grs-_JM
[16:50] perrito666: I'll look again, thanks
[16:50] perrito666: and if you see anything - or any other large beautiful minerals - let me know :-)
[16:51] I'll do, I'll telegram you pics
[16:51] the hippie fair opens only on weekends
[16:52] perrito666: fluorite is nice, I think there is some in Argentina
[16:52] perrito666: I have agate
[16:54] perrito666: anyway, thanks :-)
[17:13] * rick_h_ goes for lunchables
[17:13] I could use reviews on https://github.com/juju/juju/pull/6344 https://github.com/juju/juju/pull/6340 https://github.com/juju/juju/pull/6321
[17:14] greedy
[17:15] perrito666: I'm looking at 6344
[17:25] this PR implements letsencrypt certificate support for controllers. anyone fancy taking a look? https://github.com/juju/juju/pull/6345
[17:25] whoa, awesome!
[17:28] natefinch: note: if you want to try it out, you need to bootstrap with api-port=443
[17:29] rogpeppe: ahh, interesting
[17:29] natefinch: alternatively you can run a port forwarder on the controller instance to forward from 443 to 17070, but then you'll need to open up port 443 in the security group
[17:54] rick_h_: so there's some auto-detection of environment variables for openstack, so e.g. you can set OS_TENANT_NAME and we'll pick it up. Currently that's being reused for rackspace... I don't think we should reuse those environment variables, what do you think?
[17:59] natefinch: meet you in standup?
[18:01] rick_h_: yep
[18:03] fastest hangout ever
[18:04] :)
[18:53] hello, i have a runaway jujud burning up my machine. how do i attach the profiler to it?
[18:54] for example, https://paste.ubuntu.com/23247920/
[18:55] cmars: anything interesting in the logs?
[18:55] looking
[18:58] natefinch, maybe. lots of messages like this are spewing to machine-0.log: https://paste.ubuntu.com/23247941/
[19:01] cmars: can you run juju model-config logging-config="<root>=TRACE"
[19:01] rick_h_: when did set-model-config and get-model-config get rolled into one command?
[19:02] natefinch, set model-config on the controller model?
[19:02] cmars: uh, yes
[19:02] cmars: I can never remember if it matters, probably does
[19:06] natefinch, controller model worked. here are the last 2000 lines with trace on: https://pastebin.canonical.com/166645/
[19:06] natefinch, about 3 weeks ago regarding the model-config command
[19:07] set-config and get-config also got collapsed
[19:07] alexisb: personally not a fan of losing the verbness on set
[19:10] natefinch: yea, that was the negative, was in beta18.
[19:13] * rick_h_ goes to get the boy from school, biab
[19:21] rogpeppe: I haven't looked at your letsencrypt branch, just saw the email fly past. Just wanted to say that I love the idea, and very cool that you've done this
[19:34] thumper: thanks!
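Not necessarily how PR 6345 implements it, but for context, this is the general shape of automatic Let's Encrypt certificates in a Go server, and why the api-port=443 requirement above exists: the ACME TLS handshake has to arrive on the standard HTTPS port. A sketch using golang.org/x/crypto/acme/autocert; the hostname and cache path are placeholders.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"

	"golang.org/x/crypto/acme/autocert"
)

func main() {
	m := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,
		HostPolicy: autocert.HostWhitelist("controller.example.com"), // placeholder host
		Cache:      autocert.DirCache("/var/lib/acme-cache"),         // persists issued certs
	}
	srv := &http.Server{
		Addr: ":443", // Let's Encrypt validation must reach the standard TLS port
		// Certificates are obtained and renewed on demand, per SNI hostname.
		TLSConfig: &tls.Config{GetCertificate: m.GetCertificate},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "hello")
		}),
	}
	log.Fatal(srv.ListenAndServeTLS("", "")) // empty cert/key: certs come from TLSConfig
}
```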
[19:39] simple PR anyone? https://github.com/juju/juju/pull/6346 - just overriding a couple methods on the rackspace provider to strip out domain name as a valid credential attribute.
[19:47] are there daily builds available for juju 1.25? I'd like to try a fix that's committed but not released
[19:53] thumper: hey, we need a third opinion, got a sec?
[19:53] katco: sure
[19:53] gQuigs: I don't think so
[19:54] thumper: can you TAL at my comment regarding juju/retry here? https://github.com/juju/juju/pull/6321/files
[19:54] sure
[19:54] thumper: and lmk if that's appropriate? perrito666 would rather use a simple for loop
[19:54] ack
[20:02] thumper: https://github.com/juju/retry/pull/4
[20:02] * thumper pushes it on the stack
[20:02] thumper: just a doc update, no rush
[20:03] thumper: sweet, i didn't know you were preemptable. the possibilities...
[20:03] queue, not stack
[20:03] geez
[20:04] aw boo
[20:04] nerds
[20:04] well
[20:04] geeks
[20:04] but that is just stating the obvious
[20:04] also pedants, evidently
[20:04] my kids are beginning to understand just how much of a geek household they live in
[20:04] sudo review my comment already!
[20:04] well, we do work at pedantical
[20:04] lol
[20:05] natefinch: comment left on your pr
[20:06] katco: added QA steps
[20:06] katco: thanks for the reminder
[20:07] thumper: ta
[20:08] does anyone else think we'll get pushback for calling our Kubernetes distro the Canonical Distribution of Kubernetes? Makes it sound like it's the official one.
[20:08] thumper: so can you clarify when it's appropriate to use juju/retry?
[20:08] I think that play on words is hilarious
[20:09] I don't think it is
[20:09] retry either has time limits or retry limits
[20:09] whereas this is polling
[20:09] thumper: does juju/retry demand a time/retry limit?
[20:09] yeah, it does
[20:09] I wonder if I added an infinite?
[20:09] I think retry implies "try until you succeed"
[20:10] otherwise it would just be called juju/loop
[20:10] * thumper checks something
[20:10] thumper: ah, I see in https://github.com/juju/retry/blob/master/retry.go#L155
[20:11] yeah, it expects either a max duration or max times
[20:11] whereas I think perhaps that should be loosened
[20:11] to expect one or more of: max duration, max retries, or stop channel
[20:11] I was just gonna mention stop channel
[20:11] pull request welcome :)
[20:12] actually
[20:12] see? he's totally preemptable
[20:12] if you say Attempts: retry.UnlimitedAttempts
[20:12] katco: lol
[20:13] i'm just going to keep bringing stuff up
[20:13] then that passes
[20:13] hey thumper, what do you think of mongo?
[20:13] it is lovely
[20:13] as long as you don't care about all your data
[20:29] ah, i'm not crazy! no wonder i couldn't figure out where this error message was coming from. different results after the first time deploy is run
[20:33] hey alexisb, yesterday we talked about azure slowing down on long-running deployments. i've got a controller with molasses in its tubes if you're interested. bug 1628206 has my logs.. i can keep it up as long as you want.
[20:33] Bug #1628206: azure controller size seems too small
[20:33] kwmonroe, thanks for the bug
[20:33] np alexisb, thanks for being such a wonderful person.
[20:33] axw comes online this afternoon
[20:33] :)
[20:34] kwmonroe, can you send me info with details on access
[20:34] then I can have axw take a look when he comes online
[20:34] will do alexisb
[20:35] katco if you have time: https://github.com/juju/juju/pull/6347
[20:35] I would like to land this today if possible and it is a simple change
[20:36] alexisb: looks simple enough; you grepped for all possible places that reference get-controller, etc.?
[20:36] yes
[20:37] alexisb: lgtm
[20:43] thumper: sorry, back now - hangout?
[20:44] babbageclunk: ack
[20:48] rick_h_: I presume I should work on the card assigned to me in the todo? this bug: https://bugs.launchpad.net/juju/+bug/1621375
[20:48] Bug #1621375: "juju logout" should clear cookies for the controller
[20:49] natefinch: yea, so I suspected that was related to the other one you had in tracking
[20:49] natefinch: my thought was that might be why a logout didn't actually log you out
[20:58] rick_h_: i need an opinion, hangout rq?
=== natefinch is now known as natefinch-afk
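A sketch of roughly what the juju/retry thread above describes: retry.UnlimitedAttempts satisfies the "must have a retry or time limit" validation, leaving a stop channel as the only terminator. Attempts, UnlimitedAttempts, and Stop come from the discussion; the Delay and Clock fields (and clock.WallClock) are assumed from the package of the time — check the package docs for the exact API.

```go
package main

import (
	"fmt"
	"time"

	"github.com/juju/retry"
	"github.com/juju/utils/clock"
)

func main() {
	stop := make(chan struct{})
	go func() {
		time.Sleep(5 * time.Second)
		close(stop) // external cancellation instead of a retry/time limit
	}()

	err := retry.Call(retry.CallArgs{
		Func: func() error {
			fmt.Println("dialing...")
			return fmt.Errorf("still unreachable")
		},
		Attempts: retry.UnlimitedAttempts, // no retry limit; the stop channel terminates
		Delay:    time.Second,             // pause between attempts
		Clock:    clock.WallClock,
		Stop:     stop,
	})
	fmt.Println("gave up:", err)
}
```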
[20:33] axw comes online this afternoon [20:33] :) [20:34] kwmonroe, can you send me info with details on access [20:34] then I can have axw take a look when he comes online [20:34] will do alexisb [20:35] katco if you have time: https://github.com/juju/juju/pull/6347 [20:35] I would like to land this today if possible and it is a simple change [20:36] alexisb: looks simple enough; you grepped for all possible places that reference get-controller, etc.? [20:36] yes [20:37] alexisb: lgtm [20:43] thumper: sorry, back now - hangout? [20:44] babbageclunk: ack [20:48] rick_h_: I presume I should work on the card assigned to me in the todo? this bug: https://bugs.launchpad.net/juju/+bug/1621375 [20:48] Bug #1621375: "juju logout" should clear cookies for the controller [20:49] natefinch: yea, so I suspected that was related to the other one you had in tracking [20:49] natefinch: my thought was that might be why a logout didn't actually log you out [20:58] rick_h_: i need an opinion, hangout rq? === natefinch is now known as natefinch-afk [21:37] rick_h_, ping [21:40] PR for someone: https://github.com/juju/juju/pull/6348 [22:50] katco: comments added [22:56] alexisb: unit leader in status https://github.com/juju/juju/pull/6350 [23:09] thumper, awesome