[01:24] <veebers> thumper, menn0 I'm getting a _bunch_ of log message like this when trying to get the status "INFO  juju.api apiclient.go:507 dialing "wss://10.0.8.193:17070/model/25914173-c6bf-4a50-86b3-4ba6060f5d59/api""
[01:24] <veebers> I can see that the lxd containers still exist (my first thought is it being wiped from underneath). While the env is still up what debugging can I do?
[01:27] <menn0> veebers: context please :)
[01:27] <menn0> veebers: what led up to this?
[01:29] <veebers> menn0: oh sorry sure, I bootstrap, I add a new model, I deploy a charm (CI dummy charm), I then add a unit to it. I then wait for it to be considered 'started' and with 'workloads'. At this point I check the status in a loop for a couple of minutes with a bit of a sleep in between checks
[01:30] <veebers> The first status check succeeded, it output something, the second attempt is now sitting there with that message repeating over and over
[01:33] <veebers> menn0: it's now failed with this: http://pastebin.ubuntu.com/23245012/
[01:35] <menn0> veebers: the apiclient "dialing" messages are attempts to connect to the Juju API server(s)
[01:35] <menn0> veebers: it keeps trying and then has given up
[01:35] <axw> veebers: that upgrade-juju thing should be fixed in master now
[01:35] <menn0> veebers: this might mean the controller was unhappy
[01:36] <axw> menn0: when you're free, can you please review https://github.com/juju/juju/pull/6336
[01:36] <veebers> axw: sweet, I've just built and about to try it out.
[01:37] <menn0> veebers: can you get logs for the controller machine(s)
[01:37] <veebers> menn0: I have the lxd containers still (have paused the test from cleaning up) is there anything of interest I can grab off them while it's running? (normal log collection will occur once I un-pause it)
[01:37] <menn0> ?
[01:37] <menn0> veebers: the normal log collection should be enough
[01:37] <veebers> menn0: oh, I took too long typing :-) Yep I can grab them. I'll just let it complete and grab the logs from there
[01:38] <menn0> axw: will look shortly
[01:39] <veebers> will be some minutes as part of the clean up is to check status, so we'll go through this timeout process again
[02:22] <veebers> menn0: ok, I now have the log files, you want them?
[02:23] <menn0> veebers: sure. dropbox?
[02:23] <menn0> veebers: or email if they're not too big
[02:24] <veebers> menn0: which log? or do you want them all? You could probably scp them off the machine itself
[04:54] <veebers> axw: Where would I find juju stuff on the controller machine? I'm logged into an lxd machine that "apparently" is the controller based on the bootstrap output but /var/log/juju is empty and "ps aux | grep -i juju" returns nothing. I find this quite odd
[04:55] <veebers> I've logged in because I have this issue again with "INFO  juju.api apiclient.go:507 dialing "wss://10.0.8.180:17070/api" being repeated over and over (I imagine because the controller is borked somehow)
[04:56] <axw> veebers: that's where you would find the logs. agent and other data is in /var/lib/juju. if the agent failed to come up, there should be something in /var/log/cloud-init-output.log
[04:57] <veebers> axw, this is really odd as the system had been working before, I added models, deployed charms etc. and now it's not responding. /var/lib/juju doesn't even exist on that machine :-\
[04:57] <axw> veebers: juju 2.0?
[04:58] <veebers> axw: yep, 2.0-rc2 (although a day or so old)
[04:58] <axw> veebers: there have been bugs where juju would uninstall itself. I thought it was sorted for 2.0, but possibly not...
[04:58] <axw> I don't *think* it wipes /var/log/juju on uninstall though
[04:58] <veebers> axw: did that uninstall bug include wiping all logs etc.?
[04:58] <veebers> heh
[04:59] <veebers> /var/log/juju exists, but it's empty
[05:00] <veebers> axw: juju was totally running on this machine beforehand, a grep juju /var/log/syslog says so, also status stopping juju ... etc.
[05:01] <axw> veebers: the uninstall code doesn't touch logs. so it's something else
[05:11] <veebers> axw: Would you have a couple of minutes to help me work out if it's an existing bug or needs a new one?
[05:12] <axw> not sure what I can tell you if there's nothing there. I have looked at the uninstall code and it doesn't touch logs, so it's not the same old one I referred to
[05:13] <veebers> axw: ok. I'll have a dig around and see what details I can uncover
[05:15] <veebers> axw: oh btw your fix unblocks me and the functional upgrade test, cheers :-)
[05:15] <axw> cool
[08:15] <voidspace> dimitern: ping
[08:25] <dimitern> voidspace: pong, otp though - might be slow to respond
[08:25] <voidspace> dimitern: ok
[08:26] <voidspace> dimitern: currently the network.Select*Address|HostPort return the first address matching the requested scope
[08:26] <voidspace> dimitern: we have a "bug" where IPv6 addresses are being returned when users want an IPv4 one
[08:27] <voidspace> dimitern: so I'm changing the functions to prefer IPv4 (but still return IPv6 if that's all that is available as an exact match for the scope)
[08:27] <voidspace> dimitern: so the choice is, should the order of preference be:
[08:27] <voidspace> dimitern: IPv4, hostname, IPv6
[08:27] <voidspace> dimitern: or
[08:27] <voidspace> dimitern: IPv4, IPv6, hostname
[08:28] <voidspace> dimitern: or alternatively:
[08:29] <voidspace> dimitern: IPv4, hostname or IPv6 (so *only* prefer IPv4 - no weighting on IPv6 or hostname, just whichever appears first)
[08:29] <redir> frobware: ping
[08:29] <voidspace> dimitern: whatever we pick I have to adjust some tests
[08:29] <voidspace> dimitern: I like just preferring IPv4 (option 3)
[08:30] <hoenir> option 3 sounds the most reasonable
[08:31] <voidspace> hoenir: cool, thanks
[08:59] <redir> rogpeppe: yt?
[09:02] <rogpeppe> redir: yt?
[09:02] <rogpeppe> redir: ah "you there?"
[09:02] <rogpeppe> redir: yes
[09:32] <dimitern> voidspace: sorry, we just finished now - reading scrollback
[09:32] <voidspace> dimitern: I'm doing option 3, just fixing tests
[09:32] <voidspace> dimitern: so you can ignore it if you like
[09:33] <dimitern> voidspace: option 3 sounds reasonable atm
[09:34] <dimitern> however we should really move away from the notion of a single preferred address
[09:35] <dimitern> voidspace: please ping me with the PR when you're done, I'd like to have a look
[09:35] <voidspace> dimitern: sure, it's pretty straightforward really
[09:39] <rogpeppe> here's an update to juju-core that fixes it for the changed Clock interface and updates dependencies: https://github.com/juju/juju/pull/6337
[09:47] <redir> thanks rogpeppe
[10:46] <voidspace> dimitern: a consequence is that the Select*Addresses/HostPorts (note the plural) only return IPv4 if they are available - not IPv6 at all
[10:46] <voidspace> dimitern: I think that's still ok as we'll want to use IPv4 if they're available - still treating IPv6 as a fallback
[10:46] <voidspace> dimitern: we can change that if we need to
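The "option 3" preference voidspace settles on (prefer IPv4, otherwise just take whichever address appears first, with no weighting between IPv6 and hostnames) can be sketched as a small stand-alone Go function. This is an illustrative sketch, not the actual juju network.Select* API; the function name and string-based address type are hypothetical:

```go
package main

import (
	"fmt"
	"net"
)

// selectPreferred returns the first IPv4 address in addrs; if no IPv4
// address is present it falls back to the first address of any kind
// (IPv6 or hostname), mirroring "option 3": prefer IPv4, otherwise
// no further weighting.
func selectPreferred(addrs []string) (string, bool) {
	for _, a := range addrs {
		if ip := net.ParseIP(a); ip != nil && ip.To4() != nil {
			return a, true
		}
	}
	if len(addrs) > 0 {
		return addrs[0], true
	}
	return "", false
}

func main() {
	// IPv4 wins even when listed last.
	fmt.Println(selectPreferred([]string{"fe80::1", "example.internal", "10.0.8.193"})) // prints "10.0.8.193 true"
}
```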
[10:56] <voidspace> dimitern: PR https://github.com/juju/juju/pull/6338
[11:03] <dimitern> voidspace: looking
[11:04] <voidspace> dimitern: I'm doing a "juju deploy ubuntu -n 15" on lxd
[11:04] <voidspace> dimitern: so far all of the machines have IPv4 addresses
[11:04] <dimitern> voidspace: that'll only work if you bump up the limits on number of open files
[11:05] <voidspace> also running a full test suite just to check no tests depend on the old behaviour
[11:05] <voidspace> dimitern: working fine so far
[11:05] <dimitern> voidspace: please, double check what actually runs in each container
[11:05] <voidspace> dimitern: well, 9 started so far
[11:05] <voidspace> now 10
[11:05] <voidspace> dimitern: what do you mean?
[11:05] <dimitern> voidspace: if it's only /sbin/init and a couple of other processes, that's the limit issue - I'll try to find the bug
[11:12] <voidspace> dimitern: 14 of the 15 have started ok and are reporting IPv4 addresses
[11:12] <rogpeppe> some fixes to make test pass under Go tip; review appreciated: https://github.com/juju/juju/pull/6339
[11:14] <dimitern> voidspace: ok, if you hit the open file limit issue, they should be stuck in pending
[11:15] <voidspace> dimitern: nope, all started fine - killing it now because it has ground my machine to a crawl
[11:15] <voidspace> running tests at the same time!
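The limit dimitern mentions for running many LXD containers is usually addressed by raising the kernel inotify limits. The values below follow LXD's commonly published production-setup guidance and are an assumption for this host, not something stated in the conversation:

```shell
# Raise inotify limits so many LXD containers can start on one host.
# Values follow LXD's production-setup guidance; tune for your machine.
cat <<'EOF' | sudo tee /etc/sysctl.d/99-lxd.conf
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
EOF
sudo sysctl --system
```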
[11:37] <babbageclunk> jam: https://github.com/babbageclunk/juju/tree/mongo-ssl
[11:46] <jam> very interesting babbageclunk, seems all the mongo testing infrastructure already supported not having an SSL cert. I'm a little surprised that juju connecting to mongo isn't complaining that it isn't trusted.
[11:46] <dimitern> voidspace: I'll review your PR after lunch, if that's ok - we've been on a HO with jam till now
[11:47] <jam> ah, but maybe we don't check the SSL of mongo to the same degree that we check the API Conn.
[11:47] <jam> babbageclunk: have you looked at all to see if we are using an SSL cert on the Controller as well for things like JujuConnSuite, and whether we're are connecting to ourselves over an SSL connection?
[11:48] <babbageclunk> jam: Ah, no - didn't think to check that, sorry.
[11:49] <babbageclunk> jam: Can take a look at that now, if you like?
[11:49] <jam> babbageclunk: might be worthy of a look at least.
[11:50] <babbageclunk> jam: Ok, I'll have a go now.
[11:50] <jam> given how small your patch is, and that it is ~10% across the board, I'm starting to lean more towards it being worth doing.
[11:52] <frobware> dimitern: http://paste.ubuntu.com/23246440/
[12:04] <rogpeppe> anyone wanna sign off on this before i land it? (trivial-ish changes to make Juju tests pass under Go tip) https://github.com/juju/juju/pull/6339
[12:05] <perrito666> rogpeppe: won't that break the tests with other go versions?
[12:16] <rogpeppe> perrito666: it shouldn't, no
[12:16] <rogpeppe> perrito666: the CI bot should make sure that's OK
[12:17] <rogpeppe> perrito666: i tried to change things so that it would work under all versions
[12:26] <perrito666> rogpeppe: cool, as long as you tested with other versions ship it
[12:42] <dimitern> voidspace: reviewed, with some questions
[12:43] <rick_h_> voidspace: are you able to help review/QA macgreagoir's branch so we can try to get it into the next rc please?
[13:04] <dimitern> rick_h_: are we planning on including the maas bridge fixes in rc2?
[13:05] <rick_h_> dimitern: yes, that's why the EOD timeline
[13:05] <rick_h_> dimitern: they have to be there to allow openstack to go back to being install-able by folks
[13:05] <frobware> rick_h_: I was also going to try macgreagoir's change today; largely because voidspace reported that he could not repro
[13:05] <rick_h_> frobware: k, all good
[13:05] <dimitern> rick_h_: I've tested the changes, as agreed on maas 1.9, testing on 2.0 now
[13:05] <rick_h_> dimitern: k, ty
[13:06] <macgreagoir> voidspace frobware: I retested earlier this week and can repro :-) Comment in the review.
[13:06] <dimitern> rick_h_: I'll propose the PR with the changes; also the last CI run (of the same branch, minus the "configured" handling) passed OK
[13:07] <rick_h_> dimitern: k
[13:14] <dimitern> here's the PR https://github.com/juju/juju/pull/6341
[13:17] <rick_h_> natefinch: if the rax stuff works properly now can you file a bug against the docs/rackspace setup with any notes/etc please?
[13:17] <rick_h_> natefinch: and check the cloud config docs around that as I know there was some question over a domain name and such in the past
[13:18] <rick_h_> voidspace: assigned a card your way for next please.
[13:19] <rick_h_> dooferlad: how goes the return? Have you gotten through catch up enough to pick up the hostname/lxd issues today?
[13:19] <dooferlad> rick_h_: naughty me - didn't move the cards
[13:20] <rick_h_> dooferlad: cool, ty. Just checking :)
[13:20] <natefinch> rick_h_: will do
[13:24] <dimitern> dooferlad: hey there, long time no see ;)
[13:24] <dooferlad> dimitern: hi
[13:25] <voidspace> rick_h_: I did QA it and it didn't work for me
[13:26] <rick_h_> voidspace: ic, ok
[13:27] <voidspace> rick_h_: I see you put me on the list-spaces card - I was thinking about picking that one up
[13:27] <voidspace> rick_h_: is the kanban board ordered?
[13:27] <natefinch> rick_h_: I actually think it's a bug in our code... juju add-credential rackspace asks for a domain name, and shouldn't
[13:27] <rick_h_> voidspace: somewhat, I do try to stick critical at the top
[13:27] <rick_h_> natefinch: k, I want to make sure we're polishing off that experience there so that this works ootb.
[13:27] <voidspace> kk
[13:27] <rick_h_> natefinch: if that's a bug in asking for it then we need a card to remove that please
[13:28] <voidspace> rick_h_: grabbing coffee before our sync up - be with you in 5
[13:28] <rick_h_> voidspace: but it tends to wander sometimes when I don't keep at it every day
[13:28] <rick_h_> voidspace: rgr
[13:28] <natefinch> rick_h_: making the bug then the card, right now
[13:28] <rick_h_> natefinch: ty
[13:31]  * dimitern can confirm https://github.com/juju/juju/pull/6341 works as expected on MAAS 1.9 + 2.0 by manual testing
[13:37] <macgreagoir> dimitern: http://reviews.vapour.ws/r/5715/
[13:52] <natefinch> rick_h_: ug, so this seems to be a problem for openstack as well - the interactive add-credentials code is very dumb, and just prompts for everything you could possibly set, which doesn't take into account whether or not it's v2 or v3.  Ideally we'd ask the user if it's v2 or v3 and then only prompt for domain name for v3.... but there's no way to do that right now.  I *can* fix it for rackspace by overriding the entire
[13:52] <natefinch> schema for rackspace to exclude domain-name.
[13:53] <rick_h_> natefinch: since it's different enough from openstack I'd be +1 to a rackspace specific config
[13:55] <natefinch> rick_h_: cool
[13:56] <natefinch> rick_h_: I'll make a separate bug for the openstack issue.  It's addressable via documentation, but just not user-friendly that way.
[13:56] <rick_h_> natefinch: understand, ty
[13:56] <rick_h_> natefinch: look for an existing bug though, I know this domain-name issue has come up and thought it was in bug form.
[13:56] <rick_h_> but I could be wrong and it was in email or something
[13:58] <natefinch> rick_h_: https://bugs.launchpad.net/juju/+bug/1577776
[13:59] <natefinch> rick_h_: seems like we just added domain-name to openstack config, which might be why other things around it are breaking
[14:00] <rick_h_> natefinch: gotcha
[14:00] <rick_h_> natefinch: ok, so let's update rackspace so we can try to get it smooth ootb and we'll have to revisit the openstack case
[14:01] <natefinch> yep
[14:02] <natefinch> rick_h_: ping for standup ;)
[14:03] <voidspace> dimitern: I answered your questions by the way
[14:05] <voidspace> rick_h_: ping for standup...
[14:05] <dimitern> voidspace: cheers, looking
[14:06] <rick_h_> voidspace: doh omw
[14:07] <voidspace> dimitern: thanks!
[14:09] <dimitern> voidspace: LGTM
[14:10] <natefinch> katco: we need to develop one universal standard error package that covers both juju errors and errgo
[14:10] <katco`> natefinch: "and then you have 3 problems"
[14:12] <natefinch> katco`: just reminded me of that xkcd with 14 competing standards :)
[14:12] <katco`> natefinch: not sure if you were serious :)
[14:12] <natefinch> katco`: no no :)
[14:12] <katco`> haha
[14:13] <natefinch> dooferlad: show us your shirt
[14:14] <natefinch> dooferlad: thought that was Nathan Fillion at first
[14:14] <dooferlad> natefinch: https://en.wikipedia.org/wiki/Con_Man_(web_series)
[14:15] <natefinch> oh, it is, ok :)
[14:20] <natefinch> dooferlad: weird, I recognize the title font and stuff, but somehow never looked deeper than that.  It sounds amazing
[14:20] <dooferlad> natefinch: it is rather good. I haven't seen it all yet due to new children.
[14:21] <natefinch> dooferlad: totally understand that
[14:22] <natefinch> dooferlad: I only recently got past season 3 of Game of Thrones for the same reason :)
[14:24] <natefinch> katco: holy crap. According to godoc.org, Dave's github.com/pkg/errors package is imported by 623 packages
[14:36] <katco> natefinch: yep. he brought the lessons learned on juju to the whole community
[14:39] <natefinch> katco: there have been other errors packages, but I guess star power has its benefits :)
[14:40] <katco> :)
[14:42] <mup> Bug #1628155 changed: cmd/juju: juju deploy "see also" refers to non-existent command <helptext> <usability> <juju:Triaged> <https://launchpad.net/bugs/1628155>
[14:54] <voidspace> babbageclunk: ping
[14:54] <babbageclunk> voidspace: pong
[14:57] <babbageclunk> dooferlad: how's double-dadhood going?
[14:58] <dooferlad> babbageclunk: OK. The expected problem (not enough sleep) but from an unexpected source (elder daughter).
[14:59] <dooferlad> babbageclunk: Mostly loving it though!
[15:00] <babbageclunk> dooferlad: yeah, we found that too - you don't get to take advantage of all the time little babies spend asleep to catch up on some yourself!
[15:00] <dooferlad> babbageclunk: definitely not when you are working :-)
[15:01] <natefinch> oh man, just used go rename in vscode... this is game changing
[15:02] <perrito666> natefinch: how so?
[15:02] <perrito666> I mean, go rename is an external tool :p
[15:03] <natefinch> perrito666: right right, but the CLI is kind of hard to use...  in vscode I just click the thing I want to rename and hit F2, type in the new name, and it just works.
[15:04] <perrito666> ah never used it in the cli, I use vim-go
[15:04] <perrito666> I would guess katco has a similar one for emacs, I don't think anyone uses go rename in the cli
[15:05] <katco> indeed i do
[15:05] <katco> comes standard in the emacs go-mode
[15:06] <katco> i can also visually debug with delve
[15:06] <natefinch> yeah, I just started debugging with delve a few days ago, also amazing
[15:06] <katco> yeah i missed visually debugging hehe
[15:06] <natefinch> yep
[15:07] <katco> it's still not my goto bc it's still a little more cumbersome than i'd like, but it's there and very helpful
[15:07] <natefinch> it's pretty great in vscode.  Very easy to set breakpoints, give it a command line to run, inspect local variables, etc.
[15:07] <natefinch> anyway, gotta run, back after lunch
[15:08] <perrito666> redir: ping
[15:18] <redir> yo
[15:18] <redir> otp perrito666
[15:22] <perrito666> redir: just ping me when you hang
[15:22] <redir> will do perrito666
[15:22] <perrito666> tx
[15:24] <anastasiamac> babbageclunk: this is what it gives me :D http://juju-ci.vapour.ws:8080/job/github-merge-juju/9349/artifact/artifacts/windows-out.log
[15:31] <voidspace> dimitern: rick_h_: ping
[15:32] <rick_h_> voidspace: pong
[15:32] <voidspace> rick_h_: landing onto develop now, right?
[15:32] <rick_h_> voidspace: starting monday
[15:32] <voidspace> ah
[15:32] <rick_h_> voidspace: so not today, getting the ducks in a row to prepare
[15:32] <voidspace> rick_h_: so just land this branch on Monday then
[15:32] <voidspace> I mean on master
[15:33] <rick_h_> voidspace: yea, on master please
[15:33] <voidspace> rick_h_: I think you did say this in standup...
[15:33] <voidspace> kk
[15:33] <rick_h_> voidspace: yep, all good
[15:33] <dimitern> voidspace: pong
[15:33] <voidspace> dimitern: unping :-)
[15:33] <voidspace> dimitern: landing the prefer IPv4 branch
[15:33] <dooferlad> bridge everything... LXD not getting on the right subnet https://www.irccloud.com/pastebin/0h06Df4K/
[15:33] <dimitern> voidspace: go for it :)
[15:34] <dooferlad> dimitern, frobware: ^^
[15:34] <dimitern> dooferlad: which branch are you testing?
[15:35] <frobware> dooferlad: let's do the testing on the master-lp1627037-final branch I am about to push.
[15:35] <frobware> dooferlad, dimitern: it's not clear to me we're all testing the final thing
[15:35] <dooferlad> This is without your latest changes. Just master + a couple of changes of my own that shouldn't do anything to break this.
[15:36] <frobware> dooferlad: ohhhh....
[15:36] <dooferlad> it has bridged everything
[15:36] <dooferlad> but LXD gave the container a 10. address, not something from the host 192.168.1.0/24...
[15:37] <dimitern> dooferlad: that might happen if the 192.168.1.0/24 has no available IPs
[15:37] <dooferlad> dimitern: there are plenty
[15:37] <dimitern> dooferlad: I had this case when I used a subnet which I forgot that I reserved .1-.254 previously
[15:38] <dooferlad> dimitern: I launched machines since that are fine
[15:38] <dimitern> dooferlad: any errors in the log of the host ?
[15:39] <dooferlad> dimitern: not that I have seen yet
[15:40] <dooferlad> oh, fun: "ERROR juju.worker.proxyupdater proxyupdater.go:160 lxdbr0 has no ipv4 or ipv6 subnet enabled"
[15:41] <dooferlad> It looks like your lxdbr0 has not yet been configured. Please configure it via:
[15:41] <dooferlad> sudo dpkg-reconfigure -p medium lxd
[15:41] <dooferlad> and then bootstrap again.
[15:44] <voidspace> dooferlad: I always get that, and then it works
[15:44] <voidspace> dooferlad: I *think* it's normal
[15:44] <voidspace> deceiving though, because it takes a while to download the template, so it seems like it hasn't worked and there is this error message in the logs...
[15:44] <voidspace> and then about ten minutes later it completes...
[15:44] <dooferlad> voidspace: you are right - I get it on working machines too.
[15:45] <voidspace> annoying, especially at error level
[15:45] <dooferlad> what a rubbish message!
[15:46] <rogpeppe> perrito666: did you have a look at https://github.com/juju/juju/pull/6339 ? i'd quite like to land it if poss.
[15:48] <dimitern> it's total crap yeah
[15:48] <dimitern> but it's fine otherwise
[15:48] <dimitern> doesn't stop it from working
[15:48] <dooferlad> Well, the address is wrong for the container... https://www.irccloud.com/pastebin/7BdR7noH/
[15:48] <rogpeppe> dimitern: or maybe you might have a look - it's basically trivial. https://github.com/juju/juju/pull/6339
[15:49] <dimitern> rogpeppe: we'll be leaving the office in a minute or so, sorry :/
[15:50] <rogpeppe> dimitern: ok
[15:50] <rogpeppe> dimitern: i might land it anyway
[15:50] <rogpeppe> dimitern: i have one review (not from core though)
[16:00] <redir> perrito666: what's up?
[16:03] <perrito666> rogpeppe: I did, sorry I lgtm'd in irc and forgot to do it in gh
[16:03] <perrito666> redir: priv
[16:03] <rogpeppe> perrito666: thanks
[16:35] <voidspace> perrito666: where are you based again?
[16:35] <perrito666> voidspace: argentina
[16:35]  * perrito666 looks out of the window and sees a rocket from voidspace approaching
[16:36] <voidspace> perrito666: heh
[16:36] <voidspace> perrito666: fancy bringing me a large rock to the next sprint?
[16:37] <perrito666> voidspace: too many variables in that sentence
[16:37] <voidspace> perrito666: there are some lovely amethyst geodes in your part of the world and I would really like one
[16:37] <perrito666> define large and rock
[16:37] <voidspace> perrito666: especially a sphere
[16:37] <voidspace> perrito666: a few kilos...
[16:38] <perrito666> I presume customs wouldn't have an issue with it so I don't see why not
[16:38] <voidspace> perrito666: :-)
[16:38] <perrito666> send me more details and ill try to procure one
[16:38] <voidspace> perrito666: I'll send you a link to an example and see if you can find one - I think it would be much *cheaper* if you find one than me buying it from a non-indigenous provider...
[16:39] <perrito666> certainly, I think these things are sort of easy to find here; hippies use them for all kinds of crafts
[16:39] <voidspace> hippies are great :-)
[16:40] <perrito666> if you say so
[16:40] <voidspace> I do, and I'm glad you consider me an authority
[16:40] <perrito666> ill go to the flea market and find out once you send me a pic
[16:40] <perrito666> lol
[16:40] <voidspace> perrito666: this is the sort of thing I would love, the purpler the better
[16:40] <voidspace> perrito666: http://www.ebay.co.uk/itm/TOP-GRADE-HUGE-DARK-PURPLE-AMETHYST-GEODE-SPHERE-FROM-URUGUAY-/232091390623?hash=item3609b9929f:g:5E8AAOSwTA9X4-8m
[16:40] <voidspace> perrito666: that size is wonderful, that price is not
[16:41] <perrito666> should this be a perfectly shaped ball?
[16:41] <voidspace> perrito666: ideally but not necessarily, to be fair I love any and all beautiful minerals
[16:42] <voidspace> perrito666: as they're cut from a geode they're usually ground to a shape like a sphere with a "bite" taken out of it
[16:42] <voidspace> perrito666: so not a perfect sphere - I'd much prefer one with some of the crystals intact
[16:43] <perrito666> do search "amatista" in mercadolibre.com
[16:43] <perrito666> .com.ar
[16:45] <voidspace> perrito666: cool, looking
[16:50] <voidspace> perrito666: this is the closest, not really exactly what I'm after (would prefer more purple and more spiky) http://articulo.mercadolibre.com.ar/MLA-629608847-esfera-de-agata-y-amatista-hermosa-115mm-1600grs-_JM
[16:50] <voidspace> perrito666: I'll look again, thanks
[16:50] <voidspace> perrito666: and if you see anything - or any other large beautiful minerals - let me know :-)
[16:51] <perrito666> I will, I'll telegram you pics
[16:51] <perrito666> the hippie fair opens only on weekends
[16:52] <voidspace> perrito666: fluorite is nice, I think there is some in Argentina
[16:52] <voidspace> perrito666: I have agate
[16:54] <voidspace> perrito666: anyway, thanks :-)
[17:13]  * rick_h_ goes for lunchables
[17:13] <perrito666> I could use reviews in https://github.com/juju/juju/pull/6344 https://github.com/juju/juju/pull/6340 https://github.com/juju/juju/pull/6321
[17:14] <natefinch> greedy
[17:15] <natefinch> perrito666: I'm looking at 6344
[17:25] <rogpeppe> this PR implements letsencrypt certificate support for controllers. anyone fancy taking a look? https://github.com/juju/juju/pull/6345
[17:25] <natefinch> whoa, awesome!
[17:28] <rogpeppe> natefinch: note: if you want to try it out, you need to bootstrap with api-port=443
[17:29] <natefinch> rogpeppe: ahh, interesting
[17:29] <rogpeppe> natefinch: alternatively you can run a port forwarder on the controller instance to forward from 443 to 17070 but then you'll need to open up port 443 in the security group
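The forwarding alternative rogpeppe describes can be done with a single NAT rule on the controller instance. This is a sketch under the assumptions he states: iptables is available, and port 443 has been opened in the security group:

```shell
# Redirect incoming TCP 443 to the Juju API server's default port 17070.
# Run on the controller instance; port 443 must also be allowed in the
# cloud security group for external clients to reach it.
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 17070
```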
[17:54] <natefinch> rick_h_: so there's some auto-detection of environment variables for openstack, so like you can set OS_TENANT_NAME and we'll pick it up.  Currently that's being reused for rackspace... I don't think we should reuse those environment variables, what do you think?
[17:59] <rick_h_> natefinch: meet you in standup?
[18:01] <natefinch> rick_h_: yep
[18:03] <natefinch> fastest hangout ever
[18:04] <rick_h_> :)
[18:53] <cmars> hello, i have a runaway jujud burning up my machine. how do i attach the profiler to it?
[18:54] <cmars> for example, https://paste.ubuntu.com/23247920/
[18:55] <natefinch> cmars: anything interesting in the logs?
[18:55] <cmars> looking
[18:58] <cmars> natefinch, maybe. lots of messages like this are spewing to machine-0.log: https://paste.ubuntu.com/23247941/
[19:01] <natefinch> cmars: can you run juju model-config logging-config="<root>=TRACE"
[19:01] <natefinch> rick_h_: when did set-model-config and get-model-config get rolled into one command?
[19:02] <cmars> natefinch, set model-config on the controller model?
[19:02] <natefinch> cmars: uh, yes
[19:02] <natefinch> cmars: I can never remember if it matters, probably does
[19:06] <cmars> natefinch, controller model worked. here's the last 2000 lines with trace on: https://pastebin.canonical.com/166645/
[19:06] <alexisb> natefinch, about 3 weeks ago regarding the model-config command
[19:07] <alexisb> set-config and get-config also got collapsed
[19:07] <natefinch> alexisb: personally not a fan of losing the verbness on set
[19:10] <rick_h_> natefinch: yea, that was the negative, was in beta18.
[19:13]  * rick_h_ goes to get the boy from school, biab
[19:21] <thumper> rogpeppe: I haven't looked at your letsencrypt branch, just saw the email fly past. Just wanted to say that I love the idea, and very cool that you've done this
[19:34] <rogpeppe> thumper: thanks!
[19:39] <natefinch> simple PR anyone?  https://github.com/juju/juju/pull/6346  just overriding a couple methods on the rackspace provider to strip out domain name as a valid credential attribute.
[19:47] <gQuigs> are there daily builds available for juju 1.25?   I'd like to try a fix that's committed but not released
[19:53] <katco> thumper: hey we need a third opinion, got a sec?
[19:53] <thumper> katco: sure
[19:53] <thumper> gQuigs: I don't think so
[19:54] <katco> thumper: can you TAL at my comment regarding juju/retry here? https://github.com/juju/juju/pull/6321/files
[19:54] <thumper> sure
[19:54] <katco> thumper: and lmk if that's appropriate? perrito666 would rather use a simple for loop
[19:54] <thumper> ack
[20:02] <natefinch> thumper: https://github.com/juju/retry/pull/4
[20:02]  * thumper pushes it on the stack
[20:02] <natefinch> thumper: just a doc update, no rush
[20:03] <katco> thumper: sweet, i didn't know you were preemptable. the possibilities...
[20:03] <thumper> queue, no stack
[20:03] <thumper> geez
[20:04] <katco> aw boo
[20:04] <thumper> nerds
[20:04] <thumper> well
[20:04] <thumper> geeks
[20:04] <thumper> but that is just stating the obvious
[20:04] <natefinch> also pedants, evidently
[20:04] <thumper> my kids are beginning to understand just how much of a geek household they live in
[20:04] <katco> sudo review my comment already!
[20:04] <thumper> well, we do work at pedantical
[20:04] <katco> lol
[20:05] <katco> natefinch: comment left on your pr
[20:06] <natefinch> katco: added QA steps
[20:06] <natefinch> katco: thanks for the reminder
[20:07] <katco> thumper: ta
[20:08] <natefinch> does anyone else think we'll get push back for calling our Kubernetes distro the Canonical Distribution of Kubernetes?  Makes it sound like it's the official one.
[20:08] <katco> thumper: so can you clarify when it's appropraite to use juju/retry?
[20:08] <thumper> I think that play on words is hilarious
[20:09] <thumper> I don't think it is
[20:09] <thumper> retry either has time limits or retry limits
[20:09] <thumper> whereas this is polling
[20:09] <katco> thumper: does juju/retry demand a time/retry limit?
[20:09] <thumper> yeah, it does
[20:09] <thumper> I wonder if I added an infinite?
[20:09] <natefinch> I think retry implies "try until you succeed"
[20:10] <natefinch> otherwise it would just be called juju/loop
[20:10]  * thumper checks something
[20:10] <katco> thumper: ah i see in https://github.com/juju/retry/blob/master/retry.go#L155
[20:11] <thumper> yeah, it expects either a max duration or max times
[20:11] <thumper> whereas I think perhaps that should be loosened
[20:11] <thumper> to expect one or more of: max duration, max retries, or stop channel
[20:11] <natefinch> I was just gonna mention stop channel
[20:11] <thumper> pull request welcome :)
[20:12] <thumper> actually
[20:12] <katco> see? he's totally preemptable
[20:12] <thumper> if you say Attempts: retry.UnlimitedAttempts
[20:12] <natefinch> katco: lol
[20:13] <katco> i'm just going to keep bringing stuff up
[20:13] <thumper> then that passes
[20:13] <katco> hey thumper what do you think of mongo?
[20:13] <thumper> it is lovely
[20:13] <thumper> as long as you don't care about all your data
[20:29] <katco> ah i'm not crazy! no wonder i couldn't figure out where this error message was coming from. different results after first time deploy is run
[20:33] <kwmonroe> hey alexisb, yesterday we talked about azure slowing down on long running deployments.  i've got a controller with molasses in its tubes if you're interested.  bug 1628206 has my logs.. i can keep it up as long as you want.
[20:33] <mup> Bug #1628206: azure controller size seems too small <juju:Triaged> <https://launchpad.net/bugs/1628206>
[20:33] <alexisb> kwmonroe, thanks for the bug
[20:33] <kwmonroe> np alexisb, thanks for being such a wonderful person.
[20:33] <alexisb> axw comes online this afternoon
[20:33] <alexisb> :)
[20:34] <alexisb> kwmonroe, can you send me info with details on access
[20:34] <alexisb> then I can have axw take a look when he comes online
[20:34] <kwmonroe> will do alexisb
[20:35] <alexisb> katco if you have time: https://github.com/juju/juju/pull/6347
[20:35] <alexisb> I would like to land this today if possible and it is a simple change
[20:36] <katco> alexisb: looks simple enough; you grepped for all possible places that reference get-controller, etc.?
[20:36] <alexisb> yes
[20:37] <katco> alexisb: lgtm
[20:43] <babbageclunk> thumper: sorry, back now - hangout?
[20:44] <thumper> babbageclunk: ack
[20:48] <natefinch> rick_h_: I presume I should work on the card assigned to me in the todo?  this bug: https://bugs.launchpad.net/juju/+bug/1621375
[20:48] <mup> Bug #1621375: "juju logout" should clear cookies for the controller <juju:Triaged by rharding> <https://launchpad.net/bugs/1621375>
[20:49] <rick_h_> natefinch: yea, so I suspected that was related to the other one you had in tracking
[20:49] <rick_h_> natefinch: my thought was that might be why a logout didn't actually log you out
[20:58] <katco> rick_h_: i need an opinion, hangout rq?
[21:37] <alexisb> rick_h_, ping
[21:40] <katco> PR for someone: https://github.com/juju/juju/pull/6348
[22:50] <thumper> katco: comments added
[22:56] <thumper> alexisb: unit leader in status https://github.com/juju/juju/pull/6350
[23:09] <alexisb> thumper, awesome