[00:09] <babbageclunk> 'night everyone! o/
[00:10] <menn0> babbageclunk: good night!
[00:24] <veebers> Who is the best person to talk to about the jujugui charm store?
[00:33]  * redir goes eod
[00:33] <redir> bbl
[00:35] <redir> veebers: I don't know. Maybe someone from https://github.com/juju/charmstore/graphs/contributors who is in a current timezone
[00:36] <veebers> redir: worth a shot, thanks :-)
[00:49] <alexisb> is there a way for me to easily force a unit into error state?
[01:04] <perrito666> Alexis a custom broken charm
[01:04] <perrito666> Just take any charm and blow the install hook with a raise
[01:05] <alexisb> perrito666, ok
[01:24] <menn0> axw: ship it for your first azure PR
[01:25] <axw> menn0: thanks!
[01:25]  * thumper runs restore again...
[02:31] <anastasiamac> natefinch: hi :)
[02:31] <natefinch> anastasiamac: howdy
[02:32] <anastasiamac> natefinch: since I'm in list-credentials, could u please not do the fix related to https://bugs.launchpad.net/juju/+bug/1596687 for it?
[02:32] <mup> Bug #1596687: command list output not consistent <2.0> <usability> <juju:In Progress by natefinch> <https://launchpad.net/bugs/1596687>
[02:32] <anastasiamac> natefinch: i'll do it in my current pr
[02:32] <anastasiamac> the "No foo.." fix ;)
[02:32] <natefinch> anastasiamac: sure
[02:33] <anastasiamac> natefinch: \o/
[02:36] <natefinch> anastasiamac: all I did was add a check in formatCredentialsTabular, if len(credentials.Credentials) == 0 { fmt.Fprintln(writer, "No credentials to display."); return nil; }
[02:38] <anastasiamac> natefinch: what about other formats like yaml and json?
[02:39] <natefinch> anastasiamac: I had assumed we only care about tabular, since that's the "human readable" output.  But maybe it's appropriate for yaml and json too, if we write the notification to stderr
[02:40] <rick_h_> those are machine readable and an empty value seems less interesting.
[02:40] <rick_h_> I guess standardizing on either an empty {} or empty string might be good to check
[02:40] <rick_h_> but I'd suggest just sticking with the current readable work and landing that in case we can get it into rc1
[02:41] <anastasiamac> natefinch: rick_h_: for tabular and yaml, I would have thought we'd want human-readable "no foo.." but for json leave it as empty? :D
[02:41] <rick_h_> vs including it all in one card
[02:41] <rick_h_> anastasiamac: no, not for yaml since it's parsable
[02:41] <anastasiamac> rick_h_: ok. so only tabular for now, the rest leave as is?
[02:41] <rick_h_> umm ok :)
[02:42] <anastasiamac> :)
[02:42] <anastasiamac> got it \o/ will do for credentials :) tyvm natefinch, rick_h_
[02:43] <natefinch> ok.. just a note, we could still write out to stderr, without messing up people parsing the stdout of yaml and json... to a terminal user, it would look the same, but a script wouldn't see it.
[02:43] <rick_h_> natefinch: rgr, and worth a polish bug as follow up
[02:43] <rick_h_> natefinch: as well as checking we output a standard object when there's no data in each format
[02:43] <rick_h_> natefinch: but as follow up please
[02:44] <natefinch> rick_h_: will do. For now I'll just twiddle the tabular output, which I think is a pretty good 90%+ solution
[02:44] <rick_h_> ty
[02:45] <natefinch> rick_h_: so... I've actually been doing this as stderr, just because that's how our CLI context object works... when you say "write out this informational message", it goes to stderr.  Is that ok, or should I switch it to stdout?
[02:46] <rick_h_> natefinch: hmm, it seems odd since it's not an error, but I feel like this is something I'd tend to get wrong and rely on folks like jam to correct me on
[02:47] <natefinch> rick_h_: AFAIK, the usual mantra is use stderr for logging/human readable info, and use stdout for data output.  But I'm happy to have someone else weigh in.  Juju standards may be different.
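The stdout/stderr split natefinch is citing can be sketched as follows (the `emitResult` and `capture` functions are invented for illustration; only the convention itself comes from the discussion):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
)

// emitResult follows the convention discussed above: the machine-readable
// payload (yaml/json) goes to out (stdout), while the informational notice
// goes to errw (stderr). A script piping stdout into a parser never sees
// the notice, but a terminal user sees both.
func emitResult(out, errw io.Writer, payload, notice string) {
	if notice != "" {
		fmt.Fprintln(errw, notice)
	}
	fmt.Fprint(out, payload)
}

// capture runs emitResult against in-memory buffers, for testing.
func capture(payload, notice string) (string, string) {
	var out, errw bytes.Buffer
	emitResult(&out, &errw, payload, notice)
	return out.String(), errw.String()
}

func main() {
	emitResult(os.Stdout, os.Stderr, "{}\n", "No credentials to display.")
}
```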
[02:54] <alexisb> ok last time I checked this should work right??: charm pull-source cs:xenial/postgresql
[02:54] <alexisb> or just charm pull-source cs:postgresql
[02:58] <alexisb> hmm cassandra works
[03:04] <thumper> fuck what?
[03:05] <thumper> how can this happen?
[03:05]  * thumper digs
[03:15] <natefinch> thumper: hey, do you think this is something we could just change? https://bugs.launchpad.net/juju/+bug/1625194
[03:15] <mup> Bug #1625194: flag parsing error doesn't match juju error styling <usability> <juju:Triaged> <https://launchpad.net/bugs/1625194>
[03:16] <thumper> natefinch: sure... best way would be to just write it to the logger
[03:17] <natefinch> thumper: not sure I follow.
[03:17] <thumper> instead of writing "error: %v"
[03:17] <thumper> use loggo logger
[03:17] <thumper> it will do the coloring
[03:17] <natefinch> ahh, yes
[03:21] <thumper> veebers: still trying to work out why the machine agent isn't getting the api updates
[03:22] <veebers> menn0: would you know how the version numbering works when 'juju upgrading'? I started with 2.0-rc1-xenial-amd64, expected 2.0-rc1.1 but got 2.0-rc1.2 (note the .2)
[03:22] <veebers> thumper: ack
[03:38] <veebers> menn0: hmm, I think it might be due to me thinking of the version as returned by the binaries '--version' as opposed to what was reported in the running status
[03:50] <anastasiamac> axw: updated https://github.com/juju/juju/pull/6274 to have the comment and the new output for tabular output when no credentials
[03:50] <anastasiamac> axw: i thinks it's good to go...
[03:54] <thumper> veebers: how do you want these binaries
[03:54] <thumper> ?
[03:56] <veebers> thumper: are you able to scp them up to the machine from yesterday?
[03:56] <thumper> probably
[03:56]  * thumper thinks
[03:56] <axw> anastasiamac: LGTM
[04:03] <anastasiamac> axw: \o/
[04:19] <axw> thumper menn0: will either of you have a chance to review my 3rd azure PR today? I'm going to need 2 reviews, so can't just be OCR. specifically just need https://github.com/juju/juju/pull/6272/commits/8e0933ad924cd72e1e2433e0c9a5fec78a236b73 reviewed
[04:46] <menn0> axw: i'll take a look once i'm off this call with thumper
[04:46] <axw> menn0: thanks
[05:18] <menn0> thumper: I installed the minimal centos7 ISO in a VM while we were talking and it comes with /sbin/service too
[05:18] <thumper> sweet
[05:18] <thumper> does it do what we expect?
[05:20] <menn0> thumper: looks like it
[05:21] <thumper> cool
[05:22] <menn0> thumper-cooking: yep it works as expected (redirecting commands to systemctl)
[05:42] <menn0> axw: review done
[05:42] <axw> menn0-afk: thanks muchly
[05:47] <anastasiamac> axw: did the review too... funny how menn0-afk and i picked on the same things ;)
[05:47] <axw> anastasiamac: TYVM
[05:48] <axw> anastasiamac: I'm just dropping the TODO for now. implementing that means more changes to credentials in general
[05:48] <axw> don't really have time for it right now
[05:49] <anastasiamac> axw: \o/ thank you for doing it - very exciting to see interactive in azure
[05:49] <anastasiamac> axw: I'd drop it too, there does not seem to be much need for it right now
[06:40] <axw> anastasiamac menn0-afk: thanks for the reviews, it's all landed in time for RC1. woot :)
[06:40] <anastasiamac> axw: \o/ u r the champion
[07:27] <babbageclunk> mgz: ping?
[07:49] <thumper> ha ha
[07:49] <thumper> got it
[07:49] <thumper> well, at least some of it
[08:41] <babbageclunk> mgz: ping?
[08:50] <thumper> anyone feel like reviewing a restore branch? https://github.com/juju/juju/pull/6282
[08:52] <thumper> this is really quite urgent for rc1
[08:52] <thumper> anyway, it is almost 9pm
[08:52]  * thumper is done
[08:52] <anastasiamac> \o/
[09:45] <perrito666> Thumper ship it with a small comment
[09:47] <perrito666> Well github is very reviewable from my phone I love that
[09:59] <mgz> babbageclunk: yo, sorry
[10:00] <babbageclunk> mgz: hey - if I've got a last minute change to charm URL formats that I'm rushing to finish, is it too late to get it into the RC1 build?
[10:01] <mgz> it's fine to commit to master either way
[10:01] <mgz> as far as I'm aware, nothing was kicked off in the way of release last night
[10:02] <mgz> if the change lands now, can discuss when coming to it if that rev can go in the rc as well
[10:03] <babbageclunk> Ok great - can you ping me if that's happening and I haven't landed my change? (I'll try to keep an eye out for it.)
[10:04] <mgz> sure thing
[10:04] <babbageclunk> mgz: Thanks!
[10:06] <voidspace> I thought it was a quiet morning, I wasn't signed into IRC...
[10:06] <voidspace> frobware: is the MAAS meeting happening today?
[10:06] <voidspace> frobware: it's in my calendar
[10:07] <frobware> voidspace: yep - as always. It's only me and mpontillo that turn up these days.
[10:07] <voidspace> babbageclunk: if you're free at 2pm you *might* find it useful (if you have any questions or feedback about MAAS)
[10:07] <voidspace> frobware: I can come today, I haven't done much on MAAS, but it's good to stay in touch with what's happening
[10:09] <babbageclunk> voidspace: hopefully I'll have this stuff done, so I'll try to come along.
[10:09] <voidspace> babbageclunk: cool
[10:12] <macgreagoir> frobware: I was there last week. I must have been too quiet :-)
[10:12] <frobware> macgreagoir: ignore my sweeping generalisations...
[10:13]  * macgreagoir ignores frobware until further notice
[10:15] <babbageclunk> frobware, voidspace, macgreagoir: Can I please get a review of https://github.com/juju/names/pull/74? It's pretty short!
[10:15] <frobware> babbageclunk: on the Q
[10:15] <babbageclunk> frobware: Is that like a train or...
[10:16] <frobware> babbageclunk: wreck.. to complete your analogy! :-D
[10:19] <babbageclunk> frobware: :)
[10:35] <frobware> babbageclunk: reviewed
[10:35] <babbageclunk> frobware: Thanks!
[10:38] <macgreagoir> babbageclunk frobware: I added a review too, but I only see mine :-/
[10:39] <macgreagoir> (PR 74)
[10:40] <babbageclunk> macgreagoir: Thanks!
[10:40] <babbageclunk> frobware, I can't see yours either - are you just pretending to have done reviews for the mad props?
[10:41] <babbageclunk> frobware: Or maybe you just didn't hit the complete button or something? I haven't done a review since we switched to github reviews.
[10:47] <frobware> weird
[10:47] <frobware> babbageclunk: I see my comments in http://reviews.vapour.ws/r/5723/
[10:48] <frobware> babbageclunk: ah... so I'm still using RB.
[10:48] <macgreagoir> frobware: Just seen your review for me in RB too. I was looking in GH :-)
[10:49]  * frobware still wonders why we aren't using Gerrit.
[10:51] <babbageclunk> frobware: Oh yeah, that would do it! Cool, I'll read it there.
[10:55] <frobware> babbageclunk, macgreagoir, jam, voidspace: anyone - http://reviews.vapour.ws/r/5720/ :)
[10:57] <voidspace> frobware: head down right now, but I can get to it in a bit if someone else doesn't
[10:57] <frobware> voidspace: ack.
[11:03] <rick_h_> dimitern: want to sync or are you all busy with the rc1 fixes? we can punt to later on?
[11:03] <dimitern> rick_h_: now is as good as any time :) let's sync
[11:03] <rick_h_> dimitern: k
[11:05] <rick_h_> frobware: any reason not to $$merge$$ the branch?
[11:06] <frobware> rick_h_: not really. didn't want to be both submitter and merger without at least passing it by 1 other person.
[11:06] <frobware> rick_h_: I can merge now
[11:06] <rick_h_> frobware: k, I'll add it then :)
[11:15] <user_____> hi! need your help
[11:15] <user_____> juju add-machine takes forever
[11:16] <user_____> cloud-init-output.log shows “Setting up snapd (2.14.2~16.04) …” (last line in file)
[11:16] <user_____> how to fix this?
[11:21] <dimitern> user_____: can you try bootstrapping again with --config enable-os-refresh-update=false --config enable-os-upgrade=false ?
[11:22] <dimitern> user_____: I've seen this can help in those cases
[11:24] <user_____> thank you! will try
[11:55] <babbageclunk> macgreagoir: Embarrassing review? https://github.com/juju/names/pull/75
[11:56] <macgreagoir> babbageclunk: LGTM :-)
[11:56] <babbageclunk> macgreagoir: Thanks!
[12:59] <babbageclunk> voidspace: Sorry, can't make the meeting - neck deep in test failures at the moment.
[13:11] <marcoceppi> what hook is triggered when I run `juju attach` ?
[13:14] <voidspace> babbageclunk: argh, I'm late anyway!
[13:14] <rick_h_> marcoceppi: upgrade-charm I believe
[13:21] <marcoceppi> rick_h_: believe, or confirmed?
[13:22] <rick_h_> marcoceppi: believe, 90% sure but I've been wrong before
[13:22] <rick_h_> marcoceppi: docs don't seem to state it, so have to check the code.
[13:23] <rick_h_> katco: ^ can you confirm I'm not crazy?
[13:23] <rick_h_> or natefinch ^
[13:26] <rick_h_> marcoceppi: ok, confirmed: https://github.com/juju/juju/blob/c5326a97429362f1c13b593bc58bc56757f9b3c8/resource/context/cmd/get.go#L62
[13:26] <natefinch> I missed the context I think
[13:27] <natefinch> oh, maybe not
[13:27] <natefinch> yes, upgrade-charm
[13:33] <marcoceppi> mbruzek: ^^
[13:35] <mbruzek> ack, marcoceppi I will test this out, thanks
[14:01] <rick_h_> katco: natefinch frobware dimitern ping for standup
[14:02] <frobware> rick_h_: hmm. thought I was in there already.
[14:03] <dimitern> omw
[14:37] <redir> reboot brb
[15:22] <rick_h_> frobware: ping, sent a card your way that's important to at least diagnose before other stuff please.
[15:41] <alexisb> perrito666, ping
[15:54] <rick_h_> katco: can you please review/QA nate's PR? https://github.com/juju/juju/pull/6285
[15:54] <katco> rick_h_: sure
[15:54] <rick_h_> katco: ty
[15:55] <katco> are we not using review board?
[15:55] <rick_h_> katco: evidently not. There was talk of using the new GH review system, but I missed that folks were using it officially
[15:55] <katco> rick_h_: mmm i'm going to review this in RB. i don't have a horse in that race, but i'd prefer we don't do this piecemeal
[15:56] <rick_h_> katco: k
[15:57] <katco> natefinch: needs QA steps: ^^^
[15:58] <natefinch> katco: oops, ok, will add
[16:05] <natefinch> rick_h_: where do we support spaces, do you know?
[16:05] <rick_h_> natefinch: maas and aws
[16:06] <natefinch> rick_h_: ok
[16:08] <frobware> rick_h_: reluctant to go back to beta15. unless they say it's still broken in beta18.
[16:09] <rick_h_> frobware: understand, just need to work with them and have it move forward please. I'm nervous this is actually going to be hostname related because it was rabbit, but the fact that it was nova/etc means it might have been the container issues that just landed.
[16:10] <rick_h_> frobware: so ideally we'd get them to test the RC, but if there's an issue with OIL and OS deploys in 2.0 we need to be on that as it's a bit of bread and butter for us
[16:12]  * rick_h_ goes for lunchables 
[16:13] <frobware> rick_h_: so here's a thing - we need a tool that captures lots of stuff about the environment. In trying to repro this can I just use 3 machines on a flat network? Do I need multiple disks? Etc. The bundle is useful but it doesn't describe the rest of the environment. Or put another way, how do we stop raising bugs that initially require a lot of back-and-forth getting said info.
[16:14] <natefinch> katco: one minute, trying to fix list-spaces... realized I had forgotten it since I hadn't been able to test it
[16:14] <katco> natefinch: k
[16:15] <katco> frobware: the idea of an "environment debugger" tool has come up frequently. a way to both snapshot critical info about the env, and to interact with it in other ways than through the controller
[16:15] <katco> frobware: i think our on-site folks would like such a tool
[16:15] <katco> frobware: and it would help with critsits
[16:15] <frobware> katco: the lack of this gates the speed at which we can fix issues.
[16:16] <katco> agreed
[16:16] <natefinch> it is a little tricky because there's basically infinite information that *could* matter.
[16:16] <perrito666> alexisb: pong
[16:16] <alexisb> perrito666, nevermind
[16:16] <alexisb> we are all good
[16:16] <frobware> natefinch: some is better than none. Right now I need info. it's also close to my EOD. which means another day passes.
[16:17] <frobware> natefinch: but in essence we have compute, storage and network.
[16:17] <perrito666> alexisb: oh, so you only call me when things are bad, so that is how things are
[16:17] <natefinch> frobware: true enough.  And a tool we can continue to extend as we think of things we need would still be helpful.  I wrote something like that for my last job  when people kept filing issues without even grabbing logs.
[16:17] <frobware> natefinch: some description covering the basics would be a head start
[16:18] <alexisb> perrito666, :) that is what happens when you are a loan teammember on
[16:18] <alexisb> the manager finds you when things go bad
[16:18] <perrito666> loan teammember? something tells me I should check my email :p
[16:20] <alexisb> well reed is at the dentist and christian is close to eod
[16:20] <alexisb> the rest are not online yet
[16:20] <alexisb> so that makes you the go-to Ateam dude
[16:20] <alexisb> you lucky dog
[16:21] <perrito666> meh, shame on reed, his mouth is occupied not his hands, he could very well be working while his teeth get fixed
[16:22] <alexisb> lol
[16:32] <natefinch> wtf
[16:32] <natefinch> why does list-spaces default to yaml?
[16:32] <voidspace> natefinch: because we hate you
[16:32] <natefinch> I knew it!
[16:32] <voidspace> :-)
[16:33] <frobware> natefinch: and juju spaces?
[16:38] <natefinch> frobware: juju list-spaces is just an alias for juju spaces
[16:42] <rick_h_> frobware: round one would be to template-ize the questions we tend to ask
[16:42] <rick_h_> frobware: then look at automating those questions with tools
[16:44] <natefinch> frobware: the default for commands should be tabular, with json and yaml as flag options
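Natefinch's point about defaults can be sketched as format dispatch; the `space` type and render functions here are invented for illustration and are not juju's actual formatter code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// space is a stand-in type for this sketch, not juju's real model.
type space struct {
	Name string `json:"name"`
}

func renderTabular(spaces []space) string {
	out := "NAME\n"
	for _, s := range spaces {
		out += s.Name + "\n"
	}
	return out
}

func renderJSON(spaces []space) string {
	b, _ := json.Marshal(spaces)
	return string(b) + "\n"
}

// render dispatches on a --format flag value; tabular is the default,
// with json (and, in real juju, yaml) available as explicit options.
func render(format string, spaces []space) string {
	switch format {
	case "json":
		return renderJSON(spaces)
	default: // human-readable tabular output by default
		return renderTabular(spaces)
	}
}

func main() {
	fmt.Print(render("", []space{{Name: "internal"}}))
}
```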
[17:03]  * alexisb takes a break, back in an hour
[17:16] <natefinch> rick_h_: uh, so, what should I do about the fact that spaces has no tabular output?  That seems like a separate bug.
[17:17] <rick_h_> natefinch: +1, please file and target to 2.0 GA and we'll have to get it cleared up
[17:17] <natefinch> cool
[17:17] <rick_h_> natefinch: ty for the catch
[17:17] <natefinch> rick_h_: it's amazing how many bugs get shaken out just by running a bunch of commands :)
[17:18] <rick_h_> :)
[17:18] <rick_h_> using the product ftw
[17:18] <natefinch> indeed
[17:30] <natefinch> rick_h_: https://bugs.launchpad.net/juju/+bug/1625737
[17:30] <mup> Bug #1625737: list-spaces doesn't have tabular format <juju:New> <https://launchpad.net/bugs/1625737>
[17:43] <natefinch> katco: replied to your comment btw
[17:53] <babbageclunk> Anyone know if there's a cunning debugging trick for getting stack traces out of a running go program?
[17:53] <redir> alexisb-afk: ping
[17:53] <redir> holler when you're back
[17:55] <frobware> babbageclunk: if you send SIGQUIT does that work (cannot remember)
[17:55] <frobware> babbageclunk: "According to the docs of the runtime package, sending a SIGQUIT to a Go program will, by default, print a stack trace for every extant goroutine, eliding functions internal to the run-time system, and then exit with exit code 2."
[17:56] <frobware> babbageclunk: read all the way to the end of the sentence. :)
[17:56] <babbageclunk> I mean, I think that'd be ok - it'll get restarted, and I seem to already have bounced it once.
[17:56] <babbageclunk> frobware: The process in question is basically hanging already.
[17:57] <frobware> babbageclunk: get some coffee whilst the go routines are printed. Just general advice. :-D
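The default SIGQUIT behaviour quoted above can also be made non-fatal, which suits a hung daemon like the jujud case being discussed; a stdlib-only sketch (the function names are invented):

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"runtime"
	"syscall"
)

// dumpGoroutines captures stack traces for every goroutine - the same
// information the Go runtime prints to stderr (before exiting with
// status 2) when an unhandled SIGQUIT arrives.
func dumpGoroutines() string {
	buf := make([]byte, 1<<20)
	n := runtime.Stack(buf, true) // true = include all goroutines
	return string(buf[:n])
}

// installDumper makes `kill -QUIT <pid>` dump stacks *without* killing
// the process.
func installDumper() {
	c := make(chan os.Signal, 1)
	signal.Notify(c, syscall.SIGQUIT)
	go func() {
		for range c {
			fmt.Fprint(os.Stderr, dumpGoroutines())
		}
	}()
}

func main() {
	installDumper()
	// ... daemon work; on SIGQUIT the stacks go to stderr, i.e. wherever
	// the init system redirects it for a daemonized process ...
}
```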
[17:58] <alexisb> redir, heya, I have a meeting at the top of the hour, but will ping when I am off
[17:58]  * frobware EOD
[17:58] <babbageclunk> Where's it going to print to? The jujud process is a daemon
[18:02] <redir> alexisb: cool
[18:06] <redir> babbageclunk: what are you trying to print?
[18:11] <redir> wb katco :)
[18:19] <natefinch> Me reading a bug report:  Steps to reproduce:  1) Deploy Openstack private cloud.   **rapid banging on the back button**
[18:27] <perrito666> lol
[18:55] <natefinch> rick_h_: I don't understand exactly what is wrong in this bug: https://bugs.launchpad.net/juju/+bug/1616200
[18:55] <mup> Bug #1616200: Error message uses 'local:' URL to refer to local charm. <deploy> <jujuqa> <juju:Triaged by rharding> <https://launchpad.net/bugs/1616200>
[19:02] <rick_h_> natefinch: looking
[19:03] <alexisb> redir, available when you are
[19:03] <rick_h_> natefinch: so local: isn't useful, the invalid name is in the metadata.yaml right?
[19:03] <redir> where alexisb ?
[19:03] <rick_h_> natefinch: so the local: blows smoke
[19:03] <alexisb> 1x1 HO
[19:06] <katco> redir: sorry, o/
[19:06] <katco> natefinch: k, tal
[19:06] <redir> katco: no sweat
[19:07] <katco> natefinch: were you able to get list-spaces working as well?
[19:08] <natefinch> katco: no... there's no tabular format for list-spaces, and tabular is the only place we should have this output, so it's basically N/A for now.  I filed another bug about list-spaces missing the tabular output
[19:09] <katco> natefinch: ah ok. that was the whole thing of list-spaces defaulting to yaml
[19:09] <natefinch> katco: correct... I thought it was just a bad default, but when I went to look at it, tabular just didn't exist
[19:09] <katco> oops
[19:31] <marcoceppi> halp rick_h_ (and others) network spaces in beta18 work against maas and lxd, yeah?
[19:37] <natefinch> rick_h_: FYI, that local: charm bug... it would be a pretty invasive thing to change.  I agree that the UX is kind of bad:
[19:37] <natefinch> $ juju deploy ./star_say
[19:37] <natefinch> ERROR bad charm URL in response: URL has invalid charm or bundle name: "local:win2012r2/star_say-0"
[19:38] <natefinch> marcoceppi: AFAIK it's AWS and Maas only
[19:38] <marcoceppi> natefinch: so, if I do a deploy to LXD on a MAAS node
[19:38] <marcoceppi> no worky?
[19:39] <natefinch> marcoceppi: oh, no, I meant the lxd provider.... not sure how spaces interacts with containers on maas nodes
[19:39] <natefinch> marcoceppi: I know they've been working on that area, might be post-18, not sure
[19:40] <katco> natefinch: point me at that bug? i might have some insight since i've been in that area lately
[19:40] <katco> natefinch: the local deploy bug
[19:40] <natefinch> katco: https://bugs.launchpad.net/juju/+bug/1616200
[19:40] <mup> Bug #1616200: Error message uses 'local:' URL to refer to local charm. <deploy> <jujuqa> <juju:Triaged by rharding> <https://launchpad.net/bugs/1616200>
[19:41] <marcoceppi> natefinch rick_h_ I'm about to walk into a place tomorrow  where we could /really/ use it
[19:41] <marcoceppi> so I'd like to know now
[19:42] <natefinch> katco: updated the bug with an easier repro that makes the problem more obvious
[19:42] <katco> natefinch: ta
[19:42] <natefinch> katco: the error is generated from gopkg.in/juju/charm.v6-unstable/url.go ~line 278
[19:43] <katco> natefinch: can you put that in the bug as well?
[19:43] <natefinch> katco: heh good idea
[19:44] <natefinch> katco: we could change the error to only specify the name, but that might have unintended side effects when the rest of the url matters.... maybe not, but I'm not entirely sure, since it's in code that is probably used in a lot of places.
[19:45] <katco> natefinch: so the bug is that the "local:" schema is used?
[19:45] <katco> in the error message?
[19:46] <katco> natefinch: also, it seems like juju did a lot of work before it validated the charm name... i'm just skimming, but did we do a lot of setup for nothing?
[19:47] <natefinch> katco: seems like the charm is being validated on the server
[19:48] <katco> natefinch: i wonder if we should do both. but seems like validation of the name should be one of the first things we try
[19:49] <natefinch> katco: well, I think validation can be done all at once.  It's not expensive.  Maybe the reason we do it on the server is in case validation rules ever change?  The server is really the thing that has final say as to what's valid.
[19:50] <katco> natefinch: yeah, if we soften/change validation, we wouldn't want the check in a newer client different than an older server
[19:51] <katco> natefinch: well, for that matter i wonder if we can do it client side, because an older server may be *more* permissive than the client
[19:51] <natefinch> yep
[19:52] <rick_h_> marcoceppi: right, only MAAS. there's nothing to setup subnets/etc in lxd yet that we can use. lxd was working on some of that this cycle
[19:52] <katco> natefinch: maybe we just need to move the validation of the charm on the server to before we do any real-work
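The validate-early ordering katco suggests can be sketched as below; the name regexp is an assumption for illustration, not juju's real charm-name rule (though it does reject the underscore that trips up "star_say" above):

```go
package main

import (
	"fmt"
	"regexp"
)

// validName is a hypothetical charm-name rule for this sketch: lowercase
// alphanumerics and hyphens, starting with a letter. Underscores are
// rejected, which is what makes "star_say" fail.
var validName = regexp.MustCompile(`^[a-z][a-z0-9-]*$`)

// deployCharm illustrates the suggested ordering: reject a bad name up
// front, before any expensive server-side setup happens.
func deployCharm(name string) error {
	if !validName.MatchString(name) {
		return fmt.Errorf("invalid charm name %q", name)
	}
	// ... only now do the expensive deployment work ...
	return nil
}

func main() {
	fmt.Println(deployCharm("star_say"))
	fmt.Println(deployCharm("postgresql"))
}
```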
[19:52] <rick_h_> marcoceppi: so it has to be raw machines on maas, though I guess there's some things that you can stick in a container and as long as the raw machine is in the space the container will be
[19:53] <rick_h_> marcoceppi: openstack works in this way on maas and it's deployed into containers on maas
[19:53] <marcoceppi> rick_h_: well, we're deploying into containers on maas for openstack
[19:53] <rick_h_> marcoceppi: k, well spaces was written and tested with openstack on maas so should be good then
[19:53] <marcoceppi> but for whatever reason, containers are being put on one nic and not the other
[19:54] <rick_h_> marcoceppi: so there was a bunch of container on maas issues that fix landed today and is in the RC being built
[19:54] <marcoceppi> rick_h_: since there are two nics (two networks) one public ext net and the other internal
[19:54] <marcoceppi> rick_h_: good to know
[19:54] <marcoceppi> rick_h_: I'll make them move to rc1 tomorrow
[19:54] <rick_h_> marcoceppi: but it was typically that the container was only getting the br0 vs the whole list of interfaces
[19:55] <rick_h_> natefinch: sorry, ran to get the boy from school. So what's up with the local:?
[19:57] <katco> rick_h_: i think we're first wondering what the bug is. that it's using "local:"?
[19:57] <thumper> morning
[19:57] <rick_h_> katco: yes, that the user is told to fix the name of their charm and the local: is nothing to do with the name of their charm
[19:57] <katco> thumper: heyo
[19:58] <katco> rick_h_: ahh i see. so it should get rid of "local:trusty/" in that example
[19:58] <rick_h_> katco: right
[19:59] <perrito666> thumper: I bow to your sed-fu
[19:59] <katco> rick_h_: ok, i think natefinch was saying that it would be difficult to fix bc it's done server-side
[19:59] <thumper> perrito666: it was actually menn0
[19:59] <rick_h_> katco: I see, and shared code vs something specific to this deploy command situation?
[20:00] <perrito666> thumper: thanks for that fix also, I had hit a wall regarding that issue
[20:00] <katco> rick_h_: that might be it. i wonder why we couldn't just modify that error message though
[20:00] <thumper> perrito666: we were checking many things while trying to work out why the machine agent got the wrong ip address
[20:00] <rick_h_> katco: yea, the revision is also in that error
[20:00] <katco> rick_h_: maybe something to follow up on if he's already out for the day
[20:00] <rick_h_> katco: so it seems we go "bad name, here's the ID to go fix"
[20:00] <thumper> the race with the peergrouper only became apparent late yesterday
[20:00] <natefinch> katco, rick_h_ : we certainly could, it's just that the code is very likely used by a lot of different consumers... I'd be afraid we'd break something
[20:00] <thumper> now to look at the HA CI test
[20:00] <katco> rick_h_: bc the flipside is also true: if it's shared code, we have the opportunity to fix it for all clients
[20:01] <katco> natefinch: we'd just be changing an error's text right?
[20:01] <rick_h_> natefinch: "very likely" doesn't scare me away :P need to see the list of what's affected and if the message makes sense in the other locations then
[20:01] <katco> i would be grumpy if anyone is doing anything fancy with the text of an error message downstream =|
[20:03] <natefinch> katco: I'd bet a reasonable sum of money that a ton of tests fail
[20:03] <natefinch> katco: not that anything actually stops functioning per se
[20:03] <katco> natefinch: so, i think i've found a sane way out of that mess
[20:04] <rick_h_> well should only be a couple asserting the sanity check. Most tests will be working on the assumption the charm name is correct
[20:04] <katco> natefinch: part of the reason our tests are so fragile is that we re-check the same thing all over the place
[20:04] <rick_h_> I'm not sure how a ton of tests should fail on that corner case failure mode
[20:04] <katco> natefinch: so if a bunch of tests fail, we should isolate that check into a test that checks that one thing, and delete the checks in all the other tests
[20:07] <natefinch> rick_h_: have you run our tests? :)
[20:07] <katco> i.e. this is where the principle of "tests should only test one thing" really pays off
[20:07] <rick_h_> natefinch: a couple of times, but yes I'm attempting to apply logic
[20:07] <rick_h_> natefinch: it seems easy enough to see how many will fail
[20:08] <rick_h_> natefinch: the whole "it could be bad" just doesn't jive. If it's bad, let's find out how bad. Guessing the outcome is :(
[20:09] <natefinch> rick_h_, katco: well, I only see 4 places that would fail in core, and like 7 in charm.v6
[20:09] <natefinch> tests that is
[20:10] <natefinch> at least from a quick grep of the message
[20:10] <rick_h_> natefinch: cool, doesn't seem horrible
[20:11] <natefinch> rick_h_: not at all :)  But I also didn't search all of github/juju  ... there are a lot of repos that interact with charms... others might break.  But I guess it should be obvious enough when that happens for whoever to just fix it.
[20:12] <rick_h_> natefinch: +1
[20:59] <thumper> test fix for someone https://github.com/juju/juju/pull/6290
[21:01] <thumper> now... how to create a new branch locally based on the revert of a revert...
[21:01] <thumper> natefinch, katco: either of you know?
[21:01]  * thumper goes to uncle google
[21:01]  * katco thinks
[21:02] <katco> thumper: well the revert is just a commit with changes reversed, right? your goal is what, to rebase your local branch off that commit?
[21:02] <thumper> I landed a branch earlier
[21:02] <thumper> that has just been reverted
[21:02] <thumper> and I want to make a branch that reverts the revert and fix the issue
[21:03] <thumper> I feel it should be simple
[21:03] <thumper> and I think it is
[21:03] <thumper> I just don't remember the incantation
[21:03] <katco> thumper: you should just be able to revert the revert commit... what is it and i'll paste you a command?
[21:03] <redir> anyone made any docs on snapping juju?
[21:04] <thumper> does the revert command make a branch?
[21:05] <alexisb> perrito666, https://bugs.launchpad.net/juju/+bug/1625657
[21:05] <mup> Bug #1625657: add-model fails erroneously when a cloud is specified and a credential is specified and needs to be uploaded. <juju:Triaged by alexis-bruemmer> <https://launchpad.net/bugs/1625657>
[21:06] <katco> thumper: git revert --no-commit master^2
[21:07] <katco> thumper: can't revert the merge-commit, but can revert the revert if that doesn't not make sense :)
[21:08] <thumper> ok
[21:08] <thumper> yeah
[21:08] <thumper> katco: do you remember how to tell go to compile as if another os?
[21:09] <thumper> hmm
[21:09] <thumper> is it just `GOOS=osx go test`?
[21:11] <thumper> yep
[21:11] <thumper> not sure it will actually run the tests though
[21:11] <thumper> but it did find the compile error it was getting
[21:11] <thumper> exec fork error :) - not entirely surprised there
[21:15] <thumper> well... GOOS=darwin
[21:17] <perrito666> alexisb: Cordoba
[21:17] <perrito666> Córdoba
[21:27] <perrito666> alexlist: is there a card for that bug?
[22:00] <alexisb> axw, when you come online hatch will be chasing you down
[22:02] <redir> where do I publish a juju snap? a namespace? or just make up a name like juju-with-feature-blah?
[22:06] <thumper> perrito666: still around?
[22:07] <thumper> perrito666: I'd like to talk restore and HA and juju's expectations
[22:07] <perrito666> thumper: sure, gimme a moment to get my icq going on
[22:07] <thumper> perrito666: we can use a hangout
[22:07] <menn0> redir: the snappy namespace feature didn't really seem to do anything
[22:07] <thumper> it is just alexisb that my HO hates
[22:07] <menn0> redir: I think the idea is you just pick a name like "juju-redir-blahblah"
[22:08] <menn0> I used "juju-menno"
[22:08] <redir> menn0: tx, I can do that
[22:08]  * perrito666 starts his acoustic coupling dial
[22:08] <perrito666> thumper: standup HO is ok?
[22:09] <alexisb> anastasiamac, I saw your side update on release notes in the standup, can you add it to the 2.0 doc so we can iterate
[22:09] <thumper> perrito666: ack
[22:09] <thumper> veebers: would be great if you could join perrito666 and me in the standup HO
[22:10] <anastasiamac> alexisb: sure, it was cut-and-paste from release minutes but i'll add it to release notes too
[22:10] <alexisb> anastasiamac, thanks
[22:11] <veebers> thumper: still there? I can join now
[22:11] <thumper> veebers: yep
[22:12] <anastasiamac> alexisb: it's in as "Done and needs elaborating" under "what's new in rc1" :D
[22:37] <axw> hatch: I'm available for being chased down now
[22:39] <axw> alexisb: I thought you were going to drop the "d" off resolved?
[22:43] <alexisb> axw, one sec
[22:50] <redir> Task 8dd3657e-3470-4262-9253-0558b18d4aef is waiting for execution.
[22:50] <redir> :/
[22:50] <redir> and fails
[22:55] <redir> does the snapcraft yaml included in the repo check out a fresh repo? Looks like it
[22:56] <menn0> thumper: http://reviews.vapour.ws/r/5733/
[22:56] <redir> to do a build with a feature, does one use a commit hash in the source-tag field?
[22:59] <thumper> menn0: shipit
[23:02] <alexisb> ok axw, sorry about that
[23:02] <alexisb> so the issue with resolved is that the command doesn't actually resolve anything
[23:02] <alexisb> it only marks errors 'resolved'
[23:04] <axw> alexisb: ok. I guess I feel the same thing thumper mentioned yesterday, that it's much the same as "bzr resolve"
[23:04] <axw> I guess it's fine
[23:04] <axw> at least the not-scary thing is done by default now
[23:05] <alexisb> and that was the point of the PR (change the default)
[23:05] <alexisb> renaming the command in my mind means changing the behavior too
[23:11] <thumper> alexisb: I was always told that the idea of the "resolved" command was to tell juju that you had resolved the issue yourself
[23:11] <thumper> not that you expect juju to resolve it
[23:11] <thumper> so clearing the flag is fine
[23:16] <alexisb> menn0, ping
[23:18] <mwhudson> hangouts doesn't want to talk to me
[23:22] <rick_h_> axw: hmm, is that 40 right? in looking at the numbers I expected the t2.medium to be more cpu power especially as dual core?
[23:28] <axw> rick_h_: baseline performance is 40%
[23:29] <axw> rick_h_: when idle, t2 machines accrue credits to surge or whatever it's called
[23:29] <rick_h_> axw: of the burstable
[23:29] <rick_h_> axw: hmm, ok
[23:58] <alexisb> axw, I need just a few minutes before we meet
[23:58] <alexisb> is that ok?
[23:58] <axw> alexisb: np, ping when you're ready