[00:52] <anastasiamac> davecheney: ping
[01:10] <anastasiamac> davecheney: all sorted
[01:11] <wallyworld> thumper: menn0: waigani: anyone else: I unblocked trunk by reverting a regression commit and reclassifying the associated bug as High for trunk (still critical for 1.22, 1.21)
[01:11] <wallyworld> this was per agreement with curtis
[01:11] <thumper> ok, cool, ta
[01:14] <waigani> wallyworld: thanks :)
[01:14] <wallyworld> np :-)
[01:24] <menn0> wallyworld: tyvm
[01:29] <thumper> fuck fuck fuck fuck
[01:30] <rick_h_> thumper: you rang?
[01:30] <rick_h_> :P
[01:30]  * thumper sighs
[01:30] <thumper> we are starting lxc containers on a power machine with amd64 tools
[01:31] <thumper> and wondering why it is erroring weirdly
[01:31] <rick_h_> thumper: hmm, that seems less than ideal
[01:31] <thumper> I'm trying to work out where it got the wrong tools set
[01:35] <thumper> fark
[01:48] <thumper> wallyworld: bot still showing blocked
[01:48] <thumper> wallyworld: regression query not showing blocked
[01:48]  * thumper is confused
[01:48] <wallyworld> thumper: huh? anastasiamac told me she got something accepted
[01:49] <thumper> ok, maybe I tried too soon
[01:49] <wallyworld> there were 2 bugs - one i marked fix released, the other as high
[01:49] <thumper> it was just after you mentioned above
[01:49] <wallyworld> ah, i was in the process of marking as fix released
[01:49] <wallyworld> sorry
[01:50] <thumper> ok
[01:50] <thumper> I've tried them again
[01:50] <thumper> we'll see if they land now?
[01:54] <anastasiamac> wallyworld: i got something accepted by the bot but not landed
[01:54] <anastasiamac> wallyworld: machine_test.go:1102:
[01:54] <anastasiamac>     c.Assert(sortedUnitNames(got), gc.DeepEquals, expect)
[01:54] <anastasiamac> ... obtained []string = []string{"s2/2", "s3/2"}
[01:54] <anastasiamac> ... expected []string = []string{"s2/2", "s3/1"}
[01:54] <wallyworld> sure, false error
[01:54] <wallyworld> but it wasn't rejected due to policy
[01:54] <anastasiamac> wallyworld: as in re-$$merge
[01:54] <wallyworld> yep
[01:55] <wallyworld> that's one of our awesome transient errors
[02:01] <ericsnow> wallyworld: I got bit by that one 3 or 4 times in a row last night
[02:01] <wallyworld> sigh
[02:01] <wallyworld> is there a bug raised? who does annotate say added the test?
[02:03] <wwitzel3> ok, so we've run into an issue with the rsyslog direct connections: when we ensure-availability, all of the state machines end up with different rsyslog certs.
[02:03] <wwitzel3> and the logging is distributed to all 3 of them, so we end up with 1 successful connection, and two failed connections for every node's rsyslog worker, in a constant loop, and it floods the error log.
[02:04] <wallyworld> wwitzel3: https://bugs.launchpad.net/bugs/1417875 ??
[02:04] <mup> Bug #1417875: ERROR juju.worker runner.go:219 exited "rsyslog": x509: certificate signed by unknown authority <canonical-bootstack> <logging> <regression> <juju-core:Triaged by wwitzel3> <juju-core 1.21:Triaged by wwitzel3> <juju-core 1.22:Triaged by wwitzel3> <https://launchpad.net/bugs/1417875>
[02:04] <wwitzel3> wallyworld: correct
[02:04] <wallyworld> so you managed to diagnose the cause :-)
[02:04] <wwitzel3> wallyworld: I wasn't able to reproduce until I worked with Paul and figured out it was ensure-availability
[02:04] <wallyworld> ah, good effort!
[02:04] <wwitzel3> wallyworld: yep, all my attempts to reproduce were non-HA
[02:05] <wwitzel3> so manual work around is copy the rsyslog-cert and rsyslog-key from machine 0 to 1 and 2
[02:05] <wwitzel3> that resolves the issue
[02:06] <wallyworld> so the fix will be to hand out the machine 0 certs when adding new state servers?
[02:06] <wwitzel3> wallyworld: right, for the rsyslog certs
[02:07] <wallyworld> or master
[02:07] <wallyworld> s/0/master
[02:07] <wwitzel3> wallyworld: right
[02:07] <wallyworld> a couple of days' work i guess
[02:07] <wwitzel3> wallyworld: I didn't see an easy way
[02:08] <wallyworld> yeah, me either at first glance
[02:11] <anastasiamac> thumper: let's not be greedy with bot...
[02:12] <wallyworld> thumper? greedy? never
[02:13] <thumper> anastasiamac: it is only two
[02:13] <thumper> geez
[02:13] <anastasiamac> thumper: hmm... keeping score?..
[02:14] <thumper> anastasiamac: I'm aiming for that 'nothing better' feeling of landing code
[02:14] <anastasiamac> thumper: ahh, then we'd beta not disturb
[02:18] <thumper> rick_h_: you around?
[02:18] <thumper> still?
[02:27] <anastasiamac> thumper: there goes that feeling
[02:58] <rick_h_> thumper: no, I ran away
[02:58] <thumper> rick_h_: ok, I hear you are ill, go to bed :)
[02:58] <rick_h_> yea, keep thinking about that
[02:59] <rick_h_> figured I'd listen to you guys talk and it'll put me to sleep :P
[03:10] <axw> wallyworld: ah, I just noticed there's a page 2 :)
[03:10] <axw> hence why I didn't see the upgrade step before
[04:06] <jog> wallyworld, I'm seeing "ERROR juju.cmd supercommand.go:323 connection is shut down" intermittently with 1.21. Do you know anything about that?
[04:06] <wallyworld> no :-(
[04:07] <jog> happens when attempting 'juju deploy'
[04:07] <jog> mainly seeing it with MaaS 1.8
[04:08] <jog> but I'm looking for more instances of it on other substrates
[04:09] <jog> wallyworld, the red failures from Feb 11 and 12 on this page are examples: http://juju-ci.vapour.ws:8080/job/test-maas-1_8/
[04:10] <wallyworld> jog: sadly our CLI sucks - it doesn't report the real root cause error
[04:10] <wallyworld> it just gives that connection shutdown
[04:11] <wallyworld> so you'd need to go look at the debug log
[04:11] <jog> wallyworld, I've added --debug for future runs, will that help?
[04:12] <wallyworld> jog: maybe, but likely not - it's the server side logs (machine-X.log or allmachines.log) that would be needed
[04:12] <jog> ok, I'll see what I can do about capturing those...
[04:14] <dimitern> jog, you could bootstrap the environment with "logging-config: <root>=INFO,unit=DEBUG"
[04:15] <dimitern> jog, and then the unit-X.log files are likely to contain useful data, in addition to the machine logs
[04:16] <jog> dimitern, alright that's an entry in the environments.yaml right?
[04:17] <dimitern> jog, yeah; for existing envs, you can also change it with juju set-env logging-config='<root>=INFO,unit=DEBUG'  << thumper, this should work, right?
[04:29] <jog> dimitern, with MaaS... juju status gives the dns-name: as a FQDN only known to the MaaS server, for example: 'maas-node-402.maas'
[04:29] <jog> most clients don't point their DNS at the MaaS server, so they can't really do anything with that information.
[04:30] <jog> Is it possible to provide the IP? dimitern if you're not the right person to ask do you know who is?
[04:30] <dimitern> jog, that's true I guess - for that reason I had fixed a few issues around api endpoints (e.g. resolving hosts if possible)
[04:30] <dimitern> jog, by juju you mean?
[04:31] <jog> the 'juju status' output
[04:32] <dimitern> jog, how about public-address field for units?
[04:32] <dimitern> jog, I think this is an IP even with MAAS
[04:32] <jog> it's the same, there is an example: http://juju-ci.vapour.ws:8080/job/test-maas-1_8/22/console
[04:34] <dimitern> jog, how about juju run --unit dummy-source/0 'unit-get private-address' ?
[04:34] <dimitern> jog, well, you can also (e.g. in a script) just try resolving the dns-name using the maas DNS server I guess
[04:38] <jog> dimitern, I'll try the unit-get method and yeah I can script a way to resolve the dns-name... just wondering if it's a usability issue.
[04:39] <dimitern> jog, it has been brought up before
[04:39] <jog> ok, I thought I'd seen a bug for it before
[04:39] <dimitern> jog, and now in pretty much all other providers we do return resolved IPs instead of hostnames, except for maas (as it prefers to refer to nodes by dns names)
[04:51] <jog> dimitern, so I must be doing something wrong when setting logging-config:
[04:52] <jog> I've tried both "logging-config: <root>=INFO,unit=DEBUG" and "logging-config: '<root>=INFO,unit=DEBUG'" in my environments.yaml
[04:53] <jog> but get ERROR juju.cmd supercommand.go:323 there was an issue examining the environment: unknown severity level "INFO,unit=DEBUG"
[04:53] <dimitern> jog, ah, I see - so the second one is for the command line only, as in juju set-env logging-config='<root>=TRACE'
[04:54] <dimitern> jog, while the first one is for envs.yaml
[04:54] <dimitern> jog, but now that I actually checked "juju help logging", the format uses ";" not "," as the separator
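A minimal stdlib-only sketch of why the comma-separated form fails, assuming the spec format described above (`module=LEVEL` entries separated by ";"). This is illustrative, not loggo's actual parser: with a comma, the whole tail becomes part of the level name, which reproduces the "unknown severity level" error jog saw.

```go
package main

import (
	"fmt"
	"strings"
)

// validLevels mirrors the severity names loggo accepts (illustrative subset).
var validLevels = map[string]bool{
	"TRACE": true, "DEBUG": true, "INFO": true, "WARNING": true, "ERROR": true,
}

// parseSpec parses a "module=LEVEL;module=LEVEL" config string.
func parseSpec(spec string) (map[string]string, error) {
	cfg := make(map[string]string)
	for _, part := range strings.Split(spec, ";") {
		kv := strings.SplitN(strings.TrimSpace(part), "=", 2)
		if len(kv) != 2 {
			return nil, fmt.Errorf("malformed entry %q", part)
		}
		level := strings.ToUpper(strings.TrimSpace(kv[1]))
		if !validLevels[level] {
			return nil, fmt.Errorf("unknown severity level %q", kv[1])
		}
		cfg[strings.TrimSpace(kv[0])] = level
	}
	return cfg, nil
}

func main() {
	// Comma is not a separator, so the level becomes "INFO,unit=DEBUG".
	if _, err := parseSpec("<root>=INFO,unit=DEBUG"); err != nil {
		fmt.Println("comma:", err)
	}
	cfg, err := parseSpec("<root>=INFO;unit=DEBUG")
	fmt.Println(cfg, err)
}
```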
[04:55] <menn0> jam: ping
[05:00] <davecheney> review board is a piece of shit
[05:01] <davecheney> why does it publish all your comments, when you push the "publish" button
[05:01] <davecheney> ****EXCEPT***** the ones where you have commented in the review branch (not review diff) screen
[05:01] <davecheney> that has a separate, partially obscured, publish button
[05:02] <dimitern> davecheney, you can do it for single comments, but IIRC you have to publish each one after adding, rather than hitting the "review-wide" publish
[05:02] <davecheney> dimitern: the way it works out for me
[05:02] <davecheney> there are comments when you are reviewing the diff
[05:02] <davecheney> then there is the hovering button at the top of the page to publish or review them
[05:02] <davecheney> there is the 'review' and 'ship it' buttons on the view diff and view review pages
[05:03] <davecheney> but if you're having a conversation in the comments section (not on the view diff)
[05:03] <davecheney> there is an entirely separate and different "publish" button hidden just below the list of issues on the view reviews page
[05:03] <davecheney> you only know you need to push it by looking at the dashboard
[05:03] <davecheney> seeing the icon is orange (?) not blue, and the mouse-over hover says "comments drafted"
[05:03] <dimitern> yep, not quite thought through UX
[05:06] <dimitern> also, beware - if you're editing a comment and your wifi connection drops (or is down) when you submit the comment (still draft), UI-wise it appears on the page, but a few seconds later you're taken to a nasty "connection failed" page and "back" doesn't help getting your comment text back
[05:17] <davecheney> yay, cloud!
[05:44] <thumper> dimitern: we append the unit=DEBUG unless they explicitly set unit to be otherwise
[05:44] <thumper> dimitern: so you were mostly right
[05:44] <thumper> be careful about setting <root> to debug
[05:44] <thumper> as that'll get golxc and other libraries that also use loggo
[05:45] <thumper> juju=DEBUG is good
[07:01] <wallyworld> axw: i've updated the storage pools PR for your viewing "pleasure"
[08:26] <axw> wallyworld: sorry, I was out before. will look shortly
[08:27] <TheMue> morning o/
[08:33] <Muntaner> good morning devs o/
[08:33] <Muntaner> gsamfira: hi! adventuring myself in trying http://www.cloudbase.it/ws2012r2/
[09:14] <jam> menn0: poke
[09:14] <jam> I tried poking earlier but didn't see you online
[09:31] <axw> wallyworld: reviewed
[09:31] <axw> sorry for the delay
[09:32] <frankban> dimitern: morning, I updated my branch as suggested, could you please take another look?
[09:32] <dimitern> frankban, yes, I'm already looking
[09:32] <dimitern> frankban, thanks for doing all that so quickly btw!
[09:33] <frankban> ty
[09:33] <dimitern> frankban, 99% of the PR looks great, I just want to double-check TestMachinePrincipalUnits as I've seen it fail on ppc64el recently due to some sort ordering
[09:34] <frankban> dimitern: sortedUnitNames failing on ppc?
[09:35] <frankban> sounds weird
[09:35] <dimitern> frankban, nope, sorry - run-unit-tests-trusty-amd64 build #2145 was the one that failed
[09:35] <dimitern> frankban, http://juju-ci.vapour.ws:8080/job/run-unit-tests-trusty-amd64/2145/console
[09:38] <dimitern> frankban, it seems the culprit is line #1077 - units[3], err = s3.AllUnits()
[09:39] <dimitern> frankban, so I was thinking if I could suggest a fix for you to include there
[09:40] <frankban> dimitern: go ahead
[09:41] <dimitern> frankban, will do, once I can see where's the issue and reproduce it :)
[09:41] <frankban> dimitern: heh, ok ty
[09:42] <Muntaner> guys - a non Juju related question here - is "sed" the best way to edit conf files via a bash script?
[09:49] <frankban> dimitern: can it be something related to units slices mutated by append? should we reslice in sortedUnitNames(append(a.units, a.subordinates...))?
[09:52] <dimitern> frankban, perhaps - still looking
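The aliasing bug frankban suspects can be shown in a few lines. `sortedNames` below is a hypothetical stand-in for the `sortedUnitNames` helper under discussion, assumed to sort its argument in place: when the first slice has spare capacity, `append` reuses its backing array, and the sort then reorders data that another slice still views — an intermittent failure depending on capacities, exactly the kind seen in the flaky test.

```go
package main

import (
	"fmt"
	"sort"
)

// sortedNames sorts its argument in place and returns it (a hypothetical
// reconstruction of the sortedUnitNames helper discussed above).
func sortedNames(names []string) []string {
	sort.Strings(names)
	return names
}

func main() {
	backing := make([]string, 2, 4) // spare capacity: the dangerous case
	copy(backing, []string{"s3/2", "s2/2"})
	units := backing
	subordinates := []string{"s1/0"}

	// append reuses units' backing array (cap allows it), and the sort
	// then reorders that shared array: units changes under our feet.
	_ = sortedNames(append(units, subordinates...))
	fmt.Println(units) // [s1/0 s2/2] -- mutated from [s3/2 s2/2]

	// The reslicing fix frankban suggests: copy before appending and
	// sorting, so the shared backing array is never touched.
	fresh := []string{"s3/2", "s2/2"}
	combined := append(append([]string(nil), fresh...), subordinates...)
	fmt.Println(sortedNames(combined), fresh) // [s1/0 s2/2 s3/2] [s3/2 s2/2]
}
```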
[09:57] <voidspace> dimitern: TheMue: dooferlad: making coffee, be with you in a couple of minutes
[09:58] <TheMue> voidspace: ok
[09:58] <dimitern> voidspace, ok
[10:00] <jam> team standup
[10:01] <dimitern> TheMue, voidspace, dooferlad, oh, yeah - it's team standup now and ours after
[10:05] <Muntaner> gsamfira: ping
[10:05] <voidspace> oh yeah
[10:08] <voidspace> firefox crash again, yay
[10:16] <wallyworld> axw: team meeting?
[10:16] <axw> wallyworld: bollocks, be there in a minute
[10:16] <jam> mgz: did you want to attend the Juju team meeting ?
[10:26] <dimitern> frankban, you've got a review BTW
[10:27] <frankban> dimitern: ty
[10:34] <wallyworld> axw: thanks for review, will look real soon
[10:35] <voidspace> TheMue: dimitern: dooferlad: standup?
[10:35] <dooferlad> voidspace: works for me
[10:35] <TheMue> voidspace: sounds good for me
[10:35] <dimitern> voidspace, TheMue, dooferlad, I need 5m
[10:36] <voidspace> ok, I'll go make more coffee
[10:36] <TheMue> aka one cigarette ;) *scnr*
[10:36] <dimitern> but go ahead :)
[10:42] <dimitern> voidspace, dooferlad, ok, let's do it
[10:43] <voidspace> omw
[11:00] <frankban> dimitern: done, can I $$merge$$ it?
[11:03] <dimitern> frankban, I'm having a last look
[11:03] <dimitern> frankban, go for it, and thanks again!
[11:04] <frankban> dimitern: ty, should I have to propose it also against 1.21 and 1.22?
[11:04] <Muntaner> gsamfira: ping!
[11:04] <gsamfira> Muntaner: pong
[11:06] <dimitern> frankban, yes, please, but in both 1.21 and 1.22 my patch has landed, so you'll need to do some cherry picking
[11:07] <frankban> dimitern: should we land the revert patch first?
[11:08] <dimitern> frankban, I don't think it's needed, as your patch builds on top of it anyway
[11:14] <Muntaner> hi gsamfira :)
[11:14] <Muntaner> just installed the Windows Server 2012 R2 evaluation by cloudbase
[11:14] <Muntaner> the VM is in the same subnet of Juju VMs
[11:14] <Muntaner> now... how can I interact with this VM with Juju?
[11:14] <Muntaner> or am I confused about what I can do?
[11:19] <gsamfira> Muntaner: a few basics first. Juju has several "providers" it can use to create a juju environment (manual, local, MaaS, OpenStack, etc). The manual provider attempts to connect to an already installed machine and add it to juju. This provider however is not yet supported on windows. It will be in the (hopefully) not too distant future, just not yet.
[11:20] <gsamfira> Muntaner: if you used the image I pointed you to, that means you have an Openstack environment all set up
[11:20] <gsamfira> correct?
[11:20] <Muntaner> gsamfira: yes, everything is working fine atm
[11:21] <Muntaner> downloaded the qcow2 image of Windows, created a VM in the same subnet of juju VMs
[11:21] <Muntaner> juju actually is bootstrapped on this OpenStack environment
[11:21] <gsamfira> cool
[11:21] <Muntaner> (which is a private cloud, basically)
[11:21] <Muntaner> if you need screens/logs, ask me whatever it's needed :)
[11:22] <gsamfira> have you done the whole juju-metadata generate-image song and dance? Or are you using the official simplestream feed?
[11:23] <Muntaner> gsamfira: did nothing with juju :/ just downloaded the image, added it to OpenStack via glance, and created a VM with the image in it
[11:24] <axw> wallyworld_: review can wait for tomorrow, but FYI http://reviews.vapour.ws/r/923/
[11:24] <Muntaner> gsamfira: I did that stuff in the past days, to bootstrap my OpenStack with Juju
[11:24] <wallyworld_> axw: np, will try and look
[11:24] <Muntaner> and basically I used an Ubuntu Cloud 14.04 image to do all the stuff
[11:24] <gsamfira> Muntaner: okay. So you have not yet deployed a massive indispensable infrastructure using juju :). There is one last hoop to jump through to get windows in. And that is the juju agent and a simplestreams feed that includes a Windows image
[11:26] <Muntaner> gsamfira: afraid I don't know how that works: (
[11:26] <gsamfira> Muntaner: no worries. I can guide you through it.
[11:26] <gsamfira> I just need to get a handle on what you already have
[11:26] <Muntaner> gsamfira: ask me whatever you need to know :)
[11:32] <Muntaner> gsamfira: background. We set up a private OpenStack cloud, with all of the components installed on a single server. We have everything working (cinder, glance, swift, etc). Then, we bootstrapped Juju on this OpenStack. After some problems we got with metadata (I opened a launchpad about it), this step was also successful, and I have Juju working on this OpenStack. Now, I wanted to understand how Juju and Windows machine are r
[11:34] <gsamfira> so first things first. Let's get you the windows agent tools.
[11:34] <gsamfira> I'm guessing you are using 1.21.2 ?
[11:35] <Muntaner> 1.21.1-trusty-amd64, gsamfira
[11:35] <gsamfira> perfect. Building you a windows binary now
[12:20] <frankban> dimitern: 1.22 backport https://github.com/juju/juju/pull/1594
[12:21] <dimitern> frankban, looking
[12:23] <dimitern> frankban, LGTM
[12:23] <frankban> dimitern: thanks, shipping it
[12:25] <dimitern> frankban, +1
[13:08] <wwitzel3> so I am thinking: in the rsyslog worker there is the handler for config changes, which reacts when new state servers get added, so I think I can force the recreation of the rsyslog server certs there and that should fix up the connection issues
[13:09] <wwitzel3> re: https://bugs.launchpad.net/bugs/1417875
[13:09] <mup> Bug #1417875: ERROR juju.worker runner.go:219 exited "rsyslog": x509: certificate signed by unknown authority <canonical-bootstack> <logging> <regression>
[13:09] <mup> <juju-core:In Progress by wwitzel3> <juju-core 1.21:Triaged by wwitzel3> <juju-core 1.22:Triaged by wwitzel3> <https://launchpad.net/bugs/1417875>
[13:35]  * dimitern steps out for 1/2h
[16:15] <voidspace> dimitern: when all the pieces are in place, what's the command line incantation to create a new container on a machine?
[16:21] <dimitern> voidspace, sorry, in a meeting, bbiab
[16:21] <voidspace> dimitern: no problem, dooferlad remembered anyway
[16:21] <dimitern> +1
[16:21] <TheMue> phew, first restructured test works. but nested map[string]interface{} for marshaling tests are a pain in the ass
[16:22]  * TheMue needs to write some helper
[16:31] <dimitern> TheMue, consider using something like type M map[string]interface{} + helpers around it in the tests
[16:31] <TheMue> dimitern: that's how I'm changing it currently ;)
[16:31] <dimitern> TheMue, cheers :)
[16:33] <dimitern> voidspace, so I'll propose the PCII() API tomorrow as it happens
[16:34] <natefinch> This is why you shouldn't use map[string]interface{} in the first place :)
[16:34] <katco> natefinch: +1
[16:34] <dimitern> natefinch, :) if you can avoid it, yeah
[16:35] <dimitern> natefinch, however writing generic json serialization tests with strings like `....25 lines later...`[1:] is even worse
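The `type M map[string]interface{}` idea dimitern suggests can be sketched like this. The `M` alias and the `mustJSON` helper are illustrative names, not juju code: the point is that nested test fixtures stay readable instead of repeating the full map type at every level or embedding long raw JSON strings.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// M is shorthand for the nested maps that JSON (un)marshaling tests
// otherwise spell out as map[string]interface{} at every level.
type M map[string]interface{}

// mustJSON marshals v, panicking on error - the kind of small helper
// the chat suggests building around M (name is illustrative).
func mustJSON(v interface{}) string {
	b, err := json.Marshal(v)
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	// Nested literals stay readable compared to repeating the full type.
	expected := M{
		"machine": M{"id": "0", "series": "trusty"},
		"units":   []interface{}{"s2/2", "s3/2"},
	}
	fmt.Println(mustJSON(expected))
}
```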
[16:35] <TheMue> ah, this looks definitely better
[16:36] <dimitern> :) I had a feeling it will
[16:36] <dimitern> ok
[16:36] <dimitern> I'm outta here
[16:36] <TheMue> dimitern: yeah, strings are even worse
[16:36] <dimitern> g'night all, see you tomorrow ;)
[16:36] <TheMue> dimitern: cu
[16:36] <katco> dimitern: take a look at that PR tomorrow?
[16:37] <dimitern> katco, oh, sorry - I promise (bookmarking now)
[16:37] <katco> dimitern: no worries at all... have a wonderful evening :)
[16:37] <frankban> dimitern: here is the 1.21 backport: https://github.com/juju/juju/pull/1595 could you please take a look? (slightly different code base there)
[16:37] <dimitern> katco, thanks!
[16:38] <dimitern> frankban, ok, I can take one last review I guess :)
[16:38] <katco> dimitern: i see how it is! haha jk ;)
[16:38] <dimitern> (for a backport I reviewed already..)
[16:39] <frankban> dimitern: ty
[16:40] <frankban> dimitern: oh, didn't notice you were done for the day, sorry
[16:40] <ericsnow> voidspace: FYI, the webhook worked for PR 1588 yesterday when I tried (after the RB downtime)
[16:40] <dimitern> frankban, np
[16:40] <voidspace> ericsnow: cool, good to hear
[16:40] <dimitern> frankban, ship it!
[16:41] <frankban> dimitern: thanks! and have a nice evening
[16:41] <dimitern> ta!
[16:45] <frankban> sinzui: how do I spell that https://github.com/juju/juju/pull/1595 fixes that bug?
[16:47] <sinzui> frankban, include fixes-1420403
[16:47] <voidspace> frankban: $$fixes-1420403$$ I believe
[16:47] <frankban> sinzui: in the description?
[16:47] <voidspace> frankban: in the merge comment
[16:47] <natefinch> you just need the string fixes-1420403 in a comment somewhere, and then the standard $$merge$$ will work
[16:48] <voidspace> frankban: doesn't need to be in the $$ but it's common
[16:48] <natefinch> ^
[16:48] <frankban> ty
[18:54] <katco> is anyone familiar with the WADL format?
[18:57] <voidspace> right, going jogging
[18:57] <voidspace> probably back around in a bit
[18:57] <katco> have fun voidspace
[19:00] <voidspace> katco: thanks, good luck with the WADL :-)
[19:00] <katco> voidspace: hehe thanks
[19:03] <sinzui> abentley, my azure delete script says juju-functional-backup-restore-tyx8o23ojf is about 24 hours old. If you agree via the console it is more than 12 hours old, I will run the script in real
[19:04] <perrito666> sinzui: that last sentences terrified me
[19:05] <sinzui> perrito666, I am dealing with azure instances being left behind after destroy env. and since the Azure console doesn't work well with chrome or firefox, I am scripting an executioner
[19:05] <perrito666> the onlypart that terrified me is backup-restore :p
[19:06] <perrito666> btw, just to fill you with expectancy, I PRd the last piece of the new restore
[19:07] <sinzui> thank you perrito666
[19:08] <sinzui> I think the restore test job dies because of azure. we never actually bootstrapped http://juju-ci.vapour.ws:8080/view/Juju%20Revisions/job/functional-backup-restore/2180/console
[19:08] <sinzui> well, I can see we aren't running any env like that, so I will use my new axe
[19:12] <perrito666> who here knows anything about multiwatcher?
[19:20] <jw4> perrito666: a very little bit...
[19:20] <perrito666> git says I am looking for francesco banconi
[19:20] <rick_h_> perrito666: that's frankban who's EOD working with dimiter
[19:21] <jw4> perrito666: ah
[19:21] <perrito666> rick_h_: tx, ill try to catch him in my AM
[19:21] <rick_h_> perrito666: we use it on the GUI to track all the things
[19:22] <perrito666> rick_h_: I assume you mean megawatcher and not frankban :p
[19:22] <rick_h_> perrito666: yea pretty much :)
[19:23] <perrito666> "so frank, here is the debug log, you read it all and let us know when things change"
[19:23] <rick_h_> for everyone running a gui, hope you read fast
[19:24] <jw4> lol
[19:25]  * perrito666 sees a chance to "import" a new dell for him.. anyone got one of the new xps13 and ran linux on that?
[19:25] <perrito666> I kind of lack return policy where I am :p
[19:27] <natefinch> perrito666: google will help, or try posting on warthogs.  I think the XPS laptops are generally pretty far up the chain of laptops that Linux geeks want support for (and that Dell wants linux support for), so usually they work pretty well pretty soon after release.
[19:31] <abentley> sinzui: looking...
[19:32] <abentley> sinzui: Not seeing it.  Did you go ahead?
[19:34] <sinzui> abentley, I did. I found the job that created it and it was 24 hours old
[19:41] <perrito666> now, this is rather frustrating:  Error: entity mismatch; got len 1; want 1
[19:44] <jw4> perrito666: uh oh.. which test?
[19:45] <perrito666> storeManagerStateSuite.TestStateBackingGetAllMultiEnv
[19:46] <jw4> perrito666: hmm - just makes me nervous because I was in that general area recently
[19:47] <perrito666> jw4: I know, but this broke something that its still not prd
[19:47] <perrito666> so worry not, alwhough the test's output is all but helpful, I might fix that
[19:47] <jw4> perrito666: lol
[19:47] <jw4> whew
[19:51] <jw4> perrito666: interesting... it looks like setUpScenario returned only one entity instead of two (my naive reading of it)
[19:54] <jw4> perrito666: units not entities
[19:55] <perrito666> jw4: ?
[19:55] <jw4> perrito666: n/m -- I'm chasing the squirrel of understanding your error
[19:56] <perrito666> jw4: you should see my codebase for it
[19:56] <perrito666> :)
[19:56] <jw4> perrito666: heh
[19:57] <perrito666> I have a big change in the middle that breaks Unit.SetStatus and Unit.Status
[19:57] <jw4> perrito666: that explains it
[19:57] <perrito666> jw4: buuut
[19:57] <perrito666> there is a deepequals for 2 maps
[19:58] <perrito666> and then, if that fails, you tell me their length mismatches, assuming that the only thing that can fail is getting different lengths
[19:58] <jw4> perrito666: that's making more sense as I look at the trunk version of assertEntitiesEqual too
[19:59] <jw4> although it looks like there should be more descriptive information after that first message
[20:00] <jw4> but logged at INFO instead of ERROR
[20:00] <perrito666> jw4: there is, but its Logf
[20:00] <perrito666> which sucks in my opinion
[20:00] <jw4> perrito666: +1
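A sketch of what a more helpful comparison could report, instead of the length-only "got len 1; want 1" message perrito666 hit. `describeDiff` is a hypothetical helper, not the real `assertEntitiesEqual`: when lengths match, it names the first key whose values differ.

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
)

// describeDiff compares two maps and, unlike a length-only message,
// names the first key whose values differ (a sketch of what the
// assertEntitiesEqual error in the chat could report instead).
func describeDiff(got, want map[string]interface{}) string {
	if len(got) != len(want) {
		return fmt.Sprintf("length mismatch: got %d, want %d", len(got), len(want))
	}
	keys := make([]string, 0, len(want))
	for k := range want {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic order for the first reported diff
	for _, k := range keys {
		g, ok := got[k]
		if !ok {
			return fmt.Sprintf("missing key %q", k)
		}
		if !reflect.DeepEqual(g, want[k]) {
			return fmt.Sprintf("key %q: got %#v, want %#v", k, g, want[k])
		}
	}
	return "" // maps are deeply equal
}

func main() {
	got := map[string]interface{}{"unit-s3/2": "active"}
	want := map[string]interface{}{"unit-s3/2": "idle"}
	// Same length, but the value differs - the message says which key.
	fmt.Println(describeDiff(got, want))
}
```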
[20:10] <sinzui> abentley, a newly stuck machine in azure
[20:10]  * sinzui gets his new axe
[20:12] <perrito666> grc does not work with go's log output... what have I done to deserve this monochromatic nightmare
[20:53] <perrito666> sinzui: do you know why ssl-hostname-verification could be not working for canonistack?
[21:01] <sinzui> perrito666, no, I never worked that out
[21:01] <perrito666> sinzui: that means this is a known issue?
[21:02] <sinzui> perrito666, it is and I am looking for the bug.
[21:28] <perrito666> jw4: ok I might ask for your help
[21:28] <perrito666> :)
[21:29] <perrito666> do you know how *watcher prompts for status changes on units?
[21:36] <jw4> perrito666: ah
[21:36] <jw4> gimme a sec
[21:40] <jw4> perrito666: if I understand your question... allWatcherStateBacking has a method Watch(in chan<- watcher.Change) which sets up a watcher for each collection it's watching
[21:40] <perrito666> jw4: tx a lot
[21:40] <jw4> in this case a backingUnit
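The per-collection fan-in jw4 describes can be sketched with plain channels. `Change` and `watchCollections` below are simplified stand-ins, not juju's actual `watcher.Change` or `allWatcherStateBacking.Watch`: one goroutine per watched collection forwards change events into a single shared channel.

```go
package main

import (
	"fmt"
	"sync"
)

// Change mimics the shape of a collection-change event (collection name
// plus document id); a simplified stand-in for juju's watcher.Change.
type Change struct {
	Collection string
	ID         string
}

// watchCollections fans change events from several per-collection
// sources into one channel - the pattern described above for
// allWatcherStateBacking.Watch (a sketch, not the real implementation).
func watchCollections(in chan<- Change, sources map[string]<-chan string) {
	var wg sync.WaitGroup
	for name, src := range sources {
		wg.Add(1)
		go func(name string, src <-chan string) {
			defer wg.Done()
			for id := range src {
				in <- Change{Collection: name, ID: id}
			}
		}(name, src)
	}
	go func() {
		wg.Wait() // close the shared channel once every source is drained
		close(in)
	}()
}

func main() {
	units := make(chan string, 1)
	units <- "s3/2"
	close(units)

	in := make(chan Change)
	watchCollections(in, map[string]<-chan string{"units": units})
	for ch := range in {
		fmt.Printf("%s changed: %s\n", ch.Collection, ch.ID)
	}
}
```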
[21:42] <perrito666> bbl
[21:45] <marcoceppi> who created juju-backup ?
[21:45] <ericsnow> marcoceppi: thumper?
[21:45] <thumper> nope
[21:46] <thumper> marcoceppi: the plugin was created by a team effort
[21:46] <thumper> just over a year ago
[21:46] <thumper> while I was on leave :)
[21:46] <marcoceppi> thumper: it's not a very good plugin, it doesn't have a --description or --help flag and breaks juju help plugins
[21:46] <marcoceppi> ;)
[21:48] <marcoceppi> is it in the juju-core repo?
[21:48] <marcoceppi> I'll submit a patch
[21:48] <thumper> not sure... maybe
[21:48] <thumper> marcoceppi: yes
[21:48] <ericsnow> marcoceppi: cmd/plugins/juju-backup/juju-backup.go
[21:48] <thumper> cmd/plugins/juju-backup
[21:49] <marcoceppi> thumper: well, interesting. the juju-backup I have is a bash script
[21:49] <thumper> marcoceppi: yep, that's the one
[21:49] <thumper> there is no .go at the end
[21:49] <marcoceppi> oic
[21:49] <ericsnow> oh, yep :)
[21:50] <ericsnow> I was thinking of restore
[21:50]  * marcoceppi goes for his 3rd commit to core
[21:50] <marcoceppi> well
[21:51] <marcoceppi> it's kind of moot now, why even maintain the juju-backup command if it just shells out to core again?
[21:55] <natefinch> marcoceppi: juju-backup is a bad plugin for a lot more reasons than its description and help ;)
[21:56] <natefinch> marcoceppi: we're replacing the old backup command with a built-in command.  It is a lot more maintainable
[21:56] <natefinch> sorry, gotta run, but ericsnow and perrito666 are the backup and restore masters :)
[21:59] <marcoceppi> well, it's worse than that, every time anyone runs juju help plugins, the backup plugin is executed
[21:59] <marcoceppi> so it's just dumping backups everywhere
[21:59] <marcoceppi> it's a pretty big bug, but if 1.21 fixes this (by removing the old logic) then fine
[22:00] <marcoceppi> but 1.21-beta4 still executes the plugin from what I can tell
[22:02] <ericsnow> 1.22 is when we added the new backup command (and updated the plugin)
[22:02] <ericsnow> marcoceppi: we kept the plugin around for backward compatibility
[22:03] <ericsnow> marcoceppi: but it should probably still handle --help and --description
[22:03] <marcoceppi> ericsnow: so if I wanted to patch this for 1.21 how would I do this?
[22:03] <marcoceppi> I'll file a bug now, but since 1.22 is like the next release
[22:03] <ericsnow> handle those two options at the beginning of the shell script
[22:04] <ericsnow> there's already a bug
[22:04] <marcoceppi> right, I understand technically how to implement it in the script
[22:04] <hatch> hey could someone point me to the golang file where the api routes are defined?
[22:04] <marcoceppi> I just don't know the whole process
[22:04] <marcoceppi> I'll just open a merge request
[22:04] <ericsnow> #1389326
[22:04] <mup> Bug #1389326: juju-backup is not a valid plugin <backup-restore> <plugins> <juju-core:Triaged> <https://launchpad.net/bugs/1389326>
[22:05] <hatch> found it
[22:06] <ericsnow> hatch: apiserver/apiserver.go (see the Server.run)
[22:06] <hatch> ericsnow: thanks - I just found it :) github's search is not too great :)
[22:13] <voidspace> ok, g'night all
[22:24] <marcoceppi> ericsnow: https://github.com/juju/juju/pull/1596
[22:28] <ericsnow> marcoceppi: thanks
[22:29] <ericsnow> marcoceppi: would you do me a favor and log into reviews.vapour.ws
[22:30] <marcoceppi> ericsnow: okay, logged in.
[22:31] <ericsnow> marcoceppi: we have a webhook on github for PRs that adds the patch to our review tool there
[22:31] <ericsnow> marcoceppi: but it only works if you've logged in at least once :)
[22:31] <ericsnow> marcoceppi: so the PR should be updated now with a link to the review request
[22:33] <ericsnow> marcoceppi: you should also set the bug status to "In Progress"
[22:33] <ericsnow> marcoceppi: thanks for working on this, by the way :)
[22:39] <ericsnow> marcoceppi: that patch looks good
[22:39] <ericsnow> marcoceppi: you just need to get a "senior reviewer" to check it out before merging
[22:42] <marcoceppi> ericsnow: thanks, updated the testing stuff
[22:42] <ericsnow> marcoceppi: cool
[22:52] <jw4> does anyone know of a state transition diagram for the uniter?
[23:16] <katco> jw4: no, but if you happen to make one, i would love for you to do it in plantuml and place it in the docs like we recently did for storage: https://github.com/juju/juju/blob/master/doc/storage-model.txt
[23:17] <jw4> katco: oh cool - I'm 20 lines into a graphviz .dot file...
[23:17] <katco> jw4: relevant doc if you're interested: http://www.plantuml.com/state.html
[23:17] <jw4> katco: I'll check it out.
[23:17] <katco> jw4: plantuml uses graphviz under the covers, but has a much simpler syntax
[23:17] <jw4> katco: nice.  Yeah I have 3.5 modes semi-accounted for
[23:19] <katco> jw4: online interface if you want an easy repl: http://www.plantuml.com/plantuml/
[23:20] <jw4> katco: suhweet!
[23:21] <jw4> katco: the storage model is quite sophisticated compared to what I was doing... I aspire... :)
[23:21] <katco> jw4: :) axw did that one, and -- as far as i know -- without ever having touched plantuml prior
[23:21] <katco> jw4: in my experience they tend to grow quite organically
[23:22] <jw4> yes, I imagine with a good foundation people like to incrementally improve them over time