[00:15]  * redir eod
[00:15] <redir> thanks for the clue axw
[01:40] <menn0> thumper: http://reviews.vapour.ws/r/5404/ pls
[01:49] <thumper> menn0: done
[01:49] <menn0> thumper: thanks
[01:52] <menn0> thumper: dropped your issue (with an explanation)
[01:52] <thumper> ok
[02:13] <natefinch> axw: I couldn't tell, did you and William come up with a magic way to make (un)marshaling work nicely with jsonschema and our extra properties?  Seems like the least amount of work would be to define schemas in yaml strings and marshal into Schema, with helpers to get metadata out of the "extras".  Not elegant, but a lot less work than making our own custom jsonschema struct.
[02:17] <axw> natefinch: we did not
[02:18] <natefinch> axw: any preference?
[02:20] <axw> natefinch: I'd prefer if it wasn't just a blob of text, but I'm unsure of how it'd look in the end. I guess you would have to duplicate the entire Schema type
[02:20] <axw> and SchemaList etc.
[02:20] <natefinch> I think forking that schema package and adding our properties would also be easy enough... just sort of a maintenance burden to keep up with upstream (which, looking at gojsonschema, we usually just don't do).
[02:20] <axw> natefinch: I'd prefer if we didn't fork
[02:20] <axw> for that reason
[02:22] <natefinch> it sucks that there's no nice way to make recursive data structures easily extensible
[02:24] <natefinch> axw: so... you're saying we're boned :)
[02:25] <axw> natefinch: I don't think so? I think we can duplicate the *types* from the jsonschema package, and then introduce functions to extract/inject them from/into the Extras field
[02:26] <axw> natefinch: we would still rely on the jsonschema package for all of the standard parsing and validation
[02:28] <natefinch> axw: interesting, ok.
[02:34] <natefinch> axw: the tricky thing is that unmarshaling into the Schema type is all hand coded in that go-jsschema package, I think mostly because the rules for jsonschema are really wacky (values that can be a boolean or a list of strings, for example).  Some of it may be overengineered... but I'm guessing we might want to copy a lot of the logic in how to unmarshal... which might make it better to just fork the package outright.
[02:34] <natefinch> Note that the validation is actually in a subpackage, the main package is just the types and unmarshalling.
[02:40] <axw> natefinch: I'll have a poke and see if I can come up with something that looks workable
[03:19] <axw> natefinch: https://gist.github.com/axw/0d395b86aa5ac9ee0c4f18a3ca81cb68
[03:19] <axw> natefinch: you would need to have a "toInternal" to do the opposite, putting the juju bits back into the Extras map
[03:20] <menn0> axw or thumper: fairly easy one: http://reviews.vapour.ws/r/5405/
[03:20] <axw> natefinch: I dropped all of the json tags from the cloned struct. instead of having tags, you'd have a MarshalJSON and UnmarshalJSON that use from/toInternal
[03:20] <axw> menn0: looking
[03:21] <natefinch> axw: right
[03:21] <natefinch> axw: it's funny, the jsschema thing has custom json (un)marshaling too, so the json tags aren't really used.
[03:22] <natefinch> axw: that looks pretty good. Reusing as much as possible from the internal type, just replacing the recursive part.  Very nice.
[03:25] <axw> menn0: LGTM
[04:33] <menn0> axw: (delayed) tyvm
[04:33] <axw> np
[05:04]  * thumper waits for the featuretest package to run tests...
[05:04] <thumper> should have used -check.v
[05:04] <thumper> hmm...
[05:04] <thumper> how long to wait...
[05:05] <thumper> killed it
[05:05] <thumper> PASS: dblog_test.go:149: debugLogDbSuite.TestLogsAPI	0.005s
[05:05] <thumper> OK: 1 passed
[05:05] <thumper> PASS
[05:05] <thumper> ok  	github.com/juju/juju/featuretests	7.662s
[05:05] <thumper> 5ms to run the test, the rest of 7.6 seconds for setup / teardown
[05:06] <thumper> w00t
[05:46] <jam> menn0: are you still around?
[05:55] <menn0> jam: yes, for a few more minutes
[07:54] <jam> menn0: sorry I ended up missing you, I'll see you at the tech board if you can make it
[08:00] <rogpeppe> balloons: the best way is to migrate all the leaf repositories first, then juju-core itself
[08:22] <babbageclunk> menn0: ping?
[08:27] <menn0> babbageclunk: hi, in tech board call atm
[08:27] <babbageclunk> menn0: ok
[09:13] <menn0> babbageclunk: done!
[09:14] <menn0> babbageclunk: I imagine you're bugging me about the email you sent which I didn't reply to?
[09:14] <babbageclunk> menn0: Hi! Yes!
[09:14] <menn0> babbageclunk: sorry... I was trying to stay focussed on migrations so email took a back seat today
[09:15] <menn0> babbageclunk: I'll reply now but the short answer is that you don't need to worry about migrations for what you're working on
[09:15] <menn0> babbageclunk: a migration will abort early if there's any dying/dead machines or units
[09:16] <babbageclunk> menn0: Ok - so I can ditch that commit then?
[09:17] <babbageclunk> menn0: No need to reply right now if that's the takeaway.
[09:17] <menn0> babbageclunk: yep, you can ditch that... there will be a higher level check for dying machines
[09:18] <babbageclunk> menn0: Sweet, thanks!
[09:18] <menn0> babbageclunk: should be working on prechecks this week
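The abort-early behaviour menn0 describes (a migration refuses to start if anything is dying or dead) can be illustrated with a toy precheck. This is not the real juju/juju migration code; the `Life` values and `precheckMachines` helper are illustrative only.

```go
package main

import (
	"errors"
	"fmt"
)

// Life mirrors the juju state lifecycle values (illustrative only).
type Life int

const (
	Alive Life = iota
	Dying
	Dead
)

type Machine struct {
	Id   string
	Life Life
}

// precheckMachines aborts a migration early if any machine is not alive,
// in the spirit of the higher-level check described above.
func precheckMachines(machines []Machine) error {
	for _, m := range machines {
		if m.Life != Alive {
			return errors.New("machine " + m.Id + " is dying or dead")
		}
	}
	return nil
}

func main() {
	err := precheckMachines([]Machine{
		{Id: "0", Life: Alive},
		{Id: "1", Life: Dying},
	})
	fmt.Println(err) // prints: machine 1 is dying or dead
}
```

This is why babbageclunk can ditch his commit: the per-entity worry is subsumed by one check at the migration boundary.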
[09:51] <fwereade> menn0, if you're still around... the nsWhatever thing
[09:51] <babbageclunk> fwereade: I can't see any way of testing apiHandler.ConnectedModel without either adding the test to a suite based on JujuConnSuite (so I can call TestingApiHandler) or adding another TestingApiHandler* func to apiserver/export_test (so I can specify a UUID directly).
[09:52] <fwereade> menn0, I feel like I want *some* way to visibly distinguish these (basically) static/stateless types
[09:52] <fwereade> babbageclunk, hmm
[09:52] <babbageclunk> fwereade: I'm aware that both are eyebrow-provoking.
[09:52] <fwereade> babbageclunk, let me look
[09:52] <fwereade> babbageclunk, wouldn't surprise me if the use-existing-JujuConnSuite were the pragmatic approach
[09:53] <babbageclunk> fwereade: ok, I'll put my test in one of those for now.
[09:55] <fwereade> perrito666, ping re Authorizer.HasPermission
[10:00] <fwereade> perrito666, ehh, don't worry about it
[10:01]  * fwereade needs someone to talk to about structuring/naming some things in state, is off for a ciggie, would be delighted if someone were willing to chat in a few minutes
[10:01] <perrito666> fwereade: i am just waking up, if you want I can ping you in a moment?
[10:13] <fwereade> perrito666, ofc
[10:19] <fwereade> jam, dimitern, babbageclunk: I am going to waffle vaguely about the ns* types I've been adding to state, please jump in if anything springs to mind
[10:19] <fwereade> so
[10:19] <dimitern> fwereade: sure ;)
[10:19] <fwereade> in the payloads work, we made a stab at separating persistence from business logic
[10:20] <fwereade> and it wasn't entirely successful, but when I moved the necessary payloads bits into state, I tried to preserve the distinction
[10:21] <fwereade> and I ended up with an exported Payloads type that clients use, and an unexported `type nsPayloads_ struct{}` to hold all the internal methods that actually implement the persistence side
[10:22] <fwereade> (and an unexported `var nsPayloads nsPayloads_` to actually call methods on)
[10:24] <fwereade> and it's nice that it's entirely stateless, and it has methods like `untrackOp(coll mongo.Collection, docID string) (txn.Op, error)`
[10:24] <fwereade> so I'm pretty happy with the *structure*
[10:24] <fwereade> but the "nsFoo" naming style has raised many eyebrows
[10:25] <fwereade> so the *easy* thing to do is just to drop the ns stuff
[10:25] <fwereade> and just have refcounts.CreateOrIncRefOp, and life.notDeadOp, and so on
[10:26] <fwereade> but
[10:26] <fwereade> the style, while I think it is good, is somewhat at odds with the stateful-model-type approach we're currently using almost everywhere
[10:27] <fwereade> and I can't help but feel it needs *something* to clearly and visibly distinguish it from... all the other code
[10:27] <fwereade> am I overthinking this? from a certain perspective the prefix tells you very little, whatever it might be
[10:27] <dimitern> fwereade: so you separate the external interface of Payloads from the low-level ops, which go in the ns* internal type?
[10:27] <fwereade> dimitern, yeah
[10:29] <dimitern> fwereade: ns for namespace.. I could say it's not quite obvious at first
[10:29] <dimitern> fwereade: I mean namespaces vs entities
[10:29] <fwereade> dimitern, (and in payloads, because it uses that mechanism exclusively, I can implement the Payloads thing largely in terms of distinct Change types which themselves use the ns type)
[10:29] <dimitern> fwereade: things we store
[10:29] <fwereade> dimitern, yeah, indeed
[10:29] <fwereade> dimitern, it is evidently not obvious
[10:30] <dimitern> fwereade: how about persistence ? too long I guess..
[10:30]  * xnox is failing to bootstrap localhost provider with 2.0 beta14 on xenial =(
[10:30] <fwereade> dimitern, yeah, I guess that's the obvious one, but I rue all the characters it costs every time you want to use it
[10:31] <dimitern> fwereade: backing is shorter, and already used here and there
[10:31] <dimitern> it's not perfect though..
[10:32] <dimitern> xnox: about lxdbr0?
[10:32] <fwereade> dimitern, the other solution is to pull them out into their own packages, which is clearly what they *want* to be, but I don't think I can do it non-invasively
[10:32] <fwereade> dimitern, state/persistence/life etc
[10:32] <xnox> dimitern, tools info mismatch
[10:33] <fwereade> dimitern, you know, the more I talk, the less I care about having any prefix at all
[10:33] <dimitern> xnox: oh..
[10:33] <fwereade> dimitern, if they ever make it into packages it'll just be `life.notDeadOp(...)` anyway
[10:33] <dimitern> fwereade: having the prefix is fine I think
[10:34] <fwereade> dimitern, that's seemed to be the major sticking point for people though
[10:34] <dimitern> fwereade: the underscore suffix is making me wince a little
[10:34] <xnox> does 2.0 need agent-stream: devel ?
[10:35] <xnox> ERROR failed to bootstrap model: cannot start bootstrap instance: tools info mismatch ({2.0-alpha1-xenial-amd64  b67c1484745bd58e7fac6ad672a7f6e45042ebef7a1e0e995f3f0f3c2baa7d33 18556414}, {2.0-alpha2-xenial-amd64  ceb165a45206eddadc06a7c986b44a3f76195c71a317d0c87810727c71bcc0f8 18073871})
[10:36] <xnox> $ juju bootstrap --config agent-stream=devel localhost localhost
[10:36] <xnox> seems to work better,
[10:36] <xnox> however specifying --config cancels "interactive" mode of juju bootstrap
[10:36] <dimitern> xnox: nice! I usually always bootstrap from src with --upload-tools
[10:37] <dimitern> xnox: yeah any args do that
[10:40] <dimitern> fwereade: if not ns how about stored? e.g. storedPayloads
[10:40] <dimitern> fwereade: saved ?
[10:42] <dimitern> fwereade: dbPayloads even :D
[10:43] <fwereade> dimitern, yeah, if you can see a way around the suffix that'd be nice, but you only see that when you're implementing it
[10:43] <fwereade> dimitern, hmm
[10:43] <fwereade> dimitern, dbPayloads
[10:43] <fwereade> dimitern, I think maybe I like that
[10:43] <dimitern> db is short and more obvious yeah
[10:43] <fwereade> dimitern, awesome, tyvm
[10:44] <dimitern> :) np
[10:44] <fwereade> jam, babbageclunk: any thoughts? ^^ or anyone else? :)
[10:47]  * dimitern as usual has a lot of things needing attention.. sinks back to bugs
[11:04] <perrito666> ok, this is completely new to me: alarm notification buffer full
[11:10] <babbageclunk> fwereade: Sorry, was following but didn't have much to add. I definitely like db better than ns. I don't think the suffix is a problem, since you only have to look at it in the implementation.
[11:11] <fwereade> babbageclunk, cheers :)
[11:12] <babbageclunk> fwereade: Also, ns is used a lot in iOS objective-C development (from NextStep), so it's weird seeing it in Go code. :)
[11:12] <babbageclunk> Oops NeXTSTEP apparently.
[11:13] <fwereade> babbageclunk, a sense of happy familiarity with the NS prefix did sort of come into it, even if it's ultimately unhelpful
[11:16] <dimitern> fwereade: m_lpszDataSource evokes that sort of familiarity for me, but with the opposite sign :)
[11:17] <fwereade> dimitern, hahaha
[11:17] <babbageclunk> dimitern: lol
[11:17] <fwereade> dimitern, I wish they'd done hungarian notation right
[11:17] <dimitern> oh MFC days...
[11:17] <dimitern> :)
[11:25] <perrito666> does anyone know if the clock interface recently changed?
[11:26] <dimitern> perrito666: rogpeppe was talking about changing something there recently IIRC
[11:26] <rogpeppe> dimitern, perrito666: i haven't actually changed anything recently AFAIR
[11:27] <rogpeppe> dimitern, perrito666: though I have an unfinished PR that revamps the testing clock stuff and the interface
[11:27] <dimitern> rogpeppe: ah ok - wasn't the discussion around alarms though? timers?
[11:27] <rogpeppe> dimitern, perrito666: i just need to find time to write some tests for it
[11:27] <rogpeppe> dimitern: i wanted NewTimer
[11:27] <dimitern> rogpeppe: that's it
[11:28] <rogpeppe> dimitern, perrito666: FWIW this is the PR in question: https://github.com/juju/testing/pull/108/files
[11:29] <perrito666> rogpeppe: I am getting a strange panic in tests (pastebining now)
[11:30] <perrito666> http://pastebin.ubuntu.com/22902510/
[11:30] <perrito666> Ill now go see how in the universe I triggered that without even going near it
[11:31] <rogpeppe> perrito666: you can fix that by reading from the clock.Alarms channel
[11:32] <perrito666> rogpeppe: I am curious on how I originally broke that (more than the actual fix) :)
[11:32] <rogpeppe> perrito666: tbh i think that that clock code shouldn't panic in that case
[11:33] <perrito666> I mean, I know what I changed, I agree with you that this thing should definitely not panic
[11:35] <dimitern> perrito666: I suspect you're using juju/testing/clock whereas before you used juju/juju/testing/clock
[11:35] <perrito666> dimitern: I suspect someone made that change and I made a change and we both clashed :p
[11:35] <rogpeppe> dimitern, perrito666: please, please can we change everything to use juju/testing/clock and remove juju/juju/testing/clock ?
[11:36] <perrito666> rogpeppe: sure you can, ill review the PR :p
[11:36] <rogpeppe> dimitern, perrito666: there's no need for both to exist
[11:36] <dimitern> rogpeppe: I don't mind that :)
[11:36] <rogpeppe> perrito666: most of what i do these days seems to be juju-core code cleanups
[11:37] <perrito666> rogpeppe: and we love you for it
[11:37] <dimitern> rogpeppe: indeed <3 :)
[11:38] <rogpeppe> perrito666: speaking of which... did you see this? https://bugs.launchpad.net/juju-core/+bug/1611379
[11:38] <mup> Bug #1611379: api/backups: restore code needs improvement <juju-core:New> <https://launchpad.net/bugs/1611379>
[11:38] <perrito666> "needs improvement" is perhaps the biggest understatement in the history of mankind
[11:38] <dimitern> lol
[11:39] <rogpeppe> perrito666: well, i thought i might be out of order to say "this code is fucking shit" :)
[11:39] <rogpeppe> perrito666: in a bug report
[11:39] <rogpeppe> perrito666: anyway, i rewrote it, but didn't have time to write the tests (the current code has no tests)
[12:43] <rick_h_> macgreagoir: can you review https://github.com/juju/juju/pull/5747 please, I did the QA to make sure it worked when you get a sec
[13:01] <macgreagoir> rick_h_: ack
[13:02] <macgreagoir> dimitern: Ready for HO when you are.
[13:02] <dimitern> macgreagoir: ok, let's use the standup HO I guess?
[13:09] <voidspace> babbageclunk: ping
[13:15] <babbageclunk> voidspace: pong, sorry!
[13:19] <niedbalski> dimitern, https://pastebin.canonical.com/162703/ latest run
[13:19] <niedbalski> dimitern, after this the boot gets stuck.
[13:31] <mup> Bug #1611764 opened: upgraderSuite.SetUpTest An existing connection was forcibly closed <ci> <intermittent-failure> <regression> <unit-tests> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1611764>
[13:31] <mup> Bug #1611766 opened: upgradeSuite.TearDownTest sockets in a dirty state <ci> <intermittent-failure> <regression> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1611766>
[13:32] <rick_h_> voidspace: time to sync?
[13:33] <voidspace> rick_h_: yep
[13:34] <balloons> rogpeppe, so I worked with Nate and perrito666 yesterday to get everything settled to do the changeover to the git version of gnuflag, but I've hit a snag. juju/romulus and juju/juju both reference each other. It's a circular dependency.
[13:34]  * fwereade has a problem; is anyone up to date with the details of charm migrations?
[13:34] <fwereade> babbageclunk, voidspace perhaps? ^^
[13:34] <rogpeppe> balloons: that is annoying
[13:34] <rogpeppe> balloons: but
[13:34] <rogpeppe> balloons: there's a way around it
[13:35] <balloons> I guess my thought is does juju/romulus really need to depend on juju/juju? And if so, we have to land without attempting to build for the first commit of one of them -- unless you know something cooler we can do
[13:35] <rogpeppe> balloons: make a feature branch of either juju-core or romulus that uses only the github gnuflag
[13:36] <rogpeppe> balloons: unfortunately it does
[13:36] <rogpeppe> balloons: it's ok (temporarily) for a project to depend on a commit in a feature branch
[13:37] <rogpeppe> balloons: once you've landed romulus depending on the juju feature branch, you can make juju depend on that just-landed romulus branch
[13:38] <rogpeppe> balloons: then you can update romulus to depend on juju master again and delete the juju feature branch
[13:38] <rogpeppe> balloons: or you can do it the other way, making a feature branch in romulus
[13:39] <balloons> this sounds educational
[13:39] <balloons> I think a feature branch in romulus would be preferred perhaps -- avoid invoking a CI run for instance
[13:42] <rogpeppe> balloons: good point
[13:45] <babbageclunk> fwereade: sorry, not me - voidspace has been doing a fair bit on migrations though.
[13:45] <voidspace> fwereade: babbageclunk: not charm migrations, no
[13:47] <fwereade> babbageclunk, voidspace: not to worry
[13:50] <fwereade> babbageclunk, voidspace: (fun fact: the data model *never* actually checked that a given charm existed before creating an application that used it)
[13:51] <voidspace> nice
[13:51] <fwereade> babbageclunk, voidspace: (which is somewhat cramping my style re refcounting charms, there are a number of state tests that add apps without charms... and *all* migrated apps are added without charms, which are then filled in later)
[13:54] <natefinch> sinzui, mgz: the CI scripts ask for an environment... how do I pass a cloud and credentials?
[13:55] <natefinch> (running deploy_job.py)
[13:58] <babbageclunk> fwereade: doh
[14:01] <rick_h_> voidspace: natefinch dimitern standup ping
[14:01] <dimitern> omw
[14:10] <mup> Bug # changed: 1502130, 1523608, 1556961, 1586197
[14:15] <sinzui> natefinch: the CI scripts only take an env. They use the clouds.yaml and credentials.yaml in cloud-city ($JUJU_HOME). The env is still needed to provide the config.
[14:36] <natefinch> sinzui: so, the thing is, I need to be able to inspect the broken machine, and I thought we had established that I can't rdp into the machines brought up by the cloud-city creds
[14:37] <mup> Bug #1611463 changed: GUI: test_upgrade_gui failed KeyError: 'bootstrap-config' <ci> <juju-gui> <regression> <unit-tests> <juju-core:Invalid by jcsackett> <https://launchpad.net/bugs/1611463>
[14:37] <mup> Bug #1611789 opened: GUI: test_upgrade_gui failed KeyError: 'cloud' <ci> <juju-gui> <regression> <unit-tests> <juju-core:Triaged by jcsackett> <https://launchpad.net/bugs/1611789>
[14:40] <sinzui> natefinch: I don't know how to do that with azure. Can we open rdp using the azure portal?
[14:41] <natefinch> sinzui: yes.  If I can log into azure with the right credentials
[14:41] <sinzui> natefinch: ah. with azure you login as yourself. I can add you to the subscription
[14:42] <natefinch> sinzui: cool
[14:44] <sinzui> natefinch: Register with azure with your canonical address. When Azure knows it, I can add you
[15:03] <arosales> any workarounds folks know to kill a controller?
[15:03]  * arosales getting http://paste.ubuntu.com/22918062/
[15:03] <arosales> I destroyed all the models separately
[15:10] <natefinch> sinzui: oh, can you just use my current azure account? it's nate.finch@gmail.com
[15:10] <sinzui> natefinch: no, this is a canonical account
[15:14] <natefinch> sinzui: is that a can't or won't?  I really don't want to have to keep track of multiple microsoft accounts if I can avoid it
[15:14] <sinzui> natefinch: I wont. and I understand the pain
[15:15] <natefinch> sinzui: I guess I don't understand why it matters what email address I used to sign up with
[15:15] <sinzui> natefinch: this isn't my account. it is canonicals and it gets audited
[15:15] <natefinch> sinzui: ahh.  OK, I get it.
[15:20] <natefinch> sinzui: ok, I have a nate.finch@canonical.com azure account now
[15:20] <sinzui> natefinch: reload the page in azure, I think you can see all the resource groups and vms
[15:22] <natefinch> sinzui: ahh yes, once I switch to your directory
[15:45] <natefinch> perrito666: works for me
[15:50] <mgz> balloons: you're not on the other network...
[15:51] <mgz> balloons: see lp:~juju-qa/ubuntu/yakkety/juju/juju-1.25.6 for the base branch without bug fixes or the per-series versions
[15:55] <perrito666> aghh 4 hours chasing an isolation problem
[15:57] <balloons> mgz, ty
[15:58] <mgz> balloons: there are some existing lintian warnings that I didn't fix
[16:00] <balloons> mgz, sorry can you link me again.. I bounced my bouncer
[16:01] <mgz> balloons: lp:~juju-qa/ubuntu/yakkety/juju/juju-1.25.6
[16:05] <redir> morning
[16:34] <balloons> natefinch, sadly looks like updating a dependency perhaps is causing a unit test failure? What do you make of http://juju-ci.vapour.ws:8080/job/github-merge-juju/8699/artifact/artifacts/trusty-out.log/*view*/?
[16:34] <balloons> natefinch, I'm still trying to land that gnuflags update from yesterday :-)
[16:36] <mgz> balloons: did fail the same way both times
[16:36] <balloons> yes it did
[16:37] <mgz> what's the diff between the gnuflag packages before and after update?
[16:38] <balloons> nothing. it was merely a move from bzr to git and lp to github
[16:38] <balloons> but I pulled forward some outdated depends in a few other places
[16:39] <natefinch> balloons: looking
[16:40] <natefinch> balloons: just a spurious failure... retry
[16:41] <balloons> what's our opinion on these spurious failures -- are we proactively fixing these / filing bugs / disabling?
[16:41] <mgz> natefinch: I haven't seen the uniter tests fail like that before
[16:42] <mgz> cannot set invalid status "rebooting" seems... not good
[16:43] <natefinch> yeah I was just looking at that line
[16:43] <natefinch> but that's obviously nothing to do with gnuflags
[16:44] <natefinch> the reboot tests have always been a little flaky
[16:44] <mgz> we've not had that test fail in CI since May
[16:44] <mgz> and then it was a catacomb race
[16:44] <mgz> not this error
[16:44] <natefinch> ERROR juju.apiserver Unable to prime /mnt/tmp/check-2402074302189717898/174/logsink.log (proceeding anyway): chown /mnt/tmp/check-2402074302189717898/174/logsink.log: operation not permitted
[16:45] <mgz> ...also interesting, though likely not fatal
[16:45] <katco> natefinch: mgz: i've seen that before
[16:45] <mgz> katco: which?
[16:46] <katco> mgz: the error message natefinch just posted
[16:46] <mgz> seems like we need to file at least two bugs...
[16:47] <mgz> and I still think balloons' branch is causing this test to fail somehow
[16:47] <katco> mgz: i assumed it was something to do with the test environment
[16:47] <mgz> though I don't have a good guess remaining on why
[16:48] <mgz> katco: yeah, I think it probably is, and likely is just fine in general because the tests probably don't care about logsink
[16:48] <mgz> but the code is obviously expecting to be root when running but happening as a normal user when called from a unit test
[16:48] <mgz> really it's an isolation escape...
[16:49] <mgz> anyway, not causing this failure, just visible as the log is shown
[16:52] <mgz> balloons: so, I don't think it will hurt to try landing again, but I bet it will fail again
[16:53] <balloons> it got conflicted again
[16:53] <balloons> i'll have to fix
[16:53] <mgz> yeah, joy for mass renaming
[16:53] <mgz> balloons: can you split out the bits?
[16:53] <mgz> just do one branch with one dep change
[16:53] <natefinch> balloons: gah, sorry... that's one of the reasons I was trying to rush with LGTMs last night, because I knew it had to go in quick or get conflicted to death
[16:53] <mgz> get that landed
[16:53] <mgz> then do the next
[16:53] <mgz> then do the snapcraft bits?
[16:53] <balloons> I don't mind the conflicts, it's fine
[16:54] <mgz> well, it adds another point of failure
[16:54] <balloons> and I had to make a feature branch this morning to depend on it
[16:54] <mgz> if you misresolve after review
[16:54] <balloons> impossible to conflict with snapcraft almost -- it's new
[17:08] <ionutbalutoiu> Hello, guys! How are the upstream juju tools for 2.0 beta generated for Windows? I tried generating them myself and got this weird error when running the juju service on Nano Server: http://paste.ubuntu.com/22928687/. But on the other hand, if I use the upstream ones, they work on Nano without a problem.
[17:13] <natefinch> ionutbalutoiu: that's a weird error
[17:14] <ionutbalutoiu> Wondering if when generating the tools you guys pass any extra parameters or something. I just pull the sources, do godeps and go install.
[17:15] <natefinch> you're sure the ones we build on nano work?  because I see some Go bugs about not running on nano
[17:15] <alexisb> redir, ping
[17:15] <redir> pong
[17:15] <alexisb> redir, can you please get this in your review q: http://reviews.vapour.ws/r/5403/
[17:15] <redir> yup
[17:16] <alexisb> thanks
[17:16] <ionutbalutoiu> natefinch: Even today I bootstrapped Nano with beta14 and upstream tools and Nano worked.
[17:16] <ionutbalutoiu> natefinch: bootstrapped an env and deployed Nano **
[17:18] <redir> do I need to set up maas to test the proposed change? uh
[17:18]  * redir has no maas
[17:18] <natefinch> ionutbalutoiu: oh, looks like it's maybe not a bug in 1.6.0
[17:19] <natefinch> sinzui: what version of Go do we build our windows tools with?
[17:19] <ionutbalutoiu> natefinch: 1.6.3
[17:31] <natefinch> ionutbalutoiu: there's a bug on this issue for the Go language: https://github.com/golang/go/issues/15286  it seems to indicate that building with 1.6.0 should work.  I'd recommend trying that.
[17:31] <redir> I am seeing http://paste.ubuntu.com/22931893/ this locally, but don't think it should be related to my current branch. Does anyone know what it is?
[17:32] <sinzui> natefinch: 1.6-0 from Ubuntu trusty. it is cross-compiled
[17:34] <ionutbalutoiu> natefinch: Thanks! I'll give it a shot right now and see if it works.
[17:50] <natefinch> sinzui: is there a reason we're not building the tools with a newer version of Go?  There are some security fixes in 1.6.1 and bug fixes in general in .2 and .3
[17:50] <natefinch> (aside from the fact that it'll evidently break windows nano)
[17:51] <sinzui> natefinch: we use what Ubuntu provides. It is possible to use a newer version if the work is scheduled.
[17:52] <ionutbalutoiu> natefinch: Go 1.6.0 worked. :)
[17:53] <natefinch> sinzui: I guess I was just thinking that we control what's in streams, so we can build with whatever version we like, as long as the code also builds with what's in Ubuntu
[17:53] <natefinch> ionutbalutoiu: awesome
[17:57] <sinzui> natefinch: well I am not too keen on arbitrary Gos. That is what got us into a mess with utopic, vivid, and wily. I want one Go version to do everything. We do want to control the Go version. I don't want yakkety's Go 1.7 without using it everywhere.
[17:58] <sinzui> natefinch: balloons and I want to make CI build all agents separately from Ubuntu. Our packages will only contain the client. The client will be built with the Go Ubuntu chooses, but the agents use the best Go we can use everywhere
[17:59] <natefinch> sinzui: that sounds awesome
[17:59] <sinzui> natefinch: yeah, I wish the plan was a priority
[18:04] <natefinch> ahh ffs... OSError: [Errno 17] File exists: './logs/controller'
[18:06] <natefinch> there goes 10 minutes of my day :/
[18:19] <redir> natefinch: I think that twice every time I run the test suite
[18:19] <natefinch> yeah
[18:29] <natefinch> sinzui:  I keep seeing this: Provisioning failed. Shrinking a disk from 136367309312 bytes to 34359738880 bytes is not supported.. ResizeDiskError
[18:30] <sinzui> natefinch: on the bootstrap machine?
[18:30] <natefinch> sinzui: this is using origin/master
[18:30] <natefinch> sinzui: no, looks like only on the non-controller model
[18:31] <natefinch> so, windows
[18:31] <sinzui> natefinch: ah, that is interesting. We cannot get logs from the windows machines so we cannot see what is going wrong
[18:33] <natefinch> sinzui: looks like we're not even successfully starting the vm
[18:33] <sinzui> :/
[18:54] <natefinch> sinzui: I'm going to go write some code and poke axw when he comes on.  Maybe he'll have an idea what's going on.  Google hasn't been much help
[18:55] <sinzui> natefinch: ack
[18:56] <mup> Bug #1606278 changed: juju (2.0) deploy <charm-name>/<revision#> fails <juju-core:Triaged> <https://launchpad.net/bugs/1606278>
[19:03] <mbruzek> I am having some problems with the "charm publish" command, I got this : ERROR cannot publish charm or bundle: cannot publish charm or bundle: cannot update base entity for "cs:~containers/kubernetes-6": Field name duplication not allowed with modifiers
[19:03] <balloons> sinzui, do you think there is a risk in removing jujud from 1.25 packages?
[19:03] <mbruzek> oh sorry, wrong channel.
[19:03] <mbruzek> forgive me
[19:03] <sinzui> balloons: from our packages or ubuntus?
[19:04] <balloons> sinzui, ubuntu's really, but I suppose our stable ppa's too
[19:04] <balloons> sinzui, my thought is it's not a kosher change, and we should seek to do it in 2.0 only
[19:04] <sinzui> balloons: you cannot remove jujud from our packages until you change CI to make all the agents and change the release scripts to retrieve them instead of getting them from the debs
[19:05] <balloons> sinzui, right, apart from that
[19:06] <balloons> I feel like 1.25 workflows might use it (who knows what some folks are doing), while since 2.0 isn't released yet, and --upload-tools is dying, we can feel better about removing it
[19:06] <natefinch> manual provider needs it
[19:06] <natefinch> doesn't it?
[19:06] <balloons> well, our adt tests will break without it for instance, :-)
[19:06]  * natefinch forgets
[19:07] <sinzui> balloons: by removing jujud from ubuntu's packages, we ensure no bad jujuds get into production. This prevents users from using --upload-tools. Users need to use sync-tools and bootstrap --metadata-source to use a closed network
[19:08] <sinzui> natefinch: manual does not need it because manual works across arch
[19:08] <natefinch> sinzui: good point
[19:08] <balloons> sinzui, I agree but I guess I'm saying removing a binary isn't really a bugfix SRU, and I think it's saner just to leave it packaged for 1.x series
[19:09] <sinzui> natefinch: I wish add-machine worked with windows. We could start a win2012r2, then add-machine to prove the agent works if the machine comes up
[19:12] <natefinch> sinzui: is there a bug filed for that?  I didn't realize add-machine didn't work for windows.
[19:14] <sinzui> natefinch: i thought there was. I brought it up many times in the past. windows doesn't come with ssh and juju doesn't support winrm.
[19:19] <natefinch> sinzui: I don't understand why that matters.  We obviously have the ability to bring up a windows machine using juju during deploy.  There's no reason we can't reuse that code and just not deploy anything.
[19:20] <sinzui> natefinch: with the azure provider, it should not matter. add-machine will ask azure to bring up a machine.
[19:20] <rick_h_> perrito666: heads up that macgreagoir looked at the pr, I qa'd it and he's got a question before it can land in case we want to get it into the beta tomorrow https://github.com/juju/juju/pull/5747
[19:21] <perrito666> rick_h_: yup, it's on my queue for today
[19:21] <rick_h_> perrito666: coolio
[19:22] <perrito666> rick_h_: no need to throw bad 90s music at me, I'll do it
[19:22] <rick_h_> perrito666: :P
[19:24]  * perrito666 sees there is an upgrade for the fortran library... what in the universe is using fortran on my computer
[19:25] <natefinch> lol
[19:56] <mup> Bug #1611789 changed: GUI: test_upgrade_gui failed KeyError: 'cloud' <ci> <juju-gui> <regression> <unit-tests> <juju-core:Invalid by jcsackett> <https://launchpad.net/bugs/1611789>
[21:18] <balloons> natefinch, 3 runs, 3 fails all on that uniter test
[21:19] <balloons> I have to believe it's something in the diff
[21:20] <mgz> balloons: can you land just the tomb change or just the gnuflag change?
[21:21] <balloons> it may be possible to land just the tomb change, but it's lockstep now. It's already landed in the underlying repos; it can't be split now
[21:25] <balloons> I guess I was hoping to understand why the test is failing -- gnuflag and tomb did NOT change at all
[21:26] <balloons> so mgz, the only possible source of difference is in juju/utils: https://github.com/juju/utils/pull/230/files
[21:26] <balloons> But to be fair, that's landed, so in theory that should affect everyone's builds right? So there is no reason I can think of why it's failing
[21:27] <balloons> ohh right.. they aren't using it, so indeed, I can say that's the issue :-)
[21:28] <mgz> balloons: so, that utils rev hasn't changed anything, but you mean there are other intervening changes?
[21:28] <balloons> mgz, that utils rev pulls newer versions of other dependencies. newer commits
[21:28] <mgz> balloons: that's actually unimportant
[21:29] <balloons> that's the only place code could have actually changed though.. nothing else has changed
[21:29] <mgz> as when juju itself builds, it goes off its dependencies.tsv which overrides the one in any deps
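mgz's point above — the top-level project's dependencies.tsv pins win over any pins declared inside a dependency — can be sketched like this (hypothetical helper names; the real file is tab-separated with import path, VCS, and revision columns):

```python
# Sketch of how a top-level pin file like juju's dependencies.tsv
# overrides revision pins carried inside dependencies themselves.
# Line format assumed here: import-path<TAB>vcs<TAB>revision

def parse_pins(tsv_text):
    """Parse pin lines into a {import_path: revision} map."""
    pins = {}
    for line in tsv_text.strip().splitlines():
        path, _vcs, rev = line.split("\t")[:3]
        pins[path] = rev
    return pins

def effective_revision(path, top_level_pins, dep_pins):
    # The top-level project's pin always takes precedence over a
    # dependency's own pin for the same import path.
    return top_level_pins.get(path, dep_pins.get(path))

# juju's own pins
top = parse_pins("github.com/juju/utils\tgit\taaa111\n"
                 "gopkg.in/tomb.v2\tgit\tbbb222")
# a pin carried inside juju/utils, ignored when building juju itself
dep = parse_pins("gopkg.in/tomb.v2\tgit\tccc333")

print(effective_revision("gopkg.in/tomb.v2", top, dep))  # → bbb222
```

This is why the newer pins pulled in by the juju/utils PR should be irrelevant to juju's own build: only juju's dependencies.tsv decides what gets compiled.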
[21:29] <balloons> true.. I just am at a loss as to what the issue could be
[21:31] <mgz> balloons: run our build tarball script for before and after
[21:31] <mgz> and do a diff on the whole tree, and send me that
[21:32] <mgz> it's possible there's actually no code change and this is compiler oddness, but I bet we do have a functional change hidden in there
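The before/after tree diff mgz asks for can be done with the standard library alone; a minimal sketch (note that `filecmp.dircmp` compares files shallowly, by stat signature, falling back to content comparison only when the signatures differ — usually enough to spot an unexpected file change between two build trees):

```python
# Minimal recursive diff of two unpacked build-tarball trees,
# returning changed files plus files present on only one side.
import filecmp
import os

def tree_diff(before, after):
    """Return (changed, only_in_before, only_in_after) as relative paths."""
    changed, only_before, only_after = [], [], []

    def walk(cmp, prefix=""):
        changed.extend(os.path.join(prefix, f) for f in cmp.diff_files)
        only_before.extend(os.path.join(prefix, f) for f in cmp.left_only)
        only_after.extend(os.path.join(prefix, f) for f in cmp.right_only)
        for name, sub in cmp.subdirs.items():
            walk(sub, os.path.join(prefix, name))

    walk(filecmp.dircmp(before, after))
    return changed, only_before, only_after
```

Running this over the two trees and eyeballing the `changed` list is a quick way to confirm whether a functional change really is hidden in there.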
[21:32] <anastasiamac> alexisb: rick_h_: release call :)
[21:32] <alexisb> anastasiamac, be there soon
[21:36] <mup> Bug #1418139 changed: Can't reprovision a machine with manual provider <bootstrap> <destroy-environment> <manual-provider> <manual-story> <juju-core:Fix Released by natefinch> <juju-core 1.25:Fix Released by natefinch> <https://launchpad.net/bugs/1418139>
[22:12] <balloons> mgz, pulling the source tarball shows only the expected changes
[22:12] <balloons> I'm giving up.. I'll split it tomorrow as you suggest and not land it together
[22:12] <balloons> perhaps we can piecemeal the problem out
[22:13] <mgz> balloons: I see changes in utils exec code and changes in arg parsing
[22:13] <mgz> the uniter tests are unfortunately a twisty mess so it's unclear why it's now unhappy
[22:13] <mgz> probably just want to bug william (or maybe horacio?) to have a look at the test output vs diff
[22:33] <mup> Bug #1520571 changed: Juju destroy-environment stacktraces on local provider. <landscape> <juju-core:Invalid> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1520571>
[22:34] <mgz> balloons: this is perfectly reproducible locally
[22:45] <mgz> balloons: it fails because of the utils exec change
[22:46] <mgz> juju/utils pr #203
[22:46] <mgz> presumably because the test framework is somewhat bogus and doesn't expect the error right away, which it should
[22:51] <mup> Bug #1610037 changed: Juju2 beta14, missing network stanzas. <sts-needs-review> <juju-core:Invalid> <https://launchpad.net/bugs/1610037>
[22:51] <mup> Bug #1611981 opened: LXD guests not configured due to the lack of DHCP on the interface selected as eth0 <sts> <juju-core:New> <https://launchpad.net/bugs/1611981>
[23:19] <menn0> wallyworld, alexisb: no hangout?
[23:19] <menn0> standup even
[23:20] <wallyworld> menn0: yeah, we are here
[23:20] <wallyworld> https://hangouts.google.com/hangouts/_/canonical.com/a-team-standup?authuser=1
[23:20] <perrito666> menn0: you are in the old one
[23:33] <mup> Bug #1611990 opened: maas bootstrap fails if maasrc is missing <bootstrap> <maas-provider> <juju-core:In Progress by wallyworld> <https://launchpad.net/bugs/1611990>
[23:50] <alexisb> axw, I need a few minutes of your time
[23:50] <alexisb> if you don't mind
[23:50] <axw> alexisb: sure, gtg help with kids soon though
[23:50] <alexisb> it should be quick
[23:50] <alexisb> https://hangouts.google.com/hangouts/_/canonical.com/alexis-bruemmer