[00:04] <thumper> redir: around 9am
[00:08] <wallyworld> veebers: i am blind. i pulled goose master and the change i wanted was there after all. i just missed seeing it in the diff it appears
[00:10] <veebers> wallyworld: sweet, well in that case I accidentally re-built that job on purpose :-)
[00:10] <wallyworld> yes :-)
[00:10] <wallyworld> i need to go get glasses
[00:28] <perrito666> hey, this is a strange question, did any of you go to disney?
[00:28] <anastasiamac> perrito666: no
[00:33] <redir> OK, thumper I'll hit you up around 10 tomorrow.
[00:43] <anastasiamac> perrito666: wallyworld: as the most amazingly persistent ppl that worked on "backup" featurette, do u know if we have fixed this on Juju 2? https://bugs.launchpad.net/juju-core/+bug/1636080
[00:43] <mup> Bug #1636080: Machine unit 0 dies after "juju backup" <canonical-bootstack> <juju:New> <juju-core:Won't Fix> <https://launchpad.net/bugs/1636080>
[00:43] <wallyworld> lol
[00:43] <wallyworld> i don't know that bug
[00:43] <anastasiamac> it's new
[00:43] <anastasiamac> against 1.25.6
[00:43] <anastasiamac> i just want to know if we've fixed on 2..
[00:44] <perrito666> anastasiamac: I have no clue, but last time I checked backup "just works" in 2.0
[00:44] <anastasiamac> (which part "lol" - most persistent or featurette?)
[00:44] <anastasiamac> awesome \o/
[00:50] <anastasiamac> wallyworld: so as a conclusion of heated discussion.. spaces are only supported on MAAS in JUju2?
[00:50] <anastasiamac> wallyworld: not ec2 or any other provider?
[00:50] <wallyworld> and aws i believe
[00:50] <anastasiamac> k ;)
[00:50] <anastasiamac> then i have a bug \o/
[00:50] <wallyworld> not 100% sure about others
[00:50] <wallyworld> rick is across it all
[00:50] <wallyworld> not sure if a bug is required, there is probably one already
[00:51] <anastasiamac> wallyworld: that's what i mean - i have a bug: m not planning on creating more ;-P
[00:57] <mup> Bug #1636307 changed: cannot deploy to network space <juju:Triaged by rharding> <https://launchpad.net/bugs/1636307>
[01:24] <thumper> veebers: what CI testing do we have around migrations?
[01:34] <veebers> thumper: http://juju-ci.vapour.ws:8080/job/functional-model-migration/
[01:34] <thumper> how can I see what it does?
[01:34] <veebers> thumper: that's the jenkins job for the model migrations that I worked on with menn0
[01:35] <veebers> thumper: you could read the log, or the source or I could take a quick look to refresh my memory and give you a quick summary
[01:35] <veebers> thumper: the test is "assess_model_migration.py" in juju-ci-tools
[01:36]  * thumper looks at console output
[01:37] <thumper> veebers: just wondering, because I have found a failure case, and wondering whether we should add it to the CI
[01:38] <thumper> or at least I think I have a failing case
[01:42] <veebers> thumper: what's the suspected failure case?
[01:42] <thumper> veebers: juju upgrade-charm after a migration
[01:42] <thumper> I'm pretty sure it will fail
[01:42]  * veebers checks test code
[01:43] <veebers> thumper: right, the current CI test does not attempt that, it does add units to the migrated models
[01:44]  * thumper nods
[01:44] <thumper> how easy is it to add?
[01:44] <veebers> thumper: should be straightforward I think
[08:44] <mgz> morning all
[10:59] <frankban> dimitern: could you please take a look at https://github.com/juju/juju/pull/6482 ? it's trivial
[10:59] <dimitern> frankban: sure, I'll get to it shortly
[10:59] <frankban> ty
[11:00] <dimitern> frankban: it is trivial indeed - I'll QA it locally, but I'd appreciate a review on this equally trivial https://github.com/juju/juju/pull/6495
[11:03] <frankban> dimitern: since *state.State is embedded, why define that method at all?
[11:06] <dimitern> frankban: not sure - I guess so it can be stubbed for testing
[11:07] <mgz> I like just deleting the method
[11:07] <frankban> dimitern: ok, then lgtm
[11:08] <dimitern> frankban: cheers! I'm almost done with yours..
[11:08] <frankban> dimitern: what's vmaas-21?
[11:09] <dimitern> frankban: virtual maas 2.1
[11:09] <frankban> dimitern: cool, so that spins up a virtual maas? where? lxd?
[11:10] <dimitern> frankban: nope, just uses one of my vmaas-es - all in lxd
[11:10] <frankban> dimitern: ah, ok
[11:12] <perrito666> Morning all
[11:18] <dimitern> perrito666: morning :)
[11:19] <perrito666> Dimitern i added a comment to your pr about something that worries me
[11:19] <dimitern> perrito666: yeah, I've just seen it
[11:20] <dimitern> perrito666: I'm open to suggestions how to test this actually :)
[11:20] <perrito666> If you can convince me of the contrary ill stamp it
[11:21] <dimitern> perrito666: it's just badly written code that's not well tested - and that's because it only really works on maas - with some assumptions
[11:21] <perrito666> Are there plans to change the state of things?
[11:22] <dimitern> frankban: I'm having trouble with `juju gui` - it seems to return "Juju GUI is not available: Juju GUI not found" on LXD (always, as far as I can see..)
[11:24] <dimitern> frankban: however I can see with your fix it's dialing "wss://10.40.149.73:17070/api", and without it - "wss://10.40.41.252:17070/model/9b6daa80-d635-43d8-893d-8e5fcaec583d/api", so I can confirm it's using the controller root
[11:25] <dimitern> frankban: ok, I've run `juju upgrade-gui` first and now it works! \o/
[11:25] <frankban> dimitern: cool, so current GUI in simplestreams does not work with juju tip?
[11:26] <frankban> dimitern: or did you bootstrap with --no-gui?
[11:26] <dimitern> frankban: it seems so - I've seen issues around getting the gui commonly
[11:27] <dimitern> frankban: nope, I'm not using --no-gui
[11:27] <frankban> dimitern: I wonder why the GUI was not available on your controller then...
[11:27] <frankban> any errors in the bootstrap output?
[11:29] <dimitern> frankban: yeah - usually `Unable to fetch Juju GUI info: error fetching simplestreams metadata: invalid URL "https://streams.canonical.com/juju/gui/streams/v1/index.sjson" not found`
[11:30] <dimitern> frankban: LGTM
[11:30] <frankban> dimitern: dimitern oh ok then, you have a firewall or similar?
[11:31] <frankban> dimitern: ty
[11:32] <dimitern> frankban: I have a bunch of iptables rules, but I don't think it should interfere with streams.c.c (I've seen intermittent connection errors hitting streams.c.c occasionally though)
[11:33] <dimitern> perrito666: there are always plans :)
[11:33] <dimitern> perrito666: but that code is not yet used for what it's supposed to be used
[11:34] <dimitern> perrito666: it will be fixed in the near future when we have greater spaces support in the providers (LXD at least)
[11:36] <frankban> dimitern: thanks to your fantastic QA instructions I set up a quick and dirty juju-db juju plugin: http://pastebin.ubuntu.com/23378478/
[11:37] <dimitern> frankban: awesome! I'll give it a try :)
[11:37] <frankban> cool
[11:41] <dimitern> frankban: works great, and only needs support for --help and --description I think to become an official plugin :)
[12:16] <frankban> dimitern: http://pastebin.ubuntu.com/23378596/
[12:17] <dimitern> frankban: sweet thanks! btw noticed a typo on 17 (conect)
[12:18] <frankban> dimitern: good catch thanks
[12:39] <dooferlad> dimitern, voidspace, macgreagoir: https://github.com/juju/juju/pull/6496 needs eyes
[12:40] <dimitern> dooferlad: will have a look shortly
[12:40] <dooferlad> thanks dimitern!
[13:01] <natefinch> rick_h_, voidspace: http://pastebin.ubuntu.com/23378736/
[13:02] <natefinch> I was thinking it would be useful to just print out the yaml at the end anyway, so you can see the end result easier.
[13:02] <rick_h_> natefinch: sorry, -1 from my pov on that. I think just finishing with "Added cloud 'osl'" would be about it
[13:03] <rick_h_> natefinch: maybe a sample bootstrap ", can be bootstrapped with juju bootstrap osl"
[13:03] <dimitern> dooferlad: can you clean that diff a bit, so it's easier to follow? It looks like you've moved some things around for no good reason
[13:03] <rick_h_> natefinch: much like add-credential, the goal is to get to where the files don't exist to the end user and having this go so far from that seems not the best direction.
[13:04] <natefinch> rick_h_: well, maybe the output of show-cloud os1?
[13:05] <natefinch> basically the same thing... I just find it nice after all the prompts and stuff to see that I didn't typo anything
[13:05] <rick_h_> natefinch: is it really worth a > 1 line output that you added an entry?
[13:05] <rick_h_> natefinch: but you see what you typed in your terminal above the final line?
[13:05] <natefinch> yes, but it can be hard to see the forest for the trees.  The prompts and stuff make it so it's not a consolidated view
[13:06] <rick_h_> natefinch: nothing else we have dumps out that kind of data that I can think of.
[13:06] <dooferlad> dimitern: code was moved for the very good reason of it being difficult to follow and debug
[13:06] <natefinch> rick_h_: most of our things aren't long complicated interactive commands either
[13:06] <dooferlad> dimitern: like it says in the PR, getBootstrapConfigs contains the only material changes.
[13:06] <dooferlad> + tests
[13:07] <rick_h_> natefinch: I understand, but going to go -1 on starting that here unless there's real user feedback. Start minimal and add only when pushed imo.
[13:07] <natefinch> ok
[13:07] <rick_h_> always easy to add, harder to take away kind of thing
..
[13:07] <dooferlad> dimitern: unfortunately the github diff viewer sucks
[13:08] <dooferlad> dimitern: the one I was using locally makes it much clearer :-|
[13:08] <dimitern> dooferlad: *ok*, I'll try to follow it, but since it's >500 lines, somebody else should have a look as well
[13:08] <rick_h_> dimitern: dooferlad +1 on 2 reviews for a large diff
[13:09] <natefinch> rick_h_: so, I added manual yesterday too (which is really trivial, but worth having to keep the UX consistent).  So now I just need tests... but at least it's working, and it should be pretty extensible if needed
[13:10] <dimitern> dooferlad: re QA: what do you mean by 'check that it is honored' ? whether it's set on get-model-config after bootstrap with --config use-floating-ips=true ?
[13:10] <rick_h_> natefinch: so is this still a PoC or have you gone beyond that?
[13:10] <rick_h_> natefinch: the original goal was a single PoC we could try out and get some review that the code was heading in the right path.
[13:11] <natefinch> rick_h_: it's basically production code that has zero tests
[13:11] <dooferlad> dimitern: if I were you, I would just open https://github.com/dooferlad/juju/blob/9577e379651a6c45d3b4b3050f11ebda2516fbad/cmd/juju/commands/bootstrap.go and look at Run (line 314+) that used to be insanely big and make sure that it reads well. The only logical change is in getBootstrapConfigs (line 729+) where the old code was wrong.
[13:11] <rick_h_> natefinch: heh, ok
[13:11] <rick_h_> natefinch: let's not use the word production yet then :P
[13:11] <natefinch> rick_h_: heh
[13:11] <natefinch> indeed
[13:11] <natefinch> I can build a binary and let people poke at it
[13:12] <dooferlad> dimitern: the other new functions are just moved code so Run is even vaguely readable.
[13:12] <rick_h_> natefinch: +1, shoot me a branch or a binary and I'll tinker with it
[13:12] <dimitern> dooferlad: ok, I'll do that
[13:13]  * dimitern wtf?! bootstrapCmd.Run is more than 500 lines itself!
[13:13] <dooferlad> dimitern: Just clarified the CI steps in response to your question - the bug was in parsing clouds.yaml, so you need to set the config there, not on the CLI
[13:13] <dimitern> that's *insane*
[13:14] <dooferlad> dimitern: it used to be *much* bigger
[13:14] <dimitern> dooferlad: I'm looking at the source before your changes :)
[13:14] <dooferlad> dimitern: oh, yea
[13:14] <dooferlad> dimitern: that is why I changed it!
[13:14]  * dimitern approves :)
[13:14] <dimitern> of the refactoring
[13:16] <natefinch> rick_h_: https://docs.google.com/uc?export=download&confirm=leR2&id=0B-r4AW1RoHJNRFRsZ1U4d1diaHc
[13:19] <natefinch> rick_h_: or use hub checkout https://github.com/juju/juju/pull/6498
[13:20] <rick_h_> natefinch: cool ty
[13:20]  * dimitern tries to remember how to use canonistack
[13:24] <macgreagoir> dimitern: I can give it a test in canonistack, if you get stuck.
[13:25] <dooferlad> macgreagoir: if you could take a look at that https://github.com/juju/juju/pull/6496 too that would be great
[13:26] <macgreagoir> dooferlad: I am, aye. Only 498 lines to go :-)
[13:26] <perrito666> dooferlad: looking
[13:27] <dimitern> macgreagoir: that would be great actually :) thanks!
[13:31] <macgreagoir> dimitern: nw
[14:00] <rick_h_> kadams54: natefinch dimitern ping for standup
[14:00] <rick_h_> bah, katco that is
[14:00] <kadams54> :-(
[14:00] <kadams54> ;-)
[14:01] <dimitern> omw
[14:13] <dimitern> dooferlad: reviewed
[14:13] <dooferlad> dimitern: thanks!
[14:14] <dooferlad> dimitern: my favourite types of changes - minor cosmetic
[14:15] <dimitern> dooferlad: well, the code seems OK AFAICS
[14:15] <dooferlad> dimitern: :-D
[14:16] <rick_h_> dooferlad: picking up our intermittent issue for week #2, and since your work is in review I'm sending it your way please.
[14:18] <dooferlad> rick_h_: ack
[14:35] <perrito666> dooferlad: dimitern I am reviewing the pr, it might take me a couple of minutes but I will most likely catch similar stuff to dimitern
[14:48] <perrito666> dooferlad: reviewed
[15:09] <dimitern> rick_h_: good news :) I can't reproduce the bash completion traceback - probably was due to stale cache file or something
[15:09] <rick_h_> dimitern: hmm, I had that in my testing right up to GA
[15:10] <rick_h_> dimitern: /me will try to reproduce I guess
[15:10] <dimitern> rick_h_: however, I found a couple of issues with it and I'd like to propose a small fix so completions will work for source builds and the version check will be properly installed
[15:10] <mgz> yeah, I'm sorry but the completion stuff is still super-heisen
[15:10] <mgz> there is almost certainly a bug or two
[15:11] <mgz> but may depend on specifics of which packages were installed in the past and so on
[15:17] <dimitern> mgz: can you give this a try? https://github.com/juju/juju/pull/6499/ (also a review would be nice :)
[15:17] <dimitern> rick_h_: ^^
[15:17] <mgz> dimitern: taking a look
[15:19] <mgz> dimitern: I don't think try:/except: pass can be what we want
[15:19] <dimitern> mgz: why?
[15:19] <mgz> means there's no record at all if things are borked
[15:20] <dimitern> true, but would you prefer a broken bash env instead? :)
[15:21] <mgz> the style seems to be print something out mid-tab if things are totally screwed
[15:21] <mgz> I agree that normal operation shouldn't do that
[15:21] <dimitern> I don't mind dropping the try/excepts
[15:22] <dimitern> but I really like the block 345-348, which finally made the completions usable for source builds
[15:22] <mgz> yeah, I agree that change seems good
[15:28] <dimitern> a possible workaround would be `sudo ln -s "$GOPATH/bin/juju" /usr/bin/juju-2.0`, but that won't work if you *also* have the juju-2.0 package installed
[15:38] <dooferlad> dimitern, perrito666: WRT https://github.com/juju/juju/pull/6496 I could just return/fill in structs for the functions that currently return lots of variables, but the diff would be bigger. Happy to make the change either now or in a followup. Thoughts?
[15:39] <dooferlad> leave comments on the PR if you want but would like to know my next step soon.
[15:39] <dimitern> dooferlad: that's even better - I meant to suggest it, but forgot
[15:40] <dooferlad> dimitern: OK, I won't squash that commit so you can see what changed.
[15:41] <dooferlad> dimitern: thanks
[16:00] <dimitern> mgz: updated https://github.com/juju/juju/pull/6499
[16:22] <perrito666> dooferlad: sorry was hunting for lunch, please do as a follow up
[18:01] <dooferlad> perrito666: just pushed changes to ^^. Dimiter seemed to want them on the same branch so that was where I was working. https://github.com/juju/juju/pull/6496/commits/5890aa96b97e4db8b945894ce2a8415fdfaab077 is the changes from what you have seen before.
[18:09] <perrito666> dooferlad: checking
[18:43] <deanman> Hello, I'm facing a very weird behavior of juju2 on a xenial guest VM using localhost as the cloud provider. In particular, i can bootstrap the controller just fine and see its LXD in running state, but when deploying a charm it complains about not being able to download the image. Guys from #juju were helpful enough and suggested that maybe it is a bug and could get some more triage assistance here?
[18:48] <kwmonroe> to add to deanman's case, this looked fishy to me:  http://paste.ubuntu.com/23379874/.  where does 'stream "devel"' come from, and why is his agent version 2.0.0.1?  he's got the juju/devel ppa, but so do i, and my agent version/stream is 2.0.0/released.
[18:51] <deanman> kwmonroe: You meant juju/stable ppa right? At least i have used 'sudo add-apt-repository ppa:juju/stable' before downloading juju
[18:52] <kwmonroe> deanman: even when i install from the devel ppa, i still get a model-config with 2.0.0/released.
[18:52] <deanman> ah ok!
[18:55] <deanman> kwmonroe: the xenial-backports repo is uncommented in sources.list, does it make any difference?
[18:56] <kwmonroe> i don't think so deanman.  mine is uncommented as well
[18:57] <kwmonroe> one other fishy thing between deanman's model config (http://pastebin.com/YM9GHtrC) and mine is his says: development / model / true.  mine says:  development / default / false.
[19:02] <kwmonroe> deanman: is it possible you bootstrapped with --config development=true?
[19:02] <deanman> kwmonroe: well i did set it to true myself during bootstrapping.
[19:02] <kwmonroe> and you were just all like "this probably isn't important to mention to kwmonroe"?
[19:02] <deanman> "Set whether the model is in development mode", i was in development mode :-)
[19:03] <kwmonroe> :)
[19:03] <kwmonroe> well try turning that sucker off
[19:03] <rogpeppe1> anyone know of a testing utility func in juju that destroys a model and all its applications too?
[19:03] <rogpeppe1> or do i have to write it?
[19:04] <rogpeppe1> i'm talking about testing when i've got a *state.State instance here
[19:04] <rogpeppe1> natefinch: ^ ?
[19:05] <natefinch> rogpeppe1: buh.... no idea.  Look on juju conn suite maybe?
[19:06] <rogpeppe1> i *think* what i have to do is Destroy the model, then go through each of the model's applications and destroy them, then go through each unit and destroy that. or maybe the other way around.
[19:07] <natefinch> I can't tell if juju destroy-model kills applications too (sorta looks like it, but it's not explicit).  Might look to see how it does that
[19:09] <rogpeppe1> natefinch: it doesn't seem to
[19:09] <rogpeppe1> natefinch: i think it waits for everything to tear itself down
[19:09] <natefinch> ahh
[19:10] <katco> rogpeppe1: is this a feature test?
[19:10] <natefinch> rogpeppe1: kill-controller does a forced destruction after it's timeout expires
[19:10] <rogpeppe1> katco: this isn't a test in juju-core
[19:11] <katco> rogpeppe1: ah ok, sorry to bother
[19:11] <rogpeppe1> katco: np
[19:12] <natefinch> do we have a checker that does regex matches across newlines, or do I have to do the strings.replace(s, "\n", "", -1) thing?
[19:14] <natefinch> gah, whoever made checkers in github.com/juju/testing/checkers without godoc deserves a flogging
[19:17]  * rogpeppe1 checks it's not him
[19:17] <perrito666> yeah, foto log that b*****d
[19:18] <perrito666> natefinch: fun fact flog in spanish applies to users of foto logs :p
[19:19] <natefinch> heh, it was tim
[19:19] <natefinch> or at least... hmm
[19:19] <natefinch> I think he must have moved the code
[19:19] <natefinch> because I know I wrote SameContents, but it's flagged as him too
[19:20] <natefinch> I guess we'll never know
[19:20] <natefinch> perrito666: heh
[19:20] <katco> natefinch: if only there was a tool to review the complete history of code =|
[19:20] <perrito666> natefinch: great now my brain has snaps of tim dressed as a emo/dark
[19:23] <natefinch> katco: I don't know how to dig further in the history of that line, and running more than one command requires more effort than I really care to spend :)
[19:24] <perrito666> yeah, if we were all in the same office you could use the simpler method of printing the code, writing in red marker "you know who you are, beg that I never do" and pinning it with a knife to the pin board
[19:25] <perrito666> which I am sure there is a git command for
[19:25] <perrito666> and certainly an emacs shortcut
[19:28] <rogpeppe1> so far i've got this and the last call to RemoveAllModelDocs still fails with a "model not dead" error: http://paste.ubuntu.com/23380364/
[19:31] <rogpeppe1> i wonder what other things i need to kill
[19:32]  * rogpeppe1 delves into the code
[19:32] <rogpeppe1> how i love mgo/txn
[19:33] <natefinch> it's um... something special, for sure
[19:34] <rogpeppe1> perrito666: any idea how i can remove a model, by any chance?
[19:35] <perrito666> rogpeppe1: destroy-model ?
[19:35] <rogpeppe1> perrito666: sorry, i've got a *state.State
[19:36] <perrito666> rogpeppe1: ah, It escapes my memory now, but State.Destroy() isnt a thing?
[19:36] <rogpeppe1> perrito666: no, that would be too obvious :)
[19:37] <perrito666> rogpeppe1: god forbid we do that
[19:37] <rogpeppe1> perrito666: i think Model.DestroyAllModelDocs is the thing
[19:37] <rogpeppe1> perrito666: but it doesn't work if the model isn't dead
[19:37] <rogpeppe1> perrito666: and i can't work out how to make it dead
[19:37] <rogpeppe1> perrito666: there's no EnsureDead or Remove method on Model
[19:39] <perrito666> rogpeppe1: state.Model() then thatModel.Destroy()
[19:39] <rogpeppe1> perrito666: this is what i'm doing currently: http://paste.ubuntu.com/23380415/
[19:39] <perrito666> rogpeppe1: check state/model.go
[19:39] <rogpeppe1> perrito666: but it doesn't work
[19:40] <rogpeppe1> perrito666: i've checked that there are no apps and no machines left
[19:40] <rogpeppe1> perrito666: which is what the code seems to be checking for
[19:40] <perrito666> rogpeppe1: odd, destroy is quite straight forward
[19:41] <rogpeppe1> perrito666: the model life remains at dying
[19:43] <perrito666> I recall there being aworker for this but not much more
[19:44] <rogpeppe1> perrito666: the worker will be outside of state tho'
[19:44] <perrito666> rogpeppe1: most likely running on the controller? not sure
[19:45] <rogpeppe1> perrito666: it shouldn't matter AFAICS - i'm accessing the state directly
[19:47] <rogpeppe1> perrito666: ha, it looks like you can't remove a model if you've called Destroy on it
[19:48] <perrito666> rogpeppe1: even if life advanced to dead?
[19:48] <rogpeppe1> perrito666: the only way of advancing life to dead is by calling Destroy
[19:48] <rogpeppe1> perrito666: (i think)
[19:48] <perrito666> rogpeppe1: afaik destroy will advance it to dying
[19:48] <rogpeppe1> perrito666: but if there are units around and you call Destroy it goes into dying mode
[19:48] <perrito666> then you do something and it gets dead :p
[19:48] <rogpeppe1> perrito666: yeah
[19:49] <rogpeppe1> perrito666: but there's this code in Destroy:
[19:49] <rogpeppe1> 	if m.Life() != Alive {
[19:49] <rogpeppe1> 		return nil, errModelNotAlive
[19:49] <rogpeppe1> 	}
[19:49] <rogpeppe1> perrito666: which looks wrong to me
[19:50] <perrito666> right in this moment I miss fwereade :p
[19:50] <rogpeppe1> perrito666: indeed
[19:50] <rogpeppe1> perrito666: i don't think anyone else understood this stuff
[19:51] <rogpeppe1> perrito666: if i change that line to "m.Life() == Dead", then I get "failed to destroy model: state changing too quickly; try again soon"
[19:52] <rogpeppe1> i can't think how many times i've seen that error and it's never ever been because state was changing underfoot
[19:52] <rogpeppe1> katco: BTW i got bored and landed gopkg.in/retry.v1
[19:53] <katco> rogpeppe1: cheers
[19:53] <katco> rogpeppe1: i was going to try and review that in more depth today before the tech board meeting
[19:54] <rogpeppe1> katco: i dunno what the juju board decision will be, but at least snappy can use it now
[19:54] <katco> rogpeppe1: :)
[19:55] <perrito666> rogpeppe1: with some luck menn0 or thumper might understand it better, they both must have been around for the multi model work
[19:56] <rogpeppe1> perrito666: it would be nice to have someone with whom i shared some working hours :)
[19:56] <perrito666> rogpeppe1: ah I sort of forgot that detail
[19:57] <rogpeppe1> katco: review comments still welcome on the retry PR BTW
[19:58] <katco> rogpeppe1: is that pr now in sync with gopkg.in/retry.v1?
[19:58] <rogpeppe1> katco: pretty much. the only changes are the import paths and that i removed the dependency on juju/utils/clock
[19:58] <katco> rogpeppe1: ok cool. i will go there when/if i can review
[19:58] <rogpeppe1> katco: ta
[19:58] <katco> rogpeppe1: ty!
[19:59] <katco> redir: hey i'm sorry i'm afraid i've been a horrible pairing partner. do you need anything?
[20:02] <rogpeppe1> perrito666: BTW that code does work if you haven't already called Destroy on the model first
[20:02] <perrito666> rogpeppe1: odd
[20:02] <perrito666> rogpeppe1: btw, thumper is there, so technically you are now sharing tz
[20:03] <thumper> FSVO sharing
[20:03] <rogpeppe1> thumper: yo!
[20:03] <thumper> morning
[20:03] <rogpeppe1> thumper: evening :)
[20:03] <thumper> pretty sure if it was evening, I'd be drinking
[20:03] <rogpeppe1> thumper: do you have any idea about model lifecycle states?
[20:03] <katco> rogpeppe1: thumper: it's afternoon you fools!
[20:03] <rogpeppe1> thumper: i drank enough last week to cover this week
[20:03] <thumper> rogpeppe1: what do you mean by that?
[20:04] <rogpeppe1> thumper: so, i'm operating directly on *state.State
[20:04] <thumper> alive, dying, dead?
[20:04] <rogpeppe1> thumper: i have a model with some units
[20:04] <thumper> uh ha
[20:04] <rogpeppe1> thumper: i call Destroy on it
[20:04] <rogpeppe1> thumper: it goes into dying state
[20:04] <rogpeppe1> thumper: (as expected)
[20:04]  * thumper nods
[20:04] <rogpeppe1> thumper: then i destroy/remove all the units, apps and machines
[20:04] <thumper> why?
[20:05] <rogpeppe1> thumper: because i want to remove the model
[20:05] <thumper> destrying a model starts a cascade of cleanup jobs
[20:05] <rogpeppe1> thumper: this is without any workers running
[20:05] <rogpeppe1> thumper: just raw *state.State
[20:05] <thumper> um... ok
[20:05] <thumper> why not call the remove all model docs method?
[20:06] <rogpeppe1> thumper: you can't call it until the model is dead
[20:06] <rogpeppe1> thumper: but it seems that once a model is dying it can never be made dead
[20:06] <thumper> what is the use case for this?
[20:06] <rogpeppe1> thumper: a test
[20:06] <rogpeppe1> thumper: but i kinda hope that state works on its own terms
[20:06] <thumper> it can be made dead
[20:06] <thumper> but a lot of this changed with the tear down of multi model
[20:07] <thumper> i'd have to go and read the code
[20:07] <rogpeppe1> thumper: how can it be made dead?
[20:07]  * thumper goes to look
[20:07] <rogpeppe1> thumper: this code looks suspicious to me:
[20:07] <rogpeppe1> func (m *Model) destroyOps(ensureNoHostedModels, ensureEmpty bool) ([]txn.Op, error) {
[20:07] <rogpeppe1> 	if m.Life() != Alive {
[20:07] <rogpeppe1> 		return nil, errModelNotAlive
[20:07] <rogpeppe1> 	}
[20:08] <rogpeppe1> thumper: I *think* that implies that if the model is dying, then it won't do anything
[20:08] <rogpeppe1> thumper: and that certainly *seems* to be the case in practice
[20:09] <thumper> look at state/model_test.go:111
[20:10] <thumper> although that method is in export_test.go
[20:10] <thumper> is this a state package test or from further out?
[20:10] <rogpeppe1> thumper: further out. in an external project that's integration-testing watcher behaviour
[20:12] <thumper> ProcessDyingModel
[20:13] <thumper> rogpeppe1: moves a model from dying to dead if all the machines are gone
[20:13] <rogpeppe1> thumper: ha
[20:13] <rogpeppe1> thumper: that's... unusual
[20:14] <thumper> part of the undertaker worker
[20:14] <rogpeppe1> thumper: perhaps that should be renamed to EnsureDead
[20:14] <rogpeppe1> thumper: to fit with the other types
[20:14] <thumper> all about the controller making sure the models are cleaned up nicely
[20:14] <thumper> except it doesn't EnsureDead
[20:14] <thumper> ish
[20:14] <rogpeppe1> thumper: or at least Destroy could mention that method
[20:14] <thumper> not entirely sure
[20:14] <thumper> true
[20:15] <thumper> it should
[20:15] <thumper> there should be docs about model lifecycle that mention the states, and how to progress through them
[20:15] <thumper> you are not wrong
[20:16] <rogpeppe1> thumper: thanks anyway. i've spent hours on this :)
[20:16] <thumper> :(
[20:16] <thumper> it is one of those sad cases where everyone wants better docs
[20:16] <thumper> and no one writes them
[20:16] <thumper> or
[20:16] <thumper> we have a tendency to think everything is obvious
[20:16] <thumper> and doesn't need writing down
[20:16] <thumper> until we come back to it in 3 - 6 months
[20:17] <thumper> when we have forgotten the context
[20:17] <thumper> I've done that a lot
[20:19] <rogpeppe1> thumper: yeah, it is easy to do
[20:20] <rogpeppe1> thumper: BTW this is what you need to do to destroy a model in the state from first principles AFAICS: http://paste.ubuntu.com/23380624/
[20:21] <thumper> hmm...
[20:21] <rogpeppe1> thumper: FWIW i think i'd found ProcessDyingModel if it had been a method on Model not State
[20:21] <thumper> good thing we have cleanups that do most of that
[20:22] <rogpeppe1> s/i'd/i'd've/
[20:22]  * rogpeppe1 's test finally passes
[20:22] <thumper> \o/
[20:47] <thumper> hmm...
[20:48] <thumper> seems like if bootstrap can't access public streams, it'll fail, even if it has local tools
[20:48] <thumper> how will this work in secure separate networks?
[20:49] <natefinch> thumper: sounds like a question for Ian.  it should upload tools automatically, but that doesn't help with images
[20:49] <thumper> yeah
[20:50] <thumper> I have intermittent DNS issues
[20:50] <thumper> which makes bootstrap sometimes fail
[20:50] <thumper> and also GUI downloading fail
[20:50]  * thumper thinks there should be a retry in there somewhere
[20:50] <natefinch> the network isn't always reliable?
[20:50] <thumper> nope
[20:50] <thumper> no idea why
[20:50] <thumper> kinda shit
[20:51] <natefinch> well, maybe it's because you're on an island off the coast of an island off the coast of southeast asia
[20:57] <thumper> hmm...
[20:57] <thumper> how do you upgrade a local charm now?
[20:57] <rick_h_> thumper: juju upgrade-charm ./localcharm
[20:58] <thumper> that isn't what the help says...
[20:58] <rick_h_> thumper: sorry, missed the application name in there
[20:58] <thumper> nope
[20:58] <thumper> $ juju upgrade-charm ~/canonical/juju-2.0-beta7/charm-ubuntu ubuntu
[20:58] <thumper> error: unrecognized args: ["ubuntu"]
[20:58] <rick_h_> thumper: hmm, katco reworked that per a bug on that
[20:58] <thumper> help says to specify --path to point to local charm location
[20:58] <thumper> but that fails too
[20:58] <rick_h_> katco: does it remember the same path on disk and need a --switch to take a new one?
[20:59] <thumper> $ juju upgrade-charm --path=~/canonical/juju-2.0-beta7/charm-ubuntu ubuntu
[20:59] <thumper> ERROR charm or bundle URL has invalid form: "~/canonical/juju-2.0-beta7/charm-ubuntu"
[20:59] <rick_h_> thumper: right, you need the ./
[20:59] <thumper> that isn't obvious at all
[20:59] <rick_h_> thumper: try --path=./home/thumper/canonical/juju-2.0-beta7/charm-ubuntu or cd there and go
[20:59] <thumper> ./home is just wrong
[20:59] <katco> thumper: rick_h_: right, you have to specify --path in addition to the application name
[20:59] <rick_h_> thumper: except all local charm references across all of juju2 require the ./ as the local indicator?
[20:59] <thumper> is it a path or relative?
[21:00] <katco> thumper: i believe it's any resolvable path... let me check
[21:00] <thumper> $ juju upgrade-charm --path=./../canonical/juju-2.0-beta7/charm-ubuntu ubuntu
[21:00] <thumper> ERROR charm or bundle URL has invalid form: "./../canonical/juju-2.0-beta7/charm-ubuntu"
[21:00] <thumper> ugh from wrong dir
[21:01] <thumper> juju upgrade-charm --path=./canonical/juju-2.0-beta7/charm-ubuntu ubuntu
[21:01] <thumper> Added charm "local:xenial/ubuntu-1" to the model.
[21:01] <thumper> worked when given a relative path
[21:01] <thumper> that points to it
[21:01] <thumper> but if the path is wrong, the error is confusing
[21:02] <thumper> redir: did you want to chat kvm?
[21:03] <redir> thumper: sure if your free
[21:03] <redir> you're even
[21:03] <thumper> rick_h_: funnily enough `juju deploy ~/canonical/juju-2.0-beta7/charm-ubuntu/ ubuntu` works
[21:03] <thumper> rick_h_: but you can't use the same path to upgrade-charm
[21:03] <thumper> we should probably fix that
[21:04] <rick_h_> thumper: right, there was a bug around that. I thought they were made consistent.
[21:04] <katco> thumper: this is the thing that resolves it: https://github.com/juju/charmrepo/blob/v2-unstable/charmpath.go#L51
[21:05] <katco> rick_h_: thumper: https://github.com/juju/juju/commit/51d6437d31b154604d011be146f7d0a231e6e186#diff-aec7c600a6f94b7c3646bb9d9b124854
[21:05] <katco> rick_h_: thumper: "There is commonality between `deploy` and `upgrade-charm` in resolving arguments and doing something useful with them. There is an opportunity to create a component and to pass it into both commands."
[21:07] <katco> rick_h_: i would relish the opportunity to address that :)
[21:07] <rick_h_> katco: yea, sorry. We spent a lot of time there. More to move onto atm.
[21:08] <rick_h_> katco: would be good to get a bug though with the direction and mark it up for 2.2 suggestions
[21:08] <katco> rick_h_: actually did not; you're thinking of deploy
[21:08] <rick_h_> katco: true enough
[21:12] <mup> Bug #1614633 changed: A unit with a failed storage-detaching hook cannot be destroyed <juju:Fix Released by axwalk> <juju-core:Won't Fix> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1614633>
[21:13] <alexisb> anastasiamac, we can meet now if you like
[21:13] <anastasiamac> k :)
[21:14] <anastasiamac> alexisb: m in our 1:1