[02:32] <marcoceppi> rick_h__: it's been uploaded, but it's a bit...special
[02:32] <rick_h__> marcoceppi: heh ty
[02:32] <rick_h__> marcoceppi: yea, saw it in the charmstore
[02:46] <marcoceppi> rick_h__: reminds me, next month we need to spend a bit more time, but it can be deployed / demoed on AWS. I'll make sure the readme is updated next week
[02:46] <rick_h__> marcoceppi: cool ty
[02:46] <rick_h__> marcoceppi: it was enough for me to check out the actions/etc and get an idea for what we did/put things together
[05:54] <marcoceppi> magicaltrout: I may have gone a bit overboard in my attempt to pull out gitlab omnibus stuff, but I think it makes a pretty compelling architecture
[05:55] <marcoceppi> magicaltrout: I'll have a pull req on Tuesday, pretty excited about the possibility of running gitlab "better" than their gitlab.com offering ;)
[08:22] <sam__20> Hi. I'm trying to get started with this https://jujucharms.com/get-started
[08:23] <sam__20> on a VirtualBoxed Ubuntu 14.04
[08:23] <sam__20> juju quickstart mediawiki-single stays forever at "juju-gui/0 deployment is pending. machine 1 provisioning is pending."
[08:23] <sam__20> Any ideas?
[09:21] <magicaltrout> lol marcoceppi I've merged the PR, I'm very curious to see what you have coming.... :)
[09:24] <TheMue> morning
[10:07] <gnuoy> tinwood, got a sec to help me with some git 101 stuff?
[10:09] <tinwood> Sure gnuoy, ask away.
[10:09] <gnuoy> tinwood, I've just discovered "git checkout --theirs" which I think solves my issue. But thanks, sorry for the noise
[10:11] <tinwood> oh.  I've NEVER used that.  What's it solve?
[10:12] <magicaltrout> gnuoy: dunno if you already use it, but on a similar vein to make sure incoming merges work and stuff, git stash is very useful
[10:12] <gnuoy> tinwood, my fork is out of sync with upstream but my fork has nothing I want in it so I was refreshing it by merging master from upstream but I had conflicts. I only wanted the upstream copy
[10:12] <magicaltrout> slightly different usecase I admit
[10:13] <gnuoy> magicaltrout, I shall hit the man page, thanks
[10:13] <tinwood> gnuoy, oh.  I did git checkout master, git fetch/merge and then git rebase for the same.
[10:15] <tinwood> gnuoy, it's so painful I've decided not to fork in the future!
[10:15] <gnuoy> tinwood, but I need a staging area that mojo and amulet can reference, I don't think the review.o.c mp branch can be that
[10:16] <tinwood> gnuoy, oh right.  Fair enough.
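[editor's note] gnuoy's `git checkout --theirs` approach above can be demonstrated end-to-end. Everything in this sketch (repo names, file names, commit messages) is invented for the demo; it illustrates the idea of keeping upstream's side of a conflict, not anyone's actual repos.

```shell
# Self-contained sketch of the "--theirs" workflow: a fork with a
# throwaway local change is refreshed from upstream, keeping upstream's
# copy on conflict. All names here are invented for the demo.
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A stand-in "upstream" repo with one file
git init -q -b master upstream && cd upstream
git config user.email demo@example.com && git config user.name demo
echo upstream-v1 > file.txt && git add file.txt && git commit -qm v1
cd ..

# A "fork" that diverges with a change we don't actually want to keep
git clone -q "$tmp/upstream" fork && cd fork
git config user.email demo@example.com && git config user.name demo
echo fork-edit > file.txt && git commit -qam "throwaway local edit"

# Upstream moves on
cd "$tmp/upstream"
echo upstream-v2 > file.txt && git commit -qam v2

# Refresh the fork: merge upstream, keep upstream's side on conflict
cd "$tmp/fork"
git fetch -q origin
git merge origin/master || true   # conflict on file.txt is expected
git checkout --theirs file.txt    # take "their" (upstream's) version
git add file.txt && git commit -qm "sync with upstream"
cat file.txt                      # prints upstream-v2
```

Note that `--theirs` only makes sense while a merge is in progress; outside a conflict it has nothing to pick from.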
[10:17] <BrunoR> how do I get Juju 1.25.4?
[10:18] <gnuoy> BrunoR, looks like ppa:juju/proposed
[10:19] <gnuoy> ( https://launchpad.net/~juju/+archive/ubuntu/proposed )
[10:19] <BrunoR> gnuoy: thx
[10:55] <BlackDex> Hello there..
[10:55] <BlackDex> i get the following error: WARNING juju.apiserver.client status.go:677 error fetching public address: public no address
[10:55] <BlackDex> Or warning actually
[10:55] <BlackDex> But, deployment seems to be stuck
[12:49] <beisner> gnuoy, tinwood - juju-deployer just grew a deploy from refspec feature aiui, and we could plumb our bundles/specs to use those instead of branch: lp:foo.
[12:50] <beisner> ie.  every change/review has a refspec which is an addressable pointer to that change + patchset
[12:51] <beisner> i do plan to wire that up so that we can optionally run relevant mojo specs against a review with more magic word commands via comment
[13:33] <tinwood> o/ beisner - not sure I entirely understood that! :) "aiui"?
[13:34] <beisner> tinwood, as i understand it ;-)  we need to exercise WIP/ change reviews against certain test cases which are already defined in mojo specs.  we just need to rewire the specs to optionally take a 'refspec' so that your proposed charm change can be exercised in the context of that test spec.
[13:35] <tinwood> ah, I see.  That is useful.
[13:36] <beisner> for ex:  someone proposes an improvement or fix for something HA or SSL.  our middle-of-the-road default tests may not exercise the thing that we really should.  we would then say something like  charm-recheck-ssl  ... which would then fire off the mojo ssl test specs.
[13:37] <tinwood> so custom commands as review comments being passed into juju-deployer.  nice.
[13:37] <beisner> the same thing that gets us that, also scratches the itch for needing to exercise some arbitrary gerrit change in a bundle or spec.
[13:37] <beisner> everything is prepped and in place to do that today, except our specs.
[13:38] <beisner> we're already passing the refspec value via env vars to every job.  the specs just need to be fitted to listen for that, and if exists, use it.
[13:46] <beisner> the goal here is that a contributor or reviewer can simply ask, wait for tests, then be armed with more info for their review.   instead of having to manually stand up a deployment, poke at it, run specs, etc., he/she can move onto something else while that bakes.
[13:43] <tinwood> I like that.  Obviously, it's bound into the artifact that's being reviewed.  Or are you saying an 'arbitrary' spec?
[13:43] <beisner> tinwood, right, it takes a review to get a refspec.  so one would have to propose something to take advantage of that.
[13:44] <beisner> tinwood, it'd also be nice to figure out how to grind off the rough edges of forking for WIPs then rebasing and proposing that as a change.  i think that workflow should exist and be usable.
[13:45] <beisner> naturally, that's outside the scope of the test automation, but from a developer standpoint, you may not want to propose every bright idea that comes along, yah?  :-)
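[editor's note] The refspec plumbing beisner describes (a spec that deploys a branch tip by default, but an exact Gerrit change + patchset when a refspec is supplied) can be sketched as a small helper. The function name and the env-var convention in the usage comment are assumptions for illustration, not the actual mojo/spec code.

```shell
# Hypothetical helper showing the idea: clone a repo, and if a
# Gerrit-style refspec is supplied, fetch and check out that exact
# change + patchset instead of the branch tip.
fetch_source() {
    repo=$1; dir=$2; refspec=${3:-}
    git clone -q "$repo" "$dir"
    if [ -n "$refspec" ]; then
        git -C "$dir" fetch -q origin "$refspec"
        git -C "$dir" checkout -q FETCH_HEAD
    fi
}

# e.g. in a spec (variable names assumed):
#   fetch_source "$REPO_URL" charm-src "${GERRIT_REFSPEC:-}"
```

With no third argument the spec behaves exactly as before, which is what lets existing jobs keep working while reviews opt in via the env var.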
[13:46] <tinwood> beisner, I've struggled with it a bit.  To update, I've had to checkout master, pull | fetch/merge, checkout branch, and rebase.  I guess I should also push back my master (as it's been synced with the original fork).
[13:47] <tinwood> beisner, also I needed to gitreview -s (although I did it manually) on my fork to get the ChangeId into the commit for the review to work.
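[editor's note] tinwood's refresh sequence (sync local master from upstream, push it back to the fork, then rebase the feature branch) can be captured in one function. The remote layout here (origin = your fork, upstream = the original repo) is the common convention and an assumption, as is the branch naming.

```shell
# Sketch of the fork-refresh workflow described above. Assumes the
# usual layout: origin = your fork, upstream = the original repo.
# Run inside the clone; pass the feature branch to rebase.
refresh_fork() {
    branch=$1
    git checkout -q master
    git fetch -q upstream
    git merge -q --ff-only upstream/master   # fast-forward local master
    git push -q origin master                # keep the fork's master synced
    git checkout -q "$branch"
    git rebase -q master                     # replay feature work on top
}
```

The `--ff-only` guard is deliberate: if local master has drifted from upstream, the merge refuses rather than silently creating the conflicted merge commits the chat above is trying to avoid.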
[13:48] <beisner> tinwood, can we do a gist based on what you've found so far, and perhaps some of us can use it / tune it, to eventually add that alternative workflow example to our readme?
[13:49] <tinwood> beisner, sure.  Can do.  I'm currently just sorting out functional_tests in a venv which is interesting :)
[13:49] <beisner> i think there's value in having a solid example.  even if it has some hoops to jump through, at least it's a reference for what needs to be done if someone wants to wip it.
[13:49] <tinwood> sure.
[13:49] <beisner> tinwood, i bet.  that'll also be awesome to have nailed down.
[13:49] <tinwood> "watch this space" :) --- I hope!
[13:50] <beisner> tinwood, much appreciated
[14:31] <beisner> hi gnuoy - is https://review.openstack.org/#/c/292381/ ready for a full test validation, or do you have more work in progress on that one?
[14:32] <gnuoy> beisner, I'd like to run another test before asking for the full amulet run
[14:33] <beisner> gnuoy, ok thanks :-)  i'll let you trigger that whenever you're ready.  i was just making a pass through the reviews to see which ones should have that, and yours is on that list. :-)
[14:33] <beisner> ha.  two smiles even
[14:33] <gnuoy> kk, ta
[15:36] <BlackDex> I'm trying to deploy with deployer, but all the machines are currently stating: "Agent-Status": allocating - "Message": Waiting for agent initialization to finish
[15:58] <cory_fu> BlackDex: What does the [machines] section say?  It sounds like they're failing to provision
[16:18] <BlackDex> cory_fu: Hmm, i just rebooted the bootstrap node, and now everything seems to be continuing
[16:18] <BlackDex> that is strange
[16:19] <cory_fu> Odd
[19:06] <beisner> cholcombe, icey - full rechecks a-ok on these.  i believe these are the ones we discussed retesting and landing.  can you have a look and confirm?  tia.
[19:06] <beisner> https://review.openstack.org/#/c/288969/
[19:06] <beisner> https://review.openstack.org/#/c/286779/
[19:06] <beisner> https://review.openstack.org/#/c/292194/
[19:06] <beisner> https://review.openstack.org/#/c/291914/
[19:07] <beisner> cholcombe, icey - once those have landed, do a charm-recheck-full on https://review.openstack.org/#/c/288177/ and expect it to pass, yah?  :-)
[19:07] <cholcombe> beisner, the 3rd one isn't me
[19:07] <icey> https://review.openstack.org/#/c/292194/ is unrelated to our discussion
[19:07] <cholcombe> but the others look fine
[19:08] <beisner> cholcombe, ah indeed.  i had issued a recheck on that for a different reason.  but, that'll actually be for you two to review and optionally land now that it's had the full amulet test run. :-)
[19:08] <cholcombe> :)
[19:09] <beisner> cholcombe, so we're clear for takeoff on the other 3 from my perspective.
[19:09] <cholcombe> awesome!
[19:10] <cholcombe> beisner, land all the things
[19:10] <stokachu> wallyworld: does the api call to FullStatus only return the status of the default admin model?
[19:10] <stokachu> even if i switch models and call FullStatus it always returns that of the admin model
[19:12] <stokachu> also this call https://godoc.org/github.com/juju/juju/api#Client.Status only returns minimal status information for current model
[19:13] <beisner> cholcombe, one thing:  james hadn't reviewed this one, and i'd like to get another os-charmer to code-review it with a +1.  with that, i'll be happy to +2 merge it.  https://review.openstack.org/#/c/291914/1
[19:13] <cholcombe> beisner, that's fine
[19:14] <cholcombe> beisner, i'm nowhere near as behind on that one as with the others
[19:14] <beisner> cholcombe, it doesn't appear to be inter-dependent on the others, does it?
[19:15] <cholcombe> beisner, nope
[19:16] <beisner> cholcombe, have the charmhelpers/* changes from these all landed in lp:charm-helpers?   (i've not done the stare & compare on that for these reviews)
[19:16]  * beisner seeks afternoon coffee
[19:17] <cholcombe> beisner, i believe so but let me double check
[19:18] <cholcombe> beisner, nope missing one still: https://code.launchpad.net/~xfactor973/charm-helpers/ceph-to-rados-fix/+merge/288559
[19:18] <cholcombe> lazyPower, want to +1 this? ^^
[19:20]  * lazyPower looks
[19:54] <lazyPower> cholcombe - merged at revision 547
[19:57] <lazyPower> beisner ^
[19:59] <beisner> thx lazyPower
[20:00] <beisner> hey lazyPower - on the charmers topic, since i've got the necessary +1s via email, what's next to get added to the charmers LP group?
[20:01] <lazyPower> beisner - Just a hangout with marcoceppi to get added to the lp group :) Plus the spiel about having the keys to the Porsche, and don't scratch it
[20:02] <lazyPower> he'll be back in tomorrow, so lets follow up with him then
[20:04] <beisner> lazyPower, ack, thanks!
[20:24] <cholcombe> lazyPower, thx :)
[20:26] <magicaltrout> marcoceppi: one of our guys says he'll probably send over some cards PR's when he gets bored, not sure what they entail, but if you see anything from breno polanski land, thats who it is
[20:30] <marcoceppi> magicaltrout: sweet!
[20:37] <pmatulis> when registering (juju register) a user specifies a password. where is this password used? silly question i know
[20:57] <beisner> cholcombe, cache tier and rolling upgrade changes merged.  initiated full amulet recheck on rolling upgrades.
[20:57] <cholcombe> beisner, woo!
[20:57] <beisner> cholcombe, err uh, rolling upgrades for ceph-mon landed, rechecking rolling upgrades for ceph-osd of course.
[21:59] <magicaltrout> when you delete your work directory that included a version of juju 2 alpha which you are using in production.........
[21:59] <magicaltrout> thank god for github
[22:05] <rick_h__> magicaltrout: ouch
[22:10] <magicaltrout> hehe
[22:10] <rick_h__> magicaltrout: trying b2?
[22:10] <magicaltrout> i just needed to work out roughly when i spun up the bootstrap node and pick a commit ;)
[22:11] <magicaltrout> rick_h__: my dev box runs trunk, i do a nightly sync and rebuild
[22:12] <magicaltrout> i just have a random prod box running an old alpha cause i needed M4 instances and they weren't in 1.25, so I just went to 2.0 alpha, which works fine, but of course the config files have all changed :)
[22:12] <magicaltrout> its pretty static though so it doesn't really matter, the boxes it controls just run some mysql nonsense
[22:14] <rick_h__> magicaltrout: cool
[22:45] <wallyworld> stokachu: FullStatus should return the correct status for the relevant model. do you have an example where that's not the case?
[22:49] <stokachu> wallyworld: https://paste.ubuntu.com/15387713/
[22:49] <stokachu> so basically I juju switch mydev:staging, then load up my api code
[22:49] <stokachu> and run a status against it
[22:50] <stokachu> you can see on line 36 the model name doesn't match the currently selected model
[22:50] <stokachu> but juju status works as expected
[22:50] <wallyworld> stokachu: is this the python client?
[22:50] <stokachu> wallyworld: yea
[22:51] <stokachu> wallyworld: well mine https://github.com/Ubuntu-Solutions-Engineering/macumba
[22:51] <stokachu> not python-jujuclient
[22:51] <wallyworld> ah
[22:52] <wallyworld> i'd say that client needs work to properly support multi-model (as a guess)
[22:52] <wallyworld> the current model is stored locally in the ~/.local/share/juju directory
[22:52] <stokachu> do i have to connect to each model's api server?
[22:52] <wallyworld> that model uuid would need to be passed with the FullStatus request
[22:52] <stokachu> or does each model have its own?
[22:53] <wallyworld> from memory, there's the one api server but the model uuid forms part of the request
[22:53] <stokachu> wallyworld: so the FullStatus isn't documented on godoc any longer
[22:53] <stokachu> just Status
[22:53] <stokachu> and it only takes patterns of some sort
[22:54] <wallyworld> that i was not aware of. i'll have to dig in and see what's been happening
[22:54] <wallyworld> off hand, i suspect it's just a matter of passing the correct model uuid along with the request
[22:54] <stokachu> wallyworld: https://godoc.org/github.com/juju/juju/api#Client.Status
[22:54] <wallyworld> that's api, not apiserver
[22:55] <wallyworld> you want to look at the apiserver docs
[22:55] <stokachu> ah right
[22:55] <wallyworld> https://godoc.org/github.com/juju/juju/apiserver/client#Client.FullStatus
[22:56] <stokachu> wallyworld: so the status params just talks about patterns
[22:57] <stokachu> where is it defined on what parameters can be passed to that endpoint?
[22:57] <wallyworld> i think the model uuid forms part of the url eg 192.168.1.1/models/<uuid> but i'll need to check
[22:57] <wallyworld> i'm not sure where that's documented off hand
[22:58] <wallyworld> i'll find out
[23:09] <wallyworld> stokachu: so yeah, you form the base of the websocket url by appending /model/<uuid> to the ip address of the controller
[23:09] <wallyworld> that's how it knows what model it is using
[23:10] <stokachu> so currently my URL is wss://192.168.1.1/model/<uuid>/api
[23:10] <stokachu> so i just need to make sure to relogin with the updated uuid
[23:10] <stokachu> when i want to 'switch' models effectively
[23:13] <wallyworld> stokachu: yes, correct
[23:13] <wallyworld> that's what the juju cli does
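[editor's note] The endpoint shape wallyworld and stokachu settle on above (`wss://<controller>/model/<uuid>/api`) is simple enough to capture in a tiny helper. The controller address and uuid below are placeholders, not real values.

```shell
# Build the per-model websocket endpoint discussed above:
#   wss://<controller>/model/<uuid>/api
# Switching models means reconnecting with the new model's uuid.
model_api_url() {
    controller=$1; model_uuid=$2
    printf 'wss://%s/model/%s/api\n' "$controller" "$model_uuid"
}

model_api_url 192.168.1.1 deadbeef-0000-0000-0000-000000000000
# prints wss://192.168.1.1/model/deadbeef-0000-0000-0000-000000000000/api
```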
[23:13] <stokachu> wallyworld: ok cool, thanks for clearing that up
[23:13] <wallyworld> np
[23:14] <stokachu> ive just been using the 'controller' uuid
[23:14] <wallyworld> let me know if you have any other issues
[23:14] <stokachu> which is essentially the admin model
[23:14] <wallyworld> yep
[23:14] <wallyworld> the controller uuid and admin model share a uuid at the moment
[23:14] <wallyworld> probably forever
[23:15] <stokachu> yea once those name changes come into effect it'll be much clearer
[23:15] <wallyworld> stokachu: hopefully beta3 this week will have the latest multi-model goodness with a default admin model and a host model the user can name on bootstrap
[23:16] <stokachu> wallyworld: ok cool man
[23:25] <stokachu> wallyworld: cool just tested making sure to relogin to the new model uuid and it works as expected
[23:26] <wallyworld> stokachu: great, it works?
[23:28] <stokachu> wallyworld: yea works great now
[23:29] <wallyworld> awesome
[23:56] <naye> hello
[23:56] <naye> hello [in Spanish: "hola"]